Setting Up a Kubernetes Cluster on CentOS 7.5

For easy identification, all commands to be executed are marked in bold italics.
Reference articles:
https://www.kubernetes.org.cn/4948.html
https://www.kubernetes.org.cn/5025.html

Environment Preparation

  • Servers
    Virtual IP: 192.168.3.88
    k8s-master: 192.168.3.80
    k8s-node1: 192.168.3.81
    k8s-node2: 192.168.3.82
    k8s-node3: 192.168.3.83
    k8s-storage1: 192.168.3.86
    docker-registry: 192.168.3.89

  • Base environment
    Minimal installation based on CentOS-7-x86_64-Minimal-1810

The preparation work includes the following:

  • Update the system
  • Disable SELinux
  • Disable the swap partition
  • Set the timezone and synchronize the clock
  • Upgrade the kernel

After the OS installation finishes, run the following commands to configure the base environment

yum update -y
yum install wget net-tools yum-utils vim -y
Switch the yum source to the Aliyun mirror
-- back up the original repo file first
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
-- download the Aliyun repo file
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
-- disable the c7-media repository
yum-config-manager --disable c7-media
-- or edit /etc/yum.repos.d/CentOS-Media.repo with vim and set enabled=0
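
A quick check (not part of the original steps) that the new repository configuration is active:
yum clean all && yum makecache
yum repolist enabled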

  • Clock synchronization
    rm -rf /etc/localtime
    vim /etc/sysconfig/clock
    -- add the line ZONE="Asia/Shanghai" to the file
    ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
    reboot
    Run
    date -R
    to confirm the offset is +0800; if it is not, repeat the steps above.
    -- install the ntp service
    yum install ntp -y
    -- switch to the China timezone and enable NTP synchronization
    timedatectl set-timezone Asia/Shanghai
    timedatectl set-ntp yes
    -- check that the clock is synchronized
    timedatectl
    Alternatively, the following two commands also synchronize the clock
    yum install -y ntpdate
    ntpdate -u ntp.api.bz
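    If only the one-shot ntpdate approach is used (no ntp service running), a cron entry can keep the clock in sync; a minimal sketch, where the 30-minute interval is an arbitrary choice:
    (crontab -l 2>/dev/null; echo "*/30 * * * * /usr/sbin/ntpdate -u ntp.api.bz >/dev/null 2>&1") | crontab -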

  • Disable SELinux
    vim /etc/sysconfig/selinux
    change the SELINUX= line to SELINUX=disabled

  • Disable SELinux and firewalld
    systemctl stop firewalld
    systemctl disable firewalld
    setenforce 0
    sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

  • Passwordless SSH from the master node to the other nodes
    First, on every machine run
    ssh <the machine's own IP>
    exit

    Then run the following commands on the master node
    ssh-keygen -t rsa
    ssh-copy-id 192.168.3.81
    ssh-copy-id 192.168.3.82
    ssh-copy-id 192.168.3.83
    Note that every machine must also be able to SSH to itself without a password (see the loop sketch after this step).
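    A minimal sketch that distributes the key to every node, including the master itself, in one loop (assuming root is the login user on all machines):
    for node in 192.168.3.80 192.168.3.81 192.168.3.82 192.168.3.83; do
    ssh-copy-id -i ~/.ssh/id_rsa.pub root@${node}
    done
    # quick check: each line should print the remote hostname without asking for a password
    for node in 192.168.3.80 192.168.3.81 192.168.3.82 192.168.3.83; do ssh root@${node} hostname; done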

  • Disable the swap partition
    swapoff -a
    yes | cp /etc/fstab /etc/fstab_bak
    cat /etc/fstab_bak |grep -v swap > /etc/fstab
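    A quick verification that swap is off now and will stay off after a reboot:
    swapon -s              # should print nothing
    free -h                # the Swap line should show 0
    grep swap /etc/fstab   # should print nothing, or only commented lines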

  • Make packets crossing the bridge traverse iptables and set vm.swappiness to 0
    echo """
    vm.swappiness = 0
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    """ > /etc/sysctl.conf

    sysctl -p
    If sysctl reports "cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables", apply the fix described a few steps below.

  • Check the OS release version (lsb_release shows the distribution release; use uname -r for the kernel version)
    lsb_release -a
    If the command does not exist, install it first
    yum install -y redhat-lsb

  • All machines in the cluster must have unique MAC addresses, product UUIDs and hostnames; check them with the commands below (a per-node loop is sketched after this step)
    cat /sys/class/dmi/id/product_uuid
    ip link
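    A sketch that gathers the hostname, product UUID and MAC addresses of every node over SSH so they can be compared in one place (relies on the passwordless SSH configured earlier):
    for node in 192.168.3.80 192.168.3.81 192.168.3.82 192.168.3.83; do
    echo "== ${node} =="
    ssh root@${node} "hostname; cat /sys/class/dmi/id/product_uuid; ip link show | grep link/ether"
    done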

  • Fixing "cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory"
    First install the related packages
    yum install -y epel-release
    yum install -y conntrack ipvsadm ipset jq sysstat curl iptables
    Then load the required kernel modules
    modprobe br_netfilter
    modprobe ip_vs
    Add the kernel parameters, writing them to /etc/sysctl.d/ so they persist across reboots
    cat > /etc/sysctl.d/k8s.conf <<EOF
    net.bridge.bridge-nf-call-iptables=1
    net.bridge.bridge-nf-call-ip6tables=1
    net.ipv4.ip_forward=1
    vm.swappiness=0
    vm.overcommit_memory=1
    vm.panic_on_oom=0
    EOF

    sysctl --system
    Done!
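    modprobe only loads the modules for the current boot; to have them loaded automatically after a reboot they can also be listed under /etc/modules-load.d/ (a sketch; the file name k8s.conf is an arbitrary choice):
    cat > /etc/modules-load.d/k8s.conf <<EOF
    br_netfilter
    ip_vs
    EOF
    systemctl restart systemd-modules-load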



Install the Docker Environment

  • Docker must be installed on all hosts
    yum install -y yum-utils device-mapper-persistent-data lvm2
    yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo

    yum makecache fast
    yum install -y docker-ce
    Edit the systemd unit file for Docker so that forwarded packets are accepted (see the drop-in alternative after this step)
    sed -i "13i ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT" /usr/lib/systemd/system/docker.service
    Start Docker
    systemctl daemon-reload
    systemctl enable docker
    systemctl start docker
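
    The sed command above assumes a fixed line layout in docker.service, which may change between docker-ce releases; a systemd drop-in achieves the same effect without depending on line numbers (a sketch; the drop-in file name is an arbitrary choice):
    mkdir -p /etc/systemd/system/docker.service.d
    cat > /etc/systemd/system/docker.service.d/10-iptables-forward.conf <<EOF
    [Service]
    ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
    EOF
    systemctl daemon-reload && systemctl restart docker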

Set Up a Private docker-registry

Configure a Docker registry mirror (only on the registry host, 192.168.3.89 in this article)
mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{"registry-mirrors": ["https://7471d7b2.mirror.aliyuncs.com"]}
EOF
systemctl daemon-reload
systemctl restart docker
Install the registry
docker pull registry:latest
Download the offline image bundle from https://pan.baidu.com/s/1ZdmgnrYGVobc22FX__vYwg (extraction code: 69gq), copy the file to the registry machine, then load and start the image on that host (assuming the file was placed under /var/lib/docker)
docker load -i /var/lib/docker/k8s-repo-1.13.0
Run docker images to verify that the image was loaded
docker run --restart=always -d -p 80:5000 --name repo harbor.io:1180/system/k8s-repo:v1.13.0
or
docker run --restart=always -d -p 80:5000 --privileged=true --log-driver=none --name registry -v /home/registrydata:/tmp/registry harbor.io:1180/system/k8s-repo:v1.13.0

Open http://192.168.3.89/v2/_catalog in a browser (or query it with curl, as shown below).

If a JSON list of repositories is returned, the registry service is working.
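
The same check can be run from the command line; the registry's v2 catalog endpoint should answer with a JSON document of the form {"repositories":[...]}:
curl http://192.168.3.89/v2/_catalog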

  • Configure the private registry on all non-registry hosts
    mkdir -p /etc/docker
    echo -e '{\n"insecure-registries":["k8s.gcr.io", "gcr.io", "quay.io"]\n}' > /etc/docker/daemon.json
    systemctl restart docker
    Set this variable to the IP of the registry host
    REGISTRY_HOST="192.168.3.89"
    Update /etc/hosts (the mapping can be verified with the check after this step)
    yes | cp /etc/hosts /etc/hosts_bak
    cat /etc/hosts_bak|grep -vE '(gcr.io|harbor.io|quay.io)' > /etc/hosts
    echo """ $REGISTRY_HOST gcr.io harbor.io k8s.gcr.io quay.io """ >> /etc/hosts

Install and Configure Kubernetes (master & worker)

First download the RPM bundle from https://pan.baidu.com/s/1t3EWAt4AET7JaIVIbz-zHQ (extraction code: djnf) and place it on every Kubernetes master and worker host; here it is placed under /home
yum install -y socat keepalived ipvsadm
cd /home/
scp k8s-v1.13.0-rpms.tgz 192.168.3.81:/home
scp k8s-v1.13.0-rpms.tgz 192.168.3.82:/home
scp k8s-v1.13.0-rpms.tgz 192.168.3.83:/home

Then run the following commands on each machine in turn (or drive them from the master with the loop sketched below)
cd /home
tar -xzvf k8s-v1.13.0-rpms.tgz
cd k8s-v1.13.0
rpm -Uvh * --force
systemctl enable kubelet
kubeadm version -o short
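
Since passwordless SSH from the master is already configured, the node-side installation can also be replayed from the master over SSH (a sketch that simply repeats the commands above on each worker):
for node in 192.168.3.81 192.168.3.82 192.168.3.83; do
ssh root@${node} "cd /home && tar -xzvf k8s-v1.13.0-rpms.tgz && cd k8s-v1.13.0 && rpm -Uvh * --force && systemctl enable kubelet"
done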

  • Deploy the HA masters
    First run ifconfig -a to find the network interface name; here it is enp0s3
    Run the following on 192.168.3.80
    cd ~/
    echo """
    CP0_IP=192.168.3.80
    CP1_IP=192.168.3.81
    CP2_IP=192.168.3.82
    VIP=192.168.3.88
    NET_IF=enp0s3
    CIDR=10.244.0.0/16
    """ > ./cluster-info

    bash -c "$(curl -fsSL https://raw.githubusercontent.com/Lentil1016/kubeadm-ha/1.13.0/kubeha-gen.sh)"
    This step may take 2 to 10 minutes; before the script starts the deployment it pauses once so the installation parameters can be reviewed and confirmed
    When it finishes, note down the join command it prints
    join command:
    kubeadm join 192.168.3.88:6443 --token 29ch4f.0zshhxkh0ii4q9ej --discovery-token-ca-cert-hash sha256:a3cb2e754064b1ea4871a12f2a31dd2a776cc32a6dde57e5009cabca520cb56f
  • Install Helm
    If Helm is needed, first download the offline package from https://pan.baidu.com/s/1B7WHuomXOmZKhHai4tV5MA (extraction code: kgzi)
    cd /home/
    tar -xzvf helm-v2.12.0-linux-amd64.tar
    cd linux-amd64
    cp helm /usr/local/bin
    helm init --service-account=kubernetes-dashboard-admin --skip-refresh --upgrade
    helm version
  • Join the worker nodes
    Run the join command on every node that should join the cluster (then verify from the master as shown below)
    kubeadm join 192.168.3.88:6443 --token 29ch4f.0zshhxkh0ii4q9ej --discovery-token-ca-cert-hash sha256:a3cb2e754064b1ea4871a12f2a31dd2a776cc32a6dde57e5009cabca520cb56f
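    Once the join completes, the new nodes can be checked from the master; they may take a minute or two to become Ready while the networking pods start:
    kubectl get nodes
    kubectl get pods -n kube-system -o wide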

Mount Extended Storage


  • The content of kubeha-gen.sh is shown below (it is recommended to download the script locally and edit details such as the e-mail address before running it; a usage sketch follows the script):
    #!/bin/bash

function check_parm()
{
if [ "${2}" == "" ]; then
echo -n "${1}"
return 1
else
return 0
fi
}

if [ -f ./cluster-info ]; then
source ./cluster-info
fi

check_parm "Enter the IP address of master-01: " ${CP0_IP}
if [ $? -eq 1 ]; then
read CP0_IP
fi
check_parm "Enter the IP address of master-02: " ${CP1_IP}
if [ $? -eq 1 ]; then
read CP1_IP
fi
check_parm "Enter the IP address of master-03: " ${CP2_IP}
if [ $? -eq 1 ]; then
read CP2_IP
fi
check_parm "Enter the VIP: " ${VIP}
if [ $? -eq 1 ]; then
read VIP
fi
check_parm "Enter the Net Interface: " ${NET_IF}
if [ $? -eq 1 ]; then
read NET_IF
fi
check_parm "Enter the cluster CIDR: " ${CIDR}
if [ $? -eq 1 ]; then
read CIDR
fi

echo """
cluster-info:
master-01: ${CP0_IP}
master-02: ${CP1_IP}
master-03: ${CP2_IP}
VIP: ${VIP}
Net Interface: ${NET_IF}
CIDR: ${CIDR}
"""
echo -n 'Please print "yes" to continue or "no" to cancel: '
read AGREE
while [ "${AGREE}" != "yes" ]; do
if [ "${AGREE}" == "no" ]; then
exit 0;
else
echo -n 'Please print "yes" to continue or "no" to cancel: '
read AGREE
fi
done

mkdir -p ~/ikube/tls

IPS=(${CP0_IP} ${CP1_IP} ${CP2_IP})

PRIORITY=(100 50 30)
STATE=("MASTER" "BACKUP" "BACKUP")
HEALTH_CHECK=""
for index in 0 1 2; do
HEALTH_CHECK=${HEALTH_CHECK}"""
real_server ${IPS[$index]} 6443 {
weight 1
SSL_GET {
url {
path /healthz
status_code 200
}
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
"""
done

for index in 0 1 2; do
ip=${IPS[${index}]}
echo """
global_defs {
router_id LVS_DEVEL
}

vrrp_instance VI_1 {
state ${STATE[${index}]}
interface ${NET_IF}
virtual_router_id 80
priority ${PRIORITY[${index}]}
advert_int 1
authentication {
auth_type PASS
auth_pass just0kk
}
virtual_ipaddress {
${VIP}
}
}

virtual_server ${VIP} 6443 {
delay_loop 6
lb_algo loadbalance
lb_kind DR
nat_mask 255.255.255.0
persistence_timeout 0
protocol TCP

${HEALTH_CHECK}
}
""" > ~/ikube/keepalived-${index}.conf
scp ~/ikube/keepalived-${index}.conf ${ip}:/etc/keepalived/keepalived.conf

ssh ${ip} "
systemctl stop keepalived
systemctl enable keepalived
systemctl start keepalived
kubeadm reset -f
rm -rf /etc/kubernetes/pki/"
done

echo """
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.0
controlPlaneEndpoint: "${VIP}:6443"
apiServer:
certSANs:
- ${CP0_IP}
- ${CP1_IP}
- ${CP2_IP}
- ${VIP}
networking:
# This CIDR is a Calico default. Substitute or remove for your CNI provider.
podSubnet: ${CIDR}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
""" > /etc/kubernetes/kubeadm-config.yaml

kubeadm init --config /etc/kubernetes/kubeadm-config.yaml
mkdir -p $HOME/.kube
cp -f /etc/kubernetes/admin.conf ${HOME}/.kube/config

kubectl apply -f https://raw.githubusercontent.com/Lentil1016/kubeadm-ha/1.13.0/calico/rbac.yaml
curl -fsSL https://raw.githubusercontent.com/Lentil1016/kubeadm-ha/1.13.0/calico/calico.yaml | sed "s!8.8.8.8!${CP0_IP}!g" | sed "s!10.244.0.0/16!${CIDR}!g" | kubectl apply -f -

JOIN_CMD=`kubeadm token create --print-join-command`

for index in 1 2; do
ip=${IPS[${index}]}
ssh $ip "mkdir -p /etc/kubernetes/pki/etcd; mkdir -p ~/.kube/"
scp /etc/kubernetes/pki/ca.crt $ip:/etc/kubernetes/pki/ca.crt
scp /etc/kubernetes/pki/ca.key $ip:/etc/kubernetes/pki/ca.key
scp /etc/kubernetes/pki/sa.key $ip:/etc/kubernetes/pki/sa.key
scp /etc/kubernetes/pki/sa.pub $ip:/etc/kubernetes/pki/sa.pub
scp /etc/kubernetes/pki/front-proxy-ca.crt $ip:/etc/kubernetes/pki/front-proxy-ca.crt
scp /etc/kubernetes/pki/front-proxy-ca.key $ip:/etc/kubernetes/pki/front-proxy-ca.key
scp /etc/kubernetes/pki/etcd/ca.crt $ip:/etc/kubernetes/pki/etcd/ca.crt
scp /etc/kubernetes/pki/etcd/ca.key $ip:/etc/kubernetes/pki/etcd/ca.key
scp /etc/kubernetes/admin.conf $ip:/etc/kubernetes/admin.conf
scp /etc/kubernetes/admin.conf $ip:~/.kube/config

ssh ${ip} "${JOIN_CMD} --experimental-control-plane"
done

echo "Cluster create finished."

echo """
[req]
distinguished_name = req_distinguished_name
prompt = yes

[ req_distinguished_name ]
countryName = Country Name (2 letter code)
countryName_value = CN

stateOrProvinceName = State or Province Name (full name)
stateOrProvinceName_value = Dalian

localityName = Locality Name (eg, city)
localityName_value = Haidian

organizationName = Organization Name (eg, company)
organizationName_value = Channelsoft

organizationalUnitName = Organizational Unit Name (eg, section)
organizationalUnitName_value = R & D Department

commonName = Common Name (eg, your name or your server's hostname)
commonName_value = *.multi.io

emailAddress = Email Address
emailAddress_value = lentil1016@gmail.com
""" > ~/ikube/tls/openssl.cnf
openssl req -newkey rsa:4096 -nodes -config ~/ikube/tls/openssl.cnf -days 3650 -x509 -out ~/ikube/tls/tls.crt -keyout ~/ikube/tls/tls.key
kubectl create -n kube-system secret tls ssl --cert ~/ikube/tls/tls.crt --key ~/ikube/tls/tls.key
kubectl apply -f https://raw.githubusercontent.com/Lentil1016/kubeadm-ha/1.13.0/plugin/traefik.yaml
kubectl apply -f https://raw.githubusercontent.com/Lentil1016/kubeadm-ha/1.13.0/plugin/metrics.yaml
kubectl apply -f https://raw.githubusercontent.com/Lentil1016/kubeadm-ha/1.13.0/plugin/kubernetes-dashboard.yaml

echo "Plugin install finished."
echo "Waiting for all pods into 'Running' status. You can press 'Ctrl + c' to terminate this waiting any time you like."
POD_UNREADY=`kubectl get pods -n kube-system 2>&1|awk '{print $3}'|grep -vE 'Running|STATUS'`
NODE_UNREADY=`kubectl get nodes 2>&1|awk '{print $2}'|grep 'NotReady'`
while [ "${POD_UNREADY}" != "" -o "${NODE_UNREADY}" != "" ]; do
sleep 1
POD_UNREADY=`kubectl get pods -n kube-system 2>&1|awk '{print $3}'|grep -vE 'Running|STATUS'`
NODE_UNREADY=`kubectl get nodes 2>&1|awk '{print $2}'|grep 'NotReady'`
done

echo

kubectl get cs
kubectl get nodes
kubectl get pods -n kube-system

echo """
join command:
`kubeadm token create --print-join-command`"""
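
A minimal usage sketch for running a locally edited copy of the script instead of piping it straight from GitHub (the local file name kubeha-gen.sh is just a choice); run it on 192.168.3.80 in the same directory as the cluster-info file created earlier:
curl -fsSL https://raw.githubusercontent.com/Lentil1016/kubeadm-ha/1.13.0/kubeha-gen.sh -o kubeha-gen.sh
vim kubeha-gen.sh     # adjust the certificate details (e-mail address, organization, common name, etc.)
bash ./kubeha-gen.sh  # reads ./cluster-info from the current directory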

