A highly available cluster here means 1 load balancer + 3 masters (each also running etcd) + n worker nodes; this is the recommended layout for production environments.
This article is adapted, with minor changes, from the official documentation: https://kubernetes.io/docs/setup/independent/high-availability/
- In recent versions of k8s, etcd members can coexist with the master nodes on the same servers;
- etcd can be installed in 3 ways (standalone, as a Docker container, or stacked inside the k8s control plane);
Although the stacked (k8s-integrated) approach is the officially recommended one, it is still full of pitfalls at the moment, so this guide uses the standalone installation instead.
1. Preparation
Node | IP | Role |
---|---|---|
proxy | 192.168.0.10 | haproxy |
master1 | 192.168.0.11 | master, etcd |
master2 | 192.168.0.12 | master, etcd |
master3 | 192.168.0.13 | master, etcd |
Please first follow the earlier article on deploying a single-node k8s cluster to build the base image.
Give each master its own hostname (master1, master2, master3).
hostnamectl set-hostname master1
etcd and the masters can share the same servers (if budget is no concern, feel free to run them separately).
There are many scp operations later, so for convenience set up passwordless SSH from master1 to master1, master2, and master3.
ssh-keygen # just press Enter through all the prompts
scp .ssh/id_rsa.pub master1:
scp .ssh/id_rsa.pub master2:
scp .ssh/id_rsa.pub master3:
[root@master1 ~]# mkdir -p .ssh && cat id_rsa.pub >> .ssh/authorized_keys
[root@master2 ~]# mkdir -p .ssh && cat id_rsa.pub >> .ssh/authorized_keys
[root@master3 ~]# mkdir -p .ssh && cat id_rsa.pub >> .ssh/authorized_keys
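Before moving on, a quick check that passwordless login works from master1 (this assumes the hostnames master1, master2 and master3 resolve via /etc/hosts or DNS):
for h in master1 master2 master3; do ssh -o BatchMode=yes $h hostname; done   # should print each hostname without a password prompt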
2. Configure haproxy
If you are building the cluster in the cloud, you can skip this step and use the provider's load-balancer service instead.
Run the following on the proxy node.
master1=192.168.0.11
master2=192.168.0.12
master3=192.168.0.13
yum install -y haproxy
systemctl enable haproxy
cat << EOF >> /etc/haproxy/haproxy.cfg
listen k8s-lb *:6443
    mode tcp
    balance roundrobin
    server s1 $master1:6443 weight 1 maxconn 1000 check inter 2000 rise 2 fall 3
    server s2 $master2:6443 weight 1 maxconn 1000 check inter 2000 rise 2 fall 3
    server s3 $master3:6443 weight 1 maxconn 1000 check inter 2000 rise 2 fall 3
EOF
service haproxy start
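A quick sanity check that haproxy is up and listening on port 6443; the backends will report as DOWN until the apiservers exist, which is expected at this stage:
systemctl status haproxy     # the service should be active (running)
ss -tlnp | grep 6443         # haproxy should be listening on *:6443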
3. Install the etcd cluster
- Install etcd on all 3 masters
yum install -y etcd
systemctl enable etcd
- Generate the configuration
Run this on master1.
etcd1=192.168.0.11
etcd2=192.168.0.12
etcd3=192.168.0.13
TOKEN=abcd1234
ETCDHOSTS=($etcd1 $etcd2 $etcd3)
NAMES=("infra0" "infra1" "infra2")
for i in "${!ETCDHOSTS[@]}"; do
HOST=${ETCDHOSTS[$i]}
NAME=${NAMES[$i]}
cat << EOF > /tmp/$NAME.conf
# [member]
ETCD_NAME=$NAME
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://$HOST:2380"
ETCD_LISTEN_CLIENT_URLS="http://$HOST:2379,http://127.0.0.1:2379"
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://$HOST:2380"
ETCD_INITIAL_CLUSTER="${NAMES[0]}=http://${ETCDHOSTS[0]}:2380,${NAMES[1]}=http://${ETCDHOSTS[1]}:2380,${NAMES[2]}=http://${ETCDHOSTS[2]}:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="$TOKEN"
ETCD_ADVERTISE_CLIENT_URLS="http://$HOST:2379"
EOF
done
- Overwrite the etcd configuration on each node
for i in "${!ETCDHOSTS[@]}"; do
HOST=${ETCDHOSTS[$i]}
NAME=${NAMES[$i]}
scp /tmp/$NAME.conf $HOST:
ssh $HOST "\mv -f $NAME.conf /etc/etcd/etcd.conf"
rm -f /tmp/$NAME.conf
done
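An optional check that each node received its own configuration (reusing the ETCDHOSTS array defined above):
for HOST in "${ETCDHOSTS[@]}"; do ssh $HOST "grep ^ETCD_NAME /etc/etcd/etcd.conf"; done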
- Start etcd on every node
Running service etcd start on master1 will appear to hang: it stays pending until etcd on master2 comes up, because etcd waits for a quorum of members before the first one can finish bootstrapping.
[root@master1 ~]# service etcd start
[root@master2 ~]# service etcd start
[root@master3 ~]# service etcd start
- Verify the cluster from any node
etcdctl member list
etcdctl cluster-health
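Both commands above talk to the local member; to query the cluster explicitly (for example from the proxy node), the v2 etcdctl accepts an --endpoints flag. A sketch using the IPs from the table above:
etcdctl --endpoints=http://192.168.0.11:2379,http://192.168.0.12:2379,http://192.168.0.13:2379 cluster-health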
4. Install the master cluster
- Initialize the cluster on master1
proxy=192.168.0.10
etcd1=192.168.0.11
etcd2=192.168.0.12
etcd3=192.168.0.13
master1=$etcd1
master2=$etcd2
master3=$etcd3
cat << EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
kubernetesVersion: stable
apiServerCertSANs:
- "$proxy"
controlPlaneEndpoint: "$proxy:6443"
etcd:
  external:
    endpoints:
    - "http://$etcd1:2379"
    - "http://$etcd2:2379"
    - "http://$etcd3:2379"
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
networking:
  podSubnet: "10.244.0.0/16"
EOF
kubeadm init --config kubeadm-config.yaml
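When kubeadm init finishes, it prints a kubeadm join command (save it, it is needed below) along with instructions for pointing kubectl at the new cluster; the usual follow-up on master1 is:
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config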
- Copy the certificates needed by the cluster to the other master nodes
# make a list of required kubernetes certificate files
cat << EOF > certificate_files.txt
/etc/kubernetes/pki/ca.crt
/etc/kubernetes/pki/ca.key
/etc/kubernetes/pki/sa.key
/etc/kubernetes/pki/sa.pub
/etc/kubernetes/pki/front-proxy-ca.crt
/etc/kubernetes/pki/front-proxy-ca.key
EOF
# create the archive
tar -czf control-plane-certificates.tar.gz -T certificate_files.txt
CONTROL_PLANE_IPS="$master2 $master3"
for host in ${CONTROL_PLANE_IPS}; do
scp control-plane-certificates.tar.gz $host:
done
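Optionally, list the archive to confirm it contains the six files; note that tar stores the paths without the leading '/', which is why the extraction step below uses --strip-components 3:
tar -tzf control-plane-certificates.tar.gz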
- Configure the other master nodes
Run the following on master2 and master3.
mkdir -p /etc/kubernetes/pki
tar -xzf control-plane-certificates.tar.gz -C /etc/kubernetes/pki --strip-components 3
Run the kubeadm join command generated on master1, appending the --experimental-control-plane flag at the end. The final command looks something like:
kubeadm join ha.k8s.example.com:6443 --token 5ynki1.3erp9i3yo7gqg1nv --discovery-token-ca-cert-hash sha256:a00055bd8c710a9906a3d91b87ea02976334e1247936ac061d867a0f014ecd81 --experimental-control-plane
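After the join completes on master2 and master3, you can confirm from master1 that all three control-plane nodes have registered; they will stay NotReady until the network plugin is installed in the next step:
kubectl get nodes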
- Install the flannel network plugin
Run this on any master node.
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
At this point the cluster setup is complete; verify it with the following commands.
kubectl get cs # check control-plane component status, including etcd
kubectl get pods -o wide -n kube-system # check the system pods
kubectl get nodes # check node status
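Worker nodes (the "n nodes" from the overview) join through the load balancer using the same kubeadm join command printed by kubeadm init, only without --experimental-control-plane; a sketch, substituting the token and hash from your own init output:
kubeadm join 192.168.0.10:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>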