I. Hardware Environment Preparation
No. | IP | OS Version | Hostname | Node Type |
---|---|---|---|---|
1 | 192.168.0.248 | CentOS 7.6.1810 (Core) | k8s-clusters | master |
2 | 192.168.0.170 | CentOS 7.6.1810 (Core) | k8s-clusters-1 | master |
3 | 192.168.0.222 | CentOS 7.6.1810 (Core) | k8s-clusters-2 | master |
4 | 192.168.0.55 | CentOS 7.6.1810 (Core) | k8s-clusters-3 | node |
II. System Software Prerequisites
See the earlier article: k8s study notes — installing a k8s cluster with kubeadm.
III. Deploying an HAProxy + Keepalived High-Availability Load Balancer
1. Confirm the kernel version, then enable IPVS
uname -r
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_fo ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in \${ipvs_modules}; do
    /sbin/modinfo -F filename \${kernel_module} > /dev/null 2>&1
    if [ \$? -eq 0 ]; then
        /sbin/modprobe \${kernel_module}
    fi
done
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules
# run the script
bash /etc/sysconfig/modules/ipvs.modules
# check that the ip_vs modules are loaded
lsmod | grep ip_vs
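Loading the modules only makes IPVS available to the kernel; kube-proxy still defaults to iptables unless told otherwise. A minimal sketch of the kube-proxy configuration fragment that selects IPVS mode — the file name `kube-proxy-ipvs.yaml` is illustrative, and in practice this fragment is merged into the config document passed to `kubeadm init --config`:

```shell
# Illustrative fragment: ask kube-proxy for IPVS mode at init time.
# File name is arbitrary; merge into the config passed to `kubeadm init --config`.
cat > kube-proxy-ipvs.yaml <<'EOF'
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
EOF
```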
2. Prepare the HAProxy configuration file
mkdir /etc/haproxy
# create the configuration file
cat >/etc/haproxy/haproxy.cfg<<EOF
global
    log 127.0.0.1 local0 err
    maxconn 50000
    uid 99
    gid 99
    #daemon
    nbproc 1
    pidfile haproxy.pid

defaults
    mode http
    log 127.0.0.1 local0 err
    maxconn 50000
    retries 3
    timeout connect 5s
    timeout client 30s
    timeout server 30s
    timeout check 2s

listen admin_stats
    mode http
    bind 0.0.0.0:1080
    log 127.0.0.1 local0 err
    stats refresh 30s
    stats uri /stats
    stats realm Haproxy\ Statistics
    stats auth admin:admin
    stats hide-version
    stats admin if TRUE

frontend kube-apiserver
    bind 0.0.0.0:8443
    mode tcp
    default_backend kube-apiserver

backend kube-apiserver
    mode tcp
    balance roundrobin
    server master-1.k8s.com 192.168.0.248:6443 weight 1 maxconn 1000 check inter 2000 rise 2 fall 3
    server master-2.k8s.com 192.168.0.170:6443 weight 1 maxconn 1000 check inter 2000 rise 2 fall 3
    server master-3.k8s.com 192.168.0.222:6443 weight 1 maxconn 1000 check inter 2000 rise 2 fall 3
EOF
3. Start HAProxy
docker pull haproxy:1.7.8-alpine
# --net=host already exposes ports 8443 and 1080 on the host;
# -p mappings are ignored in host network mode, so they are omitted
docker run -d --name k8s-haproxy \
--net=host --restart=always \
-v /etc/haproxy:/usr/local/etc/haproxy:ro \
haproxy:1.7.8-alpine
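Before starting (or after editing) the configuration, it is worth letting HAProxy itself validate it: `haproxy -c -f <file>` only parses the file without binding any ports. A sketch using the same image, guarded so it is a no-op on machines where docker or the config file is absent:

```shell
# Validate the config with the same image before (re)starting the container.
# `haproxy -c -f` parses the file only; it does not bind any ports.
CFG=/etc/haproxy/haproxy.cfg
if command -v docker >/dev/null 2>&1 && [ -f "$CFG" ]; then
  docker run --rm -v /etc/haproxy:/usr/local/etc/haproxy:ro \
    haproxy:1.7.8-alpine haproxy -c -f /usr/local/etc/haproxy/haproxy.cfg
else
  echo "skipped: docker or $CFG not available here"
fi
```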
4. Check the status page in a browser (credentials: admin:admin)
http://192.168.0.248:1080/stats
http://192.168.0.170:1080/stats
http://192.168.0.222:1080/stats
5. Start Keepalived
docker pull osixia/keepalived:1.4.4
docker run --net=host --cap-add=NET_ADMIN \
-e KEEPALIVED_INTERFACE=eth0 \
-e KEEPALIVED_VIRTUAL_IPS="#PYTHON2BASH:['192.168.0.199']" \
-e KEEPALIVED_UNICAST_PEERS="#PYTHON2BASH:['192.168.0.248','192.168.0.170','192.168.0.222']" \
-e KEEPALIVED_PASSWORD=admin \
--name k8s-keepalived \
--restart always \
-d osixia/keepalived:1.4.4
6. Verify the Keepalived deployment
# the logs should show one node elected MASTER and the other two BACKUP
docker logs k8s-keepalived
# ping the VIP
ping -c2 192.168.0.199
# if this fails, clean up and retry (substitute your actual NIC name for eth0)
docker rm -f k8s-keepalived
ip a del 192.168.0.199/32 dev eth0
IV. Multi-Master Kubernetes Cluster Setup
1. Initialize the first node with kubeadm
kubeadm init --image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.23.0 \
--pod-network-cidr=10.1.0.0/16 \
--apiserver-advertise-address=192.168.0.248 \
--control-plane-endpoint=192.168.0.199:8443
- On success, output like the following is printed:
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join 192.168.0.199:8443 --token v4m8ry.z1sek31mp8uwe5cv \
--discovery-token-ca-cert-hash sha256:7b7c80f2a9816e828438b6f6ab94368ac15c2a55395bd235c8487187bb375a49 \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.0.199:8443 --token v4m8ry.z1sek31mp8uwe5cv \
--discovery-token-ca-cert-hash sha256:7b7c80f2a9816e828438b6f6ab94368ac15c2a55395bd235c8487187bb375a49
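As an alternative to manually copying certificates between masters (step 3 below), kubeadm v1.15+ can distribute them itself: `--upload-certs` at init stores the control-plane certs in an encrypted Secret, and joining masters fetch them with `--certificate-key`. A hedged sketch, guarded so it is a no-op where kubeadm is absent; the `<token>`, `<hash>`, and `<key>` placeholders come from the init output and are not real values:

```shell
# Optional alternative (kubeadm >= 1.15): let kubeadm distribute the
# control-plane certs instead of copying them by hand as in step 3.
# Guarded: no-op on machines without kubeadm; placeholders are not real values.
ENDPOINT=192.168.0.199:8443
if command -v kubeadm >/dev/null 2>&1; then
  kubeadm init --image-repository registry.aliyuncs.com/google_containers \
    --kubernetes-version v1.23.0 \
    --pod-network-cidr=10.1.0.0/16 \
    --apiserver-advertise-address=192.168.0.248 \
    --control-plane-endpoint=${ENDPOINT} \
    --upload-certs
  # On each additional master, use the certificate key printed by init:
  # kubeadm join ${ENDPOINT} --token <token> \
  #   --discovery-token-ca-cert-hash sha256:<hash> \
  #   --control-plane --certificate-key <key>
else
  echo "skipped: kubeadm not available on this machine"
fi
```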
2. Prepare the kubeconfig file for kubectl
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
3. Sync this master's certificates to the masters that have not yet joined
# set up passwordless SSH between nodes
ssh-keygen
ssh-copy-id 192.168.0.170
ssh-copy-id 192.168.0.222
cat > scp_k8s_crt.sh <<'EOF'
#!/bin/bash
USER=$1
CONTROL_PLANE_IPS=$2
for host in ${CONTROL_PLANE_IPS}; do
ssh "${USER}"@$host "mkdir -p /etc/kubernetes/pki/etcd"
scp /etc/kubernetes/pki/ca.* "${USER}"@$host:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.* "${USER}"@$host:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.* "${USER}"@$host:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/ca.* "${USER}"@$host:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/admin.conf "${USER}"@$host:/etc/kubernetes/
done
EOF
bash scp_k8s_crt.sh root 192.168.0.170
bash scp_k8s_crt.sh root 192.168.0.222
4. Join the remaining master nodes
# run kubeadm join on each master that has not yet joined
kubeadm join 192.168.0.199:8443 --token v4m8ry.z1sek31mp8uwe5cv \
--discovery-token-ca-cert-hash sha256:7b7c80f2a9816e828438b6f6ab94368ac15c2a55395bd235c8487187bb375a49 \
--control-plane
5. Join the worker nodes
# run kubeadm join on each worker node
kubeadm join 192.168.0.199:8443 --token v4m8ry.z1sek31mp8uwe5cv \
--discovery-token-ca-cert-hash sha256:7b7c80f2a9816e828438b6f6ab94368ac15c2a55395bd235c8487187bb375a49
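Join tokens expire (24 hours by default), so for nodes added later, a fresh join command can be generated on any master with `kubeadm token create --print-join-command`. The CA hash can also be recomputed from `ca.crt` at any time; the helper below wraps the pipeline from the kubeadm documentation (the function name `ca_cert_hash` is our own):

```shell
# Recompute the --discovery-token-ca-cert-hash value from a CA certificate.
# The pipeline is the one documented for kubeadm; the function name is ours.
ca_cert_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //'
}
# Usage on a master: ca_cert_hash /etc/kubernetes/pki/ca.crt
```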
6. Inspect the cluster
kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-clusters NotReady control-plane,master 60m v1.23.1
k8s-clusters-1 NotReady control-plane,master 47m v1.23.1
k8s-clusters-2 NotReady control-plane,master 46m v1.23.1
k8s-clusters-3 NotReady <none> 46m v1.23.1
# nodes stay NotReady until a CNI plugin such as Flannel or Calico is installed, after which they become Ready
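For example, Flannel can be installed from its published manifest. Note that the stock manifest assumes the pod CIDR 10.244.0.0/16, while this cluster was initialized with `--pod-network-cidr=10.1.0.0/16`, so the `net-conf.json` Network field must be edited to match. A sketch, guarded so it is a no-op without kubectl; the manifest URL is the flannel project's current published location and may change over time:

```shell
# Example CNI install (Flannel). The stock manifest assumes Network=10.244.0.0/16;
# since init used --pod-network-cidr=10.1.0.0/16, rewrite it before applying.
# The URL is flannel's current published manifest location and may move.
FLANNEL_URL=https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
if command -v kubectl >/dev/null 2>&1; then
  curl -fsSLo kube-flannel.yml "$FLANNEL_URL"
  sed -i 's#10.244.0.0/16#10.1.0.0/16#' kube-flannel.yml
  kubectl apply -f kube-flannel.yml
else
  echo "skipped: kubectl not available on this machine"
fi
```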
References:
http://www.reibang.com/p/6a57d79f08a3