7. High-Availability Architecture (Scaling Out to a Multi-Master Setup)
As a container cluster system, Kubernetes provides self-healing for Pods through health checks plus restart policies, distributes Pods across nodes through its scheduler while maintaining the desired replica count, and automatically restarts Pods on other Nodes when a Node fails. Together these provide high availability at the application layer.
For the Kubernetes cluster itself, high availability involves two further layers: the Etcd database and the Kubernetes Master components. We have already made Etcd highly available with a 3-node cluster; this section explains and implements high availability for the Master nodes.
The Master node acts as the cluster's control center, maintaining the healthy working state of the entire cluster by continuously communicating with the kubelet and kube-proxy on the worker nodes. If the Master node fails, no cluster management is possible via kubectl or the API.
A Master node runs three main services: kube-apiserver, kube-controller-manager, and kube-scheduler. Of these, kube-controller-manager and kube-scheduler already achieve high availability themselves through a leader-election mechanism, so Master HA mainly concerns kube-apiserver. Since that component serves an HTTP API, making it highly available is much like a web server: put a load balancer in front of it, and it can be scaled horizontally.
Multi-Master architecture diagram:
7.1 Install Docker
Same as before; not repeated here.
7.2 Deploy the Master2 Node (192.168.31.74)
All operations on Master2 are identical to those already performed on Master1, so we only need to copy all the K8s files from Master1 over, then adjust the server IP and hostname and start the services.
1. Create the etcd certificate directory
Create the etcd certificate directory on Master2:
mkdir -p /opt/etcd/ssl
2. Copy files (run on Master1)
Copy all K8s files and the etcd certificates from Master1 to Master2:
scp -r /opt/kubernetes root@192.168.31.74:/opt
scp -r /opt/cni/ root@192.168.31.74:/opt
scp -r /opt/etcd/ssl root@192.168.31.74:/opt/etcd
scp /usr/lib/systemd/system/kube* root@192.168.31.74:/usr/lib/systemd/system
scp /usr/bin/kubectl root@192.168.31.74:/usr/bin
3. Delete certificate files
Delete the copied kubelet certificate and kubeconfig files, since they were issued for Master1 and will be regenerated when Master2's kubelet bootstraps:
rm -f /opt/kubernetes/cfg/kubelet.kubeconfig
rm -f /opt/kubernetes/ssl/kubelet*
4. Update the IP and hostname in the config files
Change the apiserver, kubelet, and kube-proxy configuration files to use the local IP and hostname:
vi /opt/kubernetes/cfg/kube-apiserver.conf
...
--bind-address=192.168.31.74 \
--advertise-address=192.168.31.74 \
...
vi /opt/kubernetes/cfg/kubelet.conf
--hostname-override=k8s-master2
vi /opt/kubernetes/cfg/kube-proxy-config.yml
hostnameOverride: k8s-master2
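If you prefer not to edit the three files by hand in vi, the same changes can be scripted with sed. The sketch below is a dry run against throwaway copies in a temp directory, so it is safe to try anywhere; to apply it for real on Master2, point CFG at /opt/kubernetes/cfg instead. The IPs and hostnames assume this guide's layout.

```shell
# Dry run on throwaway copies; set CFG=/opt/kubernetes/cfg on Master2 to apply for real.
CFG=$(mktemp -d)
cat > "$CFG/kube-apiserver.conf" << 'EOT'
--bind-address=192.168.31.71 \
--advertise-address=192.168.31.71 \
EOT
echo '--hostname-override=k8s-master' > "$CFG/kubelet.conf"
echo 'hostnameOverride: k8s-master' > "$CFG/kube-proxy-config.yml"

# Rewrite Master1's IP and hostname to Master2's values.
sed -i 's/192\.168\.31\.71/192.168.31.74/g' "$CFG/kube-apiserver.conf"
sed -i 's/k8s-master$/k8s-master2/' "$CFG/kubelet.conf" "$CFG/kube-proxy-config.yml"

grep -h . "$CFG"/*   # review the rewritten lines before touching the real files
```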
5. Start the services and enable them at boot
systemctl daemon-reload
systemctl start kube-apiserver
systemctl start kube-controller-manager
systemctl start kube-scheduler
systemctl start kubelet
systemctl start kube-proxy
systemctl enable kube-apiserver
systemctl enable kube-controller-manager
systemctl enable kube-scheduler
systemctl enable kubelet
systemctl enable kube-proxy
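The ten commands above can also be written as a loop over the five services. In this sketch an echo is left in front of systemctl as a dry run, so it only prints each command; remove the echo to actually execute them:

```shell
# Dry-run loop over the five Master2 services; drop the echo to really start/enable them.
for svc in kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy; do
    echo systemctl start "$svc"
    echo systemctl enable "$svc"
done
```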
6. Check cluster status
kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-1 Healthy {"health":"true"}
etcd-2 Healthy {"health":"true"}
etcd-0 Healthy {"health":"true"}
7. Approve the kubelet certificate request
kubectl get csr
NAME AGE SIGNERNAME REQUESTOR CONDITION
node-csr-JYNknakEa_YpHz797oKaN-ZTk43nD51Zc9CJkBLcASU 85m kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Pending
kubectl certificate approve node-csr-JYNknakEa_YpHz797oKaN-ZTk43nD51Zc9CJkBLcASU
kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master Ready <none> 34h v1.18.3
k8s-master2 Ready <none> 83m v1.18.3
k8s-node1 Ready <none> 33h v1.18.3
k8s-node2 Ready <none> 33h v1.18.3
If you run into problems while following along, or spot an error in this document, feel free to contact A-Liang. WeChat: init1024
7.3 Deploy the Nginx Load Balancer
kube-apiserver high-availability architecture diagram:
- Nginx is a mainstream web server and reverse proxy; here we use its layer-4 (stream) proxying to load-balance the apiservers.
- Keepalived is a mainstream high-availability tool that implements active/standby failover between two servers by binding a VIP. In the topology above, Keepalived decides whether to fail over (move the VIP) based on Nginx's running state: when the Nginx master node dies, the VIP is automatically bound on the Nginx backup node, keeping the VIP continuously reachable and thus making Nginx itself highly available.
1. Install packages (on both master and backup)
yum install epel-release -y
yum install nginx keepalived -y
2. Nginx configuration file (identical on master and backup)
cat > /etc/nginx/nginx.conf << "EOF"
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

# Layer-4 load balancing for the two Master kube-apiserver instances
stream {

    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';

    access_log /var/log/nginx/k8s-access.log main;

    upstream k8s-apiserver {
        server 192.168.31.71:6443;   # Master1 APISERVER IP:PORT
        server 192.168.31.74:6443;   # Master2 APISERVER IP:PORT
    }

    server {
        listen 6443;
        proxy_pass k8s-apiserver;
    }
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    server {
        listen 80 default_server;
        server_name _;

        location / {
        }
    }
}
EOF
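One optional refinement, not part of the original setup: the open-source nginx stream module does only passive health checking, and the max_fails / fail_timeout parameters on each upstream server line control how quickly a dead apiserver is taken out of rotation. The values below are illustrative:

```nginx
upstream k8s-apiserver {
    server 192.168.31.71:6443 max_fails=2 fail_timeout=10s;   # Master1
    server 192.168.31.74:6443 max_fails=2 fail_timeout=10s;   # Master2
}
```

After any edit, `nginx -t` validates the configuration syntax before you reload.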
3. Keepalived configuration file (Nginx master)
cat > /etc/keepalived/keepalived.conf << EOF
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33          # change to the actual NIC name
    virtual_router_id 51     # VRRP router ID; unique per VRRP instance, same on master and backup
    priority 100             # priority; set this to 90 on the backup server
    advert_int 1             # VRRP heartbeat advertisement interval, default 1 second
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    # Virtual IP
    virtual_ipaddress {
        192.168.31.88/24
    }
    track_script {
        check_nginx
    }
}
EOF
vrrp_script: specifies the script that checks nginx's working state (failover decisions are based on its result)
virtual_ipaddress: the virtual IP (VIP)
Script to check nginx status:
cat > /etc/keepalived/check_nginx.sh << "EOF"
#!/bin/bash
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
    exit 1
else
    exit 0
fi
EOF
chmod +x /etc/keepalived/check_nginx.sh
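Before relying on keepalived to run it, you can exercise the script by hand. The sketch below writes a copy to /tmp so it is safe to try on any machine: with no nginx process running it should exit 1 (the code keepalived treats as failure), and once nginx is running it should exit 0.

```shell
# Hand-test of the health-check logic; uses a /tmp copy so nothing under
# /etc is touched. Expect exit code 1 while nginx is stopped.
cat > /tmp/check_nginx.sh << "EOF"
#!/bin/bash
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
    exit 1
else
    exit 0
fi
EOF
chmod +x /tmp/check_nginx.sh
/tmp/check_nginx.sh && echo "nginx running (exit 0)" || echo "nginx not running (exit $?)"
```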
4. Keepalived configuration file (Nginx backup)
cat > /etc/keepalived/keepalived.conf << EOF
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_BACKUP
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51   # VRRP router ID; unique per VRRP instance, must match the master
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.31.88/24
    }
    track_script {
        check_nginx
    }
}
EOF
The nginx status-check script referenced by the configuration above:
cat > /etc/keepalived/check_nginx.sh << "EOF"
#!/bin/bash
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
    exit 1
else
    exit 0
fi
EOF
chmod +x /etc/keepalived/check_nginx.sh
Note: keepalived decides whether to fail over based on the script's exit code (0 means nginx is working; non-zero means it is not).
5. Start the services and enable them at boot
systemctl daemon-reload
systemctl start nginx
systemctl start keepalived
systemctl enable nginx
systemctl enable keepalived
6. Check keepalived status
ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:04:f7:2c brd ff:ff:ff:ff:ff:ff
inet 192.168.31.80/24 brd 192.168.31.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet 192.168.31.88/24 scope global secondary ens33
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe04:f72c/64 scope link
valid_lft forever preferred_lft forever
You can see the VIP 192.168.31.88 bound on the ens33 interface, which means keepalived is working correctly.
7. Nginx+Keepalived failover test
Stop Nginx on the master node and verify that the VIP fails over to the backup server.
On the Nginx master, run: pkill nginx
On the Nginx backup, run ip addr and confirm the VIP is now bound there.
8. Test access through the load balancer
From any node in the K8s cluster, query the K8s version through the VIP with curl:
curl -k https://192.168.31.88:6443/version
{
"major": "1",
"minor": "18",
"gitVersion": "v1.18.3",
"gitCommit": "2e7996e3e2712684bc73f0dec0200d64eec7fe40",
"gitTreeState": "clean",
"buildDate": "2020-05-20T12:43:34Z",
"goVersion": "go1.13.9",
"compiler": "gc",
"platform": "linux/amd64"
}
The K8s version information comes back correctly, so the load balancer is set up properly. The request flows: curl -> VIP (nginx) -> apiserver
The nginx log also shows which apiserver each request was forwarded to:
tail /var/log/nginx/k8s-access.log -f
192.168.31.81 192.168.31.71:6443 - [30/May/2020:11:15:10 +0800] 200 422
192.168.31.81 192.168.31.74:6443 - [30/May/2020:11:15:26 +0800] 200 422
We are not done yet; the most critical step is still ahead.
7.4 Point All Worker Nodes at the LB VIP
Consider this: although we have added Master2 and a load balancer, we scaled out from a single-Master architecture, so every Node component is still connected to Master1. If we do not switch them to the VIP behind the load balancer, the Master remains a single point of failure.
So the next step is to change the configuration files of all Node components, replacing 192.168.31.71 with 192.168.31.88 (the VIP):
| Role | IP |
|---|---|
| k8s-master1 | 192.168.31.71 |
| k8s-master2 | 192.168.31.74 |
| k8s-node1 | 192.168.31.72 |
| k8s-node2 | 192.168.31.73 |
That is, every node listed by the kubectl get node command.
Run on all of the Worker Nodes above:
sed -i 's#192.168.31.71:6443#192.168.31.88:6443#' /opt/kubernetes/cfg/*
systemctl restart kubelet
systemctl restart kube-proxy
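After the sed and restarts, a quick grep (paths per this guide's layout) confirms that no config on the node still references Master1's address directly:

```shell
# If the replacement succeeded, grep finds nothing and the OK line prints.
grep -R '192.168.31.71' /opt/kubernetes/cfg/ 2>/dev/null \
    || echo "OK: no config references the old apiserver address"
```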
Check node status:
kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master Ready <none> 34h v1.18.3
k8s-master2 Ready <none> 101m v1.18.3
k8s-node1 Ready <none> 33h v1.18.3
k8s-node2 Ready <none> 33h v1.18.3
At this point, a complete highly available Kubernetes cluster is deployed!
PS: If you are on a public cloud, keepalived is generally not supported there; instead you can use the provider's load balancer product directly (an internal one is fine, and often free). The architecture is the same as above: simply load-balance across the Master kube-apiservers.