K8S v1.15.1 High-Availability Cluster (Hands-On, Pitfalls Included)

Architecture

Kubernetes high availability, simply put, means making each of the core K8S components highly available.

  • apiserver HA: implemented with haproxy + keepalived;
  • controller-manager HA: Kubernetes elects a leader internally (controlled by the --leader-elect flag, which defaults to true); only one controller-manager instance is active in the cluster at any given moment;
  • scheduler HA: likewise handled by internal leader election (--leader-elect, default true); only one scheduler instance is active at any given moment;
  • etcd HA: two topologies are possible (stacked and external); I recommend the external topology.
  1. Stacked: the etcd members run on the same nodes as the control plane, i.e. they are deployed together with kubeadm, and each etcd instance only talks to the apiserver on its own node. This topology needs less infrastructure but is also less resilient to failures.
  2. External: etcd is separated from the control plane, and every etcd member can talk to every apiserver node. It needs more hardware but offers better fault tolerance. It can still be deployed on the same servers that run kubeadm — the separation is then only architectural, not physical — but if hardware allows, I suggest picking spare dedicated nodes for etcd.
k8s-ha.png

The architecture diagram is taken from https://www.kubernetes.org.cn/6964.html


Preparation

Hostname      IP address     Roles                                Version
k8s-master90  192.168.1.90   Master, Haproxy, Keepalived, Etcd    CentOS 7.7
k8s-master91  192.168.1.91   Master, Haproxy, Keepalived, Etcd    CentOS 7.7
k8s-master93  192.168.1.93   Master, Haproxy, Keepalived, Etcd    CentOS 7.7

To save server resources, only three machines are used here, and the HA cluster is built with kubeadm.


Environment setup

I use a one-shot shell script to configure the base environment on each node, download the required packages and start the related services. That said, I still recommend running every command by hand on one machine first, so you can see exactly what each command does; once everything works, run the script on the remaining nodes in one go.
The script assumes a working yum repository and that the firewall may be stopped and disabled. Read it through and adjust it to your environment before using it!

auto_configure_env.sh

#!/bin/bash
echo "##### Update /etc/hosts #####"
cat >> /etc/hosts <<EOF
192.168.1.90 k8s-master90
192.168.1.91 k8s-master91
192.168.1.93 k8s-master93
EOF

echo "##### Stop firewalld #####"
systemctl stop firewalld
systemctl disable firewalld

echo "##### Modify iptables FORWARD policy #####"
iptables -P FORWARD ACCEPT

echo "##### Close selinux #####"
setenforce 0 
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

echo "##### Close swap #####"
swapoff -a

echo "##### Modify limits #####"
cat > /etc/security/limits.d/kubernetes.conf <<EOF
*       soft    nproc   131072
*       hard    nproc   131072
*       soft    nofile  131072
*       hard    nofile  131072
root    soft    nproc   131072
root    hard    nproc   131072
root    soft    nofile  131072
root    hard    nofile  131072
EOF

echo "##### Create /etc/sysctl.d/k8s.conf #####"
cat >> /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.ipv4.tcp_tw_recycle = 0
vm.swappiness = 0
vm.overcommit_memory = 1
vm.panic_on_oom = 0
fs.inotify.max_user_instances = 8192
fs.inotify.max_user_watches = 1048576
fs.file-max = 52706963
fs.nr_open = 52706963
net.ipv6.conf.all.disable_ipv6 = 1
EOF

echo "##### Add kernel module and Sysctl #####"
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf

echo "##### Add ipvs modules #####"
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules

echo "##### Install ipset ipvsadm #####"
yum -y install ipset ipvsadm

echo "##### Install docker-ce-18.09.7 #####"
# https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-18.09.7-3.el7.x86_64.rpm
# https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-cli-18.09.7-3.el7.x86_64.rpm
# https://download.docker.com/linux/centos/7/x86_64/stable/Packages/containerd.io-1.2.13-3.1.el7.x86_64.rpm
yum -y install yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum makecache fast
yum -y install docker-ce-18.09.7-3.el7.x86_64 docker-ce-cli-18.09.7-3.el7.x86_64 containerd.io-1.2.13-3.1.el7.x86_64
systemctl start docker
systemctl enable docker

echo "##### Modify docker cgroup driver #####"
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "insecure-registries": ["https://hub.atguigu.com"]
}
EOF

echo "##### Restart docker service #####"
systemctl restart docker

echo "##### Install kubeadm kubelet kubectl #####"
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum -y install kubeadm-1.15.1-0.x86_64 kubelet-1.15.1-0.x86_64 kubectl-1.15.1-0.x86_64

echo "##### Modify kubelet config #####"
sed -i "s/KUBELET_EXTRA_ARGS=/KUBELET_EXTRA_ARGS=--fail-swap-on=false/g" /etc/sysconfig/kubelet

echo "##### Enable kubelet service #####"
systemctl enable kubelet.service
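After the script runs, it is worth verifying that the ipvs kernel modules really did load and that swap is off. The sketch below is my own addition, not part of the original script; the helper name required_loaded is hypothetical:

```shell
#!/bin/bash
# required_loaded MODULES LSMOD_OUTPUT: print every module from the
# space-separated MODULES list that does not appear in LSMOD_OUTPUT,
# and return non-zero if any are missing.
required_loaded() {
  local m rc=0
  for m in $1; do
    echo "$2" | grep -qw "$m" || { echo "missing: $m"; rc=1; }
  done
  return $rc
}

# On a real node you would pass the live module table:
#   required_loaded "ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4" "$(lsmod)"
#   swapon --show    # should print nothing if swap is really off
```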

Setting up haproxy

Haproxy provides high availability, load balancing and proxying for TCP- and HTTP-based applications. Compared with nginx it has stronger load-balancing performance, supports tens of thousands of concurrent connections, session persistence and cookie-based routing, ships with a capable web page for monitoring backend server state, and offers a wide choice of balancing algorithms.

This service must be deployed on every master, with an identical configuration on each. The script again assumes a working yum repository, and you must adjust the backend apiserver addresses — read it carefully!

auto_install_haproxy.sh

#!/bin/bash
echo "##### Install haproxy service #####"
# http://mirror.centos.org/centos/7/os/x86_64/Packages/haproxy-1.5.18-9.el7.x86_64.rpm
yum -y install haproxy-1.5.18-9.el7.x86_64

echo "##### Modify haproxy cfg #####"
cat > /etc/haproxy/haproxy.cfg <<EOF
global
    log         127.0.0.1 local2
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    stats socket /var/lib/haproxy/stats

defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

frontend  kubernetes-apiserver
    mode tcp
    bind *:12567    # custom listening port

    acl url_static       path_beg       -i /static /images /javascript /stylesheets
    acl url_static       path_end       -i .jpg .gif .png .css .js

    default_backend      kubernetes-apiserver

backend kubernetes-apiserver
    mode tcp
    balance     roundrobin
    # backend apiserver list
    server  k8s-master90 192.168.1.90:6443 check
    server  k8s-master91 192.168.1.91:6443 check
    server  k8s-master93 192.168.1.93:6443 check
EOF

echo "##### Restart haproxy service #####"
systemctl restart haproxy
systemctl enable haproxy
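A quick way to confirm the frontend came up is to look for a listener on the chosen port. This check is my own sketch (the helper name listens_on is hypothetical); on the node itself you would feed it live `ss -tnl` output:

```shell
#!/bin/bash
# listens_on PORT SS_OUTPUT: succeed if SS_OUTPUT (the output of `ss -tnl`)
# contains a TCP listener bound to PORT.
listens_on() {
  echo "$2" | grep -Eq "LISTEN.*[:.]$1[[:space:]]"
}

# On a real node:
#   listens_on 12567 "$(ss -tnl)" && echo "haproxy frontend is up"
```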

Setting up keepalived

Keepalived is built on VRRP (Virtual Router Redundancy Protocol) and exists specifically to provide cluster high availability. The highest-priority node claims the VIP and serves traffic; when that node goes down, the VIP automatically floats to the next-highest-priority node, giving automatic failover between machines that provide the same service and removing the single point of failure.

The check script chk_haproxy.sh below works as follows: on each run it looks for the haproxy listening port, and if the result is an empty string (i.e. the port is gone) it stops keepalived on that node so the VIP can float to a healthy host.

This service must also be deployed on every master. The configuration is almost identical on each node; only the priority value needs to differ. The script again assumes a working yum repository and needs tailoring to your environment. I suggest choosing an unused address in the servers' subnet as the VIP — read it carefully!

auto_install_keepalived.sh

#!/bin/bash
echo "##### Install keepalived service #####"
# http://mirror.centos.org/centos/7/os/x86_64/Packages/keepalived-1.3.5-16.el7.x86_64.rpm
yum -y install keepalived-1.3.5-16.el7.x86_64

echo "##### Add check haproxy.service live script #####"
cat > /etc/keepalived/chk_haproxy.sh <<'EOF'
#!/bin/bash
ID=$(netstat -tunlp | grep haproxy)
if [ -z "$ID" ]; then
   systemctl stop keepalived
   sleep 3
fi
EOF

echo "##### Chmod script #####"
chmod 755 /etc/keepalived/chk_haproxy.sh

echo "##### Modify keepalived conf #####"
cat > /etc/keepalived/keepalived.conf <<EOF
global_defs {
    router_id kv90      # custom identifier for this node; must differ on every node
}

vrrp_script chk_haproxy {
    script "/etc/keepalived/chk_haproxy.sh"    # health-check script
    interval 2
    weight -20
    fail 10
    rise 2
}

vrrp_instance VI_1 {
    state BACKUP            # recommended: set BACKUP on all nodes and let priority decide the MASTER
    interface enp5s0f0      # bind to the host's physical network interface
    virtual_router_id 33    # must be identical on all nodes of the same group; customizable, range 0-255
    priority 100            # priority, i.e. the initial weight; customizable, range 1-254
    nopreempt               # strongly recommended: non-preemptive mode reduces needless failbacks; requires state BACKUP on all nodes

    authentication {
        auth_type PASS
        auth_pass 123456    # custom authentication password
    }

    virtual_ipaddress {
        192.168.1.33        # custom VIP address
    }

    track_script {
        chk_haproxy         # must match the vrrp_script name
    }
}
EOF

echo "##### Restart keepalived service #####"
systemctl restart keepalived
systemctl enable keepalived
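To see which node currently holds the VIP (and to watch it move when you stop haproxy on the MASTER), you can grep the address table. This is my own sketch; the helper name holds_vip is hypothetical:

```shell
#!/bin/bash
# holds_vip VIP IP_ADDR_OUTPUT: succeed if the output of `ip -4 addr`
# shows VIP assigned to an interface on this host.
holds_vip() {
  echo "$2" | grep -q "inet $1/"
}

# On a real node:
#   holds_vip 192.168.1.33 "$(ip -4 addr)" && echo "this node is the VRRP MASTER"
# Failover test: stop haproxy on the MASTER, wait a few seconds, and
# re-run the check on the other nodes to watch the VIP move.
```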

Setting up the etcd HA cluster

etcd is a strongly consistent, distributed key-value store. It provides a reliable way to hold data that needs to be accessed by a distributed system or a cluster of machines, handles leader elections gracefully during network partitions, and tolerates machine failure, even of the leader node. Communication between etcd members is coordinated by the Raft consensus algorithm.

I deploy etcd on the same three Kubernetes servers, so the base environment is already configured. If you deploy etcd on spare machines instead, configure the base environment there too (stop the firewall, disable selinux). What we build here is a TLS-secured, external etcd HA cluster.

The service must run on every etcd server. Running the script below on k8s-master90 scp's the certificates, the etcd binaries and so on to the other etcd nodes, then edits their etcd configuration files over ssh and starts the etcd service remotely — read it carefully!

auto_install_etcd.sh

#!/bin/bash
echo "##### Create CA certificate and private key #####"
mkdir -p /etc/ssl/etcd/ssl/
mkdir -p /var/lib/etcd/
mkdir -p /home/ssl
cd /home/ssl
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -subj "/CN=home" -days 10000 -out ca.crt

echo "##### Create etcd certificate and private key #####"
cat > /home/ssl/etcd-ca.conf <<EOF
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
req_extensions = req_ext
distinguished_name = dn

[ dn ]
C = CN
ST = Guangdong
L = Guangzhou
O = etcd
OU = home
CN = etcd

[ req_ext ]
subjectAltName = @alt_names

[ alt_names ]
DNS.1 = localhost
DNS.2 = k8s-master90
DNS.3 = k8s-master91
DNS.4 = k8s-master93
IP.1 = 127.0.0.1
IP.2 = 192.168.1.90
IP.3 = 192.168.1.91
IP.4 = 192.168.1.93

[ v3_ext ]
authorityKeyIdentifier=keyid,issuer:always
basicConstraints=CA:FALSE
keyUsage=keyEncipherment,dataEncipherment
extendedKeyUsage=serverAuth,clientAuth
subjectAltName=@alt_names
EOF

openssl genrsa -out etcd.key 2048
openssl req -new -key etcd.key -out etcd.csr -config etcd-ca.conf
openssl x509 -req -in etcd.csr -CA ca.crt -CAkey ca.key \
-CAcreateserial -out etcd.crt -days 10000 \
-extensions v3_ext -extfile etcd-ca.conf
openssl verify -CAfile ca.crt etcd.crt
cp -a ca.crt etcd.crt etcd.key /etc/ssl/etcd/ssl/

echo "##### Scp certificates to other etcd nodes #####"
scp -r /etc/ssl/etcd 192.168.1.91:/etc/ssl/
scp -r /etc/ssl/etcd 192.168.1.93:/etc/ssl/

echo "##### Install etcd v3.4.7 #####"
mkdir -p /home/etcd-pkg
cd /home/etcd-pkg
wget https://github.com/coreos/etcd/releases/download/v3.4.7/etcd-v3.4.7-linux-amd64.tar.gz
tar -xf etcd-v3.4.7-linux-amd64.tar.gz
cd ./etcd-v3.4.7-linux-amd64/
cp -a {etcd,etcdctl} /usr/local/bin/    # prerequisite: /usr/local/bin is in $PATH

echo "##### Scp etcd to other etcd nodes #####"
scp etcd etcdctl 192.168.1.91:/usr/local/bin/
scp etcd etcdctl 192.168.1.93:/usr/local/bin/

echo "##### Modify etcd cluster conf #####"
mkdir -p /etc/etcd/
cat > /etc/etcd/etcd.conf <<EOF
# [Member Flags]
# ETCD_ELECTION_TIMEOUT=1000
# ETCD_HEARTBEAT_INTERVAL=100
ETCD_NAME=k8s-master90
ETCD_DATA_DIR=/var/lib/etcd/

# [Cluster Flags]
# ETCD_AUTO_COMPACTION_RETENTION=0
ETCD_INITIAL_CLUSTER_STATE=new
ETCD_ADVERTISE_CLIENT_URLS=https://192.168.1.90:2379
ETCD_INITIAL_ADVERTISE_PEER_URLS=https://192.168.1.90:2380
ETCD_LISTEN_CLIENT_URLS=https://192.168.1.90:2379,https://127.0.0.1:2379
ETCD_INITIAL_CLUSTER_TOKEN=etcd-cluster
ETCD_LISTEN_PEER_URLS=https://192.168.1.90:2380
ETCD_INITIAL_CLUSTER=k8s-master90=https://192.168.1.90:2380,k8s-master91=https://192.168.1.91:2380,k8s-master93=https://192.168.1.93:2380

# [Proxy Flags]
ETCD_PROXY=off

# [Security flags]
# ETCD_CLIENT_CERT_AUTH=
# ETCD_PEER_CLIENT_CERT_AUTH=
ETCD_TRUSTED_CA_FILE=/etc/ssl/etcd/ssl/ca.crt
ETCD_CERT_FILE=/etc/ssl/etcd/ssl/etcd.crt
ETCD_KEY_FILE=/etc/ssl/etcd/ssl/etcd.key
ETCD_PEER_TRUSTED_CA_FILE=/etc/ssl/etcd/ssl/ca.crt
ETCD_PEER_CERT_FILE=/etc/ssl/etcd/ssl/etcd.crt
ETCD_PEER_KEY_FILE=/etc/ssl/etcd/ssl/etcd.key

# [Profiling flags]
# ETCD_METRICS={{ etcd_metrics }}
EOF

echo "##### Scp etcd conf to other etcd nodes #####"
scp -r /etc/etcd 192.168.1.91:/etc/
scp -r /etc/etcd 192.168.1.93:/etc/

echo "##### Modify etcd.conf with ssh #####"
# Only rewrite ETCD_NAME; a global hostname replace would also corrupt ETCD_INITIAL_CLUSTER
ssh 192.168.1.91 "sed -i 's/^ETCD_NAME=.*/ETCD_NAME=k8s-master91/' /etc/etcd/etcd.conf ; sed -i '10,14s/192.168.1.90/192.168.1.91/' /etc/etcd/etcd.conf"
ssh 192.168.1.93 "sed -i 's/^ETCD_NAME=.*/ETCD_NAME=k8s-master93/' /etc/etcd/etcd.conf ; sed -i '10,14s/192.168.1.90/192.168.1.93/' /etc/etcd/etcd.conf"

echo "##### Create systemd etcd.service #####"
cat > /usr/lib/systemd/system/etcd.service <<EOF
[Unit]
Description=etcd server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/usr/local/bin/etcd
NotifyAccess=all
Restart=always
RestartSec=5s
LimitNOFILE=40000

[Install]
WantedBy=multi-user.target
EOF

echo "##### Scp systemd to other etcd nodes #####"
scp /usr/lib/systemd/system/etcd.service 192.168.1.91:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/etcd.service 192.168.1.93:/usr/lib/systemd/system/

# All three etcd services must be started within a short window of each other,
# otherwise startup fails while the first member waits for its peers
echo "##### Start etcd.service #####"
systemctl daemon-reload
systemctl restart etcd
systemctl enable etcd
systemctl status etcd
ssh 192.168.1.91 "systemctl restart etcd ; systemctl enable etcd ; systemctl status etcd"
ssh 192.168.1.93 "systemctl restart etcd ; systemctl enable etcd ; systemctl status etcd"

echo "##### Check etcd cluster status #####"
etcdctl \
  --cacert=/etc/ssl/etcd/ssl/ca.crt \
  --cert=/etc/ssl/etcd/ssl/etcd.crt \
  --key=/etc/ssl/etcd/ssl/etcd.key \
  --endpoints=https://192.168.1.90:2379,https://192.168.1.91:2379,https://192.168.1.93:2379 \
  endpoint health

# Expected output: if all three nodes report healthy, the etcd cluster is working
# https://192.168.1.93:2379 is healthy: successfully committed proposal: took = 22.044394ms
# https://192.168.1.91:2379 is healthy: successfully committed proposal: took = 23.946175ms
# https://192.168.1.90:2379 is healthy: successfully committed proposal: took = 26.130848ms

# Tip: to save typing etcdctl's TLS flags every time, set a permanent alias
cat >> /root/.bashrc <<EOF
alias etcdctl='etcdctl \
  --cacert=/etc/ssl/etcd/ssl/ca.crt \
  --cert=/etc/ssl/etcd/ssl/etcd.crt \
  --key=/etc/ssl/etcd/ssl/etcd.key \
  --endpoints=https://192.168.1.90:2379,https://192.168.1.91:2379,https://192.168.1.93:2379'
EOF

# With the alias in place, print the etcd member list as a table
echo "##### List etcd member #####"
etcdctl --write-out=table member list
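Instead of eyeballing the health output, you can count the "is healthy" lines and compare against the expected member count. A sketch of my own (the helper name healthy_count is hypothetical), fed with the sample output shown above:

```shell
#!/bin/bash
# healthy_count ETCDCTL_OUTPUT: count the endpoints that
# `etcdctl ... endpoint health` reported as healthy.
healthy_count() {
  echo "$1" | grep -c "is healthy"
}

# On a real node (with the alias above in effect):
#   [ "$(healthy_count "$(etcdctl endpoint health 2>&1)")" -eq 3 ] && echo "etcd cluster OK"
```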

Initialising the cluster with kubeadm

Kubeadm is a tool that provides the kubeadm init and kubeadm join commands as best-practice fast paths for creating a Kubernetes cluster. This article uses kubeadm v1.15.1. Run the init script below on k8s-master90 only, after adjusting it to your environment — read it carefully!

Notes on the init command's parameters: --upload-certs makes kubeadm distribute the control-plane certificates automatically when further nodes join later; tee additionally writes the output log to the given file.

kubeadm-init.sh

#!/bin/bash
echo "##### Create kubeadm config #####"
cat > kubeadm-config.yaml << EOF
apiVersion: kubeadm.k8s.io/v1beta2
imageRepository: docker.io/mirrorgooglecontainers
controlPlaneEndpoint: 192.168.1.33:12567    # the VIP address and the haproxy port
kind: ClusterConfiguration
kubernetesVersion: v1.15.1          # Kubernetes version
networking:
  podSubnet: 10.244.0.0/16          # internal pod subnet; best left unchanged, since flannel's default Network is this range
  serviceSubnet: 10.10.0.0/16       # internal service subnet; customizable, but must not overlap other networks
apiServer:
  certSANs:             # ideally list every kube-apiserver hostname, node IP and the VIP
    - 192.168.1.33
    - 192.168.1.90
    - 192.168.1.91
    - 192.168.1.93
    - k8s-master90
    - k8s-master91
    - k8s-master93
    - 127.0.0.1
    - localhost
etcd:
  external:             # use the external etcd cluster
    endpoints:
    - https://192.168.1.90:2379
    - https://192.168.1.91:2379
    - https://192.168.1.93:2379
    caFile: /etc/ssl/etcd/ssl/ca.crt
    certFile: /etc/ssl/etcd/ssl/etcd.crt
    keyFile: /etc/ssl/etcd/ssl/etcd.key
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"        # switch kube-proxy's working mode; the default is iptables
EOF

echo "##### Pull docker images #####"
kubeadm config images list --config kubeadm-config.yaml
kubeadm config images pull --config kubeadm-config.yaml
docker images | grep mirrorgooglecontainers | awk '{print "docker tag ",$1":"$2,$1":"$2}' | sed -e 's#mirrorgooglecontainers#k8s.gcr.io#2' | sh -x 
docker images | grep mirrorgooglecontainers | awk '{print "docker rmi ", $1":"$2}' | sh -x 
docker pull coredns/coredns:1.3.1
docker tag coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
docker rmi coredns/coredns:1.3.1

echo "##### Init kubeadm #####"
kubeadm init --config kubeadm-config.yaml --upload-certs | tee kubeadm-init.log

執(zhí)行腳本后同窘,會輸出如下類似內(nèi)容:

......
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  # to join another master node, use this longer join command
  kubeadm join 192.168.1.33:12567 --token 3vjfpc.647mossfkxl2v6u6 \
    --discovery-token-ca-cert-hash sha256:64891f8de74bc48c969446061bd60069643de2a70732631301fc0eb8283d4cc3 \
    --control-plane --certificate-key ee78fe3d5d0666503018dccb3a0e664f3c8e3b65ba6ad1362804de63ff451737

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use 
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

# to join a worker node, use this shorter join command
kubeadm join 192.168.1.33:12567 --token 3vjfpc.647mossfkxl2v6u6 \
    --discovery-token-ca-cert-hash sha256:64891f8de74bc48c969446061bd60069643de2a70732631301fc0eb8283d4cc3

最后根據(jù)提示,在當(dāng)前節(jié)點(diǎn)上配置kubectl

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Verify: if node information is printed, the setup succeeded.

kubectl get nodes

The goal of this article is an HA cluster of three master nodes, so the longer join command is run on the 91 and 93 servers. By default every master carries a NoSchedule taint, so also run kubectl taint nodes --all node-role.kubernetes.io/master- to allow pods to be scheduled onto the masters. Then kubectl get nodes shows:

NAME           STATUS      ROLES    AGE   VERSION
k8s-master90   NotReady    master   30h   v1.15.1
k8s-master91   NotReady    master   30h   v1.15.1
k8s-master93   NotReady    master   30h   v1.15.1

The nodes are NotReady because cross-host container networking is not yet in place; this article uses the flannel network plugin to open that path.
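If you want to script the readiness check rather than read the table by eye, counting the nodes whose STATUS column is exactly Ready works (NotReady must not be counted). A sketch of my own; the helper name ready_nodes is hypothetical:

```shell
#!/bin/bash
# ready_nodes KUBECTL_OUTPUT: count nodes whose STATUS column is exactly
# "Ready" in `kubectl get nodes` output; "NotReady" is not counted.
ready_nodes() {
  echo "$1" | awk 'NR > 1 && $2 == "Ready" { n++ } END { print n + 0 }'
}

# On a real cluster:
#   ready_nodes "$(kubectl get nodes)"    # expect 3 once the network plugin is applied
```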


The flannel network plugin

flannel is the network plugin Kubernetes is most commonly paired with by default. It gives the Docker containers on every node mutually non-conflicting IP addresses and builds an overlay network between those addresses, over which packets are delivered unmodified to the target container.

As the figure below shows, flannel creates a flannel0 device on every node: one end connects to the docker0 bridge, the other to the flanneld agent process. flanneld in turn talks to etcd, using it to manage the pool of assignable IP ranges and to track the actual address of every pod — which is also why flannel never hands out conflicting IP addresses.

To fetch flannel's yml file you may first need to look up the IP behind the domain and add a local hosts mapping (you know why)!

flannel.png

The figure is from https://www.kubernetes.org.cn/4105.html

auto_install_flannel.sh

#!/bin/bash
# Append (>>) rather than overwrite, or the host entries added earlier are lost
cat >> /etc/hosts <<EOF
151.101.108.133 raw.githubusercontent.com
EOF

echo "##### Install pod network (Flannel) #####"
curl -O https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml
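To verify the rollout without scanning the pod table by hand, you can filter out every pod that is not yet Running. This is my own sketch (the helper name not_running is hypothetical); empty output means everything is up:

```shell
#!/bin/bash
# not_running PODS_OUTPUT: print namespace/name for every pod in
# `kubectl get pods -A` output whose STATUS column (4th field) is
# neither Running nor Completed.
not_running() {
  echo "$1" | awk 'NR > 1 && $4 != "Running" && $4 != "Completed" { print $1 "/" $2 }'
}

# On a real cluster:
#   not_running "$(kubectl get pods -A)"
```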

After applying flannel's yml file, run kubectl get pods -A to check that the corresponding pods exist and are in Running state; the result looks like this:

image

In the output above all pods, including the flannel ones, are Running, and each core component has three pods — exactly the target. Note there are no etcd pods, because etcd is deployed externally; its health can still be seen with kubectl get cs, as below:

image

Problems encountered

With the cluster already running, I realised the pod/service subnets in kubeadm-config.yaml were wrong. Suppose I change serviceSubnet to 100.100.0.0/16 and want to re-initialise: I ran kubeadm reset and friends to tear the cluster down, deleted $HOME/.kube/config, then ran kubeadm init again — only to hit an error like this:

image

然后我強(qiáng)制忽略這個報錯伍伤,最后所有初始化操作擼完一遍并徘,查看k8s集群的services,結(jié)果clusterip還是之前的那個10.10.0.0/16的網(wǎng)段IP扰魂,感覺就是連集群都沒清理干凈麦乞,接著就是后面各種討教大牛和百度蕴茴,最終找到的問題出處是在etcd上,因為相關(guān)組件都是對接etcd這個數(shù)據(jù)庫的姐直,而我們重置清理kubeadm其實并沒有清理etcd上的數(shù)據(jù)倦淀,最后的解決方法就是先刪除etcd上的舊數(shù)據(jù)(生產(chǎn)環(huán)境謹(jǐn)慎),然后再執(zhí)行初始化操作即可声畏!

# list all keys in etcd
etcdctl get --keys-only=true --prefix /

# delete ALL data in etcd (the equivalent of rm -rf /)
etcdctl del / --prefix
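Before running that destructive delete, it is prudent to take a snapshot you can restore from. The commands below are a sketch under my own assumptions (the backup directory and the snap_path helper are hypothetical); `etcdctl snapshot save` is available in etcd v3.4:

```shell
#!/bin/bash
# snap_path DIR STAMP: build a timestamped snapshot file name.
snap_path() {
  echo "$1/etcd-snapshot-$2.db"
}

# On a real node (pass the TLS flags explicitly if the alias is not in effect):
#   mkdir -p /var/backups
#   f=$(snap_path /var/backups "$(date +%Y%m%d-%H%M%S)")
#   etcdctl --cacert=/etc/ssl/etcd/ssl/ca.crt \
#           --cert=/etc/ssl/etcd/ssl/etcd.crt \
#           --key=/etc/ssl/etcd/ssl/etcd.key \
#           --endpoints=https://192.168.1.90:2379 \
#           snapshot save "$f"
```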