Setting Up a Highly Available k8s Cluster - the Binary Way

Preparing the Environment

Servers

I am using five CentOS 7.7 virtual machines here; the details are listed in the table below:

OS Version  IP Address  Role  CPU  Memory  Hostname
CentOS-7.7 192.168.243.143 master >=2 >=2G m1
CentOS-7.7 192.168.243.144 master >=2 >=2G m2
CentOS-7.7 192.168.243.145 master >=2 >=2G m3
CentOS-7.7 192.168.243.146 worker >=2 >=2G n1
CentOS-7.7 192.168.243.147 worker >=2 >=2G n2

Docker must already be installed on all five machines. The installation is straightforward and is not covered here; refer to the official Docker documentation.

Software versions:

  • k8s:1.19.0
  • etcd:3.4.13
  • coredns:1.7.0
  • pause:3.2
  • calico:3.16.0
  • cfssl:1.2.0
  • kubernetes dashboard:2.0.3

The following explains the IP, port, and other network-related settings used while building the k8s cluster; they will not be explained again later:

# IPs of the 3 master nodes
192.168.243.143
192.168.243.144
192.168.243.145

# IPs of the 2 worker nodes
192.168.243.146
192.168.243.147

# Hostnames of the 3 master nodes
m1, m2, m3

# Highly available virtual IP for the api-server (used by keepalived, customizable)
192.168.243.101

# Network interface used by keepalived, usually eth0; check with the ip a command
ens32

# Kubernetes service IP range (customizable)
10.255.0.0/16

# IP of the kubernetes api-server service, usually the first address of the service CIDR (customizable)
10.255.0.1

# IP of the DNS service, usually the second address of the service CIDR (customizable)
10.255.0.2

# Pod network CIDR (customizable)
172.23.0.0/16

# NodePort range (customizable)
8400-8900
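
To keep these values consistent across the many commands that follow, one option is to collect them in a small environment file and source it wherever convenient. This is purely a convenience sketch with variable names of my own choosing; nothing later in the article depends on it:

$ cat > ~/k8s-env.sh <<EOF
# master / worker node IPs
MASTER_IPS="192.168.243.143 192.168.243.144 192.168.243.145"
WORKER_IPS="192.168.243.146 192.168.243.147"
# api-server virtual IP and the interface keepalived uses
APISERVER_VIP=192.168.243.101
VIP_INTERFACE=ens32
# service / pod CIDRs, well-known service IPs and the NodePort range
SERVICE_CIDR=10.255.0.0/16
APISERVER_SVC_IP=10.255.0.1
DNS_SVC_IP=10.255.0.2
POD_CIDR=172.23.0.0/16
NODEPORT_RANGE=8400-8900
EOF
$ source ~/k8s-env.sh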

System Settings (all nodes)

1. Every node must have a unique hostname, and all nodes must be able to reach each other by hostname. Set the hostname:

# Check the current hostname
$ hostname
# Change the hostname
$ hostnamectl set-hostname <your_hostname>

Configure /etc/hosts so that all nodes can reach each other by hostname:

$ vim /etc/hosts
192.168.243.143 m1
192.168.243.144 m2
192.168.243.145 m3
192.168.243.146 n1
192.168.243.147 n2

2. Install dependency packages:

# Update yum
$ yum update -y

# Install dependencies
$ yum install -y conntrack ipvsadm ipset jq sysstat curl wget iptables libseccomp

3. Disable the firewall and swap, and reset iptables:

# Disable the firewall
$ systemctl stop firewalld && systemctl disable firewalld

# Reset iptables
$ iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && iptables -P FORWARD ACCEPT

# Disable swap
$ swapoff -a
$ sed -i '/swap/s/^\(.*\)$/#\1/g' /etc/fstab

# Disable selinux
$ setenforce 0

# Stop dnsmasq (otherwise docker containers may fail to resolve domain names)
$ service dnsmasq stop && systemctl disable dnsmasq

# Restart the docker service
$ systemctl restart docker

4. Configure kernel parameters:

# Create the configuration file
$ cat > /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
EOF

# Apply the configuration
$ sysctl -p /etc/sysctl.d/kubernetes.conf
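
Note that the two net.bridge.bridge-nf-call-* keys only exist when the br_netfilter kernel module is loaded. If sysctl -p complains that they cannot be found, loading the module first (and making it persistent across reboots) usually resolves it; a minimal sketch:

$ modprobe br_netfilter
$ echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf
$ sysctl -p /etc/sysctl.d/kubernetes.conf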

Prepare the Binary Files (all nodes)

Configure passwordless SSH login

Since the binary installation method requires every node to have the k8s component binaries, we have to copy the prepared binaries to each node. To make copying easier, we can pick one staging node (any node) and set up passwordless login from it to all other nodes, so we don't have to type passwords repeatedly while copying.

I choose m1 as the staging node here. First, generate a key pair on the m1 node:

[root@m1 ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:9CVdxUGLSaZHMwzbOs+aF/ibxNpsUaaY4LVJtC3DJiU root@m1
The key's randomart image is:
+---[RSA 2048]----+
|           .o*o=o|
|         E +Bo= o|
|        . *o== . |
|       . + @o. o |
|        S BoO +  |
|         . *=+   |
|            .=o  |
|            B+.  |
|           +o=.  |
+----[SHA256]-----+
[root@m1 ~]# 

View the public key:

[root@m1 ~]# cat ~/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDF99/mk7syG+OjK5gFFKLZDpMWcF3BEF1Gaa8d8xNIMKt2qGgxyYOC7EiGcxanKw10MQCoNbiAG1UTd0/wgp/UcPizvJ5AKdTFImzXwRdXVbMYkjgY2vMYzpe8JZ5JHODggQuGEtSE9Q/RoCf29W2fIoOKTKaC2DNyiKPZZ+zLjzQr8sJC3BRb1Tk4p8cEnTnMgoFwMTZD8AYMNHwhBeo5NXZSE8zyJiWCqQQkD8n31wQxVgSL9m3rD/1wnsBERuq3cf7LQMiBTxmt1EyqzqM4S1I2WEfJkT0nJZeY+zbHqSJq2LbXmCmWUg5LmyxaE9Ksx4LDIl7gtVXe99+E1NLd root@m1
[root@m1 ~]# 

Then copy the content of id_rsa.pub into the authorized keys file on the other machines by running the following command on each of the other nodes (replace the public key with the one you generated):

$ echo "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDF99/mk7syG+OjK5gFFKLZDpMWcF3BEF1Gaa8d8xNIMKt2qGgxyYOC7EiGcxanKw10MQCoNbiAG1UTd0/wgp/UcPizvJ5AKdTFImzXwRdXVbMYkjgY2vMYzpe8JZ5JHODggQuGEtSE9Q/RoCf29W2fIoOKTKaC2DNyiKPZZ+zLjzQr8sJC3BRb1Tk4p8cEnTnMgoFwMTZD8AYMNHwhBeo5NXZSE8zyJiWCqQQkD8n31wQxVgSL9m3rD/1wnsBERuq3cf7LQMiBTxmt1EyqzqM4S1I2WEfJkT0nJZeY+zbHqSJq2LbXmCmWUg5LmyxaE9Ksx4LDIl7gtVXe99+E1NLd root@m1" >> ~/.ssh/authorized_keys
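
If ssh-copy-id is available, the same result can be achieved without pasting the key by hand; run it once from m1 and enter each node's password when prompted (just an equivalent shortcut):

[root@m1 ~]# for i in m2 m3 n1 n2; do ssh-copy-id root@$i; done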

Test passwordless login; as you can see, logging in to the m2 node no longer requires a password:

[root@m1 ~]# ssh m2
Last login: Fri Sep  4 15:55:59 2020 from m1
[root@m2 ~]# 

Download the binary files

Download Kubernetes

First we download the k8s binaries from the official release page.

I am downloading version 1.19.0 here; note that the download links are listed in CHANGELOG/CHANGELOG-1.19.md.

You only need to download the package for your platform from the "Server Binaries" section, because the Server tarball already contains the Node and Client binaries.


Copy the download link, then download and extract it on the system:

[root@m1 ~]# cd /usr/local/src
[root@m1 /usr/local/src]# wget https://dl.k8s.io/v1.19.0/kubernetes-server-linux-amd64.tar.gz  # download
[root@m1 /usr/local/src]# tar -zxvf kubernetes-server-linux-amd64.tar.gz  # extract

The k8s binaries are all located in the kubernetes/server/bin/ directory:

[root@m1 /usr/local/src]# ls kubernetes/server/bin/
apiextensions-apiserver  kube-apiserver             kube-controller-manager             kubectl     kube-proxy.docker_tag  kube-scheduler.docker_tag
kubeadm                  kube-apiserver.docker_tag  kube-controller-manager.docker_tag  kubelet     kube-proxy.tar         kube-scheduler.tar
kube-aggregator          kube-apiserver.tar         kube-controller-manager.tar         kube-proxy  kube-scheduler         mounter
[root@m1 /usr/local/src]# 

To make copying files easier later on, let's organize them so that the binaries needed by each kind of node sit in their own directory. The steps are as follows:

[root@m1 /usr/local/src]# mkdir -p k8s-master k8s-worker
[root@m1 /usr/local/src]# cd kubernetes/server/bin/
[root@m1 /usr/local/src/kubernetes/server/bin]# for i in kubeadm kube-apiserver kube-controller-manager kubectl kube-scheduler;do cp $i /usr/local/src/k8s-master/; done
[root@m1 /usr/local/src/kubernetes/server/bin]# for i in kubelet kube-proxy;do cp $i /usr/local/src/k8s-worker/; done
[root@m1 /usr/local/src/kubernetes/server/bin]# 

After sorting, the files are placed in the corresponding directories: k8s-master holds the binaries needed by master nodes, and k8s-worker holds those needed by worker nodes:

[root@m1 /usr/local/src/kubernetes/server/bin]# cd /usr/local/src
[root@m1 /usr/local/src]# ls k8s-master/
kubeadm  kube-apiserver  kube-controller-manager  kubectl  kube-scheduler
[root@m1 /usr/local/src]# ls k8s-worker/
kubelet  kube-proxy
[root@m1 /usr/local/src]# 

Download etcd

k8s relies on etcd for distributed storage, so next we also need to download etcd from its official release page.

I am downloading version 3.4.13 here.

Likewise, copy the download link to the system, download it with wget, and extract it:

[root@m1 /usr/local/src]# wget https://github.com/etcd-io/etcd/releases/download/v3.4.13/etcd-v3.4.13-linux-amd64.tar.gz
[root@m1 /usr/local/src]# mkdir etcd && tar -zxvf etcd-v3.4.13-linux-amd64.tar.gz -C etcd --strip-components 1
[root@m1 /usr/local/src]# ls etcd
Documentation  etcd  etcdctl  README-etcdctl.md  README.md  READMEv2-etcdctl.md
[root@m1 /usr/local/src]# 

Copy the etcd binaries into the k8s-master directory:

[root@m1 /usr/local/src]# cd etcd
[root@m1 /usr/local/src/etcd]# for i in etcd etcdctl;do cp $i /usr/local/src/k8s-master/; done
[root@m1 /usr/local/src/etcd]# ls ../k8s-master/
etcd  etcdctl  kubeadm  kube-apiserver  kube-controller-manager  kubectl  kube-scheduler
[root@m1 /usr/local/src/etcd]# 

Distribute the files and set up PATH

Create the /opt/kubernetes/bin directory on all nodes:

$ mkdir -p /opt/kubernetes/bin

Distribute the binaries to the corresponding nodes:

[root@m1 /usr/local/src]# for i in m1 m2 m3; do scp k8s-master/* $i:/opt/kubernetes/bin/; done
[root@m1 /usr/local/src]# for i in n1 n2; do scp k8s-worker/* $i:/opt/kubernetes/bin/; done

Set the PATH environment variable on every node (note the escaped \$PATH so that it is expanded on the target node at login time rather than on m1):

[root@m1 /usr/local/src]# for i in m1 m2 m3 n1 n2; do ssh $i "echo 'PATH=/opt/kubernetes/bin:\$PATH' >> ~/.bashrc"; done

Highly Available Cluster Deployment

Generate the CA Certificate (any node)

Install cfssl

cfssl is a very handy CA tool; we use it to generate certificates and key files. Installation is simple, and I install it on the m1 node here. First, download the cfssl binaries:

[root@m1 ~]# mkdir -p ~/bin
[root@m1 ~]# wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -O ~/bin/cfssl
[root@m1 ~]# wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -O ~/bin/cfssljson

Make both files executable:

[root@m1 ~]# chmod +x ~/bin/cfssl ~/bin/cfssljson

Set the PATH environment variable:

[root@m1 ~]# vim ~/.bashrc
PATH=~/bin:$PATH
[root@m1 ~]# source ~/.bashrc

Verify that it runs correctly:

[root@m1 ~]# cfssl version
Version: 1.2.0
Revision: dev
Runtime: go1.6
[root@m1 ~]# 

Generate the root certificate

The root certificate is shared by every node in the cluster, so only one CA certificate needs to be created; every certificate created afterwards is signed by it. First create a ca-csr.json file with the following content:

{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "seven"
    }
  ]
}

Run the following command to generate the certificate and private key:

[root@m1 ~]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca

After it completes you will have the following files (what we ultimately need are ca-key.pem and ca.pem: a private key and a certificate):

[root@m1 ~]# ls *.pem
ca-key.pem  ca.pem
[root@m1 ~]# 

Distribute these two files to every master node:

[root@m1 ~]# for i in m1 m2 m3; do ssh $i "mkdir -p /etc/kubernetes/pki/"; done
[root@m1 ~]# for i in m1 m2 m3; do scp *.pem $i:/etc/kubernetes/pki/; done
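
If you want to sanity-check the CA before moving on, openssl can print its subject and validity period (an optional check):

[root@m1 ~]# openssl x509 -in ca.pem -noout -subject -dates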

Deploy the etcd Cluster (master nodes)

Generate certificate and private key

Next we need to generate the certificate and private key used by the etcd nodes. Create a ca-config.json file with the following content:

{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}

Then create an etcd-csr.json file with the following content:

{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.243.143",
    "192.168.243.144",
    "192.168.243.145"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "seven"
    }
  ]
}
  • The IPs in hosts are the IPs of the master nodes

With these two files in place, generate etcd's certificate and private key with the following command:

[root@m1 ~]# cfssl gencert -ca=ca.pem \
    -ca-key=ca-key.pem \
    -config=ca-config.json \
    -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
[root@m1 ~]# ls etcd*.pem  # two files are generated on success
etcd-key.pem  etcd.pem
[root@m1 ~]# 

Then distribute these two files to every etcd node:

[root@m1 ~]# for i in m1 m2 m3; do scp etcd*.pem $i:/etc/kubernetes/pki/; done

Create the service file

Create an etcd.service file so that the etcd service can later be started, stopped, and restarted with systemctl. Its content is as follows:

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/opt/kubernetes/bin/etcd \
  --data-dir=/var/lib/etcd \
  --name=m1 \
  --cert-file=/etc/kubernetes/pki/etcd.pem \
  --key-file=/etc/kubernetes/pki/etcd-key.pem \
  --trusted-ca-file=/etc/kubernetes/pki/ca.pem \
  --peer-cert-file=/etc/kubernetes/pki/etcd.pem \
  --peer-key-file=/etc/kubernetes/pki/etcd-key.pem \
  --peer-trusted-ca-file=/etc/kubernetes/pki/ca.pem \
  --peer-client-cert-auth \
  --client-cert-auth \
  --listen-peer-urls=https://192.168.243.143:2380 \
  --initial-advertise-peer-urls=https://192.168.243.143:2380 \
  --listen-client-urls=https://192.168.243.143:2379,http://127.0.0.1:2379 \
  --advertise-client-urls=https://192.168.243.143:2379 \
  --initial-cluster-token=etcd-cluster-0 \
  --initial-cluster=m1=https://192.168.243.143:2380,m2=https://192.168.243.144:2380,m3=https://192.168.243.145:2380 \
  --initial-cluster-state=new
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Distribute this configuration file to every master node:

[root@m1 ~]# for i in m1 m2 m3; do scp etcd.service $i:/etc/systemd/system/; done

After distribution, the etcd.service file has to be modified on every master node except m1; the main items to change are listed below (a sed sketch for doing this follows the list):

# change to the hostname of the node
--name=m1

# for the following items, change the IP to the node's own IP; the localhost IP does not need to change
--listen-peer-urls=https://192.168.243.143:2380 
--initial-advertise-peer-urls=https://192.168.243.143:2380 
--listen-client-urls=https://192.168.243.143:2379,http://127.0.0.1:2379 
--advertise-client-urls=https://192.168.243.143:2379 
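
If you prefer not to edit the files by hand, a sed sketch run from m1 can patch the name and the *-urls lines on the other two masters (only lines containing -urls= are rewritten, so the --initial-cluster line stays intact); this assumes the hostnames and IPs from the table at the top:

[root@m1 ~]# ssh m2 "sed -i -e 's/--name=m1/--name=m2/' -e '/-urls=/s/192.168.243.143/192.168.243.144/g' /etc/systemd/system/etcd.service"
[root@m1 ~]# ssh m3 "sed -i -e 's/--name=m1/--name=m3/' -e '/-urls=/s/192.168.243.143/192.168.243.145/g' /etc/systemd/system/etcd.service"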

Next, create etcd's working directory on every master node:

[root@m1 ~]# for i in m1 m2 m3; do ssh $i "mkdir -p /var/lib/etcd"; done

Start the service

Run the following command on each etcd node to start the etcd service:

$ systemctl daemon-reload && systemctl enable etcd && systemctl restart etcd
  • Tips: when the etcd process starts for the first time it waits for the other etcd members to join the cluster, so systemctl start etcd may appear to hang for a while; this is normal.

Check the service status; active (running) means it started successfully:

$ systemctl status etcd

If it did not start successfully, check the logs to troubleshoot:

$ journalctl -f -u etcd
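
Once all three members are up, cluster health can also be verified with etcdctl, reusing the certificates generated earlier (a quick optional check using standard etcdctl v3 flags):

$ ETCDCTL_API=3 /opt/kubernetes/bin/etcdctl \
    --endpoints=https://192.168.243.143:2379,https://192.168.243.144:2379,https://192.168.243.145:2379 \
    --cacert=/etc/kubernetes/pki/ca.pem \
    --cert=/etc/kubernetes/pki/etcd.pem \
    --key=/etc/kubernetes/pki/etcd-key.pem \
    endpoint health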

Deploy the api-server (master nodes)

Generate certificate and private key

The first step is the same as before: generate the api-server's certificate and private key. Create a kubernetes-csr.json file with the following content:

{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.243.143",
    "192.168.243.144",
    "192.168.243.145",
    "192.168.243.101",
    "10.255.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "seven"
    }
  ]
}

Generate the certificate and private key:

[root@m1 ~]# cfssl gencert -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
[root@m1 ~]# ls kubernetes*.pem
kubernetes-key.pem  kubernetes.pem
[root@m1 ~]# 

Distribute them to every master node:

[root@m1 ~]# for i in m1 m2 m3; do scp kubernetes*.pem $i:/etc/kubernetes/pki/; done

Create the service file

Create a kube-apiserver.service file with the following content:

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
ExecStart=/opt/kubernetes/bin/kube-apiserver \
  --enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
  --anonymous-auth=false \
  --advertise-address=192.168.243.143 \
  --bind-address=0.0.0.0 \
  --insecure-port=0 \
  --authorization-mode=Node,RBAC \
  --runtime-config=api/all=true \
  --enable-bootstrap-token-auth \
  --service-cluster-ip-range=10.255.0.0/16 \
  --service-node-port-range=8400-8900 \
  --tls-cert-file=/etc/kubernetes/pki/kubernetes.pem \
  --tls-private-key-file=/etc/kubernetes/pki/kubernetes-key.pem \
  --client-ca-file=/etc/kubernetes/pki/ca.pem \
  --kubelet-client-certificate=/etc/kubernetes/pki/kubernetes.pem \
  --kubelet-client-key=/etc/kubernetes/pki/kubernetes-key.pem \
  --service-account-key-file=/etc/kubernetes/pki/ca-key.pem \
  --etcd-cafile=/etc/kubernetes/pki/ca.pem \
  --etcd-certfile=/etc/kubernetes/pki/kubernetes.pem \
  --etcd-keyfile=/etc/kubernetes/pki/kubernetes-key.pem \
  --etcd-servers=https://192.168.243.143:2379,https://192.168.243.144:2379,https://192.168.243.145:2379 \
  --enable-swagger-ui=true \
  --allow-privileged=true \
  --apiserver-count=3 \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/var/log/kube-apiserver-audit.log \
  --event-ttl=1h \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Distribute this configuration file to every master node:

[root@m1 ~]# for i in m1 m2 m3; do scp kube-apiserver.service $i:/etc/systemd/system/; done

After distribution, the kube-apiserver.service file has to be modified on every master node except m1. Only one item needs to change:

# change to the IP of the node
--advertise-address=192.168.243.143

Then create the api-server log directory on all master nodes:

[root@m1 ~]# for i in m1 m2 m3; do ssh $i "mkdir -p /var/log/kubernetes"; done

Start the service

Run the following command on each master node to start the api-server service:

$ systemctl daemon-reload && systemctl enable kube-apiserver && systemctl restart kube-apiserver

Check the service status; active (running) means it started successfully:

$ systemctl status kube-apiserver

Check that port 6443 is being listened on:

[root@m1 ~]# netstat -lntp |grep 6443
tcp6       0      0 :::6443                 :::*                    LISTEN      24035/kube-apiserve 
[root@m1 ~]# 

If it did not start successfully, check the logs to troubleshoot:

$ journalctl -f -u kube-apiserver

Deploy keepalived to Make the api-server Highly Available (master nodes)

Install keepalived

keepalived only needs to be installed on two of the master nodes (one master, one backup); I install it on m1 and m2 here:

$ yum install -y keepalived

Create the keepalived configuration files

m1m2節(jié)點上創(chuàng)建一個目錄用于存放keepalived的配置文件:

[root@m1 ~]# for i in m1 m2; do ssh $i "mkdir -p /etc/keepalived"; done

m1(角色為master)上創(chuàng)建配置文件如下:

[root@m1 ~]# mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.back
[root@m1 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
 router_id keepalive-master
}

vrrp_script check_apiserver {
 # path to the health-check script
 script "/etc/keepalived/check-apiserver.sh"
 # check interval in seconds
 interval 3
 # lower the priority by 2 on failure
 weight -2
}

vrrp_instance VI-kube-master {
   state MASTER    # node role
   interface ens32  # network interface name
   virtual_router_id 68
   priority 100
   dont_track_primary
   advert_int 3
   virtual_ipaddress {
     # customizable virtual IP
     192.168.243.101
   }
   track_script {
       check_apiserver
   }
}

m2(角色為backup)上創(chuàng)建配置文件如下:

[root@m2 ~]# mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.back
[root@m2 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
 router_id keepalive-backup
}

vrrp_script check_apiserver {
 script "/etc/keepalived/check-apiserver.sh"
 interval 3
 weight -2
}

vrrp_instance VI-kube-master {
   state BACKUP
   interface ens32
   virtual_router_id 68
   priority 99
   dont_track_primary
   advert_int 3
   virtual_ipaddress {
     192.168.243.101
   }
   track_script {
       check_apiserver
   }
}

Create the keepalived health-check script on both m1 and m2:

$ vim /etc/keepalived/check-apiserver.sh  # create the health-check script with the following content
#!/bin/sh

errorExit() {
   echo "*** $*" 1>&2
   exit 1
}

# check whether the local api-server is healthy
curl --silent --max-time 2 --insecure https://localhost:6443/ -o /dev/null || errorExit "Error GET https://localhost:6443/"
# if the virtual IP is bound to this machine, also check that the api-server can be reached through the virtual IP
if ip addr | grep -q 192.168.243.101; then
   curl --silent --max-time 2 --insecure https://192.168.243.101:6443/ -o /dev/null || errorExit "Error GET https://192.168.243.101:6443/"
fi
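
Depending on how keepalived invokes the script, it may need to be executable, so it does not hurt to make it so on both nodes:

[root@m1 ~]# for i in m1 m2; do ssh $i "chmod +x /etc/keepalived/check-apiserver.sh"; done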

Start keepalived

Start the keepalived service on both the master and the backup:

$ systemctl enable keepalived && service keepalived start

Check the service status; active (running) means it started successfully:

$ systemctl status keepalived

Check whether the virtual IP has been bound:

$ ip a |grep 192.168.243.101

Test access; getting a response back means the service is running (the 401 Unauthorized body is expected because the request is not authenticated):

[root@m1 ~]# curl --insecure https://192.168.243.101:6443/
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
    
  },
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}
[root@m1 ~]#

If it did not start successfully, check the logs to troubleshoot:

$ journalctl -f -u keepalived

Deploy kubectl (any node)

kubectl is the command-line management tool for a kubernetes cluster; by default it reads the kube-apiserver address, certificates, user name, and other information from the ~/.kube/config file.

Create the admin certificate and private key

kubectl talks to the apiserver over the HTTPS secure port, and the apiserver authenticates and authorizes the presented certificate. As the cluster management tool, kubectl needs the highest privileges, so here we create an admin certificate with full privileges. First create an admin-csr.json file with the following content:

{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "seven"
    }
  ]
}

Create the certificate and private key with cfssl:

[root@m1 ~]# cfssl gencert -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes admin-csr.json | cfssljson -bare admin
[root@m1 ~]# ls admin*.pem
admin-key.pem  admin.pem
[root@m1 ~]#   

Create the kubeconfig file

kubeconfig is kubectl's configuration file; it contains everything needed to access the apiserver, such as the apiserver address, the CA certificate, and the client certificate it uses.

1. Set the cluster parameters:

[root@m1 ~]# kubectl config set-cluster kubernetes \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://192.168.243.101:6443 \
  --kubeconfig=kube.config

2. Set the client authentication parameters:

[root@m1 ~]# kubectl config set-credentials admin \
  --client-certificate=admin.pem \
  --client-key=admin-key.pem \
  --embed-certs=true \
  --kubeconfig=kube.config

3. Set the context parameters:

[root@m1 ~]# kubectl config set-context kubernetes \
  --cluster=kubernetes \
  --user=admin \
  --kubeconfig=kube.config

4. Set the default context:

[root@m1 ~]# kubectl config use-context kubernetes --kubeconfig=kube.config

5. Copy the config file to ~/.kube/config:

[root@m1 ~]# mkdir -p ~/.kube && cp kube.config ~/.kube/config

Grant the kubernetes certificate access to the kubelet API

When running commands such as kubectl exec, run, and logs, the apiserver forwards the request to the kubelet. Here we define an RBAC rule that authorizes the apiserver to call the kubelet API.

[root@m1 ~]# kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes
clusterrolebinding.rbac.authorization.k8s.io/kube-apiserver:kubelet-apis created
[root@m1 ~]# 

Test kubectl

1. View cluster information:

[root@m1 ~]# kubectl cluster-info
Kubernetes master is running at https://192.168.243.101:6443

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[root@m1 ~]# 

2. View resources in all namespaces of the cluster:

[root@m1 ~]# kubectl get all --all-namespaces
NAMESPACE   NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
default     service/kubernetes   ClusterIP   10.255.0.1   <none>        443/TCP   43m
[root@m1 ~]# 

3. View the status of the cluster components:

[root@m1 ~]# kubectl get componentstatuses
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS      MESSAGE                                                                                       ERROR
scheduler            Unhealthy   Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused   
controller-manager   Unhealthy   Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused   
etcd-0               Healthy     {"health":"true"}                                                                             
etcd-1               Healthy     {"health":"true"}                                                                             
etcd-2               Healthy     {"health":"true"}                                                                             
[root@m1 ~]# 

Configure kubectl command completion

kubectl is the command-line tool for interacting with a k8s cluster, and operating k8s is almost impossible without it, so it supports quite a lot of commands. Fortunately kubectl supports shell completion; kubectl completion -h shows setup examples for each platform. Taking Linux as an example, after completing the steps below you can complete commands with the Tab key:

[root@m1 ~]# yum install bash-completion -y
[root@m1 ~]# source /usr/share/bash-completion/bash_completion
[root@m1 ~]# source <(kubectl completion bash)
[root@m1 ~]# kubectl completion bash > ~/.kube/completion.bash.inc
[root@m1 ~]# printf "  
# Kubectl shell completion  
source '$HOME/.kube/completion.bash.inc'  
" >> $HOME/.bash_profile
[root@m1 ~]# source $HOME/.bash_profile

Deploy the controller-manager (master nodes)

After startup, the controller-manager instances elect a leader node through a competitive election; the other nodes stay blocked. When the leader becomes unavailable, the remaining nodes hold another election to produce a new leader, which keeps the service available.

Create certificate and private key

Create a controller-manager-csr.json file with the following content:

{
    "CN": "system:kube-controller-manager",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "hosts": [
      "127.0.0.1",
      "192.168.243.143",
      "192.168.243.144",
      "192.168.243.145"
    ],
    "names": [
      {
        "C": "CN",
        "ST": "BeiJing",
        "L": "BeiJing",
        "O": "system:kube-controller-manager",
        "OU": "seven"
      }
    ]
}

Generate the certificate and private key:

[root@m1 ~]# cfssl gencert -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes controller-manager-csr.json | cfssljson -bare controller-manager
[root@m1 ~]# ls controller-manager*.pem
controller-manager-key.pem  controller-manager.pem
[root@m1 ~]# 

Distribute them to every master node:

[root@m1 ~]# for i in m1 m2 m3; do scp controller-manager*.pem $i:/etc/kubernetes/pki/; done

Create the controller-manager kubeconfig

Create the kubeconfig:

# set the cluster parameters
[root@m1 ~]# kubectl config set-cluster kubernetes \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://192.168.243.101:6443 \
  --kubeconfig=controller-manager.kubeconfig

# set the client authentication parameters
[root@m1 ~]# kubectl config set-credentials system:kube-controller-manager \
  --client-certificate=controller-manager.pem \
  --client-key=controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=controller-manager.kubeconfig

# set the context parameters
[root@m1 ~]# kubectl config set-context system:kube-controller-manager \
  --cluster=kubernetes \
  --user=system:kube-controller-manager \
  --kubeconfig=controller-manager.kubeconfig

Set the default context:

[root@m1 ~]# kubectl config use-context system:kube-controller-manager --kubeconfig=controller-manager.kubeconfig

Distribute the controller-manager.kubeconfig file to every master node:

[root@m1 ~]# for i in m1 m2 m3; do scp controller-manager.kubeconfig $i:/etc/kubernetes/; done

Create the service file

Create a kube-controller-manager.service file with the following content:

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/opt/kubernetes/bin/kube-controller-manager \
  --port=0 \
  --secure-port=10252 \
  --bind-address=127.0.0.1 \
  --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \
  --service-cluster-ip-range=10.255.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \
  --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \
  --allocate-node-cidrs=true \
  --cluster-cidr=172.23.0.0/16 \
  --experimental-cluster-signing-duration=87600h \
  --root-ca-file=/etc/kubernetes/pki/ca.pem \
  --service-account-private-key-file=/etc/kubernetes/pki/ca-key.pem \
  --leader-elect=true \
  --feature-gates=RotateKubeletServerCertificate=true \
  --controllers=*,bootstrapsigner,tokencleaner \
  --horizontal-pod-autoscaler-use-rest-clients=true \
  --horizontal-pod-autoscaler-sync-period=10s \
  --tls-cert-file=/etc/kubernetes/pki/controller-manager.pem \
  --tls-private-key-file=/etc/kubernetes/pki/controller-manager-key.pem \
  --use-service-account-credentials=true \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

kube-controller-manager.service配置文件分發(fā)到每個master節(jié)點上:

[root@m1 ~]# for i in m1 m2 m3; do scp kube-controller-manager.service $i:/etc/systemd/system/; done

Start the service

Start the kube-controller-manager service on each master node with the following command:

$ systemctl daemon-reload && systemctl enable kube-controller-manager && systemctl restart kube-controller-manager

Check the service status; active (running) means it started successfully:

$ systemctl status kube-controller-manager

View the leader information:

[root@m1 ~]# kubectl get endpoints kube-controller-manager --namespace=kube-system -o yaml
apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"m1_ae36dc74-68d0-444d-8931-06b37513990a","leaseDurationSeconds":15,"acquireTime":"2020-09-04T15:47:14Z","renewTime":"2020-09-04T15:47:39Z","leaderTransitions":0}'
  creationTimestamp: "2020-09-04T15:47:15Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:control-plane.alpha.kubernetes.io/leader: {}
    manager: kube-controller-manager
    operation: Update
    time: "2020-09-04T15:47:39Z"
  name: kube-controller-manager
  namespace: kube-system
  resourceVersion: "1908"
  selfLink: /api/v1/namespaces/kube-system/endpoints/kube-controller-manager
  uid: 149b117e-f7c4-4ad8-bc83-09345886678a
[root@m1 ~]# 

If it did not start successfully, check the logs to troubleshoot:

$ journalctl -f -u kube-controller-manager

Deploy the scheduler (master nodes)

After startup, the scheduler instances elect a leader node through a competitive election; the other nodes stay blocked. When the leader becomes unavailable, the remaining nodes hold another election to produce a new leader, which keeps the service available.

Create certificate and private key

Create a scheduler-csr.json file with the following content:

{
    "CN": "system:kube-scheduler",
    "hosts": [
      "127.0.0.1",
      "192.168.243.143",
      "192.168.243.144",
      "192.168.243.145"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "ST": "BeiJing",
        "L": "BeiJing",
        "O": "system:kube-scheduler",
        "OU": "seven"
      }
    ]
}

Generate the certificate and private key:

[root@m1 ~]# cfssl gencert -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes scheduler-csr.json | cfssljson -bare kube-scheduler
[root@m1 ~]# ls kube-scheduler*.pem
kube-scheduler-key.pem  kube-scheduler.pem
[root@m1 ~]# 

Distribute them to every master node:

[root@m1 ~]# for i in m1 m2 m3; do scp kube-scheduler*.pem $i:/etc/kubernetes/pki/; done

Create the scheduler kubeconfig

Create the kubeconfig:

# set the cluster parameters
[root@m1 ~]# kubectl config set-cluster kubernetes \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://192.168.243.101:6443 \
  --kubeconfig=kube-scheduler.kubeconfig

# set the client authentication parameters
[root@m1 ~]# kubectl config set-credentials system:kube-scheduler \
  --client-certificate=kube-scheduler.pem \
  --client-key=kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-scheduler.kubeconfig

# set the context parameters
[root@m1 ~]# kubectl config set-context system:kube-scheduler \
  --cluster=kubernetes \
  --user=system:kube-scheduler \
  --kubeconfig=kube-scheduler.kubeconfig

Set the default context:

[root@m1 ~]# kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig

Distribute the kube-scheduler.kubeconfig file to every master node:

[root@m1 ~]# for i in m1 m2 m3; do scp kube-scheduler.kubeconfig $i:/etc/kubernetes/; done

Create the service file

Create a kube-scheduler.service file with the following content:

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/opt/kubernetes/bin/kube-scheduler \
  --address=127.0.0.1 \
  --kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \
  --leader-elect=true \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

kube-scheduler.service配置文件分發(fā)到每個master節(jié)點上:

[root@m1 ~]# for i in m1 m2 m3; do scp kube-scheduler.service $i:/etc/systemd/system/; done

Start the service

Start the kube-scheduler service on each master node:

$ systemctl daemon-reload && systemctl enable kube-scheduler && systemctl restart kube-scheduler

Check the service status; active (running) means it started successfully:

$ service kube-scheduler status

View the leader information:

[root@m1 ~]# kubectl get endpoints kube-scheduler --namespace=kube-system -o yaml
apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"m1_f6c4da9f-85b4-47e2-919d-05b24b4aacac","leaseDurationSeconds":15,"acquireTime":"2020-09-04T16:03:57Z","renewTime":"2020-09-04T16:04:19Z","leaderTransitions":0}'
  creationTimestamp: "2020-09-04T16:03:57Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:control-plane.alpha.kubernetes.io/leader: {}
    manager: kube-scheduler
    operation: Update
    time: "2020-09-04T16:04:19Z"
  name: kube-scheduler
  namespace: kube-system
  resourceVersion: "3230"
  selfLink: /api/v1/namespaces/kube-system/endpoints/kube-scheduler
  uid: c2f2210d-b00f-4157-b597-d3e3b4bec38b
[root@m1 ~]# 

If it did not start successfully, check the logs to troubleshoot:

$ journalctl -f -u kube-scheduler

Deploy the kubelet (worker nodes)

Pre-pull the required docker images

First we need to pre-pull the images on all nodes. Some of the images cannot be downloaded without getting around the firewall, so here is a simple script that pulls them from the Aliyun registry and retags them:

[root@m1 ~]# vim download-images.sh
#!/bin/bash

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.2 k8s.gcr.io/pause-amd64:3.2
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.2

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0 k8s.gcr.io/coredns:1.7.0
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0

Distribute the script to the other nodes:

[root@m1 ~]# for i in m2 m3 n1 n2; do scp download-images.sh $i:~; done

Then have every node run the script:

[root@m1 ~]# for i in m1 m2 m3 n1 n2; do ssh $i "sh ~/download-images.sh"; done

After the pull completes, each node should have the following images:

$ docker images
REPOSITORY               TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/coredns       1.7.0               bfe3a36ebd25        2 months ago        45.2MB
k8s.gcr.io/pause-amd64   3.2                 80d28bedfe5d        6 months ago        683kB

Create the bootstrap configuration file

Create a token and set it as an environment variable:

[root@m1 ~]# export BOOTSTRAP_TOKEN=$(kubeadm token create \
      --description kubelet-bootstrap-token \
      --groups system:bootstrappers:worker \
      --kubeconfig kube.config)
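
You can confirm that a token was actually issued (the variable should not be empty, and the token should be visible through the api-server); a quick check, assuming the kube.config file created in the kubectl section:

[root@m1 ~]# echo ${BOOTSTRAP_TOKEN}
[root@m1 ~]# kubeadm token list --kubeconfig kube.config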

Create kubelet-bootstrap.kubeconfig:

# set the cluster parameters
[root@m1 ~]# kubectl config set-cluster kubernetes \
      --certificate-authority=ca.pem \
      --embed-certs=true \
      --server=https://192.168.243.101:6443 \
      --kubeconfig=kubelet-bootstrap.kubeconfig

# set the client authentication parameters
[root@m1 ~]# kubectl config set-credentials kubelet-bootstrap \
      --token=${BOOTSTRAP_TOKEN} \
      --kubeconfig=kubelet-bootstrap.kubeconfig

# set the context parameters
[root@m1 ~]# kubectl config set-context default \
      --cluster=kubernetes \
      --user=kubelet-bootstrap \
      --kubeconfig=kubelet-bootstrap.kubeconfig

Set the default context:

[root@m1 ~]# kubectl config use-context default --kubeconfig=kubelet-bootstrap.kubeconfig

Create the k8s configuration directory on the worker nodes and copy the generated config file to every worker node:

[root@m1 ~]# for i in n1 n2; do ssh $i "mkdir /etc/kubernetes/"; done
[root@m1 ~]# for i in n1 n2; do scp kubelet-bootstrap.kubeconfig $i:/etc/kubernetes/kubelet-bootstrap.kubeconfig; done

worker節(jié)點上創(chuàng)建密鑰存放目錄:

[root@m1 ~]# for i in n1 n2; do ssh $i "mkdir -p /etc/kubernetes/pki"; done

Distribute the CA certificate to every worker node:

[root@m1 ~]# for i in n1 n2; do scp ca.pem $i:/etc/kubernetes/pki/; done

kubelet configuration file

Create a kubelet.config.json configuration file with the following content:

{
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "authentication": {
    "x509": {
      "clientCAFile": "/etc/kubernetes/pki/ca.pem"
    },
    "webhook": {
      "enabled": true,
      "cacheTTL": "2m0s"
    },
    "anonymous": {
      "enabled": false
    }
  },
  "authorization": {
    "mode": "Webhook",
    "webhook": {
      "cacheAuthorizedTTL": "5m0s",
      "cacheUnauthorizedTTL": "30s"
    }
  },
  "address": "192.168.243.146",
  "port": 10250,
  "readOnlyPort": 10255,
  "cgroupDriver": "cgroupfs",
  "hairpinMode": "promiscuous-bridge",
  "serializeImagePulls": false,
  "featureGates": {
    "RotateKubeletClientCertificate": true,
    "RotateKubeletServerCertificate": true
  },
  "clusterDomain": "cluster.local.",
  "clusterDNS": ["10.255.0.2"]
}

kubelet配置文件分發(fā)到每個worker節(jié)點上:

[root@m1 ~]# for i in n1 n2; do scp kubelet.config.json $i:/etc/kubernetes/; done

Note: after distribution, the address field in the config file must be changed to the IP of each node; see the sketch below.
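
Since the file above already contains n1's IP (192.168.243.146), only n2 needs patching; a sed sketch run from m1:

[root@m1 ~]# ssh n2 "sed -i 's/192.168.243.146/192.168.243.147/' /etc/kubernetes/kubelet.config.json"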

kubelet service file

Create a kubelet.service file with the following content:

[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/opt/kubernetes/bin/kubelet \
  --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \
  --cert-dir=/etc/kubernetes/pki \
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
  --config=/etc/kubernetes/kubelet.config.json \
  --network-plugin=cni \
  --pod-infra-container-image=k8s.gcr.io/pause-amd64:3.2 \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

kubelet的服務(wù)文件分發(fā)到每個worker節(jié)點上

[root@m1 ~]# for i in n1 n2; do scp kubelet.service $i:/etc/systemd/system/; done

Start the service

When the kubelet starts, it checks whether the file specified by --kubeconfig exists; if it does not, it uses --bootstrap-kubeconfig to send a certificate signing request (CSR) to kube-apiserver.

When kube-apiserver receives the CSR, it authenticates the token in it (the token created earlier with kubeadm). Once authenticated, the user of the request is set to system:bootstrap:<token-id> and the group to system:bootstrappers; this is Bootstrap Token Auth.

bootstrap賦權(quán)豪嚎,即創(chuàng)建一個角色綁定:

[root@m1 ~]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --group=system:bootstrappers

Then the kubelet service can be started; run the following commands on every worker node:

$ mkdir -p /var/lib/kubelet
$ systemctl daemon-reload && systemctl enable kubelet && systemctl restart kubelet

Check the service status; active (running) means it started successfully:

$ systemctl status kubelet

If it did not start successfully, check the logs to troubleshoot:

$ journalctl -f -u kubelet

After confirming the kubelet service started successfully, go to a master node and approve the bootstrap requests. Running the following command shows the two CSRs sent by the two worker nodes:

[root@m1 ~]# kubectl get csr
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR                 CONDITION
node-csr-0U6dO2MrD_KhUCdofq1rab6yrLvuVMJkAXicLldzENE   27s   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:seh1w7   Pending
node-csr-QMAVx75MnxCpDT5QtI6liNZNfua39vOwYeUyiqTIuPg   74s   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:seh1w7   Pending
[root@m1 ~]# 

Then simply approve the two requests:

[root@m1 ~]# kubectl certificate approve node-csr-0U6dO2MrD_KhUCdofq1rab6yrLvuVMJkAXicLldzENE
certificatesigningrequest.certificates.k8s.io/node-csr-0U6dO2MrD_KhUCdofq1rab6yrLvuVMJkAXicLldzENE approved
[root@m1 ~]# kubectl certificate approve node-csr-QMAVx75MnxCpDT5QtI6liNZNfua39vOwYeUyiqTIuPg
certificatesigningrequest.certificates.k8s.io/node-csr-QMAVx75MnxCpDT5QtI6liNZNfua39vOwYeUyiqTIuPg approved
[root@m1 ~]# 
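
With only two nodes this is quick, but if there are many CSRs a one-liner can approve everything currently listed (use with care, as it approves all of them indiscriminately):

[root@m1 ~]# kubectl get csr -o name | xargs kubectl certificate approve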

Deploy kube-proxy (worker nodes)

Create certificate and private key

Create a kube-proxy-csr.json file with the following content:

{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "seven"
    }
  ]
}

Generate the certificate and private key:

[root@m1 ~]# cfssl gencert -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
[root@m1 ~]# ls kube-proxy*.pem
kube-proxy-key.pem  kube-proxy.pem
[root@m1 ~]# 

Create and distribute the kubeconfig file

Run the following commands to create the kube-proxy.kubeconfig file:

# set the cluster parameters
[root@m1 ~]# kubectl config set-cluster kubernetes \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://192.168.243.101:6443 \
  --kubeconfig=kube-proxy.kubeconfig

# set the client authentication parameters
[root@m1 ~]# kubectl config set-credentials kube-proxy \
  --client-certificate=kube-proxy.pem \
  --client-key=kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

# set the context parameters
[root@m1 ~]# kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

Switch the default context:

[root@m1 ~]# kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

Distribute the kube-proxy.kubeconfig file to each worker node:

[root@m1 ~]# for i in n1 n2; do scp kube-proxy.kubeconfig $i:/etc/kubernetes/; done

Create and distribute the kube-proxy configuration file

Create a kube-proxy.config.yaml file with the following content:

apiVersion: kubeproxy.config.k8s.io/v1alpha1
# replace with the IP of the node
bindAddress: {worker_ip}
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
clusterCIDR: 172.23.0.0/16
# replace with the IP of the node
healthzBindAddress: {worker_ip}:10256
kind: KubeProxyConfiguration
# replace with the IP of the node
metricsBindAddress: {worker_ip}:10249
mode: "iptables"

kube-proxy.config.yaml文件分發(fā)到每個worker節(jié)點上:

[root@m1 ~]# for i in n1 n2; do scp kube-proxy.config.yaml $i:/etc/kubernetes/; done
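
A sed sketch for filling in the {worker_ip} placeholders from m1, using the worker IPs from the table at the top:

[root@m1 ~]# ssh n1 "sed -i 's/{worker_ip}/192.168.243.146/g' /etc/kubernetes/kube-proxy.config.yaml"
[root@m1 ~]# ssh n2 "sed -i 's/{worker_ip}/192.168.243.147/g' /etc/kubernetes/kube-proxy.config.yaml"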

Create and distribute the kube-proxy service file

Create a kube-proxy.service file with the following content:

[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy \
  --config=/etc/kubernetes/kube-proxy.config.yaml \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

kube-proxy.service文件分發(fā)到所有worker節(jié)點上:

[root@m1 ~]# for i in n1 n2; do scp kube-proxy.service $i:/etc/systemd/system/; done

Start the service

Create the directories the kube-proxy service depends on:

[root@m1 ~]# for i in n1 n2; do ssh $i "mkdir -p /var/lib/kube-proxy && mkdir -p /var/log/kubernetes"; done

Then the kube-proxy service can be started; run the following command on every worker node:

$ systemctl daemon-reload && systemctl enable kube-proxy && systemctl restart kube-proxy

Check the service status; active (running) means it started successfully:

$ systemctl status kube-proxy

If it did not start successfully, check the logs to troubleshoot:

$ journalctl -f -u kube-proxy

Deploy the CNI Plugin - calico

We deploy calico using the official installation method. Create the directory (run this on a node where kubectl is configured):

[root@m1 ~]# mkdir -p /etc/kubernetes/addons

In that directory, create a calico-rbac-kdd.yaml file with the following content:

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: calico-node
rules:
  - apiGroups: [""]
    resources:
      - namespaces
    verbs:
      - get
      - list
      - watch
  - apiGroups: [""]
    resources:
      - pods/status
    verbs:
      - update
  - apiGroups: [""]
    resources:
      - pods
    verbs:
      - get
      - list
      - watch
      - patch
  - apiGroups: [""]
    resources:
      - services
    verbs:
      - get
  - apiGroups: [""]
    resources:
      - endpoints
    verbs:
      - get
  - apiGroups: [""]
    resources:
      - nodes
    verbs:
      - get
      - list
      - update
      - watch
  - apiGroups: ["extensions"]
    resources:
      - networkpolicies
    verbs:
      - get
      - list
      - watch
  - apiGroups: ["networking.k8s.io"]
    resources:
      - networkpolicies
    verbs:
      - watch
      - list
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - globalfelixconfigs
      - felixconfigurations
      - bgppeers
      - globalbgpconfigs
      - bgpconfigurations
      - ippools
      - globalnetworkpolicies
      - globalnetworksets
      - networkpolicies
      - clusterinformations
      - hostendpoints
    verbs:
      - create
      - get
      - list
      - update
      - watch

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: calico-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-node
subjects:
- kind: ServiceAccount
  name: calico-node
  namespace: kube-system

Then run the following commands to complete the calico installation:

[root@m1 ~]# kubectl apply -f /etc/kubernetes/addons/calico-rbac-kdd.yaml
[root@m1 ~]# kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

Wait a few minutes and check the Pod status; the deployment is successful only when all of them are Running:

[root@m1 ~]# kubectl get pod --all-namespaces 
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-5bc4fc6f5f-z8lhf   1/1     Running   0          105s
kube-system   calico-node-qflvj                          1/1     Running   0          105s
kube-system   calico-node-x9m2n                          1/1     Running   0          105s
[root@m1 ~]# 

Deploy the DNS Plugin - coredns

/etc/kubernetes/addons/目錄下創(chuàng)建coredns.yaml配置文件:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
          lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf {
          max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. Default is 1.
  # 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      affinity:
         podAntiAffinity:
           preferredDuringSchedulingIgnoredDuringExecution:
           - weight: 100
             podAffinityTerm:
               labelSelector:
                 matchExpressions:
                   - key: k8s-app
                     operator: In
                     values: ["kube-dns"]
               topologyKey: kubernetes.io/hostname
      containers:
      - name: coredns
        image: coredns/coredns:1.7.0
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.255.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
  • Tips: this file was generated with the official deploy.sh script; when running that script, use the -i parameter to specify the DNS clusterIP, which is usually the second address of the kubernetes service IP range. The IP-related values are described at the beginning of this article.

Then run the following command to deploy coredns:

[root@m1 ~]# kubectl create -f /etc/kubernetes/addons/coredns.yaml
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created
[root@m1 ~]# 

Check the Pod status:

[root@m1 ~]# kubectl get pod --all-namespaces | grep coredns
kube-system   coredns-7bf4bd64bd-ww4q2       1/1     Running   0          3m40s
[root@m1 ~]# 

Check the node status in the cluster:

[root@m1 ~]# kubectl get node
NAME   STATUS   ROLES    AGE     VERSION
n1     Ready    <none>   3h30m   v1.19.0
n2     Ready    <none>   3h30m   v1.19.0
[root@m1 ~]# 

Cluster Availability Test

Create an nginx ds

m1節(jié)點上創(chuàng)建nginx-ds.yml配置文件嘉蕾,內(nèi)容如下:

apiVersion: v1
kind: Service
metadata:
  name: nginx-ds
  labels:
    app: nginx-ds
spec:
  type: NodePort
  selector:
    app: nginx-ds
  ports:
  - name: http
    port: 80
    targetPort: 80
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ds
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      app: nginx-ds
  template:
    metadata:
      labels:
        app: nginx-ds
    spec:
      containers:
      - name: my-nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80

Then run the following command to create the nginx ds:

[root@m1 ~]# kubectl create -f nginx-ds.yml
service/nginx-ds created
daemonset.apps/nginx-ds created
[root@m1 ~]# 

Check IP connectivity

Wait a moment, then check whether the Pods are healthy:

[root@m1 ~]# kubectl get pods -o wide
NAME             READY   STATUS    RESTARTS   AGE   IP              NODE   NOMINATED NODE   READINESS GATES
nginx-ds-4f48f   1/1     Running   0          63s   172.16.40.130   n1     <none>           <none>
nginx-ds-zsm7d   1/1     Running   0          63s   172.16.217.10   n2     <none>           <none>
[root@m1 ~]# 

Try pinging the Pod IPs from each worker node (the master nodes do not have calico installed, so they cannot reach the Pod IPs):

[root@n1 ~]# ping 172.16.40.130
PING 172.16.40.130 (172.16.40.130) 56(84) bytes of data.
64 bytes from 172.16.40.130: icmp_seq=1 ttl=64 time=0.073 ms
64 bytes from 172.16.40.130: icmp_seq=2 ttl=64 time=0.055 ms
64 bytes from 172.16.40.130: icmp_seq=3 ttl=64 time=0.052 ms
64 bytes from 172.16.40.130: icmp_seq=4 ttl=64 time=0.054 ms
^C
--- 172.16.40.130 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 2999ms
rtt min/avg/max/mdev = 0.052/0.058/0.073/0.011 ms
[root@n1 ~]# 

After confirming the Pod IPs are pingable, check the Service status:

[root@m1 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)       AGE
kubernetes   ClusterIP   10.255.0.1     <none>        443/TCP       17h
nginx-ds     NodePort    10.255.4.100   <none>        80:8568/TCP   11m
[root@m1 ~]# 

Try accessing the nginx-ds service from each worker node (the master nodes do not run kube-proxy, so they cannot reach the Service IP):

[root@n1 ~]# curl 10.255.4.100:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a >nginx.org</a>.<br/>
Commercial support is available at
<a >nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@n1 ~]# 

Check NodePort availability on every node. A NodePort maps the service port to a port on the host, so normally all nodes can reach the nginx-ds service via a worker node's IP + NodePort:

$ curl 192.168.243.146:8568
$ curl 192.168.243.147:8568

Check DNS availability

We need to create an Nginx Pod. First define a pod-nginx.yaml configuration file with the following content:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.7.9
    ports:
    - containerPort: 80

Then create the Pod from this configuration file:

[root@m1 ~]# kubectl create -f pod-nginx.yaml
pod/nginx created
[root@m1 ~]# 

Enter the Pod with the following command:

[root@m1 ~]# kubectl exec nginx -i -t -- /bin/bash

Check the DNS configuration; the nameserver value should be the coredns clusterIP:

root@nginx:/# cat /etc/resolv.conf
nameserver 10.255.0.2
search default.svc.cluster.local. svc.cluster.local. cluster.local. localdomain
options ndots:5
root@nginx:/# 

Next, test whether Service names resolve correctly. As shown below, the name nginx-ds resolves to its IP 10.255.4.100, which means DNS is working:

root@nginx:/# ping nginx-ds
PING nginx-ds.default.svc.cluster.local (10.255.4.100): 48 data bytes

The kubernetes service also resolves correctly:

root@nginx:/# ping kubernetes
PING kubernetes.default.svc.cluster.local (10.255.0.1): 48 data bytes

High Availability Test

m1節(jié)點上的kubectl配置文件拷貝到其他兩臺master節(jié)點上:

[root@m1 ~]# for i in m2 m3; do ssh $i "mkdir ~/.kube/"; done
[root@m1 ~]# for i in m2 m3; do scp ~/.kube/config $i:~/.kube/; done

m1節(jié)點上執(zhí)行如下命令將其關(guān)機(jī):

[root@m1 ~]# init 0

Then check whether the virtual IP has successfully failed over to the m2 node:

[root@m2 ~]# ip a |grep 192.168.243.101
    inet 192.168.243.101/32 scope global ens32
[root@m2 ~]# 

Next, test whether kubectl can interact with the cluster from the m2 and m3 nodes; if it can, the cluster is highly available:

[root@m2 ~]# kubectl get nodes
NAME   STATUS   ROLES    AGE    VERSION
n1     Ready    <none>   4h2m   v1.19.0
n2     Ready    <none>   4h2m   v1.19.0
[root@m2 ~]# 

Deploy the dashboard

The dashboard is a visual UI provided by k8s that simplifies operating and managing the cluster; in the UI you can conveniently view all kinds of information, operate on Pods, Services, and other resources, and create new resources. The dashboard is maintained in its own project repository.

Deploying the dashboard is also fairly simple. First define a dashboard-all.yaml configuration file with the following content:

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 8523
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.0.3
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.4
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
          - mountPath: /tmp
            name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}
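
The manifest above is essentially the official recommended.yaml for Dashboard v2.0.3, with the kubernetes-dashboard Service switched to type NodePort (port 8523). If you prefer to start from the upstream file and patch it yourself, something like the following should work (a sketch; the download URL follows the dashboard project's usual repository layout and should be verified before use):

# download the upstream manifest for v2.0.3 and save it as dashboard-all.yaml (URL assumed)
$ wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.3/aio/deploy/recommended.yaml -O dashboard-all.yaml

# then edit the kubernetes-dashboard Service so its spec matches the NodePort version shown above
$ vim dashboard-all.yaml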

Create the dashboard:

[root@m1 ~]# kubectl create -f dashboard-all.yaml 
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
[root@m1 ~]# 

Check the status of the deployment:

[root@m1 ~]# kubectl get deployment kubernetes-dashboard -n kubernetes-dashboard
NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
kubernetes-dashboard   1/1     1            1           20s
[root@m1 ~]# 

Check the status of the dashboard pods:

[root@m1 ~]# kubectl --namespace kubernetes-dashboard get pods -o wide |grep dashboard
dashboard-metrics-scraper-7b59f7d4df-xzxs8   1/1     Running   0          82s   172.16.217.13   n2     <none>           <none>
kubernetes-dashboard-5dbf55bd9d-s8rhb        1/1     Running   0          82s   172.16.40.132   n1     <none>           <none>
[root@m1 ~]# 
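
If the pods are still being created at this point, you can optionally wait for them to become Ready before checking the service (a minimal sketch; the timeout value is arbitrary):

$ kubectl -n kubernetes-dashboard wait --for=condition=Ready pod --all --timeout=120s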

Check the status of the dashboard service:

[root@m1 ~]# kubectl get services kubernetes-dashboard -n kubernetes-dashboard
NAME                   TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes-dashboard   NodePort   10.255.120.138   <none>        443:8523/TCP   101s
[root@m1 ~]# 

On node n1, check that port 8523 is actually being listened on:

[root@n1 ~]# netstat -ntlp |grep 8523
tcp        0      0 0.0.0.0:8523       0.0.0.0:*     LISTEN      13230/kube-proxy    
[root@n1 ~]# 
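
kube-proxy opens the NodePort on every node it runs on, so the same check can be repeated on the other nodes. For example, using the password-free SSH configured earlier, from m1 (a sketch assuming kube-proxy runs on both workers):

# verify that port 8523 is listened on by kube-proxy on n1 and n2
$ for node in n1 n2; do echo "== $node =="; ssh $node "netstat -ntlp | grep 8523"; done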

Accessing the dashboard

For cluster security, since version 1.7 the dashboard can only be accessed over HTTPS. Because we expose the service as a NodePort, it can be reached at https://NodeIP:NodePort, for example with curl:

[root@n1 ~]# curl https://192.168.243.146:8523 -k
<!--
Copyright 2017 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

<!doctype html>
<html lang="en">

<head>
  <meta charset="utf-8">
  <title>Kubernetes Dashboard</title>
  <link rel="icon"
        type="image/png"
        href="assets/images/kubernetes-logo.png" />
  <meta name="viewport"
        content="width=device-width">
<link rel="stylesheet" href="styles.988f26601cdcb14da469.css"></head>

<body>
  <kd-root></kd-root>
<script src="runtime.ddfec48137b0abfd678a.js" defer></script><script src="polyfills-es5.d57fe778f4588e63cc5c.js" nomodule defer></script><script src="polyfills.49104fe38e0ae7955ebb.js" defer></script><script src="scripts.391d299173602e261418.js" defer></script><script src="main.b94e335c0d02b12e3a7b.js" defer></script></body>

</html>
[root@n1 ~]# 
  • Because the dashboard's certificate is self-signed, the -k flag is needed here so that curl makes the HTTPS request without verifying the certificate
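
If you want to see exactly what certificate the dashboard generated for itself, it can be inspected with openssl (a sketch, assuming openssl is available on the node):

# print the subject and validity period of the dashboard's auto-generated certificate
$ echo | openssl s_client -connect 192.168.243.146:8523 2>/dev/null | openssl x509 -noout -subject -dates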

About custom certificates

By default the dashboard's certificate is auto-generated, which is certainly not a trusted certificate. If you have a domain name and a matching trusted certificate, you can replace it yourself and access the dashboard securely via that domain.

To do so, add the following startup arguments for the dashboard in dashboard-all.yaml to point at the certificate files; the files themselves are injected through a secret.

- --tls-cert-file=dashboard.cer
- --tls-key-file=dashboard.key
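
Note that dashboard-all.yaml already creates an empty kubernetes-dashboard-certs secret, so to use your own certificate you would recreate that secret from your files and then restart the dashboard. A minimal sketch, assuming the files are named dashboard.cer and dashboard.key to match the arguments above:

# replace the empty certs secret with one containing the real certificate and key
$ kubectl -n kubernetes-dashboard delete secret kubernetes-dashboard-certs
$ kubectl -n kubernetes-dashboard create secret generic kubernetes-dashboard-certs \
    --from-file=dashboard.cer --from-file=dashboard.key

# restart the dashboard so it picks up the new secret and startup arguments
$ kubectl -n kubernetes-dashboard rollout restart deployment kubernetes-dashboard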

Logging in to the dashboard

The dashboard only supports token authentication by default, so if you want to use a KubeConfig file, the token has to be embedded in that file (a sketch of building such a file is given after the token is retrieved below). Here we log in with the token directly.

First, create a service account:

[root@m1 ~]# kubectl create sa dashboard-admin -n kube-system
serviceaccount/dashboard-admin created
[root@m1 ~]#

Create the cluster role binding:

[root@m1 ~]# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created
[root@m1 ~]# 

Look up the name of the dashboard-admin Secret:

[root@m1 ~]# kubectl get secrets -n kube-system | grep dashboard-admin | awk '{print $1}'
dashboard-admin-token-757fb
[root@m1 ~]# 

Print the token stored in that Secret:

[root@m1 ~]# ADMIN_SECRET=$(kubectl get secrets -n kube-system | grep dashboard-admin | awk '{print $1}')
[root@m1 ~]# kubectl describe secret -n kube-system ${ADMIN_SECRET} | grep -E '^token' | awk '{print $2}'
eyJhbGciOiJSUzI1NiIsImtpZCI6Ilhyci13eDR3TUtmSG9kcXJxdzVmcFdBTFBGeDhrOUY2QlZoenZhQWVZM0EifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tNzU3ZmIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiYjdlMWVhMzQtMjNhMS00MjZkLWI0NTktOGI2NmQxZWZjMWUzIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.UlKmcZoGb6OQ1jE55oShAA2dBiL0FHEcIADCfTogtBEuYLPdJtBUVQZ_aVICGI23gugIu6Y9Yt7iQYlwT6zExhUzDz0UUiBT1nSLe94CkPl64LXbeWkC3w2jee8iSqR2UfIZ4fzY6azaqhGKE1Fmm_DLjD-BS-etphOIFoCQFbabuFjvR8DVDss0z1czhHwXEOvlv5ted00t50dzv0rAZ8JN-PdOoem3aDkXDvWWmqu31QAhqK1strQspbUOF5cgcSeGwsQMfau8U5BNsm_K92IremHqOVvOinkR_EHslomDJRc3FYbV_Jw359rc-QROSTbLphRfvGNx9UANDMo8lA
[root@m1 ~]# 
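
As mentioned earlier, the same token can also be embedded into a KubeConfig file and that file uploaded on the login page instead. A minimal sketch of building such a file; the server address uses the keepalived VIP from the network plan, and the 6443 port is an assumption; adjust it to whatever port your api-server (or its load balancer) actually listens on:

# extract the token straight from the secret (equivalent to the describe/grep above)
$ DASHBOARD_TOKEN=$(kubectl -n kube-system get secret ${ADMIN_SECRET} -o jsonpath='{.data.token}' | base64 -d)

# write a standalone kubeconfig that carries only the token (file name is arbitrary)
$ kubectl config set-cluster kubernetes --server=https://192.168.243.101:6443 \
    --insecure-skip-tls-verify=true --kubeconfig=dashboard-admin.kubeconfig
$ kubectl config set-credentials dashboard-admin --token=${DASHBOARD_TOKEN} \
    --kubeconfig=dashboard-admin.kubeconfig
$ kubectl config set-context default --cluster=kubernetes --user=dashboard-admin \
    --kubeconfig=dashboard-admin.kubeconfig
$ kubectl config use-context default --kubeconfig=dashboard-admin.kubeconfig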

With the token in hand, open https://192.168.243.146:8523 in a browser. Since the dashboard uses a self-signed certificate, the browser will show a warning; ignore it and click "Advanced" -> "Proceed anyway":

(screenshot: browser warning page for the self-signed certificate)

Then enter the token:


(screenshot: dashboard login page with the token pasted in)

After a successful login, the home page looks like this:


(screenshot: dashboard overview page after login)

There is not much more to say about the web UI itself, so it will not be covered further here; feel free to explore it on your own. This brings our journey of building a highly available Kubernetes cluster from binaries to an end. The article is admittedly very long, because it records the details of every single step; if convenience is what you are after, just use kubeadm instead.
