This article covers deploying a native Kubernetes v1.23.6 cluster on CentOS 7 with docker and the flannel CNI. The cluster is mainly for my own study and testing, and resources are limited, so a highly available deployment is not covered here.
I have previously written about Kubernetes fundamentals and other cluster-deployment approaches; feel free to refer to those posts if needed.
1. Preparation
1.1 flannel cluster node information
All machines are virtual machines with 8 vCPUs, 8 GB of RAM and a 100 GB disk.
| IP | Hostname |
|---|---|
| 10.31.8.1 | tiny-flannel-master-8-1.k8s.tcinternal |
| 10.31.8.11 | tiny-flannel-worker-8-11.k8s.tcinternal |
| 10.31.8.12 | tiny-flannel-worker-8-12.k8s.tcinternal |
| 10.8.64.0/18 | podSubnet |
| 10.8.0.0/18 | serviceSubnet |
1.2 Check the MAC address and product_uuid
Every node in the same Kubernetes cluster must have a unique MAC address and product_uuid; verify them before starting cluster initialization.
# Check the MAC address
ip link
ifconfig -a
# Check the product_uuid
sudo cat /sys/class/dmi/id/product_uuid
1.3 Configure passwordless SSH login (optional)
If the cluster nodes have more than one network interface, make sure every node can reach the others over the intended interface.
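A quick way to confirm which interface and source IP will be used to reach the other nodes is to query the routing table (a minimal sanity check; the peer IPs are the nodes from the table above):
# confirm the interface and source address used to reach the other nodes
ip route get 10.31.8.11
ip route get 10.31.8.12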
# As the root user, generate a shared key and configure passwordless login with it
su root
ssh-keygen
cd /root/.ssh/
cat id_rsa.pub >> authorized_keys
chmod 600 authorized_keys
cat >> ~/.ssh/config <<EOF
Host tiny-flannel-master-8-1.k8s.tcinternal
HostName 10.31.8.1
User root
Port 22
IdentityFile ~/.ssh/id_rsa
Host tiny-flannel-worker-8-11.k8s.tcinternal
HostName 10.31.8.11
User root
Port 22
IdentityFile ~/.ssh/id_rsa
Host tiny-flannel-worker-8-12.k8s.tcinternal
HostName 10.31.8.12
User root
Port 22
IdentityFile ~/.ssh/id_rsa
EOF
1.4 Update the hosts file
cat >> /etc/hosts <<EOF
10.31.8.1 tiny-flannel-master-8-1 tiny-flannel-master-8-1.k8s.tcinternal
10.31.8.11 tiny-flannel-worker-8-11 tiny-flannel-worker-8-11.k8s.tcinternal
10.31.8.12 tiny-flannel-worker-8-12 tiny-flannel-worker-8-12.k8s.tcinternal
EOF
1.5 Disable swap
# Turn swap off immediately
swapoff -a
# Comment out the swap entry in fstab so it is not mounted again at boot
sed -i '/swap / s/^\(.*\)$/#\1/g' /etc/fstab
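To confirm that swap is fully disabled, a simple check:
# verify that no swap is active (both commands should show nothing in use)
free -m | grep -i swap
swapon --show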
1.6 Configure time synchronization
You can use either ntp or chrony, whichever you are used to. For the upstream time source, Alibaba Cloud's ntp1.aliyun.com and the National Time Service Center's ntp.ntsc.ac.cn both work.
Sync with ntp:
# Install the ntpdate tool with yum
yum install ntpdate -y
# Sync the time against the National Time Service Center
ntpdate ntp.ntsc.ac.cn
# Finally check the time
hwclock
Sync with chrony:
# Install chrony with yum
yum install chrony -y
# Enable chronyd at boot, start it, and check its status
systemctl enable chronyd.service
systemctl start chronyd.service
systemctl status chronyd.service
# You can also point it at custom time servers
vim /etc/chrony.conf
# Before the change
$ grep server /etc/chrony.conf
# Use public servers from the pool.ntp.org project.
server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
server 2.centos.pool.ntp.org iburst
server 3.centos.pool.ntp.org iburst
# After the change
$ grep server /etc/chrony.conf
# Use public servers from the pool.ntp.org project.
server ntp.ntsc.ac.cn iburst
# Restart the service so the new config takes effect
systemctl restart chronyd.service
# Check the status of chrony's NTP sources
chronyc sourcestats -v
chronyc sources -v
1.7 Disable SELinux
# Turn it off immediately for the running system
setenforce 0
# And make it permanent by editing /etc/selinux/config
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
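To double-check the SELinux state afterwards:
# getenforce should report Permissive now, and Disabled after the next reboot
getenforce
sestatus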
1.8 Configure the firewall
Communication between cluster nodes and service exposure use quite a few ports, so for simplicity we disable the firewall outright.
# On CentOS 7, stop and disable the default firewalld service via systemctl
systemctl stop firewalld.service
systemctl disable firewalld.service
1.9 Configure netfilter parameters
Here we mainly need the kernel to load the br_netfilter module and let iptables see bridged IPv4 and IPv6 traffic, so that containers in the cluster can communicate properly.
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
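To confirm the parameters took effect (both values should report 1):
# verify the bridge netfilter sysctls
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables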
1.10 Disable IPv6 (optional)
Although recent Kubernetes releases support dual-stack networking, this deployment does not involve any IPv6 traffic, so IPv6 support is disabled.
# Add the ipv6.disable parameter to the kernel boot arguments
grubby --update-kernel=ALL --args=ipv6.disable=1
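The grubby change only modifies the boot entries, so it takes effect after the next reboot; afterwards a rough verification looks like this:
# after a reboot, the parameter should appear in the kernel command line
cat /proc/cmdline | grep ipv6.disable
# and no inet6 addresses should be listed
ip a | grep inet6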
1.11 Configure IPVS (optional)
IPVS is a component designed specifically for load-balancing scenarios. The IPVS implementation in kube-proxy improves scalability by reducing its reliance on iptables: instead of hooking traffic in the iptables PREROUTING chain, it creates a dummy interface called kube-ipvs0, and as the number of load-balancing rules in the cluster grows, IPVS delivers considerably better forwarding performance than iptables.
Note that on kernel 4.19 and later the nf_conntrack module replaces the old nf_conntrack_ipv4 module (Notes: use nf_conntrack instead of nf_conntrack_ipv4 for Linux kernel 4.19 and later); CentOS 7 still ships a 3.10 kernel, so nf_conntrack_ipv4 is used below.
# Make sure ipset and ipvsadm are installed before enabling IPVS mode
sudo yum install ipset ipvsadm -y
# Manually load the IPVS-related kernel modules
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
# Configure the IPVS modules to load automatically at boot
cat <<EOF | sudo tee /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF
# modules-load.d is handled by systemd, so reload that service (or simply reboot) to load the modules now
sudo systemctl restart systemd-modules-load.service
# Ideally reboot once and confirm the modules come up automatically
$ lsmod | grep -e ip_vs -e nf_conntrack_ipv4
ip_vs_sh 12688 0
ip_vs_wrr 12697 0
ip_vs_rr 12600 0
ip_vs 145458 6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack_ipv4 15053 2
nf_defrag_ipv4 12729 1 nf_conntrack_ipv4
nf_conntrack 139264 7 ip_vs,nf_nat,nf_nat_ipv4,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_ipv4
libcrc32c 12644 4 xfs,ip_vs,nf_nat,nf_conntrack
$ cut -f1 -d " " /proc/modules | grep -e ip_vs -e nf_conntrack_ipv4
ip_vs_sh
ip_vs_wrr
ip_vs_rr
ip_vs
nf_conntrack_ipv4
2. Install the container runtime
2.1 Install docker
The detailed official documentation can be found here. Since dockershim was removed in the freshly released 1.24, pay attention to the choice of container runtime when installing version ≥ 1.24. The version we install here is below 1.24, so we keep using docker.
The docker installation itself is covered in an earlier article of mine, so I won't repeat it here.
# Install the required dependencies and add Docker's official yum repository
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# Install the latest version of docker
yum install docker-ce docker-ce-cli containerd.io
2.2 Configure the cgroup driver
CentOS 7 uses systemd to initialize the system and manage processes. The init process creates and uses a root control group (cgroup) and acts as the cgroup manager. systemd is tightly integrated with cgroups and allocates a cgroup to every systemd unit. It is possible to configure the container runtime and the kubelet to use cgroupfs instead, but running cgroupfs alongside systemd means there are two different cgroup managers on the same system, which tends to make it unstable. It is therefore better to configure both the container runtime and the kubelet to use systemd as the cgroup driver, which keeps the system more stable. For Docker this means setting the native.cgroupdriver=systemd option.
See the official documentation:
https://kubernetes.io/docs/setup/production-environment/container-runtimes/#cgroup-drivers
and the configuration reference:
https://kubernetes.io/zh/docs/setup/production-environment/container-runtimes/#docker
sudo mkdir /etc/docker
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
sudo systemctl enable docker
sudo systemctl daemon-reload
sudo systemctl restart docker
# Finally verify that the Cgroup Driver is systemd
$ docker info | grep systemd
Cgroup Driver: systemd
2.3 About the kubelet cgroup driver
The Kubernetes documentation explains in detail how to set the kubelet's cgroup driver. Note in particular that starting with v1.22, if the kubelet cgroup driver is not set manually, it defaults to systemd:
Note: In v1.22, if the user is not setting the cgroupDriver field under KubeletConfiguration, kubeadm will default it to systemd.
A simple way to specify the kubelet cgroup driver is to add a cgroupDriver field to kubeadm-config.yaml:
# kubeadm-config.yaml
kind: ClusterConfiguration
apiVersion: kubeadm.k8s.io/v1beta3
kubernetesVersion: v1.21.0
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd
After initialization we can inspect the cluster's kubeadm-config directly from its ConfigMap.
$ kubectl describe configmaps kubeadm-config -n kube-system
Name:         kubeadm-config
Namespace:    kube-system
Labels:       <none>
Annotations:  <none>

Data
====
ClusterConfiguration:
----
apiServer:
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.23.6
networking:
  dnsDomain: cali-cluster.tclocal
  serviceSubnet: 10.88.0.0/18
scheduler: {}

BinaryData
====

Events:  <none>
當(dāng)然因?yàn)槲覀冃枰惭b的版本高于1.22.0并且使用的就是systemd,因此可以不用再重復(fù)配置过椎。
3室梅、安裝kube三件套
對(duì)應(yīng)的官方文檔可以參考這里
kube三件套就是kubeadm
、kubelet
和 kubectl
疚宇,三者的具體功能和作用如下:
-
kubeadm
:用來(lái)初始化集群的指令亡鼠。 -
kubelet
:在集群中的每個(gè)節(jié)點(diǎn)上用來(lái)啟動(dòng) Pod 和容器等。 -
kubectl
:用來(lái)與集群通信的命令行工具敷待。
需要注意的是:
-
kubeadm
不會(huì)幫助我們管理kubelet
和kubectl
间涵,其他兩者也是一樣的,也就是說(shuō)這三者是相互獨(dú)立的榜揖,并不存在誰(shuí)管理誰(shuí)的情況勾哩; -
kubelet
的版本必須小于等于API-server
的版本,否則容易出現(xiàn)兼容性的問(wèn)題举哟; -
kubectl
并不是集群中的每個(gè)節(jié)點(diǎn)都需要安裝思劳,也并不是一定要安裝在集群中的節(jié)點(diǎn),可以單獨(dú)安裝在自己本地的機(jī)器環(huán)境上面妨猩,然后配合kubeconfig
文件即可使用kubectl
命令來(lái)遠(yuǎn)程管理對(duì)應(yīng)的k8s集群潜叛;
CentOS7的安裝比較簡(jiǎn)單,我們直接使用官方提供的yum
源即可册赛。需要注意的是這里需要設(shè)置selinux
的狀態(tài)钠导,但是前面我們已經(jīng)關(guān)閉了selinux震嫉,因此這里略過(guò)這步森瘪。
# Add Google's official yum repository
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF
# If Google's repository is unreachable, consider using the Aliyun mirror instead
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# Then simply install the three packages
sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
# If a poor network causes the GPG check to fail and the repo cannot be read, consider disabling repo_gpgcheck for this repo
sed -i 's/repo_gpgcheck=1/repo_gpgcheck=0/g' /etc/yum.repos.d/kubernetes.repo
# Or disable the GPG check at install time
sudo yum install -y kubelet kubeadm kubectl --nogpgcheck --disableexcludes=kubernetes
# To install a specific version, use this command to list what is available
sudo yum list --nogpgcheck kubelet kubeadm kubectl --showduplicates --disableexcludes=kubernetes
# 這里我們?yōu)榱吮A羰褂胐ocker-shim,因此我們按照1.24.0版本的前一個(gè)版本1.23.6
sudo yum install -y kubelet-1.23.6-0 kubeadm-1.23.6-0 kubectl-1.23.6-0 --nogpgcheck --disableexcludes=kubernetes
# After installation, enable kubelet to start at boot
sudo systemctl enable --now kubelet
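A quick check that the expected versions were installed (kubelet will keep restarting until kubeadm init runs, which is normal at this point):
# verify the installed versions
kubeadm version
kubelet --version
kubectl version --client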
4. Initialize the cluster
4.1 Write the configuration file
Once the steps in the three sections above have been completed on every node, we can start creating the Kubernetes cluster. Since this deployment does not involve high availability, the initialization is performed directly on the designated master node.
# First use kubeadm to list the main image versions
# Because we pinned the older 1.23.6 release earlier, the apiserver image version rolls back accordingly
$ kubeadm config images list
I0507 14:14:34.992275 20038 version.go:255] remote version is much newer: v1.24.0; falling back to: stable-1.23
k8s.gcr.io/kube-apiserver:v1.23.6
k8s.gcr.io/kube-controller-manager:v1.23.6
k8s.gcr.io/kube-scheduler:v1.23.6
k8s.gcr.io/kube-proxy:v1.23.6
k8s.gcr.io/pause:3.6
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/coredns/coredns:v1.8.6
# For easier editing and management, export the initialization parameters to a config file
$ kubeadm config print init-defaults > kubeadm-flannel.conf
- Since networks in mainland China usually cannot reach Google's k8s.gcr.io registry, change the imageRepository parameter in the config file to the Aliyun mirror.
- The kubernetesVersion field specifies the Kubernetes version to install.
- localAPIEndpoint must be changed to the master node's IP and port; this becomes the apiserver address of the initialized cluster.
- serviceSubnet and dnsDomain can normally be left at their defaults; here I changed them to match my own plan.
- Change the name parameter under nodeRegistration to the hostname of the corresponding master node.
- Add a new configuration block to use IPVS; see the official documentation for details.
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.31.8.1
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  imagePullPolicy: IfNotPresent
  name: tiny-flannel-master-8-1.k8s.tcinternal
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.23.6
networking:
  dnsDomain: flan-cluster.tclocal
  serviceSubnet: 10.8.0.0/18
  podSubnet: 10.8.64.0/18
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
4.2 Initialize the cluster
If we now list the image versions using the config file, we can see they have switched to the Aliyun registry.
# List the image versions to confirm the config file takes effect
$ kubeadm config images list --config kubeadm-flannel.conf
registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.6
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.6
registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.6
registry.aliyuncs.com/google_containers/kube-proxy:v1.23.6
registry.aliyuncs.com/google_containers/pause:3.6
registry.aliyuncs.com/google_containers/etcd:3.5.1-0
registry.aliyuncs.com/google_containers/coredns:v1.8.6
# Once everything looks right, pull the images
$ kubeadm config images pull --config kubeadm-flannel.conf
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.6
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.6
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.6
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.23.6
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.6
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.1-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.8.6
# Initialize
$ kubeadm init --config kubeadm-flannel.conf
[init] Using Kubernetes version: v1.23.6
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
...a long stretch of output omitted here...
When we see the following output, the cluster has been initialized successfully.
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.31.8.1:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:d7160866920c0331731ad3c1c31a6e5b6c788b5682f86971cacaa940211db9ab
4.3 Configure kubeconfig
Right after a successful init we cannot query the cluster yet; kubeconfig has to be set up before kubectl can connect to the apiserver and read cluster information.
# For non-root users
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# For the root user, just export the environment variable
export KUBECONFIG=/etc/kubernetes/admin.conf
# Add kubectl shell autocompletion
echo "source <(kubectl completion bash)" >> ~/.bashrc
As mentioned earlier, kubectl does not have to be installed inside the cluster. Any machine that can reach the apiserver can have kubectl installed; configure kubeconfig following the same steps and the kubectl command line can then be used to manage the corresponding Kubernetes cluster remotely.
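As a minimal sketch of that remote setup (assuming a workstation that already has kubectl installed and can reach 10.31.8.1:6443; the local file path is arbitrary):
# copy the admin kubeconfig from the master node to the workstation
mkdir -p ~/.kube
scp root@10.31.8.1:/etc/kubernetes/admin.conf ~/.kube/flannel-cluster.conf
# point kubectl at it, either per command or via the environment
kubectl --kubeconfig ~/.kube/flannel-cluster.conf get nodes
export KUBECONFIG=~/.kube/flannel-cluster.conf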
Once kubeconfig is configured, the usual commands will show the cluster information.
$ kubectl cluster-info
Kubernetes control plane is running at https://10.31.8.1:6443
CoreDNS is running at https://10.31.8.1:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
tiny-flannel-master-8-1.k8s.tcinternal NotReady control-plane,master 79s v1.23.6 10.31.8.1 <none> CentOS Linux 7 (Core) 3.10.0-1160.62.1.el7.x86_64 docker://20.10.14
$ kubectl get pods -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-6d8c4cb4d-2clkj 0/1 Pending 0 86s <none> <none> <none> <none>
kube-system coredns-6d8c4cb4d-8mznz 0/1 Pending 0 86s <none> <none> <none> <none>
kube-system etcd-tiny-flannel-master-8-1.k8s.tcinternal 1/1 Running 0 91s 10.31.8.1 tiny-flannel-master-8-1.k8s.tcinternal <none> <none>
kube-system kube-apiserver-tiny-flannel-master-8-1.k8s.tcinternal 1/1 Running 0 92s 10.31.8.1 tiny-flannel-master-8-1.k8s.tcinternal <none> <none>
kube-system kube-controller-manager-tiny-flannel-master-8-1.k8s.tcinternal 1/1 Running 0 90s 10.31.8.1 tiny-flannel-master-8-1.k8s.tcinternal <none> <none>
kube-system kube-proxy-dkvrn 1/1 Running 0 86s 10.31.8.1 tiny-flannel-master-8-1.k8s.tcinternal <none> <none>
kube-system kube-scheduler-tiny-flannel-master-8-1.k8s.tcinternal 1/1 Running 0 92s 10.31.8.1 tiny-flannel-master-8-1.k8s.tcinternal <none> <none>
4.4 Add worker nodes
Now we add the remaining two nodes as workers to run workloads. Simply run the join command that was printed when cluster initialization succeeded on each remaining node and it will join the cluster:
$ kubeadm join 10.31.8.1:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:d7160866920c0331731ad3c1c31a6e5b6c788b5682f86971cacaa940211db9ab
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
If you did not save the output from the successful initialization, that is not a problem: kubeadm can list existing tokens or generate new ones.
# List the existing tokens
$ kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
abcdef.0123456789abcdef 23h 2022-05-08T06:27:34Z authentication,signing <none> system:bootstrappers:kubeadm:default-node-token
# If the token has expired, just create a new one
$ kubeadm token create
pyab3u.j1a9ld7vk03znbk8
$ kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
abcdef.0123456789abcdef 23h 2022-05-08T06:27:34Z authentication,signing <none> system:bootstrappers:kubeadm:default-node-token
pyab3u.j1a9ld7vk03znbk8 23h 2022-05-08T06:34:28Z authentication,signing <none> system:bootstrappers:kubeadm:default-node-token
# If you cannot find the --discovery-token-ca-cert-hash value, it can be computed on the master node with openssl
$ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
d6cdc5a3bc40cbb0ae85776eb4fcdc1854942e2dd394470ae0f2f97714dd9fb9
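Alternatively, a single command on the master regenerates a complete, ready-to-paste join command:
# print a full kubeadm join command with a fresh token and the CA cert hash
kubeadm token create --print-join-command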
After the nodes have joined, listing the cluster nodes shows the two new workers, but their status is still NotReady; the next step is to deploy the CNI.
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
tiny-flannel-master-8-1.k8s.tcinternal NotReady control-plane,master 7m49s v1.23.6
tiny-flannel-worker-8-11.k8s.tcinternal NotReady <none> 2m58s v1.23.6
tiny-flannel-worker-8-12.k8s.tcinternal NotReady <none> 102s v1.23.6
5凑兰、安裝CNI
5.1 編寫(xiě)manifest文件
flannel應(yīng)該是眾多開(kāi)源的CNI插件中入門(mén)門(mén)檻最低的CNI之一了,部署簡(jiǎn)單边锁,原理易懂姑食,且相關(guān)的文檔在網(wǎng)絡(luò)上也非常豐富。
# 我們先把官方的yaml模板下載下來(lái)茅坛,然后對(duì)關(guān)鍵字段逐個(gè)修改
$ wget https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
In kube-flannel.yml we need to modify a few parameters to fit our cluster:
- The net-conf.json parameter configures the pod network; here we use the 10.8.64.0/18 range planned earlier (a quick way to patch it is shown after the snippet):
  net-conf.json: |
    {
      "Network": "10.8.64.0/18",
      "Backend": {
        "Type": "vxlan"
      }
    }
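If the downloaded manifest still contains flannel's stock default pod network (commonly 10.244.0.0/16 — verify this in your own copy before relying on it), a one-line sed can swap it for our planned range:
# replace flannel's default Network value with our pod subnet, then confirm the change
sed -i 's#10.244.0.0/16#10.8.64.0/18#' kube-flannel.yml
grep -A6 'net-conf.json' kube-flannel.yml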
5.2 Deploy flannel
Once the changes are made, apply the manifest directly.
$ kubectl apply -f kube-flannel.yml
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
# Check that the pods are running properly
$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-6d8c4cb4d-np7q2 1/1 Running 0 14m
kube-system coredns-6d8c4cb4d-z8f5b 1/1 Running 0 14m
kube-system etcd-tiny-flannel-master-8-1.k8s.tcinternal 1/1 Running 0 14m
kube-system kube-apiserver-tiny-flannel-master-8-1.k8s.tcinternal 1/1 Running 0 14m
kube-system kube-controller-manager-tiny-flannel-master-8-1.k8s.tcinternal 1/1 Running 0 14m
kube-system kube-flannel-ds-9fq4z 1/1 Running 0 12m
kube-system kube-flannel-ds-ckstx 1/1 Running 0 7m18s
kube-system kube-flannel-ds-qj55x 1/1 Running 0 8m25s
kube-system kube-proxy-bncfl 1/1 Running 0 14m
kube-system kube-proxy-lslcm 1/1 Running 0 7m18s
kube-system kube-proxy-pmwhf 1/1 Running 0 8m25s
kube-system kube-scheduler-tiny-flannel-master-8-1.k8s.tcinternal 1/1 Running 0 14m
# Check the flannel pod logs for errors
$ kubectl logs -f -l app=flannel -n kube-system
6曹鸠、部署測(cè)試用例
集群部署完成之后我們?cè)趉8s集群中部署一個(gè)nginx測(cè)試一下是否能夠正常工作煌茬。首先我們創(chuàng)建一個(gè)名為nginx-quic
的命名空間(namespace
),然后在這個(gè)命名空間內(nèi)創(chuàng)建一個(gè)名為nginx-quic-deployment
的deployment
用來(lái)部署pod彻桃,最后再創(chuàng)建一個(gè)service
用來(lái)暴露服務(wù)坛善,這里我們先使用nodeport
的方式暴露端口方便測(cè)試。
$ cat nginx-quic.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: nginx-quic
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-quic-deployment
  namespace: nginx-quic
spec:
  selector:
    matchLabels:
      app: nginx-quic
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx-quic
    spec:
      containers:
      - name: nginx-quic
        image: tinychen777/nginx-quic:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-quic-service
  namespace: nginx-quic
spec:
  selector:
    app: nginx-quic
  ports:
  - protocol: TCP
    port: 8080        # match for service access port
    targetPort: 80    # match for pod access port
    nodePort: 30088   # match for external access port
  type: NodePort
After deploying, check the status.
# Apply the manifest
$ kubectl apply -f nginx-quic.yaml
namespace/nginx-quic created
deployment.apps/nginx-quic-deployment created
service/nginx-quic-service created
# Check the deployment status
$ kubectl get deployment -o wide -n nginx-quic
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
nginx-quic-deployment 2/2 2 2 48s nginx-quic tinychen777/nginx-quic:latest app=nginx-quic
# Check the service status
$ kubectl get service -o wide -n nginx-quic
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
nginx-quic-service NodePort 10.8.4.218 <none> 8080:30088/TCP 62s app=nginx-quic
# Check the pod status
$ kubectl get pods -o wide -n nginx-quic
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-quic-deployment-696d959797-jm8w5 1/1 Running 0 73s 10.8.66.2 tiny-flannel-worker-8-12.k8s.tcinternal <none> <none>
nginx-quic-deployment-696d959797-lwcqz 1/1 Running 0 73s 10.8.65.2 tiny-flannel-worker-8-11.k8s.tcinternal <none> <none>
# Check the IPVS rules
$ ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.17.0.1:30088 rr
-> 10.8.65.2:80 Masq 1 0 0
-> 10.8.66.2:80 Masq 1 0 0
TCP 10.8.4.218:8080 rr
-> 10.8.65.2:80 Masq 1 0 0
-> 10.8.66.2:80 Masq 1 0 0
TCP 10.8.64.0:30088 rr
-> 10.8.65.2:80 Masq 1 0 0
-> 10.8.66.2:80 Masq 1 0 0
TCP 10.8.64.1:30088 rr
-> 10.8.65.2:80 Masq 1 0 0
-> 10.8.66.2:80 Masq 1 0 0
TCP 10.31.8.1:30088 rr
-> 10.8.65.2:80 Masq 1 0 0
-> 10.8.66.2:80 Masq 1 0 0
Finally we run some tests. By default this nginx-quic image returns the client IP and port as seen inside the nginx container.
# First, test from inside the cluster
# Access a pod directly; the IP shown is that of the flannel.1 interface on the master node
$ curl 10.8.66.2:80
10.8.64.0:38958
$ curl 10.8.65.2:80
10.8.64.0:46484
# Access the service's ClusterIP directly; the request is forwarded to a pod
$ curl 10.8.4.218:8080
10.8.64.0:26305
# Access the NodePort directly; the request is forwarded to a pod without going through the ClusterIP
$ curl 10.31.8.1:30088
10.8.64.0:6519
# 接著我們?cè)诩和膺M(jìn)行測(cè)試
# 直接訪問(wèn)三個(gè)節(jié)點(diǎn)的nodeport,這時(shí)請(qǐng)求會(huì)被轉(zhuǎn)發(fā)到pod中抖拴,不會(huì)經(jīng)過(guò)ClusterIP
# 由于externalTrafficPolicy默認(rèn)為Cluster燎字,nginx拿到的IP就是我們?cè)L問(wèn)的節(jié)點(diǎn)的flannel.1網(wǎng)卡的IP
$ curl 10.31.8.1:30088
10.8.64.0:50688
$ curl 10.31.8.11:30088
10.8.65.1:41032
$ curl 10.31.8.12:30088
10.8.66.0:11422
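If you want the pods to see the real client IP instead of the node's flannel.1 address, one option (a sketch, not something used in this walkthrough) is to switch the service's externalTrafficPolicy to Local, with the trade-off that each node only forwards NodePort traffic to pods running on that same node:
# preserve the client source IP by routing NodePort traffic only to local pods
kubectl patch service nginx-quic-service -n nginx-quic -p '{"spec":{"externalTrafficPolicy":"Local"}}'
# then test against a node that actually hosts one of the pods
curl 10.31.8.11:30088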