k8s Series 03: Deploying a Calico-Networked k8s Cluster with kubeadm

This article walks through deploying a native Kubernetes v1.23.6 cluster on CentOS 7, using docker as the container runtime and calico as the network component. The cluster is mainly for my own study and testing and resources are limited, so high availability is not covered for now.

I have previously written about k8s fundamentals and several cluster-setup approaches; feel free to refer to those earlier posts if needed.

1疑枯、準(zhǔn)備工作

1.1 Calico cluster node information

All machines are 8C8G virtual machines with 100G disks.

IP Hostname
10.31.88.1 tiny-calico-master-88-1.k8s.tcinternal
10.31.88.11 tiny-calico-worker-88-11.k8s.tcinternal
10.31.88.12 tiny-calico-worker-88-12.k8s.tcinternal
10.88.64.0/18 podSubnet
10.88.0.0/18 serviceSubnet

1.2 Check MAC address and product_uuid

All nodes within the same k8s cluster must have unique MAC addresses and product_uuid values; check them before starting cluster initialization.

# check the MAC address
ip link
ifconfig -a

# check the product_uuid
sudo cat /sys/class/dmi/id/product_uuid

1.3 Configure passwordless SSH login (optional)

If the k8s nodes have more than one network interface, make sure every node can reach the others over the intended interface.

# as root, generate a shared key and configure passwordless login with it
su root
ssh-keygen
cd /root/.ssh/
cat id_rsa.pub >> authorized_keys
chmod 600 authorized_keys


cat >> ~/.ssh/config <<EOF
Host tiny-calico-master-88-1.k8s.tcinternal
    HostName 10.31.88.1
    User root
    Port 22
    IdentityFile ~/.ssh/id_rsa

Host tiny-calico-worker-88-11.k8s.tcinternal
    HostName 10.31.88.11
    User root
    Port 22
    IdentityFile ~/.ssh/id_rsa

Host tiny-calico-worker-88-12.k8s.tcinternal
    HostName 10.31.88.12
    User root
    Port 22
    IdentityFile ~/.ssh/id_rsa
EOF

1.4 Update the hosts file

cat >> /etc/hosts <<EOF
10.31.88.1  tiny-calico-master-88-1 tiny-calico-master-88-1.k8s.tcinternal
10.31.88.11 tiny-calico-worker-88-11 tiny-calico-worker-88-11.k8s.tcinternal
10.31.88.12 tiny-calico-worker-88-12 tiny-calico-worker-88-12.k8s.tcinternal
EOF

1.5 Disable swap

# turn off swap immediately
swapoff -a
# comment out the swap entry in fstab so it is not mounted again at boot
sed -i '/swap / s/^\(.*\)$/#\1/g' /etc/fstab
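
You can confirm that swap is now fully off; the Swap line reported by free should show 0 total:

# confirm swap is disabled
free -m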

1.6 Configure time synchronization

You can use either ntp or chrony, whichever you prefer; for the upstream time source, Alibaba Cloud's ntp1.aliyun.com or the National Time Service Center's ntp.ntsc.ac.cn both work.

Syncing with ntp

# install ntpdate with yum
yum install ntpdate -y

# sync the time against the National Time Service Center
ntpdate ntp.ntsc.ac.cn

# finally, check the time
hwclock

Syncing with chrony

# install chrony with yum
yum install chrony -y

# enable chronyd at boot, start it, and check its status
systemctl enable chronyd.service
systemctl start chronyd.service
systemctl status chronyd.service

# you can also customize the time servers
vim /etc/chrony.conf

# before the change
$ grep server /etc/chrony.conf
# Use public servers from the pool.ntp.org project.
server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
server 2.centos.pool.ntp.org iburst
server 3.centos.pool.ntp.org iburst

# after the change
$ grep server /etc/chrony.conf
# Use public servers from the pool.ntp.org project.
server ntp.ntsc.ac.cn iburst

# restart the service so the config takes effect
systemctl restart chronyd.service

# check the status of chrony's ntp sources
chronyc sourcestats -v
chronyc sources -v

1.7 Disable selinux

# switch to permissive mode immediately (effective until reboot)
setenforce 0

# or persist the change by editing /etc/selinux/config (effective after reboot)
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
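
To double-check the current SELinux state, getenforce should now report Permissive (or Disabled after rebooting with the config change above):

# check the current selinux mode
getenforce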

1.8 Configure the firewall

Communication within a k8s cluster and exposing services require a large number of ports, so for convenience we simply disable the firewall.

# on CentOS 7, disable the default firewalld service with systemctl
systemctl disable firewalld.service
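
Note that systemctl disable only prevents firewalld from starting at the next boot; if it is currently running you will likely also want to stop it right away:

# stop the running firewalld service as well
systemctl stop firewalld.service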

1.9 Configure netfilter parameters

Here we need the kernel to load the br_netfilter module and let iptables see bridged IPv4 and IPv6 traffic, so that containers in the cluster can communicate properly.

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
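
The modules-load.d file above is only processed at boot, and the net.bridge.* keys do not exist until br_netfilter is loaded, so if you do not want to reboot you can load the module now and then verify the settings (a quick sanity check):

# load the module immediately and confirm the bridge sysctls
sudo modprobe br_netfilter
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables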

1.10 Disable IPv6 (optional)

Although newer k8s versions support dual-stack networking, this deployment does not involve any IPv6 traffic, so IPv6 support is disabled.

# add the ipv6 disable parameter to the kernel boot arguments
grubby --update-kernel=ALL --args=ipv6.disable=1
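
The grubby change only takes effect after a reboot; once the node is back up you can confirm the parameter made it onto the kernel command line:

# verify after the reboot
cat /proc/cmdline | grep ipv6.disable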

1.11 Configure IPVS (optional)

IPVS is a component designed specifically for load-balancing scenarios. The IPVS implementation in kube-proxy improves scalability by reducing its reliance on iptables: instead of hooking into the iptables PREROUTING chain, it creates a dummy interface called kube-ipvs0. When a cluster accumulates a large number of load-balancing rules, IPVS delivers better forwarding performance than iptables.

Note that in Linux kernel 4.19 and later, the nf_conntrack module replaces the old nf_conntrack_ipv4 module.

(Notes: use nf_conntrack instead of nf_conntrack_ipv4 for Linux kernel 4.19 and later)

# make sure ipset and ipvsadm are installed before using ipvs mode
sudo yum install ipset ipvsadm -y

# manually load the ipvs-related kernel modules
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4

# load the ipvs-related modules automatically at boot
cat <<EOF | sudo tee /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF

# the modules-load.d config is applied by systemd-modules-load at boot,
# so restart that service (or reboot once) to make sure it takes effect
sudo systemctl restart systemd-modules-load.service

$ lsmod | grep -e ip_vs -e nf_conntrack_ipv4
ip_vs_sh               12688  0
ip_vs_wrr              12697  0
ip_vs_rr               12600  0
ip_vs                 145458  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack_ipv4      15053  2
nf_defrag_ipv4         12729  1 nf_conntrack_ipv4
nf_conntrack          139264  7 ip_vs,nf_nat,nf_nat_ipv4,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_ipv4
libcrc32c              12644  4 xfs,ip_vs,nf_nat,nf_conntrack
$ cut -f1 -d " "  /proc/modules | grep -e ip_vs -e nf_conntrack_ipv4
ip_vs_sh
ip_vs_wrr
ip_vs_rr
ip_vs
nf_conntrack_ipv4

2芥被、安裝container runtime

2.1 Install docker

The detailed official documentation is available here. Since docker-shim was removed in the newly released 1.24, pay attention to the choice of container runtime when installing version 1.24 or later. The version we install here is below 1.24, so we stick with docker.

For the details of installing docker itself, see my earlier article on the topic; I will not repeat them here.

# install the required dependencies and import docker's official yum repo
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
sudo yum-config-manager --add-repo  https://download.docker.com/linux/centos/docker-ce.repo

# install the latest version of docker
yum install docker-ce docker-ce-cli containerd.io

2.2 Configure the cgroup driver

CentOS 7 uses systemd to initialize the system and manage processes. The init process creates and uses a root control group (cgroup) and acts as the cgroup manager. systemd integrates tightly with cgroups and allocates a cgroup to every systemd unit. The container runtime and kubelet can instead be configured to use cgroupfs, but running cgroupfs alongside systemd means two different cgroup managers coexist on the system, which tends to make it unstable. It is therefore better to configure both the container runtime and kubelet to use systemd as the cgroup driver. For Docker, this means setting the native.cgroupdriver=systemd option.

See the official documentation for details:

https://kubernetes.io/docs/setup/production-environment/container-runtimes/#cgroup-drivers

And the container-runtime configuration guide:

https://kubernetes.io/zh/docs/setup/production-environment/container-runtimes/#docker

sudo mkdir /etc/docker
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

sudo systemctl enable docker
sudo systemctl daemon-reload
sudo systemctl restart docker


# finally, check that the Cgroup Driver is systemd
$ docker info | grep systemd
 Cgroup Driver: systemd

2.3 About the kubelet cgroup driver

The official k8s docs describe in detail how to set the kubelet cgroup driver. Note in particular that starting with version 1.22, if the cgroupDriver field is not set explicitly, kubeadm defaults it to systemd.

Note: In v1.22, if the user is not setting the cgroupDriver field under KubeletConfiguration, kubeadm will default it to systemd.

A fairly simple way to specify the kubelet cgroup driver is to add a cgroupDriver field to kubeadm-config.yaml:

# kubeadm-config.yaml
kind: ClusterConfiguration
apiVersion: kubeadm.k8s.io/v1beta3
kubernetesVersion: v1.21.0
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd

We can inspect the kubeadm-config configmap directly to view the cluster configuration after initialization.

$ kubectl describe configmaps kubeadm-config -n kube-system
Name:         kubeadm-config
Namespace:    kube-system
Labels:       <none>
Annotations:  <none>

Data
====
ClusterConfiguration:
----
apiServer:
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.23.6
networking:
  dnsDomain: cali-cluster.tclocal
  serviceSubnet: 10.88.0.0/18
scheduler: {}


BinaryData
====

Events:  <none>

Since the version we are installing is newer than 1.22.0 and we are already using systemd, there is no need to configure this again.

3亿遂、安裝kube三件套

The corresponding official documentation is here:

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#installing-kubeadm-kubelet-and-kubectl

The kube trio consists of kubeadm, kubelet and kubectl; their roles are as follows:

  • kubeadm: the command used to bootstrap the cluster.
  • kubelet: runs on every node in the cluster and starts Pods and containers.
  • kubectl: the command-line tool used to talk to the cluster.

A few things to note:

  • kubeadm does not manage kubelet or kubectl for us, and the same goes for the other two; the three tools are independent of each other, and none of them manages the others;
  • the kubelet version must be less than or equal to the API server version, otherwise compatibility problems are likely;
  • kubectl does not need to be installed on every node of the cluster, nor does it even need to be installed on a cluster node at all; it can be installed on your local machine and, combined with a kubeconfig file, used to manage the corresponding k8s cluster remotely;

Installation on CentOS 7 is straightforward: we use the official yum repo directly. Note that the official docs also set the selinux state at this point, but since we already disabled selinux earlier, that step is skipped here.

# import Google's official yum repo
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

# if Google's repo is unreachable, you can use the Alibaba mirror inside China instead
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF


# then simply install the three packages
sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

# if a poor network causes gpgcheck to fail and the yum repo cannot be read, consider disabling repo_gpgcheck for this repo
sed -i 's/repo_gpgcheck=1/repo_gpgcheck=0/g' /etc/yum.repos.d/kubernetes.repo
# or disable gpgcheck at install time
sudo yum install -y kubelet kubeadm kubectl --nogpgcheck --disableexcludes=kubernetes



# to install a specific version, use this command to list the available versions
sudo yum list --nogpgcheck kubelet kubeadm kubectl --showduplicates --disableexcludes=kubernetes
# here we install 1.23.6, the release right before 1.24.0, so that we can keep using docker-shim
sudo yum install -y kubelet-1.23.6-0 kubeadm-1.23.6-0 kubectl-1.23.6-0 --nogpgcheck --disableexcludes=kubernetes

# after installation, enable kubelet to start at boot
sudo systemctl enable --now kubelet

4. Initialize the cluster

4.1 Write the configuration file

Once the three steps above have been completed on all nodes, we can start creating the k8s cluster. Since high availability is not involved this time, we run the initialization directly on the target master node.

# first use kubeadm to list the main image versions
# since we pinned the older 1.23.6 release earlier, the apiserver image version rolls back accordingly
$ kubeadm config images list
I0506 11:24:17.061315   16055 version.go:255] remote version is much newer: v1.24.0; falling back to: stable-1.23
k8s.gcr.io/kube-apiserver:v1.23.6
k8s.gcr.io/kube-controller-manager:v1.23.6
k8s.gcr.io/kube-scheduler:v1.23.6
k8s.gcr.io/kube-proxy:v1.23.6
k8s.gcr.io/pause:3.6
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/coredns/coredns:v1.8.6

# for easier editing and management, export the default init parameters to a config file
$ kubeadm config print init-defaults > kubeadm-calico.conf

  • Since most networks in China cannot reach Google's k8s.gcr.io registry, change the imageRepository parameter in the config file to the Alibaba mirror
  • The kubernetesVersion field specifies the k8s version we want to install
  • localAPIEndpoint must be changed to the master node's IP and port; after initialization this is the apiserver address of the cluster
  • The serviceSubnet and dnsDomain parameters can normally be left at their defaults; here I changed them to match my own plan
  • The name parameter under nodeRegistration must be changed to the hostname of the corresponding master node
  • Add an extra configuration block to use ipvs; see the official documentation for details
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.31.88.1
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  imagePullPolicy: IfNotPresent
  name: tiny-calico-master-88-1.k8s.tcinternal
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.23.6
networking:
  dnsDomain: cali-cluster.tclocal
  serviceSubnet: 10.88.0.0/18
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs

4.2 Initialize the cluster

If we now check the image versions against the config file, we can see they have switched to the Alibaba Cloud mirror.

# list the image versions again to confirm the config file takes effect
$ kubeadm config images list --config kubeadm-calico.conf
registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.6
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.6
registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.6
registry.aliyuncs.com/google_containers/kube-proxy:v1.23.6
registry.aliyuncs.com/google_containers/pause:3.6
registry.aliyuncs.com/google_containers/etcd:3.5.1-0
registry.aliyuncs.com/google_containers/coredns:v1.8.6

# once everything looks right, pull the images
$ kubeadm config images pull --config kubeadm-calico.conf
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.6
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.6
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.6
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.23.6
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.6
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.1-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.8.6

# initialize
$ kubeadm init --config kubeadm-calico.conf
[init] Using Kubernetes version: v1.23.6
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
...a large amount of output omitted here...

When we see the following output, the cluster has been initialized successfully.

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.31.88.1:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:a4189d36d164a865be540d48fcd10ff13e2f90ed6e901201b6ea2baf96dae0ae

4.3 Configure kubeconfig

Right after a successful init we still cannot query the cluster; we need to set up kubeconfig before kubectl can connect to the apiserver and read cluster information.

# for non-root users
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# for the root user, simply export the environment variable
export KUBECONFIG=/etc/kubernetes/admin.conf

# enable kubectl auto-completion
echo "source <(kubectl completion bash)" >> ~/.bashrc

As mentioned earlier, kubectl does not have to be installed inside the cluster. In fact, any machine that can reach the apiserver can have kubectl installed on it and, with kubeconfig set up as above, use the kubectl command line to manage the corresponding k8s cluster remotely.
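
For example, a minimal sketch of managing this cluster from a separate workstation (the file names below are only an illustration):

# copy the admin kubeconfig from the master to a local workstation
scp root@10.31.88.1:/etc/kubernetes/admin.conf ~/.kube/calico-cluster.conf
# point kubectl at it explicitly, or merge it into ~/.kube/config
kubectl --kubeconfig ~/.kube/calico-cluster.conf get nodes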

With that configured, we can run the usual commands to view the cluster information.

$ kubectl cluster-info
Kubernetes control plane is running at https://10.31.88.1:6443
CoreDNS is running at https://10.31.88.1:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'

$ kubectl get nodes -o wide
NAME                                     STATUS     ROLES                  AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
tiny-calico-master-88-1.k8s.tcinternal   NotReady   control-plane,master   4m15s   v1.23.6   10.31.88.1    <none>        CentOS Linux 7 (Core)   3.10.0-1160.62.1.el7.x86_64   docker://20.10.14

$ kubectl get pods -A -o wide
NAMESPACE     NAME                                                             READY   STATUS    RESTARTS   AGE     IP           NODE                                     NOMINATED NODE   READINESS GATES
kube-system   coredns-6d8c4cb4d-r8r9q                                          0/1     Pending   0          4m20s   <none>       <none>                                   <none>           <none>
kube-system   coredns-6d8c4cb4d-ztq6w                                          0/1     Pending   0          4m20s   <none>       <none>                                   <none>           <none>
kube-system   etcd-tiny-calico-master-88-1.k8s.tcinternal                      1/1     Running   0          4m25s   10.31.88.1   tiny-calico-master-88-1.k8s.tcinternal   <none>           <none>
kube-system   kube-apiserver-tiny-calico-master-88-1.k8s.tcinternal            1/1     Running   0          4m26s   10.31.88.1   tiny-calico-master-88-1.k8s.tcinternal   <none>           <none>
kube-system   kube-controller-manager-tiny-calico-master-88-1.k8s.tcinternal   1/1     Running   0          4m27s   10.31.88.1   tiny-calico-master-88-1.k8s.tcinternal   <none>           <none>
kube-system   kube-proxy-v6cg9                                                 1/1     Running   0          4m20s   10.31.88.1   tiny-calico-master-88-1.k8s.tcinternal   <none>           <none>
kube-system   kube-scheduler-tiny-calico-master-88-1.k8s.tcinternal            1/1     Running   0          4m25s   10.31.88.1   tiny-calico-master-88-1.k8s.tcinternal   <none>           <none>

4.4 Add worker nodes

Now we need to add the remaining two nodes as worker nodes to run workloads. Simply run the join command printed when the cluster was initialized on each remaining node to join the cluster:

$ kubeadm join 10.31.88.1:6443 --token abcdef.0123456789abcdef \
>         --discovery-token-ca-cert-hash sha256:a4189d36d164a865be540d48fcd10ff13e2f90ed6e901201b6ea2baf96dae0ae
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

If you did not save the output from the successful initialization, that is not a problem: you can use kubeadm to view or generate tokens.

# list the existing tokens
$ kubeadm token list
TOKEN                     TTL         EXPIRES                USAGES                   DESCRIPTION                                                EXTRA GROUPS
abcdef.0123456789abcdef   23h         2022-05-07T05:19:08Z   authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token

# if the token has expired, create a new one
$ kubeadm token create
e31cv1.lbtrzwp6mzon78ue
$ kubeadm token list
TOKEN                     TTL         EXPIRES                USAGES                   DESCRIPTION                                                EXTRA GROUPS
abcdef.0123456789abcdef   23h         2022-05-07T05:19:08Z   authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token
e31cv1.lbtrzwp6mzon78ue   23h         2022-05-07T05:51:40Z   authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token

# if you cannot find the --discovery-token-ca-cert-hash value, recompute it on the master node with openssl
$ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null |    openssl dgst -sha256 -hex | sed 's/^.* //'
a4189d36d164a865be540d48fcd10ff13e2f90ed6e901201b6ea2baf96dae0ae

After they have joined, if we check the cluster nodes again we can see the two new nodes, but their status is still NotReady; the next step is to deploy the CNI.

$ kubectl get nodes
NAME                                      STATUS     ROLES                  AGE    VERSION
tiny-calico-master-88-1.k8s.tcinternal    NotReady   control-plane,master   20m    v1.23.6
tiny-calico-worker-88-11.k8s.tcinternal   NotReady   <none>                 105s   v1.23.6
tiny-calico-worker-88-12.k8s.tcinternal   NotReady   <none>                 35s    v1.23.6

5卖宠、安裝CNI

5.1 Write the manifest file

Installing calico is also fairly simple. The official docs offer several installation methods; here we install from a customized YAML manifest and use etcd as the datastore.

# download the official yaml template first, then modify the key fields one by one
curl https://projectcalico.docs.tigera.io/manifests/calico-etcd.yaml -O

For the calico-etcd.yaml file, we need to modify a few parameters to fit our cluster:

  • The CALICO_IPV4POOL_CIDR parameter sets the pod network; here we use the 10.88.64.0/18 planned earlier. The CALICO_IPV4POOL_BLOCK_SIZE parameter sets the size of the per-node address blocks and defaults to 26 (a /26 block holds 64 addresses, so a /18 pool can be split into 256 such blocks).

                # The default IPv4 pool to create on startup if none exists. Pod IPs will be
                # chosen from this range. Changing this value after installation will have
                # no effect. This should fall within `--cluster-cidr`.
                - name: CALICO_IPV4POOL_CIDR
                  value: "10.88.64.0/18"
                - name: CALICO_IPV4POOL_BLOCK_SIZE
                  value: "26"
    
  • The CALICO_IPV4POOL_IPIP parameter controls whether IP-in-IP mode is enabled; the default is Always. Since all of our nodes sit on the same layer-2 network, either Never or CrossSubnet works here.

    Never disables IP-in-IP entirely, while CrossSubnet enables it only when traffic crosses subnets.

                # Enable IPIP
                - name: CALICO_IPV4POOL_IPIP
                  value: "Never"
    
  • The etcd_endpoints variable in the ConfigMap configures the etcd address and port. For security we enable TLS authentication here; if you would rather not deal with certificates you can skip TLS and simply leave the three related fields empty.

    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: calico-config
      namespace: kube-system
    data:
      # Configure this with the location of your etcd cluster.
      # etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"
      # If you're using TLS enabled etcd uncomment the following.
      # You must also populate the Secret below with these files.
      # etcd_ca: ""   # "/calico-secrets/etcd-ca"
      # etcd_cert: "" # "/calico-secrets/etcd-cert"
      # etcd_key: ""  # "/calico-secrets/etcd-key"
      etcd_endpoints: "https://10.31.88.1:2379"
      etcd_ca: "/etc/kubernetes/pki/etcd/ca.crt"
      etcd_cert: "/etc/kubernetes/pki/etcd/server.crt"
      etcd_key: "/etc/kubernetes/pki/etcd/server.key"
    
  • In the Secret named calico-etcd-secrets, the data fields must be filled with the contents of the three certificate files above, base64-encoded with cat <file> | base64 -w 0 (a small helper sketch follows below).

    ---
    # Source: calico/templates/calico-etcd-secrets.yaml
    # The following contains k8s Secrets for use with a TLS enabled etcd cluster.
    # For information on populating Secrets, see http://kubernetes.io/docs/user-guide/secrets/
    apiVersion: v1
    kind: Secret
    type: Opaque
    metadata:
      name: calico-etcd-secrets
      namespace: kube-system
    data:
      # Populate the following with etcd TLS configuration if desired, but leave blank if
      # not using TLS for etcd.
      # The keys below should be uncommented and the values populated with the base64
      # encoded contents of each file that would be associated with the TLS data.
      # Example command for encoding a file contents: cat <file> | base64 -w 0
      etcd-key: LS0tLS1CRUdJTi......tLS0tCg==
      etcd-cert: LS0tLS1CRUdJT......tLS0tLQo=
      etcd-ca: LS0tLS1CRUdJTiB......FLS0tLS0K
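
A minimal helper sketch for producing those base64 strings, assuming the kubeadm-generated etcd certificate paths from the ConfigMap above; the sed patterns assume the commented "# etcd-key/cert/ca: null" placeholders found in the upstream manifest, so adjust them (or simply paste the values by hand) if your copy differs:

# base64-encode the kubeadm etcd certs referenced in the ConfigMap
ETCD_KEY=$(base64 -w 0 < /etc/kubernetes/pki/etcd/server.key)
ETCD_CERT=$(base64 -w 0 < /etc/kubernetes/pki/etcd/server.crt)
ETCD_CA=$(base64 -w 0 < /etc/kubernetes/pki/etcd/ca.crt)

# fill in the placeholders in the Secret section of calico-etcd.yaml
sed -i "s|# etcd-key: null|etcd-key: ${ETCD_KEY}|" calico-etcd.yaml
sed -i "s|# etcd-cert: null|etcd-cert: ${ETCD_CERT}|" calico-etcd.yaml
sed -i "s|# etcd-ca: null|etcd-ca: ${ETCD_CA}|" calico-etcd.yaml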
    

5.2 Deploy calico

Once the modifications are done we can deploy it directly.

$ kubectl apply -f calico-etcd.yaml
secret/calico-etcd-secrets created
configmap/calico-config created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
poddisruptionbudget.policy/calico-kube-controllers created

# check that the pods are running normally
$ kubectl get pods -A
NAMESPACE     NAME                                                             READY   STATUS    RESTARTS        AGE
kube-system   calico-kube-controllers-5c4bd49f9b-6b2gr                         1/1     Running   5 (3m18s ago)   6m18s
kube-system   calico-node-bgsfs                                                1/1     Running   5 (2m55s ago)   6m18s
kube-system   calico-node-tr88g                                                1/1     Running   5 (3m19s ago)   6m18s
kube-system   calico-node-w59pc                                                1/1     Running   5 (2m36s ago)   6m18s
kube-system   coredns-6d8c4cb4d-r8r9q                                          1/1     Running   0               3h8m
kube-system   coredns-6d8c4cb4d-ztq6w                                          1/1     Running   0               3h8m
kube-system   etcd-tiny-calico-master-88-1.k8s.tcinternal                      1/1     Running   0               3h8m
kube-system   kube-apiserver-tiny-calico-master-88-1.k8s.tcinternal            1/1     Running   0               3h8m
kube-system   kube-controller-manager-tiny-calico-master-88-1.k8s.tcinternal   1/1     Running   0               3h8m
kube-system   kube-proxy-n65sb                                                 1/1     Running   0               169m
kube-system   kube-proxy-qmxhp                                                 1/1     Running   0               168m
kube-system   kube-proxy-v6cg9                                                 1/1     Running   0               3h8m
kube-system   kube-scheduler-tiny-calico-master-88-1.k8s.tcinternal            1/1     Running   0               3h8m

# check the calico-kube-controllers pod logs for errors
$ kubectl logs -f calico-kube-controllers-5c4bd49f9b-6b2gr -n kube-system

5.3 Install calicoctl as a pod

calicoctl is the command-line tool for viewing and managing calico, positioned somewhat like a calico counterpart to kubectl. Since we used etcd as calico's datastore earlier, the simplest option here is to deploy calicoctl as a pod inside the k8s cluster.

  • the calicoctl version should match the deployed calico version, v3.22.2 in both cases here
  • the calicoctl etcd configuration should match the deployed calico; since etcd had TLS enabled when calico was deployed, we modify the yaml here to enable TLS as well
# for easier maintenance later, download calicoctl-etcd.yaml locally before deploying
$ wget https://projectcalico.docs.tigera.io/manifests/calicoctl-etcd.yaml

$ cat calicoctl-etcd.yaml
# Calico Version v3.22.2
# https://projectcalico.docs.tigera.io/releases#v3.22.2
# This manifest includes the following component versions:
#   calico/ctl:v3.22.2

apiVersion: v1
kind: Pod
metadata:
  name: calicoctl
  namespace: kube-system
spec:
  nodeSelector:
    kubernetes.io/os: linux
  hostNetwork: true
  containers:
  - name: calicoctl
    image: calico/ctl:v3.22.2
    command:
      - /calicoctl
    args:
      - version
      - --poll=1m
    env:
    - name: ETCD_ENDPOINTS
      valueFrom:
        configMapKeyRef:
          name: calico-config
          key: etcd_endpoints
    # If you're using TLS enabled etcd uncomment the following.
    # Location of the CA certificate for etcd.
    - name: ETCD_CA_CERT_FILE
      valueFrom:
        configMapKeyRef:
          name: calico-config
          key: etcd_ca
    # Location of the client key for etcd.
    - name: ETCD_KEY_FILE
      valueFrom:
        configMapKeyRef:
          name: calico-config
          key: etcd_key
    # Location of the client certificate for etcd.
    - name: ETCD_CERT_FILE
      valueFrom:
        configMapKeyRef:
          name: calico-config
          key: etcd_cert
    volumeMounts:
    - mountPath: /calico-secrets
      name: etcd-certs
  volumes:
    # If you're using TLS enabled etcd uncomment the following.
    - name: etcd-certs
      secret:
        secretName: calico-etcd-secrets

Once the modifications are done, deploy it and it is ready to use.

$ kubectl apply -f calicoctl-etcd.yaml
pod/calicoctl created

# after creation, check that the calicoctl pod is running
$ kubectl get pods -A | grep calicoctl
kube-system   calicoctl                                                        1/1     Running   0             9s

# verify that it works
$ kubectl exec -ti -n kube-system calicoctl -- /calicoctl get nodes
NAME
tiny-calico-master-88-1.k8s.tcinternal
tiny-calico-worker-88-11.k8s.tcinternal
tiny-calico-worker-88-12.k8s.tcinternal

$ kubectl exec -ti -n kube-system calicoctl -- /calicoctl get profiles -o wide
NAME                                                 LABELS
projectcalico-default-allow
kns.default                                          pcns.kubernetes.io/metadata.name=default,pcns.projectcalico.org/name=default
kns.kube-node-lease                                  pcns.kubernetes.io/metadata.name=kube-node-lease,pcns.projectcalico.org/name=kube-node-lease
kns.kube-public                                      pcns.kubernetes.io/metadata.name=kube-public,pcns.projectcalico.org/name=kube-public
kns.kube-system                                      pcns.kubernetes.io/metadata.name=kube-system,pcns.projectcalico.org/name=kube-system
...a large amount of output omitted here...

# check the IPAM allocation
$ calicoctl ipam show
+----------+---------------+-----------+------------+--------------+
| GROUPING |     CIDR      | IPS TOTAL | IPS IN USE |   IPS FREE   |
+----------+---------------+-----------+------------+--------------+
| IP Pool  | 10.88.64.0/18 |     16384 | 2 (0%)     | 16382 (100%) |
+----------+---------------+-----------+------------+--------------+


# for convenience, set an alias in bashrc
cat >> ~/.bashrc <<EOF
alias calicoctl="kubectl exec -i -n kube-system calicoctl -- /calicoctl"
EOF

The full set of calicoctl commands is covered in the official documentation.

5.4 Install calicoctl as a binary

Deploying calicoctl as a pod is simple, but it has one limitation: the calicoctl node commands do not work, because they need access to parts of the host filesystem. So we also deploy calicoctl as a binary.

Note that if you run calicoctl in a container, calicoctl node ... commands will not work (they need access to parts of the host filesystem).

# simply download the binary and it is ready to use
$ cd /usr/local/bin/
$ curl -L https://github.com/projectcalico/calico/releases/download/v3.22.2/calicoctl-linux-amd64 -o calicoctl
$ chmod +x ./calicoctl

The calicoctl binary reads its configuration file first and only falls back to environment variables when no config file is found. Here we write /etc/calico/calicoctl.cfg directly; note that the etcd certificates are the same files used when deploying calico earlier.

# write the calicoctl configuration file
$ mkdir /etc/calico
$ cat /etc/calico/calicoctl.cfg
apiVersion: projectcalico.org/v3
kind: CalicoAPIConfig
metadata:
spec:
  datastoreType: etcdv3
  etcdEndpoints: "https://10.31.88.1:2379"
  etcdCACert: |
      -----BEGIN CERTIFICATE-----
      MIIC9TCCAd2gAwIBAgIBADANBgkqhkiG9w0BAQsFADASMRAwDgYDVQQDEwdldGNk
      LWNhMB4XDTIyMDUwNjA1MTg1OVoXDTMyMDUwMzA1MTg1OVowEjEQMA4GA1UEAxMH
      ZXRjZC1jYTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBANFFqq4Mk3DE
      6UW581xnZPFrHqQWlGr/KptEywKH56Bp24OAnDIAkSz7KAMrJzL+OiVsj9YJV59F
      9qH/YzU+bppctDnfk1yCuavkcXgLSd9O6EBhM2LkGtF9AdWMnFw9ui2jNhFC/QXj
      zCvq0I1c9o9gulbFmSHwIw2GLQd7ogO+PpfLsubRscJdKkCUWVFV0mb8opccmXoF
      vXynRX0VW3wpN+v66bD+HTdMSNK1JljfBngh9LAkibjUx7bMrHvu/GOalNCSWrtG
      lss/hhWkzwV7Y7AIXgvxxcmDdfswe5lUYLvW2CP4e+tXfB3i2wg10fErc8z63lix
      v9BWkIIalScCAwEAAaNWMFQwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMB
      Af8wHQYDVR0OBBYEFH49PpnJYxze8aq0PVwgpY4Fo6djMBIGA1UdEQQLMAmCB2V0
      Y2QtY2EwDQYJKoZIhvcNAQELBQADggEBAAGL6KwN80YEK6gZcL+7RI9bkMKk7UWW
      V48154CgN8w9GKvNTm4l0tZKvsWCnR61hiJtLQcG0S8HYHAvL1DBjOXw11bNilLy
      vaVM+wqOOIxPsXLU//F46z3V9z1uV0v/yLLlg320c0wtG+OLZZIn8O+yUhtOHM09
      K0JSAF2/KhtNxhrc0owCTOzS+DKsb0w1SzQmS0t/tflyLfc3oJZ/2V4Tqd72j7iI
      cDBa36lGqtUBf8MXu+Xza0cdhy/f19AqkeM2fe+/DrbzR4zDVmZ7l4dqYGLbKHYo
      XaLn8bSToYQq4dlA/oAlyyH0ekB5v0DyYiHwlqgZgiu4qcR3Gw8azVk=
      -----END CERTIFICATE-----
  etcdCert: |
      -----BEGIN CERTIFICATE-----
      MIIDgzCCAmugAwIBAgIIePiBSOdMGwcwDQYJKoZIhvcNAQELBQAwEjEQMA4GA1UE
      AxMHZXRjZC1jYTAeFw0yMjA1MDYwNTE4NTlaFw0yMzA1MDYwNTE4NTlaMDExLzAt
      BgNVBAMTJnRpbnktY2FsaWNvLW1hc3Rlci04OC0xLms4cy50Y2ludGVybmFsMIIB
      IjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAqZM/jBrdXLR3ctee7LVJhGSA
      4usg/JQXGyOAd52OkkOLYwn3fvwqeo0Z0cX0q4mqaF0cnrPYc4eExX/3fJpF3Fxy
      D6vdpEZ/FrnzCAkibEYtK/UVhTKuV7n/VdbjFPGl8CpppuGVs6o+4NFZxffW7em0
      8m/FK/7SDkV2qXCyG94kOaUCeDEgdBKE3cPCZQ4maFuwXi08bYs2CiTfbfa4dsT5
      3yzaoQVX9BaBqE9IGmsHDFuxp1X8gkJXs+7wwHQX39o1oXmci6T4IVxVHA5GRbTv
      pCDG5Wye7QqKgnxO1KRF42FKs1Nif7UJ0iR35Ydpa7cat7Fr0M7l+rZLCDTJgwID
      AQABo4G9MIG6MA4GA1UdDwEB/wQEAwIFoDAdBgNVHSUEFjAUBggrBgEFBQcDAQYI
      KwYBBQUHAwIwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBR+PT6ZyWMc3vGqtD1c
      IKWOBaOnYzBaBgNVHREEUzBRgglsb2NhbGhvc3SCJnRpbnktY2FsaWNvLW1hc3Rl
      ci04OC0xLms4cy50Y2ludGVybmFshwQKH1gBhwR/AAABhxAAAAAAAAAAAAAAAAAA
      AAABMA0GCSqGSIb3DQEBCwUAA4IBAQC+pyH14/+US5Svz04Vi8QIduY/DVx1HOQq
      hfrIZKOZCH2iKU7fZ4o9QpQZh7D9B8hgpXM6dNuFpd98c0MVPr+LesShu4BHVjHl
      gPvUWEVB2XD5x51HqnMV2OkhMKooyAUIzI0P0YKN29SFEyJGD1XDu4UtqvBADqf7
      COvAuqj4VbRgF/iQwNstjqZ47rSzvyp6rIwqFoHRP+Zi+8KL1qmozGjI3+H+TZFM
      Gv3b5DRx2pmfY+kGVLO5bjl3zxylRPjCDHaRlQUWiOYSWS8OHYRCBZuSLvW4tht0
      JjWjUAh4hF8+3lyNrfx8moz7tfm5SG2q01pO1vjkhrhxhINAwaac
      -----END CERTIFICATE-----
  etcdKey: |
      -----BEGIN RSA PRIVATE KEY-----
      MIIEowIBAAKCAQEAqZM/jBrdXLR3ctee7LVJhGSA4usg/JQXGyOAd52OkkOLYwn3
      fvwqeo0Z0cX0q4mqaF0cnrPYc4eExX/3fJpF3FxyD6vdpEZ/FrnzCAkibEYtK/UV
      hTKuV7n/VdbjFPGl8CpppuGVs6o+4NFZxffW7em08m/FK/7SDkV2qXCyG94kOaUC
      eDEgdBKE3cPCZQ4maFuwXi08bYs2CiTfbfa4dsT53yzaoQVX9BaBqE9IGmsHDFux
      p1X8gkJXs+7wwHQX39o1oXmci6T4IVxVHA5GRbTvpCDG5Wye7QqKgnxO1KRF42FK
      s1Nif7UJ0iR35Ydpa7cat7Fr0M7l+rZLCDTJgwIDAQABAoIBAE1gMw7q8zbp4dc1
      K/82eWU/ts/UGikmKaTofiYWboeu6ls2oQgAaCGjYLSnbw0Ws/sLAZQo3AtbOuoj
      ifoBKv9x71nXQjtDL5pfHtX71QkyvEniev9cMNE2vZudgeB8owsDT1ImfPiOJkLP
      Q/dhL2E/0qEM/xskGxUH/S0zjxHHfPZZsYODhkVPWc6Z+XEDll48fRCFn4/48FTN
      9GbRvo7dv34EHmNYA20K4DMHbZUdrPqSZpKWzAPJXnDlgZbpvUeAYOJxqZHQtCm1
      zbSOyM1Ql6K0Ayro0L5GAzap+0yGuk79OWiPnEsdPneVsATKG7dT7RZIL/INrOqQ
      0wjUmQECgYEA02OHdT1K5Au6wtiTqKD99WweltnvFd4C/Z3dobEj8M8qN6uiKCca
      PievWahnxAlJEah3RiOgtarwA+0E/Jgsw99Qutp5BR/XdD3llTNczkPkg/RkWpve
      2f/4DlZQrxuIem7UNLl+5BacfmF691DQQoX2RoIkvQxYJGTUNXvrSUkCgYEAzVyz
      mvN+dvSwzAlm0gkfVP5Ez3DFESUrWd0FR2v1HR6qHQy/dkgkkic6zRGCJtGeT5V7
      N0kbVSHsz+wi6aQkFy0Sp0TbgZzjPhSwNtk+2JsBRvMp0CYczgrfyvWuAQ3gbXGc
      N8IkcZSSOv8TuigCnnYf2Xaz8LM50AivScnb6GsCgYEAyq4ScgnLpa3NawbnRPbf
      qRH6nl7lC01sBqn3mBHVSQ4JB4msF92uHsxEJ639mAvjIGgrvHdqnuT/7nOypVJv
      EXsr14ykHpKyLQUv/Idbw3V7RD3ufqYW3WS8/VorUEoQ6HsdQlRc4ur/L3ndwgWd
      OTtir6YW/aA5XuPCSGnBZekCgYB6VtlgW+Jg91BDnO41/d0+guN3ONUNa7kxpau5
      aqTxHg11lNySmFPBBcHP3LhOa94FxyVKQDEaPEWZcDE0QuaFMALGxwyFYHM3zpdT
      dYQtAdp26/Fi4PGUBYJgpI9ubVffmyjXRr7zMvESWFbmNWOqBvDeWgrEP+EW/7V9
      HdX11QKBgE1czchlibgQ/bhAl8BatKRr1X/UHvblWhmyApudOfFeGOILR6u/lWvY
      SS+Rg0y8nnZ4hTRSXbd/sSEsUJcSmoBc1TivWzl32eVuqe9CcrUZY0JSLtoj1KiP
      adRcCZtVDETXbW326Hvgz+MnqrIgzx+Zgy4tNtoAAbTv0q83j45I
      -----END RSA PRIVATE KEY-----
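
If you would rather not keep a config file around, the same settings can also be supplied through environment variables instead (a sketch using the variable names from the calicoctl-etcd.yaml manifest above and the same kubeadm etcd certificate paths):

# environment-variable alternative to /etc/calico/calicoctl.cfg
export DATASTORE_TYPE=etcdv3
export ETCD_ENDPOINTS=https://10.31.88.1:2379
export ETCD_CA_CERT_FILE=/etc/kubernetes/pki/etcd/ca.crt
export ETCD_CERT_FILE=/etc/kubernetes/pki/etcd/server.crt
export ETCD_KEY_FILE=/etc/kubernetes/pki/etcd/server.key
calicoctl get nodes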

Once the configuration is in place, we check that it works.

$ calicoctl node status
Calico process is running.

IPv4 BGP status
+--------------+-------------------+-------+----------+-------------+
| PEER ADDRESS |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+--------------+-------------------+-------+----------+-------------+
| 10.31.88.11  | node-to-node mesh | up    | 08:26:30 | Established |
| 10.31.88.12  | node-to-node mesh | up    | 08:26:30 | Established |
+--------------+-------------------+-------+----------+-------------+

IPv6 BGP status
No IPv6 peers found.

$ calicoctl get nodes
NAME
tiny-calico-master-88-1.k8s.tcinternal
tiny-calico-worker-88-11.k8s.tcinternal
tiny-calico-worker-88-12.k8s.tcinternal

$ calicoctl ipam show
+----------+---------------+-----------+------------+--------------+
| GROUPING |     CIDR      | IPS TOTAL | IPS IN USE |   IPS FREE   |
+----------+---------------+-----------+------------+--------------+
| IP Pool  | 10.88.64.0/18 |     16384 | 2 (0%)     | 16382 (100%) |
+----------+---------------+-----------+------------+--------------+

6绢彤、部署測(cè)試用例

With the cluster up, we deploy an nginx in the k8s cluster to check that everything works. First we create a namespace called nginx-quic, then a deployment called nginx-quic-deployment in that namespace to run the pods, and finally a service to expose them; for easy testing we expose the port with NodePort first.

$ cat nginx-quic.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: nginx-quic

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-quic-deployment
  namespace: nginx-quic
spec:
  selector:
    matchLabels:
      app: nginx-quic
  replicas: 4
  template:
    metadata:
      labels:
        app: nginx-quic
    spec:
      containers:
      - name: nginx-quic
        image: tinychen777/nginx-quic:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80

---

apiVersion: v1
kind: Service
metadata:
  name: nginx-quic-service
  namespace: nginx-quic
spec:
  selector:
    app: nginx-quic
  ports:
  - protocol: TCP
    port: 8080 # the port the service listens on
    targetPort: 80 # the port the pod/container listens on
    nodePort: 30088 # the port exposed externally on each node
  type: NodePort

After deploying, we check the status.

# deploy it directly
$ kubectl apply -f nginx-quic.yaml
namespace/nginx-quic created
deployment.apps/nginx-quic-deployment created
service/nginx-quic-service created

# check the deployment status
$ kubectl get deployment -o wide -n nginx-quic
NAME                    READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                          SELECTOR
nginx-quic-deployment   4/4     4            4           55s   nginx-quic   tinychen777/nginx-quic:latest   app=nginx-quic

# check the service status
$ kubectl get service -o wide -n nginx-quic
NAME                 TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE   SELECTOR
nginx-quic-service   NodePort   10.88.52.168   <none>        8080:30088/TCP   66s   app=nginx-quic

# check the pod status
$ kubectl get pods -o wide -n nginx-quic
NAME                                     READY   STATUS    RESTARTS   AGE   IP             NODE                                      NOMINATED NODE   READINESS GATES
nginx-quic-deployment-7457f4d579-24q9z   1/1     Running   0          75s   10.88.120.72   tiny-calico-worker-88-12.k8s.tcinternal   <none>           <none>
nginx-quic-deployment-7457f4d579-4svv9   1/1     Running   0          75s   10.88.84.68    tiny-calico-worker-88-11.k8s.tcinternal   <none>           <none>
nginx-quic-deployment-7457f4d579-btrjj   1/1     Running   0          75s   10.88.120.71   tiny-calico-worker-88-12.k8s.tcinternal   <none>           <none>
nginx-quic-deployment-7457f4d579-lvh6x   1/1     Running   0          75s   10.88.84.69    tiny-calico-worker-88-11.k8s.tcinternal   <none>           <none>


# check the IPVS rules
$ ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.17.0.1:30088 rr
  -> 10.88.84.68:80               Masq    1      0          0
  -> 10.88.84.69:80               Masq    1      0          0
  -> 10.88.120.71:80              Masq    1      0          0
  -> 10.88.120.72:80              Masq    1      0          0
TCP  10.31.88.1:30088 rr
  -> 10.88.84.68:80               Masq    1      0          0
  -> 10.88.84.69:80               Masq    1      0          0
  -> 10.88.120.71:80              Masq    1      0          0
  -> 10.88.120.72:80              Masq    1      0          0
TCP  10.88.52.168:8080 rr
  -> 10.88.84.68:80               Masq    1      0          0
  -> 10.88.84.69:80               Masq    1      0          0
  -> 10.88.120.71:80              Masq    1      0          0
  -> 10.88.120.72:80              Masq    1      0          0

Finally we run some tests. By default, this nginx-quic image returns the client IP and port of the request as seen inside the nginx container.

# first, test from inside the cluster
# access a pod directly
$ curl 10.88.84.68:80
10.31.88.1:34612
# access the service ClusterIP; the request is forwarded to a pod
$ curl 10.88.52.168:8080
10.31.88.1:58978
# access the NodePort; the request is forwarded to a pod without going through the ClusterIP
$ curl 10.31.88.1:30088
10.31.88.1:56595

# next, test from outside the cluster
# access the NodePort on any of the three nodes; the request is forwarded to a pod without going through the ClusterIP
# since externalTrafficPolicy defaults to Cluster, the IP nginx sees is the IP of the node we accessed rather than the client IP
$ curl 10.31.88.1:30088
10.31.88.1:27851
$ curl 10.31.88.11:30088
10.31.88.11:16540
$ curl 10.31.88.12:30088
10.31.88.12:5767
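
If you need the pods to see the real client IP instead of the node IP, one option is to switch the service's externalTrafficPolicy to Local (a sketch; note that with Local, the NodePort only answers on nodes that actually run a ready pod of the service):

# preserve the client source IP for NodePort traffic
kubectl patch service nginx-quic-service -n nginx-quic -p '{"spec":{"externalTrafficPolicy":"Local"}}'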