[References]
Kubernetes docs / Installing Addons: https://kubernetes.io/docs/concepts/cluster-administration/addons/
Canal installation docs: https://projectcalico.docs.tigera.io/getting-started/kubernetes/flannel/flannel
After installing Kubernetes, starting the kubelet, and running kubeadm init, initialization is technically done. But if you check the pod status you will find the CoreDNS pods still Pending (or in some other abnormal state), so the next step is to install a network add-on to complete the cluster.
Since we usually have no network add-on installed at init time, the control-plane (master) node ends up with a taint; there are plenty of tutorials online about removing taints.
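A quick sketch of checking and removing that taint (node-role.kubernetes.io/master is the kubeadm default taint key for v1.23; adjust to your setup):
kubectl describe nodes | grep -i taints
# on a single-node cluster, allow workloads to schedule on the control-plane node
kubectl taint nodes --all node-role.kubernetes.io/master-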
For beginners like us, any of the add-ons would do, so I just picked Canal because it looked good.
Several of these network add-ons require us to configure the cluster CIDR on the controller-manager.
I didn't know what to do with the tip in step one, so I just followed the docs the straightforward way.
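(For what it's worth, I believe that tip is about passing the pod network CIDR to kubeadm init up front, which makes kubeadm set the controller-manager flags below for you. If you haven't run init yet, that would look something like this, using the same CIDR as below:)
kubeadm init --pod-network-cidr=10.1.0.0/16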
Configure the cluster CIDR in kube-controller-manager.yaml
cd /etc/kubernetes/manifests/
ll
total 240
...
-rw------- 1 root root 2842 Dec 27 17:28 kube-controller-manager.yaml
...
vim kube-controller-manager.yaml
spec:
  containers:
  - command:
    - kube-controller-manager
    - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --bind-address=127.0.0.1
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --cluster-name=kubernetes
    - --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
    - --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
    - --controllers=*,bootstrapsigner,tokencleaner
    - --kubeconfig=/etc/kubernetes/controller-manager.conf
    - --leader-elect=true
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --root-ca-file=/etc/kubernetes/pki/ca.crt
    - --service-account-private-key-file=/etc/kubernetes/pki/sa.key
    - --use-service-account-credentials=true
    - --cluster-cidr=10.1.0.0/16   # add these two flags per the docs; just pick a CIDR that doesn't clash with addresses already in use. My server's internal network is 10.0.x.x, so I used 10.1.x.x
    - --allocate-node-cidrs=true
    image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.23.1
After editing the config file, apply it (note: this is a static Pod manifest, so the kubelet watching /etc/kubernetes/manifests/ will also restart kube-controller-manager on its own once the file changes):
kubectl apply -f kube-controller-manager.yaml
If the change doesn't seem to take effect, you can delete the old object first and re-apply:
kubectl delete -f kube-controller-manager.yaml
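To verify the flags actually landed, a rough check (assuming the kubeadm default label component=kube-controller-manager):
kubectl -n kube-system get pods -l component=kube-controller-manager
kubectl -n kube-system get pod -l component=kube-controller-manager -o yaml | grep cluster-cidr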
Keep checking the pod status to confirm the change worked:
kubectl get pods --all-namespaces
You should see these basic pods running. Refresh a few times; if a pod never reaches Running, check whether the node still has a taint to remove, and describe the stuck pod to see why (see the example after this listing).
kube-system etcd-vm-20-9-centos 1/1 Running 4 29h
kube-system kube-apiserver-vm-20-9-centos 1/1 Running 4 29h
kube-system kube-controller-manager 1/1 Running 0 21h
kube-system kube-proxy-68zg7 1/1 Running 0 29h
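A sketch of digging into a stuck pod (<pod-name> is a placeholder for whichever pod is Pending); the Events section at the bottom usually names the cause, such as a taint or a missing network plugin:
kubectl -n kube-system describe pod <pod-name>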
Once that's ready, you can fetch canal.yaml (pull it down locally with curl or wget).
I like to keep my working directory at /etc/kubernetes/manifests/, since that's where the yaml files live.
curl https://docs.projectcalico.org/manifests/canal.yaml -O
ll
...
-rw-r--r-- 1 root root 216854 Dec 27 17:43 canal.yaml
...
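One thing worth checking before applying (based on the copy of the manifest I downloaded): canal.yaml ships with flannel's default pod network 10.244.0.0/16 in the canal-config ConfigMap, so if you used a different --cluster-cidr such as 10.1.0.0/16 above, edit that value to match first. The relevant part looks roughly like:
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }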
kubectl apply -f canal.yaml
After clearing any remaining taints, wait for the pods to come up and you're done; next you can move on to the Dashboard and other add-ons/extensions.
kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-647d84984b-zdqwq 1/1 Running 0 22h
kube-system canal-w94hz 2/2 Running 0 22h
kube-system coredns-65c54cc984-lbxwx 1/1 Running 0 29h
kube-system coredns-65c54cc984-vw7h8 1/1 Running 0 29h
kube-system etcd-vm-20-9-centos 1/1 Running 4 29h
kube-system kube-apiserver-vm-20-9-centos 1/1 Running 4 29h
kube-system kube-controller-manager 1/1 Running 0 22h
kube-system kube-proxy-68zg7 1/1 Running 0 29h
kube-system kube-scheduler-vm-20-9-centos 1/1 Running 5 29h
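As a final sanity check (my own addition, not from the docs above), you can confirm that CoreDNS resolves cluster names now that the network is up; busybox:1.28 is used here because nslookup is broken in newer busybox images:
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default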