Personal notes, for reference only. Following these steps on a different environment may not produce a working cluster.
1. Configure the Aliyun Docker repo, install prerequisites, and install docker-ce (all nodes)
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install docker-ce-19.03.14
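Docker has to be running before the docker info check and image pulls below; on a systemd-based CentOS host that is:
systemctl enable docker && systemctl start docker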
2. Configure the Aliyun Kubernetes repo and install kubeadm, kubectl, and kubelet (kubeadm and kubelet are needed on every node; kubectl is only needed on the master)
[root@master manifests]# cat /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
yum install -y kubelet
yum install -y kubectl
yum install -y kubeadm
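Note: the unpinned installs above pull whatever version is newest in the repo. Since everything below targets v1.20.2, pinning the packages and enabling kubelet keeps kubeadm and the control plane in step (a sketch, assuming those versions exist in the Aliyun mirror):
yum install -y kubelet-1.20.2 kubeadm-1.20.2 kubectl-1.20.2
systemctl enable kubelet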
3. After installation, Docker still needs to be configured: the kubelet installed from the yum repo sets --cgroup-driver to systemd, while Docker's default cgroup driver is cgroupfs, and the two must match. Check it with docker info:
[root@master manifests]# docker info |grep Cgroup
Cgroup Driver: systemd
Modify /etc/docker/daemon.json:
[root@master manifests]# cat /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
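daemon.json changes only take effect after a restart, so reload and restart Docker, then re-check the driver:
systemctl daemon-reload
systemctl restart docker
docker info | grep Cgroup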
4. Disable swap
swapoff -a
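swapoff -a only lasts until the next reboot; to make it permanent, the swap entry in /etc/fstab can be commented out as well, e.g.:
sed -ri 's/.*swap.*/#&/' /etc/fstab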
5. On the master node, pull the component images in advance and retag them with docker tag, because kubeadm pulls from k8s.gcr.io by default, which I cannot reach locally.
List the required images:
[root@master ~]# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.20.2
k8s.gcr.io/kube-controller-manager:v1.20.2
k8s.gcr.io/kube-scheduler:v1.20.2
k8s.gcr.io/kube-proxy:v1.20.2
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
Pull the images from a Docker Hub repository that mirrors them (aiotceo here):
docker pull aiotceo/kube-apiserver:v1.20.2
docker pull aiotceo/kube-controller-manager:v1.20.2
docker pull aiotceo/kube-proxy:v1.20.2
docker pull aiotceo/kube-scheduler:v1.20.2
docker pull aiotceo/pause:3.2
docker pull aiotceo/coredns:1.7.0
docker pull aiotceo/etcd:3.4.13-alpine
docker pull aiotceo/etcd:3.4.13-ubuntu
Retag:
docker tag aiotceo/kube-apiserver:v1.20.2 k8s.gcr.io/kube-apiserver:v1.20.2
docker tag aiotceo/kube-controller-manager:v1.20.2 k8s.gcr.io/kube-controller-manager:v1.20.2
docker tag aiotceo/kube-proxy:v1.20.2 k8s.gcr.io/kube-proxy:v1.20.2
docker tag aiotceo/kube-scheduler:v1.20.2 k8s.gcr.io/kube-scheduler:v1.20.2
docker tag aiotceo/pause:3.2 k8s.gcr.io/pause:3.2
docker tag aiotceo/coredns:1.7.0 k8s.gcr.io/coredns:1.7.0
docker tag aiotceo/etcd:3.4.13-ubuntu k8s.gcr.io/etcd:3.4.13-0
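The pull-and-retag steps can also be scripted as one loop (a sketch, assuming the same aiotceo tags as above):
for img in kube-apiserver:v1.20.2 kube-controller-manager:v1.20.2 kube-proxy:v1.20.2 kube-scheduler:v1.20.2 pause:3.2 coredns:1.7.0; do
    docker pull aiotceo/$img
    docker tag aiotceo/$img k8s.gcr.io/$img
done
# etcd is published under a different tag than the k8s.gcr.io name
docker pull aiotceo/etcd:3.4.13-ubuntu
docker tag aiotceo/etcd:3.4.13-ubuntu k8s.gcr.io/etcd:3.4.13-0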
6. Initialize the cluster
kubeadm init --kubernetes-version=v1.20.2 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=10.1.1.206
When initialization completes you will see output like the following:
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.1.1.206:6443 --token hr8pxx.x7hnskwkz7dp20tq \
    --discovery-token-ca-cert-hash sha256:a894653ab32c92d89a4a43f6486bbe7cfbbeee1e601b5b3a99ffdcd68367737b
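The same output also shows how to point kubectl at the new cluster; either export KUBECONFIG (as done in step 8 below) or copy admin.conf into ~/.kube, which is what kubeadm itself suggests:
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config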
This means worker nodes can now be joined with kubeadm join, but a network plugin still needs to be installed first.
At this point, checking cluster status with kubectl get cs gives the following:
[root@master manifests]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
scheduler Unhealthy Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused
controller-manager Unhealthy Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused
etcd-0 Healthy {"health":"true"}
This happens because the static pod manifests that kubeadm generates for kube-scheduler and kube-controller-manager (under /etc/kubernetes/manifests/) set --port=0, which disables the insecure health endpoints these checks probe. Comment out the --port=0 line in both files, then restart the kubelet:
...
spec:
  containers:
  - command:
    - kube-scheduler
    - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
    - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
    - --bind-address=127.0.0.1
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --leader-elect=true
    # - --port=0
    image: k8s.gcr.io/kube-scheduler:v1.20.2
...
systemctl restart kubelet
Check cs again; every component is now Healthy:
[root@master manifests]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true"}
7. Install the network plugin
Download the flannel manifest. As before, the kube-proxy image needs to be pulled and retagged on the worker nodes in advance.
wget https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml
Note: this /blob/ URL returns GitHub's HTML page rather than the raw YAML, so kubectl apply -f against the link fails. Open it in a browser, copy the content into a local kube-flannel.yml, then apply:
kubectl apply -f kube-flannel.yml
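Alternatively, fetching the raw file avoids the HTML problem entirely (assuming the raw.githubusercontent.com path mirrors the /blob/ path above):
wget -O kube-flannel.yml https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml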
Check:
[root@master ~]# kubectl -n kube-system get pod
NAME READY STATUS RESTARTS AGE
etcd-master 1/1 Running 0 71m
kube-apiserver-master 1/1 Running 0 71m
kube-controller-manager-master 1/1 Running 0 71m
kube-flannel-ds-5xsbn 1/1 Running 0 34m
kube-flannel-ds-9dxlm 1/1 Running 0 34m
kube-flannel-ds-xt564 1/1 Running 0 35m
8. Join the worker nodes to the cluster
On each worker node, join the cluster using the token and hash generated by the init above:
kubeadm join 10.1.1.206:6443 --token hr8pxx.x7hnskwkz7dp20tq \
    --discovery-token-ca-cert-hash sha256:a894653ab32c92d89a4a43f6486bbe7cfbbeee1e601b5b3a99ffdcd68367737b
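If the original token has expired (bootstrap tokens are valid for 24 hours by default), a fresh join command can be generated on the master:
kubeadm token create --print-join-command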
To run kubectl on a worker node, first copy admin.conf from the master:
scp /etc/kubernetes/admin.conf root@200.200.4.151:/etc/kubernetes/admin.conf
Then add a new environment variable at the bottom of /etc/profile and reload it:
vim /etc/profile
export KUBECONFIG=/etc/kubernetes/admin.conf
source /etc/profile
Label the worker nodes:
kubectl label node k8s-node-151 node-role.kubernetes.io/node=
[root@master ~]# kubectl get csr
NAME AGE SIGNERNAME REQUESTOR CONDITION
csr-c4tqr 78m kubernetes.io/kube-apiserver-client-kubelet system:bootstrap:hr8pxx Approved,Issued
csr-pdsnh 78m kubernetes.io/kube-apiserver-client-kubelet system:bootstrap:hr8pxx Approved,Issued
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready control-plane,master 86m v1.20.2
node01 Ready <none> 78m v1.20.2
node02 Ready <none> 78m v1.20.2
9. CoreDNS
kubeadm deploys CoreDNS itself; just pull the image in advance and retag it so the pods can start:
docker pull aiotceo/coredns:1.7.0
docker tag docker.io/aiotceo/coredns:1.7.0 k8s.gcr.io/coredns:1.7.0
[root@master ~]# kubectl -n kube-system get pod
NAME READY STATUS RESTARTS AGE
coredns-74ff55c5b-ckd4v 1/1 Running 0 90m
coredns-74ff55c5b-pgw2h 1/1 Running 0 90m
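To confirm that cluster DNS actually resolves, a quick test from a throwaway pod (busybox:1.28 is commonly used because its nslookup behaves correctly; the pod name dnstest is arbitrary):
kubectl run -it --rm dnstest --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default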
PS: if anything goes wrong during installation, reset with the commands below and run the initialization again; the same applies on worker nodes.
kubeadm reset
systemctl daemon-reload
systemctl restart kubelet
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
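kubeadm reset itself warns that it does not clean up CNI configuration or kubeconfig files; when rebuilding a node from scratch those can be removed by hand too (paths as used in this setup):
rm -rf /etc/cni/net.d $HOME/.kube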