The solutions described here may not work for everyone; I worked them out gradually through trial and error.
1. Preparation
Environment:
3 CentOS 7 machines
4 GB of RAM each
kernel 4.4 or later
Permanently open the ports the cluster needs, for example:
firewall-cmd --zone=public --add-port=6443/tcp --permanent
Master node and worker node port lists (each role needs a different set; see the sketch below)
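A sketch of the ports each role typically needs, based on the official kubeadm requirements for this generation of releases (verify against the docs for your exact version; the flannel VXLAN port is an assumption, included because flannel is used later in this post):
# Master node
firewall-cmd --zone=public --add-port=6443/tcp --permanent        # Kubernetes API server
firewall-cmd --zone=public --add-port=2379-2380/tcp --permanent   # etcd server client API
firewall-cmd --zone=public --add-port=10250/tcp --permanent       # kubelet API
firewall-cmd --zone=public --add-port=10251/tcp --permanent       # kube-scheduler
firewall-cmd --zone=public --add-port=10252/tcp --permanent       # kube-controller-manager
# Worker nodes
firewall-cmd --zone=public --add-port=10250/tcp --permanent       # kubelet API
firewall-cmd --zone=public --add-port=30000-32767/tcp --permanent # NodePort Services
# All nodes (flannel VXLAN backend)
firewall-cmd --zone=public --add-port=8472/udp --permanent
firewall-cmd --reload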
Disable the firewall on all nodes:
systemctl disable firewalld.service
systemctl stop firewalld.service
Disable swap on all nodes
vi /etc/fstab
Comment out the swap line (a non-interactive way to do the same thing is sketched below)
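A sketch, assuming the fstab swap entry contains the word "swap":
swapoff -a                            # turn swap off for the current boot
sed -ri 's/.*swap.*/#&/' /etc/fstab   # comment out the swap line so it stays off after reboot
free -m                               # the Swap line should now show 0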
Disable SELinux
setenforce 0
vi /etc/selinux/config
SELINUX=disabled
Enable iptables bridge filtering
vi /etc/sysctl.conf
net.bridge.bridge-nf-call-iptables = 1
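Apply the change right away (a minimal sketch; the bridge keys only exist once br_netfilter is loaded):
modprobe br_netfilter
sysctl -p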
Set the hostname on every node
hostnamectl --static set-hostname k8s-master
hostnamectl --static set-hostname k8s-worker-1
hostnamectl --static set-hostname k8s-worker-2
Add hostname/IP mappings to /etc/hosts on all nodes
192.168.233.3 k8s-master
192.168.233.4 k8s-worker-1
192.168.233.5 k8s-worker-2
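One way to append the three entries on every node (adjust the IPs to your own environment):
cat >> /etc/hosts << 'EOF'
192.168.233.3 k8s-master
192.168.233.4 k8s-worker-1
192.168.233.5 k8s-worker-2
EOF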
Time synchronization
@root# yum install -y ntpdate
@root# crontab -e
0-59/10 * * * * /usr/sbin/ntpdate us.pool.ntp.org | logger -t NTP
@root# cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
@root# ntpdate us.pool.ntp.org
2. Install Docker
Prerequisites
yum install -y yum-utils device-mapper-persistent-data lvm2
Add the Docker repository (Community Edition, free)
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Install Docker
yum -y install docker-ce
Create daemon.json (create the /etc/docker directory first if it does not exist)
vi /etc/docker/daemon.json
Configure a domestic registry mirror
{
"registry-mirrors": ["https://registry.docker-cn.com"]
}
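If kubelet later warns about the cgroup driver, daemon.json can also switch Docker to the systemd driver that kubeadm recommends; a sketch that keeps the mirror from above:
{
"registry-mirrors": ["https://registry.docker-cn.com"],
"exec-opts": ["native.cgroupdriver=systemd"]
}
Restart Docker after changing daemon.json.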
Enable Docker on boot
systemctl enable docker
Start Docker
systemctl start docker
Run a test container
docker run hello-world
If you hit "Unable to find image 'hello-world:latest' locally", don't panic; re-running the command usually pulls the image successfully (provided the mirror above is configured).
Verify that the iptables bridge filtering switches are on:
cat /proc/sys/net/bridge/bridge-nf-call-ip6tables
cat /proc/sys/net/bridge/bridge-nf-call-iptables
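Both files should contain 1. If either prints 0, enable it (a sketch):
modprobe br_netfilter                                    # the keys only exist once this module is loaded
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables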
Restart the Docker service
sudo systemctl daemon-reload && sudo systemctl restart docker
3. Prepare for installing Kubernetes
Configure the Kubernetes yum repository
vi /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
Create k8s.conf to adjust the network settings
vi /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
After editing, run sysctl --system to apply the settings
4. Install kubelet, kubeadm, and kubectl
Install with yum
To install a specific version:
yum -y install kubeadm-1.13.0 kubelet-1.13.0 kubectl-1.13.0 kubernetes-cni-0.6.0 --disableexcludes=kubernetes
To install the latest version:
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
Enable kubelet on boot
systemctl enable kubelet.service
Verify that the MAC address and product_uuid are unique on every node
cat /sys/class/net/ens160/address
cat /sys/class/dmi/id/product_uuid
Load the br_netfilter module on boot
cat > /etc/rc.sysinit << 'EOF'
#!/bin/bash
for file in /etc/sysconfig/modules/*.modules ; do
  [ -x $file ] && $file
done
EOF
cat > /etc/sysconfig/modules/br_netfilter.modules << EOF
modprobe br_netfilter
EOF
chmod 755 /etc/sysconfig/modules/br_netfilter.modules
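To confirm the module is loaded right now, without waiting for a reboot:
modprobe br_netfilter
lsmod | grep br_netfilter    # should print a br_netfilter line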
Initialize Kubernetes -- kubeadm init
Run kubeadm init to initialize the cluster. The command automatically pulls the required component images (kube-apiserver, etcd, and so on) from the registry; if the download fails, pull them manually using the fallback method below and retag them.
If kubeadm init fails and needs to be rerun, execute kubeadm reset first to reset the environment.
kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.16.0 --apiserver-advertise-address <master node IP> --pod-network-cidr=10.244.0.0/16 --token-ttl 0
kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version 1.21.2 --apiserver-advertise-address 192.168.32.137 --pod-network-cidr=10.244.0.0/16 --token-ttl 0
kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=1.21.2 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
If init fails with the error shown in the screenshot (not reproduced here), this is the fix that worked for me:
Following some material found online, I edited /etc/yum.repos.d/kubernetes.repo and docker.service, changed the parameters after kubeadm init, and then ran steps (1) and (2) below.
#vi /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
(1) Add Environment="NO_PROXY=127.0.0.1/8,127.0.0.1/16" to docker.service; it must come after Type=notify
# vi /usr/lib/systemd/system/docker.service
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
Environment="NO_PROXY=127.0.0.1/8,127.0.0.1/16"
#ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
(2) Drop the apiserver-advertise-address parameter
kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=1.21.2 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
On success you will see output like the following. Run the commands it suggests and save the join command for adding nodes; everyone's token is different, so go by your own output.
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.36.137:6443 --token x3n0p9.b30auojouc1dpkqt --discovery-token-ca-cert-hash sha256:810775682bb60963f44c099bd3bbeb9124f1b897257cc856495508d6ed2cee22
Fallback: if the images fail to download with the method above, pull the required components manually with Docker.
(1) Run kubeadm config images list to see the image versions that are needed
[root@k8s-master k8s] kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.21.2
k8s.gcr.io/kube-controller-manager:v1.21.2
k8s.gcr.io/kube-scheduler:v1.21.2
k8s.gcr.io/kube-proxy:v1.21.2
k8s.gcr.io/pause:3.4.1
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns/coredns:v1.8.0
(2) Pull the images. The -arm64 variants below are for ARM hosts; on x86_64 use the plain names from the list above. Pulls from k8s.gcr.io generally only succeed when a working registry mirror or proxy is configured.
docker pull k8s.gcr.io/kube-apiserver-arm64:v1.21.2
docker pull k8s.gcr.io/kube-controller-manager-arm64:v1.21.2
docker pull k8s.gcr.io/kube-scheduler-arm64:v1.21.2
docker pull k8s.gcr.io/kube-proxy-arm64:v1.21.2
docker pull k8s.gcr.io/pause-arm64:3.4.1
docker pull k8s.gcr.io/etcd-arm64:3.4.13-0
docker pull k8s.gcr.io/coredns/coredns:v1.8.0
If that also fails, pull manually from the Aliyun proxy repository registry.aliyuncs.com/google_containers and then use docker tag to retag the component images to the k8s.gcr.io names.
(1) Pull the images manually
docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.21.2
docker pull registry.aliyuncs.com/google_containers/kube-controller-manager:v1.21.2
docker pull registry.aliyuncs.com/google_containers/kube-scheduler:v1.21.2
docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.21.2
docker pull registry.aliyuncs.com/google_containers/pause:3.4.1
docker pull registry.aliyuncs.com/google_containers/etcd:3.4.13-0
docker pull registry.cn-beijing.aliyuncs.com/dotbalo/coredns:1.8.0
(2) Run docker images to confirm the components are present
(3) Retag the images (example for kube-apiserver; repeat for each component)
docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.21.2 k8s.gcr.io/kube-apiserver:v1.21.2
(4) Remove the old tag (a loop that retags and cleans up every component at once is sketched after this example)
docker rmi registry.aliyuncs.com/google_containers/kube-apiserver:v1.21.2
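A sketch that retags all the Aliyun images in one pass and removes the old tags (the list matches the kubeadm config images list output above; coredns comes from a different repository and needs an extra path segment, so it is handled separately):
for img in kube-apiserver:v1.21.2 kube-controller-manager:v1.21.2 kube-scheduler:v1.21.2 \
           kube-proxy:v1.21.2 pause:3.4.1 etcd:3.4.13-0; do
  docker tag registry.aliyuncs.com/google_containers/$img k8s.gcr.io/$img
  docker rmi registry.aliyuncs.com/google_containers/$img
done
docker tag registry.cn-beijing.aliyuncs.com/dotbalo/coredns:1.8.0 k8s.gcr.io/coredns/coredns:v1.8.0
docker rmi registry.cn-beijing.aliyuncs.com/dotbalo/coredns:1.8.0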
Follow the prompt from kubeadm init to let a regular user run kubectl
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Regenerating the token and the CA certificate hash
Check whether the token has expired
kubeadm token list
If it has expired, create a new one (tokens are valid for 24 hours)
kubeadm token create
Recompute the sha256 hash of the CA certificate's public key. My CA certificate is in /etc/kubernetes/pki; it is generated automatically during init.
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
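Alternatively, kubeadm can generate a fresh token and print the complete join command in one step:
kubeadm token create --print-join-command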
Join the worker nodes to the cluster
Copy the join command returned by kubeadm init and run it on each worker node as-is (do not change the original IP address; changing it is exactly why my node failed to join). If the join hangs, append --v=5 to see the detailed error log.
kubeadm join 192.168.36.137:6443 --token x3n0p9.b30auojouc1dpkqt --discovery-token-ca-cert-hash sha256:810775682bb60963f44c099bd3bbeb9124f1b897257cc856495508d6ed2cee22
If a worker node reports connection refused on port 8080, run:
#scp master:/etc/kubernetes/admin.conf root@node:/home
#mv /home/admin.conf /etc/kubernetes/
#echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
Run the join command again; the following output means the node joined successfully
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Install flannel
# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# kubectl apply -f kube-flannel.yml
Check node status with kubectl get nodes
[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready control-plane,master 18h v1.21.2
k8s-worker-1 NotReady <none> 7m32s v1.21.2
k8s-worker-2 NotReady <none> 105s v1.21.2
Master shows NotReady
Check whether the flannel network plugin is installed correctly.
Worker node shows NotReady
If a node is healthy, all of its kube-system pods are in Running state:
kubectl get pods -n kube-system -o wide
Run journalctl -f -u kubelet and look for errors such as:
"Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
Check whether the following two files exist on the node; if not, create the directories and files (see the sketch after the second file).
vi /etc/cni/net.d/10-flannel.conflist
{
"name": "cbr0",
"cniVersion": "0.3.1",
"plugins": [
{
"type": "flannel",
"delegate": {
"hairpinMode": true,
"isDefaultGateway": true
}
},
{
"type": "portmap",
"capabilities": {
"portMappings": true
}
}
]
}
vi /run/flannel/subnet.env
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
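If the directories themselves are missing, create them before writing the two files, then restart kubelet (a sketch):
mkdir -p /etc/cni/net.d /run/flannel
# create the two files shown above, then:
systemctl restart kubelet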
Finally, apply flannel again
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
Back on the master, check the nodes again; they should all be Ready
[root@k8s-master share]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready control-plane,master 23h v1.21.2
k8s-worker-1 Ready <none> 4h36m v1.21.2
k8s-worker-2 Ready <none> 4h30m v1.21.2
The kubeadm init parameters are explained below
# Initialize the Control-plane/Master node
kubeadm init \
--apiserver-advertise-address 0.0.0.0 \
# The IP address the API server advertises it is listening on; "0.0.0.0" means use the default network interface's address.
# It must be an internal IP, never a public one; on a multi-NIC machine, use this option to pick a specific interface.
--apiserver-bind-port 6443 \
# Port the API server binds to, default 6443
--cert-dir /etc/kubernetes/pki \
# Path where certificates are saved, default "/etc/kubernetes/pki"
--control-plane-endpoint kuber4s.api \
# A stable IP address or DNS name for the control plane;
# here kuber4s.api has already been resolved to this machine's IP in /etc/hosts
--image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
# Container registry used to pull the control-plane images, default "k8s.gcr.io";
# a domestic mirror is used here because Google is blocked
--kubernetes-version 1.17.3 \
# Specific Kubernetes version for the control plane, default "stable-1"
--node-name master01 \
# Node name; defaults to the hostname if not specified
--pod-network-cidr 10.10.0.0/16 \
# Pod IP address range
--service-cidr 10.20.0.0/16 \
# Service virtual IP address range
--service-dns-domain cluster.local \
# Alternate domain name for Services, default "cluster.local"
--upload-certs
# Upload the control-plane certificates to the kubeadm-certs Secret
Prepare the cfssl certificate tools (don't follow this part yet; I haven't finished it)
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
Self-signed CA: create the JSON config files
cat ca-config.json
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"www": {
"expiry": "87600h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
cat ca-csr.json
{
"CN": "etcd CA",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Beijing",
"ST": "Beijing"
}
]
}
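The usual next step with cfssl (left unfinished in this post, so treat it as a sketch) would be to generate the CA certificate and key from these two files:
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
ls ca.pem ca-key.pem ca.csr    # the generated CA certificate, private key, and CSR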