Deploying a Kubernetes 1.22.2 Cluster on openEuler 21.03
I. Base Environment
Hostname | IP Address | Role | OS |
---|---|---|---|
master | 192.168.10.20 | master | Euler-21.03 |
node1 | 192.168.10.21 | node1 | Euler-21.03 |
node2 | 192.168.10.22 | node2 | Euler-21.03 |
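The steps below assume each host already carries the hostname from the table and can resolve the other two by name. A minimal sketch of that prerequisite (not spelled out in the original steps) is:
# on 192.168.10.20; use node1 / node2 on the other machines
hostnamectl set-hostname master
# let the three nodes resolve each other by name
cat >> /etc/hosts <<EOF
192.168.10.20 master
192.168.10.21 node1
192.168.10.22 node2
EOF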
II. Configure the Base Environment
- Note: the commands in this part must be run on all three hosts.
1. Add the yum repository
vim /etc/yum.repos.d/docker-ce.repo
[docker-ce-stable]
name=Docker CE Stable - x86_64
baseurl=https://repo.huaweicloud.com/docker-ce/linux/centos/8/x86_64/stable
enabled=1
gpgcheck=1
gpgkey=https://repo.huaweicloud.com/docker-ce/linux/centos/gpg
[docker-ce-stable-debuginfo]
name=Docker CE Stable - Debuginfo x86_64
baseurl=https://repo.huaweicloud.com/docker-ce/linux/centos/8/debug-x86_64/stable
enabled=0
gpgcheck=1
gpgkey=https://repo.huaweicloud.com/docker-ce/linux/centos/gpg
[docker-ce-stable-source]
name=Docker CE Stable - Sources
baseurl=https://repo.huaweicloud.com/docker-ce/linux/centos/8/source/stable
enabled=0
gpgcheck=1
gpgkey=https://repo.huaweicloud.com/docker-ce/linux/centos/gpg
[docker-ce-test]
name=Docker CE Test - x86_64
baseurl=https://repo.huaweicloud.com/docker-ce/linux/centos/8/x86_64/test
enabled=0
gpgcheck=1
gpgkey=https://repo.huaweicloud.com/docker-ce/linux/centos/gpg
[docker-ce-test-debuginfo]
name=Docker CE Test - Debuginfo x86_64
baseurl=https://repo.huaweicloud.com/docker-ce/linux/centos/8/debug-x86_64/test
enabled=0
gpgcheck=1
gpgkey=https://repo.huaweicloud.com/docker-ce/linux/centos/gpg
[docker-ce-test-source]
name=Docker CE Test - Sources
baseurl=https://repo.huaweicloud.com/docker-ce/linux/centos/8/source/test
enabled=0
gpgcheck=1
gpgkey=https://repo.huaweicloud.com/docker-ce/linux/centos/gpg
[docker-ce-nightly]
name=Docker CE Nightly - x86_64
baseurl=https://repo.huaweicloud.com/docker-ce/linux/centos/8/x86_64/nightly
enabled=0
gpgcheck=1
gpgkey=https://repo.huaweicloud.com/docker-ce/linux/centos/gpg
[docker-ce-nightly-debuginfo]
name=Docker CE Nightly - Debuginfo x86_64
baseurl=https://repo.huaweicloud.com/docker-ce/linux/centos/8/debug-x86_64/nightly
enabled=0
gpgcheck=1
gpgkey=https://repo.huaweicloud.com/docker-ce/linux/centos/gpg
[docker-ce-nightly-source]
name=Docker CE Nightly - Sources
baseurl=https://repo.huaweicloud.com/docker-ce/linux/centos/8/source/nightly
enabled=0
gpgcheck=1
gpgkey=https://repo.huaweicloud.com/docker-ce/linux/centos/gpg
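Before installing anything it is worth confirming the new repo file is actually picked up; a quick check (assuming the repo id docker-ce-stable from the file above) is:
# the docker-ce-stable repo should be listed
yum repolist | grep docker-ce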
2. Install the required dependency packages
yum install -y vim bash-completion lrzsz conntrack ipvsadm ipset jq sysstat curl iptables
3. Disable the firewall
systemctl stop firewalld && systemctl disable firewalld
iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && iptables -P FORWARD ACCEPT
4. Disable SELinux
setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
5. Disable the swap partition
swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
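A quick way to verify that swap is really off on each host:
# the Swap line in free should read 0B, and swapon should print nothing
free -h
swapon --show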
6. Load kernel modules
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
modprobe -- br_netfilter
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules
- modprobe ip_vs: LVS layer-4 load balancing
- modprobe ip_vs_rr: round-robin scheduling
- modprobe ip_vs_wrr: weighted round-robin scheduling
- modprobe ip_vs_sh: source-hashing scheduling algorithm
- modprobe nf_conntrack_ipv4: connection-tracking module
- modprobe br_netfilter: lets packets traversing the bridge be processed by iptables for filtering and port forwarding
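To verify the modules actually loaded, list them afterwards. Note that on kernels 4.19 and newer (openEuler 21.03 ships a 5.10 kernel) nf_conntrack_ipv4 was merged into nf_conntrack, so if that modprobe fails, load nf_conntrack instead:
# all ip_vs variants, the conntrack module and br_netfilter should show up
lsmod | grep -e ip_vs -e nf_conntrack -e br_netfilter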
7. Set kernel parameters
cat << EOF | tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF
sysctl -p /etc/sysctl.d/k8s.conf
Expected output (the tcp_tw_recycle error is harmless: that parameter was removed in Linux 4.12, so newer kernels no longer expose it):
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
sysctl: cannot stat /proc/sys/net/ipv4/tcp_tw_recycle: No such file or directory
vm.swappiness = 0
vm.overcommit_memory = 1
vm.panic_on_oom = 0
fs.inotify.max_user_watches = 89100
fs.file-max = 52706963
fs.nr_open = 52706963
net.ipv6.conf.all.disable_ipv6 = 1
net.netfilter.nf_conntrack_max = 2310720
overcommit_memory is the kernel's memory-overcommit policy; it takes one of three values: 0, 1, 2.
overcommit_memory=0: the kernel checks whether enough free memory is available; if so the allocation is allowed, otherwise it fails and the error is returned to the application.
overcommit_memory=1: the kernel allows every allocation regardless of the current memory state.
overcommit_memory=2: the kernel refuses to overcommit; total committed memory may not exceed swap plus a configurable share (overcommit_ratio) of physical RAM.
net.bridge.bridge-nf-call-iptables: pass bridged traffic to iptables so it can be filtered
net.ipv4.tcp_tw_recycle: fast recycling of TIME_WAIT sockets (removed in kernel 4.12, hence the error above)
vm.swappiness: 0 tells the kernel to avoid swapping
vm.panic_on_oom: 0 means do not panic on out-of-memory; the OOM killer handles it
fs.inotify.max_user_watches: maximum number of inotify watches per user
fs.file-max: maximum number of file handles the whole system may open
fs.nr_open: maximum number of file descriptors a single process may open
net.ipv6.conf.all.disable_ipv6: disable IPv6
net.netfilter.nf_conntrack_max: maximum number of tracked connections
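To double-check that the values took effect after sysctl -p, a few of them can be read back directly:
# each key should print the value set in k8s.conf
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward vm.swappiness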
8. Install Docker
8.1 Remove old versions first
yum remove docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-selinux \
docker-engine-selinux \
docker-engine
8.2 Install dependency packages
yum install -y yum-utils device-mapper-persistent-data lvm2
8.3 Install Docker CE
yum makecache fast
yum list docker-ce --showduplicates | sort -r
yum -y install docker-ce-3:20.10.8-3.el8.x86_64
8.4 Enable Docker to start at boot
systemctl enable docker
8.5 Start Docker
systemctl start docker
8.6 Configure the registry mirror and daemon.json
sed -i "13i ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT" /usr/lib/systemd/system/docker.service
tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://v0rjmu9s.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2",
"storage-opts": [
"overlay2.override_kernel_check=true"
]
}
EOF
systemctl daemon-reload && systemctl restart docker
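Before moving on, it is worth confirming that Docker restarted cleanly and picked up the systemd cgroup driver, since kubelet 1.22 defaults to systemd as well and the two must match:
# should print "active" and "Cgroup Driver: systemd"
systemctl is-active docker
docker info | grep -i "cgroup driver"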
9. Install kubeadm and kubelet
9.1 Configure the installation repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
- Rebuild the yum cache; enter y if prompted to accept the repository keys.
yum makecache fast
9.2 Install the Kubernetes packages
yum -y install kubeadm-1.22.2-0.x86_64 kubelet-1.22.2-0.x86_64 kubectl-1.22.2-0.x86_64
- Set up command completion for kubectl and kubeadm; it takes effect at the next login.
kubectl completion bash > /etc/bash_completion.d/kubectl
kubeadm completion bash > /etc/bash_completion.d/kubeadm
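A short sanity check that the pinned 1.22.2 packages landed; enabling kubelet here is also recommended by the upstream kubeadm install guide so that it comes back after a reboot (kubeadm init/join will start and configure it later):
# both should report v1.22.2
kubeadm version -o short
kubelet --version
# make kubelet start automatically at boot
systemctl enable kubelet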
10. Pull the required images
- Because of network restrictions in mainland China, k8s.gcr.io is hard to reach and usually only Docker Hub mirrors are pullable; pull the images yourself if you have access, otherwise use the image archive prepared below.
kubeadm config images list --kubernetes-version v1.22.2
k8s.gcr.io/kube-apiserver:v1.22.2
k8s.gcr.io/kube-controller-manager:v1.22.2
k8s.gcr.io/kube-scheduler:v1.22.2
k8s.gcr.io/kube-proxy:v1.22.2
k8s.gcr.io/pause:3.5
k8s.gcr.io/etcd:3.5.0-0
k8s.gcr.io/coredns/coredns:v1.8.4
All of the required images are packaged in the archive below.
Link: https://pan.baidu.com/s/10yKcDMDCqnAG1kqNruBm6w (extraction code: l8h4)
- Note: the images must be imported on all three nodes.
tar -zxvf k8s-1-22.tar.gz
cd k8s-1.22
ls -lh
total 1.1G
-rw-r--r-- 1 root root 198K Sep 25 21:41 calico.yaml
-rw------- 1 root root 1.1G Sep 25 22:06 k8s-1-22-all.tar.gz
docker image load -i k8s-1-22-all.tar.gz
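After the load finishes, the control-plane images should be visible locally on every node; a quick check against the list printed by kubeadm config images list is:
# expect kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, pause, etcd and coredns for v1.22.2
docker images | grep k8s.gcr.io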
III. Initialize the Cluster
- Run the following commands on the master node.
1. Initialize the cluster with kubeadm init (make sure --apiserver-advertise-address is set to this machine's IP)
kubeadm init --kubernetes-version=v1.22.2 --apiserver-advertise-address=192.168.10.20 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.1.0.0/16
- After a successful init, a join command similar to the one below is printed; do not run it yet, just record it.
kubeadm join 192.168.10.20:6443 --token xsdadasatjr4 --discovery-token-ca-cert-hash sha256:4622asdksdjalklaksdl5efe2b35f169ccc2c2a43df11cbc2af5f5473
2. Configure kubectl for the users who need it
- kubectl automatically picks up the credential file at $HOME/.kube/config on every invocation (built-in behaviour).
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
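At this point kubectl should be able to reach the new control plane, even though the node stays NotReady until the network plugin is installed in the next step:
# prints the control-plane endpoint and the CoreDNS service
kubectl cluster-info
# the master appears, typically NotReady until calico is applied
kubectl get nodes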
3. Install calico as the cluster network
- calico.yaml is in the directory extracted earlier; run the following command.
kubectl apply -f calico.yaml
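The calico pods take a minute or two to pull and start; watching them is an easy way to know when the node will flip to Ready:
# wait until calico-node and calico-kube-controllers are Running (Ctrl-C to stop watching)
kubectl get pods -n kube-system -w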
4. Add the worker nodes
- Run the following command on node1 and node2.
kubeadm join 192.168.10.20:6443 --token xsdadasatjr4 --discovery-token-ca-cert-hash sha256:4622asdksdjalklaksdl5efe2b35f169ccc2c2a43df11cbc2af5f5473
- Note: if the join command was not recorded, it can be regenerated with:
kubeadm token create --print-join-command --ttl=0
- Check the node status in the cluster; it may take quite a while before every node shows Ready.
kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready control-plane,master 16d v1.22.2
k8s-node1 Ready <none> 16d v1.22.2
k8s-node2 Ready <none> 16d v1.22.2
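As an optional smoke test (not part of the original walkthrough), scheduling a throwaway pod confirms the workers accept workloads:
# the pod should land on node1 or node2 and reach Running
kubectl create deployment nginx-test --image=nginx
kubectl get pods -o wide
# clean up the test deployment afterwards
kubectl delete deployment nginx-test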