References
使用kubeadm安裝kubernetes_v1.20.x | Kuboard
從零搭建k8s集群 - 許大仙 - 博客園
CentOS8.0通過yum安裝ntp同步時間 - 吳昊博客
CentOS 8同步時間 - tlanyan
yum安裝指定版本docker - 簡書
節點加入k8s集群如何獲取token等參數值_魂醉的一畝二分地-CSDN博客
CentOS - Docker —— 從入門到實踐
selinux詳解及配置文件 - 大熊子 - 博客園
k8s常用命令 - 云+社區 - 騰訊云
k8s排錯
Kubernetes 架構 · Kubernetes Handbook - Kubernetes中文指南/云原生應用架構實踐手冊 by Jimmy Song(宋凈超)
Overview
- kubelet: the node-side k8s service; receives and executes the instructions sent by the master
- kubectl: the command-line tool used to send operation requests to the k8s backend
- kubeadm: the cluster administration tool, with broader privileges than kubectl, used mainly for cluster initialization
- kube-proxy: implements load balancing for pod services
- etcd: key-value store used for service discovery and shared configuration (in this setup it runs on the control-plane node)
- kube-apiserver: exposes the cluster API for external access
[image: Kubernetes architecture diagram]
Setup steps
- Prepare two servers
One master and one node, building a single-master cluster; Aliyun ECS pay-as-you-go instances are used here.
[image: Aliyun ECS instance list]
master:8.140.134.10
node1:8.140.109.105
The master node requires at least 2 CPU cores and 4 GB of memory.
- Set the hostname
hostnamectl set-hostname master/node
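For example (the hostnames below are the ones used later in this guide; substitute your own):
# on the master
hostnamectl set-hostname k8s-master
# on the worker node
hostnamectl set-hostname k8s-node1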
Check the result:
[root@k8s-master ~]# hostnamectl status
Static hostname: k8s-master
Icon name: computer-vm
Chassis: vm
Machine ID: 20201120143309750601020764519652
Boot ID: 5d3b7ad7a3174bbca92120abc8c93bd5
Virtualization: kvm
Operating System: CentOS Linux 8 (Core)
CPE OS Name: cpe:/o:centos:centos:8
Kernel: Linux 4.18.0-193.28.1.el8_2.x86_64
Architecture: x86-64
[root@k8s-master ~]#
- Disable the firewall
[root@k8s-master ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
Active: inactive (dead)
Docs: man:firewalld(1)
The firewall is already disabled by default here. If it is still running, stop it with
systemctl stop firewalld
and then run
systemctl disable firewalld
to keep it from starting at boot.
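A quick check afterwards (plain systemctl queries, nothing cluster-specific):
systemctl is-active firewalld     # expect "inactive"
systemctl is-enabled firewalld    # expect "disabled"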
- Disable SELinux
SELinux needs to be disabled so that containers can access the host filesystem without trouble.
[root@k8s-master ~]# cat /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
SELinux is already disabled by default here. If it is not, run
sed -i 's/enforcing/disabled/' /etc/selinux/config
to set the SELINUX parameter to disabled, then reboot the server.
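To switch SELinux off for the current session as well, without waiting for a reboot (standard commands, shown as a sketch):
setenforce 0    # drop to permissive mode immediately
getenforce      # prints "Permissive" now, "Disabled" after the reboot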
- Disable the swap partition
Swap lets the system temporarily use part of the disk as memory when RAM runs low, at the cost of performance, since disk I/O is much slower; kubelet expects swap to be off.
[root@k8s-master ~]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Fri Nov 20 06:36:28 2020
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
UUID=edf839fd-8e1a-4373-946a-c32c9b459611 / xfs defaults 0 0
[root@k8s-master ~]# free -h
total used free shared buff/cache available
Mem: 15Gi 196Mi 14Gi 1.0Mi 288Mi 14Gi
Swap: 0B 0B 0B
Swap is not enabled by default on these instances. If it does need to be disabled, run
sed -ri 's/.*swap.*/#&/' /etc/fstab
to comment out the swap entry in /etc/fstab, then reboot.
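Swap can also be switched off immediately, without a reboot (standard commands):
swapoff -a    # disable all active swap devices for this boot
free -h       # the Swap line should now read 0B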
- Set up the hosts file
cat >> /etc/hosts << EOF
8.140.134.10 k8s-master
8.140.109.105 k8s-node1
EOF
Ping each host from the other to test; note that the firewall must be off.
[root@k8s-master ~]# ping k8s-node1
PING k8s-node1 (8.140.109.105) 56(84) bytes of data.
64 bytes from k8s-node1 (8.140.109.105): icmp_seq=1 ttl=62 time=0.337 ms
64 bytes from k8s-node1 (8.140.109.105): icmp_seq=2 ttl=62 time=0.288 ms
64 bytes from k8s-node1 (8.140.109.105): icmp_seq=3 ttl=62 time=0.303 ms
64 bytes from k8s-node1 (8.140.109.105): icmp_seq=4 ttl=62 time=0.299 ms
[root@k8s-node1 ~]# ping k8s-master
PING k8s-master (8.140.134.10) 56(84) bytes of data.
64 bytes from k8s-master (8.140.134.10): icmp_seq=1 ttl=62 time=0.328 ms
64 bytes from k8s-master (8.140.134.10): icmp_seq=2 ttl=62 time=0.318 ms
64 bytes from k8s-master (8.140.134.10): icmp_seq=3 ttl=62 time=0.375 ms
- Install Docker
- Install the repository management tool yum-utils
yum install -y yum-utils
[root@k8s-master ~]# rpm -qa| grep yum-utils
yum-utils-4.0.17-5.el8.noarch
- Add the docker-ce repository
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
List the existing repositories:
[root@k8s-master ~]# cd /etc/yum.repos.d/
[root@k8s-master yum.repos.d]# ls
CentOS-AppStream.repo CentOS-centosplus.repo CentOS-Debuginfo.repo CentOS-Extras.repo CentOS-Media.repo CentOS-Sources.repo
CentOS-Base.repo CentOS-CR.repo CentOS-epel.repo CentOS-fasttrack.repo CentOS-PowerTools.repo CentOS-Vault.repo
Add the repository:
[root@k8s-master yum.repos.d]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Adding repo from: https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@k8s-master yum.repos.d]#
[root@k8s-master yum.repos.d]# ls
CentOS-AppStream.repo CentOS-centosplus.repo CentOS-Debuginfo.repo CentOS-Extras.repo CentOS-Media.repo CentOS-Sources.repo docker-ce.repo
CentOS-Base.repo CentOS-CR.repo CentOS-epel.repo CentOS-fasttrack.repo CentOS-PowerTools.repo CentOS-Vault.repo
- Install a specific version of docker-ce
1. Remove any existing Docker installation (if Docker is already present)
[root@k8s-node1 ~]# rpm -qa| grep docker
docker-ce-20.10.0-3.el8.x86_64
docker-ce-cli-20.10.3-3.el8.x86_64
docker-ce-rootless-extras-20.10.3-3.el8.x86_64
[root@k8s-master ~]# sudo yum remove docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-selinux \
docker-engine-selinux \
docker-engine
2. List the available docker-ce versions
[root@k8s-master ~]# yum list docker-ce --showduplicates | sort -r
docker-ce.x86_64 3:20.10.3-3.el8 docker-ce-stable
docker-ce.x86_64 3:20.10.2-3.el8 docker-ce-stable
docker-ce.x86_64 3:20.10.1-3.el8 docker-ce-stable
docker-ce.x86_64 3:20.10.0-3.el8 docker-ce-stable
docker-ce.x86_64 3:19.03.15-3.el8 docker-ce-stable
docker-ce.x86_64 3:19.03.14-3.el8 docker-ce-stable
docker-ce.x86_64 3:19.03.13-3.el8 docker-ce-stable
Last metadata expiration check: 0:00:53 ago on Fri 19 Feb 2021 02:59:28 PM.
Available Packages
3. Install the specified version
yum install -y docker-ce-19.03.15
4. Start Docker and enable it at boot
systemctl start docker
systemctl enable docker.service
[root@k8s-master ~]# docker info
Client:
Context: default
Debug Mode: false
Plugins:
app: Docker App (Docker Inc., v0.9.1-beta3)
buildx: Build with BuildKit (Docker Inc., v0.5.1-docker)
Server:
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 20.10.0
Storage Driver: overlay2
Backing Filesystem: xfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Cgroup Version: 1
- Configure a Docker registry mirror
[root@k8s-master ~]# tee /etc/docker/daemon.json <<-'EOF'
{
"exec-opts": ["native.cgroupdriver=systemd"],
"registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF
[root@k8s-master ~]# systemctl daemon-reload
[root@k8s-master ~]# systemctl restart docker
The "exec-opts": ["native.cgroupdriver=systemd"] setting is required, and kubelet must use the same cgroup driver; otherwise pulling images and starting containers will run into problems.
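A quick way to confirm Docker picked up the new driver after the restart (standard docker info formatting):
docker info --format '{{.CgroupDriver}}'    # should print "systemd"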
- Install kubeadm, kubelet, and kubectl
1. Add the Kubernetes repository
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
2. Install the packages
yum install -y kubelet-1.20.1 kubeadm-1.20.1 kubectl-1.20.1
3. Configure kubelet's cgroup driver
This keeps kubelet's cgroup driver consistent with Docker's, so image pulls and container startup do not run into conflicts.
vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
4. Start the kubelet service
systemctl enable kubelet && systemctl start kubelet
If kubelet has already been started, checking it with
systemctl status kubelet
will show kubelet.service: main process exited, code=exited, status=255/n/a.
The logs (journalctl -xefu kubelet)
and a bit of research show that kubelet keeps restarting until kubeadm init has been run; this can be ignored, as it returns to normal after init.
- Initialize the master
Method 1
- Prepare the init configuration file
1. Generate the default configuration
kubeadm config print init-defaults > kubeadm.yaml
2. Edit the configuration
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
token: abcdef.0123456789abcdef
ttl: 24h0m0s
usages:
- signing
- authentication
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 172.16.89.142
bindPort: 6443
nodeRegistration:
criSocket: /var/run/dockershim.sock
name: k8s-master
taints:
- effect: NoSchedule
key: node-role.kubernetes.io/master
---
apiServer:
timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
#imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.20.1
networking:
dnsDomain: cluster.local
serviceSubnet: 10.96.0.0/12
podSubnet: 10.244.0.0/16
scheduler: {}
- advertiseAddress: 172.16.89.142, set this to the local private IP of the host the API server runs on (a helper sketch follows this list)
- imageRepository: registry.aliyuncs.com/google_containers, switched to the Aliyun mirror so that failed image pulls do not break master startup
- podSubnet: 10.244.0.0/16, the pod network CIDR; the flannel plugin relies on this subnet
- kubernetesVersion: v1.20.1, must match the installed kubelet version
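A small sketch for filling in advertiseAddress automatically; it assumes the primary interface is eth0 (adjust the interface name to your environment):
LOCAL_IP=$(ip -4 -o addr show eth0 | awk '{print $4}' | cut -d/ -f1)
sed -i "s#advertiseAddress: .*#advertiseAddress: ${LOCAL_IP}#" kubeadm.yaml
grep advertiseAddress kubeadm.yaml    # confirm the change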
- List the images required for initialization:
[root@k8s-master ~]# kubeadm config images list --config kubeadm.yaml
k8s.gcr.io/kube-apiserver:v1.20.1
k8s.gcr.io/kube-controller-manager:v1.20.1
k8s.gcr.io/kube-scheduler:v1.20.1
k8s.gcr.io/kube-proxy:v1.20.1
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
Pull the images from Aliyun ahead of time and re-tag them with the k8s.gcr.io names:
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.20.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.20.1 k8s.gcr.io/kube-apiserver:v1.20.1
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.20.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.20.1 k8s.gcr.io/kube-controller-manager:v1.20.1
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.20.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.20.1 k8s.gcr.io/kube-scheduler:v1.20.1
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.20.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.20.1 k8s.gcr.io/kube-proxy:v1.20.1
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2 k8s.gcr.io/pause:3.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.13-0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.13-0 k8s.gcr.io/etcd:3.4.13-0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0 k8s.gcr.io/coredns:1.7.0
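The same pull-and-tag steps can be written as a short loop over the image list above:
for img in kube-apiserver:v1.20.1 kube-controller-manager:v1.20.1 kube-scheduler:v1.20.1 \
           kube-proxy:v1.20.1 pause:3.2 etcd:3.4.13-0 coredns:1.7.0; do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/${img}
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/${img} k8s.gcr.io/${img}
done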
Alternatively, set the image repository in the init configuration file to the Aliyun registry and pull directly:
[root@k8s-master ~]# kubeadm config images pull --config kubeadm.yaml
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.20.1
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.20.1
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.20.1
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.20.0
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.2
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.4.13-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:1.7.0
Here all images are pulled from Aliyun. If a pull fails, the images can also all be tagged into the local Docker cache, or pulled from a private Harbor registry instead.
- Run kubeadm init
kubeadm init --config kubeadm.yaml --v=5
[root@k8s-master ~]# kubeadm init --config kubeadm.yaml --v=5
I0223 09:39:07.830262 3023 initconfiguration.go:201] loading configuration from "kubeadm.yaml"
[init] Using Kubernetes version: v1.20.1
[preflight] Running pre-flight checks
I0223 09:39:07.900068 3023 checks.go:577] validating Kubernetes and kubeadm version
I0223 09:39:07.900088 3023 checks.go:166] validating if the firewall is enabled and active
I0223 09:39:07.923727 3023 checks.go:201] validating availability of port 6443
I0223 09:39:07.923822 3023 checks.go:201] validating availability of port 10259
I0223 09:39:07.923845 3023 checks.go:201] validating availability of port 10257
I0223 09:39:07.923865 3023 checks.go:286] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml
I0223 09:39:07.923874 3023 checks.go:286] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml
I0223 09:39:07.923881 3023 checks.go:286] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml
I0223 09:39:07.923889 3023 checks.go:286] validating the existence of file /etc/kubernetes/manifests/etcd.yaml
I0223 09:39:07.923897 3023 checks.go:432] validating if the connectivity type is via proxy or direct
I0223 09:39:07.923925 3023 checks.go:471] validating http connectivity to first IP address in the CIDR
I0223 09:39:07.923948 3023 checks.go:471] validating http connectivity to first IP address in the CIDR
I0223 09:39:07.923959 3023 checks.go:102] validating the container runtime
I0223 09:39:07.984679 3023 checks.go:128] validating if the "docker" service is enabled and active
I0223 09:39:08.059694 3023 checks.go:335] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I0223 09:39:08.059742 3023 checks.go:335] validating the contents of file /proc/sys/net/ipv4/ip_forward
I0223 09:39:08.059759 3023 checks.go:649] validating whether swap is enabled or not
I0223 09:39:08.059784 3023 checks.go:376] validating the presence of executable conntrack
I0223 09:39:08.059805 3023 checks.go:376] validating the presence of executable ip
I0223 09:39:08.059818 3023 checks.go:376] validating the presence of executable iptables
I0223 09:39:08.059831 3023 checks.go:376] validating the presence of executable mount
I0223 09:39:08.059855 3023 checks.go:376] validating the presence of executable nsenter
I0223 09:39:08.059870 3023 checks.go:376] validating the presence of executable ebtables
I0223 09:39:08.059883 3023 checks.go:376] validating the presence of executable ethtool
I0223 09:39:08.059894 3023 checks.go:376] validating the presence of executable socat
I0223 09:39:08.059909 3023 checks.go:376] validating the presence of executable tc
I0223 09:39:08.059922 3023 checks.go:376] validating the presence of executable touch
I0223 09:39:08.059937 3023 checks.go:520] running all checks
I0223 09:39:08.126650 3023 checks.go:406] checking whether the given node name is reachable using net.LookupHost
[WARNING Hostname]: hostname "k8s-master" could not be reached
[WARNING Hostname]: hostname "k8s-master": lookup k8s-master on 100.100.2.136:53: no such host
I0223 09:39:08.127123 3023 checks.go:618] validating kubelet version
I0223 09:39:08.184792 3023 checks.go:128] validating if the "kubelet" service is enabled and active
I0223 09:39:08.195795 3023 checks.go:201] validating availability of port 10250
I0223 09:39:08.195847 3023 checks.go:201] validating availability of port 2379
I0223 09:39:08.195865 3023 checks.go:201] validating availability of port 2380
I0223 09:39:08.195885 3023 checks.go:249] validating the existence and emptiness of directory /var/lib/etcd
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0223 09:39:08.224823 3023 checks.go:839] image exists: registry.aliyuncs.com/google_containers/kube-apiserver:v1.20.1
I0223 09:39:08.252056 3023 checks.go:839] image exists: registry.aliyuncs.com/google_containers/kube-controller-manager:v1.20.1
I0223 09:39:08.280014 3023 checks.go:839] image exists: registry.aliyuncs.com/google_containers/kube-scheduler:v1.20.1
I0223 09:39:08.307725 3023 checks.go:839] image exists: registry.aliyuncs.com/google_containers/kube-proxy:v1.20.1
I0223 09:39:08.335600 3023 checks.go:839] image exists: registry.aliyuncs.com/google_containers/pause:3.2
I0223 09:39:08.364146 3023 checks.go:839] image exists: registry.aliyuncs.com/google_containers/etcd:3.4.13-0
I0223 09:39:08.392096 3023 checks.go:839] image exists: registry.aliyuncs.com/google_containers/coredns:1.7.0
[certs] Using certificateDir folder "/etc/kubernetes/pki"
I0223 09:39:08.392141 3023 certs.go:110] creating a new certificate authority for ca
[certs] Generating "ca" certificate and key
I0223 09:39:08.605798 3023 certs.go:474] validating certificate period for ca certificate
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.17.35.194]
[certs] Generating "apiserver-kubelet-client" certificate and key
I0223 09:39:08.921838 3023 certs.go:110] creating a new certificate authority for front-proxy-ca
[certs] Generating "front-proxy-ca" certificate and key
I0223 09:39:08.978884 3023 certs.go:474] validating certificate period for front-proxy-ca certificate
[certs] Generating "front-proxy-client" certificate and key
I0223 09:39:09.232775 3023 certs.go:110] creating a new certificate authority for etcd-ca
[certs] Generating "etcd/ca" certificate and key
I0223 09:39:09.269962 3023 certs.go:474] validating certificate period for etcd/ca certificate
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [172.17.35.194 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [172.17.35.194 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
I0223 09:39:09.759214 3023 certs.go:76] creating new public/private key files for signing service account users
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0223 09:39:09.809489 3023 kubeconfig.go:101] creating kubeconfig file for admin.conf
[kubeconfig] Writing "admin.conf" kubeconfig file
I0223 09:39:09.871803 3023 kubeconfig.go:101] creating kubeconfig file for kubelet.conf
[kubeconfig] Writing "kubelet.conf" kubeconfig file
I0223 09:39:09.929750 3023 kubeconfig.go:101] creating kubeconfig file for controller-manager.conf
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0223 09:39:09.999157 3023 kubeconfig.go:101] creating kubeconfig file for scheduler.conf
[kubeconfig] Writing "scheduler.conf" kubeconfig file
I0223 09:39:10.240935 3023 kubelet.go:63] Stopping the kubelet
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I0223 09:39:10.374064 3023 manifests.go:96] [control-plane] getting StaticPodSpecs
I0223 09:39:10.374422 3023 certs.go:474] validating certificate period for CA certificate
I0223 09:39:10.374495 3023 manifests.go:109] [control-plane] adding volume "ca-certs" for component "kube-apiserver"
I0223 09:39:10.374502 3023 manifests.go:109] [control-plane] adding volume "etc-pki" for component "kube-apiserver"
I0223 09:39:10.374507 3023 manifests.go:109] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"
I0223 09:39:10.388510 3023 manifests.go:126] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I0223 09:39:10.388531 3023 manifests.go:96] [control-plane] getting StaticPodSpecs
I0223 09:39:10.388793 3023 manifests.go:109] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"
I0223 09:39:10.388801 3023 manifests.go:109] [control-plane] adding volume "etc-pki" for component "kube-controller-manager"
I0223 09:39:10.388807 3023 manifests.go:109] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"
I0223 09:39:10.388812 3023 manifests.go:109] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"
I0223 09:39:10.388818 3023 manifests.go:109] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"
I0223 09:39:10.389622 3023 manifests.go:126] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[control-plane] Creating static Pod manifest for "kube-scheduler"
I0223 09:39:10.389637 3023 manifests.go:96] [control-plane] getting StaticPodSpecs
I0223 09:39:10.389896 3023 manifests.go:109] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
I0223 09:39:10.390427 3023 manifests.go:126] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0223 09:39:10.391160 3023 local.go:74] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
I0223 09:39:10.391171 3023 waitcontrolplane.go:87] [wait-control-plane] Waiting for the API server to be healthy
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 14.001635 seconds
I0223 09:39:24.394483 3023 uploadconfig.go:108] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0223 09:39:24.401973 3023 uploadconfig.go:122] [upload-config] Uploading the kubelet component config to a ConfigMap
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
I0223 09:39:24.406809 3023 uploadconfig.go:127] [upload-config] Preserving the CRISocket information for the control-plane node
I0223 09:39:24.406822 3023 patchnode.go:30] [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-master" as an annotation
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0223 09:39:25.435228 3023 clusterinfo.go:45] [bootstrap-token] loading admin kubeconfig
I0223 09:39:25.435520 3023 clusterinfo.go:53] [bootstrap-token] copying the cluster from admin.conf to the bootstrap kubeconfig
I0223 09:39:25.435693 3023 clusterinfo.go:65] [bootstrap-token] creating/updating ConfigMap in kube-public namespace
I0223 09:39:25.437065 3023 clusterinfo.go:79] creating the RBAC rules for exposing the cluster-info ConfigMap in the kube-public namespace
I0223 09:39:25.439866 3023 kubeletfinalize.go:88] [kubelet-finalize] Assuming that kubelet client certificate rotation is enabled: found "/var/lib/kubelet/pki/kubelet-client-current.pem"
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0223 09:39:25.440473 3023 kubeletfinalize.go:132] [kubelet-finalize] Restarting the kubelet to enable client certificate rotation
[addons] Applied essential addon: CoreDNS
I0223 09:39:25.818119 3023 request.go:591] Throttling request took 70.393404ms, request: POST:https://172.17.35.194:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 172.17.35.194:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:530d61b5d25258f27bfce7e1d53b0604bd94164eb7cd926cb79186ae350935bd
Run the following commands in turn to complete the setup:
[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
If the initialization fails, run
kubeadm reset
to reset, and clean up with rm -rf /etc/kubernetes/*.
If init complains that the etcd data directory is not empty, also remove it with rm -rf /var/lib/etcd.
Method 2
kubeadm init --kubernetes-version=1.20.1 --pod-network-cidr 10.244.0.0/16 --v=5
- Check component health
[root@k8s-master ~]# kubectl get componentstatuses
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true"}
If not all components report Healthy, go to the manifest directory
/etc/kubernetes/manifests/
and comment out the --port=0 line in kube-scheduler.yaml and kube-controller-manager.yaml (that flag disables the port the health check probes); a sketch follows.
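A minimal sketch of that workaround (the kubelet recreates the static pods on its own after the manifests change):
sed -i '/--port=0/s/^/#/' /etc/kubernetes/manifests/kube-scheduler.yaml
sed -i '/--port=0/s/^/#/' /etc/kubernetes/manifests/kube-controller-manager.yaml
kubectl get componentstatuses    # should report Healthy once the pods have restarted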
- Join the worker nodes
[root@k8s-node2 ~]# kubeadm join 172.17.35.194:6443 --token avy7qv.zffy2i3bdivbp57x --discovery-token-ca-cert-hash sha256:530d61b5d25258f27bfce7e1d53b0604bd94164eb7cd926cb79186ae350935bd
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[ERROR Port-10250]: Port 10250 is in use
[ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
[root@k8s-node2 ~]# kubeadm reset
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W0223 09:54:40.991309 9033 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
[root@k8s-node2 ~]# kubeadm join 172.17.35.194:6443 --token avy7qv.zffy2i3bdivbp57x --discovery-token-ca-cert-hash sha256:530d61b5d25258f27bfce7e1d53b0604bd94164eb7cd926cb79186ae350935bd
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
If the node has already joined another master, or cannot join, run
kubeadm reset
to reset the node environment, then join again.
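If the original join command has been lost, or the bootstrap token (24h TTL by default) has expired, print a fresh one on the master:
kubeadm token create --print-join-command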
- Deploy the CNI network plugin
1. Check the cluster state after initializing the master; the pods that depend on the network plugin have not started yet
[root@k8s-master ~]# kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-7f89b7bc75-2gqch 0/1 ContainerCreating 0 85s
kube-system coredns-7f89b7bc75-9pkxr 0/1 ContainerCreating 0 85s
kube-system etcd-k8s-master 1/1 Running 0 92s
kube-system kube-apiserver-k8s-master 1/1 Running 0 92s
kube-system kube-controller-manager-k8s-master 0/1 Running 0 58s
kube-system kube-proxy-x6qb4 1/1 Running 0 86s
kube-system kube-scheduler-k8s-master 0/1 Running 0 76s
2. Download the deployment manifest
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
3. Deploy the network plugin
kubectl apply -f kube-flannel.yml
Check the cluster status from the master node:
[root@k8s-master ~]# kubectl get pods -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-7f89b7bc75-2gqch 1/1 Running 0 18m 10.244.0.3 k8s-master <none> <none>
kube-system coredns-7f89b7bc75-9pkxr 1/1 Running 0 18m 10.244.0.2 k8s-master <none> <none>
kube-system etcd-k8s-master 1/1 Running 0 18m 172.17.35.194 k8s-master <none> <none>
kube-system kube-apiserver-k8s-master 1/1 Running 0 18m 172.17.35.194 k8s-master <none> <none>
kube-system kube-controller-manager-k8s-master 1/1 Running 0 17m 172.17.35.194 k8s-master <none> <none>
kube-system kube-flannel-ds-jwk99 1/1 Running 0 3m9s 172.17.35.192 k8s-node2 <none> <none>
kube-system kube-flannel-ds-mf696 1/1 Running 0 14s 172.17.35.189 k8s-node1 <none> <none>
kube-system kube-flannel-ds-mtvdr 1/1 Running 0 8m52s 172.17.35.194 k8s-master <none> <none>
kube-system kube-proxy-5xbwq 1/1 Running 0 14s 172.17.35.189 k8s-node1 <none> <none>
kube-system kube-proxy-l4txf 1/1 Running 0 3m9s 172.17.35.192 k8s-node2 <none> <none>
kube-system kube-proxy-x6qb4 1/1 Running 0 18m 172.17.35.194 k8s-master <none> <none>
kube-system kube-scheduler-k8s-master 1/1 Running 0 18m 172.17.35.194 k8s-master <none> <none>
Note that a node's kubelet version must not be lower than the master's kubelet version; otherwise the node cannot join the cluster.
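To confirm the nodes have registered and report the expected kubelet versions (plain kubectl; output varies by cluster):
kubectl get nodes -o wide    # shows STATUS, VERSION and INTERNAL-IP for each node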
If a pod in the cluster has problems, run kubectl -n kube-system describe pod <pod-name>
to see why it failed to start; see the references above.
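A few more generic troubleshooting commands (pod names are placeholders):
kubectl -n kube-system logs <pod-name>                                    # container logs of a failing pod
kubectl -n kube-system get events --sort-by=.metadata.creationTimestamp   # recent events, oldest first
journalctl -xeu kubelet                                                   # kubelet logs on the affected node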