1. Server preparation
Prepare three hosts:
IP | OS | Hostname
---|---|---
172.16.1.101 | Ubuntu 22.04 | k8s-master |
172.16.1.102 | Ubuntu 22.04 | k8s-worker1 |
172.16.1.103 | Ubuntu 22.04 | k8s-worker2 |
1.1 Set /etc/hosts and the hostname on each host
cat << EOF | sudo tee -a /etc/hosts
172.16.1.101 k8s-master
172.16.1.102 k8s-worker1
172.16.1.103 k8s-worker2
EOF
# Set the hostname on the master node; on worker nodes, substitute the names from the table above
sudo hostnamectl hostname k8s-master
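A quick sanity check that the hostname and the hosts entries took effect:
hostnamectl status                                     # should print the new hostname
getent hosts k8s-master k8s-worker1 k8s-worker2        # should print the three static entries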
1.2 Host time synchronization
sudo apt install -y chrony
sudo systemctl start chrony
sudo systemctl enable chrony
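To confirm the node is actually syncing, chrony can report its sources and tracking state:
chronyc sources -v    # list the configured time sources
chronyc tracking      # show the current offset from the reference clock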
1.3 Firewall settings on each node
sudo ufw disable && sudo ufw status
1.4 Disable swap
sudo swapoff -a
sudo sed -ri 's/.*swap.*/#&/' /etc/fstab
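Verify that swap is fully off; both commands should report no swap in use:
swapon --show         # prints nothing when no swap device is active
free -h               # the Swap line should read 0B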
1.5 Forwarding IPv4 and letting iptables see bridged traffic
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
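Both modules should now show up as loaded:
lsmod | grep -e overlay -e br_netfilter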
# sysctl params required by setup, params persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
# Apply sysctl params without reboot
sudo sysctl --system
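Confirm that all three parameters are now set to 1:
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward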
2. Install containerd
sudo apt update
sudo apt install -y ca-certificates curl gnupg lsb-release
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
sudo apt install -y containerd.io
containerd config default | sudo tee /etc/containerd/config.toml > /dev/null 2>&1
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
sudo sed -i 's/registry.k8s.io/registry.aliyuncs.com\/google_containers/g' /etc/containerd/config.toml
sudo systemctl restart containerd
sudo systemctl enable containerd
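Verify that the two sed edits took effect and that the service is healthy:
grep SystemdCgroup /etc/containerd/config.toml    # expect: SystemdCgroup = true
grep sandbox_image /etc/containerd/config.toml    # expect the aliyuncs mirror for the pause image
systemctl status containerd --no-pager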
References:
https://github.com/containerd/containerd/blob/main/docs/getting-started.md
https://kubernetes.io/docs/setup/production-environment/container-runtimes/#containerd
3. Install kubeadm, kubelet, and kubectl
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/
sudo apt update
sudo apt install -y apt-transport-https ca-certificates curl
curl -fsSL https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
sudo echo "deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt update
sudo apt install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
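Verify the installed versions and that the three packages are pinned:
kubeadm version
kubectl version --client
apt-mark showhold     # should list kubeadm, kubectl, kubelet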
If the hosts are all virtual machines, this is a good point to create the remaining hosts by cloning this one and then just changing the IP and hostname on each clone, which saves a lot of repetition (a minimal sketch follows).
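A sketch of adjusting a clone on Ubuntu 22.04; the netplan file name (00-installer-config.yaml here) is an assumption and varies between installations, so check /etc/netplan/ first:
# Update the static IP in the clone's netplan config (file name may differ)
sudo vim /etc/netplan/00-installer-config.yaml
sudo netplan apply
# Give the clone its own hostname, e.g. on the first worker:
sudo hostnamectl hostname k8s-worker1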
4. Initialize the master node
sudo kubeadm init \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.27.3 \
--control-plane-endpoint=k8s-master \
--pod-network-cidr=10.10.0.0/16
Once it completes, the output is as follows; follow the prompts to finish the remaining setup.
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join k8s-master:6443 --token y24d2k.3prnyxd9ltafe01b \
--discovery-token-ca-cert-hash sha256:f056a04a1105b98929a005322971bb2060fcfa5c29a04a39bfc9d3d6a5a6523f \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join k8s-master:6443 --token y24d2k.3prnyxd9ltafe01b \
--discovery-token-ca-cert-hash sha256:f056a04a1105b98929a005322971bb2060fcfa5c29a04a39bfc9d3d6a5a6523f
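Note that the bootstrap token printed above expires after 24 hours by default. If it has expired by the time a node joins, a fresh join command can be generated on the master:
sudo kubeadm token create --print-join-command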
5. Join the worker nodes
$ sudo kubeadm join k8s-master:6443 --token y24d2k.3prnyxd9ltafe01b \
--discovery-token-ca-cert-hash sha256:f056a04a1105b98929a005322971bb2060fcfa5c29a04a39bfc9d3d6a5a6523f
[preflight] Running pre-flight checks
[WARNING SystemVerification]: missing optional cgroups: blkio
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Back on the master node, use kubectl to check the nodes in the cluster:
$ kubectl get nodes
NAME          STATUS     ROLES           AGE   VERSION
k8s-master    NotReady   control-plane   45m   v1.27.3
k8s-worker1   NotReady   <none>          44m   v1.27.3
k8s-worker2   NotReady   <none>          44m   v1.27.3
The nodes show STATUS NotReady because no network plugin has been configured for the cluster yet.
6. Configure the network plugin
curl https://docs.tigera.io/archive/v3.25/manifests/calico.yaml -O
sed -i "s#192\.168\.0\.0/16#10\.10\.0\.0/16#" calico.yaml
kubectl apply -f calico.yaml
The CALICO_IPV4POOL_CIDR in calico.yaml must match the pod-network-cidr specified when the cluster was initialized. If pod-network-cidr happens to be Calico's default of 192.168.0.0/16, calico.yaml needs no adjustment. Also note that if the following two lines in calico.yaml are commented out, you need to uncomment them manually:
- name: CALICO_IPV4POOL_CIDR
value: "10.10.0.0/16"
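A quick grep confirms the CIDR edit and the uncommenting (worth running before the kubectl apply step above):
grep -A1 CALICO_IPV4POOL_CIDR calico.yaml    # expect value: "10.10.0.0/16"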
After the installation completes, check the pods in the kube-system namespace; the result looks like this:
$ kubectl get pod -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-6c99c8747f-wmz9s   1/1     Running   0          17m
calico-node-65kc4                          1/1     Running   0          17m
calico-node-kk75c                          1/1     Running   0          17m
calico-node-qqm8b                          1/1     Running   0          17m
coredns-7bdc4cb885-8d9bq                   1/1     Running   0          48m
coredns-7bdc4cb885-dhxz2                   1/1     Running   0          48m
etcd-k8s-master                            1/1     Running   3          48m
kube-apiserver-k8s-master                  1/1     Running   3          49m
kube-controller-manager-k8s-master         1/1     Running   3          48m
kube-proxy-7pdx5                           1/1     Running   0          47m
kube-proxy-g7h9c                           1/1     Running   0          47m
kube-proxy-l2kqh                           1/1     Running   0          48m
kube-scheduler-k8s-master                  1/1     Running   3          48m
Listing the nodes again, they are now all in the Ready state:
$ kubectl get nodes
NAME          STATUS   ROLES           AGE   VERSION
k8s-master    Ready    control-plane   50m   v1.27.3
k8s-worker1   Ready    <none>          49m   v1.27.3
k8s-worker2   Ready    <none>          48m   v1.27.3
One caveat: if you are using cloud hosts, the provider may only open certain ports by default, in which case you will see something like this:
$ kubectl get pod -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-7bdbfc669-bxn9g   1/1     Running   0          7m53s
calico-node-5m5x8                         0/1     Running   0          7m53s
calico-node-bpphv                         0/1     Running   0          7m53s
calico-node-lbvq8                         0/1     Running   0          7m53s
coredns-5bbd96d687-8xvvq                  1/1     Running   0          16m
coredns-5bbd96d687-pjwrc                  1/1     Running   0          16m
etcd-master.test.com                      1/1     Running   0          16m
kube-apiserver-master.test.com            1/1     Running   0          16m
kube-controller-manager-master.test.com   1/1     Running   0          16m
kube-proxy-5qjvp                          1/1     Running   0          14m
kube-proxy-87bpn                          1/1     Running   0          16m
kube-proxy-bp6zz                          1/1     Running   0          14m
kube-scheduler-master.test.com            1/1     Running   0          16m
In that case, it is enough to open port 179 (the BGP port used by Calico) on all nodes.
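To confirm it is a BGP connectivity problem, describe the calico-node pods; the readiness probe events typically complain about unestablished BGP sessions (k8s-app=calico-node is the label the Calico manifest sets on these pods):
kubectl -n kube-system describe pod -l k8s-app=calico-node | grep -B1 -A3 Unhealthy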
At this point the k8s cluster setup is essentially complete, apart from the Ingress Controller, which comes later.
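As an optional final smoke test (nginx-test is an arbitrary name, not part of the setup above), a throwaway deployment confirms that scheduling and pod networking work end to end:
kubectl create deployment nginx-test --image=nginx --replicas=2
kubectl get pods -o wide      # pods should reach Running on the worker nodes
kubectl delete deployment nginx-test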