Setup Guide

Prepare at least three machines. If your computer is powerful enough you can run three virtual machines; if you are short on CPU or memory, you can rent instances from Alibaba Cloud or Tencent Cloud instead.

Kubernetes can be deployed with a single master or multiple masters. For learning purposes, three machines are enough: one master and two nodes.

Requirements for each machine:
- At least 2 CPU cores
- At least 2 GB of RAM
- Network connectivity between all three machines
- Firewall disabled; otherwise you will hit many problems later and have to open ports one by one
- SELinux disabled
- Swap disabled; the machines are underpowered as it is, so turn the swap partition off
- Clocks synchronized
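The checklist above can be verified with a short script before you start. A minimal sketch, assuming a Linux host with `nproc` and `/proc/meminfo` available; the thresholds mirror the requirements listed here:

```shell
#!/usr/bin/env bash
# Preflight sketch: warn if a machine does not meet the requirements above.
cores=$(nproc)
mem_mb=$(awk '/MemTotal/ {print int($2/1024)}' /proc/meminfo)
swap_kb=$(awk '/SwapTotal/ {print $2}' /proc/meminfo)

echo "CPU cores: ${cores}, memory: ${mem_mb} MB, swap: ${swap_kb} kB"

[ "${cores}" -ge 2 ]     || echo "WARN: need at least 2 CPU cores"
[ "${mem_mb}" -ge 1900 ] || echo "WARN: need at least 2 GB of RAM"
[ "${swap_kb}" -eq 0 ]   || echo "WARN: swap is still enabled (swapoff -a)"
```

Run it on each machine; network connectivity between the machines still has to be checked separately (for example with ping).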
You can also follow the official installation guide: https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
Machine list

IP | hostname
---|---
10.0.4.9 | master
10.0.4.11 | node01
10.0.4.13 | node02
Set the hostnames

# run on 10.0.4.9
hostnamectl set-hostname master
# run on 10.0.4.11
hostnamectl set-hostname node01
# run on 10.0.4.13
hostnamectl set-hostname node02
Configure /etc/hosts

Run on every machine:

cat >> /etc/hosts <<EOF
10.0.4.9 master
10.0.4.11 node01
10.0.4.13 node02
EOF
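Note that re-running the snippet above appends duplicate lines. A sketch of an idempotent variant (the helper name `ensure_host` is my own; point it at /etc/hosts on the real nodes):

```shell
#!/usr/bin/env bash
# Append a hosts entry only if the hostname is not already present.
ensure_host() {            # usage: ensure_host <file> <ip> <name>
  local file=$1 ip=$2 name=$3
  grep -qE "[[:space:]]${name}([[:space:]]|\$)" "$file" \
    || printf '%s %s\n' "$ip" "$name" >> "$file"
}

# Demonstrate against a scratch file; use /etc/hosts on a real node.
f=$(mktemp)
for spec in "10.0.4.9 master" "10.0.4.11 node01" "10.0.4.13 node02"; do
  ensure_host "$f" $spec
  ensure_host "$f" $spec   # second call is a no-op
done
cat "$f"
```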
Synchronize network time

It is best to synchronize the clocks of all machines to avoid problems later.

Run on every machine:

# check whether ntpdate is present
which ntpdate
# install it if missing
yum install ntpdate -y

Set every machine to the Shanghai timezone:
ln -snf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
bash -c "echo 'Asia/Shanghai' > /etc/timezone"
# update the time against Alibaba's NTP server
ntpdate ntp1.aliyun.com

Check the current time:
[root@node01 ~]# date
Tue Nov 1 00:08:10 CST 2022
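An aside: `ntpdate` is deprecated and no longer shipped on newer RHEL-family releases. A sketch of the chrony-based alternative (package and service names assume a systemd RHEL-family distro):

```shell
yum install -y chrony
systemctl enable --now chronyd
# check sync status; edit the server/pool lines in /etc/chrony.conf to use
# e.g. ntp1.aliyun.com if the default servers are slow to reach
chronyc tracking
```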
Disable SELinux

Run on all nodes; this lets containers read the host filesystem:

# disable temporarily (until reboot)
setenforce 0
# disable permanently
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
# alternatively, permissive mode also effectively disables enforcement
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
systemctl status firewalld
If you do not disable the firewall, you must open the ports the cluster components use to talk to each other:

Component | Ports
---|---
api-server | 8080, 6443
controller-manager | 10252
scheduler | 10251
kubelet | 10250, 10255
etcd | 2379, 2380
dns | 53 (TCP and UDP)
Disable swap

Turning swap off improves performance, and by default kubelet refuses to start while swap is enabled anyway.

# disable swap temporarily
swapoff -a
# disable permanently (comments out the swap line in /etc/fstab)
sed -ri 's/.*swap.*/#&/' /etc/fstab
[root@node01 ~]# free -m
total used free shared buff/cache available
Mem: 3694 596 435 0 2662 2806
Swap: 1024 4 1020
[root@node01 ~]# swapoff -a
[root@node01 ~]# sed -ri 's/.*swap.*/#&/' /etc/fstab
[root@node01 ~]# free -m
total used free shared buff/cache available
Mem: 3694 596 434 0 2663 2806
Swap: 0 0 0
Configure the Kubernetes package repository

Configure on all nodes:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[k8s]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
If you have a local yum mirror, you can point baseurl at file://dir for an offline install.
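For example, an offline repo file might look like the following. A sketch only: the directory /opt/k8s-rpms is a placeholder, and it must already contain the RPMs plus repodata generated with createrepo:

```shell
cat <<EOF > /etc/yum.repos.d/kubernetes-local.repo
[k8s-local]
name=Kubernetes (local mirror)
baseurl=file:///opt/k8s-rpms
enabled=1
gpgcheck=0
EOF
```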
Install kubeadm, kubelet and kubectl

The latest release, 1.24, has removed dockershim, so pin an older version if you still need Docker. We will use 1.23.9:
yum install -y kubelet-1.23.9 kubectl-1.23.9 kubeadm-1.23.9
- kubeadm: the command to bootstrap the cluster.
- kubelet: the component that runs on every node in the cluster and starts Pods and containers.
- kubectl: the command-line tool for talking to the cluster.
A reminder from the official docs:

kubeadm will not install or manage kubelet or kubectl for you, so you need to make sure they match the version of the control plane installed by kubeadm. If you do not, you risk version skew, which can lead to unexpected bugs and problems. That said, one minor version of skew between the control plane and the kubelet is supported, as long as the kubelet version never exceeds the API server version. For example, a 1.7.0 kubelet is fully compatible with a 1.8.0 API server, but not the other way around.
Verify that the correct version is installed:
[root@master k8s]# yum info kubeadm
Loaded plugins: fastestmirror, langpacks
Repository epel is listed more than once in the configuration
Loading mirror speeds from cached hostfile
Installed Packages
Name : kubeadm
Arch : x86_64
Version : 1.23.9
Release : 0
Size : 43 M
Repo : installed
From repo : k8s
Summary : Command-line utility for administering a Kubernetes cluster.
URL : https://kubernetes.io
License : ASL 2.0
Description : Command-line utility for administering a Kubernetes cluster.
Available Packages
Name : kubeadm
Arch : x86_64
Version : 1.25.3
Release : 0
Size : 9.8 M
Repo : k8s
Summary : Command-line utility for administering a Kubernetes cluster.
URL : https://kubernetes.io
License : ASL 2.0
Description : Command-line utility for administering a Kubernetes cluster.
Depending on your situation you can start kubelet now; I started it later.

systemctl start kubelet
systemctl enable kubelet
Set the cgroup driver

Docker's default cgroup driver is cgroupfs; change it to match the one Kubernetes uses:

# add this line to /etc/docker/daemon.json
# it sets Docker's cgroup driver to systemd, as officially recommended;
# Docker and Kubernetes must use the same cgroup driver, or kubelet will not start
"exec-opts": ["native.cgroupdriver=systemd"],
# restart Docker
systemctl daemon-reload
systemctl restart docker
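For reference, a complete /etc/docker/daemon.json might end up looking like this. A minimal sketch: the log-driver settings are optional extras I added, not something kubeadm requires.

```shell
cat <<'EOF' > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" }
}
EOF
systemctl daemon-reload && systemctl restart docker
```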
Deploy the master node

- apiserver-advertise-address: the master node's address
- image-repository: use Alibaba Cloud's registry, otherwise image pulls are very slow
- kubernetes-version: the Kubernetes version
Everything else is left at its default:
kubeadm init \
--apiserver-advertise-address=10.0.4.9 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version=v1.23.9 \
--pod-network-cidr=10.244.0.0/16 \
--service-cidr=10.96.0.0/16
# save the kubeadm join command that is printed, so you don't have to hunt for it later
kubeadm join 10.0.4.9:6443 --token x22atb.reldvil72yia0ac4 \
--discovery-token-ca-cert-hash sha256:c32a489c444bf5242543811c1aad5b5925693341699756ea4523a4228da6e5ff
# the output also suggests these commands, which we will need later
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
# if you lose the token, generate a new join command with:
kubeadm token create --print-join-command
Alternative deployment method (optional)

# export the default configuration
kubeadm config print init-defaults > init-kubeadm.conf
# edit the defaults as needed, then initialize with:
kubeadm init --config init-kubeadm.conf
Check the version
kubectl version
# error: The connection to the server localhost:8080 was refused - did you specify the right host or port
# we skipped these commands earlier; run them now
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
# run kubectl version again; the error is gone
# to use kubectl from other machines,
# copy $HOME/.kube/config to them
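For instance, to run kubectl from node01 as well (a sketch that assumes root SSH access from the master to the node; paths as in this guide):

```shell
ssh root@node01 'mkdir -p /root/.kube'
scp /etc/kubernetes/admin.conf root@node01:/root/.kube/config
```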
Check the nodes
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master NotReady control-plane,master 36m v1.23.9
# the status is NotReady; check the logs to see why
tail -f /var/log/messages
# the log shows that a network plugin needs to be installed
Nov 1 01:06:05 VM-4-9-centos kubelet: E1101 01:06:05.769861 7352 kubelet.go:2391] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
Install a network plugin so that Pods can communicate with each other. I chose kube-flannel here; the Calico CNI plugin is another option.

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

This usually cannot be downloaded directly from mainland China without a proxy; here is the copy I downloaded:
---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        #image: flannelcni/flannel-cni-plugin:v1.1.0 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        #image: flannelcni/flannel:v0.20.0 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel:v0.20.0
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        #image: flannelcni/flannel:v0.20.0 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel:v0.20.0
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
# As the YAML shows, the Network value here matches the --pod-network-cidr=10.244.0.0/16
# we passed when initializing the master.
# If you use kube-flannel, the defaults can simply be left as they are:
net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "vxlan"
    }
  }
Install the network plugin with kubectl
[root@master k8s]# kubectl apply -f kube-flannel.yml
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
After waiting a while, check the nodes again; the master is now Ready.
[root@master k8s]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready control-plane,master 50m v1.23.9
# the logs look normal
[root@master k8s]# tail -f /var/log/messages
Nov 1 01:42:27 VM-4-9-centos ntpd[655]: Listen normally on 9 cni0 10.244.0.1 UDP 123
Nov 1 01:42:27 VM-4-9-centos ntpd[655]: Listen normally on 10 veth91dca84c fe80::e86e:34ff:fe92:50a7 UDP 123
Nov 1 01:42:27 VM-4-9-centos ntpd[655]: Listen normally on 11 cni0 fe80::d097:f0ff:fed3:e444 UDP 123
Nov 1 01:42:27 VM-4-9-centos ntpd[655]: Listen normally on 12 veth77048412 fe80::9cf4:6cff:fee8:5547 UDP 123
Nov 1 01:43:01 VM-4-9-centos systemd: Started Session 4243 of user root.
Nov 1 01:44:01 VM-4-9-centos systemd: Started Session 4244 of user root.
Check the Pods
[root@master k8s]# kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-flannel kube-flannel-ds-lpfvv 1/1 Running 0 4m29s
kube-system coredns-6d8c4cb4d-96hvd 1/1 Running 0 53m
kube-system coredns-6d8c4cb4d-wrm4s 1/1 Running 0 53m
kube-system etcd-master 1/1 Running 0 53m
kube-system kube-apiserver-master 1/1 Running 0 53m
kube-system kube-controller-manager-master 1/1 Running 0 53m
kube-system kube-proxy-7kqc5 1/1 Running 0 53m
kube-system kube-scheduler-master 1/1 Running 0 53m
If the installation fails, you can run kubeadm reset to restore the node to a clean state and try again.
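A sketch of a fuller cleanup before retrying; the steps beyond `kubeadm reset` (removing leftover CNI config, the old kubeconfig, and iptables rules, which reset itself does not touch) are common practice, but adjust them to your setup:

```shell
kubeadm reset -f
rm -rf /etc/cni/net.d "$HOME/.kube/config"
# kubeadm reset does not flush iptables rules; clear them manually if needed
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
systemctl restart kubelet
```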
Join the worker nodes to the cluster

# run the kubeadm join command recorded earlier
kubeadm join 10.0.4.9:6443 --token x22atb.reldvil72yia0ac4 \
--discovery-token-ca-cert-hash sha256:c32a489c444bf5242543811c1aad5b5925693341699756ea4523a4228da6e5ff
# error: iptables must be allowed to see bridged traffic
[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
Let iptables see bridged traffic

Set net.bridge.bridge-nf-call-iptables = 1 so that iptables on each Linux node can see bridged traffic correctly:
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
# apply the settings without rebooting
sudo sysctl --system
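You can confirm that the modules are loaded and the settings took effect; all three sysctl values should print 1:

```shell
lsmod | grep -e overlay -e br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
```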
If you see a hint such as systemctl enable kubelet.service in the output, just copy and run it.
Run the join again

# if the token was lost or has expired, generate a new one:
kubeadm token create --print-join-command --ttl=0
[root@node01 ~]# kubeadm join 10.0.4.9:6443 --token q1g5bp.qtc45zl0umpu1viy --discovery-token-ca-cert-hash sha256:c32a489c444bf5242543811c1aad5b5925693341699756ea4523a4228da6e5ff
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
###############
[root@node01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready control-plane,master 125m v1.23.9
node01 NotReady <none> 53s v1.23.9
# after the other node joins as well, wait a bit and everything is Ready
[root@node02 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready control-plane,master 129m v1.23.9
node01 Ready <none> 5m47s v1.23.9
node02 Ready <none> 3m44s v1.23.9
Start kubelet

If it fails to start, check the logs with journalctl -f -u kubelet.

# start it now and enable it at boot
systemctl enable --now kubelet
Install the dashboard

Download (a proxy may be needed): https://raw.githubusercontent.com/kubernetes/dashboard/v2.6.0/aio/deploy/recommended.yaml

I changed the Service definition to add type: NodePort and expose port 30000; without that, the dashboard is only reachable through an Ingress:
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30000
  selector:
    k8s-app: kubernetes-dashboard
[root@master k8s]# kubectl apply -f kube-dashboard.yml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
[root@master k8s]# kubectl get pods -A | grep dashboard
kubernetes-dashboard dashboard-metrics-scraper-6f669b9c9b-prj4m 1/1 Running 0 63s
kubernetes-dashboard kubernetes-dashboard-67b9478795-zzrds 1/1 Running 0 63s
Create an admin user for the dashboard

Apply kube-dashboard-adminuser.yml:
[root@master k8s]# kubectl apply -f kube-dashboard-adminuser.yml
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
# kube-dashboard-adminuser
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
# find the Pod and Service IPs
kubectl get pods,svc -n kubernetes-dashboard -o wide
Get the login token:
[root@master k8s]# kubectl describe secrets -n kubernetes-dashboard admin-user-token | grep token | awk 'NR==3{print $2}'
eyJhbGciOiJSUzI1NiIsImtpZCI6Imh3NGJpbjlZQjZubDg0OWY2Ri1xMDdLSkV6dC1fM2MyMzVmVW5XZnhlelkifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXd4bnNxIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJkOGJkZmNhNi1jZDAxLTQzOTgtODE1Mi0wZGYyNGYxOTQzMzMiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.qBmPRz_8fUjYbFAn9jMTjHtZMeBvEQPRBtkwC5ZuYE4CNU3-6z81G3c8uuWxrZvgEei_BYUXrYxlChMksQkMMhn6xjR3o1PhLEHAz7o6Vv0jeYfXY0-aFe2PRzSc3aZjoEHhz7-G5OMSiGU9W1_Ltg7PqetwfXSPo39rIweo4P0AKY689IChq3nZXDX2MjExvuqVsCVgRSilPf1azUsZLC_R-cwHfOloPDgBWmbKDatbL_LqRtmMQ705YQH_G89I257Mf2Ki-KsCB8sm7uqrt1EwU4ovU5UEDk05hwxcEXIay2m5vXyVOESysJMR8g9j2F4B8ulv0ixpE41-eC0tlQ
Open https://<node-ip>:30000 in a browser and log in with this token; the login succeeds.
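One last aside: the `describe secrets` approach above works on 1.23, but from Kubernetes 1.24 on, ServiceAccount token Secrets are no longer created automatically. On newer clusters you would mint a short-lived token instead:

```shell
kubectl -n kubernetes-dashboard create token admin-user
```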