1. System overview
Hypervisor: PVE (Proxmox VE)
OS: CentOS 7.9 (2009) x86_64
ISO download: http://isoredirect.centos.org/centos/7/isos/x86_64/
Specs: 4 cores / 8 GB RAM (the official minimum is 2 cores / 2 GB)
| Host | Role |
| --- | --- |
| 192.168.1.32 | master node |
| 192.168.1.33 | node1 (worker) |
| 192.168.1.34 | node2 (worker) |
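Optionally, map the nodes in /etc/hosts on every machine so they can reach one another by name (the hostnames below are illustrative assumptions, not part of the original setup):

```
# append to /etc/hosts on every node (hostnames are assumptions)
192.168.1.32 k8s-master
192.168.1.33 k8s-node1
192.168.1.34 k8s-node2
```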
2. Environment setup
Editor: out of habit I use nano; feel free to use vi or vim instead.
yum install -y nano  # install nano
2.1 Disable the firewall (all nodes)
This is a lab environment, so for convenience the firewall is simply turned off. In production, keep it on unless the hosts are fully isolated from the public network, and open only the ports you actually need.
systemctl stop firewalld     # stop the firewall
systemctl disable firewalld  # do not start it at boot
2.2 Disable SELinux (all nodes)
# edit /etc/selinux/config and set SELINUX=permissive
nano /etc/selinux/config
or
# set SELinux to permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
2.3 Disable swap (all nodes)
# to disable swap permanently, delete or comment out the swap entry in /etc/fstab
nano /etc/fstab
#/dev/mapper/centos-swap swap swap defaults 0 0
Reboot the server after the change:
reboot
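The fstab edit can also be scripted instead of done in nano. A minimal sketch, run here against a scratch copy (the sample entries are assumptions mirroring a default CentOS install); on a real node, point the sed at /etc/fstab as root and pair it with `swapoff -a` so swap is off immediately, without waiting for the reboot:

```shell
# Demonstrate commenting out the swap mount, using a scratch copy of fstab.
FSTAB=$(mktemp)
printf '%s\n' \
  '/dev/mapper/centos-root /    xfs  defaults 0 0' \
  '/dev/mapper/centos-swap swap swap defaults 0 0' > "$FSTAB"

# Comment out every uncommented line containing a whitespace-delimited "swap" field.
sed -ri 's/^([^#].*[[:space:]]swap[[:space:]].*)$/#\1/' "$FSTAB"

cat "$FSTAB"   # the swap line is now prefixed with '#'
rm -f "$FSTAB"
```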
2.4 Time synchronization (all nodes)
yum -y install ntp
systemctl start ntpd
systemctl enable ntpd
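If the default pool servers are slow to reach, /etc/ntp.conf can point at a closer time source instead. The Aliyun NTP host below is my own suggestion, not from the original setup; substitute whatever source your network trusts, then restart ntpd:

```
# /etc/ntp.conf (replace the default "server" lines)
server ntp.aliyun.com iburst
```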
2.5 Enable bridge-nf-call-iptables
Run the following:
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
# set the required sysctl parameters; they persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
# apply the sysctl parameters without rebooting
sudo sysctl --system
Confirm that the `br_netfilter` and `overlay` modules are loaded:
lsmod | grep br_netfilter
lsmod | grep overlay
Confirm that the net.bridge.bridge-nf-call-iptables, net.bridge.bridge-nf-call-ip6tables, and net.ipv4.ip_forward sysctl variables are set to 1:
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
3. Install containerd (all nodes)
3.1 Install containerd
yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum -y install containerd.io
3.2 Generate the default config.toml
containerd config default > /etc/containerd/config.toml
3.3 Configure the systemd cgroup driver
Set SystemdCgroup = true in /etc/containerd/config.toml. The sed below makes the change in place:
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
The resulting section should look like:
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  ...
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
Change the sandbox_image to the Aliyun mirror:
[plugins."io.containerd.grpc.v1.cri"]
  ...
  sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"
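This change can also be made with sed rather than by hand. A minimal sketch, demonstrated on a scratch file so it is safe to try anywhere; on a real node, point it at /etc/containerd/config.toml (the default value shown, registry.k8s.io/pause:3.6, is what containerd 1.6 typically generates, which is an assumption on my part):

```shell
# Rewrite sandbox_image in a containerd config, demonstrated on a scratch copy.
CONF=$(mktemp)
printf '    sandbox_image = "registry.k8s.io/pause:3.6"\n' > "$CONF"

# Replace whatever image is configured with the Aliyun pause:3.9 mirror
# (pause:3.9 is the version kubeadm v1.28 expects).
sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"#' "$CONF"

cat "$CONF"
rm -f "$CONF"
```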
3.4 Start containerd and enable it at boot
systemctl restart containerd && systemctl enable containerd
4. Configure the Aliyun yum repo for Kubernetes
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name = Kubernetes
baseurl = https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled = 1
gpgcheck = 0
repo_gpgcheck = 0
gpgkey = https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
5. Install kubeadm, kubelet, and kubectl via yum
Install kubeadm, kubelet, and kubectl on every server.
5.1 Remove old versions
Skip this step if they were never installed before.
yum -y remove kubelet kubeadm kubectl
5.2 Install kubeadm, kubelet, and kubectl
These instructions target Kubernetes 1.28. The Aliyun yum repo only ships kubelet up to 1.28.0, so the version must be pinned explicitly:
yum install -y kubelet-1.28.0 kubeadm-1.28.0 kubectl-1.28.0 --disableexcludes=kubernetes
systemctl enable kubelet
6. Initialize the master node
kubeadm init \
--apiserver-advertise-address=192.168.1.32 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.28.0 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16
Output like the following means the init succeeded:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.1.32:6443 --token ew2io9.4iw5iw110z880t7f \
--discovery-token-ca-cert-hash sha256:4d7754e0b61037862d8a6c7f07f6467d7c263e7443c38f1f7b57c1eb739d2fe7
Then follow the printed instructions. As a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Or, if you are root:
export KUBECONFIG=/etc/kubernetes/admin.conf
The master node should now show up:
kubectl get node
7. Join the worker nodes
On each worker, run the kubeadm join command printed by your own kubeadm init (the token and hash below are examples; the address must be the master's API server, here 192.168.1.32):
kubeadm join 192.168.1.32:6443 --token bhtq9s.dr8txafrpnncpfj8 --discovery-token-ca-cert-hash sha256:875a3dad7491c653ab7cabcbd1e80cbcc2e91a42263bb09e9703d39cdc490b3c
A common problem here is the command hanging with no output; most of the time the token has expired. Go back to the master and run
kubeadm token create
to create a new token (kubeadm token create --print-join-command prints a complete, ready-to-use join command), substitute it, and retry.
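If you also need a fresh --discovery-token-ca-cert-hash, it can be recomputed from the cluster CA with the openssl pipeline documented for kubeadm. The sketch below runs against a throwaway self-signed certificate so it works anywhere; on the master, point the second command at /etc/kubernetes/pki/ca.crt instead:

```shell
# Compute a sha256 discovery hash from a CA certificate's public key.
DIR=$(mktemp -d)
# Throwaway cert standing in for /etc/kubernetes/pki/ca.crt:
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" \
  -keyout "$DIR/ca.key" -out "$DIR/ca.crt" 2>/dev/null

openssl x509 -pubkey -in "$DIR/ca.crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* //'    # prints the 64-hex-char value for --discovery-token-ca-cert-hash

rm -rf "$DIR"
```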
Both the master and the worker nodes should now be listed:
kubectl get node
8. Deploy the CNI network
Although the master and workers are registered, every node is NotReady at this point because no CNI network plugin is installed yet.
8.1 Download the CNI plugins
wget https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz
mkdir -pv /opt/cni/bin
tar zxvf cni-plugins-linux-amd64-v1.3.0.tgz -C /opt/cni/bin/
8.2 Install flannel (on the master)
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
(If that URL is unreachable, the project has since moved to the flannel-io/flannel repository on GitHub.)
Check the node status again: all nodes should now be Ready. Then, on the master, run
kubectl get pods -n kube-system
to check the pod status; once every pod is Running, the cluster is usable.
九土铺、安裝dashboard
9.1部宿、下載recommended.yaml文件
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
由于外網(wǎng)比較慢藏雏,這兒提供我目前用的,和原版唯一不同的悲酷,就是以下內(nèi)容渣蜗,目的是為了暴露端口,外網(wǎng)直接訪問凭戴。
spec:
  ports:
    - port: 443
      targetPort: 8443
      name: https      # not in the upstream file
      nodePort: 32001  # not in the upstream file
  type: NodePort       # upstream defaults to ClusterIP
kubectl apply -f [your-local-path]/recommended.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
      name: https
      nodePort: 32001
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque
---
kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
  # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.7.0
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.8
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
            - mountPath: /tmp
              name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}
9.2 Create an admin user
Create dashboard-adminuser.yaml locally:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: admin-user
    namespace: kubernetes-dashboard
kubectl apply -f [your-file-path]/dashboard-adminuser.yaml
kubectl -n kubernetes-dashboard create token admin-user
That's it, the installation is complete! The dashboard should now be reachable at https://<any-node-ip>:32001 (accept the browser's self-signed certificate warning), and the token printed by the command above logs you in.