Environment: CentOS 7.9 + Kubernetes 1.23.8 + Docker 20.10.17 + VirtualBox 6.1
Written: 2022-06-30
Deployment method: kubeadm
Add-ons: Calico (networking), Dashboard
一、Prerequisites and notes
We use CentOS 7.9 here. The OS version matters a lot for Kubernetes, so adjust to your actual environment; some systems also need a kernel upgrade.
We prepared three VMs with networking configured (SSH ports mapped):
k8s-master01 192.168.56.105
k8s-slave02  192.168.56.106
k8s-slave03  192.168.56.107
hosts configuration (required on every machine)
vim /etc/hosts
192.168.56.105 k8s-master01
192.168.56.106 k8s-slave02
192.168.56.107 k8s-slave03
hostname configuration (required on every machine; taking 192.168.56.105 as the example, set the hostname to "k8s-master01" so it matches the hosts entry)
If the hostname is not set it defaults to localhost.localdomain, and Kubernetes fails at runtime with: Error getting node" err="node \"localhost.localdomain\" not found
# Set this machine's hostname
hostnamectl set-hostname k8s-master01
# Check the current hostname
hostname
System requirements: at least 2 CPUs, 2 GB RAM and 20 GB disk; kubeadm init fails with fewer than 2 CPUs. 4 GB RAM is recommended for the master node.
There are several ways to install Kubernetes:
- minikube: single-node cluster, for testing
- kubeadm: the approach used here (development environments, small fleets of up to a few dozen machines)
- kubespray: an official Kubernetes community tool
- Fully manual: binary installation (ops teams)
- Fully automated: Rancher or KubeSphere (large production environments, hundreds to tens of thousands of machines)
Kubernetes health checks depend on a few ports. To avoid network problems, open the following ports on the **master** VM:
6443 (kube-apiserver, the main one)
2379, 2380 (etcd)
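If you prefer to keep firewalld running instead of disabling it (as done in the steps below), a minimal sketch of opening these ports with firewall-cmd:
# Open the control-plane ports (run on the master; assumes firewalld stays enabled)
firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=2379-2380/tcp
firewall-cmd --reload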
The certificates kubeadm generates (other than the CA itself, which lasts 10 years) are valid for one year, so this setup is not recommended for production as-is; renew or manage the certificates manually.
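To check when the kubeadm-managed certificates expire, and to renew them, kubeadm has built-in subcommands (`kubeadm certs` is available in 1.23; older releases used `kubeadm alpha certs`):
# List the expiration dates of all kubeadm-managed certificates
kubeadm certs check-expiration
# Renew all of them (the control-plane static pods must be restarted afterwards)
kubeadm certs renew all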
二企孩、安裝
1. 初始準(zhǔn)備
以下為實(shí)際安裝步驟與流程:
# Install base dependencies
yum -y install wget vim net-tools ntpdate bash-completion
# Set this machine's hostname
hostnamectl set-hostname k8s-master01
# Check the current hostname
hostname
1. Clock synchronization
# Sync time from an Alibaba Cloud NTP server
ntpdate time1.aliyun.com
# Remove the local time link and set the timezone to Shanghai
rm -rf /etc/localtime
ln -s /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
# Check the time
date -R || date
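ntpdate performs a one-shot sync. For continuous synchronization you can run chrony instead, which ships with CentOS 7; a minimal sketch:
# Install and enable chrony for ongoing time sync
yum -y install chrony
systemctl enable --now chronyd
# Check the sync status
chronyc tracking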
2. Disable the firewall and SELinux
systemctl stop firewalld
systemctl disable firewalld
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0
3. Disable swap
# Turn swap off for the current boot
swapoff -a
# Comment out (or delete) the swap mount in /etc/fstab to disable it permanently
sed -i '/swap/s/^/#/' /etc/fstab
# Verify: the Swap line at the bottom should be all zeros
free -m
#----------------start----------------------
total used free shared buff/cache available
Mem: 1837 721 69 10 1046 944
Swap: 0 0 0
#-----------------end---------------------
4. Bridge filtering
# Bridge filtering and IP forwarding settings
vim /etc/sysctl.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1
net.ipv4.ip_forward=1
net.ipv4.ip_forward_use_pmtu = 0
# Apply the settings
sysctl --system
# Verify
sysctl -a | grep "ip_forward"
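Note that the net.bridge.* keys only exist once the br_netfilter kernel module is loaded; if `sysctl --system` complains that they are missing, load the module first. A sketch that also persists the module across reboots (the file name is my choice):
# Load the bridge netfilter module now, and on every boot
modprobe br_netfilter
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf
# Re-apply the sysctl settings
sysctl --system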
5. Enable IPVS (since Kubernetes 1.8, kube-proxy supports IPVS, which performs better and is easier to trace than iptables). This step is optional; without it, iptables is used by default.
# Install IPVS tooling
yum -y install ipset ipvsadm
# Create the ipvs.modules file
vi /etc/sysconfig/modules/ipvs.modules
# with the following content:
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
# Make it executable, run it, and check that the modules are loaded
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
# Reboot and check that the modules persist
reboot
lsmod | grep ip_vs_rr
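Loading these modules does not by itself switch kube-proxy to IPVS; kube-proxy still defaults to iptables. Once the cluster is up (after kubeadm init below), you can switch modes via the kube-proxy ConfigMap; a sketch:
# Set mode: "ipvs" in the config section
kubectl -n kube-system edit configmap kube-proxy
# Recreate the kube-proxy pods so they pick up the new mode
kubectl -n kube-system delete pod -l k8s-app=kube-proxy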
6. Edit the hosts file and add name resolution
vim /etc/hosts
192.168.56.105 k8s-master01
192.168.56.106 k8s-slave02
192.168.56.107 k8s-slave03
2. Installing Docker
Install Docker from the docker-ce repo:
# Install yum-utils
yum install -y yum-utils
# Add the docker-ce repo
yum-config-manager \
  --add-repo \
  https://download.docker.com/linux/centos/docker-ce.repo
# Install Docker
yum -y install docker-ce docker-ce-cli containerd.io
# Start Docker; enable is required because Kubernetes checks docker.service
systemctl enable docker && systemctl start docker
Configure a Docker registry mirror
# Create the docker config directory
mkdir -p /etc/docker
# Set the registry mirror; exec-opts must be set or kubelet fails to start (cgroupfs vs systemd cgroup driver mismatch)
tee /etc/docker/daemon.json <<-'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://fl791z1h.mirror.aliyuncs.com"]
}
EOF
# Reload and restart Docker to apply the mirror settings
systemctl daemon-reload && systemctl restart docker
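To confirm Docker actually picked up the systemd cgroup driver and the mirror, check docker info:
# Should print "Cgroup Driver: systemd"
docker info | grep -i "cgroup driver"
# Should list the Aliyun mirror under "Registry Mirrors"
docker info | grep -A1 -i "registry mirrors"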
3. Installing Kubernetes
Configure the Kubernetes yum repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Note: the Alibaba mirror's GPG data is not synced with upstream (upstream offers no sync mechanism), so the GPG check on the repo index may fail; in that case install with --nogpgcheck as below.
Install Kubernetes, and pin the version; otherwise the latest is installed (Kubernetes changes a lot between releases; we use 1.23.8 here).
# List the available versions (--nogpgcheck skips the GPG check)
yum list --nogpgcheck --showduplicates kubeadm --disableexcludes=kubernetes
# Pick the version to install. We use 1.23.8; the latest at the time of writing is 1.24.x, which has problems when used with Docker (dockershim was removed).
# Install the kubelet, kubeadm and kubectl packages. Note that the Docker and Kubernetes versions are related; stay within the supported range.
# yum install --nogpgcheck kubelet-1.23.8 kubeadm-1.23.8 kubectl-1.23.8
yum -y install --nogpgcheck kubelet-1.23.8 kubeadm-1.23.8 kubectl-1.23.8
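To keep a later `yum update` from silently upgrading these pinned packages, the yum versionlock plugin can hold them; a sketch:
# Lock kubelet/kubeadm/kubectl at the installed version
yum -y install yum-plugin-versionlock
yum versionlock add kubelet kubeadm kubectl
# List the current locks
yum versionlock list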
After installation, check the versions:
# Check the kubectl version
kubectl version
##########show start############
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.8", ...}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
###########show end###########
# (the "connection to the server localhost:8080" error above is expected; the cluster is not initialized yet)
# Check the kubeadm version
kubeadm version
##########show start############
kubeadm version: &version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.8", ...}
##########show end############
Start the kubelet service
# Enable and start kubelet (it restarts in a loop until kubeadm init runs; that is expected)
systemctl enable kubelet && systemctl start kubelet
# Check its status
systemctl status kubelet
# If you initialize without pinning a version, the latest Kubernetes is used, and init may fail as below; fix that first
# Possible error when running init:
#########error output--start###########
[init] Using Kubernetes version: v1.24.2
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR CRI]: container runtime is not running: output: E0627 16:44:11.772277 16359 remote_runtime.go:925] "Status from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
time="2022-06-27T16:44:11+08:00" level=fatal msg="getting status of runtime: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
#########error output--end###########
# Fix:
vim /etc/containerd/config.toml
# Remove "cri" from the disabled_plugins array
# Restart containerd
systemctl restart containerd
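The same edit can be scripted. A sketch, assuming the stock config.toml shipped with the containerd.io package, which contains the line disabled_plugins = ["cri"]:
# Non-interactive equivalent of the vim edit above
sed -i 's/disabled_plugins = \["cri"\]/disabled_plugins = []/' /etc/containerd/config.toml
systemctl restart containerd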
Initialize the Kubernetes master (run on the master node only; this is a single-master, multi-worker setup)
# Initialize
kubeadm init \
--image-repository registry.aliyuncs.com/google_containers \
--apiserver-advertise-address=192.168.56.105 \
--service-cidr=10.222.0.0/16 \
--pod-network-cidr=10.244.0.0/16
# Initialization takes a while; images have to be pulled
#---------------output start---------------------
I0628 15:14:20.293469 5655 version.go:255] remote version is much newer: v1.24.2; falling back to: stable-1.23
[init] Using Kubernetes version: v1.23.8
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost.localdomain] and IPs [10.222.0.1 192.168.56.105]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost localhost.localdomain] and IPs [192.168.56.105 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost localhost.localdomain] and IPs [192.168.56.105 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 4.502756 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 4yipfl.er9r8aqnq0hpd8a4
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.56.105:6443 --token 4yipfl.er9r8aqnq0hpd8a4 \
--discovery-token-ca-cert-hash sha256:afa76da3ced528e667374693fb4b0edd160530c251471ae11ece13c65d3d162a
#---------------output end---------------------
Initialization parameters explained
Parameter | Example | Meaning |
---|---|---|
--kubernetes-version | v1.23.8 | Kubernetes version |
--apiserver-advertise-address | 192.168.56.105 | IP of the current node |
--image-repository | registry.aliyuncs.com/google_containers | image registry to pull from |
--service-cidr | 10.222.0.0/16 | Service network CIDR |
--pod-network-cidr | 10.244.0.0/16 | Pod-to-pod network CIDR; must not overlap with --service-cidr |
That completes kubeadm init on the master node.
Some finishing work remains; as the kubeadm init log suggests, run:
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
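Since bash-completion was installed in step 1, kubectl tab completion can be enabled now as a convenience; a minimal sketch:
# Enable kubectl tab completion for the current user
echo 'source <(kubectl completion bash)' >> ~/.bashrc
source ~/.bashrc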
Now we can query the cluster nodes.
# List the nodes
kubectl get nodes
#------------output start--------------
NAME STATUS ROLES AGE VERSION
k8s-master01 NotReady control-plane,master 6m21s v1.23.8
#------------output end----------------
The STATUS is NotReady because the network add-on is not installed yet, so Pods cannot communicate.
Check the Pods in all namespaces
kubectl get pods --all-namespaces
#------------output start--------------
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-6d8c4cb4d-5bjmr 0/1 Pending 0 12m
kube-system coredns-6d8c4cb4d-7w72l 0/1 Pending 0 12m
kube-system etcd-k8s-master01 1/1 Running 0 12m
kube-system kube-apiserver-k8s-master01 1/1 Running 0 12m
kube-system kube-controller-manager-k8s-master01 1/1 Running 0 12m
kube-system kube-proxy-rcsfg 1/1 Running 0 12m
kube-system kube-scheduler-k8s-master01 1/1 Running 0 12m
#------------output end----------------
The DNS pods (coredns) are still Pending, again because the network add-on is missing.
三村象、網(wǎng)絡(luò)插件的安裝
下面我們進(jìn)行網(wǎng)絡(luò)組件的安裝
1. 常用網(wǎng)絡(luò)插件
這里只簡(jiǎn)單說(shuō)明下笆环,推薦使用calico。
flannel 和 calico 是常用的網(wǎng)絡(luò)插件厚者。
calico 的性能更好躁劣,使用場(chǎng)景更廣一些。
flannel 沒(méi)有網(wǎng)絡(luò)策略库菲,不能控制pod的訪問(wèn)账忘。
這里我們用calico插件
2. Installing the plugin
# Install Calico
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
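# Note (assumption): the stock calico.yaml defaults its IP pool to 192.168.0.0/16,
# while our --pod-network-cidr is 10.244.0.0/16. To make them match explicitly,
# download the manifest, uncomment CALICO_IPV4POOL_CIDR, and set it before applying:
# wget https://docs.projectcalico.org/manifests/calico.yaml
# vim calico.yaml   # set CALICO_IPV4POOL_CIDR to "10.244.0.0/16"
# kubectl apply -f calico.yaml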
# Installation is slow; be patient
# Watch the pods until every STATUS is Running
kubectl get pod --all-namespaces
#----------output start-------------
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-7bc6547ffb-2bnbh 1/1 Running 0 5m57s
kube-system calico-node-rnhcv 1/1 Running 0 5m57s
kube-system coredns-6d8c4cb4d-5bjmr 1/1 Running 0 90m
kube-system coredns-6d8c4cb4d-7w72l 1/1 Running 0 90m
kube-system etcd-k8s-master01 1/1 Running 0 91m
kube-system kube-apiserver-k8s-master01 1/1 Running 0 91m
kube-system kube-controller-manager-k8s-master01 1/1 Running 0 91m
kube-system kube-proxy-rcsfg 1/1 Running 0 90m
kube-system kube-scheduler-k8s-master01 1/1 Running 0 91m
#----------output end---------------
# Check the node status
kubectl get nodes
#----------output start-------------
k8s-master01 Ready control-plane,master 99m v1.23.8
#----------output end---------------
四熙宇、安裝dashboard
1.注意事項(xiàng)
dashboard 在github上開(kāi)源地址:
dashboard 的版本和 k8s的版本有關(guān)系鳖擒,因?yàn)槊總€(gè)k8s版本改動(dòng)較大,所以在選擇dashboard時(shí)盡量選擇兼容的版本烫止,否則某些功能有可能使用異常蒋荚。
版本的選擇方式:
github 點(diǎn)擊
releases
-
查看dashboard版本的兼容情況,并選擇對(duì)應(yīng)版本
image.png
3.找到支持的兼容版本馆蠕,復(fù)制安裝語(yǔ)句并執(zhí)行
2.安裝
# Install the Dashboard
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.1/aio/deploy/recommended.yaml
#----------output start-------------
namespace/kubernetes-dashboard configured
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
#----------output end---------------
# Configure the Dashboard service (expose it via NodePort)
vim k8s-dashboard.yaml
#----------file content start-------------
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      # externally exposed port (NodePort range 30000-32767)
      nodePort: 30443
  selector:
    k8s-app: kubernetes-dashboard
#----------file content end---------------
# Apply the yaml
kubectl apply -f k8s-dashboard.yaml
#----------output start-------------
service/kubernetes-dashboard created
#----------output end---------------
# The Dashboard is now deployed; let's verify
# List the pods; both should show STATUS `Running`
kubectl get pods -n kubernetes-dashboard
#----------output start-------------
NAME READY STATUS RESTARTS AGE
dashboard-metrics-scraper-799d786dbf-j85zw 1/1 Running 0 18m
kubernetes-dashboard-fb8648fd9-qcrzk 1/1 Running 0 18m
#----------output end---------------
# List the services
kubectl get svc -n kubernetes-dashboard -o wide
#----------output start-------------
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
dashboard-metrics-scraper ClusterIP 10.222.211.134 <none> 8000/TCP 19m k8s-app=dashboard-metrics-scraper
kubernetes-dashboard NodePort 10.222.156.236 <none> 443:30443/TCP 19m k8s-app=kubernetes-dashboard
#----------output end---------------
As noted above, the Dashboard is exposed on NodePort 30443, so the VM must expose that port to the outside as well.
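With the host-only adapter used here (the 192.168.56.x network) the host can reach the NodePort directly. If your VM sits behind VirtualBox NAT instead, forward the port on the host; a sketch, assuming the VM is named k8s-master01 in VirtualBox:
# Port-forward host:30443 to the running VM (NAT adapter 1)
VBoxManage controlvm "k8s-master01" natpf1 "k8s-dashboard,tcp,,30443,,30443"
# Or, for a powered-off VM:
# VBoxManage modifyvm "k8s-master01" --natpf1 "k8s-dashboard,tcp,,30443,,30443"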
Once that is set up, open https://192.168.56.105:30443 (note: it must be https).
3. RBAC setup and login
The Dashboard is installed, but logging in requires a token; here is how to generate and configure one.
# Keep the config files in a fixed location
mkdir -p /opt/kube-dashboard/conf && cd /opt/kube-dashboard/conf
# Create the RBAC config file
vim admin-user-dashboard.yaml
#----------file content start-------------
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: cluster-view
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - get
  - list
  - watch
- nonResourceURLs:
  - '*'
  verbs:
  - get
  - list
  - watch
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: view-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: view-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-view
subjects:
- kind: ServiceAccount
  name: view-user
  namespace: kubernetes-dashboard
#----------file content end---------------
# Apply the RBAC config
kubectl apply -f admin-user-dashboard.yaml
# Generate the login token
# admin-user is the ServiceAccount name
kubectl describe secret admin-user -n kubernetes-dashboard
# Or (the command above is simpler):
# kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"
# The output looks as follows; copy the token value
#----------------------------------------------
Name: admin-user-token-2gt2w
Namespace: kubernetes-dashboard
Labels: <none>
Annotations: kubernetes.io/service-account.name: admin-user
kubernetes.io/service-account.uid: ce38f197-f395-45ed-9385-104707df07c7
Type: kubernetes.io/service-account-token
Data
====
namespace: 20 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6ImgzM0taa283elhJbjFvc2NFNTJULXVDTGNURjZJaV9zSzV0X3U1VkgwUFEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTJndDJ3Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJjZTM4ZjE5Ny1mMzk1LTQ1ZWQtOTM4NS0xMDQ3MDdkZjA3YzciLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.RoYrf2VCgGn8RiHVgBvMS7El4DWa6XmAT_Prrjs_Kk2nOFOjG2z3i4I_1Db9S6Jq0ZC-L2lCeGDQSdMmOBW1eYMNw6-vSwteKR_Un7GhshPLK4AJML3CK8uHsgYnhM64EinyTcdbBj9ade6OdJ3ypFi_Dw_oms4CUnuD57zLynZnh_JGMj-HJEMmtjBDV_FE-yJUn7_Y626e5Uw92p_xcW9up68TPEMuOSTedlxHJ61jpGf0H8ZGdinslvgpEbp7jUJeXoU_caLHhKGc28pQzJgjtHkatHJS7HmYdcPmSSON-2HZztmNlNfHI0luEfEg2KCAU3hxQeDKMw89jye1eg
ca.crt: 1099 bytes
#----------------------------------------------
Copy the token and log in to the Dashboard; the overview page appears.
五竭缝、子節(jié)點(diǎn)加入
子節(jié)點(diǎn)需要執(zhí)行 一
、二
兩點(diǎn)操作沼瘫,注意二
點(diǎn)我們除了kubeadm init
操作外其他的都執(zhí)行
子節(jié)點(diǎn)加入的命令在 step3-》k8s安裝-》kubeadm init
輸出的日志中我們可以找到抬纸。
kubeadm join 192.168.56.105:6443 --token 4yipfl.er9r8aqnq0hpd8a4 \
--discovery-token-ca-cert-hash sha256:afa76da3ced528e667374693fb4b0edd160530c251471ae11ece13c65d3d162a
The full sequence on each worker node:
# Install base dependencies
yum -y install wget vim net-tools ntpdate bash-completion
# Set this machine's hostname
hostnamectl set-hostname k8s-slave02
# or: hostnamectl set-hostname k8s-slave03
# Edit the hosts file
vim /etc/hosts
192.168.56.105 k8s-master01
192.168.56.106 k8s-slave02
192.168.56.107 k8s-slave03
# Clock sync and timezone
ntpdate time1.aliyun.com
rm -rf /etc/localtime
ln -s /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
date -R || date
# Disable the firewall and SELinux
systemctl stop firewalld
systemctl disable firewalld
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0
# Disable swap
swapoff -a
sed -i '/swap/s/^/#/' /etc/fstab
free -m
# Bridge filtering
vim /etc/sysctl.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1
net.ipv4.ip_forward=1
net.ipv4.ip_forward_use_pmtu = 0
# Apply and verify
sysctl --system
sysctl -a|grep "ip_forward"
# Install Docker
yum install -y yum-utils
yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
yum -y install docker-ce docker-ce-cli containerd.io
systemctl enable docker && systemctl start docker
# Docker registry mirror and cgroup driver
mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://fl791z1h.mirror.aliyuncs.com"]
}
EOF
systemctl daemon-reload && systemctl restart docker
# Install Kubernetes
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum -y install --nogpgcheck kubelet-1.23.8 kubeadm-1.23.8 kubectl-1.23.8
# Enable and start kubelet
systemctl enable kubelet && systemctl start kubelet
# Join the cluster
kubeadm join 192.168.56.105:6443 --token 4yipfl.er9r8aqnq0hpd8a4 \
--discovery-token-ca-cert-hash sha256:afa76da3ced528e667374693fb4b0edd160530c251471ae11ece13c65d3d162a
# Note: the token is valid for 24 hours; after that, generate a new one
# On the master node, create a fresh join command:
kubeadm token create --print-join-command
# which prints something like:
kubeadm join 192.168.56.105:6443 --token o26r8i.zg2t9ade0tyuh4tp --discovery-token-ca-cert-hash sha256:de2d35d81f3740e93aca8a461713ca4ab1fcb9e7e881dc0f8836dd06d8a40229
# Run the printed `kubeadm join` command on the worker node
# On success, the output looks like:
#------------success output start-----------------
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
#------------success output end-------------------
# On the master node, list all nodes
kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master01 Ready control-plane,master 18h v1.23.8
k8s-slave02 NotReady <none> 46s v1.23.8
# `k8s-slave02` has joined the cluster, but its `STATUS` is still `NotReady`
# This step takes a while; wait until `NotReady` becomes `Ready`
NAME STATUS ROLES AGE VERSION
k8s-master01 Ready control-plane,master 18h v1.23.8
k8s-slave02 Ready <none> 3m16s v1.23.8
# The worker node has now joined successfully
# To be safe, once the master reports the worker as `Ready`, restart kubelet on the worker
systemctl restart kubelet
Trying kubectl commands on a worker node
# List the nodes
kubectl get nodes
# Output:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
So kubectl does not work on worker nodes out of the box.
Reason: kubectl needs the kubernetes-admin kubeconfig to talk to the API server.
Fix: copy admin.conf from the master node to the workers.
# On the master: copy to each worker
scp /etc/kubernetes/admin.conf root@192.168.56.106:/etc/kubernetes/
# scp /etc/kubernetes/admin.conf root@192.168.56.107:/etc/kubernetes/
# On the worker: set the environment variable
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source ~/.bash_profile
# Then run the command on the worker:
kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master01 Ready control-plane,master 19h v1.23.8
k8s-slave02 Ready <none> 49m v1.23.8
k8s-slave03 Ready <none> 14m v1.23.8
六呜呐、常用信息
1. 常用命令
# 查看k8s 運(yùn)行日志命令, 這個(gè)比較有用就斤,在k8s 啟動(dòng)、kubeadm init蘑辑、kubeadm join 階段可以輔助分析問(wèn)題洋机。
journalctl -xefu kubelet
# Show the cgroup driver kubelet is running with
systemctl show --property=Environment kubelet | cat
# Restart kubelet
systemctl restart kubelet
# Start kubelet
systemctl start kubelet
# Stop kubelet
systemctl stop kubelet
# Start kubelet on boot
systemctl enable kubelet
# Get the Dashboard token
kubectl describe secret admin-user -n kubernetes-dashboard
# Reset kubeadm: if `kubeadm init` fails, fix the reported problem, reset, and run init again
kubeadm reset
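kubeadm reset does not clean up everything; as its own output warns, the CNI configuration, $HOME/.kube/config and the iptables/IPVS rules are left behind. A cleanup sketch to run afterwards (destructive; review before use):
# Remove leftover CNI config and the stale kubeconfig
rm -rf /etc/cni/net.d $HOME/.kube/config
# Flush iptables and IPVS state created by kube-proxy/CNI
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
ipvsadm --clear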
2. Environment layout
# Kubernetes installation directory
/etc/kubernetes/
total 32
-rw-------. 1 root root 5642 Jun 28 15:19 admin.conf
-rw-------. 1 root root 5674 Jun 28 15:19 controller-manager.conf
-rw-------. 1 root root 1986 Jun 28 15:19 kubelet.conf
drwxr-xr-x. 2 root root  113 Jun 28 15:19 manifests
drwxr-xr-x. 3 root root 4096 Jun 28 15:19 pki
-rw-------. 1 root root 5618 Jun 28 15:19 scheduler.conf
# Static Pod manifests for the control-plane components
/etc/kubernetes/manifests/
total 16
-rw-------. 1 root root 2310 Jun 28 15:19 etcd.yaml
-rw-------. 1 root root 3378 Jun 28 15:19 kube-apiserver.yaml
-rw-------. 1 root root 2879 Jun 28 15:19 kube-controller-manager.yaml
-rw-------. 1 root root 1464 Jun 28 15:19 kube-scheduler.yaml
# Custom Dashboard yaml directory
/opt/kube-dashboard/conf/
total 8
-rw-r--r--. 1 root root 1124 Jun 29 08:41 admin-user-dashboard.yaml
-rw-r--r--. 1 root root  285 Jun 29 08:25 k8s-dashboard.yaml