Official site: https://kubernetes.io/
Official docs: https://kubernetes.io/zh-cn/docs/home/
II. Basic Environment Deployment
1) Preparation (all nodes)
1. Set hostnames and configure hosts
Deploy 1 master and 2 worker nodes first; another master node will be added later.
# Run on 192.168.0.113
hostnamectl set-hostname k8s-master-168-0-113
# Run on 192.168.0.114
hostnamectl set-hostname k8s-node1-168-0-114
# Run on 192.168.0.115
hostnamectl set-hostname k8s-node2-168-0-115
Configure hosts:
cat >> /etc/hosts<<EOF
192.168.0.113 k8s-master-168-0-113
192.168.0.114 k8s-node1-168-0-114
192.168.0.115 k8s-node2-168-0-115
EOF
2. Set up passwordless SSH between nodes
# Just press Enter through all the prompts
ssh-keygen
ssh-copy-id -i ~/.ssh/id_rsa.pub root@k8s-master-168-0-113
ssh-copy-id -i ~/.ssh/id_rsa.pub root@k8s-node1-168-0-114
ssh-copy-id -i ~/.ssh/id_rsa.pub root@k8s-node2-168-0-115
3. Time synchronization
yum install chrony -y
systemctl start chronyd
systemctl enable chronyd
chronyc sources
4. Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
5. Disable swap
# Disable temporarily; swap is turned off mainly for performance reasons
swapoff -a
# Verify that swap is now off
free
# Disable permanently
sed -ri 's/.*swap.*/#&/' /etc/fstab
6. Disable SELinux
# Disable temporarily
setenforce 0
# Disable permanently
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
7. Allow iptables to see bridged traffic (optional, all nodes)
To load the module explicitly, run sudo modprobe br_netfilter, and verify that the br_netfilter module is loaded with lsmod | grep br_netfilter:
sudo modprobe br_netfilter
lsmod | grep br_netfilter
For a Linux node's iptables to correctly see bridged traffic, confirm that net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl config. For example:
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
# Set the required sysctl parameters; they persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
# Apply the sysctl parameters without rebooting
sudo sysctl --system
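An optional sanity check to confirm the three parameters written to k8s.conf took effect:
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
# All three values should print as 1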
2) Install the Docker container runtime (all nodes)
Note: Kubernetes versions before v1.24 included a direct integration with Docker Engine, using a component named dockershim. That special integration is no longer part of Kubernetes (the removal was announced as part of the v1.20 release). You can read "Check whether Dockershim removal affects you" to see how this removal may affect you, and "Migrating from dockershim" for migration guidance.
# Configure the yum repos
cd /etc/yum.repos.d ; mkdir bak; mv CentOS-Linux-* bak/
# CentOS 7
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
# CentOS 8
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-8.repo
# Install the yum-config-manager tool
yum -y install yum-utils
# Add the Docker CE repo
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Install docker-ce
yum install -y docker-ce
# Start the service
systemctl start docker
# Enable it at boot
systemctl enable docker
# Show the version number
docker --version
# Show detailed version info
docker version
# Configure a Docker registry mirror
# Edit /etc/docker/daemon.json (create the file if it does not exist)
# After adding the content below, reload (or restart) the docker service:
cat >/etc/docker/daemon.json<<EOF
{
"registry-mirrors": ["http://hub-mirror.c.163.com"]
}
EOF
# Reload the configuration
systemctl reload docker
# Check
systemctl status docker containerd
[Tip] dockerd ultimately calls containerd's API; containerd is the intermediate component between dockerd and runC. So starting the docker service also starts the containerd service.
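A quick way to see this relationship, assuming the ctr CLI shipped with containerd is on the PATH: once containers are running, anything started by dockerd shows up in containerd's "moby" namespace.
# Optional: list docker-managed containers straight from containerd
ctr --namespace moby containers list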
3) Configure the Kubernetes yum repo (all nodes)
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[k8s]
name=k8s
enabled=1
gpgcheck=0
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
EOF
4) Point sandbox_image at the Aliyun google_containers mirror (all nodes)
# Export the default config; config.toml does not exist by default
containerd config default > /etc/containerd/config.toml
grep sandbox_image /etc/containerd/config.toml
sed -i "s#k8s.gcr.io/pause#registry.aliyuncs.com/google_containers/pause#g" /etc/containerd/config.toml
grep sandbox_image /etc/containerd/config.toml
5) Set containerd's cgroup driver to systemd (all nodes)
Since v1.24.0, Kubernetes no longer uses dockershim; containerd is used as the container runtime endpoint instead. containerd is therefore required; it was already installed automatically alongside Docker above. Docker here acts only as a client, and the container engine is containerd.
sed -i 's#SystemdCgroup = false#SystemdCgroup = true#g' /etc/containerd/config.toml
# After applying all changes, restart containerd
systemctl restart containerd
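Optionally confirm the change took effect:
grep SystemdCgroup /etc/containerd/config.toml
# Expected output: SystemdCgroup = true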
6) Install kubeadm, kubelet and kubectl (master node)
# Without a version suffix you get the latest release; at the time of writing that is 1.24.1
yum install -y kubelet-1.24.1 kubeadm-1.24.1 kubectl-1.24.1 --disableexcludes=kubernetes
# --disableexcludes=kubernetes: ignore the exclude= setting of the kubernetes repo so these packages can be installed
# Enable at boot and start now; --now starts the service immediately
systemctl enable --now kubelet
# Check the status; wait a while before checking, startup is a bit slow
systemctl status kubelet
Checking the logs shows an error like the following:
kubelet.service: Main process exited, code=exited, status=1/FAILURE kubelet.service: Failed with result 'exit-code'.
···

[Explanation] After a fresh (re)install of k8s, kubelet keeps restarting until kubeadm init or kubeadm join has been run. This is expected behaviour; the problem resolves itself once init or join is executed, and the official docs describe it, so kubelet.service can be ignored for now.
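If you want to watch the restarts while waiting, the kubelet logs can be followed with journalctl (optional):
journalctl -xefu kubelet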
Check the versions
kubectl version
yum info kubeadm
7) Initialize the cluster with kubeadm (master node)
It is best to pull the images in advance so the installation goes faster:
docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.24.1
docker pull registry.aliyuncs.com/google_containers/kube-controller-manager:v1.24.1
docker pull registry.aliyuncs.com/google_containers/kube-scheduler:v1.24.1
docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.24.1
docker pull registry.aliyuncs.com/google_containers/pause:3.7
docker pull registry.aliyuncs.com/google_containers/etcd:3.5.3-0
docker pull registry.aliyuncs.com/google_containers/coredns:v1.8.6
Initialize the cluster:
kubeadm init \
--apiserver-advertise-address=192.168.0.113 \
--image-repository registry.aliyuncs.com/google_containers \
--control-plane-endpoint=cluster-endpoint \
--kubernetes-version v1.24.1 \
--service-cidr=10.1.0.0/16 \
--pod-network-cidr=10.244.0.0/16 \
--v=5
# --image-repository string: where to pull images from (available since 1.13). The default is k8s.gcr.io; we point it at the domestic mirror registry.aliyuncs.com/google_containers.
# --kubernetes-version string: the Kubernetes version. The default, stable-1, makes kubeadm download the latest version number from https://dl.k8s.io/release/stable-1.txt; pinning a fixed version (v1.24.1) skips that network request.
# --apiserver-advertise-address: which interface the master uses to talk to the other cluster nodes. If the master has several interfaces it is best to set this explicitly; otherwise kubeadm picks the interface with the default gateway. This is the master node IP, remember to change it.
# --pod-network-cidr: the Pod network range. Kubernetes supports several network plugins, each with its own requirements for --pod-network-cidr; it is set to 10.244.0.0/16 here because we will use flannel, which requires this CIDR.
# --control-plane-endpoint: cluster-endpoint is a custom DNS name mapped to the control-plane IP; configure the hosts entry "192.168.0.113 cluster-endpoint". This lets you pass --control-plane-endpoint=cluster-endpoint to kubeadm init and the same DNS name to kubeadm join, and later repoint cluster-endpoint at a load balancer in a high-availability setup.
[Tip] kubeadm does not support converting a single control-plane cluster created without --control-plane-endpoint into a highly available cluster.
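Since the init command above uses --control-plane-endpoint=cluster-endpoint, make sure the name resolves on the master before running it (it will later be repointed at the VIP), for example:
echo "192.168.0.113 cluster-endpoint" >> /etc/hosts
ping -c 1 cluster-endpoint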
If the first init attempt fails or needs to be redone, reset and clean up, then run kubeadm init again:
kubeadm reset
rm -fr ~/.kube/ /etc/kubernetes/* /var/lib/etcd/*
kubeadm init \
--apiserver-advertise-address=192.168.0.113 \
--image-repository registry.aliyuncs.com/google_containers \
--control-plane-endpoint=cluster-endpoint \
--kubernetes-version v1.24.1 \
--service-cidr=10.1.0.0/16 \
--pod-network-cidr=10.244.0.0/16 \
--v=5
Configure the kubectl environment:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Temporary (lost when the current shell session ends)
export KUBECONFIG=/etc/kubernetes/admin.conf
# Permanent (recommended)
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source ~/.bash_profile
The node still shows a problem; checking the logs in /var/log/messages shows:
"Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
The next step is to install a Pod network plugin.
8) Install a Pod network plugin (CNI: Container Network Interface) (master)
You must deploy a Container Network Interface (CNI) based Pod network plugin so that your Pods can communicate with each other.
It is best to pull the image in advance (all nodes):
docker pull quay.io/coreos/flannel:v0.14.0
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
If the install above fails, download the files from my Baidu drive and install offline:
Link: https://pan.baidu.com/s/1HB9xuO3bssAW7v5HzpXkeQ
Extraction code: 8888
Checking the nodes again, they are now healthy.
9) Join the worker nodes to the k8s cluster
First install kubelet:
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
# Enable at boot and start now; --now starts the service immediately
systemctl enable --now kubelet
systemctl status kubelet
If you do not have a token, you can get one by running the following command on the control-plane node:
kubeadm token list
By default, tokens expire after 24 hours. To join a node after the current token has expired, create a new token on the control-plane node:
kubeadm token create
# Check again
kubeadm token list
If you do not have the value of --discovery-token-ca-cert-hash, you can get it by running the following command chain on the control-plane node:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
If the join command printed by kubeadm init was not saved, it can be regenerated with the following command (recommended); there is usually no need to fetch the token and ca-cert-hash separately as above, this one command does it all:
kubeadm token create --print-join-command
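On each worker node, run the join command that the command above prints. With the values from this walkthrough it has the following shape (the token and hash are placeholders, use your own output):
kubeadm join cluster-endpoint:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>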
Wait a while before checking node status again, because kube-proxy and flannel still need to come up.
kubectl get pods -A
kubectl get nodes
10) Configure IPVS
[Problem] ClusterIPs (or Service names) cannot be reached from inside the cluster.
1. Load the ip_vs kernel modules
modprobe -- ip_vs
modprobe -- ip_vs_sh
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
Verify on all nodes that ipvs is enabled:
lsmod |grep ip_vs
2. Install the ipvsadm tool
yum install ipset ipvsadm -y
3. Edit the kube-proxy config and change mode to ipvs (see the snippet below)
kubectl edit configmap -n kube-system kube-proxy
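In the editor that opens, the relevant field lives in the config.conf key of the ConfigMap; a minimal sketch of the change:
    # kube-proxy ConfigMap, key config.conf: change the empty mode to ipvs
    mode: "ipvs"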
4. Restart kube-proxy
# Check first
kubectl get pod -n kube-system | grep kube-proxy
# Then delete the pods and let them be recreated
kubectl get pod -n kube-system | grep kube-proxy |awk '{system("kubectl delete pod "$1" -n kube-system")}'
# Check again
kubectl get pod -n kube-system | grep kube-proxy
5. Check the ipvs forwarding rules
ipvsadm -Ln
11) Cluster high-availability configuration
There are two ways to build a highly available (HA) Kubernetes cluster:
Stacked control-plane nodes, where the etcd members are co-located with the control-plane nodes (used in this chapter; architecture diagram omitted).
External etcd nodes, where etcd runs on machines separate from the control plane (architecture diagram omitted).
A new machine is added here as the second master node: 192.168.0.116.
Its configuration is the same as the master node above, except the final initialization step is not needed.
1. Set hostnames and configure hosts
All nodes use the following configuration:
# Run on 192.168.0.113
hostnamectl set-hostname k8s-master-168-0-113
# Run on 192.168.0.114
hostnamectl set-hostname k8s-node1-168-0-114
# Run on 192.168.0.115
hostnamectl set-hostname k8s-node2-168-0-115
# Run on 192.168.0.116
hostnamectl set-hostname k8s-master2-168-0-116
Configure hosts:
cat >> /etc/hosts<<EOF
192.168.0.113 k8s-master-168-0-113 cluster-endpoint
192.168.0.114 k8s-node1-168-0-114
192.168.0.115 k8s-node2-168-0-115
192.168.0.116 k8s-master2-168-0-116
EOF
2. Set up passwordless SSH between nodes
# Just press Enter through all the prompts
ssh-keygen
ssh-copy-id -i ~/.ssh/id_rsa.pub root@k8s-master-168-0-113
ssh-copy-id -i ~/.ssh/id_rsa.pub root@k8s-node1-168-0-114
ssh-copy-id -i ~/.ssh/id_rsa.pub root@k8s-node2-168-0-115
ssh-copy-id -i ~/.ssh/id_rsa.pub root@k8s-master2-168-0-116
3. Time synchronization
yum install chrony -y
systemctl start chronyd
systemctl enable chronyd
chronyc sources
4. Disable swap
# Disable temporarily; swap is turned off mainly for performance reasons
swapoff -a
# Verify that swap is now off
free
# Disable permanently
sed -ri 's/.*swap.*/#&/' /etc/fstab
# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
5. Disable SELinux
# Disable temporarily
setenforce 0
# Disable permanently
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
6. Allow iptables to see bridged traffic (optional, all nodes)
To load the module explicitly, run sudo modprobe br_netfilter, and verify that the br_netfilter module is loaded with lsmod | grep br_netfilter:
sudo modprobe br_netfilter
lsmod | grep br_netfilter
For a Linux node's iptables to correctly see bridged traffic, confirm that net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl config. For example:
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
# Set the required sysctl parameters; they persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
# Apply the sysctl parameters without rebooting
sudo sysctl --system
7. Install the Docker container runtime (all nodes)
Note: Kubernetes versions before v1.24 included a direct integration with Docker Engine, using a component named dockershim. That special integration is no longer part of Kubernetes (the removal was announced as part of the v1.20 release). You can read "Check whether Dockershim removal affects you" to see how this removal may affect you, and "Migrating from dockershim" for migration guidance.
# Configure the yum repos
cd /etc/yum.repos.d ; mkdir bak; mv CentOS-Linux-* bak/
# CentOS 7
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
# CentOS 8
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-8.repo
# Install the yum-config-manager tool
yum -y install yum-utils
# Add the Docker CE repo
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Install docker-ce
yum install -y docker-ce
# Start the service
systemctl start docker
# Enable it at boot
systemctl enable docker
# Show the version number
docker --version
# Show detailed version info
docker version
# Configure a Docker registry mirror
# Edit /etc/docker/daemon.json (create the file if it does not exist)
# After adding the content below, reload (or restart) the docker service:
cat >/etc/docker/daemon.json<<EOF
{
"registry-mirrors": ["http://hub-mirror.c.163.com"]
}
EOF
# Reload the configuration
systemctl reload docker
# Check
systemctl status docker containerd
[Tip] dockerd ultimately calls containerd's API; containerd is the intermediate component between dockerd and runC. So starting the docker service also starts the containerd service.
8. Configure the Kubernetes yum repo (all nodes)
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[k8s]
name=k8s
enabled=1
gpgcheck=0
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
EOF
9. Point sandbox_image at the Aliyun google_containers mirror (all nodes)
# Export the default config; config.toml does not exist by default
containerd config default > /etc/containerd/config.toml
grep sandbox_image /etc/containerd/config.toml
sed -i "s#k8s.gcr.io/pause#registry.aliyuncs.com/google_containers/pause#g" /etc/containerd/config.toml
grep sandbox_image /etc/containerd/config.toml
10. Set containerd's cgroup driver to systemd
Since v1.24.0, Kubernetes no longer uses dockershim; containerd is used as the container runtime endpoint instead. containerd is therefore required; it was already installed automatically alongside Docker above. Docker here acts only as a client, and the container engine is containerd.
sed -i 's#SystemdCgroup = false#SystemdCgroup = true#g' /etc/containerd/config.toml
# After applying all changes, restart containerd
systemctl restart containerd
11. Install kubeadm, kubelet and kubectl (master node)
# Without a version suffix you get the latest release; at the time of writing that is 1.24.1
yum install -y kubelet-1.24.1 kubeadm-1.24.1 kubectl-1.24.1 --disableexcludes=kubernetes
# --disableexcludes=kubernetes: ignore the exclude= setting of the kubernetes repo so these packages can be installed
# Enable at boot and start now; --now starts the service immediately
systemctl enable --now kubelet
# Check the status; wait a while before checking, startup is a bit slow
systemctl status kubelet
# Check the versions
kubectl version
yum info kubeadm
12. Join the k8s cluster
# If the certificates have expired, the following command generates new ones and uploads them; it prints a certificate key that is used below
kubeadm init phase upload-certs --upload-certs
# You can also specify a custom --certificate-key during init, to be used later by join. To generate such a key, use the following command (not run here; the key printed above is used instead):
kubeadm certs certificate-key
kubeadm token create --print-join-command
kubeadm join cluster-endpoint:6443 --token wswrfw.fc81au4yvy6ovmhh --discovery-token-ca-cert-hash sha256:43a3924c25104d4393462105639f6a02b8ce284728775ef9f9c30eed8e0abc0f --control-plane --certificate-key 8d2709697403b74e35d05a420bd2c19fd8c11914eb45f2ff22937b245bed5b68
# --control-plane tells kubeadm join to create a new control plane; this flag is required when joining as a master
# --certificate-key ... causes the control-plane certificates to be downloaded from the kubeadm-certs Secret in the cluster and decrypted with the given key. Its value is the key printed by the command above (kubeadm init phase upload-certs --upload-certs).
As prompted, run the following commands:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Check:
kubectl get nodes
kubectl get pods -A -owide
Although there are now two masters, externally there is still only one entry point, so a load balancer is needed; if one master goes down, traffic switches automatically to the other master node.
12) Deploy an Nginx + Keepalived high-availability load balancer
1. Install Nginx and Keepalived
# Run on both master nodes
yum install nginx keepalived -y
2. Nginx configuration
Configure on both master nodes:
cat > /etc/nginx/nginx.conf << "EOF"
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;
events {
worker_connections 1024;
}
# Layer-4 load balancing for the kube-apiserver on both masters
stream {
log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
access_log /var/log/nginx/k8s-access.log main;
upstream k8s-apiserver {
# Master APISERVER IP:PORT
server 192.168.0.113:6443;
# Master2 APISERVER IP:PORT
server 192.168.0.116:6443;
}
server {
listen 16443;
proxy_pass k8s-apiserver;
}
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
server {
listen 80 default_server;
server_name _;
location / {
}
}
}
EOF
[Tip] If you only need high availability and not kube-apiserver load balancing, nginx can be skipped, but configuring kube-apiserver load balancing is still recommended.
3. Keepalived configuration (master)
cat > /etc/keepalived/keepalived.conf << EOF
global_defs {
notification_email {
acassen@firewall.loc
failover@firewall.loc
sysadmin@firewall.loc
}
notification_email_from fage@qq.com
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id NGINX_MASTER
}
vrrp_script check_nginx {
script "/etc/keepalived/check_nginx.sh"
}
vrrp_instance VI_1 {
state MASTER
interface ens33
virtual_router_id 51 # VRRP router ID; unique per instance
priority 100 # priority; the backup server uses 90
advert_int 1 # VRRP heartbeat advertisement interval, default 1 second
authentication {
auth_type PASS
auth_pass 1111
}
# Virtual IP
virtual_ipaddress {
192.168.0.120/24
}
track_script {
check_nginx
}
}
EOF
vrrp_script: the script that checks nginx (keepalived fails over based on nginx status)
virtual_ipaddress: the virtual IP (VIP)
Nginx health check script:
cat > /etc/keepalived/check_nginx.sh << "EOF"
#!/bin/bash
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")
if [ "$count" -eq 0 ];then
exit 1
else
exit 0
fi
EOF
chmod +x /etc/keepalived/check_nginx.sh
4. Keepalived configuration (backup)
cat > /etc/keepalived/keepalived.conf << EOF
global_defs {
notification_email {
acassen@firewall.loc
failover@firewall.loc
sysadmin@firewall.loc
}
notification_email_from fage@qq.com
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id NGINX_BACKUP
}
vrrp_script check_nginx {
script "/etc/keepalived/check_nginx.sh"
}
vrrp_instance VI_1 {
state BACKUP
interface ens33
virtual_router_id 51 # VRRP router ID; unique per instance
priority 90
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.0.120/24
}
track_script {
check_nginx
}
}
EOF
Nginx health check script:
cat > /etc/keepalived/check_nginx.sh << "EOF"
#!/bin/bash
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")
if [ "$count" -eq 0 ];then
exit 1
else
exit 0
fi
EOF
chmod +x /etc/keepalived/check_nginx.sh
5. Start the services and enable them at boot
systemctl daemon-reload
systemctl restart nginx && systemctl enable nginx && systemctl status nginx
systemctl restart keepalived && systemctl enable keepalived && systemctl status keepalived
Check the VIP:
ip a
6. Update hosts (all nodes)
Change the IP that cluster-endpoint previously pointed at to the VIP:
192.168.0.113 k8s-master-168-0-113
192.168.0.114 k8s-node1-168-0-114
192.168.0.115 k8s-node2-168-0-115
192.168.0.116 k8s-master2-168-0-116
192.168.0.120 cluster-endpoint
7. Test and verify
Check the version (load-balancing test):
curl -k https://cluster-endpoint:16443/version
For the high-availability test, shut down the k8s-master-168-0-113 node:
shutdown -h now
curl -k https://cluster-endpoint:16443/version
kubectl get nodes -A
kubectl get pods -A
[Tip] A stacked cluster carries a coupled-failure risk: if a node goes down, both an etcd member and a control-plane instance are lost, and redundancy is reduced. You can mitigate this by adding more control-plane nodes.
III. Deploying the k8s Dashboard Management UI
1) Deploy the dashboard
GitHub: https://github.com/kubernetes/dashboard
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.6.0/aio/deploy/recommended.yaml
kubectl get pods -n kubernetes-dashboard
This is only reachable from inside the cluster. For external access either deploy an ingress or expose the Service as type NodePort; the NodePort Service approach is used here.
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.6.0/aio/deploy/recommended.yaml
The modified content is as follows:
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: v1
kind: Namespace
metadata:
name: kubernetes-dashboard
---
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
spec:
type: NodePort
ports:
- port: 443
targetPort: 8443
nodePort: 31443
selector:
k8s-app: kubernetes-dashboard
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-certs
namespace: kubernetes-dashboard
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-csrf
namespace: kubernetes-dashboard
type: Opaque
data:
csrf: ""
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-key-holder
namespace: kubernetes-dashboard
type: Opaque
---
kind: ConfigMap
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-settings
namespace: kubernetes-dashboard
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
rules:
# Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
resources: ["secrets"]
resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
verbs: ["get", "update", "delete"]
# Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
resources: ["configmaps"]
resourceNames: ["kubernetes-dashboard-settings"]
verbs: ["get", "update"]
# Allow Dashboard to get metrics.
- apiGroups: [""]
resources: ["services"]
resourceNames: ["heapster", "dashboard-metrics-scraper"]
verbs: ["proxy"]
- apiGroups: [""]
resources: ["services/proxy"]
resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
verbs: ["get"]
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
rules:
# Allow Metrics Scraper to get metrics from the Metrics server
- apiGroups: ["metrics.k8s.io"]
resources: ["pods", "nodes"]
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: kubernetes-dashboard
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kubernetes-dashboard
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
kind: Deployment
apiVersion: apps/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: kubernetes-dashboard
template:
metadata:
labels:
k8s-app: kubernetes-dashboard
spec:
securityContext:
seccompProfile:
type: RuntimeDefault
containers:
- name: kubernetes-dashboard
image: kubernetesui/dashboard:v2.6.0
imagePullPolicy: Always
ports:
- containerPort: 8443
protocol: TCP
args:
- --auto-generate-certificates
- --namespace=kubernetes-dashboard
# Uncomment the following line to manually specify Kubernetes API server Host
# If not specified, Dashboard will attempt to auto discover the API server and connect
# to it. Uncomment only if the default does not work.
# - --apiserver-host=http://my-address:port
volumeMounts:
- name: kubernetes-dashboard-certs
mountPath: /certs
# Create on-disk volume to store exec logs
- mountPath: /tmp
name: tmp-volume
livenessProbe:
httpGet:
scheme: HTTPS
path: /
port: 8443
initialDelaySeconds: 30
timeoutSeconds: 30
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsUser: 1001
runAsGroup: 2001
volumes:
- name: kubernetes-dashboard-certs
secret:
secretName: kubernetes-dashboard-certs
- name: tmp-volume
emptyDir: {}
serviceAccountName: kubernetes-dashboard
nodeSelector:
"kubernetes.io/os": linux
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: dashboard-metrics-scraper
name: dashboard-metrics-scraper
namespace: kubernetes-dashboard
spec:
ports:
- port: 8000
targetPort: 8000
selector:
k8s-app: dashboard-metrics-scraper
---
kind: Deployment
apiVersion: apps/v1
metadata:
labels:
k8s-app: dashboard-metrics-scraper
name: dashboard-metrics-scraper
namespace: kubernetes-dashboard
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: dashboard-metrics-scraper
template:
metadata:
labels:
k8s-app: dashboard-metrics-scraper
spec:
securityContext:
seccompProfile:
type: RuntimeDefault
containers:
- name: dashboard-metrics-scraper
image: kubernetesui/metrics-scraper:v1.0.8
ports:
- containerPort: 8000
protocol: TCP
livenessProbe:
httpGet:
scheme: HTTP
path: /
port: 8000
initialDelaySeconds: 30
timeoutSeconds: 30
volumeMounts:
- mountPath: /tmp
name: tmp-volume
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsUser: 1001
runAsGroup: 2001
serviceAccountName: kubernetes-dashboard
nodeSelector:
"kubernetes.io/os": linux
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
volumes:
- name: tmp-volume
emptyDir: {}
Redeploy:
kubectl delete -f recommended.yaml
kubectl apply -f recommended.yaml
kubectl get svc,pods -n kubernetes-dashboard
2) Create a login user
cat >ServiceAccount.yaml<<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
name: admin-user
namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: admin-user
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: admin-user
namespace: kubernetes-dashboard
EOF
kubectl apply -f ServiceAccount.yaml
Create and fetch a login token:
kubectl -n kubernetes-dashboard create token admin-user
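By default this token is short-lived. If you want a longer-lived one for testing, kubectl create token accepts a --duration flag (a sketch; the actual maximum is subject to the API server's limits and your security policy):
kubectl -n kubernetes-dashboard create token admin-user --duration=24h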
3) Configure hosts and log in to the dashboard web UI
192.168.0.120 cluster-endpoint
Log in at: https://cluster-endpoint:31443
Log in with the token created above.
IV. Deploying the Harbor Image Registry for k8s
GitHub: https://github.com/helm/helm/releases
Harbor is installed with Helm here, so install Helm first.
1) Install helm
mkdir -p /opt/k8s/helm && cd /opt/k8s/helm
wget https://get.helm.sh/helm-v3.9.0-rc.1-linux-amd64.tar.gz
tar -xf helm-v3.9.0-rc.1-linux-amd64.tar.gz
ln -s /opt/k8s/helm/linux-amd64/helm /usr/bin/helm
helm version
helm help
2) Configure hosts
192.168.0.120 myharbor.com
3) Create SSL certificates
mkdir /opt/k8s/helm/stl && cd /opt/k8s/helm/stl
# Generate the CA private key
openssl genrsa -out ca.key 4096
# Generate the CA certificate
openssl req -x509 -new -nodes -sha512 -days 3650 \
-subj "/C=CN/ST=Guangdong/L=Shenzhen/O=harbor/OU=harbor/CN=myharbor.com" \
-key ca.key \
-out ca.crt
# Create the domain certificate: generate the private key
openssl genrsa -out myharbor.com.key 4096
# Generate the certificate signing request (CSR)
openssl req -sha512 -new \
-subj "/C=CN/ST=Guangdong/L=Shenzhen/O=harbor/OU=harbor/CN=myharbor.com" \
-key myharbor.com.key \
-out myharbor.com.csr
# Generate the x509 v3 extension file
cat > v3.ext <<-EOF
authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names
[alt_names]
DNS.1=myharbor.com
DNS.2=*.myharbor.com
DNS.3=hostname
EOF
# Create the Harbor access certificate
openssl x509 -req -sha512 -days 3650 \
-extfile v3.ext \
-CA ca.crt -CAkey ca.key -CAcreateserial \
-in myharbor.com.csr \
-out myharbor.com.crt
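Optionally inspect the resulting certificate to confirm the SANs from v3.ext were applied:
openssl x509 -in myharbor.com.crt -noout -text | grep -A1 "Subject Alternative Name"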
4) Install ingress
ingress-nginx website: https://kubernetes.github.io/ingress-nginx/
ingress-nginx repository: https://github.com/kubernetes/ingress-nginx
Deployment docs: https://kubernetes.github.io/ingress-nginx/deploy/
1. Deploy via helm
helm upgrade --install ingress-nginx ingress-nginx \
--repo https://kubernetes.github.io/ingress-nginx \
--namespace ingress-nginx --create-namespace
2. Install via YAML files (used in this chapter)
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.2.0/deploy/static/provider/cloud/deploy.yaml
If the image pull fails, change the image addresses as follows before installing:
# Pull the images first, then install
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:v1.2.0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.1.1
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.2.0/deploy/static/provider/cloud/deploy.yaml
# Replace the image addresses
sed -i 's@k8s.gcr.io/ingress-nginx/controller:v1.2.0\(.*\)@registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:v1.2.0@' deploy.yaml
sed -i 's@k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1\(.*\)$@registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.1.1@' deploy.yaml
### A few more changes are needed in deploy.yaml (see the sketch after the next two commands):
# 1. Change kind to DaemonSet and comment out replicas:, because DaemonSet mode runs one pod per node
# 2. Add a line: hostNetwork: true
# 3. Change the Service type from LoadBalancer to NodePort
# 4. Add "- --watch-ingress-without-class=true" below --validating-webhook-key
# 5. Make the master nodes schedulable
kubectl taint nodes k8s-master-168-0-113 node-role.kubernetes.io/control-plane:NoSchedule-
kubectl taint nodes k8s-master2-168-0-116 node-role.kubernetes.io/control-plane:NoSchedule-
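A minimal sketch of the edits described above; the field names follow the upstream deploy.yaml for controller-v1.2.0, but exact positions may differ in your copy:
# In the ingress-nginx-controller workload:
kind: DaemonSet               # was: Deployment
spec:
  # replicas: 1               # comment out; a DaemonSet runs one pod per node
  template:
    spec:
      hostNetwork: true       # add this line
      containers:
        - args:
            - --validating-webhook-key=/usr/local/certificates/key
            - --watch-ingress-without-class=true   # add this line
# In the ingress-nginx-controller Service:
  type: NodePort              # was: LoadBalancer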
kubectl apply -f deploy.yaml
5) Install NFS
1. Install NFS on all nodes
yum -y install nfs-utils rpcbind
2. Create and authorize the shared directory on the master node
mkdir /opt/nfsdata
# Grant permissions on the shared directory
chmod 666 /opt/nfsdata
3. Configure the exports file
cat > /etc/exports<<EOF
/opt/nfsdata *(rw,no_root_squash,no_all_squash,sync)
EOF
# Make the config take effect
exportfs -r
Common exportfs options:
-a  export (or unexport) all directories
-r  re-export all directories
-u  unexport a directory
-v  verbose; show the shared directories (the operations above are run on the server)
4. Start rpc and nfs (clients only need the rpc service) (note the order)
systemctl start rpcbind
systemctl start nfs-server
systemctl enable rpcbind
systemctl enable nfs-server
Check:
showmount -e
# Via the VIP
showmount -e 192.168.0.120
-e  show the NFS server's export list
-a  show the NFS resources mounted on this host
-v  show the version number
5. Client setup
# Install
yum -y install nfs-utils rpcbind
# Start the rpc service
systemctl start rpcbind
systemctl enable rpcbind
# Create the mount point
mkdir /mnt/nfsdata
# Mount
echo "192.168.0.120:/opt/nfsdata /mnt/nfsdata nfs defaults 0 1">> /etc/fstab
mount -a
6. Data synchronization with rsync
[1] Install rsync
# Install on both ends
yum -y install rsync
[2] Configuration
Add the following to /etc/rsyncd.conf:
cat >/etc/rsyncd.conf<<EOF
uid = root
gid = root
# chroot to the source directory
use chroot = yes
# listen address
address = 192.168.0.113
# listen port tcp/udp 873; see cat /etc/services | grep rsync
port 873
# log file location
log file = /var/log/rsyncd.log
# pid file location
pid file = /var/run/rsyncd.pid
# client addresses allowed to connect
hosts allow = 192.168.0.0/16
# shared module name
[nfsdata]
# real path of the source directory
path = /opt/nfsdata
comment = Document Root of www.kgc.com
# whether clients may upload files; defaults to true for all modules
read only = yes
# file types that are not compressed during sync
dont compress = *.gz *.bz2 *.tgz *.zip *.rar *.z
# authorized accounts, separated by spaces; if omitted access is anonymous; not tied to system accounts
auth users = backuper
# data file holding the account credentials
secrets file = /etc/rsyncd_users.db
EOF
Configure rsyncd_users.db:
cat >/etc/rsyncd_users.db<<EOF
backuper:123456
EOF
# As officially recommended, restrict the permissions to 600
chmod 600 /etc/rsyncd_users.db
[3] Common rsyncd.conf parameters
Parameter: description
uid=root: the user rsync runs as
gid=root: the group rsync runs as (the user's group)
use chroot=no: if true, the daemon chroots to the path before transferring files. This is a security setting; on an internal network it is usually fine to leave it off
max connections=200: maximum number of connections; default 0 means unlimited, a negative value disables the module
timeout=400: default 0 means no timeout; 300-600 (5-10 minutes) is recommended
pid file: the rsync daemon writes its pid to this file after start; if the file already exists, rsync terminates instead of overwriting it
lock file: the lock file backing the "max connections" parameter, so the total number of connections never exceeds the limit
log file: if unset or set incorrectly, rsync logs via rsyslog instead
ignore errors: ignore I/O errors
read only=false: whether clients may upload files; defaults to true for all modules
list=false: whether clients may list the available modules; allowed by default
hosts allow: host names, IP addresses or ranges allowed to connect; if absent, all clients may connect
hosts deny: host names, IP addresses or ranges that are denied; if absent, all clients may connect
auth users: space- or comma-separated users allowed to use the module; they need not exist on the local system. Default: all users, no password
secrets file: the file holding usernames and passwords, in the form username:password, password at most 8 characters
[backup]: the module name, in square brackets; any meaningful name will do, which helps later maintenance
path: the file system or directory the daemon serves in this module; its permissions must match the configuration, otherwise reads and writes will fail
[4] Common rsync command-line options
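The original table here was an image; as a brief (non-exhaustive) reminder, these are the flags used by the commands below:
-a  archive mode (recursive; preserves permissions, times, symlinks, etc.)
-v  verbose output
-z  compress data during transfer
--delete  delete files on the destination that no longer exist on the source
--daemon  run rsync as a daemon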
[5] Start the service (on the data-source machine)
# rsync listens on port 873
# rsync runs in client/server mode
rsync --daemon --config=/etc/rsyncd.conf
netstat -tnlp|grep :873
[6] Run the sync
# Run on the destination machine
# rsync -avz user@source-host:/source-dir destination-dir
rsync -avz root@192.168.0.113:/opt/nfsdata/* /opt/nfsdata/
[7] Periodic sync with crontab
# crontab entry syncing every five minutes; not a great approach
*/5 * * * * rsync -avz root@192.168.0.113:/opt/nfsdata/* /opt/nfsdata/
[Tip] Syncing on a crontab schedule is not ideal; rsync+inotify can be used for real-time replication instead. That is too long to cover here and may get its own article later.
6) Create the NFS provisioner and a StorageClass for persistent storage
[Tip] This differs slightly from my earlier articles; the old approach no longer works on newer versions.
GitHub: https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner
Deploy nfs-subdir-external-provisioner with helm.
1. Add the helm repo
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
2. Install the nfs provisioner with helm
[Tip] The default image is unreachable from here, so the image willdockerhub/nfs-subdir-external-provisioner:v4.0.2 found on Docker Hub is used instead. Also note that a StorageClass is not namespaced, so it can be used from every namespace.
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
--namespace=nfs-provisioner \
--create-namespace \
--set image.repository=willdockerhub/nfs-subdir-external-provisioner \
--set image.tag=v4.0.2 \
--set replicaCount=2 \
--set storageClass.name=nfs-client \
--set storageClass.defaultClass=true \
--set nfs.server=192.168.0.120 \
--set nfs.path=/opt/nfsdata
[Tip] Setting nfs.server to the VIP above gives high availability.
3. Check
kubectl get pods,deploy,sc -n nfs-provisioner
7) Deploy Harbor (HTTPS)
1. Create the namespace
kubectl create ns harbor
2. Create the TLS certificate secret
kubectl create secret tls myharbor.com --key myharbor.com.key --cert myharbor.com.crt -n harbor
kubectl get secret myharbor.com -n harbor
3. Add the chart repo
helm repo add harbor https://helm.goharbor.io
4. Install Harbor with helm
helm install myharbor --namespace harbor harbor/harbor \
--set expose.ingress.hosts.core=myharbor.com \
--set expose.ingress.hosts.notary=notary.myharbor.com \
--set-string expose.ingress.annotations.'nginx\.org/client-max-body-size'="1024m" \
--set expose.tls.secretName=myharbor.com \
--set persistence.persistentVolumeClaim.registry.storageClass=nfs-client \
--set persistence.persistentVolumeClaim.jobservice.storageClass=nfs-client \
--set persistence.persistentVolumeClaim.database.storageClass=nfs-client \
--set persistence.persistentVolumeClaim.redis.storageClass=nfs-client \
--set persistence.persistentVolumeClaim.trivy.storageClass=nfs-client \
--set persistence.persistentVolumeClaim.chartmuseum.storageClass=nfs-client \
--set persistence.enabled=true \
--set externalURL=https://myharbor.com \
--set harborAdminPassword=Harbor12345
Wait a while here, then check the resource status:
kubectl get ingress,svc,pods,pvc -n harbor
5. Fixing the ingress with no ADDRESS
[Analysis] The error "error: endpoints "default-http-backend" not found" appears, so create a default backend:
cat << EOF > default-http-backend.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: default-http-backend
labels:
app: default-http-backend
namespace: harbor
spec:
replicas: 1
selector:
matchLabels:
app: default-http-backend
template:
metadata:
labels:
app: default-http-backend
spec:
terminationGracePeriodSeconds: 60
containers:
- name: default-http-backend
# Any image is permissible as long as:
# 1. It serves a 404 page at /
# 2. It serves 200 on a /healthz endpoint
image: registry.cn-hangzhou.aliyuncs.com/google_containers/defaultbackend:1.4
# image: gcr.io/google_containers/defaultbackend:1.4
livenessProbe:
httpGet:
path: /healthz
port: 8080
scheme: HTTP
initialDelaySeconds: 30
timeoutSeconds: 5
ports:
- containerPort: 8080
resources:
limits:
cpu: 10m
memory: 20Mi
requests:
cpu: 10m
memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
name: default-http-backend
namespace: harbor
labels:
app: default-http-backend
spec:
ports:
- port: 80
targetPort: 8080
selector:
app: default-http-backend
EOF
kubectl apply -f default-http-backend.yaml
6. Uninstall and redeploy
Uninstall:
helm uninstall myharbor -n harbor
kubectl get pvc -n harbor| awk 'NR!=1{print $1}' | xargs kubectl delete pvc -n harbor
Redeploy:
helm install myharbor --namespace harbor harbor/harbor \
--set expose.ingress.hosts.core=myharbor.com \
--set expose.ingress.hosts.notary=notary.myharbor.com \
--set-string expose.ingress.annotations.'nginx\.org/client-max-body-size'="1024m" \
--set expose.tls.secretName=myharbor.com \
--set persistence.persistentVolumeClaim.registry.storageClass=nfs-client \
--set persistence.persistentVolumeClaim.jobservice.storageClass=nfs-client \
--set persistence.persistentVolumeClaim.database.storageClass=nfs-client \
--set persistence.persistentVolumeClaim.redis.storageClass=nfs-client \
--set persistence.persistentVolumeClaim.trivy.storageClass=nfs-client \
--set persistence.persistentVolumeClaim.chartmuseum.storageClass=nfs-client \
--set persistence.enabled=true \
--set externalURL=https://myharbor.com \
--set harborAdminPassword=Harbor12345
7. Access Harbor
https://myharbor.com
Username/password: admin/Harbor12345
8. Common Harbor operations
[1] Create a project named bigdata
[2] Configure the private registry
Add the following to /etc/docker/daemon.json:
"insecure-registries":["https://myharbor.com"]
Restart docker:
systemctl restart docker
[3] Log in to Harbor from the server
docker login https://myharbor.com
# Username/password: admin/Harbor12345
[4] Tag an image and push it to Harbor
docker tag rancher/pause:3.6 myharbor.com/bigdata/pause:3.6
docker push myharbor.com/bigdata/pause:3.6
9. Adjust the containerd configuration
With docker-engine it was enough to edit /etc/docker/daemon.json, but newer k8s uses containerd, so the equivalent containerd configuration is needed, otherwise containerd pulls will fail. The certificate (ca.crt) can be downloaded from the Harbor web UI.
Create a directory for the domain:
mkdir /etc/containerd/myharbor.com
cp ca.crt /etc/containerd/myharbor.com/
Edit the config file /etc/containerd/config.toml:
[plugins."io.containerd.grpc.v1.cri".registry]
config_path = ""
[plugins."io.containerd.grpc.v1.cri".registry.auths]
[plugins."io.containerd.grpc.v1.cri".registry.configs]
[plugins."io.containerd.grpc.v1.cri".registry.configs."myharbor.com".tls]
ca_file = "/etc/containerd/myharbor.com/ca.crt"
[plugins."io.containerd.grpc.v1.cri".registry.configs."myharbor.com".auth]
username = "admin"
password = "Harbor12345"
[plugins."io.containerd.grpc.v1.cri".registry.headers]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."myharbor.com"]
endpoint = ["https://myharbor.com"]
Restart containerd:
# Reload the systemd configuration
systemctl daemon-reload
# Restart containerd
systemctl restart containerd
Simple usage:
# Just replace docker with crictl; the commands are nearly identical
crictl pull myharbor.com/bigdata/mysql:5.7.38
If crictl reports the following errors, fix them as shown below:
WARN[0000] image connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead.
ERRO[0000] unable to determine image API version: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: no such file or directory"
This error comes from the docker endpoint, which is not used here, so it does not affect anything, but it is still better to fix it. The fix is as follows:
cat <<EOF> /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
Pull the image again:
crictl pull myharbor.com/bigdata/mysql:5.7.38
That wraps up this complete walkthrough of deploying the latest Kubernetes (k8s) base environment with a highly available master; if you have questions, feel free to leave me a message.