Setting up Kubernetes is genuinely hard. This guide was pieced together from videos, blog posts, and the official docs over an entire May Day holiday. Don't lose heart when something breaks: a k8s build runs into a lot of pitfalls along the way, but if you work through each problem patiently you will get there.
Reference blog: https://www.cnblogs.com/ssgeek/p/11942062.html
- HA scheme: the usual options are keepalived+haproxy or keepalived+Nginx. I chose keepalived+haproxy: keepalived monitors the master nodes and handles failover, and haproxy load-balances requests across the masters. Strictly speaking haproxy should run on a couple of dedicated nodes whose only job is to load-balance the masters, with keepalived watching haproxy for availability and failover; but my machines can't handle that many nodes, so to keep things simple haproxy runs directly on the masters.
Hardware environment
Three master nodes, two worker nodes, and one virtual IP: 192.168.200.128 (master), 192.168.200.129 (master), 192.168.200.130 (master), 192.168.200.131 (worker), 192.168.200.132 (worker), 192.168.200.16 (VIP)
Environment configuration (all nodes)
- Configure /etc/hosts
vim /etc/hosts
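The entries themselves are not reproduced in the original; a sketch based on the IPs above and the host names used later in this guide (the names for the .131/.132 nodes and the VIP are assumptions):
192.168.200.128  master128
192.168.200.129  master129
192.168.200.130  master130
192.168.200.131  master131
192.168.200.132  master132
192.168.200.16   vip16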
- Set each VM's hostname and make sure every machine in the cluster has a distinct hostname
hostnamectl set-hostname master128
- System preparation
Update packages, disable the firewall, turn off swap, and install the dependency packages
yum update
systemctl stop firewalld && systemctl disable firewalld  # disable the firewall
yum install -y conntrack ipvsadm ipset jq sysstat curl iptables libseccomp  # install dependency packages
iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && iptables -P FORWARD ACCEPT
swapoff -a
sed -i '/swap/s/^\(.*\)$/#\1/g' /etc/fstab
setenforce 0
service dnsmasq stop && systemctl disable dnsmasq
Enable packet forwarding and bridge filtering
cat > /etc/sysctl.d/kubernetes.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward=1
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
EOF
modprobe br_netfilter
sysctl -p /etc/sysctl.d/kubernetes.conf
Install Docker (all nodes)
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
yum install -y yum-utils device-mapper-persistent-data lvm2  # install prerequisites
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo  # use the stable Docker repo
yum list docker-ce --showduplicates | sort -r  # optional: list the available docker-ce versions before installing
yum install docker-ce docker-ce-cli containerd.io -y  # install Docker
systemctl start docker && systemctl enable docker  # start Docker and enable it at boot
Add a registry mirror and move Docker's data directory (the "graph" option below) to /docker-data on your largest disk; run df -hl to see the disk mounts and find out where the biggest disk is mounted.
mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://opmd7r0m.mirror.aliyuncs.com"],
"exec-opts":["native.cgroupdriver=systemd"],
"graph":"/docker-data"
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
Install and configure keepalived (all masters)
Install and configure keepalived on the three master nodes: master128 is the keepalived MASTER, master129 and master130 are BACKUPs, and the virtual IP is 192.168.200.16. keepalived provides automatic failover behind a single address called the virtual IP (VIP): the MASTER node serves the VIP, the BACKUP nodes periodically check that the MASTER is still available, and if the MASTER fails a BACKUP takes over and keeps providing the service.
yum install -y keepalived
Configure keepalived.conf
vim /etc/keepalived/keepalived.conf
The configuration is as follows:
# MASTER configuration on master128
global_defs {
    router_id keepalive-master
}
vrrp_script check_apiserver {
    script "/etc/keepalived/check-apiserver.sh"   # health-check script; on failure the priority drops by 2
    interval 3                                    # run check-apiserver.sh every 3 seconds
    weight -2                                     # subtract 2 from the priority when the check fails
}
vrrp_instance VI-kube-master {
    state MASTER                  # this node starts as the MASTER
    interface ens33               # network interface, check yours with ip addr
    virtual_router_id 51          # MASTER and BACKUPs must use the same virtual router id
    priority 250                  # the node with the highest priority becomes MASTER
    dont_track_primary            # a failing check script must not bring keepalived itself down
    advert_int 3                  # heartbeat interval
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.200.16            # the virtual IP
    }
    track_script {
        check_apiserver
    }
}
# BACKUP configuration on master129
global_defs {
    router_id keepalive-backup1
}
vrrp_script check_apiserver {
    script "/etc/keepalived/check-apiserver.sh"
    interval 3
    weight -2
}
vrrp_instance VI-kube-backup {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 200
    dont_track_primary
    advert_int 3
    authentication {              # must match the MASTER's authentication settings
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.200.16
    }
    track_script {
        check_apiserver
    }
}
# BACKUP configuration on master130
global_defs {
    router_id keepalive-backup2
}
vrrp_script check_apiserver {
    script "/etc/keepalived/check-apiserver.sh"
    interval 3
    weight -2
}
vrrp_instance VI-kube-backup {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 150
    dont_track_primary
    advert_int 3
    authentication {              # must match the MASTER's authentication settings
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.200.16
    }
    track_script {
        check_apiserver
    }
}
Below is check-apiserver.sh. It checks that the local kube-apiserver answers on port 6443 and, if this node currently holds the VIP, that the apiserver is also reachable through the VIP.
#!/bin/sh
errorExit() {
    echo "*** $*" 1>&2
    exit 1
}
# the local apiserver must answer on port 6443
curl --silent --max-time 2 --insecure https://localhost:6443/ -o /dev/null || errorExit "Error Get https://localhost:6443/"
# if this node currently holds the VIP, the apiserver must also be reachable through it
if ip addr | grep -q 192.168.200.16; then
    curl --silent --max-time 2 --insecure https://192.168.200.16:6443/ -o /dev/null || errorExit "Error Get https://192.168.200.16:6443/"
fi
Start keepalived
systemctl enable keepalived && service keepalived start  # enable at boot and start keepalived
Check keepalived's status and the interface addresses. While the MASTER is running normally, only it shows the virtual IP; the BACKUP nodes will not, because keepalived guarantees that exactly one node serves the VIP at a time.
service keepalived status
ip a
Install and configure haproxy (all masters)
Install haproxy
yum install -y haproxy
Configure /etc/haproxy/haproxy.cfg
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    stats socket /var/lib/haproxy/stats
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option                  http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000
frontend main
    bind *:16443
    mode tcp                       # pass the apiserver's TLS traffic through untouched
    option tcplog
    default_backend app
backend app
    mode tcp
    balance roundrobin
    server master128 192.168.200.128:6443 check
    server master129 192.168.200.129:6443 check
    server master130 192.168.200.130:6443 check
Enable haproxy at boot and start it
systemctl enable haproxy && systemctl start haproxy
Install and configure Kubernetes (all nodes)
- Install the Kubernetes packages
Configure the Alibaba Cloud yum repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Install the packages; note that Docker is no longer the recommended runtime from v1.20.0 onward, so this guide stays on v1.16.3
yum install -y kubelet-1.16.3 kubeadm-1.16.3 kubectl-1.16.3
Enable kubelet at boot and start it
systemctl enable kubelet && systemctl start kubelet
Enable kubectl command completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
Initialize kubeadm (master128)
kubeadm configuration
Installing Kubernetes is mostly a matter of pulling its component images, and kubeadm already knows which base images a cluster needs. Because of network restrictions in China those images cannot be pulled from the default registry, so we simply switch to the mirror provided by Alibaba Cloud.
The steps below are performed on master128.
First dump the default configuration as a reference
kubeadm config print init-defaults
My configuration file is called kubeadm-conf.yaml; it is the default configuration with the changes below
apiServer:
  certSANs:
  - vip16
  - master128
  - master129
  - master130
  - 192.168.200.16
  - 192.168.200.128
  - 192.168.200.129
  - 192.168.200.130
  - 127.0.0.1
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta1
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "192.168.200.16:16443"
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers   # use the Alibaba Cloud mirror
kind: ClusterConfiguration
kubernetesVersion: v1.16.3
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.1.0.0/16
scheduler: {}
First do a dry run to make sure the file works
kubeadm init --config ~/kubeadm-conf.yaml --dry-run
Run the real initialization
kubeadm init --config ~/kubeadm-conf.yaml
If something goes wrong partway through, reset kubeadm with the following command
kubeadm reset
Because of the firewall, downloading the coredns image can fail; if so, pull it with docker as shown below (skip these two commands if the download works fine)
docker pull coredns/coredns:1.6.2  # pull the image
docker tag coredns/coredns:1.6.2 registry.aliyuncs.com/google_containers/coredns:1.6.2  # retag it to the name kubeadm expects
The init output ends with two join commands: the first is for joining additional master nodes, the second for joining worker nodes; keep them for later.
Also run the commands below, as prompted in the output
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
Test
kubectl get node
kubectl get pods --all-namespaces
Install the cluster network (all nodes)
The download domain is blocked, so add the following entry to /etc/hosts
199.232.28.133 raw.githubusercontent.com
Download kube-flannel.yml
curl -o kube-flannel.yml https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Because of the network situation inside China the image may fail to pull, so pull it with docker before applying the manifest
docker pull quay.io/coreos/flannel:v0.11.0-amd64
Run the following on the master node:
kubectl apply -f kube-flannel.yml
Check
kubectl get pods -n kube-system
kubectl get node
Join the other master nodes (master129, master130)
For the other masters to join the control plane they need the keys and certificates generated on master128, so copy them over. On master129 and master130, run the following from /etc/kubernetes:
scp -r root@192.168.200.128:/etc/kubernetes/pki .
scp -r root@192.168.200.128:/etc/kubernetes/admin.conf .
Still on master129 and master130, delete the copied files that each new master has to regenerate for itself
cd /etc/kubernetes/pki
rm -rf apiserver* front-proxy-client.*
cd /etc/kubernetes/pki/etcd/
rm -rf healthcheck-client.* peer.* server.*
With the certificates in place, run the control-plane join command that kubeadm init printed on master128 (the one carrying --control-plane)
kubeadm join 192.168.200.16:6443 --token tex1lz.58kdm6alx556wjmq \
--discovery-token-ca-cert-hash sha256:ada43a6f57d29cdbc9915054975a1af961dae2bb5408509752d79463bc10b5b4 \
--control-plane
Then run
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
Check whether the join succeeded:
kubectl get nodes
Join the worker nodes (all workers)
kubeadm join 192.168.200.16:6443 --token hhjmtp.2cjjir23frxovz4p \
--discovery-token-ca-cert-hash sha256:ea552b566f04725584c53e55b124b50a24e29aa18b10deaef078cfd1e60fefd5
Check
kubectl get nodes
Dashboard installation (master128)
Download the manifest
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta6/aio/deploy/recommended.yaml
Edit the manifest
vim recommended.yaml
Add the changes at the appropriate place so the dashboard is reachable from outside the cluster (it is accessed on NodePort 30001 later, see the sketch below), then apply the file
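The original screenshot of the edit is not reproduced; a minimal sketch of the usual change, assuming the stock kubernetes-dashboard Service from recommended.yaml and the NodePort 30001 used for access later:
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort              # expose the dashboard outside the cluster
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001         # the port opened in the browser later
  selector:
    k8s-app: kubernetes-dashboard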
kubectl apply -f recommended.yaml
Check the installation status
kubectl get pods -n kubernetes-dashboard
Create the file dashboard-adminuser.yaml
vim dashboard-adminuser.yaml
Add the following content; it creates a service account and binds it to the default cluster-admin cluster role
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
Apply it
kubectl apply -f dashboard-adminuser.yaml
Get the login token with the command below
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
Open port 30001 in a browser and log in with the token. Use Firefox here; Edge and Chrome refuse to open it. Mine is at
https://192.168.200.128:30001 — just paste your token in.
Harbor HA installation (master128, master130, master131)
Harbor is a private Docker registry. With harbor the images the k8s cluster pulls can be kept in one place, which saves space, and its web UI makes managing the cluster's images much easier.
Download the offline installer from the GitHub releases page: https://github.com/goharbor/harbor/releases
Install docker-compose, then run the installer
curl -L "https://github.com/docker/compose/releases/download/1.25.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
cd harbor
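The harbor.yml edits are not shown in the original post. As a rough sketch, before running install.sh you need at least a hostname and an HTTP port in harbor.yml; the port 8081 here is an assumption taken from the Nginx upstream further down, and the https block is commented out because no certificates are configured:
hostname: master128                    # address clients use to reach this harbor instance
http:
  port: 8081                           # assumed from the Nginx upstream below
# https:                               # leave the https block commented out unless you have certificates
#   port: 443
#   certificate: /your/certificate/path
#   private_key: /your/private/key/path
harbor_admin_password: Harbor12345     # default admin password, change it
data_volume: /data                     # where harbor stores its data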
sh install.sh
Harbor is now installed on master128.
Repeat the same steps, with the same configuration, on master130 and master131.
Pull Nginx
docker pull nginx:1.20.0
Nginx configuration
vim /usr/nginx/nginx.conf
user  nginx;
worker_processes  1;    # number of worker processes; the higher the value the more concurrency, ideally one per CPU
error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

# events block
events {
    worker_connections  1024;    # max connections per worker process, default 1024
}

stream {
    upstream harbor {
        server 192.168.200.130:8081;
    }
    server {
        listen 8082;
        proxy_pass harbor;
        proxy_timeout 300s;
        proxy_connect_timeout 5s;
    }
}
Docker run script
vim /usr/nginx/docker-nginx.sh
#!/bin/bash
# chkconfig: 2345 85 15
# description: auto_run
docker stop harbornginx
docker rm harbornginx
docker run -idt -p 8082:8082 --name harbornginx -v /usr/nginx/nginx.conf:/etc/nginx/nginx.conf nginx:1.20.0
Register the script so it runs at boot
chmod +x docker-nginx.sh
cp ./docker-nginx.sh /etc/init.d
chkconfig --add docker-nginx.sh
chkconfig docker-nginx.sh on
service docker-nginx.sh start
Verify
netstat -anp|grep 8082
Access to harbor now goes through Nginx, which intercepts requests on port 8082, so the registry port becomes 8082. Add the following entry to /etc/docker/daemon.json on every node (merge it into the existing file rather than replacing it); it is what allows docker to log in to harbor
vim /etc/docker/daemon.json
{
"insecure-registries": ["master128:8082"]
}
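Since daemon.json already holds the mirror and storage settings configured earlier, the merged file ends up looking roughly like this (a sketch; keep whatever else is already in yours):
{
  "registry-mirrors": ["https://opmd7r0m.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "graph": "/docker-data",
  "insecure-registries": ["master128:8082"]
}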
systemctl restart docker  # restart docker
# log in to the harbor registry, then enter the username and password
docker login master128:8082
Push an image. I created a project named k8s in the web UI; pushing an image into that project looks like this:
# retag the image
docker tag nginx:1.20.0 master128:8082/k8s/nginx:1.20.0
# push it
docker push master128:8082/k8s/nginx:1.20.0
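Once an image is in harbor, the cluster can run it like any other image, as long as every node has master128:8082 in its insecure-registries and is logged in. A quick hedged example, assuming the k8s project was created as public so no imagePullSecrets are needed:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-from-harbor
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-from-harbor
  template:
    metadata:
      labels:
        app: nginx-from-harbor
    spec:
      containers:
      - name: nginx
        image: master128:8082/k8s/nginx:1.20.0   # the image pushed above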
Configure harbor to start at boot
vim /etc/systemd/system/harbor.service
Add the following configuration. /usr/local/bin/docker-compose is where my docker-compose binary lives (find yours with which docker-compose), and /usr/local/bin/harbor/docker-compose.yml is where my docker-compose.yml is stored (find yours with locate docker-compose.yml)
[Unit]
Description=Harbor
After=docker.service systemd-networkd.service systemd-resolved.service
Requires=docker.service
Documentation=http://github.com/vmware/harbor
[Service]
Type=simple
Restart=on-failure
RestartSec=5
ExecStart=/usr/local/bin/docker-compose -f /usr/local/bin/harbor/docker-compose.yml up
ExecStop=/usr/local/bin/docker-compose -f /usr/local/bin/harbor/docker-compose.yml down
[Install]
WantedBy=multi-user.target
Enable it at boot
chmod +x /etc/systemd/system/harbor.service
systemctl enable harbor.service && systemctl start harbor.service && systemctl status harbor.service
ingress-nginx installation and configuration (master128)
An Ingress is a set of rules, keyed on DNS name (host) or URL path, that forward requests to a specific Service resource; it is used to route traffic from outside the cluster to services published inside it. Official docs:
https://kubernetes.github.io/ingress-nginx/deploy/
Download the manifest
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.32.0/deploy/static/provider/cloud/deploy.yaml
I want to run the ingress-nginx controller on one of the worker nodes; here I deploy it on master131.
# first, label master131
kubectl label node master131 app=master131-ingress
Edit the deploy.yaml you just downloaded: in the controller Deployment, pin the pod to the labelled node and put it on the host network, as sketched below.
When a Pod is configured with hostNetwork: true, the application running in the Pod binds directly to the node's ports, so any host on the node's network can reach the application through those ports.
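The exact snippet from the original post is not reproduced; a minimal sketch of the additions to the controller Deployment's pod spec (spec.template.spec), using the app=master131-ingress label applied above:
spec:
  template:
    spec:
      hostNetwork: true                # bind the controller directly to the node's ports
      nodeSelector:
        app: master131-ingress         # schedule the controller only onto master131
These lines are merged into the existing Deployment; everything else in the spec stays as downloaded.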
Apply the manifest
kubectl apply -f deploy.yaml
Check whether all the services in the ingress-nginx namespace started successfully
kubectl get all -n ingress-nginx
If some of them have not started, it is because their images have not been pulled yet; grep the manifest to see which images are needed
grep image deploy.yaml
Pull the images
docker pull quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.32.0
docker pull jettech/kube-webhook-certgen:v1.2.0
# verify: check whether the pods under ingress-nginx are running now
kubectl get pods -n ingress-nginx
Push the images to harbor so the other servers can use them
docker tag quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.32.0 master128:8082/k8s/nginx-ingress-controller:0.32.0
docker tag jettech/kube-webhook-certgen:v1.2.0 master128:8082/k8s/kube-webhook-certgen:v1.2.0
docker push master128:8082/k8s/nginx-ingress-controller:0.32.0
docker push master128:8082/k8s/kube-webhook-certgen:v1.2.0
master130 and master131 both pull the images from harbor and retag them; remember to log in first
docker login master128:8082
docker pull master128:8082/k8s/nginx-ingress-controller:0.32.0
docker pull master128:8082/k8s/kube-webhook-certgen:v1.2.0
docker tag master128:8082/k8s/nginx-ingress-controller:0.32.0 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.32.0
docker tag master128:8082/k8s/kube-webhook-certgen:v1.2.0 jettech/kube-webhook-certgen:v1.2.0
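With the controller running on master131, publishing a service from outside the cluster comes down to writing an Ingress rule of the kind described at the top of this section. A minimal hedged sketch; the host name and the nginx-demo Service (port 80) are placeholders rather than something created earlier in this guide:
apiVersion: networking.k8s.io/v1beta1        # API version matching ingress-nginx 0.32 on k8s 1.16
kind: Ingress
metadata:
  name: demo-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: demo.example.com                   # requests for this host...
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx-demo            # ...are forwarded to this Service
          servicePort: 80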