Install a Kubernetes cluster with one master and two nodes, named k8s-master1, k8s-node1, and k8s-node2.
References:
https://blog.csdn.net/fy_long/article/details/86542872
https://blog.csdn.net/weixin_38380858/article/details/88830853
the 尚硅谷 Kubernetes course
Many thanks to them for their help.
Installation requirements
Before you begin, the machines used to deploy the Kubernetes cluster must meet the following requirements:
(1) One or more machines running CentOS 7.x x86_64
(2) Hardware: 2 GB of RAM or more, 2 CPUs or more, 30 GB of disk or more
(3) Network connectivity between all machines in the cluster
(4) Internet access to pull images; if the servers cannot reach the internet, download the images in advance and import them onto the nodes
(5) Swap disabled
Notes:
1. There are many tutorials online for installing CentOS: download the image from the official site and install it. I used CentOS 7.
Download link:
http://mirrors.aliyun.com/centos/7.9.2009/isos/x86_64/
The first ISO in the list is fine.
During installation, give master1 a 30 GB disk and node1 and node2 40 GB disks each.
2. Make sure the network is enabled during installation; the hostname can be left unchanged for now (it will be set later).
3. Pay attention to the VM settings for master1 and for node1/node2 (memory, CPUs, and disk sizes as described above).
Prepare the environment
(1) Software:
CentOS 7, Docker, Kubernetes
(2) Server plan:
- k8s-master1:
IP: 192.168.202.139 (check with the ifconfig command)
Components: kube-apiserver, kube-controller-manager, kube-scheduler, etcd
- k8s-node1:
IP: 192.168.202.140
Components: kubelet, kube-proxy, docker, etcd
- k8s-node2:
IP: 192.168.202.141
Components: kubelet, kube-proxy, docker, etcd
(3) Downloading Xshell makes things easier (I use Xshell 6).
Connect to the three VMs:
The nodes are set up the same way as the master; just change the host and the session name.
Click connect; I usually log in as root, which I find more convenient.
From here on, all the work is done in Xshell.
Operating system initialization
To be safe, run both the temporary and the permanent variant of each command :)
- Disable the firewall:
$ systemctl stop firewalld # temporary
$ systemctl disable firewalld # permanent
- Disable SELinux:
$ sed -i 's/enforcing/disabled/' /etc/selinux/config # permanent
$ setenforce 0 # temporary
- Disable swap:
$ swapoff -a # temporary
$ sed -ri 's/.*swap.*/#&/' /etc/fstab # permanent
- Set the hostname on each machine according to the plan (k8s-master1, k8s-node1, k8s-node2):
$ hostnamectl set-hostname <hostname>
- Add hosts entries on the master:
$ cat >> /etc/hosts << EOF
192.168.202.139 k8s-master1
192.168.202.140 k8s-node1
192.168.202.141 k8s-node2
EOF
- Pass bridged IPv4 traffic to iptables chains:
$ cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
$ sysctl --system # apply the settings
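A quick, optional sanity check that the two keys are actually set (on CentOS 7 they only exist once the br_netfilter module is loaded):
$ modprobe br_netfilter # only needed if the keys below are reported as missing
$ sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables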
- Synchronize time:
$ yum install ntpdate -y
$ ntpdate time.windows.com
Steps 1, 2, 3, 6, and 7 above (firewall, SELinux, swap, sysctl, time sync) are identical on all three machines. In Xshell, enable Tools -> Send key input to all sessions, and switch it back off when you no longer need it.
Deploy the Etcd cluster (all 3 machines)
Etcd is a distributed key-value store; Kubernetes uses Etcd to store its cluster data.
- Prepare the cfssl certificate tooling
This can be done on any one server; here I use the master1 node.
cfssl is an open-source certificate management tool that generates certificates from JSON files and is easier to use than openssl.
The downloads below may fail because the site can be hard to reach; retry a few times :(
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
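A quick check that the tools are installed and on the PATH (optional):
cfssl version
which cfssl cfssljson cfssl-certinfo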
- Generate the Etcd certificates
(1) Self-signed certificate authority (CA)
Create the working directories:
mkdir -p ~/TLS/{etcd,k8s}
cd ~/TLS/etcd
Self-sign the CA:
# Create the etcd CA signing configuration
cat << EOF | tee ca-config.json
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"www": {
"expiry": "87600h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
EOF
# Create the etcd CA certificate signing request (CSR)
cat << EOF | tee ca-csr.json
{
"CN": "etcd CA",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Shenzhen",
"ST": "Shenzhen"
}
]
}
EOF
# Generate the etcd CA certificate and private key
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
ls *pem
ca-key.pem ca.pem
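If you want to inspect the CA that was just generated (optional), cfssl-certinfo can print its details, such as the subject CN and the expiry date:
cfssl-certinfo -cert ca.pem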
(2) Use the self-signed CA to issue the Etcd HTTPS certificate
# Create the etcd server certificate request (change the hosts field to your own IPs)
cat << EOF | tee server-csr.json
{
"CN": "etcd",
"hosts": [
"192.168.202.139",
"192.168.202.140",
"192.168.202.141"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Shenzhen",
"ST": "Shenzhen"
}
]
}
EOF
Note: the IPs in the hosts field above are the internal cluster-communication IPs of all etcd nodes; not one may be missing. To make later expansion easier you can also list a few spare IPs.
Generate the certificate:
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
ls server*pem
server-key.pem server.pem
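It is worth confirming that all three etcd IPs made it into the certificate's Subject Alternative Name list, since a missing IP here is a common cause of TLS errors later. One way to check, using the openssl that ships with CentOS:
openssl x509 -in server.pem -noout -text | grep -A 1 "Subject Alternative Name"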
- Download the binaries from GitHub
(1) I used version 3.4.9:
https://github.com/etcd-io/etcd/releases
(2) If instead you use the etcd package from the referenced article's download resources,
set up a shared folder for the three VMs:
https://jingyan.baidu.com/article/b24f6c82e15cf2c7bfe5daa6.html
then follow:
https://blog.csdn.net/lq1759336950/article/details/104866536
When this is done, the shared folder is visible inside each VM.
- Deploy the Etcd cluster
The following is done on node 1 (master1); to keep things simple, all files generated on node 1 will be copied to node 2 and node 3 afterwards.
(1) Create the working directory and unpack the binary package
cd ~/VMsharek8s1.13
tar zxvf etcd-v3.4.9-linux-amd64.tar.gz
mkdir /opt/etcd/{bin,cfg,ssl} -p
mv etcd-v3.4.9-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/
(2) Create the etcd configuration file
vim /opt/etcd/cfg/etcd.conf
#[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.202.139:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.202.139:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.202.139:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.202.139:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.202.139:2380,etcd-2=https://192.168.202.140:2380,etcd-3=https://192.168.202.141:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_NAME: node name, unique within the cluster
ETCD_DATA_DIR: data directory
ETCD_LISTEN_PEER_URLS: listen address for cluster (peer) communication
ETCD_LISTEN_CLIENT_URLS: listen address for client access
ETCD_INITIAL_ADVERTISE_PEER_URLS: advertised peer address
ETCD_ADVERTISE_CLIENT_URLS: advertised client address
ETCD_INITIAL_CLUSTER: addresses of all cluster members
ETCD_INITIAL_CLUSTER_TOKEN: cluster token
ETCD_INITIAL_CLUSTER_STATE: state when joining the cluster; new for a new cluster, existing to join an existing one
(3) Manage etcd with systemd
vim /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd.conf
ExecStart=/opt/etcd/bin/etcd \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--peer-cert-file=/opt/etcd/ssl/server.pem \
--peer-key-file=/opt/etcd/ssl/server-key.pem \
--trusted-ca-file=/opt/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem \
--logger=zap
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
(4) Copy the certificates generated earlier
Copy them to the paths referenced in the configuration file:
cp ~/TLS/etcd/ca*pem ~/TLS/etcd/server*pem /opt/etcd/ssl/
(5) Copy all the files generated on node 1 to node 2 and node 3
cd /opt/
scp -r etcd 192.168.202.140:/opt/
scp -r etcd 192.168.202.141:/opt/
scp /usr/lib/systemd/system/etcd.service 192.168.202.140:/usr/lib/systemd/system/etcd.service
scp /usr/lib/systemd/system/etcd.service 192.168.202.141:/usr/lib/systemd/system/etcd.service
Then, on node 2 and node 3, edit etcd.conf and change the node name and the IPs to those of the current server:
vi /opt/etcd/cfg/etcd.conf
#[Member]
ETCD_NAME="etcd-1" # change this: etcd-2 on node 2, etcd-3 on node 3
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.202.139:2380" # change to the current server's IP
ETCD_LISTEN_CLIENT_URLS="https://192.168.202.139:2379" # change to the current server's IP
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.202.139:2380" # change to the current server's IP
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.202.139:2379" # change to the current server's IP
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.202.139:2380,etcd-2=https://192.168.202.140:2380,etcd-3=https://192.168.202.141:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
(6) Start etcd and enable it at boot (run this step on all three machines)
Note: the first node you start may appear to hang until at least one other member is up; that is expected.
systemctl daemon-reload
systemctl start etcd
systemctl enable etcd
(7) Check the cluster status
ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.202.139:2379,https://192.168.202.140:2379,https://192.168.202.141:2379" endpoint health
https://192.168.202.140:2379 is healthy: successfully committed proposal: took = 12.763202ms
https://192.168.202.141:2379 is healthy: successfully committed proposal: took = 13.242415ms
https://192.168.202.139:2379 is healthy: successfully committed proposal: took = 22.820507ms
If you see output like the above, the cluster is deployed successfully. If there is a problem, check the logs first: /var/log/messages or journalctl -u etcd
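Besides endpoint health, you can also list the members to confirm that all three nodes joined (same certificates as above; any one endpoint is enough):
ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.202.139:2379" member list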
Install Docker
I used 19.03.9.
Download: https://download.docker.com/linux/static/stable/x86_64/
Run the following on all nodes. Docker is installed here from the static binary tarball; installing with yum works just as well.
(1) Unpack the binary package in the directory you downloaded it to
tar zxvf docker-19.03.9.tgz
mv docker/* /usr/bin
(2) Manage Docker with systemd
cat > /usr/lib/systemd/system/docker.service << EOF
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP \$MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
EOF
(3) Create the Docker daemon configuration file
mkdir /etc/docker
cat > /etc/docker/daemon.json << EOF
{
"registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF
registry-mirrors: Alibaba Cloud registry mirror (image pull accelerator)
(4) Start Docker and enable it at boot
systemctl daemon-reload
systemctl start docker
systemctl enable docker
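To confirm that Docker is running and that the registry mirror from daemon.json is active, you can check:
docker version
docker info | grep -A 1 "Registry Mirrors"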
Deploy the Master Node
- Generate the kube-apiserver certificates
(1) Self-signed certificate authority (CA)
cd ~/TLS/k8s
cat << EOF | tee ca-config.json
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"kubernetes": {
"expiry": "87600h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
EOF
cat << EOF | tee ca-csr.json
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Shenzhen",
"ST": "Shenzhen",
"O": "k8s",
"OU": "System"
}
]
}
EOF
(2) Generate the CA certificate:
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
ls *pem
ca-key.pem ca.pem
(3) Use the self-signed CA to issue the kube-apiserver HTTPS certificate
Create the certificate signing request file:
cat << EOF | tee server-csr.json
{
"CN": "kubernetes",
"hosts": [
"10.0.0.1",
"127.0.0.1",
"192.168.202.139",
"192.168.202.140",
"192.168.202.141",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Shenzhen",
"ST": "Shenzhen",
"O": "k8s",
"OU": "System"
}
]
}
EOF
Generate the certificate:
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
ls server*pem
server-key.pem server.pem
# Create the kube-proxy certificate request
cat << EOF | tee kube-proxy-csr.json
{
"CN": "system:kube-proxy",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Shenzhen",
"ST": "Shenzhen",
"O": "k8s",
"OU": "System"
}
]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
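As with the other certificates, this should leave a key/cert pair in the current directory:
ls kube-proxy*pem
kube-proxy-key.pem kube-proxy.pem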
- Download the binaries from GitHub
Download:
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.19.md#server-binaries
Note: the page lists many packages; downloading a single server package is enough, since it contains the binaries for both the Master and the Worker Nodes.
- Unpack the binary package
mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}
cd ~/VMsharek8s1.13
tar zxvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin
cp kube-apiserver kube-scheduler kube-controller-manager kubectl /opt/kubernetes/bin
cp kubectl /usr/bin/
- Deploy kube-apiserver
(1) Create the configuration file
vim /opt/kubernetes/cfg/kube-apiserver.conf
KUBE_APISERVER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--etcd-servers=https://192.168.202.139:2379,https://192.168.202.140:2379,https://192.168.202.141:2379 \
--bind-address=192.168.202.139 \
--secure-port=6443 \
--advertise-address=192.168.202.139 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth=true \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-32767 \
--kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \
--kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/opt/kubernetes/logs/k8s-audit.log"
--logtostderr: log to stderr (set to false here so logs go to the log directory)
--v: log level
--log-dir: log directory
--etcd-servers: etcd cluster addresses
--bind-address: listen address
--secure-port: https secure port
--advertise-address: cluster advertise address
--allow-privileged: allow privileged containers
--service-cluster-ip-range: Service virtual IP range
--enable-admission-plugins: admission control plugins
--authorization-mode: authorization mode; enables RBAC authorization and Node self-management
--enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
--token-auth-file: bootstrap token file
--service-node-port-range: default port range for NodePort Services
--kubelet-client-xxx: client certificate used by the apiserver to access the kubelet
--tls-xxx-file: apiserver https certificates
--etcd-xxxfile: certificates for connecting to the etcd cluster
--audit-log-xxx: audit log settings
A word of warning: a missing hyphen in one of these flags is maddening and very hard to spot, so if something fails, check the flags carefully.
If there is a problem, you can look for errors with the following command:
cat /var/log/messages|grep kube-apiserver|grep -i error
(2) Copy the certificates generated earlier
Copy them to the paths referenced in the configuration file:
cp ~/TLS/k8s/ca*pem ~/TLS/k8s/server*pem /opt/kubernetes/ssl/
(3) Enable the TLS Bootstrapping mechanism
TLS Bootstrapping: once the Master apiserver has TLS authentication enabled, the kubelet and kube-proxy on each Node must use valid CA-signed certificates to talk to kube-apiserver. With many Nodes, issuing those client certificates by hand is a lot of work and makes scaling the cluster harder. To simplify this, Kubernetes introduced TLS bootstrapping, which issues client certificates automatically: the kubelet requests a certificate from the apiserver as a low-privilege user, and the apiserver signs the kubelet certificate dynamically.
This approach is therefore strongly recommended on the Nodes; for now it is mainly used for the kubelet, while kube-proxy still uses a certificate that we issue ourselves.
Format of token.csv: token,user name,UID,user group. You can generate your own token and substitute it:
head -c 16 /dev/urandom | od -An -t x | tr -d ' '
14c7a3907aead7409e1e16bb7fc2cc54
Create the token file referenced in the configuration above:
cat > /opt/kubernetes/cfg/token.csv << EOF
14c7a3907aead7409e1e16bb7fc2cc54,kubelet-bootstrap,10001,"system:node-bootstrapper"
EOF
(4) Manage kube-apiserver with systemd
cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
(5) Start kube-apiserver and enable it at boot
systemctl daemon-reload
systemctl start kube-apiserver
systemctl enable kube-apiserver
Check the apiserver (you can use this same command pattern for the later components too):
systemctl status kube-apiserver
(6) Authorize the kubelet-bootstrap user to request certificates
kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap
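You can verify that the binding exists and maps the system:node-bootstrapper ClusterRole to the kubelet-bootstrap user:
kubectl get clusterrolebinding kubelet-bootstrap -o wide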
- Deploy kube-controller-manager
(1) Create the configuration file
cat > /opt/kubernetes/cfg/kube-controller-manager.conf << EOF
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--leader-elect=true \\
--master=127.0.0.1:8080 \\
--bind-address=127.0.0.1 \\
--allocate-node-cidrs=true \\
--cluster-cidr=10.244.0.0/16 \\
--service-cluster-ip-range=10.0.0.0/24 \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--experimental-cluster-signing-duration=87600h0m0s"
EOF
Note on the double backslash above: the first backslash is an escape character and the second is the line-continuation character; the escape is needed so that the heredoc (EOF) keeps the backslash and the line breaks in the generated file.
--master: connect to the apiserver through the local insecure port 8080.
--leader-elect: when several instances of this component run, elect a leader automatically (HA)
--cluster-signing-cert-file / --cluster-signing-key-file: the CA used to automatically sign kubelet certificates; must match the apiserver's CA
(2) Manage kube-controller-manager with systemd
vim /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/opt/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
(3) Start it and enable it at boot
systemctl daemon-reload
systemctl start kube-controller-manager
systemctl enable kube-controller-manager
- Deploy kube-scheduler
(1) Create the configuration file
cat > /opt/kubernetes/cfg/kube-scheduler.conf << EOF
KUBE_SCHEDULER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--leader-elect \
--master=127.0.0.1:8080 \
--bind-address=127.0.0.1"
EOF
--master: connect to the apiserver through the local insecure port 8080.
--leader-elect: when several instances run, elect a leader automatically (HA)
(2) Manage kube-scheduler with systemd
cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler.conf
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
(3) Start it and enable it at boot
systemctl daemon-reload
systemctl start kube-scheduler
systemctl enable kube-scheduler
(4) Check the cluster status
All components have now been started; use kubectl to check the status of the cluster components:
kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-1 Healthy {"health":"true"}
etcd-2 Healthy {"health":"true"}
etcd-0 Healthy {"health":"true"}
Output like the above means the Master node components are running normally.
Deploy the Worker Nodes
The following is done on node1 and node2.
For convenience, set everything up on node1 first and copy it to node2 at the end.
1. Create the working directory and copy the binaries
Create the working directory on node1:
mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}
Download the node package from:
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.19.md#node-binaries
tar zxvf kubernetes-node-linux-amd64.tar.gz
cd kubernetes/node/bin
cp kubelet kube-proxy /opt/kubernetes/bin # local copy
On the master, cd to the path used earlier (/opt/kubernetes) and copy the ssl files to node1 (run this on the master):
scp -r ssl/* 192.168.202.140:/opt/kubernetes/ssl
(on node1)
mkdir TLS
(on the master)
cd ~/TLS
scp -r k8s 192.168.202.140:~/TLS
# copy the kube-proxy certificates into the ssl directory as well (run from the directory that contains them, ~/TLS/k8s)
cp kube-proxy-key.pem kube-proxy.pem /opt/kubernetes/ssl
# Create the kubelet bootstrap.kubeconfig file
# cd into the kubernetes certificate directory and create environment.sh there (on the master)
cd ~/TLS/k8s
vim environment.sh
-----------------------------------------------------------------------------start
# Create the kubelet bootstrap.kubeconfig
BOOTSTRAP_TOKEN=14c7a3907aead7409e1e16bb7fc2cc54 # change this to the token generated on the master
KUBE_APISERVER="https://192.168.202.139:6443"
# Set cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=./ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=bootstrap.kubeconfig
# Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
--token=${BOOTSTRAP_TOKEN} \
--kubeconfig=bootstrap.kubeconfig
# Set context parameters
kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=bootstrap.kubeconfig
# Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
#----------------------
# Create the kube-proxy.kubeconfig file
kubectl config set-cluster kubernetes \
--certificate-authority=./ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
--client-certificate=./kube-proxy.pem \
--client-key=./kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
------------------------------------------------------------------------end
# Make environment.sh executable
chmod +x environment.sh
# Generate the kubelet bootstrapping kubeconfig (and the kube-proxy kubeconfig)
./environment.sh
scp -r bootstrap.kubeconfig 192.168.202.140:/opt/kubernetes/cfg
scp -r kube-proxy.kubeconfig 192.168.202.140:/opt/kubernetes/cfg
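If you want to sanity-check the generated files before or after copying them (optional), kubectl can print a kubeconfig with the embedded certificate data redacted:
kubectl config view --kubeconfig=bootstrap.kubeconfig
kubectl config view --kubeconfig=kube-proxy.kubeconfig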
2. Deploy the kubelet
(1) Create the configuration file
cat > /opt/kubernetes/cfg/kubelet.conf << EOF
KUBELET_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--hostname-override=k8s-node1 \\
--network-plugin=cni \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet-config.yml \\
--cert-dir=/opt/kubernetes/ssl \\
--pod-infra-container-image=lizhenliang/pause-amd64:3.0"
EOF
--hostname-override: the node's display name, unique within the cluster (change it to the current node's name)
--network-plugin: enable CNI
--kubeconfig: empty path; this file is generated automatically and later used to connect to the apiserver
--bootstrap-kubeconfig: used on first start to request a certificate from the apiserver
--config: configuration parameters file
--cert-dir: directory where the kubelet certificates are generated
--pod-infra-container-image: image of the pause container that manages the Pod network
(2) Configuration parameters file
vim /opt/kubernetes/cfg/kubelet-config.yml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
(3) Manage the kubelet with systemd
vim /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
(4) Start the kubelet and enable it at boot
systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet
- Deploy kube-proxy
(1) Create the configuration file
cd /opt/kubernetes/cfg
cat > /opt/kubernetes/cfg/kube-proxy.conf << EOF
KUBE_PROXY_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--config=/opt/kubernetes/cfg/kube-proxy-config.yml"
EOF
(2) Configuration parameters file
cat > /opt/kubernetes/cfg/kube-proxy-config.yml << EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: k8s-node1
clusterCIDR: 10.0.0.0/24
EOF
Note the indentation (two spaces) before kubeconfig; without it you will get an error.
(3) Manage kube-proxy with systemd
cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
(4) Start kube-proxy and enable it at boot
systemctl daemon-reload
systemctl start kube-proxy
systemctl enable kube-proxy
- Approve the kubelet certificate request and join the node to the cluster (on the master)
# View kubelet certificate requests
kubectl get csr
# Approve the request (replace the CSR name with your own, as shown by kubectl get csr)
kubectl certificate approve node-csr-uCEGPOIiDdlLODKts8J658HrFq9CZ-K6M4G7bjhk8A
# View the nodes
kubectl get node
Note: because the network plugin has not been deployed yet, the node will show as NotReady.
- Deploy the CNI network
(1) Create the directories
mkdir /opt/cni/bin /opt/cni/net.d -p
(2) Prepare the CNI plugin binaries:
Download: https://github.com/containernetworking/plugins/releases
I used v0.8.6.
Unpack the package into the default working directory:
tar zxvf cni-plugins-linux-amd64-v0.8.6.tgz -C /opt/cni/bin
(3) On the master1 node
Deploy the CNI network (flannel):
Option 1:
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
sed -i -r "s#quay.io/coreos/flannel:.*-amd64#lizhenliang/flannel:v0.12.0-amd64#g" kube-flannel.yml
The default image registry cannot be reached, so the sed above switches the image to a Docker Hub mirror.
Option 2:
I downloaded the file directly; you can use this copy:
Link: https://pan.baidu.com/s/1LdL2eI33iMY1HMEG2p2HBg
Extraction code: 1234
Put the file into the shared folder to get it onto the VM.
cp kube-flannel.yaml ~/
kubectl apply -f kube-flannel.yaml
kubectl get pods -n kube-system
kubectl get node
Once the network plugin is deployed, the Node becomes Ready.
(4) Test the Kubernetes cluster
Create a pod in the cluster and verify that it runs correctly:
$ kubectl create deployment nginx --image=nginx
$ kubectl expose deployment nginx --port=80 --type=NodePort
$ kubectl get pod,svc
Access URL: http://NodeIP:Port (NodeIP is node1's 192.168.202.140 address; Port is the NodePort shown by kubectl get svc)
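If you prefer to test from the shell, you can look up the NodePort that was assigned and curl it (the port value varies per cluster; 192.168.202.140 is node1 from the plan above):
NODE_PORT=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')
curl -I http://192.168.202.140:${NODE_PORT}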
6. Authorize the apiserver to access the kubelet
vim apiserver-to-kubelet-rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
      - pods/log
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
kubectl apply -f apiserver-to-kubelet-rbac.yaml
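This ClusterRole and ClusterRoleBinding are what allow the apiserver to reach the kubelet API; without them, commands that go through the apiserver to the kubelet (such as kubectl logs and kubectl exec) fail with a Forbidden error. A quick check using the nginx deployment created earlier:
kubectl logs deployment/nginx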
7. Add Worker Node2
(1) On node1, copy node1's files to node2
cd /opt
scp -r kubernetes 192.168.202.141:/opt/
scp -r /usr/lib/systemd/system/kubelet.service 192.168.202.141:/usr/lib/systemd/system/
scp -r /usr/lib/systemd/system/kube-proxy.service 192.168.202.141:/usr/lib/systemd/system/
scp -r /opt/cni/ 192.168.202.141:/opt/
(2) On node2, delete the copied kubelet certificate and kubeconfig files
rm /opt/kubernetes/cfg/kubelet.kubeconfig
rm -f /opt/kubernetes/ssl/kubelet*
Note: these files were generated automatically after the certificate request was approved and are different for every Node, so they must be deleted and regenerated.
Go to the master1 node, run environment.sh, and then send the resulting files to node2.
(3) Change the hostname references on node2
Change the hostname in kubelet.conf (--hostname-override)
Change the hostname in kube-proxy-config.yml (hostnameOverride)
(4) Start the services and enable them at boot
systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet
systemctl start kube-proxy
systemctl enable kube-proxy
(5) On the Master, approve the new Node's kubelet certificate request
kubectl get csr
kubectl certificate approve <csr-name>
(6) Check the Node status
kubectl get node
To add further nodes, repeat the same steps. Remember to change the hostname!
Note:
This tutorial does not run the kubelet and kube-proxy services on the master; if you want them there, you can set them up on the master node yourself.
Reference:
https://blog.csdn.net/weixin_38380858/article/details/88830853