1. Environment

Server information

Hostname | IP | Notes
---|---|---
k8s-master1 | 192.168.0.216 | Master 1, etcd1, node
k8s-master2 | 192.168.0.217 | Master 2, etcd2, node
k8s-master3 | 192.168.0.218 | Master 3, etcd3, node
slb | lb.ypvip.com.cn | public Alibaba Cloud SLB domain

This environment runs on Alibaba Cloud. API server high availability is provided by an Alibaba Cloud SLB; if your environment is not on a cloud platform, you can achieve the same with Nginx + Keepalived or HAProxy + Keepalived.
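For an off-cloud setup, the layer-4 proxying that the SLB provides can be sketched as an Nginx stream block. This is an illustrative fragment, not part of the original setup; it assumes Nginx is built with the stream module, and it is written to /tmp here only so the file is easy to inspect (on a real load balancer it would go into nginx.conf):

```shell
# Sketch only: an Nginx stream (L4) config that forwards port 6443
# to the three master apiservers.
cat > /tmp/nginx-apiserver.conf <<'EOF'
stream {
    upstream kube_apiserver {
        server 192.168.0.216:6443;
        server 192.168.0.217:6443;
        server 192.168.0.218:6443;
    }
    server {
        listen 6443;
        proxy_pass kube_apiserver;
    }
}
EOF
```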
Service versions and cluster notes

- The Alibaba SLB has a TCP listener on port 6443 (layer-4 load balancing to the master apiservers).
- All Alibaba Cloud ECS hosts run CentOS 7.6.1810, with kernels upgraded to 5.x.
- The cluster runs kube-proxy in iptables mode (an IPVS configuration is left commented out in the kube-proxy config).
- Calico runs in IPIP mode.
- The cluster uses the default domain svc.cluster.local.
- 10.10.0.1 is the cluster IP of the kubernetes Service.
- Docker CE 19.03.6
- Kubernetes v1.18.2
- Etcd v3.4.7
- Calico v3.14.0
- CoreDNS 1.6.7
- Metrics Server v0.3.6
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.10.0.1 <none> 443/TCP 6d23h
PS: all of the versions above were the latest available at the time of writing.
Service and Pod IP range allocation

Name | CIDR | Notes
---|---|---
service-cluster-ip | 10.10.0.0/16 | 65,534 usable addresses
pods-ip | 10.20.0.0/16 | 65,534 usable addresses
cluster DNS | 10.10.0.2 | Service name resolution inside the cluster
k8s svc | 10.10.0.1 | cluster IP of the kubernetes Service
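The "65,534 usable addresses" figure for each /16 follows from the 16 host bits, minus the network and broadcast addresses:

```shell
# A /16 leaves 16 host bits: 2^16 addresses, minus the network
# and broadcast addresses, gives 65534 usable hosts.
echo $(( 2**16 - 2 ))   # → 65534
```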
2. Environment initialization

All servers in the cluster need the following initialization steps.
2.1 Stop the firewalld firewall on all machines
$ systemctl stop firewalld
$ systemctl disable firewalld
2.2 Disable swap
$ swapoff -a
$ sed -i 's/.*swap.*/#&/' /etc/fstab
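The sed expression above relies on the `#&` idiom: `&` expands to the whole matched line, so every line containing "swap" is replaced with itself prefixed by `#`, i.e. commented out. A quick check against a throwaway file:

```shell
# Demonstrate the '#&' idiom on a sample fstab (not the real /etc/fstab)
printf '%s\n' 'UUID=abc / ext4 defaults 0 0' '/dev/sda2 swap swap defaults 0 0' > /tmp/fstab.demo
sed -i 's/.*swap.*/#&/' /tmp/fstab.demo
cat /tmp/fstab.demo
# The swap line becomes '#/dev/sda2 swap swap defaults 0 0'; the root line is untouched
```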
2.3 Disable SELinux
$ setenforce 0
$ sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/sysconfig/selinux
$ sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
$ sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/sysconfig/selinux
$ sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/selinux/config
2.4 Set the hostname, upgrade the kernel, and install Docker CE

Run the init.sh shell script below. It performs four tasks:

- sets the server hostname
- installs the dependencies Kubernetes needs
- upgrades the system kernel (the CentOS 7 kernel is upgraded to resolve Docker CE compatibility issues)
- installs Docker CE 19.03.6

Run the init.sh script on every machine, as in the example below.

PS: init.sh only supports CentOS, and it is safe to run repeatedly.
# Run on k8s-master1; the argument passed to init.sh becomes the server's hostname
$ chmod +x init.sh && ./init.sh k8s-master1
# After init.sh finishes, reboot the server
$ reboot
#!/usr/bin/env bash
function Check_linux_system(){
linux_version=`cat /etc/redhat-release`
if [[ ${linux_version} =~ "CentOS" ]];then
echo -e "\033[32;32m System is ${linux_version} \033[0m \n"
else
echo -e "\033[32;32m System is not CentOS; this script only supports CentOS \033[0m \n"
exit 1
fi
}
function Set_hostname(){
if [ -n "$HostName" ];then
grep $HostName /etc/hostname && echo -e "\033[32;32m Hostname already set; skipping the hostname step \033[0m \n" && return
case $HostName in
help)
echo -e "\033[32;32m Usage: bash init.sh <hostname> \033[0m \n"
exit 1
;;
*)
hostname $HostName
echo "$HostName" > /etc/hostname
echo "`ifconfig eth0 | grep inet | awk '{print $2}'` $HostName" >> /etc/hosts
;;
esac
else
echo -e "\033[32;32m No hostname given; usage: bash init.sh <hostname> \033[0m \n"
exit 1
fi
}
function Install_depend_environment(){
rpm -qa | grep nfs-utils &> /dev/null && echo -e "\033[32;32m Dependencies already installed; skipping this step \033[0m \n" && return
yum install -y nfs-utils curl yum-utils device-mapper-persistent-data lvm2 net-tools conntrack-tools wget vim ntpdate libseccomp libtool-ltdl telnet
echo -e "\033[32;32m Upgrading the CentOS 7 kernel to 5.x to resolve Docker CE compatibility issues \033[0m \n"
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org && \
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm && \
yum --disablerepo=\* --enablerepo=elrepo-kernel repolist && \
yum --disablerepo=\* --enablerepo=elrepo-kernel install -y kernel-ml.x86_64 && \
yum remove -y kernel-tools-libs.x86_64 kernel-tools.x86_64 && \
yum --disablerepo=\* --enablerepo=elrepo-kernel install -y kernel-ml-tools.x86_64 && \
grub2-set-default 0
modprobe br_netfilter
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf
ls /proc/sys/net/bridge
}
function Install_docker(){
rpm -qa | grep docker && echo -e "\033[32;32m Docker already installed; skipping this step \033[0m \n" && return
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum makecache fast
yum -y install docker-ce-19.03.6 docker-ce-cli-19.03.6
systemctl enable docker.service
systemctl start docker.service
systemctl stop docker.service
echo '{"registry-mirrors": ["https://4xr1qpsp.mirror.aliyuncs.com"], "log-opts": {"max-size":"500m", "max-file":"3"}}' > /etc/docker/daemon.json
systemctl daemon-reload
systemctl start docker
}
# Initialization order
HostName=$1
Check_linux_system && \
Set_hostname && \
Install_depend_environment && \
Install_docker
3. Kubernetes deployment

Deployment order:

- 1. Generate self-signed TLS certificates
- 2. Deploy the etcd cluster
- 3. Create the metrics-server certificate
- 4. Download the Kubernetes binaries
- 5. Create the node kubeconfig files
- 6. Configure and run the master components
- 7. Configure automatic kubelet certificate renewal and create the node authorization user
- 8. Configure and run the node components
- 9. Install the Calico network in IPIP mode
- 10. Deploy cluster CoreDNS
- 11. Deploy the cluster monitoring service Metrics Server
- 12. Deploy the Kubernetes Dashboard
3.1 Self-signed TLS certificates

On k8s-master1, install the certificate generation tool cfssl and generate the certificates.
# Create a directory for the SSL certificates
$ mkdir /data/ssl -p
# Download the cfssl binaries
$ wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
$ wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
$ wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
# Make them executable
$ chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
# Move them into /usr/local/bin
$ mv cfssl_linux-amd64 /usr/local/bin/cfssl
$ mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
$ mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
# Enter the certificate directory
$ cd /data/ssl/
# Create the certificate.sh script
$ vim certificate.sh
PS: the certificates are valid for 10 years (87600h).
cat > ca-config.json <<EOF
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"kubernetes": {
"expiry": "87600h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
EOF
cat > ca-csr.json <<EOF
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Beijing",
"ST": "Beijing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
#-----------------------
cat > server-csr.json <<EOF
{
"CN": "kubernetes",
"hosts": [
"127.0.0.1",
"192.168.0.216",
"192.168.0.217",
"192.168.0.218",
"10.10.0.1",
"lb.ypvip.com.cn",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
#-----------------------
cat > admin-csr.json <<EOF
{
"CN": "admin",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "system:masters",
"OU": "System"
}
]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
#-----------------------
cat > kube-proxy-csr.json <<EOF
{
"CN": "system:kube-proxy",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
Edit the following entries in certificate.sh to match your environment:
"192.168.0.216",
"192.168.0.217",
"192.168.0.218",
"10.10.0.1",
"lb.ypvip.com.cn",
After editing the script, run it:

$ bash certificate.sh
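The 87600h expiry used in ca-config.json works out to the 10 years mentioned above:

```shell
# 87600 hours / 24 hours per day / 365 days per year = 10 years
echo $(( 87600 / 24 / 365 ))   # → 10
```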
3.2 Deploy the etcd cluster

Work on k8s-master1 first; the binaries are then copied to k8s-master2 and k8s-master3.

Binary package download: https://github.com/etcd-io/etcd/releases/download/v3.4.7/etcd-v3.4.7-linux-amd64.tar.gz
# Create a directory for the etcd files
$ mkdir /data/etcd/
# Create the k8s cluster configuration directories
$ mkdir /opt/kubernetes/{bin,cfg,ssl} -p
# Download the etcd binary package and place the binaries in /opt/kubernetes/bin/
$ cd /data/etcd/
$ wget https://github.com/etcd-io/etcd/releases/download/v3.4.7/etcd-v3.4.7-linux-amd64.tar.gz
$ tar zxvf etcd-v3.4.7-linux-amd64.tar.gz
$ cd etcd-v3.4.7-linux-amd64
$ cp -a etcd etcdctl /opt/kubernetes/bin/
# Add /opt/kubernetes/bin to PATH
$ echo 'export PATH=$PATH:/opt/kubernetes/bin' >> /etc/profile
$ source /etc/profile
Log in to k8s-master2 and k8s-master3 and run:
# Create the k8s cluster configuration directories
$ mkdir /data/etcd
$ mkdir /opt/kubernetes/{bin,cfg,ssl} -p
# Add /opt/kubernetes/bin to PATH
$ echo 'export PATH=$PATH:/opt/kubernetes/bin' >> /etc/profile
$ source /etc/profile
Log in to k8s-master1 and run:
# Enter the cluster certificate directory
$ cd /data/ssl
# Copy the certificates into /opt/kubernetes/ssl/ on k8s-master1
$ cp ca*pem server*pem /opt/kubernetes/ssl/
# Copy the etcd binaries and certificates to k8s-master2 and k8s-master3
$ scp -r /opt/kubernetes/* root@k8s-master2:/opt/kubernetes
$ scp -r /opt/kubernetes/* root@k8s-master3:/opt/kubernetes
$ cd /data/etcd
# Write the script that generates the etcd configuration
$ vim etcd.sh
#!/bin/bash
ETCD_NAME=${1:-"etcd01"}
ETCD_IP=${2:-"127.0.0.1"}
ETCD_CLUSTER=${3:-"etcd01=https://127.0.0.1:2379"}
cat <<EOF >/opt/kubernetes/cfg/etcd.yml
name: ${ETCD_NAME}
data-dir: /var/lib/etcd/default.etcd
listen-peer-urls: https://${ETCD_IP}:2380
listen-client-urls: https://${ETCD_IP}:2379,https://127.0.0.1:2379
advertise-client-urls: https://${ETCD_IP}:2379
initial-advertise-peer-urls: https://${ETCD_IP}:2380
initial-cluster: ${ETCD_CLUSTER}
initial-cluster-token: etcd-cluster
initial-cluster-state: new
client-transport-security:
  cert-file: /opt/kubernetes/ssl/server.pem
  key-file: /opt/kubernetes/ssl/server-key.pem
  client-cert-auth: false
  trusted-ca-file: /opt/kubernetes/ssl/ca.pem
  auto-tls: false
peer-transport-security:
  cert-file: /opt/kubernetes/ssl/server.pem
  key-file: /opt/kubernetes/ssl/server-key.pem
  client-cert-auth: false
  trusted-ca-file: /opt/kubernetes/ssl/ca.pem
  auto-tls: false
debug: false
logger: zap
log-outputs: [stderr]
EOF
cat <<EOF >/usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
Documentation=https://github.com/etcd-io/etcd
Conflicts=etcd.service
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
LimitNOFILE=65536
Restart=on-failure
RestartSec=5s
TimeoutStartSec=0
ExecStart=/opt/kubernetes/bin/etcd --config-file=/opt/kubernetes/cfg/etcd.yml
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable etcd
systemctl restart etcd
# Run etcd.sh to generate the configuration and start etcd
$ chmod +x etcd.sh
$ ./etcd.sh etcd01 192.168.0.216 etcd01=https://192.168.0.216:2380,etcd02=https://192.168.0.217:2380,etcd03=https://192.168.0.218:2380
# Check that etcd started correctly
$ ps -ef | grep etcd
$ netstat -ntplu | grep etcd
tcp 0 0 192.168.0.216:2379 0.0.0.0:* LISTEN 1558/etcd
tcp 0 0 127.0.0.1:2379 0.0.0.0:* LISTEN 1558/etcd
tcp 0 0 192.168.0.216:2380 0.0.0.0:* LISTEN 1558/etcd
# Copy the etcd.sh script to k8s-master2 and k8s-master3
$ scp /data/etcd/etcd.sh root@k8s-master2:/data/etcd/
$ scp /data/etcd/etcd.sh root@k8s-master3:/data/etcd/
Log in to k8s-master2 and run:
# Run etcd.sh to generate the configuration and start etcd
$ chmod +x etcd.sh
$ ./etcd.sh etcd02 192.168.0.217 etcd01=https://192.168.0.216:2380,etcd02=https://192.168.0.217:2380,etcd03=https://192.168.0.218:2380
# Check that etcd started correctly
$ ps -ef | grep etcd
$ netstat -ntplu | grep etcd
Log in to k8s-master3 and run:
# Run etcd.sh to generate the configuration and start etcd
$ chmod +x etcd.sh
$ ./etcd.sh etcd03 192.168.0.218 etcd01=https://192.168.0.216:2380,etcd02=https://192.168.0.217:2380,etcd03=https://192.168.0.218:2380
# Check that etcd started correctly
$ ps -ef | grep etcd
$ netstat -ntplu | grep etcd
# On any master, check that the etcd cluster is healthy
$ ETCDCTL_API=3 etcdctl --write-out=table \
--cacert=/opt/kubernetes/ssl/ca.pem --cert=/opt/kubernetes/ssl/server.pem --key=/opt/kubernetes/ssl/server-key.pem \
--endpoints=https://192.168.0.216:2379,https://192.168.0.217:2379,https://192.168.0.218:2379 endpoint health
+---------------------------------+--------+-------------+-------+
| ENDPOINT | HEALTH | TOOK | ERROR |
+---------------------------------+--------+-------------+-------+
| https://192.168.0.216:2379 | true | 38.721248ms | |
| https://192.168.0.217:2379 | true | 38.621248ms | |
| https://192.168.0.218:2379 | true | 38.821248ms | |
+---------------------------------+--------+-------------+-------+
3.3 Create the metrics-server certificate

Create the certificate used by metrics-server.

Log in to k8s-master1 and run:

$ cd /data/ssl/
# Note: the CN must be exactly "system:metrics-server", because this name is referenced later when granting permissions; otherwise requests are rejected as anonymous access
$ cat > metrics-server-csr.json <<EOF
{
"CN": "system:metrics-server",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "k8s",
"OU": "system"
}
]
}
EOF
Generate the metrics-server certificate and private key:

# Generate the certificate (note: ca-config.json lives in /data/ssl, not /opt/kubernetes/ssl)
$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes metrics-server-csr.json | cfssljson -bare metrics-server
# Copy to the /opt/kubernetes/ssl directory
$ cp metrics-server-key.pem metrics-server.pem /opt/kubernetes/ssl/
# Copy to k8s-master2 and k8s-master3
$ scp metrics-server-key.pem metrics-server.pem root@k8s-master2:/opt/kubernetes/ssl/
$ scp metrics-server-key.pem metrics-server.pem root@k8s-master3:/opt/kubernetes/ssl/
3.4 Download the Kubernetes binaries

Log in to k8s-master1 and run:

v1.18 download page: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md
# Create a directory for the k8s binary packages
$ mkdir /data/k8s-package
$ cd /data/k8s-package
# Download the v1.18.2 server binary package
# The author also mirrored the package on a CDN: https://cdm.yp14.cn/k8s-package/kubernetes-server-v1.18.2-linux-amd64.tar.gz
$ wget https://dl.k8s.io/v1.18.2/kubernetes-server-linux-amd64.tar.gz
$ tar xf kubernetes-server-linux-amd64.tar.gz
The master role needs:
- kubectl
- kube-scheduler
- kube-apiserver
- kube-controller-manager
The node role needs:
- kubelet
- kube-proxy
PS: in this article the masters also act as nodes, so the kubelet and kube-proxy binaries are needed as well.
# Enter the bin directory of the extracted package
$ cd /data/k8s-package/kubernetes/server/bin
# Copy the binaries into /opt/kubernetes/bin
$ cp -a kube-apiserver kube-controller-manager kube-scheduler kubectl kubelet kube-proxy /opt/kubernetes/bin
# Copy the binaries into /opt/kubernetes/bin on k8s-master2 and k8s-master3
$ scp kube-apiserver kube-controller-manager kube-scheduler kubectl kubelet kube-proxy root@k8s-master2:/opt/kubernetes/bin/
$ scp kube-apiserver kube-controller-manager kube-scheduler kubectl kubelet kube-proxy root@k8s-master3:/opt/kubernetes/bin/
3.5 Create the node kubeconfig files

Log in to k8s-master1 and run the steps below, which:

- create the TLS bootstrapping token
- create the kubelet kubeconfig
- create the kube-proxy kubeconfig
$ cd /data/ssl/
# Edit line 10 of the script and set KUBE_APISERVER to your load balancer address
$ vim kubeconfig.sh
# Create the TLS bootstrapping token
export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
#----------------------
# Create the kubelet bootstrapping kubeconfig
export KUBE_APISERVER="https://lb.ypvip.com.cn:6443"
# Set cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=./ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=bootstrap.kubeconfig
# Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
--token=${BOOTSTRAP_TOKEN} \
--kubeconfig=bootstrap.kubeconfig
# Set context parameters
kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=bootstrap.kubeconfig
# Use the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
#----------------------
# Create the kube-proxy kubeconfig
kubectl config set-cluster kubernetes \
--certificate-authority=./ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
--client-certificate=./kube-proxy.pem \
--client-key=./kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
# Generate the kubeconfig files
$ sh kubeconfig.sh
# The directory now contains:
kubeconfig.sh kube-proxy-csr.json kube-proxy.kubeconfig
kube-proxy.csr kube-proxy-key.pem kube-proxy.pem bootstrap.kubeconfig
# Copy the *.kubeconfig files to /opt/kubernetes/cfg
$ cp *kubeconfig /opt/kubernetes/cfg
# Copy them to k8s-master2 and k8s-master3
$ scp *kubeconfig root@k8s-master2:/opt/kubernetes/cfg
$ scp *kubeconfig root@k8s-master3:/opt/kubernetes/cfg
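The bootstrap token that kubeconfig.sh writes into token.csv is 16 random bytes rendered as hex. The same pipeline can be run standalone to see its shape:

```shell
# 16 random bytes → 32 hex characters (od prints 4-byte words; tr strips the spaces)
TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "${#TOKEN}"   # → 32
```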
3.6 Configure and run the master components

Log in to k8s-master1, k8s-master2, and k8s-master3 and run:

# Create /data/k8s-master to hold the master configuration scripts
$ mkdir /data/k8s-master
Log in to k8s-master1:

$ cd /data/k8s-master
# Create the script that generates the kube-apiserver configuration
$ vim apiserver.sh
#!/bin/bash
MASTER_ADDRESS=${1:-"192.168.0.216"}
ETCD_SERVERS=${2:-"http://127.0.0.1:2379"}
cat <<EOF >/opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/var/log/kubernetes \\
--etcd-servers=${ETCD_SERVERS} \\
--bind-address=0.0.0.0 \\
--secure-port=6443 \\
--advertise-address=${MASTER_ADDRESS} \\
--allow-privileged=true \\
--service-cluster-ip-range=10.10.0.0/16 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--kubelet-https=true \\
--enable-bootstrap-token-auth=true \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-50000 \\
--kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \\
--kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/opt/kubernetes/ssl/ca.pem \\
--etcd-certfile=/opt/kubernetes/ssl/server.pem \\
--etcd-keyfile=/opt/kubernetes/ssl/server-key.pem \\
--requestheader-client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--requestheader-extra-headers-prefix=X-Remote-Extra- \\
--requestheader-group-headers=X-Remote-Group \\
--requestheader-username-headers=X-Remote-User \\
--proxy-client-cert-file=/opt/kubernetes/ssl/metrics-server.pem \\
--proxy-client-key-file=/opt/kubernetes/ssl/metrics-server-key.pem \\
--runtime-config=api/all=true \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-truncate-enabled=true \\
--audit-log-path=/var/log/kubernetes/k8s-audit.log"
EOF
cat <<EOF >/usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver
# Create the script that generates the kube-controller-manager configuration
$ vim controller-manager.sh
#!/bin/bash
MASTER_ADDRESS=${1:-"127.0.0.1"}
cat <<EOF >/opt/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \\
--v=2 \\
--master=${MASTER_ADDRESS}:8080 \\
--leader-elect=true \\
--bind-address=0.0.0.0 \\
--service-cluster-ip-range=10.10.0.0/16 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--experimental-cluster-signing-duration=87600h0m0s \\
--feature-gates=RotateKubeletServerCertificate=true \\
--feature-gates=RotateKubeletClientCertificate=true \\
--allocate-node-cidrs=true \\
--cluster-cidr=10.20.0.0/16 \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem"
EOF
cat <<EOF >/usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager
# Create the script that generates the kube-scheduler configuration
$ vim scheduler.sh
#!/bin/bash
MASTER_ADDRESS=${1:-"127.0.0.1"}
cat <<EOF >/opt/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true \\
--v=2 \\
--master=${MASTER_ADDRESS}:8080 \\
--address=0.0.0.0 \\
--leader-elect"
EOF
cat <<EOF >/usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler
# Make the scripts executable
$ chmod +x *.sh
$ cp /data/ssl/token.csv /opt/kubernetes/cfg/
# Copy token.csv and the master scripts to k8s-master2 and k8s-master3
$ scp /data/ssl/token.csv root@k8s-master2:/opt/kubernetes/cfg
$ scp /data/ssl/token.csv root@k8s-master3:/opt/kubernetes/cfg
$ scp apiserver.sh controller-manager.sh scheduler.sh root@k8s-master2:/data/k8s-master
$ scp apiserver.sh controller-manager.sh scheduler.sh root@k8s-master3:/data/k8s-master
# Generate the master configuration files and start the services
$ ./apiserver.sh 192.168.0.216 https://192.168.0.216:2379,https://192.168.0.217:2379,https://192.168.0.218:2379
$ ./controller-manager.sh 127.0.0.1
$ ./scheduler.sh 127.0.0.1
# Check that the three master services are running
$ ps -ef | grep kube
$ netstat -ntpl | grep kube-
Log in to k8s-master2 and run:
$ cd /data/k8s-master
# Generate the master configuration files and start the services
$ ./apiserver.sh 192.168.0.217 https://192.168.0.216:2379,https://192.168.0.217:2379,https://192.168.0.218:2379
$ ./controller-manager.sh 127.0.0.1
$ ./scheduler.sh 127.0.0.1
# Check that the three master services are running
$ ps -ef | grep kube
$ netstat -ntpl | grep kube-
Log in to k8s-master3 and run:
$ cd /data/k8s-master
# Generate the master configuration files and start the services
$ ./apiserver.sh 192.168.0.218 https://192.168.0.216:2379,https://192.168.0.217:2379,https://192.168.0.218:2379
$ ./controller-manager.sh 127.0.0.1
$ ./scheduler.sh 127.0.0.1
# Check that the three master services are running
$ ps -ef | grep kube
$ netstat -ntpl | grep kube-
# On any master, check the cluster health
$ kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-2 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
etcd-0 Healthy {"health":"true"}
3.7 Configure automatic kubelet certificate renewal and create the node authorization user

Log in to k8s-master1 and run:

Create the node authorization user kubelet-bootstrap:

$ kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
Create a ClusterRole that allows the relevant CSR requests to be approved automatically:

# Create a directory for the certificate-rotation manifests
$ mkdir -p ~/yaml/kubelet-certificate-rotating
$ cd ~/yaml/kubelet-certificate-rotating
$ vim tls-instructs-csr.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeserver
rules:
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests/selfnodeserver"]
  verbs: ["create"]
# Deploy
$ kubectl apply -f tls-instructs-csr.yaml
Automatically approve the first CSR submitted by the kubelet-bootstrap user during TLS bootstrapping:
$ kubectl create clusterrolebinding node-client-auto-approve-csr --clusterrole=system:certificates.k8s.io:certificatesigningrequests:nodeclient --user=kubelet-bootstrap
Automatically approve CSRs from the system:nodes group to renew the kubelet client certificate used to talk to the apiserver:
$ kubectl create clusterrolebinding node-client-auto-renew-crt --clusterrole=system:certificates.k8s.io:certificatesigningrequests:selfnodeclient --group=system:nodes
Automatically approve CSRs from the system:nodes group to renew the kubelet serving certificate for the 10250 API port:
$ kubectl create clusterrolebinding node-server-auto-renew-crt --clusterrole=system:certificates.k8s.io:certificatesigningrequests:selfnodeserver --group=system:nodes
3.8 配置Node組件并運(yùn)行
首先我們先了解下 kubelet
中 kubelet.kubeconfig
配置是如何生成?
kubelet.kubeconfig
配置是通過(guò) TLS Bootstrapping
機(jī)制生成,下面是生成的流程圖米愿。
Log in to k8s-master1, k8s-master2, and k8s-master3 and run:

# Create a directory for the node configuration scripts
$ mkdir /data/k8s-node
Log in to k8s-master1:

$ cd /data/k8s-node
# Create the script that generates the kubelet configuration
$ vim kubelet.sh
#!/bin/bash
DNS_SERVER_IP=${1:-"10.10.0.2"}
HOSTNAME=${2:-"`hostname`"}
CLUETERDOMAIN=${3:-"cluster.local"}
cat <<EOF >/opt/kubernetes/cfg/kubelet.conf
KUBELET_OPTS="--logtostderr=true \\
--v=2 \\
--hostname-override=${HOSTNAME} \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet-config.yml \\
--cert-dir=/opt/kubernetes/ssl \\
--network-plugin=cni \\
--cni-conf-dir=/etc/cni/net.d \\
--cni-bin-dir=/opt/cni/bin \\
--pod-infra-container-image=yangpeng2468/google_containers-pause-amd64:3.2"
EOF
cat <<EOF >/opt/kubernetes/cfg/kubelet-config.yml
kind: KubeletConfiguration            # config object type
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0                      # listen address
port: 10250                           # kubelet port
readOnlyPort: 10255                   # read-only port exposed by the kubelet
cgroupDriver: cgroupfs                # must match the driver reported by "docker info"
clusterDNS:
  - ${DNS_SERVER_IP}
clusterDomain: ${CLUETERDOMAIN}       # cluster domain
failSwapOn: false                     # do not refuse to start if swap is enabled
# Authentication
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem
# Authorization
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
# Node resource reservation: eviction thresholds
evictionHard:
  imagefs.available: 15%
  memory.available: 1G
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
# Image garbage-collection policy
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
# Certificate rotation
rotateCertificates: true              # rotate the kubelet client certificate
featureGates:
  RotateKubeletServerCertificate: true
  RotateKubeletClientCertificate: true
maxOpenFiles: 1000000
maxPods: 110
EOF
cat <<EOF >/usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
KillMode=process
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet
# Create the script that generates the kube-proxy configuration
$ vim proxy.sh
#!/bin/bash
HOSTNAME=${1:-"`hostname`"}
cat <<EOF >/opt/kubernetes/cfg/kube-proxy.conf
KUBE_PROXY_OPTS="--logtostderr=true \\
--v=2 \\
--config=/opt/kubernetes/cfg/kube-proxy-config.yml"
EOF
cat <<EOF >/opt/kubernetes/cfg/kube-proxy-config.yml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
address: 0.0.0.0                      # listen address
metricsBindAddress: 0.0.0.0:10249     # metrics endpoint, scraped by monitoring
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig   # kubeconfig to read
hostnameOverride: ${HOSTNAME}         # node name registered with k8s; must be unique
clusterCIDR: 10.20.0.0/16             # Pod CIDR; Service traffic from outside this range is masqueraded
mode: iptables                        # iptables mode
# To use IPVS mode instead:
#mode: ipvs
#ipvs:
#  scheduler: "rr"
#iptables:
#  masqueradeAll: true
EOF
cat <<EOF >/usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy
# Generate the node configuration files and start the services
$ ./kubelet.sh 10.10.0.2 k8s-master1 cluster.local
$ ./proxy.sh k8s-master1
# Check that the services are listening
$ netstat -ntpl | egrep "kubelet|kube-proxy"
# Copy kubelet.sh and proxy.sh to k8s-master2 and k8s-master3
$ scp kubelet.sh proxy.sh root@k8s-master2:/data/k8s-node
$ scp kubelet.sh proxy.sh root@k8s-master3:/data/k8s-node
Log in to k8s-master2 and run:
$ cd /data/k8s-node
# Generate the node configuration files and start the services
$ ./kubelet.sh 10.10.0.2 k8s-master2 cluster.local
$ ./proxy.sh k8s-master2
# Check that the services are listening
$ netstat -ntpl | egrep "kubelet|kube-proxy"
Log in to k8s-master3 and run:
$ cd /data/k8s-node
# Generate the node configuration files and start the services
$ ./kubelet.sh 10.10.0.2 k8s-master3 cluster.local
$ ./proxy.sh k8s-master3
# Check that the services are listening
$ netstat -ntpl | egrep "kubelet|kube-proxy"
# On any master, check whether the nodes registered successfully
$ kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master1 NotReady <none> 4d4h v1.18.2
k8s-master2 NotReady <none> 4d4h v1.18.2
k8s-master3 NotReady <none> 4d4h v1.18.2
The nodes above are in NotReady state because no network plugin has been installed yet; the network plugin is installed below.
Fix the inability to query pod logs (grant the apiserver access to the kubelet API):
$ vim ~/yaml/apiserver-to-kubelet-rbac.yml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubelet-api-admin
subjects:
- kind: User
  name: kubernetes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:kubelet-api-admin
  apiGroup: rbac.authorization.k8s.io
# Apply
$ kubectl apply -f ~/yaml/apiserver-to-kubelet-rbac.yml
3.9 Install the Calico network in IPIP mode

Log in to k8s-master1 and run:

Download the Calico v3.14.0 manifest:

# Directory for the Calico manifests
$ mkdir -p ~/yaml/calico
$ cd ~/yaml/calico
# Note: this manifest uses the self-hosted etcd cluster as the datastore
$ curl https://docs.projectcalico.org/manifests/calico-etcd.yaml -O

The following changes are needed in calico-etcd.yaml:
Secret changes:

apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: calico-etcd-secrets
  namespace: kube-system
data:
  etcd-key: (cat /opt/kubernetes/ssl/server-key.pem | base64 -w 0)   # paste the command output here
  etcd-cert: (cat /opt/kubernetes/ssl/server.pem | base64 -w 0)      # paste the command output here
  etcd-ca: (cat /opt/kubernetes/ssl/ca.pem | base64 -w 0)            # paste the command output here
ConfigMap changes:

kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  etcd_endpoints: "https://192.168.0.216:2379,https://192.168.0.217:2379,https://192.168.0.218:2379"
  etcd_ca: "/calico-secrets/etcd-ca"
  etcd_cert: "/calico-secrets/etcd-cert"
  etcd_key: "/calico-secrets/etcd-key"
The main ConfigMap parameters are:

- etcd_endpoints: Calico stores network topology and state in etcd; this parameter sets the etcd endpoints. You can reuse the etcd cluster used by the Kubernetes masters or run a separate one.
- calico_backend: the Calico backend, bird by default.
- cni_network_config: a CNI-compliant network configuration, where type=calico tells the kubelet to look for the calico executable in CNI_PATH (default /opt/cni/bin) to allocate container IP addresses.
- If etcd uses TLS authentication, the corresponding ca, cert, and key files must also be specified.
Change the Pod IP range (the default is 192.168.0.0/16):

- name: CALICO_IPV4POOL_CIDR
  value: "10.20.0.0/16"
Configure the interface auto-detection rules

Add the detection rules to the env section of the calico-node DaemonSet:

# IPv4 interface auto-detection rule
- name: IP_AUTODETECTION_METHOD
  value: "interface=eth.*"
# IPv6 interface auto-detection rule
- name: IP6_AUTODETECTION_METHOD
  value: "interface=eth.*"
Set the Calico mode

# Enable IPIP
- name: CALICO_IPV4POOL_IPIP
  value: "Always"

Calico has two network modes: BGP and IPIP.

- IPIP mode (CALICO_IPV4POOL_IPIP="Always") builds a tunnel between the routes of the nodes and connects the two networks through it; with IPIP enabled, Calico creates a virtual interface named tunl0 on every node.
- BGP mode: set CALICO_IPV4POOL_IPIP="Off".
Troubleshooting

Error: [ERROR][8] startup/startup.go 146: failed to query kubeadm's config map error=Get https://10.10.0.1:443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=2s: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

Cause: the worker node cannot reach the apiserver. Check the Calico configuration: the apiserver IP and port must be set explicitly; if they are not, Calico falls back to the default service address and port 443. The relevant fields are KUBERNETES_SERVICE_HOST, KUBERNETES_SERVICE_PORT, and KUBERNETES_SERVICE_PORT_HTTPS.

Fix: add the following environment variables to the calico-node DaemonSet env section:

- name: KUBERNETES_SERVICE_HOST
  value: "lb.ypvip.com.cn"
- name: KUBERNETES_SERVICE_PORT
  value: "6443"
- name: KUBERNETES_SERVICE_PORT_HTTPS
  value: "6443"
After editing calico-etcd.yaml, deploy it:

# Deploy
$ kubectl apply -f calico-etcd.yaml
# Check the Calico pods
$ kubectl get pods -n kube-system | grep calico
# Check the nodes again; they should now be Ready
$ kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master1 Ready <none> 4d4h v1.18.2
k8s-master2 Ready <none> 4d4h v1.18.2
k8s-master3 Ready <none> 4d4h v1.18.2
3.10 Deploy cluster CoreDNS

Log in to k8s-master1 and run:

deploy.sh is a convenience script that generates the CoreDNS yaml manifest.
# Install the jq dependency
$ yum install jq -y
$ cd ~/yaml
$ mkdir coredns
$ cd coredns
# Clone the CoreDNS deployment project
$ git clone https://github.com/coredns/deployment.git
$ cd deployment/kubernetes
By default, deploy.sh reads CLUSTER_DNS_IP from the cluster IP of the existing kube-dns Service; since kube-dns is not deployed here, hard-code a cluster IP instead (around line 111 of deploy.sh):
111 if [[ -z $CLUSTER_DNS_IP ]]; then
112 # Default IP to kube-dns IP
113 # CLUSTER_DNS_IP=$(kubectl get service --namespace kube-system kube-dns -o jsonpath="{.spec.clusterIP}")
114 CLUSTER_DNS_IP=10.10.0.2
# Preview the generated manifest; nothing is deployed yet
$ ./deploy.sh
# Deploy
$ ./deploy.sh | kubectl apply -f -
# Check CoreDNS
$ kubectl get svc,pods -n kube-system | grep coredns
Test CoreDNS resolution

# Create a busybox pod
$ vim busybox.yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox:1.28.4
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
# Deploy
$ kubectl apply -f busybox.yaml
# Test resolution; the output below shows it working
$ kubectl exec -i busybox -n default -- nslookup kubernetes
Server: 10.10.0.2
Address 1: 10.10.0.2 kube-dns.kube-system.svc.cluster.local
Name: kubernetes
Address 1: 10.10.0.1 kubernetes.default.svc.cluster.local
3.11 Deploy the cluster monitoring service Metrics Server

Log in to k8s-master1 and run:
$ cd ~/yaml
# Clone the v0.3.6 tag
$ git clone https://github.com/kubernetes-sigs/metrics-server.git -b v0.3.6
$ cd metrics-server/deploy/1.8+
Only metrics-server-deployment.yaml needs to be modified.

# The diff below shows the changes
$ git diff metrics-server-deployment.yaml
diff --git a/deploy/1.8+/metrics-server-deployment.yaml b/deploy/1.8+/metrics-server-deployment.yaml
index 2393e75..2139e4a 100644
--- a/deploy/1.8+/metrics-server-deployment.yaml
+++ b/deploy/1.8+/metrics-server-deployment.yaml
@@ -29,8 +29,19 @@ spec:
emptyDir: {}
containers:
- name: metrics-server
- image: k8s.gcr.io/metrics-server-amd64:v0.3.6
- imagePullPolicy: Always
+ image: yangpeng2468/metrics-server-amd64:v0.3.6
+ imagePullPolicy: IfNotPresent
+ resources:
+ limits:
+ cpu: 400m
+ memory: 1024Mi
+ requests:
+ cpu: 50m
+ memory: 50Mi
+ command:
+ - /metrics-server
+ - --kubelet-insecure-tls
+ - --kubelet-preferred-address-types=InternalIP
volumeMounts:
- name: tmp-dir
mountPath: /tmp
# Deploy
$ kubectl apply -f .
# Verify
$ kubectl top node
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
k8s-master1 72m 7% 1002Mi 53%
k8s-master2 121m 3% 1852Mi 12%
k8s-master3 300m 3% 1852Mi 20%
# Memory units: Mi = 1024*1024 bytes, M = 1000*1000 bytes
# CPU units: 1 core = 1000m, so 250m = 1/4 core
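The unit notes above can be checked directly, binary (Mi) versus decimal (M) megabytes:

```shell
# 1 Mi and 1 M expressed in bytes
echo $(( 1024 * 1024 ))   # 1 Mi → 1048576
echo $(( 1000 * 1000 ))   # 1 M  → 1000000
```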
3.12 Deploy the Kubernetes Dashboard

For the Dashboard deployment, see the separate article "K8S Dashboard 2.0 Deployment".

Conclusion

This Kubernetes v1.18.2 binary deployment was tested by the author without issues, and the article can be used directly for production deployments. It covers the deployment of all the Kubernetes components end to end.
References
- https://blog.51cto.com/1014810/2474723
- https://docs.projectcalico.org/getting-started/kubernetes/installation/config-options#customizing-application-layer-policy-manifests
- https://docs.projectcalico.org/reference/node/configuration
- https://www.cnblogs.com/Christine-ting/p/12837403.html
Published by YP小站.