Deploying a Kubernetes Cluster with kubeadm
1. Introduction to kubeadm
kubeadm is a tool released by the upstream community for quickly deploying a Kubernetes cluster. It stands up a cluster with two commands:
#1. Create the master node
kubeadm init
#2. Join worker nodes to the cluster
kubeadm join [master IP:port]
1.1. Installation Requirements
- One or more machines running CentOS 7.x x86_64
- Hardware: 2+ CPU cores, 4 GB+ RAM, 30 GB+ disk
- Full network connectivity between all nodes
- Outbound internet access
- Swap disabled
- Preflight failures can be skipped with "--ignore-preflight-errors=...", but at least 2 CPU cores are recommended for a k8s deployment (see the sketch below)
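For example, on an under-sized lab VM the failing checks can be skipped by name (a sketch; NumCPU is the label kubeadm prints when the 2-CPU check fails, and "all" skips every check):
#Run only the preflight checks to see what would fail
kubeadm init phase preflight
#Skip a named check, or all of them, on a lab machine; not recommended for production
kubeadm init --ignore-preflight-errors=NumCPU
kubeadm init --ignore-preflight-errors=all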
1.2. Installation Goals
- Install Docker, kubeadm, kubelet, and kubectl on all nodes
- Deploy the Kubernetes masters
- Deploy the container network plugin
- Deploy the Kubernetes worker nodes and join them to the cluster
- Deploy the Dashboard web UI to view Kubernetes resources visually
1.3. Installation Plan
Role | IP | Components |
---|---|---|
VIP | 172.30.2.100 | VIP used for the kubeadm initialization of the masters |
k8s-master1 | 172.30.2.101 | docker,kubeadm,kubelet,kubectl |
k8s-master2 | 172.30.2.102 | docker,kubeadm,kubelet,kubectl |
k8s-master3 | 172.30.2.103 | docker,kubeadm,kubelet,kubectl |
k8s-worker1 | 172.30.2.201 | docker,kubeadm,kubelet,kubectl |
k8s-worker2 | 172.30.2.202 | docker,kubeadm,kubelet,kubectl |
2. Kubernetes Cluster Installation
2.1. OS Preparation on All Nodes
#1. Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
#2. Disable SELinux
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config # permanent
setenforce 0 # temporary
#3. Disable swap
swapoff -a # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab # permanent
# or
sed -ri '/.*swap.*/d' /etc/fstab
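Quick check that swap is really off (every Swap column should read 0):
free -h | grep -i swap
#Swap:            0B          0B          0B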
#4. DNS settings (adjust for your environment)
cat >> /etc/resolv.conf << EOF
nameserver 114.114.114.114
nameserver 8.8.8.8
EOF
#5. Pass bridged IPv4 traffic to iptables chains:
# Before bridging, confirm the br_netfilter module is loaded:
lsmod | grep br_netfilter
modprobe br_netfilter
# Then run:
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system # apply
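Verify the kernel parameters took effect:
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
#net.bridge.bridge-nf-call-iptables = 1
#net.ipv4.ip_forward = 1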
#6. Time synchronization
yum install -y ntpdate wget
ntpdate time.windows.com
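ntpdate performs only a one-shot sync; as a hedged alternative for continuous timekeeping, chrony (the CentOS 7 default) can be enabled instead:
yum install -y chrony
systemctl enable --now chronyd
chronyc sources -v # confirm the node is tracking an upstream server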
2.2. Operations on the Master and Worker Nodes
#1. Set the hostname on every node
hostnamectl set-hostname <hostname>
#2. Add hosts entries on the master nodes
cat >> /etc/hosts << EOF
172.30.2.101 k8s-master1
172.30.2.102 k8s-master2
172.30.2.103 k8s-master3
172.30.2.201 k8s-worker1
172.30.2.202 k8s-worker2
EOF
#3. Create an LVM volume on the worker nodes
pvcreate /dev/sdb
vgcreate vg_node /dev/sdb
lvcreate -n lv_node -l 100%FREE vg_node
mkfs.xfs /dev/vg_node/lv_node
mount /dev/mapper/vg_node-lv_node /opt
sed -i '$a /dev/mapper/vg_node-lv_node /opt xfs defaults 0 0' /etc/fstab
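Sanity-check the new volume before continuing (a sketch; lv_node should appear under sdb and /opt should show the new xfs filesystem):
lsblk /dev/sdb
df -h /opt
mount -a # re-reads /etc/fstab; errors here mean the new entry is malformed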
2.3. Install docker/kubeadm/kubelet/kubectl on All Nodes
Kubernetes packages are installed from yum at the default (latest) version.
Kubernetes needs a container runtime (CRI); this guide uses the Docker runtime.
Container runtime installation reference: https://kubernetes.io/zh/docs/setup/production-environment/container-runtimes/
#1. Configure the Docker and Kubernetes repositories
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum makecache fast # refresh the yum cache
#2. Install docker-ce, kubeadm, kubelet, and kubectl
# kubeadm pulls in the latest Kubernetes; the latest Kubernetes currently validates Docker versions up to 19.03
yum list docker-ce --showduplicates | sort -r # find the 19.03 packages
yum install -y containerd.io-1.2.13 docker-ce-19.03.11 docker-ce-cli-19.03.11 kubelet kubeadm kubectl kubernetes-cni
# Create the /etc/docker directory
sudo mkdir -p /etc/docker
#3. Configure the registry mirror and set the cgroup driver to systemd
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],
  "insecure-registries": ["172.30.2.254"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
#"insecure-registries": ["172.30.2.254"],非https訪問(wèn)倉(cāng)庫(kù)配置
# Create /etc/systemd/system/docker.service.d
sudo mkdir -p /etc/systemd/system/docker.service.d
#重啟 Docker
sudo systemctl daemon-reload && sudo systemctl restart docker&& sudo systemctl enable docker
#查看Cgroup驅(qū)動(dòng)是否為systemd
docker info | grep "Cgroup Driver"
# Cgroup Driver: systemd
#4.在worker節(jié)點(diǎn)修改Docker本地鏡像與容器的存儲(chǔ)位置的方法
#默認(rèn)/opt/data/ 是大容量磁盤
docker info | grep "Docker Root Dir"
systemctl stop docker
mkdir -p /opt/data
mv /var/lib/docker /opt/data/
ln -s /opt/data/docker /var/lib/docker
#5.重啟docker和kubelet設(shè)置開機(jī)啟動(dòng)
systemctl restart docker && systemctl enable --now kubelet
#6.查看版本
docker --version #查看版本
#Docker version 19.03.11, build 42e35e61f3
kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-13T13:25:59Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
2.4. Set Up High Availability Across the Master Nodes
HA for the masters essentially means putting a reverse proxy in front of all the kube-apiservers; an SLB or a standalone virtual server would also work. This guide deploys nginx (stream upstream) + keepalived on every master node to reverse-proxy kube-apiserver.
2.4.1. Enable IPVS for kube-proxy
# IPVS stands for IP Virtual Server
#1. Run the following on all master nodes
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
#2. Check that the IPVS modules loaded
lsmod | grep -e ip_vs -e nf_conntrack_ipv4
# ip_vs, ip_vs_rr, ip_vs_wrr, ip_vs_sh, and nf_conntrack_ipv4 should all be listed (see the kernel note below)
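Note: on kernels 4.19 and later, nf_conntrack_ipv4 was merged into nf_conntrack, so the modprobe above fails there. A hedged variant that covers both cases:
#Fall back to nf_conntrack on kernels >= 4.19
modprobe -- nf_conntrack_ipv4 2>/dev/null || modprobe -- nf_conntrack
lsmod | grep -e ip_vs -e nf_conntrack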
2.4.2. Deploy nginx and keepalived
#1. Install nginx and keepalived on all master nodes
yum -y install nginx keepalived
systemctl start keepalived && systemctl enable keepalived
systemctl start nginx && systemctl enable nginx
2.4.3. Configure the Nginx Upstream Reverse Proxy
#1. Configure nginx.conf on all master nodes
cat > /etc/nginx/nginx.conf <<EOF
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

stream {
    log_format proxy '\$remote_addr \$remote_port - [\$time_local] \$status \$protocol '
                     '"\$upstream_addr" "\$upstream_bytes_sent" "\$upstream_connect_time"' ;
    access_log /var/log/nginx/nginx-proxy.log proxy;

    # change these to your masters' IP addresses
    upstream kubernetes_lb {
        server 172.30.2.101:6443 weight=5 max_fails=3 fail_timeout=30s;
        server 172.30.2.102:6443 weight=5 max_fails=3 fail_timeout=30s;
        server 172.30.2.103:6443 weight=5 max_fails=3 fail_timeout=30s;
    }

    server {
        listen 7443;
        proxy_connect_timeout 30s;
        proxy_timeout 30s;
        proxy_pass kubernetes_lb;
    }
}
EOF
# On the other master nodes run:
scp 172.30.2.101:/etc/nginx/nginx.conf /etc/nginx/
#2. Check the nginx config syntax, then reload Nginx
nginx -t
#nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
#nginx: configuration file /etc/nginx/nginx.conf test is successful
nginx -s reload
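Confirm nginx is actually listening on 7443 on each master (the output shown is indicative):
ss -lntp | grep 7443
#LISTEN 0 511 0.0.0.0:7443 0.0.0.0:* users:(("nginx",pid=...,fd=...))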
2.4.4. Configure keepalived
#1. Configure keepalived.conf on all master nodes
cat > /etc/keepalived/keepalived.conf <<EOF
global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from root@k8s.com
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_1          # set a different router_id on each machine
}
vrrp_script chk_nginx {
    script "/etc/keepalived/nginx_check.sh"   ## path to the nginx health-check script
    interval 2               ## check interval in seconds
    weight -20               ## subtract 20 from the priority when the check fails
}
vrrp_instance VI_1 {
    state MASTER             # set to BACKUP on the other nodes
    interface ens32          # NIC name; change to match your host
    virtual_router_id 88
    advert_int 1
    priority 110             # set to 109 and 108 on the other nodes
    authentication {
        auth_type PASS
        auth_pass 1234abcd
    }
    track_script {
        chk_nginx            # run the nginx health check
    }
    virtual_ipaddress {
        172.30.2.100/22      # this is the virtual IP
    }
}
EOF
# Notes:
#1> change ens32 in "interface ens32" to the node's actual NIC name
#2> set router_id to LVS_1, LVS_2, LVS_3 on the three nodes respectively
#3> set "state" to MASTER, BACKUP, BACKUP on the three nodes respectively
#4> set "priority" to 110, 109, 108 on the three nodes respectively
#2. Create the nginx_check.sh script
cat > /etc/keepalived/nginx_check.sh <<EOF
#!/bin/bash
export LANG="en_US.UTF-8"
# If nginx has no pid file, try to restart it once
if [ ! -f "/run/nginx.pid" ]; then
    /usr/bin/systemctl restart nginx
    sleep 2
    # Still down: kill keepalived so the VIP fails over to another master
    if [ ! -f "/run/nginx.pid" ]; then
        /bin/kill -9 \$(head -n 1 /var/run/keepalived.pid)
    fi
fi
EOF
chmod a+x /etc/keepalived/nginx_check.sh
# On the other master nodes run:
scp 172.30.2.101:/etc/keepalived/keepalived.conf /etc/keepalived/
scp 172.30.2.101:/etc/keepalived/nginx_check.sh /etc/keepalived/
#3. Restart keepalived on all master nodes
systemctl restart keepalived
# watch the logs
journalctl -f -u keepalived
#4. From any node on the same network, verify the VIP answers
ping 172.30.2.100
#5. From any node on the same network, verify the VIP's port 7443 is reachable
ssh -v -p 7443 172.30.2.100
# this output means the port is reachable
debug1: Connection established.
The HA VIP is now in place; master initialization comes next.
2.5. kubeadm Initialization on master1
2.5.1. Generate the kubeadm-init.yaml File
#1. On master1, print the defaults
kubeadm config print init-defaults > kubeadm-init.yaml
#2. Edit kubeadm-init.yaml
cat > kubeadm-init.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.30.2.101   # this node's IP address
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master1                # this node's hostname
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
controlPlaneEndpoint: "172.30.2.100:7443"   # the kube-apiserver cluster endpoint, i.e. the VIP
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers   # k8s.gcr.io is blocked; use a domestic mirror
kind: ClusterConfiguration
kubernetesVersion: v1.20.0   # set to your actual Kubernetes version
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16    # add the pod network CIDR
scheduler: {}
---
# add the kube-proxy configuration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
EOF
#3. Pull the images first
kubeadm config images pull --config kubeadm-init.yaml
Pull the same images on the other master nodes with docker pull as well; the exact image list can be printed as shown below.
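To print the image list the config file requires (so the other masters know what to pull):
kubeadm config images list --config kubeadm-init.yaml
#registry.aliyuncs.com/google_containers/kube-apiserver:v1.20.0
#registry.aliyuncs.com/google_containers/kube-controller-manager:v1.20.0
#... (remaining images elided)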
#4. Run the kubeadm initialization
kubeadm init --config kubeadm-init.yaml
# initialization output (abridged):
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 172.30.2.100:7443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:dec79a611778ffabb70219c391901f3244e0204c1d2bd88e63e6efc4e9434350 \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.30.2.100:7443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:dec79a611778ffabb70219c391901f3244e0204c1d2bd88e63e6efc4e9434350
#1. Once kubeadm init completes, run the following locally
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=/etc/kubernetes/admin.conf
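At this point the control plane should also answer through the VIP; a quick sketch of a check (in a default kubeadm cluster the /version endpoint is readable anonymously, so curl -k should return version JSON):
kubectl cluster-info
curl -k https://172.30.2.100:7443/version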
#2. Join command for the master nodes (run it in section 2.6)
kubeadm join 172.30.2.100:7443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:dec79a611778ffabb70219c391901f3244e0204c1d2bd88e63e6efc4e9434350 \
--control-plane
#3. Join command for the worker nodes (run it in section 2.7)
kubeadm join 172.30.2.100:7443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:dec79a611778ffabb70219c391901f3244e0204c1d2bd88e63e6efc4e9434350
2.6. Join the Other Two Master Nodes
#1. Copy the certificates to master2 and master3
mkdir -p /etc/kubernetes/pki/etcd
scp -r 172.30.2.101:/etc/kubernetes/pki/ca.* /etc/kubernetes/pki/
scp -r 172.30.2.101:/etc/kubernetes/pki/sa.* /etc/kubernetes/pki/
scp -r 172.30.2.101:/etc/kubernetes/pki/front-proxy-ca.* /etc/kubernetes/pki/
scp -r 172.30.2.101:/etc/kubernetes/pki/etcd/ca.* /etc/kubernetes/pki/etcd/
scp -r 172.30.2.101:/etc/kubernetes/admin.conf /etc/kubernetes/
#2. Run the following on master2 and master3
kubeadm join 172.30.2.100:7443 --v=5 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:dec79a611778ffabb70219c391901f3244e0204c1d2bd88e63e6efc4e9434350 \
--control-plane \
--ignore-preflight-errors=all
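As an alternative to copying certificates by hand in step 1, kubeadm can distribute them itself: upload them (encrypted) into a Secret on master1, then join with --certificate-key. A sketch; the key value printed on your system will differ:
#On master1: upload the control-plane certs and note the printed certificate key
kubeadm init phase upload-certs --upload-certs
#On master2/master3: join using that key instead of scp-ing files
kubeadm join 172.30.2.100:7443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:dec79a611778ffabb70219c391901f3244e0204c1d2bd88e63e6efc4e9434350 \
    --control-plane --certificate-key <key-printed-above>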
#3. On master1, check pod and svc status; all pods should be Running
kubectl get pod,svc --all-namespaces -o wide
#4. Verify the deployment on any master node
kubectl get node
# output:
NAME STATUS ROLES AGE VERSION
k8s-master1 NotReady control-plane,master 28m v1.20.2
k8s-master2 NotReady control-plane,master 4m13s v1.20.2
k8s-master3 NotReady control-plane,master 30s v1.20.2
# NotReady is expected until the CNI network plugin is installed
2.7. Install the CNI Network Plugin
#1. Download on one of the master nodes
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# if the download fails:
yum provides dig # find the package that provides the dig command
yum install -y bind-utils
dig @<DNS-server-IP> raw.githubusercontent.com # resolve via an alternate DNS server
# put the resolved IP in /etc/hosts
#2. Apply the manifest
kubectl apply -f kube-flannel.yml
#3. Test CoreDNS from a throwaway pod
kubectl run -it --rm dns-test --image=busybox:1.28.4 -- sh
/# nslookup kubernetes
/# ping kubernetes
/# nslookup 163.com
/# ping 163.com
#4. Verify again; all masters should now be Ready
kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master1 Ready control-plane,master 37m v1.20.2
k8s-master2 Ready control-plane,master 13m v1.20.2
k8s-master3 Ready control-plane,master 9m59s v1.20.2
2.8. Join the Worker Nodes
#1. Run the following on all worker nodes
kubeadm join 172.30.2.100:7443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:dec79a611778ffabb70219c391901f3244e0204c1d2bd88e63e6efc4e9434350
# watch the kubelet logs via journalctl
journalctl -f -u kubelet
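The bootstrap token above expires after 24h (the ttl in kubeadm-init.yaml); if the join fails with an authentication error, print a fresh join command on any master:
kubeadm token create --print-join-command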
#2. Verify on any master node that the worker nodes joined
kubectl get node | grep worker
# output:
k8s-worker1 NotReady <none> 16s v1.20.2
k8s-worker2 NotReady <none> 10s v1.20.2
# a NotReady node means its kube-flannel and kube-proxy pods have not finished deploying; check with
kubectl -n kube-system get pods
# verify again; every node should now report Ready
kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master1 Ready control-plane,master 44m v1.20.2
k8s-master2 Ready control-plane,master 20m v1.20.2
k8s-master3 Ready control-plane,master 16m v1.20.2
k8s-worker1 Ready <none> 3m29s v1.20.2
k8s-worker2 Ready <none> 2m56s v1.20.2
The kubeadm cluster deployment is now complete.
2.9. Troubleshooting
# Problem 1: error execution phase kubelet-start: error uploading crisocket: timed out waiting for the condition (run with --v=5 or higher for the stack trace)
# Fix: reset the node and clean up, then re-run the join
kubeadm reset -f
docker rm -f $(docker ps -a -q )
rm -rf /var/lib/cni/
systemctl daemon-reload
systemctl restart kubelet
sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X
2.10. Removing a Worker Node
#1. Run on a master node
# evict the pods running on the node (drain also cordons it)
kubectl drain k8s-worker1 --ignore-daemonsets
# mark the node unschedulable
kubectl cordon k8s-worker1
# delete the node from the cluster
kubectl delete node k8s-worker1
#2. Run on the k8s-worker1 node
kubeadm reset -f
docker rm -f $(docker ps -a -q )
rm -rf /var/lib/cni/
systemctl daemon-reload
systemctl restart kubelet
sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X
ip link delete cni0
ip link delete flannel.1
3. Kubernetes Application Deployment
3.1. Deploy the Dashboard and Verify the Cluster
#1. Download the Dashboard yaml
# project home: https://github.com/kubernetes/dashboard
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.1.0/aio/deploy/recommended.yaml
#2. By default the Dashboard is only reachable from inside the cluster; change its Service to NodePort to expose it externally:
vim recommended.yaml
...
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort              # added
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001         # added
  selector:
    k8s-app: kubernetes-dashboard
---
...
kubectl apply -f recommended.yaml
#3. Verify
kubectl -n kubernetes-dashboard get pod,svc
# a Running pod means the deployment succeeded
#4. Browse to the Dashboard via any worker node IP
https://NodeIP:30001
#5. Create a service account and bind it to the default cluster-admin cluster role:
kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
#6. Log in to the Dashboard with the token printed above
https://NodeIP:30001
# the UI language can be changed in Settings
3.2. Using etcd 3.4.13
#1. Download the etcd release on master1
wget https://github.com/etcd-io/etcd/releases/download/v3.4.13/etcd-v3.4.13-linux-amd64.tar.gz
#2. Unpack etcd-v3.4.13-linux-amd64.tar.gz
tar -xzf etcd-v3.4.13-linux-amd64.tar.gz
cp etcd-v3.4.13-linux-amd64/etcdctl /usr/bin/
#3. Using etcdctl
#--- check cluster health
etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key --endpoints="https://172.30.2.101:2379,https://172.30.2.102:2379,https://172.30.2.103:2379" endpoint health
# persist the etcdctl environment variables (tee -a appends rather than overwriting ~/.bashrc)
cat <<EOF | sudo tee -a ~/.bashrc
export ETCDCTL_API=3
export ETCDCTL_DIAL_TIMEOUT=3s
export ETCDCTL_CACERT=/etc/kubernetes/pki/etcd/ca.crt
export ETCDCTL_CERT=/etc/kubernetes/pki/etcd/server.crt
export ETCDCTL_KEY=/etc/kubernetes/pki/etcd/server.key
EOF
source ~/.bashrc
#1> view the cluster status as a table
etcdctl --endpoints="https://172.30.2.101:2379" -w table endpoint --cluster status
#2> list all keys
etcdctl --endpoints="https://172.30.2.101:2379" --keys-only=true get --from-key ''
# or
etcdctl --endpoints="https://172.30.2.101:2379" --prefix --keys-only=true get /
#3> list keys under a given prefix
etcdctl --endpoints="https://172.30.2.101:2379" --prefix --keys-only=true get /registry/pods/
#4> print a specific key's value as JSON
etcdctl --endpoints="https://172.30.2.101:2379" --prefix --keys-only=false -w json get /registry/pods/kube-system/etcd-k8s-master1
# more etcdctl commands: https://github.com/etcd-io/etcd/tree/master/etcdctl
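With the same environment variables in place, etcdctl can also back up the cluster state (a sketch; /opt/backup is an arbitrary target path):
#Take a snapshot from one endpoint
mkdir -p /opt/backup
etcdctl --endpoints="https://172.30.2.101:2379" snapshot save /opt/backup/etcd-snapshot.db
#Inspect the snapshot
etcdctl snapshot status /opt/backup/etcd-snapshot.db -w table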
4. Kubernetes Plugin Deployment
4.1. Deploy the kubectl Tool on Windows
#1. Download the Windows kubectl binary
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.20.2/bin/windows/amd64/kubectl.exe
# copy kubectl.exe into d:\kubectlv1.20.2 on the Windows machine
# the latest stable version is listed at https://storage.googleapis.com/kubernetes-release/release/stable.txt
#2. Create the .kube directory
# on Windows, open cmd and go to the current user's directory
cd C:\Users\<your-username>
md .kube
# copy $HOME/.kube/config from a master node into the Windows .kube directory
#3. Add d:\kubectlv1.20.2 to the Windows PATH environment variable
#4. Make sure the Windows machine can reach the k8s cluster network, then run in cmd:
kubectl get pod,svc --all-namespaces
References:
- Adding new master and worker nodes: https://luyanan.com/article/info/19821386744192
- Scaling worker nodes: https://blog.csdn.net/liuyunshengsir/article/details/105149866
- Container runtimes: https://kubernetes.io/zh/docs/setup/production-environment/container-runtimes/