Building a Highly Available Kubernetes Cluster with kubeadm

Environment Preparation

Server Overview

I am using five CentOS 7.7 virtual machines here; their details are listed in the table below:

OS Version   IP Address        Role     CPU   Memory   Hostname
CentOS-7.7   192.168.243.138   master   >=2   >=2G     m1
CentOS-7.7   192.168.243.136   master   >=2   >=2G     m2
CentOS-7.7   192.168.243.141   master   >=2   >=2G     m3
CentOS-7.7   192.168.243.139   worker   >=2   >=2G     s1
CentOS-7.7   192.168.243.140   worker   >=2   >=2G     s2

Docker must be installed on all five machines in advance. The installation is straightforward and is not covered here; refer to the official Docker documentation.

System Settings (all nodes)

1. Each node must have a unique hostname, and all nodes must be able to reach one another by hostname. Set the hostname:

# Check the current hostname
$ hostname
# Change the hostname
$ hostnamectl set-hostname <your_hostname>

Configure /etc/hosts so that all nodes can reach one another by hostname:

$ vim /etc/hosts
192.168.243.138 m1
192.168.243.136 m2
192.168.243.141 m3
192.168.243.139 s1
192.168.243.140 s2
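
If you maintain these entries on m1, one possible way to push the same file to the other nodes (a sketch, assuming passwordless root SSH between the hosts) is a small scp loop:

# Copy /etc/hosts from m1 to the remaining nodes
$ for node in m2 m3 s1 s2; do scp /etc/hosts ${node}:/etc/hosts; done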

2. Install dependency packages:

# Update packages
$ yum update
# Install dependencies
$ yum install -y conntrack ipvsadm ipset jq sysstat curl iptables libseccomp

3. Disable the firewall and swap, and reset iptables:

# Stop and disable the firewall
$ systemctl stop firewalld && systemctl disable firewalld
# Reset iptables
$ iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && iptables -P FORWARD ACCEPT
# Disable swap
$ swapoff -a
$ sed -i '/swap/s/^\(.*\)$/#\1/g' /etc/fstab
# Disable SELinux (takes effect immediately; edit /etc/selinux/config to make it permanent)
$ setenforce 0
# Stop dnsmasq (otherwise Docker containers may be unable to resolve domain names)
$ service dnsmasq stop && systemctl disable dnsmasq
# Restart the Docker service
$ systemctl restart docker

4. Configure kernel parameters:

# Create the configuration file
$ cat > /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
EOF
# Apply the configuration
$ sysctl -p /etc/sysctl.d/kubernetes.conf
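
On a fresh CentOS 7 install the two net.bridge.* keys can fail to apply because the br_netfilter kernel module is not loaded yet. A minimal sketch to load it now and keep it loaded after reboots:

# Load the br_netfilter module required by the bridge-nf-call sysctls
$ modprobe br_netfilter
# Persist the module across reboots
$ echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf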

Install Required Tools (all nodes)

Tool overview:

  • kubeadm: the command used to bootstrap the cluster
  • kubelet: the component that runs on every machine in the cluster and manages the lifecycle of pods and containers
  • kubectl: the cluster management tool (optional; it only needs to be installed on the nodes used to control the cluster)

1. First, add the Kubernetes yum repository:

$ bash -c 'cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF'

2. Install the Kubernetes components:

$ yum install -y kubelet kubeadm kubectl
$ systemctl enable --now kubelet.service
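
The command above installs whatever version is newest in the repository. If you want the packages to match a specific release (for example the v1.19.0 images used later in this article), you can pin the versions explicitly; a sketch, assuming those package versions are available in the mirror:

# Pin kubelet/kubeadm/kubectl to a specific release
$ yum install -y kubelet-1.19.0 kubeadm-1.19.0 kubectl-1.19.0
$ systemctl enable --now kubelet.service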

Configuring kubectl Command Completion

kubectl is the command-line tool for interacting with a Kubernetes cluster; you can hardly operate Kubernetes without it, and it supports a large number of commands. Fortunately kubectl provides shell completion, and kubectl completion -h shows setup examples for each platform. Taking Linux as the example, here is how to set it up; once the following steps are done you can complete commands with the Tab key:

[root@m1 ~]# yum install bash-completion -y
[root@m1 ~]# source /usr/share/bash-completion/bash_completion
[root@m1 ~]# source <(kubectl completion bash)
[root@m1 ~]# mkdir -p ~/.kube
[root@m1 ~]# kubectl completion bash > ~/.kube/completion.bash.inc
[root@m1 ~]# printf "
# Kubectl shell completion
source '$HOME/.kube/completion.bash.inc'
" >> $HOME/.bash_profile
[root@m1 ~]# source $HOME/.bash_profile

Deploying the HA Cluster

Deploying keepalived for apiserver High Availability (any two master nodes)

1. Run the following command on two master nodes to install keepalived (one MASTER, one BACKUP); I chose to install it on m1 and m2:

$ yum install -y keepalived

2. On both machines, create the directory that will hold the keepalived configuration files:

$ mkdir -p /etc/keepalived

3. On m1 (the MASTER role), create the configuration file as follows:

[root@m1 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
 router_id keepalive-master
}

vrrp_script check_apiserver {
 # Path to the health check script
 script "/etc/keepalived/check-apiserver.sh"
 # Check interval in seconds
 interval 3
 # Subtract 2 from the priority on failure
 weight -2
}

vrrp_instance VI-kube-master {
   state MASTER  # Node role
   interface ens32  # Network interface name
   virtual_router_id 68
   priority 100
   dont_track_primary
   advert_int 3
   virtual_ipaddress {
     # The custom virtual IP
     192.168.243.100
   }
   track_script {
       check_apiserver
   }
}

4. On m2 (the BACKUP role), create the configuration file as follows:

[root@m2 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
 router_id keepalive-backup
}

vrrp_script check_apiserver {
 script "/etc/keepalived/check-apiserver.sh"
 interval 3
 weight -2
}

vrrp_instance VI-kube-master {
   state BACKUP
   interface ens32
   virtual_router_id 68
   priority 99
   dont_track_primary
   advert_int 3
   virtual_ipaddress {
     192.168.243.100
   }
   track_script {
       check_apiserver
   }
}

5. On both m1 and m2, create the keepalived health check script. The script is fairly simple and you can refine it to suit your needs; a slightly more thorough sketch follows the example:

$ vim /etc/keepalived/check-apiserver.sh
#!/bin/sh
netstat -ntlp |grep 6443 || exit 1
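
A variant of the check could also probe the apiserver health endpoint instead of only checking that something listens on port 6443. A sketch, assuming curl is available on the node (the earlier /healthz check shows the endpoint answers unauthenticated requests in this cluster):

#!/bin/sh
# Fail if port 6443 is not listening or the local apiserver does not answer /healthz
netstat -ntlp | grep -q 6443 || exit 1
curl -sfk https://127.0.0.1:6443/healthz -o /dev/null || exit 1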

6. After completing the steps above, start keepalived:

# Start the keepalived service on both the MASTER and BACKUP nodes
$ systemctl enable keepalived && service keepalived start

# Check the status
$ service keepalived status

# View the logs
$ journalctl -f -u keepalived

# Check the virtual IP
$ ip a

Deploying the First Kubernetes Master Node

In a cluster created with kubeadm, most components run as Docker containers, so kubeadm needs to pull the corresponding component images when initializing the master node. By default, however, kubeadm pulls from Google's k8s.gcr.io registry, which is unreachable from mainland China, so the required images cannot be pulled directly.

To work around this you can either use a proxy, or manually pull the matching images from a domestic registry and retag them. I chose the latter. First, list the images that kubeadm needs to pull:

[root@m1 ~]# kubeadm config images list
W0830 19:17:13.056761   81487 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
k8s.gcr.io/kube-apiserver:v1.19.0
k8s.gcr.io/kube-controller-manager:v1.19.0
k8s.gcr.io/kube-scheduler:v1.19.0
k8s.gcr.io/kube-proxy:v1.19.0
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.9-1
k8s.gcr.io/coredns:1.7.0
[root@m1 ~]# 

I pull from Alibaba Cloud's container registry, but one catch is that the version tags there may not match the ones kubeadm expects, so you have to check the registry yourself to confirm.

For example, kubeadm lists v1.19.0 here, while the Alibaba Cloud registry has v1.19.0-rc.1. After finding the matching tags, and to avoid repetitive work, I wrote a shell script to pull the images and fix their tags:

[root@m1 ~]# vim pullk8s.sh
#!/bin/bash
ALIYUN_KUBE_VERSION=v1.19.0-rc.1
KUBE_VERSION=v1.19.0
KUBE_PAUSE_VERSION=3.2
ETCD_VERSION=3.4.9-1
DNS_VERSION=1.7.0
username=registry.cn-hangzhou.aliyuncs.com/google_containers

images=(
    kube-proxy-amd64:${ALIYUN_KUBE_VERSION}
    kube-scheduler-amd64:${ALIYUN_KUBE_VERSION}
    kube-controller-manager-amd64:${ALIYUN_KUBE_VERSION}
    kube-apiserver-amd64:${ALIYUN_KUBE_VERSION}
    pause:${KUBE_PAUSE_VERSION}
    etcd-amd64:${ETCD_VERSION}
    coredns:${DNS_VERSION}
)

for image in ${images[@]}
do
    docker pull ${username}/${image}
    # The "-amd64" suffix must be removed here, otherwise kubeadm will not recognize the local images
    new_image=`echo $image|sed 's/-amd64//g'`
    if [[ $new_image == *$ALIYUN_KUBE_VERSION* ]]
    then
        new_kube_image=`echo $new_image|sed "s/$ALIYUN_KUBE_VERSION//g"`
        docker tag ${username}/${image} k8s.gcr.io/${new_kube_image}$KUBE_VERSION
    else
        docker tag ${username}/${image} k8s.gcr.io/${new_image}
    fi
    docker rmi ${username}/${image}
done
[root@m1 ~]# sh pullk8s.sh

After the script finishes, the Docker image list should look like this:

[root@m1 ~]# docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy                v1.19.0             b2d80fe68e4f        6 weeks ago         120MB
k8s.gcr.io/kube-controller-manager   v1.19.0             a7cd7b6717e8        6 weeks ago         116MB
k8s.gcr.io/kube-apiserver            v1.19.0             1861e5423d80        6 weeks ago         126MB
k8s.gcr.io/kube-scheduler            v1.19.0             6d4fe43fdd0d        6 weeks ago         48.4MB
k8s.gcr.io/etcd                      3.4.9-1             d4ca8726196c        2 months ago        253MB
k8s.gcr.io/coredns                   1.7.0               bfe3a36ebd25        2 months ago        45.2MB
k8s.gcr.io/pause                     3.2                 80d28bedfe5d        6 months ago        683kB
[root@m1 ~]# 

Create the kubeadm configuration file used to initialize the master node:

[root@m1 ~]# vim kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.19.0
# The control-plane endpoint; this IP is the keepalived virtual IP
controlPlaneEndpoint: "192.168.243.100:6443"
networking:
    # This CIDR is a Calico default. Substitute or remove for your CNI provider.
    podSubnet: "172.22.0.0/16"  # The subnet used by pods
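
As an aside, an alternative to pulling and retagging images by hand is to point kubeadm at a mirror registry directly via the imageRepository field of ClusterConfiguration. A sketch of that approach, assuming the mirror carries the exact tags you need (which, as noted above, was not the case for v1.19.0 at the time, hence the retagging script):

# Hypothetical alternative config that pulls straight from the Aliyun mirror
$ cat > kubeadm-config-aliyun.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.19.0
imageRepository: registry.aliyuncs.com/google_containers
controlPlaneEndpoint: "192.168.243.100:6443"
networking:
  podSubnet: "172.22.0.0/16"
EOF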

Then run the following command to perform the initialization:

[root@m1 ~]# kubeadm init --config=kubeadm-config.yaml --upload-certs
W0830 20:05:29.447773   88394 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.0
[preflight] Running pre-flight checks
    [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local m1] and IPs [10.96.0.1 192.168.243.138 192.168.243.100]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost m1] and IPs [192.168.243.138 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost m1] and IPs [192.168.243.138 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 173.517640 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
a455fb8227dd15882b57b11f3587187181b972d95524bb3ef43e78f76360121e
[mark-control-plane] Marking the node m1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node m1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 5l7pv5.5iiq4atzlazq0b7x
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 192.168.243.100:6443 --token 5l7pv5.5iiq4atzlazq0b7x \
    --discovery-token-ca-cert-hash sha256:0fdc9947984a1c655861349dbd251d581bd6ec336c1ab8d9013cf302412b2140 \
    --control-plane --certificate-key a455fb8227dd15882b57b11f3587187181b972d95524bb3ef43e78f76360121e

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.243.100:6443 --token 5l7pv5.5iiq4atzlazq0b7x \
    --discovery-token-ca-cert-hash sha256:0fdc9947984a1c655861349dbd251d581bd6ec336c1ab8d9013cf302412b2140 
[root@m1 ~]# 
  • Copy the two kubeadm join commands printed here; they will be needed later when adding the other master nodes and the worker nodes

Then run the following commands on the master node to copy the kubeconfig file into place:

[root@m1 ~]# mkdir -p $HOME/.kube
[root@m1 ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@m1 ~]# chown $(id -u):$(id -g) $HOME/.kube/config

Check the current pods:

[root@m1 ~]# kubectl get pod --all-namespaces
NAMESPACE     NAME                         READY   STATUS    RESTARTS   AGE
kube-system   coredns-f9fd979d6-kg4lf      0/1     Pending   0          9m9s
kube-system   coredns-f9fd979d6-t8xzj      0/1     Pending   0          9m9s
kube-system   etcd-m1                      1/1     Running   0          9m22s
kube-system   kube-apiserver-m1            1/1     Running   1          9m22s
kube-system   kube-controller-manager-m1   1/1     Running   1          9m22s
kube-system   kube-proxy-rjgnw             1/1     Running   0          9m9s
kube-system   kube-scheduler-m1            1/1     Running   1          9m22s
[root@m1 ~]# 

Use curl to hit the health check endpoint; a response of ok means everything is fine:

[root@m1 ~]# curl -k https://192.168.243.100:6443/healthz
ok
[root@m1 ~]# 

Deploying the Network Plugin - Calico

Create a directory for the configuration files:

[root@m1 ~]# mkdir -p /etc/kubernetes/addons

Create the calico-rbac-kdd.yaml configuration file in that directory:

[root@m1 ~]# vi /etc/kubernetes/addons/calico-rbac-kdd.yaml
# Calico Version v3.1.3
# https://docs.projectcalico.org/v3.1/releases#v3.1.3
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: calico-node
rules:
  - apiGroups: [""]
    resources:
      - namespaces
    verbs:
      - get
      - list
      - watch
  - apiGroups: [""]
    resources:
      - pods/status
    verbs:
      - update
  - apiGroups: [""]
    resources:
      - pods
    verbs:
      - get
      - list
      - watch
      - patch
  - apiGroups: [""]
    resources:
      - services
    verbs:
      - get
  - apiGroups: [""]
    resources:
      - endpoints
    verbs:
      - get
  - apiGroups: [""]
    resources:
      - nodes
    verbs:
      - get
      - list
      - update
      - watch
  - apiGroups: ["extensions"]
    resources:
      - networkpolicies
    verbs:
      - get
      - list
      - watch
  - apiGroups: ["networking.k8s.io"]
    resources:
      - networkpolicies
    verbs:
      - watch
      - list
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - globalfelixconfigs
      - felixconfigurations
      - bgppeers
      - globalbgpconfigs
      - bgpconfigurations
      - ippools
      - globalnetworkpolicies
      - globalnetworksets
      - networkpolicies
      - clusterinformations
      - hostendpoints
    verbs:
      - create
      - get
      - list
      - update
      - watch

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: calico-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-node
subjects:
- kind: ServiceAccount
  name: calico-node
  namespace: kube-system

Then run the following commands to install Calico:

[root@m1 ~]# kubectl apply -f /etc/kubernetes/addons/calico-rbac-kdd.yaml
[root@m1 ~]# kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
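
The Calico pods can take a minute or two to pull their images and start. One way to wait for them to settle (a sketch) is to watch the rollout of the DaemonSet and the controllers Deployment:

$ kubectl -n kube-system rollout status daemonset/calico-node
$ kubectl -n kube-system rollout status deployment/calico-kube-controllers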

Check the status:

[root@m1 ~]# kubectl get pod --all-namespaces 
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-5bc4fc6f5f-pdjls   1/1     Running   0          2m47s
kube-system   calico-node-tkdmv                          1/1     Running   0          2m47s
kube-system   coredns-f9fd979d6-kg4lf                    1/1     Running   0          23h
kube-system   coredns-f9fd979d6-t8xzj                    1/1     Running   0          23h
kube-system   etcd-m1                                    1/1     Running   1          23h
kube-system   kube-apiserver-m1                          1/1     Running   2          23h
kube-system   kube-controller-manager-m1                 1/1     Running   2          23h
kube-system   kube-proxy-rjgnw                           1/1     Running   1          23h
kube-system   kube-scheduler-m1                          1/1     Running   2          23h
[root@m1 ~]# 

Joining the Other Master Nodes to the Cluster

Join the cluster using the kubeadm join command saved earlier, but note that the join commands for master and worker nodes are different, so don't mix them up. Run the following on both m2 and m3:

$ kubeadm join 192.168.243.100:6443 --token 5l7pv5.5iiq4atzlazq0b7x \
    --discovery-token-ca-cert-hash sha256:0fdc9947984a1c655861349dbd251d581bd6ec336c1ab8d9013cf302412b2140 \
    --control-plane --certificate-key a455fb8227dd15882b57b11f3587187181b972d95524bb3ef43e78f76360121e
  • Tip: the join command for master nodes includes the --control-plane and --certificate-key parameters

Then wait a moment; when the command succeeds it prints output like the following:

[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local m3] and IPs [10.96.0.1 192.168.243.141 192.168.243.100]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost m3] and IPs [192.168.243.141 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost m3] and IPs [192.168.243.141 127.0.0.1 ::1]
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[mark-control-plane] Marking the node m3 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node m3 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

Then follow the prompt to copy the kubectl configuration file:

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

At this point port 6443 should also be listening:

[root@m2 ~]# netstat -lntp |grep 6443
tcp6       0      0 :::6443                 :::*                    LISTEN      31910/kube-apiserve 
[root@m2 ~]# 

join命令執(zhí)行成功不一定代表就加入集群成功蜕便,此時(shí)需要回到m1節(jié)點(diǎn)上去查看節(jié)點(diǎn)是否為Ready狀態(tài):

[root@m1 ~]# kubectl get nodes
NAME   STATUS     ROLES    AGE     VERSION
m1     Ready      master   24h     v1.19.0
m2     NotReady   master   3m47s   v1.19.0
m3     NotReady   master   3m31s   v1.19.0
[root@m1 ~]# 

As you can see, m2 and m3 are both NotReady, which means they have not successfully joined the cluster. So I used the following command to check the logs:

$ journalctl -f

It turned out to be the familiar network problem (k8s.gcr.io being unreachable) preventing the pause image from being pulled:

8月 31 20:09:11 m2 kubelet[10122]: W0831 20:09:11.713935   10122 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
8月 31 20:09:12 m2 kubelet[10122]: E0831 20:09:12.442430   10122 kubelet.go:2103] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
8月 31 20:09:17 m2 kubelet[10122]: E0831 20:09:17.657880   10122 kuberuntime_manager.go:730] createPodSandbox for pod "calico-node-jksvg_kube-system(5b76b6d7-0bd9-4454-a674-2d2fa4f6f35e)" failed: rpc error: code = Unknown desc = failed pulling image "k8s.gcr.io/pause:3.2": Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

So on m2 and m3, copy the script previously used on m1 to pull the images from the domestic registry, and run it:

$ scp -r m1:/root/pullk8s.sh /root/pullk8s.sh
$ sh /root/pullk8s.sh

After it finishes and a few minutes have passed, go back to m1 and check the nodes again; this time they are all Ready:

[root@m1 ~]# kubectl get nodes
NAME   STATUS   ROLES    AGE   VERSION
m1     Ready    master   24h   v1.19.0
m2     Ready    master   14m   v1.19.0
m3     Ready    master   13m   v1.19.0
[root@m1 ~]# 

Joining the Worker Nodes to the Cluster

The steps are basically the same as in the previous section, except that they are run on s1 and s2; just make sure you use the correct kubeadm join command. So this part is only covered briefly:

# Join the cluster with the previously saved join command
$ kubeadm join 192.168.243.100:6443 --token 5l7pv5.5iiq4atzlazq0b7x \
    --discovery-token-ca-cert-hash sha256:0fdc9947984a1c655861349dbd251d581bd6ec336c1ab8d9013cf302412b2140 

# Wait patiently for a while; you can watch the logs in the meantime
$ journalctl -f

Once all the worker nodes have joined successfully, the HA Kubernetes cluster is complete. The cluster's node information now looks like this:

[root@m1 ~]# kubectl get nodes 
NAME   STATUS   ROLES    AGE     VERSION
m1     Ready    master   24h     v1.19.0
m2     Ready    master   60m     v1.19.0
m3     Ready    master   60m     v1.19.0
s1     Ready    <none>   9m45s   v1.19.0
s2     Ready    <none>   119s    v1.19.0
[root@m1 ~]# 
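
The ROLES column shows <none> for the worker nodes because kubeadm only labels control-plane nodes. If you prefer the column to read worker, you can add the role label yourself; this is purely cosmetic (a sketch):

$ kubectl label node s1 node-role.kubernetes.io/worker=
$ kubectl label node s2 node-role.kubernetes.io/worker=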

The pod information is as follows:

[root@m1 ~]# kubectl get pod --all-namespaces 
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-5bc4fc6f5f-pdjls   1/1     Running   0          73m
kube-system   calico-node-8m8lz                          1/1     Running   0          9m43s
kube-system   calico-node-99xps                          1/1     Running   0          60m
kube-system   calico-node-f48zw                          1/1     Running   0          117s
kube-system   calico-node-jksvg                          1/1     Running   0          60m
kube-system   calico-node-tkdmv                          1/1     Running   0          73m
kube-system   coredns-f9fd979d6-kg4lf                    1/1     Running   0          24h
kube-system   coredns-f9fd979d6-t8xzj                    1/1     Running   0          24h
kube-system   etcd-m1                                    1/1     Running   1          24h
kube-system   kube-apiserver-m1                          1/1     Running   2          24h
kube-system   kube-controller-manager-m1                 1/1     Running   2          24h
kube-system   kube-proxy-22h6p                           1/1     Running   0          9m43s
kube-system   kube-proxy-khskm                           1/1     Running   0          60m
kube-system   kube-proxy-pkrgm                           1/1     Running   0          60m
kube-system   kube-proxy-rjgnw                           1/1     Running   1          24h
kube-system   kube-proxy-t4pxl                           1/1     Running   0          117s
kube-system   kube-scheduler-m1                          1/1     Running   2          24h
[root@m1 ~]# 

Cluster Availability Tests

Creating an nginx DaemonSet

m1節(jié)點(diǎn)上創(chuàng)建nginx-ds.yml配置文件,內(nèi)容如下:

apiVersion: v1
kind: Service
metadata:
  name: nginx-ds
  labels:
    app: nginx-ds
spec:
  type: NodePort
  selector:
    app: nginx-ds
  ports:
  - name: http
    port: 80
    targetPort: 80
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ds
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      app: nginx-ds
  template:
    metadata:
      labels:
        app: nginx-ds
    spec:
      containers:
      - name: my-nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80

Then run the following command to create the nginx DaemonSet:

[root@m1 ~]# kubectl create -f nginx-ds.yml
service/nginx-ds created
daemonset.apps/nginx-ds created
[root@m1 ~]# 

Checking IP Connectivity

After a short wait, check whether the pods are running normally:

[root@m1 ~]# kubectl get pods -o wide
NAME             READY   STATUS    RESTARTS   AGE     IP               NODE   NOMINATED NODE   READINESS GATES
nginx-ds-6nnpm   1/1     Running   0          2m32s   172.22.152.193   s1     <none>           <none>
nginx-ds-bvpqj   1/1     Running   0          2m32s   172.22.78.129    s2     <none>           <none>
[root@m1 ~]# 

Try pinging the pod IPs from every node:

[root@s1 ~]# ping 172.22.152.193
PING 172.22.152.193 (172.22.152.193) 56(84) bytes of data.
64 bytes from 172.22.152.193: icmp_seq=1 ttl=63 time=0.269 ms
64 bytes from 172.22.152.193: icmp_seq=2 ttl=63 time=0.240 ms
64 bytes from 172.22.152.193: icmp_seq=3 ttl=63 time=0.228 ms
64 bytes from 172.22.152.193: icmp_seq=4 ttl=63 time=0.229 ms
^C
--- 172.22.152.193 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 2999ms
rtt min/avg/max/mdev = 0.228/0.241/0.269/0.022 ms
[root@s1 ~]# 

Then check the Service status:

[root@m1 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        2d1h
nginx-ds     NodePort    10.105.139.228   <none>        80:31145/TCP   3m21s
[root@m1 ~]# 

Try accessing the service from every node; a normal response means the Service IP is reachable as well:

[root@m1 ~]# curl 10.105.139.228:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a >nginx.org</a>.<br/>
Commercial support is available at
<a >nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@m1 ~]# 

Then check NodePort availability on every node; the NodePort of nginx-ds is 31145. If it can be accessed as shown below, NodePort works too:

[root@m3 ~]# curl 192.168.243.140:31145
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a >nginx.org</a>.<br/>
Commercial support is available at
<a >nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@m3 ~]# 

Checking DNS Availability

We need to create an nginx pod. First define a pod-nginx.yaml configuration file with the following content:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.7.9
    ports:
    - containerPort: 80

Then create the pod from that configuration:

[root@m1 ~]# kubectl create -f pod-nginx.yaml
pod/nginx created
[root@m1 ~]# 

Enter the pod with the following command:

[root@m1 ~]# kubectl exec nginx -i -t -- /bin/bash

Check the DNS configuration:

root@nginx:/# cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local localdomain
options ndots:5
root@nginx:/# 

Next, test whether the Service name resolves correctly. If the name nginx-ds resolves to its IP 10.105.139.228 as shown below, DNS is working as well:

root@nginx:/# ping nginx-ds
PING nginx-ds.default.svc.cluster.local (10.105.139.228): 48 data bytes
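
Besides pinging from inside the nginx pod, you can also spin up a throwaway pod purely for DNS lookups; a sketch (busybox 1.28 is chosen because nslookup in some newer busybox images is reported to misbehave):

$ kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup nginx-ds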

High Availability Test

m1節(jié)點(diǎn)上執(zhí)行如下命令將其關(guān)機(jī):

[root@m1 ~]# init 0

Then check whether the virtual IP has successfully failed over to the m2 node:

[root@m2 ~]# ip a |grep 192.168.243.100
    inet 192.168.243.100/32 scope global ens32
[root@m2 ~]# 

Next, test whether kubectl can still interact with the cluster from m2 and m3; if it can, the cluster has achieved a degree of high availability:

[root@m2 ~]# kubectl get nodes
NAME   STATUS     ROLES    AGE   VERSION
m1     NotReady   master   3d    v1.19.0
m2     Ready      master   16m   v1.19.0
m3     Ready      master   13m   v1.19.0
s1     Ready      <none>   2d    v1.19.0
s2     Ready      <none>   47h   v1.19.0
[root@m2 ~]# 
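
You can also confirm from m2 that the apiserver is still reachable through the virtual IP after the failover, using the same health check as before; it should still return ok:

$ curl -k https://192.168.243.100:6443/healthz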

Deploying the Dashboard

The dashboard is a web UI provided by Kubernetes that simplifies operating and managing the cluster. From it you can conveniently view all kinds of information, operate resources such as Pods and Services, and create new resources. The dashboard project lives in the kubernetes/dashboard repository on GitHub.

Deploying the dashboard is also fairly simple. First define a dashboard-all.yaml configuration file with the following content:

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30005
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.0.3
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.4
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
          - mountPath: /tmp
            name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}

Create the dashboard resources:

[root@m1 ~]# kubectl create -f dashboard-all.yaml 
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
[root@m1 ~]# 

Check the Deployment status:

[root@m1 ~]# kubectl get deployment kubernetes-dashboard -n kubernetes-dashboard
NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
kubernetes-dashboard   1/1     1            1           29s
[root@m1 ~]# 

Check the dashboard pods:

[root@m1 ~]# kubectl --namespace kubernetes-dashboard get pods -o wide |grep dashboard
dashboard-metrics-scraper-7b59f7d4df-q4jqj   1/1     Running   0          5m27s   172.22.152.198   s1     <none>           <none>
kubernetes-dashboard-5dbf55bd9d-nqvjz        1/1     Running   0          5m27s   172.22.202.17    m1     <none>           <none>
[root@m1 ~]# 

Check the dashboard Service:

[root@m1 ~]# kubectl get services kubernetes-dashboard -n kubernetes-dashboard
NAME                   TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   10.104.217.178   <none>        443:30005/TCP   5m57s
[root@m1 ~]# 

Check whether port 30005 is being listened on:

[root@m1 ~]# netstat -ntlp |grep 30005
tcp        0      0 0.0.0.0:30005      0.0.0.0:*     LISTEN      4085/kube-proxy     
[root@m1 ~]# 

Accessing the Dashboard

For cluster security, since version 1.7 the dashboard only allows access over HTTPS. Because we expose the service with a NodePort, it can be reached at https://NodeIP:NodePort. For example, accessing it with curl:

[root@m1 ~]# curl https://192.168.243.138:30005 -k
<!--
Copyright 2017 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

<!doctype html>
<html lang="en">

<head>
  <meta charset="utf-8">
  <title>Kubernetes Dashboard</title>
  <link rel="icon"
        type="image/png"
        href="assets/images/kubernetes-logo.png" />
  <meta name="viewport"
        content="width=device-width">
<link rel="stylesheet" href="styles.988f26601cdcb14da469.css"></head>

<body>
  <kd-root></kd-root>
<script src="runtime.ddfec48137b0abfd678a.js" defer></script><script src="polyfills-es5.d57fe778f4588e63cc5c.js" nomodule defer></script><script src="polyfills.49104fe38e0ae7955ebb.js" defer></script><script src="scripts.391d299173602e261418.js" defer></script><script src="main.b94e335c0d02b12e3a7b.js" defer></script></body>

</html>
[root@m1 ~]# 
  • Because the dashboard's certificate is self-signed, the -k flag is needed here to make the HTTPS request without certificate verification

About Custom Certificates

By default the dashboard's certificate is auto-generated and therefore not a trusted certificate. If you have a domain and a matching valid certificate, you can replace it and access the dashboard securely through that domain.

dashboard-all.yaml中增加dashboard啟動(dòng)參數(shù)乏德,可以指定證書文件,其中證書文件是通過secret注進(jìn)來的吠昭。

- --tls-cert-file
- dashboard.cer
- --tls-key-file
- dashboard.key
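
For example, if your certificate and key are in files named dashboard.cer and dashboard.key, one way to get them mounted at /certs is to recreate the kubernetes-dashboard-certs Secret from those files (a sketch; adjust the file names to your own certificate):

# Replace the empty certs Secret created by dashboard-all.yaml with the real files
$ kubectl -n kubernetes-dashboard delete secret kubernetes-dashboard-certs
$ kubectl -n kubernetes-dashboard create secret generic kubernetes-dashboard-certs \
    --from-file=dashboard.cer --from-file=dashboard.key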

Logging in to the Dashboard

The dashboard only supports token authentication by default, so if you use a KubeConfig file you must specify a token in it. Here we log in with a token.

First create a service account:

[root@m1 ~]# kubectl create sa dashboard-admin -n kube-system
serviceaccount/dashboard-admin created
[root@m1 ~]#

Create the cluster role binding:

[root@m1 ~]# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created
[root@m1 ~]# 

Find the name of the dashboard-admin secret:

[root@m1 ~]# kubectl get secrets -n kube-system | grep dashboard-admin | awk '{print $1}'
dashboard-admin-token-ph7h2
[root@m1 ~]# 

Print the secret's token:

[root@m1 ~]# ADMIN_SECRET=$(kubectl get secrets -n kube-system | grep dashboard-admin | awk '{print $1}')
[root@m1 ~]# kubectl describe secret -n kube-system ${ADMIN_SECRET} | grep -E '^token' | awk '{print $2}'
eyJhbGciOiJSUzI1NiIsImtpZCI6IkVnaDRYQXgySkFDOGdDMnhXYXJWbkY2WVczSDVKeVJRaE5vQ0ozOG5PanMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tcGg3aDIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNjA1ZWY3OTAtOWY3OC00NDQzLTgwMDgtOWRiMjU1MjU0MThkIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.xAO3njShhTRkgNdq45nO7XNy242f8XVs-W4WBMui-Ts6ahdZECoNegvWjLDCEamB0UW72JeG67f2yjcWohANwfDCHobRYPkOhzrVghkdULbrCCGai_fe60Svwf_apSmlKP3UUdu16M4GxopaTlINZpJY_z5KJ4kLq66Y1rjAA6j9TI4Ue4EazJKKv0dciv6NsP28l7-nvUmhj93QZpKqY3PQ7vvcPXk_sB-jjSSNJ5ObWuGeDBGHgQMRI4F1XTWXJBYClIucsbu6MzDA8yop9S7Ci8D00QSa0u3M_rqw-3UHtSxQee41uVVjIASfnCEVayKDIbJzG3gc2AjqGqJhkQ
[root@m1 ~]# 

With the token in hand, open https://192.168.243.138:30005 in a browser. Because the dashboard's certificate is self-signed, the browser shows a warning; ignore it and click "Advanced" -> "Proceed" to continue:

(screenshot: browser certificate warning page)

Then enter the token:


(screenshot: dashboard token login page)

After a successful login, the home page looks like this:


(screenshot: dashboard home page)

There is not much more to say about the web UI, so it is not covered further here; feel free to explore it on your own.
