Installing Kubernetes v1.19.4 on CentOS 7

I. Deployment Environment

Host list:

Hostname   IP                Spec        k8s version
master     192.168.214.128   2 CPU 2GB   v1.19.4
node01     192.168.214.129   2 CPU 2GB   v1.19.4
node02     192.168.214.130   2 CPU 2GB   v1.19.4

All hosts run CentOS 7. Three servers in total: one master and two nodes.

The master needs at least 2 CPUs and 2 GB of RAM (2 GB is the minimum; more of both is better).
Each node needs at least 1 CPU and 2 GB of RAM.

Kubernetes cluster components:

etcd                     a highly available key/value store used for cluster state and service discovery
flannel                  provides cross-host container network communication
kube-apiserver           exposes the Kubernetes API for the cluster
kube-controller-manager  runs the controllers that drive the cluster toward its desired state
kube-scheduler           schedules containers (Pods) onto nodes
kubelet                  starts containers on each node according to the Pod specs it is given
kube-proxy               provides network proxying and service load balancing on each node

Advantages of Kubernetes:

1. Service discovery and load balancing
Kubernetes can expose a container using a DNS name or its own IP address. If traffic to a container is high, Kubernetes can load-balance and distribute the network traffic so that the deployment stays stable.

2. Storage orchestration
Kubernetes lets you automatically mount a storage system of your choice, such as local storage or a public cloud provider.

3. Automated rollouts and rollbacks
You describe the desired state of your deployed containers, and Kubernetes changes the actual state to the desired state at a controlled rate. For example, you can automate Kubernetes to create new containers for a deployment, remove existing containers, and adopt all of their resources into the new containers.

4. Automatic bin packing
You tell Kubernetes how much CPU and memory (RAM) each container needs. When containers have resource requests specified, Kubernetes can make better decisions about managing their resources.

5. Self-healing
Kubernetes restarts containers that fail, replaces containers, kills containers that do not respond to your user-defined health checks, and does not advertise them to clients until they are ready to serve.

6. Secret and configuration management
Kubernetes lets you store and manage sensitive information such as passwords, OAuth tokens, and SSH keys. You can deploy and update secrets and application configuration without rebuilding container images and without exposing secrets in your stack configuration.

Kubernetes official site: https://kubernetes.io
Kubernetes project on GitHub: https://github.com/kubernetes/kubernetes

二合砂、安裝準(zhǔn)備工作

Set the hostnames

Master node:  [root@centos7 ~] hostnamectl set-hostname master
Node 1:       [root@centos7 ~] hostnamectl set-hostname node1
Node 2:       [root@centos7 ~] hostnamectl set-hostname node2

After the change, log out and SSH back in; the prompt will show the new hostnames master, node1, and node2. (The examples in this article use node1/node2 and node01/node02 interchangeably; pick one scheme and keep /etc/hosts consistent with it.)
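If you prefer not to log out, a quick check (hostnamectl is standard on CentOS 7; exec bash just reloads the shell in place):

hostnamectl status | grep "Static hostname"   # should show the new name
exec bash                                     # refresh the prompt in the current session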

Disable the firewall and SELinux

Run this on all three servers.
Disabling the firewall is for test environments only; do not do this in production.
On cloud servers (e.g., Alibaba Cloud ECS), open the required ports in the instance's security group instead.
SELinux is disabled so that containers can access the host filesystem.

Stop the firewall and disable it at boot

[root@centos7 ~] systemctl stop firewalld && systemctl disable firewalld

Permanently disable SELinux (takes effect after a reboot)

[root@centos7 ~] sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config

Temporarily disable SELinux

[root@centos7 ~] setenforce 0
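To confirm the current SELinux state:

getenforce    # Permissive after setenforce 0; Disabled after a reboot with the config change
sestatus      # more detailed status report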

Make sure the yum repositories are reachable (switch to a domestic mirror; all nodes)

[root@localhost ~]# sed -ri s/^#baseurl/baseurl/g /etc/yum.repos.d/CentOS-Base.repo
[root@localhost ~]# sed -ri s/^mirrorlist/#mirrorlist/g /etc/yum.repos.d/CentOS-Base.repo
[root@localhost ~]# curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
[root@localhost ~]# yum clean all
[root@localhost ~]# yum makecache fast

Add local name resolution (all nodes). The IPs and hostnames below are from the author's lab; replace them with your own nodes' addresses and names.

[root@master ~]# cat >> /etc/hosts <<eof
10.9.62.205 k8s-master
10.9.62.70 node1
10.9.62.69 node2
10.9.62.68 node3
eof
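A quick sanity check that the new entries resolve (getent reads /etc/hosts directly; adjust the hostname to one you actually added):

getent hosts node1     # should print the IP configured above
ping -c 1 node2        # one ICMP round trip to confirm reachability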

Disable swap (all nodes)

Run this on the master and on every node.
Swap can make Docker misbehave and degrades Kubernetes performance.
Background from the developers: https://github.com/kubernetes/kubernetes/issues/53533

Temporarily disable

[root@master ~] swapoff -a

Permanently disable

[root@master ~] sed -i.bak '/swap/s/^/#/' /etc/fstab

Then reboot the machine.
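After the reboot, confirm that no swap is active:

free -m | grep -i swap    # the Swap line should read 0 0 0
swapon -s                 # prints nothing when no swap device is enabled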

Configure kernel parameters

# cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0
EOF

[root@root]# sysctl --system
[root@root]# modprobe br_netfilter
[root@root]# sysctl -p /etc/sysctl.d/k8s.conf
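Verify that the bridge settings took effect; note that br_netfilter must be loaded before the net.bridge.* keys exist, which is why modprobe precedes sysctl -p above:

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
# net.bridge.bridge-nf-call-iptables = 1
# net.bridge.bridge-nf-call-ip6tables = 1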

Load the IPVS kernel modules
These must be reloaded after every reboot (add the modprobe lines to /etc/rc.local, or use the modules-load.d sketch shown after the check below).
[root@root]# modprobe ip_vs
[root@root]# modprobe ip_vs_rr
[root@root]# modprobe ip_vs_wrr
[root@root]# modprobe ip_vs_sh
[root@root]# modprobe nf_conntrack_ipv4

Check that the modules loaded:
[root@root]# lsmod | grep ip_vs
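As an alternative to /etc/rc.local, here is a minimal sketch using systemd's modules-load.d mechanism (standard on CentOS 7) to load the modules at every boot; the file name ipvs.conf is arbitrary:

cat > /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF
systemctl restart systemd-modules-load   # load them immediately without a reboot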

Install Docker on all machines

Create a script with vim docker.sh, paste in the following, then run it with sh docker.sh:

yum install -y yum-utils
# The Aliyun mirror is added second and overwrites the upstream repo file (same file name)
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# containerd.io >= 1.2.6 is required by recent docker-ce packages (use yum; CentOS 7 has no dnf by default)
yum install -y https://download.docker.com/linux/centos/7/x86_64/stable/Packages/containerd.io-1.2.6-3.3.el7.x86_64.rpm
yum -y install docker-ce docker-ce-cli
systemctl start docker
systemctl enable docker
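After the script finishes, confirm Docker is running and note its cgroup driver; the default cgroupfs matters later when kubelet is configured for systemd (see the troubleshooting note in the join section):

docker version                          # client and server should both respond
docker info | grep -i 'cgroup driver'   # default is cgroupfs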

Install kubeadm and kubelet on all machines

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

  • the repository id in [] must be unique; it identifies the repository
  • name: repository name, free-form
  • baseurl: repository URL
  • enabled: whether the repository is enabled; 1 (the default) means enabled
  • gpgcheck: whether to verify the signatures of packages from this repository; 1 means verify
  • repo_gpgcheck: whether to verify the repository metadata (the package lists); 1 means verify
  • gpgkey=URL: location of the public key used for signature checking; required when gpgcheck is 1, unnecessary when gpgcheck is 0

Update the cache

[root@master ~] yum clean all
[root@master ~] yum -y makecache

Command completion: install bash-completion

# Install bash-completion
[root@master ~] yum -y install bash-completion
# Load bash-completion
[root@master ~] source /etc/profile.d/bash_completion.sh

List the available kubelet versions

[root@master ~] yum list kubelet --showduplicates | sort -r

Install kubelet, kubeadm, kubectl, and ipvsadm

[root@master ~] yum install -y kubelet-1.19.4 kubeadm-1.19.4 kubectl-1.19.4 ipvsadm

  • kubelet runs on every node in the cluster and starts Pods and containers
  • kubeadm initializes and bootstraps the cluster
  • kubectl is the command-line client for talking to the cluster: deploy and manage applications, inspect resources, and create, delete, and update components

Start kubelet

Start kubelet and enable it at boot. Until kubeadm init runs, kubelet restarts in a loop waiting for its configuration; this is expected.

[root@master ~] systemctl enable kubelet && systemctl start kubelet

kubectl command completion

[root@master ~] echo "source <(kubectl completion bash)" >> ~/.bash_profile
[root@master ~] source ~/.bash_profile

Download the images

Almost all of the Kubernetes components and their Docker images are hosted on Google's own registries, which may be unreachable directly. The workaround is to pull the images from the Aliyun mirror registry and then retag them with the default k8s.gcr.io names. This article pulls the images with the image.sh script.

[root@master ~] vim image.sh

url=registry.aliyuncs.com/google_containers
version=v1.19.4
# List the image names (without the registry prefix) required for this Kubernetes version
images=(`kubeadm config images list --kubernetes-version=$version|awk -F '/' '{print $2}'`)
for imagename in ${images[@]} ; do
  docker pull $url/$imagename                        # pull from the Aliyun mirror
  docker tag $url/$imagename k8s.gcr.io/$imagename   # retag with the default k8s.gcr.io name
  docker rmi -f $url/$imagename                      # drop the mirror tag
done

Make image.sh executable; running it downloads the images for the specified version

[root@master ~] chmod 775 image.sh

Run the script

[root@master ~] ./image.sh

Check the downloaded images

[root@master ~] docker images

Note: the worker nodes only need these images pre-pulled; do not run kubeadm init on them. Only the master is initialized, and the workers join the cluster later with kubeadm join.

III. Initialize the Master

Run this section on the master node only.

Set kubelet's default cgroup driver

[root@master ~] mkdir -p /var/lib/kubelet/
cat > /var/lib/kubelet/config.yaml <<EOF
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF 

Restart kubelet to pick up the change. Docker should use the same cgroup driver (systemd); see the cgroup troubleshooting note in the join section below.

[root@master ~] # systemctl daemon-reload
[root@master ~] # systemctl enable kubelet && systemctl restart kubelet

Check that the environment passes the preflight checks

[root@master ~] kubeadm init phase preflight

Initialize the master. 10.244.0.0/16 is the Pod network CIDR that flannel expects by default; the value depends on the network plugin you choose.

[root@master ~] kubeadm init --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v1.19.4
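On success, the tail of the init output looks roughly like the following; the endpoint, token, and hash are unique to each cluster, so treat this as the shape of the output rather than literal values:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.214.128:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>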

If initialization fails, run kubeadm reset to clean up the environment and start over:

[root@master ~] kubeadm reset
[root@master ~] rm -rf $HOME/.kube/config

Configure master credentials

[root@master ~] echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> /etc/profile
[root@master ~] source /etc/profile

Configure kubectl on the master node

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
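A quick smoke test that kubectl can reach the API server:

kubectl cluster-info   # prints the control plane and CoreDNS endpoints
kubectl get nodes      # the master shows NotReady until a network plugin is installed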

Install the network plugin

Download the YAML manifest on the master node

cd ~ && mkdir flannel && cd flannel

Fetch the manifest from GitHub (search for flannel on https://github.com): https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml — open the page and copy its contents into a local file.

vi kube-flannel.yml

---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.13.1-rc1
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.13.1-rc1
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg

Pre-pull the flannel image (flannel runs as a DaemonSet, so every node needs it):
docker pull quay.io/coreos/flannel:v0.13.1-rc1

Edit the manifest (if needed)

If quay.io is unreachable, replace every image: line in the file (both the initContainers and containers sections reference the flannel image) with a mirror, keeping the tag consistent across all of them; note the mirror's tag may differ from the upstream v0.13.1-rc1. If flannel auto-detects the wrong NIC, pin it with a single --iface argument naming your interface:

containers:
  - name: kube-flannel
    image: registry.cn-shanghai.aliyuncs.com/gcr-k8s/flannel:v0.10.0-amd64  # mirror example; replace every image: line the same way
    command:
    - /opt/bin/flanneld

    args:
    - --ip-masq
    - --kube-subnet-mgr
    - --iface=ens33  # add one --iface line matching your host NIC (e.g. ens33 or eth0)

Apply it

# kubectl apply -f ~/flannel/kube-flannel.yml

Check the results
kubectl get pods --namespace kube-system
kubectl get service
kubectl get svc --namespace kube-system

IV. Join the Worker Nodes to the Cluster

4.1 View the cluster nodes

[root@master ~]# kubectl get nodes
NAME     STATUS     ROLES    AGE   VERSION
master   Ready      master   22h   v1.19.4
node01   NotReady   <none>   22h   v1.19.4
node02   NotReady   <none>   22h   v1.19.4

4.2 Join a node to the cluster

Root privileges are required (e.g., sudo su -).

First obtain the token and the discovery-token-ca-cert-hash values on the master node.

Then run on each worker node:

kubeadm join <control-plane-host>:<control-plane-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>

# Example:
kubeadm join 192.168.214.128:6443 --token 4xpmwx.nw6psmvn9qi4d3cj --discovery-token-ca-cert-hash sha256:c7cbe95a66092c58b4da3ad20874f0fe2b6d6842d28b2762ffc8d36227d7a0a7

Run the following on the master to list the current tokens:

[root@master ~] kubeadm token list
TOKEN                     TTL  EXPIRES               USAGES                  DESCRIPTION                                                EXTRA GROUPS
8ewj1p.9r9hcjoqgajrj4gi   23h  2018-06-12T02:51:28Z  authentication,signing  The default bootstrap token generated by 'kubeadm init'.  system:bootstrappers:kubeadm:default-node-token

By default, tokens expire after 24 hours. To join a node after the current token has expired, create a new one on the control-plane node:

[root@master ~] kubeadm token create
5didvk.d09sbcov8ph2amjw

Get the --discovery-token-ca-cert-hash value:

[root@master ~] openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
   openssl dgst -sha256 -hex | sed 's/^.* //'
8cb2de97839780a412b93877f8507ad6c94f73add17d5d7058e91741c9d5ec78
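For convenience, kubeadm can also create a fresh token and print the complete join command in one step:

[root@master ~] kubeadm token create --print-join-command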

If kubeadm join reports: [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/

Cause: Docker's cgroup driver conflicts with the systemd driver that kubelet expects.

[root@node01 ~]# docker info | grep Cgroup
 Cgroup Driver: cgroupfs

The command above shows the current cgroup driver is cgroupfs; change it to systemd:

cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
[root@node01 ~]# systemctl daemon-reload
[root@node01 ~]# systemctl restart docker
[root@node01 ~]# docker info | grep Cgroup     # verify the cgroup driver is now systemd
 Cgroup Driver: systemd

# Run the join again
[root@node01 ~]# kubeadm join 192.168.191.133:6443 --token xvnp3x.pl6i8ikcdoixkaf0 \
    --discovery-token-ca-cert-hash sha256:9f90161043001c0c75fac7d61590734f844ee507526e948f3647d7b9cfc1362d

V. Run the Join Command on All Worker Nodes

Run this on every worker node; the exact command is printed when kubeadm init succeeds on the master.

# kubeadm join 192.168.1.200:6443 --token ccxrk8.myui0xu4syp99gxu --discovery-token-ca-cert-hash sha256:e3c90ace969aa4d62143e7da6202f548662866dfe33c140095b020031bff2986

8图柏、集群檢測(cè)

查看pods

說(shuō)明:節(jié)點(diǎn)加入到集群之后需要等待幾分鐘再查看

# kubectl get pods -n kube-system
NAME                            READY   STATUS             RESTARTS   AGE
coredns-6c66ffc55b-l76bq        1/1     Running            0          16m
coredns-6c66ffc55b-zlsvh        1/1     Running            0          16m
etcd-node1                      1/1     Running            0          16m
kube-apiserver-node1            1/1     Running            0          16m
kube-controller-manager-node1   1/1     Running            0          15m
kube-flannel-ds-sr6tq           0/1     CrashLoopBackOff   6          7m12s
kube-flannel-ds-ttzhv           1/1     Running            0          9m24s
kube-proxy-nfbg2                1/1     Running            0          7m12s
kube-proxy-r4g7b                1/1     Running            0          16m
kube-scheduler-node1            1/1     Running            0          16m

If a pod is stuck in an abnormal 0/1 state for a long time, delete it and wait for the cluster to create a replacement:

# kubectl delete pod kube-flannel-ds-sr6tq -n kube-system
pod "kube-flannel-ds-sr6tq" deleted

Check again after the deletion; the status is now normal:

[root@master flannel]# kubectl get pods -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-6955765f44-g767b         1/1     Running   0          18m
coredns-6955765f44-l8zzs         1/1     Running   0          18m
etcd-master                      1/1     Running   0          18m
kube-apiserver-master            1/1     Running   0          18m
kube-controller-manager-master   1/1     Running   0          18m
kube-flannel-ds-amd64-bsdcr      1/1     Running   0          60s
kube-flannel-ds-amd64-g8d7x      1/1     Running   0          2m33s
kube-flannel-ds-amd64-qjpzg      1/1     Running   0          5m9s
kube-proxy-5pmgv                 1/1     Running   0          2m33s
kube-proxy-r962v                 1/1     Running   0          60s
kube-proxy-zklq2                 1/1     Running   0          18m
kube-scheduler-master            1/1     Running   0          18m

Check the node status again

[root@master flannel]# kubectl get nodes
NAME     STATUS   ROLES    AGE     VERSION
master   Ready    master   19m     v1.17.2
node1    Ready    <none>   3m16s   v1.17.2
node2    Ready    <none>   103s    v1.17.2

This completes the cluster setup.

Install the Kubernetes Dashboard

Install the Dashboard on the master only.

Kubernetes Dashboard is the official web UI for Kubernetes. It can:

  • deploy containerized applications to the cluster
  • diagnose problems in containerized applications
  • manage cluster resources
  • show the applications running on the cluster
  • create and modify Kubernetes resources (e.g., Deployments, Jobs, DaemonSets)
  • surface errors occurring in the cluster

There is also a web UI developed in China: https://kuboard.cn/install/install-dashboard.html

Worth a look if you are interested.

Download the YAML

Download https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.4/aio/deploy/recommended.yaml

vi recommended.yaml

# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.0.4
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.4
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
          - mountPath: /tmp
            name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}

Edit the configuration file to expose the Dashboard via NodePort

# vim recommended.yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort # add this line
  ports:
    - port: 443
      nodePort: 30001 # add this line
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

Apply the manifest
kubectl apply -f recommended.yaml
Check the pod and service
kubectl get pod -o wide -n kubernetes-dashboard
kubectl get svc -o wide -n kubernetes-dashboard
Open the Dashboard in Firefox (it makes accepting the self-signed certificate easy); use your master node's IP:
https://10.9.62.205:30001

Create a Dashboard admin user

# vim create-admin.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
  
# kubectl apply -f create-admin.yaml

Get the token

[root@master dashboard1]# kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
Name:         admin-user-token-z4jp6
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 349285ce-741d-4dc1-a600-1843a6ec9751

Type:  kubernetes.io/service-account-token

Data
====
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6InY5M1pSc3RpejBVZ0x6LTNSbWlCc2t5b01ualNZWnpYMVB5YzUwNmZ3ZmsifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXo0anA2Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIzNDkyODVjZS03NDFkLTRkYzEtYTYwMC0xODQzYTZlYzk3NTEiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.JtCa0VC7tYtIGLWlwSKUwqSL0T8eRvZ8jk_AUxB4Atmi5PjF9IjAHNNwGS3HaTL3Q86fCI8MvYGf3Eplk9X-n-g9WsrFIxXxa0wGJxZp0d8R78A6vuN7I7Zd5CeQm_O2ycTUuQhYnSZlNplF8X033QOfjOoFnKKevbn2094XXWWZuAsT9haGnZ8BX92DmYzsaMyLesfv7ZziJD80KgSQ8_jtb0n55zw5cedYTsRCZgofJ_o9U5SUW3I0AXG-vVhI28m0sMBjZkuMppfB4eMLnSDH-XAw3Gvwe_2NOLfS4hBTkYu7gJket-gif9Cs8Ybkzvf2qXdZW5fydZUuSylafg
ca.crt:     1025 bytes
namespace:  20 bytes

Paste the token into the login page to sign in.
