Deploying a Kubernetes 1.14.2 Cluster with Kubeadm

(image: Kubernetes logo)

The name Kubernetes comes from Greek and means helmsman or pilot, which is why the ship's wheel in the k8s logo matches the name. The "8" in the common abbreviation k8s stands for the eight letters "ubernete" between the "K" and the "s". Quoting the Kubernetes Chinese community documentation: Kubernetes is an open-source system for managing containerized applications across multiple hosts on a cloud platform. Its goal is to make deploying containerized applications simple and powerful, and it provides mechanisms for application deployment, planning, updating, and maintenance.

環(huán)境、主從節(jié)點規(guī)劃

各個節(jié)點規(guī)劃

IP address      Role              OS
172.31.76.16    k8s worker node   CentOS 7.6
172.31.76.17    k8s worker node   CentOS 7.6
172.31.76.18    k8s master node   CentOS 7.6

每個節(jié)點軟件版本

Software      Version    Purpose
Docker        18.09.6    Container runtime
Kubernetes    1.14.2     Container orchestration

Kubernetes components to install

Component     Version     Purpose
kubeadm       1.14.2-0    Tool for bootstrapping the k8s cluster
kubectl       1.14.2-0    Command-line tool for deploying and managing applications (CRUD on all kinds of resources)
kubelet       1.14.2-0    Runs on every node; responsible for starting containers and Pods

Preparation

每臺節(jié)點服務器設置主機名

# 主節(jié)點主機名對應 172.31.76.18
hostnamectl --static set-hostname  k8s-master
# 從節(jié)點主機名對應 172.31.76.16 172.31.76.17
hostnamectl --static set-hostname  k8s-node-1
hostnamectl --static set-hostname  k8s-node-2
  • Run hostnamectl to verify that the hostname was set correctly
# Example hostnamectl output
Static hostname: k8s-node-1
Transient hostname: docker_76_16
         Icon name: computer-vm
           Chassis: vm
        Machine ID: 8919fc90446b48fcbeb2c6cf267caba2
           Boot ID: a684023646094b999b7ace62aed3cd2e
    Virtualization: vmware
  Operating System: CentOS Linux 7 (Core)
       CPE OS Name: cpe:/o:centos:centos:7
            Kernel: Linux 3.10.0-327.el7.x86_64
      Architecture: x86-64
  • 每個節(jié)點的主機加入host 解析
# 編輯每臺機器的 /etc/hosts文件沃缘,寫入下面內(nèi)容

172.31.76.16 k8s-node-1
172.31.76.17 k8s-node-2
172.31.76.18 k8s-master
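  • The same entries can be appended on each node in one step with a heredoc (a minimal sketch, assuming the IPs above):
cat >> /etc/hosts <<EOF
172.31.76.16 k8s-node-1
172.31.76.17 k8s-node-2
172.31.76.18 k8s-master
EOF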
  • 關(guān)閉每個節(jié)點的防火墻
# 注意以下命令是下次生效
systemctl disable firewalld.service
systemctl stop firewalld.service

# 關(guān)閉防火墻立即生效
iptables -F

# 防火墻關(guān)閉后可以使用以下命令查看防火墻狀態(tài)
systemctl status firewalld  
  • Temporarily disable SELinux (a Linux kernel module that acts as a security subsystem); on my machines it is already disabled by default
setenforce 0                  # put SELinux into permissive mode (no reboot required)

# Make the change permanent by editing the config file (takes effect after a reboot)
vim /etc/selinux/config
SELINUX=disabled
  • 每個節(jié)點關(guān)閉 swap
swapoff -a 
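  • setenforce 0 and swapoff -a only last until the next reboot; a minimal sketch for making both changes permanent (assuming the stock CentOS 7 config files):
# Keep SELinux disabled after reboot
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
# Comment out the swap entry so swap stays off after reboot
sed -i '/ swap / s/^/#/' /etc/fstab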

各個節(jié)點組件安裝

  • 經(jīng)過前面的準備工作槐臀,接下來我們開始安裝組件,注意一下組件每個節(jié)點都需要安裝

Installing Docker

  • See my earlier article on Docker for the installation steps; a minimal sketch follows for convenience
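  • For reference, a minimal install sketch for Docker 18.09.6 on CentOS 7 using the Aliyun docker-ce mirror (an assumption here — follow the dedicated Docker article for the full procedure):
yum install -y yum-utils
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce-18.09.6 docker-ce-cli-18.09.6 containerd.io
systemctl enable docker && systemctl start docker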

Installing kubeadm, kubectl, and kubelet

  • Before installing these components, set up the yum repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
  • Then install the three components kubeadm, kubectl, and kubelet (a version-pinned variant follows the command below)
yum install -y kubelet kubeadm kubectl
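  • yum installs the latest release by default; to match the 1.14.2-0 versions listed earlier, the packages can be pinned explicitly (sketch):
yum install -y kubelet-1.14.2-0 kubeadm-1.14.2-0 kubectl-1.14.2-0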
  • kubeadm, kubectl, and kubelet have been downloaded and installed successfully


    (screenshot: Kubernetes components installed)
  • Enable and start the newly installed kubelet

systemctl enable kubelet && systemctl start kubelet
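  • Optionally confirm that the expected 1.14.2 binaries are on the PATH:
kubeadm version -o short
kubectl version --client --short
kubelet --version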

Configuring the k8s Master Node

Preparing the images

  • 國內(nèi)環(huán)境由于網(wǎng)絡不通暢問題,我們只能手動下載好鏡像,再打上對應tag來制作本地鏡像
  • Master 節(jié)點獲取鏡像文件
docker pull mirrorgooglecontainers/kube-apiserver:v1.14.2
docker pull mirrorgooglecontainers/kube-controller-manager:v1.14.2
docker pull mirrorgooglecontainers/kube-scheduler:v1.14.2
docker pull mirrorgooglecontainers/kube-proxy:v1.14.2
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd:3.3.10
docker pull coredns/coredns:1.3.1
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64

  • Tag the pulled images with the names kubeadm expects
docker tag mirrorgooglecontainers/kube-apiserver:v1.14.2 k8s.gcr.io/kube-apiserver:v1.14.2
docker tag mirrorgooglecontainers/kube-controller-manager:v1.14.2 k8s.gcr.io/kube-controller-manager:v1.14.2
docker tag mirrorgooglecontainers/kube-scheduler:v1.14.2 k8s.gcr.io/kube-scheduler:v1.14.2
docker tag mirrorgooglecontainers/kube-proxy:v1.14.2 k8s.gcr.io/kube-proxy:v1.14.2
docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag mirrorgooglecontainers/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
docker tag coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
  • Remove the mirror tags, keeping only the retagged images (a combined pull/tag/cleanup script is sketched after this block)
docker rmi mirrorgooglecontainers/kube-apiserver:v1.14.2           
docker rmi mirrorgooglecontainers/kube-controller-manager:v1.14.2  
docker rmi mirrorgooglecontainers/kube-scheduler:v1.14.2          
docker rmi mirrorgooglecontainers/kube-proxy:v1.14.2               
docker rmi mirrorgooglecontainers/pause:3.1                        
docker rmi mirrorgooglecontainers/etcd:3.3.10                      
docker rmi coredns/coredns:1.3.1
docker rmi registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64

# Note: keep the k8s.gcr.io and quay.io/coreos/flannel tags — kubeadm and flannel
# need these images locally. Only run the following if you want to remove
# everything and start over:
docker rmi k8s.gcr.io/kube-apiserver:v1.14.2
docker rmi k8s.gcr.io/kube-controller-manager:v1.14.2
docker rmi k8s.gcr.io/kube-scheduler:v1.14.2
docker rmi k8s.gcr.io/kube-proxy:v1.14.2
docker rmi k8s.gcr.io/pause:3.1
docker rmi k8s.gcr.io/etcd:3.3.10
docker rmi k8s.gcr.io/coredns:1.3.1
docker rmi quay.io/coreos/flannel:v0.10.0-amd64
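  • The pull/tag/cleanup steps above can also be condensed into a small script; a sketch assuming the same image list:
#!/bin/bash
# Pull each image from the mirror, retag it to k8s.gcr.io, then drop the mirror tag.
KUBE_VERSION=v1.14.2
images=(
  kube-apiserver:${KUBE_VERSION}
  kube-controller-manager:${KUBE_VERSION}
  kube-scheduler:${KUBE_VERSION}
  kube-proxy:${KUBE_VERSION}
  pause:3.1
  etcd:3.3.10
)
for img in "${images[@]}"; do
  docker pull "mirrorgooglecontainers/${img}"
  docker tag  "mirrorgooglecontainers/${img}" "k8s.gcr.io/${img}"
  docker rmi  "mirrorgooglecontainers/${img}"
done

# coredns and flannel come from different mirrors
docker pull coredns/coredns:1.3.1
docker tag  coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
docker rmi  coredns/coredns:1.3.1
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64
docker tag  registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
docker rmi  registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64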
(screenshot: images after retagging)

Initializing Kubernetes

  • Run the following command to initialize Kubernetes
# --kubernetes-version=v1.14.2     specifies the k8s version to install
# --apiserver-advertise-address    the address on k8s-master that the API server advertises for cluster communication
# --pod-network-cidr               the Pod network CIDR; we use the flannel scheme below (https://github.com/coreos/flannel/blob/master/Documentation/kubernetes.md)
kubeadm init --kubernetes-version=v1.14.2 --apiserver-advertise-address 172.31.76.18 --pod-network-cidr=10.244.0.0/16
  • The initialization output looks like this
[init] Using Kubernetes version: v1.14.2
[preflight] Running pre-flight checks
    [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [172.31.76.18 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [172.31.76.18 127.0.0.1 ::1]
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.31.76.18]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 16.501690 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: y6awgp.6bvxt8l3rie2du5s
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.31.76.18:6443 --token y6awgp.6bvxt8l3rie2du5s \
    --discovery-token-ca-cert-hash sha256:9989fe3160fe36c428ab2e05866f8d04a91704c5973dcf8025721c9e5e1b230c 
  • Note: in the initialization output above, pay special attention to the last command — it is exactly what the worker nodes will use to join the cluster later
kubeadm join 172.31.76.18:6443 --token y6awgp.6bvxt8l3rie2du5s \
    --discovery-token-ca-cert-hash sha256:9989fe3160fe36c428ab2e05866f8d04a91704c5973dcf8025721c9e5e1b230c 

Configuring kubectl

# As root, export the kubeconfig environment variable
export KUBECONFIG=/etc/kubernetes/admin.conf

# Restart kubelet
systemctl restart kubelet

Installing the Pod Network (flannel)

sysctl net.bridge.bridge-nf-call-iptables=1
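  • flannel requires bridged traffic to traverse iptables, which is what the sysctl above enables for the running kernel. A sketch for making it persistent across reboots:
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system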
  • Then apply the kube-flannel.yaml manifest on the k8s-master node. You can also download kube-flannel.yaml as described in the official documentation; the full file content is listed below
kubectl apply -f kube-flannel.yaml


(screenshot: Pod network installed)
  • Contents of kube-flannel.yaml
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.10.0-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.10.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: arm64
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.10.0-arm64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.10.0-arm64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: arm
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.10.0-arm
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.10.0-arm
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-ppc64le
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: ppc64le
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.10.0-ppc64le
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.10.0-ppc64le
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-s390x
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: s390x
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.10.0-s390x
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.10.0-s390x
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
  • Check whether the Kubernetes Pods are all running normally
kubectl get pods --all-namespaces -o wide
(screenshot: Pods running normally)
  • 查看Kubernetes主節(jié)點是否已經(jīng)就緒
kubectl get nodes
Kubernetes主節(jié)點已經(jīng)就緒.png
  • Finally, don't forget to run the following (otherwise kubectl fails with error 1 described later)
mkdir -p $HOME/.kube

cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

chown $(id -u):$(id -g) $HOME/.kube/config

k8s從節(jié)點(node)加入集群

  • In the preparation steps we already installed kubelet, kubeadm, and kubectl on every node, and the master setup section above showed the join command (scroll back up if you missed it)
  • Pull and tag the images in Docker on each worker node, following the same approach as for the master (a minimal sketch follows)
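  • A minimal sketch of the images a worker node typically needs with the default kubeadm layout (an assumption — pulling the full list from the master section also works):
docker pull mirrorgooglecontainers/kube-proxy:v1.14.2
docker tag  mirrorgooglecontainers/kube-proxy:v1.14.2 k8s.gcr.io/kube-proxy:v1.14.2
docker pull mirrorgooglecontainers/pause:3.1
docker tag  mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
docker pull coredns/coredns:1.3.1
docker tag  coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64
docker tag  registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64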

Joining the cluster

# Basic command template: kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>

kubeadm join 172.31.76.18:6443 --token pamsj1.4d5funpottlqofs1 --discovery-token-ca-cert-hash sha256:1152aa95b6a45e88211686b44a3080d643fa95b94ebf98c5041a7f88063f2f4e
node節(jié)點加入集群.png
  • Repeat the same operation on the other worker node

  • View the worker nodes that have just joined the cluster

node節(jié)點加入集群成功.png
  • At this point the cluster setup is complete.

子節(jié)點加入集群注意事項

  • Make sure the Docker service is running on the worker node before joining
  • Check whether the token has expired (tokens expire after 24 hours by default)
  • Keep the image versions on the worker nodes consistent with the master
  • As part of worker preparation, make sure the flannel network image is available
  • If a worker fails to join the cluster, run kubeadm reset on it to clear the automatically generated configuration before attempting to join again

Tearing Down the Cluster

  • 刪除子節(jié)點
# 查詢k8s集群所以節(jié)點
kubectl get nodes

# 刪除子節(jié)點 打掘,<node name> 代表子節(jié)點名稱
kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>
  • 重置節(jié)點
# 不論主節(jié)點 還是 子節(jié)點該命令都能重置節(jié)點
kubeadm reset
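  • kubeadm reset does not remove CNI configuration or iptables rules; a cleanup sketch that is commonly run afterwards (an assumption — adjust to your environment):
rm -rf /etc/cni/net.d $HOME/.kube/config
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X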

Installing Dashboard, the k8s Web UI

Getting the Dashboard image

  • Official project: kubernetes/dashboard on GitHub
  • The latest official version at the time of writing is v1.10.1. As with the earlier images, we first pull from a domestic mirror and then retag it (note: every node needs to pull this image)
# Pull from the domestic mirror
docker pull mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.1

# Retag the image
docker tag mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.1 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1

# 刪除國內(nèi)拉取的鏡像
docker rmi mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.1

Installing the Dashboard

# Install as per the official documentation
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml

# Or download kubernetes-dashboard.yaml and install from the local file
kubectl create -f kubernetes-dashboard.yaml
(screenshot: Dashboard installed successfully)

Accessing the Dashboard

Accessing the Dashboard through the API Server

  • First, check the address and port the cluster is running on
# Run the following command
kubectl cluster-info

# A healthy cluster returns something like this
Kubernetes master is running at https://172.31.76.18:6443
KubeDNS is running at https://172.31.76.18:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

  • Now we can try to access the Dashboard
# Access it with a URL of the following form
https://<master-ip>:<apiserver-port>/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

https://172.31.76.18:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
  • 根據(jù)如上格式訪問Dashboard會得到拒絕訪問的信息穿香,錯誤的原因是k8s基于安全性的考慮,瀏覽器必須要安裝一個根證書绎速,防止中間人攻擊(官方描述)皮获,接下來我們來生成證書再操作。
{
    "kind": "Status",
    "apiVersion": "v1",
    "metadata": {},
    "status": "Failure",
    "message": "services \"https:kubernetes-dashboard:\" is forbidden: User \"system:anonymous\" cannot get resource \"services/proxy\" in API group \"\" in the namespace \"kube-system\"",
    "reason": "Forbidden",
    "details": {
        "name": "https:kubernetes-dashboard:",
        "kind": "services"
    },
    "code": 403
}

Generating the certificate (on the master node)

  • Generate the crt file
grep 'client-certificate-data' /etc/kubernetes/admin.conf | head -n 1 | awk '{print $2}' | base64 -d >> kubecfg.crt
  • Generate the key file
grep 'client-key-data' /etc/kubernetes/admin.conf | head -n 1 | awk '{print $2}' | base64 -d >> kubecfg.key
  • Generate the p12 certificate file; you will be prompted to set an export password
openssl pkcs12 -export -clcerts -inkey kubecfg.key -in kubecfg.crt -out kubecfg.p12 -name "kubernetes-client"
  • Import the generated p12 certificate into Chrome. The import asks for a password — the one you set when generating the p12 file. Restart Chrome after the import succeeds (the import procedure itself is not covered here)
  • Visiting the address below again prompts you to choose the newly imported certificate, and the sign-in page shown in the figure appears
https://172.31.76.18:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
(screenshot: Dashboard asks for authentication)
  • We use token authentication here. Before signing in with a token, create a dashboard user (service account)
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
EOF
  • 創(chuàng)建ClusterRoleBinding
cat <<EOF | kubectl create -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
EOF
  • Then retrieve the user's token
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
獲取創(chuàng)建的dashboard用戶token.png
  • Enter the token on the sign-in page, and the Dashboard setup is complete
(screenshot: Dashboard installation complete)

Removing the deployed Dashboard

  • If the Dashboard deployment went wrong, delete it with the following command and then redeploy
kubectl delete -f kubernetes-dashboard.yaml

Errors Encountered During Setup

Error 1: kubectl get nodes fails

Error description

  • The connection to the server localhost:8080 was refused - did you specify the right host or port?
  • node 節(jié)點使用kubectl get nodes命令不出意外也會出現(xiàn)上述錯誤描述,則我們應該把master 節(jié)點的/etc/kubernetes/admin.conf文件復制到node節(jié)點/etc/kubernetes/目錄下再執(zhí)行下面命令即可较坛。
  • 解決:參考地址
mkdir -p $HOME/.kube

cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

chown $(id -u):$(id -g) $HOME/.kube/config
  • This fix was in fact already suggested in the successful kubeadm init output on the master; scroll back to the init log above if in doubt.

錯誤2: 子節(jié)點加入Kubernetes集群出現(xiàn)錯誤

Error description

  • FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
    error execution phase preflight: unable to fetch the kubeadm-config ConfigMap: failed to get config map: Unauthorized
  • Solution:
  • The main cause of this error is an expired token (tokens are valid for 24h by default); simply create a new token on the master with kubeadm (an alternative one-liner follows the commands below)
# Create a new token
kubeadm token create
# Get the sha256 hash of the CA certificate
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
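  • Alternatively (a sketch, assuming kubeadm ≥ 1.9), print a ready-to-use join command in one step:
kubeadm token create --print-join-command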
k8s master節(jié)點創(chuàng)建新的token.png

Error 3: kubeadm init or join fails

Error description

[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
  • Solution: a common cause is that swap is still (or again) enabled, so the kubelet refuses to start; turn swap off and keep it off:
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab
  • Reboot the machine. If Docker is not configured to start on boot, remember to start the Docker service after the reboot
# Enable the Docker service at boot
systemctl enable docker.service
# Start Docker
systemctl start docker
  • Reboot the server
# Reboot command
reboot

Error 4: DNS pods on a newly joined worker node are in CrashLoopBackOff

Error description

node節(jié)點DNS CrashLoopBackOff.png
  • Solution:

Check the logs of the failing pod

kubectl --namespace kube-system logs kube-flannel-ds-amd64-g997s

Error log: Error from server: Get https://172.31.76.17:10250/containerLogs/kube-system/kube-flannel-ds-amd64-g997s/kube-flannel: dial tcp 172.31.76.17:10250: connect: no route to host
  • The log shows a default-gateway problem (no route to host); add a default gateway to the node's network interface. The exact steps depend on your server's network setup.

Error 5: worker node fails a preflight check when joining the cluster

Error description (bridge netfilter not enabled)

error execution phase preflight: [preflight] Some fatal errors occurred:
    [ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1

# 執(zhí)行以下命令
echo "1" >/proc/sys/net/bridge/bridge-nf-call-iptables

#再次執(zhí)行 kubeadm join .......命令

If you find any mistakes in this article, please point them out so we can all learn and improve together. If the article helped you, please give it a like and a follow, and feel free to visit my personal blog.
