II. Kubernetes Environment Setup

1. Kubernetes Installation Methods

There are many ways to install Kubernetes: some are extremely complex, some moderately so, and some are relatively simple, though the simple ones tend to be paid, enterprise-grade solutions. Here are a few examples of installing Kubernetes.

This chapter demonstrates only two installation methods: Minikube and Kubeadm.

2鳄虱、安裝

2.1弟塞、Minikube 搭建方式

  • Install kubectl

    • Download by following the official docs

    • Direct download

    • kubectl & minikube Baidu Pan download, extraction code: pap8

    • Add kubectl.exe to the PATH environment variable so the kubectl command can be used directly from a cmd window

    • Verify the configuration

      • kubectl version

        C:\Users\32731>kubectl version
        Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T19:18:23Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"windows/amd64"}
        
        # k8s is not installed yet, so the connection here fails
        Unable to connect to the server: dial tcp [::1]:8080: connectex: No connection could be made because the target machine actively refused it.
        
  • Install minikube

  • Add minikube.exe to the PATH environment variable so the minikube command can be used directly from a cmd window

  • Verify the configuration

    • minikube version

      C:\Users\32731>minikube version
      minikube version: v1.5.2
      commit: 792dbf92a1de583fcee76f8791cff12e0c9440ad-dirty
      
  • Install K8S

    • Since this step requires getting around the network block (a VPN), the demo does not go further

      # Specify the VM driver; under the hood minikube creates a virtual machine
      C:\Users\32731>minikube start --vm-driver=virtualbox
      ! minikube v1.5.2 on Microsoft Windows 10 Pro 10.0.17763 Build 17763
      * Downloading VM boot image...
      
  • Common commands

    # Create the K8S cluster
    minikube start
    # Delete the K8S cluster
    minikube delete
    # SSH into the K8S VM
    minikube ssh
    # Check the status
    minikube status
    # Open the dashboard
    minikube dashboard
    

其他系統(tǒng)下使用Minikube的操作這里就不演示了,可以去官網(wǎng)上看

2.2倍踪、 Kubeadm 安裝方式(無需科學(xué)上網(wǎng))

官網(wǎng)安裝 Kubeadm 步驟

2.2.1系宫、準(zhǔn)備環(huán)境
  • 版本統(tǒng)一

    • 這里采用舊版本,新版本據(jù)說有問題建车,我沒去試過扩借,就按下面的版本搭建
    • Docker 18.09.0
    • kubeadm-1.14.0-0
    • kubelet-1.14.0-0
    • kubectl-1.14.0-0
      • k8s.gcr.io/kube-apiserver:v1.14.0
      • k8s.gcr.io/kube-controller-manager:v1.14.0
      • k8s.gcr.io/kube-scheduler:v1.14.0
      • k8s.gcr.io/kube-proxy:v1.14.0
      • k8s.gcr.io/pause:3.1
      • k8s.gcr.io/etcd:3.3.10
      • k8s.gcr.io/coredns:1.3.1
    • calico:v3.9
  • 系統(tǒng)

    • win10
  • 虛擬化技術(shù)

    • Virtual Box
    • 采用vagrant + virtual box配合使用搭建centos7系統(tǒng)
  • 配置要求

    • 每臺(tái)機(jī)器 2 GB 或更多的 RAM (如果少于這個(gè)數(shù)字將會(huì)影響您應(yīng)用的運(yùn)行內(nèi)存)
    • 2核 CPU 或更多
    • 集群中的所有機(jī)器的網(wǎng)絡(luò)彼此均能相互連接(公網(wǎng)和內(nèi)網(wǎng)都可以)
  • vagrant安裝方式

    • 可以參考之前寫的 一、Docker環(huán)境準(zhǔn)備

    • 這里僅提供一次安裝多個(gè)虛擬機(jī)的Vagrantfile

      boxes = [
          {
              # VM name
              :name => "master-kubeadm-k8s",
              # IP address; must be on the same subnet as the Windows 10 host's LAN address
              :eth1 => "192.168.50.111",
              # Allocate 2 GB of RAM
              :mem => "2048",
              # Allocate 2 CPU cores
              :cpu => "2",
              :sshport => 22230
          },
          {
              :name => "worker01-kubeadm-k8s",
              :eth1 => "192.168.50.112",
              :mem => "2048",
              :cpu => "2",
              :sshport => 22231
          },
          {
              :name => "worker02-kubeadm-k8s",
              :eth1 => "192.168.50.113",
              :mem => "2048",
              :cpu => "2",
              :sshport => 22232
          }
      ]
      Vagrant.configure(2) do |config|
          config.vm.box = "centos/7"
          boxes.each do |opts|
              config.vm.define opts[:name] do |config|
                  config.vm.hostname = opts[:name]
                  config.vm.network :public_network, ip: opts[:eth1]
                  config.vm.network "forwarded_port", guest: 22, host: 2222, id: "ssh", disabled: "true"
                  config.vm.network "forwarded_port", guest: 22, host: opts[:sshport]
                  config.vm.provider "vmware_fusion" do |v|
                      v.vmx["memsize"] = opts[:mem]
                      v.vmx["numvcpus"] = opts[:cpu]
                  end
                  config.vm.provider "virtualbox" do |v|
                      v.customize ["modifyvm", :id, "--memory", opts[:mem]]
                      v.customize ["modifyvm", :id, "--cpus", opts[:cpu]]
                      v.customize ["modifyvm", :id, "--name", opts[:name]]
                  end
              end
          end
      end
      
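    • With the Vagrantfile in place, the VMs are managed through the standard Vagrant workflow; a quick sketch:

      # from the directory containing the Vagrantfile: create and boot all three VMs
      vagrant up
      # check their state
      vagrant status
      # log in to one of them by name
      vagrant ssh master-kubeadm-k8s
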
  • Result

    All three VMs are created and running. (screenshot omitted)
2.2.2癞志、安裝依賴往枷,更改配置
  • 更新 yum 源,3臺(tái)虛擬機(jī)都要更新

    yum -y update
    yum install -y conntrack ipvsadm ipset jq sysstat curl iptables libseccomp
    
  • Install Docker

    # 1. Remove any previously installed docker
    sudo yum remove docker docker-latest docker-latest-logrotate \
    docker-logrotate docker-engine docker-client docker-client-latest docker-common
    
    # 2. Install the required dependencies
    sudo yum install -y yum-utils device-mapper-persistent-data lvm2
    
    # 3. Configure the docker repository
    sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
    
    # 4. Configure the Aliyun registry mirror; copy this URL from your own Aliyun image-registry console, it may differ
    sudo mkdir -p /etc/docker
    sudo tee /etc/docker/daemon.json <<-'EOF'
    {
      "registry-mirrors": ["https://rrpa5ijo.mirror.aliyuncs.com"]
    }
    EOF
    sudo systemctl daemon-reload
    
    # 5. Refresh the yum cache
    sudo yum makecache fast
    
    # 6. Install docker 18.09.0
    sudo yum install -y docker-ce-18.09.0 docker-ce-cli-18.09.0 containerd.io
    
    # 7. Start docker and enable it at boot
    sudo systemctl start docker && sudo systemctl enable docker
    
    # 8. Verify that docker installed successfully
    sudo docker run hello-world
    
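  • Optionally pin-check the installed version; a small sketch using docker's Go-template output:

    # should print 18.09.0
    docker version --format '{{.Server.Version}}'
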
  • Edit the hosts file and configure the hostnames

    # 1. Set the master's hostname
    [root@master-kubeadm-k8s ~]# sudo hostnamectl set-hostname master
    
    # 2. Set the hostnames of worker01/02
    [root@worker01-kubeadm-k8s ~]# sudo hostnamectl set-hostname worker01
    [root@worker02-kubeadm-k8s ~]# sudo hostnamectl set-hostname worker02
    
    # 3. Edit the hosts file on all 3 machines
    vi /etc/hosts
    
    192.168.50.111 master
    192.168.50.112 worker01
    192.168.50.113 worker02
    
    # To change the hostname permanently (requires a reboot)
    sudo vi /etc/sysconfig/network
    # add this line (pick the value matching the machine)
    hostname=master/worker01/worker02
    
    # 4. Ping from each machine; it is enough that every host can reach the others
    [root@master-kubeadm-k8s ~]# ping worker01
    PING worker01 (192.168.50.112) 56(84) bytes of data.
    64 bytes from worker01 (192.168.50.112): icmp_seq=1 ttl=64 time=0.840 ms
    64 bytes from worker01 (192.168.50.112): icmp_seq=2 ttl=64 time=0.792 ms
    64 bytes from worker01 (192.168.50.112): icmp_seq=3 ttl=64 time=0.806 ms
    .....
    
  • 系統(tǒng)基礎(chǔ)前提配置

    # 1撵彻、關(guān)閉防火墻
    systemctl stop firewalld && systemctl disable firewalld
    
    # 2、關(guān)閉selinux
    setenforce 0
    sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
    
    # 3实牡、關(guān)閉swap
    swapoff -a
    sed -i '/swap/s/^\(.*\)$/#\1/g' /etc/fstab
    
    # 4陌僵、配置iptables的ACCEPT規(guī)則
    iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && iptables -P FORWARD ACCEPT
    
    # 5、設(shè)置系統(tǒng)參數(shù)
    cat <<EOF >  /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    EOF
    sysctl --system
    
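  • If sysctl reports that the net.bridge keys do not exist, the br_netfilter kernel module is probably not loaded; a quick sketch of the fix (standard CentOS 7 commands):

    # load the bridge netfilter module, then confirm the setting took effect
    modprobe br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables
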
2.2.3创坞、安裝 Kubeadm碗短、Kubelet 和 Kubectl
  • 配置 yum 源

    cat <<EOF > /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=0
    repo_gpgcheck=0
    gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
           http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    EOF
    
  • Install kubeadm, kubelet, and kubectl

    yum install -y kubeadm-1.14.0-0 kubelet-1.14.0-0 kubectl-1.14.0-0
    
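  • Optionally confirm that the pinned versions landed; a small sketch:

    # all three should report v1.14.0
    kubeadm version -o short
    kubelet --version
    kubectl version --client --short
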
  • Make docker and k8s use the same cgroup driver

    # docker
    vi /etc/docker/daemon.json
    # add the line below as the first entry inside the JSON object; don't forget the comma
    "exec-opts": ["native.cgroupdriver=systemd"],
    
    # restart docker; this must be executed
    systemctl restart docker
    
    # kubelet: if this prints "No such file or directory", that's fine; just continue
    sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    
    # enable and start kubelet; this must be executed
    systemctl enable kubelet && systemctl start kubelet
    
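  • Optionally verify that docker now reports the systemd driver; a small sketch:

    # should print: Cgroup Driver: systemd
    docker info 2>/dev/null | grep -i "cgroup driver"
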
2.2.4婉支、拉取 Kubeadm 必備的幾個(gè)鏡像
  • 查看 kubeadm 使用的鏡像

    [root@master-kubeadm-k8s ~]# kubeadm config images list
    ...
    
    # 這幾個(gè)就是運(yùn)行 Kubeadm 必備的幾個(gè)鏡像,但是都是國(guó)外鏡像,沒有科學(xué)上網(wǎng)不好直接拉取
    k8s.gcr.io/kube-apiserver:v1.14.0
    k8s.gcr.io/kube-controller-manager:v1.14.0
    k8s.gcr.io/kube-scheduler:v1.14.0
    k8s.gcr.io/kube-proxy:v1.14.0
    k8s.gcr.io/pause:3.1
    k8s.gcr.io/etcd:3.3.10
    k8s.gcr.io/coredns:1.3.1
    
  • 解決國(guó)外鏡像不能訪問的問題

    可以通過國(guó)內(nèi)鏡像倉(cāng)庫(kù)下載所需鏡像鸯隅,然后修改鏡像名稱

    • 創(chuàng)建 kubeadm.sh 腳本,用于拉取鏡像向挖、打tag蝌以、刪除原有鏡像

      • 創(chuàng)建 kubeadm.sh 文件
      #!/bin/bash
      set -e
      KUBE_VERSION=v1.14.0
      KUBE_PAUSE_VERSION=3.1
      ETCD_VERSION=3.3.10
      CORE_DNS_VERSION=1.3.1
      GCR_URL=k8s.gcr.io
      ALIYUN_URL=registry.cn-hangzhou.aliyuncs.com/google_containers
      images=(kube-proxy:${KUBE_VERSION}
      kube-scheduler:${KUBE_VERSION}
      kube-controller-manager:${KUBE_VERSION}
      kube-apiserver:${KUBE_VERSION}
      pause:${KUBE_PAUSE_VERSION}
      etcd:${ETCD_VERSION}
      coredns:${CORE_DNS_VERSION})
      
      for imageName in ${images[@]} ; do
          docker pull $ALIYUN_URL/$imageName
          docker tag  $ALIYUN_URL/$imageName $GCR_URL/$imageName
          docker rmi $ALIYUN_URL/$imageName
      done
      
    • 運(yùn)行腳本和查看鏡像

      # 運(yùn)行腳本
      sh ./kubeadm.sh
      
      # 可以看到 Kubeadm 需要的鏡像都下載好了
      [root@master-kubeadm-k8s ~]# docker images
      REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
      k8s.gcr.io/kube-proxy                v1.14.0             5cd54e388aba        12 months ago       82.1MB
      k8s.gcr.io/kube-apiserver            v1.14.0             ecf910f40d6e        12 months ago       210MB
      k8s.gcr.io/kube-controller-manager   v1.14.0             b95b1efa0436        12 months ago       158MB
      k8s.gcr.io/kube-scheduler            v1.14.0             00638a24688b        12 months ago       81.6MB
      k8s.gcr.io/coredns                   1.3.1               eb516548c180        14 months ago       40.3MB
      hello-world                          latest              fce289e99eb9        14 months ago       1.84kB
      k8s.gcr.io/etcd                      3.3.10              2c4adeb21b4f        15 months ago       258MB
      k8s.gcr.io/pause                     3.1                 da86e6ba6ca1        2 years ago         742kB
      
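    • The worker nodes will need the kube-proxy and pause images too once they join. The simplest option is to run the same kubeadm.sh script on each worker; alternatively, as a sketch (assuming SSH access between the VMs), the images can be exported and copied over:

      # on the master: bundle the images the workers need
      docker save k8s.gcr.io/kube-proxy:v1.14.0 k8s.gcr.io/pause:3.1 -o worker-images.tar
      # copy the bundle to each worker (hostnames as configured in /etc/hosts)
      scp worker-images.tar root@worker01:/root/
      scp worker-images.tar root@worker02:/root/
      # on each worker: load the bundle
      docker load -i worker-images.tar
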
2.2.5 Initializing the Master with kubeadm init
  • Initialize the master node

    # To re-initialize the cluster state, run kubeadm reset first, then do the init again
    # Specify the Kubernetes version, the master node's IP, and the pod network CIDR (the CIDR may be omitted)
    kubeadm init --kubernetes-version=1.14.0 --apiserver-advertise-address=192.168.50.111 --pod-network-cidr=10.244.0.0/16
    
    # Output printed once init completes
    Your Kubernetes control-plane has initialized successfully!
    
    To start using your cluster, you need to run the following as a regular user:
    
    # ================ Continue on the master node with the commands below ========================
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    # ================================================================
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    Then you can join any number of worker nodes by running the following on each as root:
    
    # Save this kubeadm join command; the worker nodes will use it later to join the cluster
    kubeadm join 192.168.50.111:6443 --token se5kqz.roc626v5x1jzv2mp \
        --discovery-token-ca-cert-hash sha256:de8685c390d0f2addsdf86468fea9e02622705fb5eed84daa5b5ca667df29dff
    
    • After running the 3 commands above, check the pods to verify

      [root@master-kubeadm-k8s ~]# kubectl get pods -n kube-system
      NAME                                         READY   STATUS    RESTARTS   AGE
      coredns-fb8b8dccf-gwqj9                      0/1     Pending   0          4m55s
      coredns-fb8b8dccf-lj92j                      0/1     Pending   0          4m55s
      etcd-master-kubeadm-k8s                      1/1     Running   0          4m13s
      kube-apiserver-master-kubeadm-k8s            1/1     Running   0          4m2s
      kube-controller-manager-master-kubeadm-k8s   1/1     Running   0          3m59s
      kube-proxy-hhnmc                             1/1     Running   0          4m55s
      kube-scheduler-master-kubeadm-k8s            1/1     Running   0          4m24s
      

      Note: coredns has not started yet because the network plugin still needs to be installed

    • Health check

      # the apiserver should answer with: ok
      [root@master-kubeadm-k8s ~]# curl -k https://localhost:6443/healthz
      
    • The kubeadm init flow

      Nothing here needs to be executed; this section only explains what kubeadm init does

      # 1. Run a series of preflight checks to determine whether this machine can run kubernetes
      # 2. Generate the certificates kubernetes needs to serve requests, in the corresponding directory
      ls /etc/kubernetes/pki/*
      
      # 3. Generate the config files the other components need to access the kube-apiserver
      ls /etc/kubernetes/
          
          # admin.conf 
          # controller-manager.conf 
          # kubelet.conf 
          # scheduler.conf
          
      # 4. Generate the Pod manifests for the master components.
      ls /etc/kubernetes/manifests/*.yaml
      
          # kube-apiserver.yaml 
          # kube-controller-manager.yaml
          # kube-scheduler.yaml
          
      # 5. Generate the etcd Pod YAML file.
      ls /etc/kubernetes/manifests/*.yaml
      
          # kube-apiserver.yaml 
          # kube-controller-manager.yaml
          # kube-scheduler.yaml
          # etcd.yaml
          
      # 6. As soon as these YAML files appear in /etc/kubernetes/manifests/, the directory kubelet watches, kubelet automatically creates the pods they define, i.e. the master component containers. Once those containers start, kubeadm polls localhost:6443/healthz, the master components' health-check URL, and waits for them to come fully up
      
      # 7. Generate a bootstrap token for the cluster
      
      # 8. Save important master-node data such as ca.crt into etcd via a ConfigMap, for later use when deploying worker nodes
      
      # 9. Finally, install the default add-ons; kubernetes requires the kube-proxy and DNS add-ons
      
  • Deploy the calico network plugin

    # Run these on the master node as well
    
    # With a fast enough connection calico can be installed directly with no separate image pulls; the steps are only broken out here for clarity
    # First fetch calico's yml manifest manually to see which images it needs
    [root@master-kubeadm-k8s ~]# curl https://docs.projectcalico.org/v3.9/manifests/calico.yaml | grep image
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
      0 20674    0     0    0     0      0      0 --:--:--  0:00:05 --:--:--     0 
    100 20674  100 20674    0     0   3216      0  0:00:06  0:00:06 --:--:--  4955
    
            # the versions change; pull whatever the manifest actually lists
              image: calico/cni:v3.9.5
              image: calico/pod2daemon-flexvol:v3.9.5
              image: calico/node:v3.9.5
              image: calico/kube-controllers:v3.9.5
    
    # pull the images calico needs; this may be slow
    [root@master-kubeadm-k8s ~]# docker pull calico/cni:v3.9.5
    [root@master-kubeadm-k8s ~]# docker pull calico/pod2daemon-flexvol:v3.9.5
    [root@master-kubeadm-k8s ~]# docker pull calico/node:v3.9.5
    [root@master-kubeadm-k8s ~]# docker pull calico/kube-controllers:v3.9.5
    
    # install calico
    [root@master-kubeadm-k8s ~]# kubectl apply -f https://docs.projectcalico.org/v3.9/manifests/calico.yaml
    
    # -w watches status changes of all Pods
    [root@master-kubeadm-k8s ~]# kubectl get pods --all-namespaces -w
    

    不動(dòng)的話取消重新執(zhí)行,當(dāng)所有 pod 的狀態(tài)都是 Running 表示完成

2.2.6秆剪、worker節(jié)點(diǎn)加入集群
  • kube join

    復(fù)制之前保存的 初始化master節(jié)點(diǎn)時(shí)最后打印的 Kubeadm Join 信息到worker節(jié)點(diǎn)執(zhí)行

    # worker01 node
    [root@worker01-kubeadm-k8s ~]# kubeadm join 192.168.50.111:6443 --token se5kqz.roc626v5x1jzv2mp \
    >     --discovery-token-ca-cert-hash sha256:de8685c390d0f2addsdf86468fea9e02622705fb5eed84daa5b5ca667df29dff
    [preflight] Running pre-flight checks
    [preflight] Reading configuration from the cluster...
    [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
    [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Activating the kubelet service
    [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
    
    This node has joined the cluster:
    * Certificate signing request was sent to apiserver and a response was received.
    * The Kubelet was informed of the new secure connection details.
    
    Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
    
    # worker02 node
    [root@worker02-kubeadm-k8s ~]# kubeadm join 192.168.50.111:6443 --token se5kqz.roc626v5x1jzv2mp \
    >     --discovery-token-ca-cert-hash sha256:de8685c390d0f2addsdf86468fea9e02622705fb5eed84daa5b5ca667df29dff
    [preflight] Running pre-flight checks
    [preflight] Reading configuration from the cluster...
    [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
    [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Activating the kubelet service
    [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
    
    This node has joined the cluster:
    * Certificate signing request was sent to apiserver and a response was received.
    * The Kubelet was informed of the new secure connection details.
    
    Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
    
    
  • Check the cluster state

    # run on the Master node
    [root@master-kubeadm-k8s ~]# kubectl get nodes
    NAME                   STATUS     ROLES    AGE   VERSION
    master-kubeadm-k8s     Ready      master   37m   v1.14.0
    # these are still NotReady; just wait for them to finish
    worker01-kubeadm-k8s   NotReady   <none>   84s   v1.14.0
    worker02-kubeadm-k8s   NotReady   <none>   79s   v1.14.0
    
    # run it again after a while; once all nodes are Ready the cluster setup is complete!
    [root@master-kubeadm-k8s ~]# kubectl get nodes
    NAME                   STATUS   ROLES    AGE     VERSION
    master-kubeadm-k8s     Ready    master   40m     v1.14.0
    worker01-kubeadm-k8s   Ready    <none>   3m48s   v1.14.0
    worker02-kubeadm-k8s   Ready    <none>   3m43s   v1.14.0
    
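
If the join command was lost, or its token (valid for 24 hours by default) has expired, a fresh one can be generated on the master; a small sketch:

    # prints a complete kubeadm join command with a newly created token
    kubeadm token create --print-join-command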

2.3 A First Taste of Pods

  • Define the pod yml file

    • Create a working directory
    [root@master-kubeadm-k8s ~]# mkdir pod_nginx_rs
    [root@master-kubeadm-k8s ~]# cd pod_nginx_rs/
    
    • Write the yml file

      # write the yml file; both yml and yaml extensions are recognized
      cat > pod_nginx_rs.yaml <<EOF
      apiVersion: apps/v1
      kind: ReplicaSet
      metadata:
        name: nginx
        labels:
          tier: frontend
      spec:
        replicas: 3
        selector:
          matchLabels:
            tier: frontend
        template:
          metadata:
            name: nginx
            labels:
              tier: frontend
          spec:
            containers:
            - name: nginx
              image: nginx
              ports:
              - containerPort: 80
      EOF
      
    • 根據(jù)pod_nginx_rs.yml文件創(chuàng)建pod

      [root@master-kubeadm-k8s pod_nginx_rs]# kubectl apply -f pod_nginx_rs.yaml
      replicaset.apps/nginx created
      
    • View the Pods

      • kubectl get pods

        # 現(xiàn)在還沒有準(zhǔn)備好,等會(huì)可以再次執(zhí)行查看
        [root@master-kubeadm-k8s pod_nginx_rs]# kubectl get pods
        NAME          READY   STATUS              RESTARTS   AGE
        nginx-hdz6w   0/1     ContainerCreating   0          27s
        nginx-kbqxx   0/1     ContainerCreating   0          27s
        nginx-xtttc   0/1     ContainerCreating   0          27s
        
        # 已經(jīng)完成了
        [root@master-kubeadm-k8s pod_nginx_rs]# kubectl get pods
        NAME          READY   STATUS    RESTARTS   AGE
        nginx-hdz6w   1/1     Running   0          3m10s
        nginx-kbqxx   1/1     Running   0          3m10s
        nginx-xtttc   1/1     Running   0          3m10s
        
      • kubectl get pods -o wide

        # view pod details: worker01 runs two pods and worker02 runs one
        # note: these IPs are assigned by the network plugin; they are not the host IPs
        [root@master-kubeadm-k8s pod_nginx_rs]# kubectl get pods -o wide
        NAME          READY   STATUS    RESTARTS   AGE     IP               NODE                   NOMINATED NODE   READINESS GATES
        nginx-hdz6w   1/1     Running   0          3m26s   192.168.14.2     worker01-kubeadm-k8s   <none>           <none>
        nginx-kbqxx   1/1     Running   0          3m26s   192.168.221.65   worker02-kubeadm-k8s   <none>           <none>
        nginx-xtttc   1/1     Running   0          3m26s   192.168.14.1     worker01-kubeadm-k8s   <none>           <none>
        
        # worker01 has the 2 Nginx containers; the pause containers below don't count, for reasons explained in a later chapter
        [root@worker01-kubeadm-k8s ~]# docker ps | grep nginx
        acf671c4b9e5        nginx                  "nginx -g 'daemon of…"   3 minutes ago       Up 3 minutes 
        4109bd09f0a1        nginx                  "nginx -g 'daemon of…"   4 minutes ago       Up 4 minutes 
        3e5dcc552287        k8s.gcr.io/pause:3.1   "/pause"                 6 minutes ago       Up 6 minutes 
        9e0d36cb813c        k8s.gcr.io/pause:3.1   "/pause"                 6 minutes ago       Up 6 minutes
        
        # worker02 has only one Nginx
        [root@worker02-kubeadm-k8s ~]# docker ps | grep nginx
        c490e8d291d3        nginx                  "nginx -g 'daemon of…"   6 minutes ago       Up 6 minutes
        b5ab5b408063        k8s.gcr.io/pause:3.1   "/pause"                 8 minutes ago       Up 8 minutes
        
      • kubectl describe pod nginx

        # describe the pod in detail: creation events, spec contents, image pull info, and more
        [root@master-kubeadm-k8s pod_nginx_rs]# kubectl describe pod nginx
        Name:               nginx-hdz6w
        Namespace:          default
        Priority:           0
        PriorityClassName:  <none>
        Node:               worker01-kubeadm-k8s/10.0.2.15
        Start Time:         Tue, 24 Mar 2020 15:14:43 +0000
        Labels:             tier=frontend
        Annotations:        cni.projectcalico.org/podIP: 192.168.14.2/32
        Status:             Running
        IP:                 192.168.14.2
        Controlled By:      ReplicaSet/nginx
        Containers:
          nginx:
            Container ID:   docker://4109bd09f0a11c0de77f411258e2cd18cc7ea624ad733a2e9c16f6468aadd448
            Image:          nginx
            Image ID:       docker-pullable://nginx@sha256:2539d4344dd18e1df02be842ffc435f8e1f699cfc55516e2cf2cb16b7a9aea0b
            Port:           80/TCP
            Host Port:      0/TCP
            State:          Running
              Started:      Tue, 24 Mar 2020 15:16:21 +0000
            Ready:          True
            Restart Count:  0
            Environment:    <none>
            Mounts:
              /var/run/secrets/kubernetes.io/serviceaccount from default-token-xggf5 (ro)
        Conditions:
          Type              Status
          Initialized       True
          Ready             True
          ContainersReady   True
          PodScheduled      True
        Volumes:
          default-token-xggf5:
            Type:        Secret (a volume populated by a Secret)
            SecretName:  default-token-xggf5
            Optional:    false
        QoS Class:       BestEffort
        Node-Selectors:  <none>
        Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                         node.kubernetes.io/unreachable:NoExecute for 300s
        Events:
          Type    Reason     Age    From                           Message
          ----    ------     ----   ----                           -------
          Normal  Scheduled  3m56s  default-scheduler              Successfully assigned default/nginx-hdz6w to worker01-kubeadm-k8s
          Normal  Pulling    3m52s  kubelet, worker01-kubeadm-k8s  Pulling image "nginx"
          Normal  Pulled     2m20s  kubelet, worker01-kubeadm-k8s  Successfully pulled image "nginx"
          Normal  Created    2m18s  kubelet, worker01-kubeadm-k8s  Created container nginx
          Normal  Started    2m18s  kubelet, worker01-kubeadm-k8s  Started container nginx
        
    • pod 擴(kuò)容

      # 將 nginx 擴(kuò)容為 5 個(gè) pod
      [root@master-kubeadm-k8s pod_nginx_rs]# kubectl scale rs nginx --replicas=5
      replicaset.extensions/nginx scaled
      
      # check the pods; the 2 new ones are being created
      [root@master-kubeadm-k8s pod_nginx_rs]# kubectl get pods -o wide
      NAME          READY   STATUS              RESTARTS   AGE   IP               NODE                   NOMINATED NODE   READINESS GATES
      nginx-7xf8m   0/1     ContainerCreating   0          5s    <none>           worker01-kubeadm-k8s   <none>           <none>
      nginx-hdz6w   1/1     Running             0          14m   192.168.14.2     worker01-kubeadm-k8s   <none>           <none>
      nginx-kbqxx   1/1     Running             0          14m   192.168.221.65   worker02-kubeadm-k8s   <none>           <none>
      nginx-qw2dh   0/1     ContainerCreating   0          5s    <none>           worker02-kubeadm-k8s   <none>           <none>
      nginx-xtttc   1/1     Running             0          14m   192.168.14.1     worker01-kubeadm-k8s   <none>           <none>
      
    • 測(cè)試

      [root@master-kubeadm-k8s pod_nginx_rs]# ping 192.168.14.2
      PING 192.168.14.2 (192.168.14.2) 56(84) bytes of data.
      64 bytes from 192.168.14.2: icmp_seq=1 ttl=63 time=1.64 ms
      64 bytes from 192.168.14.2: icmp_seq=2 ttl=63 time=1.03 ms
      ^C
      --- 192.168.14.2 ping statistics ---
      2 packets transmitted, 2 received, 0% packet loss, time 1002ms
      rtt min/avg/max/mdev = 1.033/1.337/1.641/0.304 ms
      
      # curl any pod's IP; the request succeeds
      [root@master-kubeadm-k8s pod_nginx_rs]# curl 192.168.14.2
      <!DOCTYPE html>
      <html>
      <head>
      <title>Welcome to nginx!</title>
      <style>
          body {
              width: 35em;
              margin: 0 auto;
              font-family: Tahoma, Verdana, Arial, sans-serif;
          }
      </style>
      </head>
      <body>
      <h1>Welcome to nginx!</h1>
      <p>If you see this page, the nginx web server is successfully installed and
      working. Further configuration is required.</p>
      
      <p>For online documentation and support please refer to
      <a href="http://nginx.org/">nginx.org</a>.<br/>
      Commercial support is available at
      <a href="http://nginx.com/">nginx.com</a>.</p>
      
      <p><em>Thank you for using nginx.</em></p>
      </body>
      </html>
      
    • Delete the pods

      [root@master-kubeadm-k8s pod_nginx_rs]# kubectl delete -f pod_nginx_rs.yaml
      replicaset.apps "nginx" deleted
      
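    • Verify the cleanup; a quick sketch:

      # both should come back empty (pods may briefly show Terminating)
      kubectl get rs
      kubectl get pods
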

The Kubernetes cluster setup is now fully complete. The kubeadm approach is still fairly involved, but if your company ever needs a K8S cluster, it is an entirely workable way to build one!
