Installing a Kubernetes Cluster with kubeadm and Docker on CentOS 7

Installation environment:
A typical setup would use three machines, but because of the limited capacity of the host used for this installation, only two virtual machines are used.

  • 3 CentOS 7 Servers
    192.168.59.192 k8s-master (2 cores)
    192.168.59.193 node01
    192.168.59.194 node02 (not used here; recommended if your host can support a third VM)
  • Root privileges

單臺(tái)CentOS 環(huán)境如下:

[root@centosk8s ~]# uname -a
Linux k8s-master 3.10.0-957.el7.x86_64 #1 SMP Thu Nov 8 23:39:32 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
[root@centosk8s ~]# grep 'physical id' /proc/cpuinfo | sort -u | wc -l
2

Installation steps:

  • Install Kubernetes
  • Initialize the Kubernetes cluster
  • Add cluster nodes
  • Test: create a Pod

1. Install Kubernetes

The steps in this section must be run on both the master and the node servers.

1.1. Configure Hosts

Add the host entries on both servers, then set each machine's hostname accordingly:

[root@centosk8s ~]# vim /etc/hosts
192.168.59.192 k8s-master
192.168.59.193 node01

[root@centosk8s ~]# hostnamectl set-hostname k8s-master

1.2. Disable SELinux

本安裝不會(huì)涉及到SELinux configuration for Docker, 故我們關(guān)閉它.
關(guān)閉防火墻景图。

systemctl stop firewalld && systemctl disable firewalld
[root@centosk8s ~]# setenforce 0
[root@centosk8s ~]# sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux

1.3. Enable br_netfilter Kernel Module

Kubernetes requires the br_netfilter kernel module. Enable it so that iptables can filter and port-forward packets traversing the bridge, and so that Kubernetes pods in the cluster can communicate with each other.

Run the following commands:

[root@centosk8s ~]# modprobe br_netfilter
[root@centosk8s ~]# echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
[root@centosk8s ~]# cat /proc/sys/net/bridge/bridge-nf-call-iptables
1
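
Note that the echo above does not survive a reboot. A minimal sketch to load the module and apply the sysctl setting at boot (the file names under /etc/modules-load.d and /etc/sysctl.d are a common convention, not mandated):

cat <<EOF > /etc/modules-load.d/k8s.conf
br_netfilter
EOF
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system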

1.4. Disable SWAP

The kubelet will not start while swap is enabled, so turn swap off and comment out the swap entry in /etc/fstab so the change survives a reboot:

[root@centosk8s ~]# swapoff -a
[root@centosk8s ~]# vim /etc/fstab
[root@centosk8s ~]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Mon Feb 18 15:40:16 2019
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root /                       xfs     defaults        0 0
UUID=d09e99de-a101-4c19-bea2-dfac60ae2e7d /boot                   xfs     defaults        0 0
#/dev/mapper/centos-swap swap                    swap    defaults        0 0
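
Instead of editing /etc/fstab by hand, the swap entry can be commented out with sed; a minimal sketch, assuming the entry looks like the one above:

swapoff -a
# prefix any uncommented line mentioning swap with '#'
sed -i '/^[^#].*swap/ s/^/#/' /etc/fstab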

1.5. Install Docker CE

Install the prerequisite packages, then add the Docker CE repository and install the package:

yum install -y yum-utils device-mapper-persistent-data lvm2

[root@centosk8s ~]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
Loaded plugins: fastestmirror, langpacks
adding repo from: https://download.docker.com/linux/centos/docker-ce.repo
grabbing file https://download.docker.com/linux/centos/docker-ce.repo to /etc/yum.repos.d/docker-ce.repo
repo saved to /etc/yum.repos.d/docker-ce.repo
yum install -y docker-ce
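
Optionally, since registry access is the recurring problem in this guide, you can also configure a Docker registry mirror. This is a sketch: the mirror URL below is only an example (the Docker China mirror as it existed at the time of writing), and you should substitute an accelerator address you actually have access to, e.g. one from your own Aliyun account:

mkdir -p /etc/docker
cat <<EOF > /etc/docker/daemon.json
{
  "registry-mirrors": ["https://registry.docker-cn.com"]
}
EOF
systemctl restart docker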

1.6. Install Kubernetes

Add the Kubernetes repository. (Skip this step in mainland China, where packages.cloud.google.com is not reachable.)

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
        https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

國內(nèi)使用以下方式替代:
修改CentOS-Base,

[root@centosk8s ~]# mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
[root@centosk8s ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
--2019-03-20 09:28:46--  http://mirrors.aliyun.com/repo/Centos-7.repo
Resolving mirrors.aliyun.com (mirrors.aliyun.com)... 180.163.155.8, 101.227.0.139, 101.227.0.133, ...
Connecting to mirrors.aliyun.com (mirrors.aliyun.com)|180.163.155.8|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2523 (2.5K) [application/octet-stream]
Saving to: ‘/etc/yum.repos.d/CentOS-Base.repo’

100%[=======================================================================================================================================================>] 2,523       --.-K/s   in 0.001s

2019-03-20 09:28:46 (2.65 MB/s) - ‘/etc/yum.repos.d/CentOS-Base.repo’ saved [2523/2523]

Then configure kubernetes.repo:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

執(zhí)行結(jié)果:

[root@centosk8s ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
> [kubernetes]
> name=Kubernetes
> baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
> enabled=1
> gpgcheck=1
> repo_gpgcheck=1
> gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
>       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
> EOF
[root@centosk8s ~]# yum makecache
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
base                                                                                                                                                                      | 3.6 kB  00:00:00
docker-ce-stable                                                                                                                                                          | 3.5 kB  00:00:00
extras                                                                                                                                                                    | 3.4 kB  00:00:00
kubernetes/signature                                                                                                                                                      |  454 B  00:00:00
Retrieving key from https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
Importing GPG key 0xA7317B0F:
 Userid     : "Google Cloud Packages Automatic Signing Key <gc-team@google.com>"
 Fingerprint: d0bc 747f d8ca f711 7500 d6fa 3746 c208 a731 7b0f
 From       : https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
Is this ok [y/N]: y
Retrieving key from https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
kubernetes/signature                                                                                                                                                      | 1.4 kB  00:00:41 !!!
updates                                                                                                                                                                   | 3.4 kB  00:00:00
(1/3): kubernetes/filelists                                                                                                                                               |  16 kB  00:00:02
(2/3): kubernetes/primary                                                                                                                                                 |  45 kB  00:00:01
(3/3): kubernetes/other                                                                                                                                                   |  30 kB  00:00:00
kubernetes                                                                                                                                                                               323/323
kubernetes                                                                                                                                                                               323/323
kubernetes                                                                                                                                                                               323/323
Metadata Cache Created

Install the Kubernetes packages kubeadm, kubelet, and kubectl:

yum install -y kubelet kubeadm kubectl

After the installation completes, reboot the server:

reboot

啟動(dòng) services, docker and kubelet.

systemctl start docker && systemctl enable docker
systemctl start kubelet && systemctl enable kubelet

1.7. Change the cgroup-driver

Make sure docker-ce and Kubernetes use the same cgroup driver.

First, check Docker's cgroup driver:

[root@k8sminion ~]# docker info | grep -i cgroup
Cgroup Driver: cgroupfs

You can see that Docker is using 'cgroupfs' as its cgroup driver.

Change the kubelet's cgroup-driver to 'cgroupfs' to match:

sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

Reload systemd and restart the kubelet service:

systemctl daemon-reload
systemctl restart kubelet
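
Note: on some kubeadm versions the 10-kubeadm.conf drop-in no longer contains a cgroup-driver flag, so the sed above may match nothing. A commonly recommended alternative is to switch Docker to the 'systemd' driver instead; a sketch (if you created /etc/docker/daemon.json earlier, merge this key into the existing file rather than overwriting it):

cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker
systemctl restart kubelet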

2. Initialize the Kubernetes Cluster

In this step, we initialize the Kubernetes master configuration.

Log in to 'k8s-master' and create the Kubernetes master with the commands below.

kubeadm is the cluster installation tool for Kubernetes and can stand up a cluster quickly. By default, kubeadm init pulls its Docker images from the k8s.gcr.io registry, which is not directly reachable from mainland China, so the images have to be relayed before they can be used. Here we relay them through docker.io/mirrorgooglecontainers (https://hub.docker.com/u/mirrorgooglecontainers).

List the images kubeadm will use:

[root@centosk8s ~]# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.13.4
k8s.gcr.io/kube-controller-manager:v1.13.4
k8s.gcr.io/kube-scheduler:v1.13.4
k8s.gcr.io/kube-proxy:v1.13.4
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.2.24
k8s.gcr.io/coredns:1.2.6

Pull the images and retag them:

[root@centosk8s ~]# docker pull docker.io/mirrorgooglecontainers/kube-apiserver:v1.13.4
v1.13.4: Pulling from mirrorgooglecontainers/kube-apiserver
3d80316e96d7: Pull complete
27ea5b112863: Pull complete
Digest: sha256:b205bb95ca597510be7785f65c15123830c2b0978af9abf1be60d67ec49573ff
Status: Downloaded newer image for mirrorgooglecontainers/kube-apiserver:v1.13.4
[root@centosk8s ~]# docker tag docker.io/mirrorgooglecontainers/kube-apiserver:v1.13.4 k8s.gcr.io/kube-apiserver:v1.13.4
[root@centosk8s ~]# docker pull docker.io/mirrorgooglecontainers/kube-controller-manager:v1.13.4
v1.13.4: Pulling from mirrorgooglecontainers/kube-controller-manager
3d80316e96d7: Already exists
cdd81260a26d: Pull complete
Digest: sha256:2d977f0ea449497deb35478ea59b8637bb478cdda42f6c01b09431b77d61af49
Status: Downloaded newer image for mirrorgooglecontainers/kube-controller-manager:v1.13.4
[root@centosk8s ~]# docker tag docker.io/mirrorgooglecontainers/kube-controller-manager:v1.13.4 k8s.gcr.io/kube-controller-manager:v1.13.4
[root@centosk8s ~]# docker pull docker.io/mirrorgooglecontainers/kube-scheduler:v1.13.4
v1.13.4: Pulling from mirrorgooglecontainers/kube-scheduler
3d80316e96d7: Already exists
0f2f7ad628c3: Pull complete
Digest: sha256:09bd0a85d002b2f2570b870f672c80c5a05a30e108b976efe279f0fc67a004b3
Status: Downloaded newer image for mirrorgooglecontainers/kube-scheduler:v1.13.4
[root@centosk8s ~]# docker tag docker.io/mirrorgooglecontainers/kube-scheduler:v1.13.4 k8s.gcr.io/kube-scheduler:v1.13.4
[root@centosk8s ~]# docker pull docker.io/mirrorgooglecontainers/kube-proxy:v1.13.4
v1.13.4: Pulling from mirrorgooglecontainers/kube-proxy
3d80316e96d7: Already exists
09263547f210: Pull complete
59c4a3c9440d: Pull complete
Digest: sha256:244282d1be8d814b8ea70f6e4890d0031b00a148f2d3d4953e062fb46da229c4
Status: Downloaded newer image for mirrorgooglecontainers/kube-proxy:v1.13.4
[root@centosk8s ~]# docker tag docker.io/mirrorgooglecontainers/kube-proxy:v1.13.4 k8s.gcr.io/kube-proxy:v1.13.4
[root@centosk8s ~]# docker pull docker.io/mirrorgooglecontainers/pause:3.1
3.1: Pulling from mirrorgooglecontainers/pause
67ddbfb20a22: Pull complete
Digest: sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610
Status: Downloaded newer image for mirrorgooglecontainers/pause:3.1
[root@centosk8s ~]# docker tag docker.io/mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
[root@centosk8s ~]# docker pull docker.io/mirrorgooglecontainers/etcd:3.2.24
3.2.24: Pulling from mirrorgooglecontainers/etcd
019658bedd5c: Pull complete
c4267897bb00: Pull complete
c5b72c728005: Pull complete
Digest: sha256:08b3afd3485fc29e78b28d05b434d2524f9bbfd8dec7464c396e2679541c91fc
Status: Downloaded newer image for mirrorgooglecontainers/etcd:3.2.24
[root@centosk8s ~]# docker tag docker.io/mirrorgooglecontainers/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24
[root@centosk8s ~]# docker pull docker.io/coredns/coredns:1.2.6
1.2.6: Pulling from coredns/coredns
2796eccf0de2: Pull complete
6ad5128a7d32: Pull complete
Digest: sha256:81936728011c0df9404cb70b95c17bbc8af922ec9a70d0561a5d01fefa6ffa51
Status: Downloaded newer image for coredns/coredns:1.2.6
[root@centosk8s ~]# docker tag docker.io/coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6
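
The pull-and-tag sequence above can also be scripted. A minimal sketch that relays the same image list reported by 'kubeadm config images list' for v1.13.4:

#!/bin/bash
# Pull each image from the Docker Hub mirror, then retag it so that
# kubeadm finds it under the k8s.gcr.io name it expects.
images=(
  kube-apiserver:v1.13.4
  kube-controller-manager:v1.13.4
  kube-scheduler:v1.13.4
  kube-proxy:v1.13.4
  pause:3.1
  etcd:3.2.24
)
for img in "${images[@]}"; do
  docker pull "docker.io/mirrorgooglecontainers/${img}"
  docker tag "docker.io/mirrorgooglecontainers/${img}" "k8s.gcr.io/${img}"
done
# coredns is published under its own Docker Hub organization
docker pull docker.io/coredns/coredns:1.2.6
docker tag docker.io/coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6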

Initialize the cluster:

kubeadm init --apiserver-advertise-address=192.168.59.192 --pod-network-cidr=10.18.0.0/16

Note:

--apiserver-advertise-address determines which IP address Kubernetes should advertise its API server on.

--pod-network-cidr specifies the IP address range for the pod network. We are using the 'flannel' virtual network. If you want to use another pod network such as weave-net or calico, change the IP range accordingly.
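
Note also that the stock kube-flannel.yml manifest defaults its Network value to 10.244.0.0/16. Since we pass --pod-network-cidr=10.18.0.0/16 here, the manifest applied in the flannel step below should be edited to match; a sketch:

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
sed -i 's#10.244.0.0/16#10.18.0.0/16#' kube-flannel.yml
# later, apply the edited local file instead of the remote URL:
# kubectl apply -f kube-flannel.yml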

When initialization completes, the output looks like this:

[root@centosk8s ~]# kubeadm init --apiserver-advertise-address=192.168.59.192 --pod-network-cidr=10.18.0.0/16
[init] Using Kubernetes version: v1.13.4
[preflight] Running pre-flight checks
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.3. Latest validated version: 18.06
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.59.192 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.59.192 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.59.192]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 29.003836 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-master" as an annotation
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: nynqmr.zjacu8opmi8zb1xb
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.59.192:6443 --token nynqmr.zjacu8opmi8zb1xb --discovery-token-ca-cert-hash sha256:c15398d40a83ae21e65c8ca8c35d8044967af08322f7a8380b1591b397481959

Note:

Copy the 'kubeadm join ...' command into a text editor. It will be used later to register nodes with the Kubernetes cluster.
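
If the join command is misplaced, or the bootstrap token (valid for 24 hours by default) expires, a fresh join command can be printed on the master:

kubeadm token create --print-join-command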

To start using Kubernetes, run the commands below.

Create the '.kube' configuration directory and copy 'admin.conf' into it:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
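
Alternatively, when working as root you can point kubectl directly at the admin kubeconfig:

export KUBECONFIG=/etc/kubernetes/admin.conf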

Next, deploy the flannel network:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

結(jié)果如下:

[root@centosk8s ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.extensions/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created

The flannel network is now deployed to the Kubernetes cluster.

Wait a few minutes, then check the Kubernetes nodes and pods:

kubectl get nodes
kubectl get pods --all-namespaces
[root@centosk8s ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   7m10s   v1.13.4
[root@centosk8s ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE
kube-system   coredns-86c58d9df4-pdhjg             1/1     Running   0          7m28s
kube-system   coredns-86c58d9df4-qhngt             1/1     Running   0          7m28s
kube-system   etcd-k8s-master                      1/1     Running   0          6m37s
kube-system   kube-apiserver-k8s-master            1/1     Running   0          6m51s
kube-system   kube-controller-manager-k8s-master   1/1     Running   0          6m32s
kube-system   kube-flannel-ds-amd64-lqh6h          1/1     Running   0          117s
kube-system   kube-proxy-vx62g                     1/1     Running   0          7m28s
kube-system   kube-scheduler-k8s-master            1/1     Running   0          6m41s

You will see the 'k8s-master' node with status 'Ready', along with all of the cluster's pods, including the 'kube-flannel-ds' pods that handle the pod network.

Make sure all kube-system pods are in the 'Running' state.
Initialization and configuration of the Kubernetes cluster master are complete.

3. Add Cluster Nodes

In this step, we join a node to the 'k8s' cluster. As on the master, first relay the images that kubeadm needs onto node01:

[root@k8sminion ~]# docker pull docker.io/mirrorgooglecontainers/kube-apiserver:v1.13.4
v1.13.4: Pulling from mirrorgooglecontainers/kube-apiserver
3d80316e96d7: Pull complete
27ea5b112863: Pull complete
Digest: sha256:b205bb95ca597510be7785f65c15123830c2b0978af9abf1be60d67ec49573ff
Status: Downloaded newer image for mirrorgooglecontainers/kube-apiserver:v1.13.4
[root@k8sminion ~]# docker tag docker.io/mirrorgooglecontainers/kube-apiserver:v1.13.4 k8s.gcr.io/kube-apiserver:v1.13.4
[root@k8sminion ~]#  docker pull docker.io/mirrorgooglecontainers/kube-controller-manager:v1.13.4
v1.13.4: Pulling from mirrorgooglecontainers/kube-controller-manager
3d80316e96d7: Already exists
cdd81260a26d: Pull complete
Digest: sha256:2d977f0ea449497deb35478ea59b8637bb478cdda42f6c01b09431b77d61af49
Status: Downloaded newer image for mirrorgooglecontainers/kube-controller-manager:v1.13.4
[root@k8sminion ~]# docker tag docker.io/mirrorgooglecontainers/kube-controller-manager:v1.13.4 k8s.gcr.io/kube-controller-manager:v1.13.4
[root@k8sminion ~]# docker pull docker.io/mirrorgooglecontainers/kube-scheduler:v1.13.4
v1.13.4: Pulling from mirrorgooglecontainers/kube-scheduler
3d80316e96d7: Already exists
0f2f7ad628c3: Pull complete
Digest: sha256:09bd0a85d002b2f2570b870f672c80c5a05a30e108b976efe279f0fc67a004b3
Status: Downloaded newer image for mirrorgooglecontainers/kube-scheduler:v1.13.4
[root@k8sminion ~]# docker tag docker.io/mirrorgooglecontainers/kube-scheduler:v1.13.4 k8s.gcr.io/kube-scheduler:v1.13.4
[root@k8sminion ~]# docker pull docker.io/mirrorgooglecontainers/kube-proxy:v1.13.4
v1.13.4: Pulling from mirrorgooglecontainers/kube-proxy
3d80316e96d7: Already exists
09263547f210: Pull complete
59c4a3c9440d: Pull complete
Digest: sha256:244282d1be8d814b8ea70f6e4890d0031b00a148f2d3d4953e062fb46da229c4
Status: Downloaded newer image for mirrorgooglecontainers/kube-proxy:v1.13.4
[root@k8sminion ~]# docker tag docker.io/mirrorgooglecontainers/kube-proxy:v1.13.4 k8s.gcr.io/kube-proxy:v1.13.4
[root@k8sminion ~]#  docker pull docker.io/mirrorgooglecontainers/pause:3.1
3.1: Pulling from mirrorgooglecontainers/pause
67ddbfb20a22: Pull complete
Digest: sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610
Status: Downloaded newer image for mirrorgooglecontainers/pause:3.1
[root@k8sminion ~]# docker tag docker.io/mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
[root@k8sminion ~]# docker pull docker.io/mirrorgooglecontainers/etcd:3.2.24
3.2.24: Pulling from mirrorgooglecontainers/etcd
019658bedd5c: Pull complete
c4267897bb00: Pull complete
c5b72c728005: Pull complete
Digest: sha256:08b3afd3485fc29e78b28d05b434d2524f9bbfd8dec7464c396e2679541c91fc
Status: Downloaded newer image for mirrorgooglecontainers/etcd:3.2.24
[root@k8sminion ~]# docker tag docker.io/mirrorgooglecontainers/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24
[root@k8sminion ~]# docker pull docker.io/coredns/coredns:1.2.6
1.2.6: Pulling from coredns/coredns
2796eccf0de2: Pull complete
6ad5128a7d32: Pull complete
Digest: sha256:81936728011c0df9404cb70b95c17bbc8af922ec9a70d0561a5d01fefa6ffa51
Status: Downloaded newer image for coredns/coredns:1.2.6
[root@k8sminion ~]# docker tag docker.io/coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6

On node01, run the join command that was saved earlier:

kubeadm join 192.168.59.192:6443 --token nynqmr.zjacu8opmi8zb1xb --discovery-token-ca-cert-hash sha256:c15398d40a83ae21e65c8ca8c35d8044967af08322f7a8380b1591b397481959
[root@k8sminion ~]# kubeadm join 192.168.59.192:6443 --token nynqmr.zjacu8opmi8zb1xb --discovery-token-ca-cert-hash sha256:c15398d40a83ae21e65c8ca8c35d8044967af08322f7a8380b1591b397481959
[preflight] Running pre-flight checks
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.3. Latest validated version: 18.06
[discovery] Trying to connect to API Server "192.168.59.192:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.59.192:6443"
[discovery] Requesting info from "https://192.168.59.192:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.59.192:6443"
[discovery] Successfully established connection with API Server "192.168.59.192:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node01" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Wait a few minutes, then check the node and pod status on 'k8s-master':

kubectl get nodes
kubectl get pods --all-namespaces

node01 has now joined the cluster; its status starts as 'NotReady' and changes to 'Ready' once the network pods are up:

[root@centosk8s ~]# kubectl get nodes
NAME         STATUS     ROLES    AGE    VERSION
k8s-master   Ready      master   24m    v1.13.4
node01       NotReady   <none>   104s   v1.13.4
[root@centosk8s ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   31m     v1.13.4
node01       Ready    <none>   8m50s   v1.13.4
[root@centosk8s ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE
kube-system   coredns-86c58d9df4-pdhjg             1/1     Running   0          31m
kube-system   coredns-86c58d9df4-qhngt             1/1     Running   0          31m
kube-system   etcd-k8s-master                      1/1     Running   0          31m
kube-system   kube-apiserver-k8s-master            1/1     Running   0          31m
kube-system   kube-controller-manager-k8s-master   1/1     Running   0          30m
kube-system   kube-flannel-ds-amd64-dl262          1/1     Running   0          9m31s
kube-system   kube-flannel-ds-amd64-lqh6h          1/1     Running   0          26m
kube-system   kube-proxy-vx62g                     1/1     Running   0          31m
kube-system   kube-proxy-x5gnm                     1/1     Running   0          9m31s
kube-system   kube-scheduler-k8s-master            1/1     Running   0          31m

If a Kubernetes node stays NotReady, inspect the kubelet logs on that node to find the problem:

journalctl -f -u kubelet
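
From the master, the standard kubectl inspection commands can also help narrow down the cause (node01 is the node name used in this guide):

kubectl describe node node01
kubectl get events --all-namespaces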

4. 測(cè)試-創(chuàng)建 Pod

In this step, we deploy an Nginx pod to the Kubernetes cluster. A pod is a group of one or more containers that share storage and network in Kubernetes.

Log in to the 'k8s-master' server and create a deployment named 'nginx':

kubectl create deployment nginx --image=nginx

Get information about the nginx deployment with the following command:

[root@centosk8s ~]# kubectl describe deployment nginx
Name:                   nginx
Namespace:              default
CreationTimestamp:      Wed, 20 Mar 2019 14:28:15 +0800
Labels:                 app=nginx
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=nginx
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:        nginx
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   nginx-5c7588df (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  42s   deployment-controller  Scaled up replica set nginx-5c7588df to 1

Next, expose the nginx pod by creating a new NodePort service.

Run the kubectl command:

kubectl create service nodeport nginx --tcp=80:80
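
Equivalently, the existing deployment could have been exposed directly; both approaches create a NodePort service whose selector matches app=nginx:

kubectl expose deployment nginx --type=NodePort --port=80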

確保沒有錯(cuò)誤信息〖闹ⅲ現(xiàn)在可以檢查Nginx服務(wù)IP和端口了。

kubectl get pods
kubectl get svc
[root@centosk8s ~]# kubectl get pods
NAME                   READY   STATUS    RESTARTS   AGE
nginx-5c7588df-mdnbf   1/1     Running   0          2m10s
[root@centosk8s ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        57m
nginx        NodePort    10.98.119.101   <none>        80:30039/TCP   45s

The nginx pod is now reachable through the cluster IP address '10.98.119.101' on port 80, and through the node's IP address '192.168.59.193' on port '30039'.

Run a curl command from the 'k8s-master' server:

curl node01:30039
[root@centosk8s ~]# curl node01:30039
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@centosk8s ~]#

The nginx pod is now deployed on the Kubernetes cluster and is accessible from outside the cluster.

Open the following address in a browser:
http://192.168.59.192:30039/

[Screenshot: the nginx welcome page served via the NodePort]

到此勝利結(jié)束悲没!
