Quickly setting up a single-node Kubernetes 1.13 cluster with kubeadm

kubeadm really is a killer tool for standing up a Kubernetes cluster quickly. Think of how many people have fallen at the cluster-installation hurdle. Last year I built a Kubernetes 1.9 cluster by hand from the binaries, installing a pile of components with endless certificate configuration and dependency wrangling... that pain is not something I care to relive, and the cluster I ended up with still had issues: the certificates never quite behaved and the dashboard would only run an old version. kubeadm now does all of that manual installing, configuring and certificate generating for you, and the cluster is up in the time it takes to drink a coffee.

Differences from minikube

minikube is essentially a lab tool. It only runs on a single machine, bundles just the core Kubernetes components, cannot form a multi-node cluster, and is built in a way that stops you from installing the usual extensions (network plugins, DNS plugins, ingress plugins and so on); its main purpose is to let you get to know Kubernetes. What kubeadm builds, by contrast, is a real Kubernetes cluster that can be used in production (HA is up to you) and is practically indistinguishable from a cluster built from the binaries.

Environment requirements

  • This installation uses a VirtualBox VM (on macOS) with 2 CPUs and 2 GB of RAM
  • The operating system is CentOS 7.6, and every step below assumes CentOS. Use the most recent CentOS release you can; older ones lead to all kinds of strange pitfalls (an earlier attempt on 7.0 cost me dearly)
  • The VM needs two-way connectivity with the host and access to the public internet; setting that up is not covered here, there are plenty of guides online
  • The Kubernetes baseline version installed here is 1.13.1

Setting up the yum repositories

First go into /etc/yum.repos.d/ and delete every repo file in that directory (take a backup first).

Download the CentOS base repository configuration (the Aliyun mirror is used here):

curl -o CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

Download the Docker repository configuration:

curl -o docker-ce.repo https://download.docker.com/linux/centos/docker-ce.repo

Configure the Kubernetes repository:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
        http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Run the following commands to refresh the yum cache:

# yum clean all  
# yum makecache  
# yum repolist

If you end up with a list like the one below, the repositories are configured correctly:

[root@MiWiFi-R1CM-srv yum.repos.d]# yum repolist
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
repo id                                                                              repo name                                                                                 status
base/7/x86_64                                                                        CentOS-7 - Base - 163.com                                                                 10,019
docker-ce-stable/x86_64                                                              Docker CE Stable - x86_64                                                                     28
extras/7/x86_64                                                                      CentOS-7 - Extras - 163.com                                                                  321
kubernetes                                                                           Kubernetes                                                                                   299
updates/7/x86_64                                                                     CentOS-7 - Updates - 163.com                                                                 628
repolist: 11,295

Installing Docker

yum install -y docker-ce

I installed the latest stable release, 18.09, directly. If you need a particular version, first run

[root@MiWiFi-R1CM-srv yum.repos.d]# yum list docker-ce --showduplicates | sort -r
Loaded plugins: fastestmirror
Installed Packages
Available Packages
Loading mirror speeds from cached hostfile
docker-ce.x86_64            3:18.09.1-3.el7                    docker-ce-stable 
docker-ce.x86_64            3:18.09.1-3.el7                    @docker-ce-stable
docker-ce.x86_64            3:18.09.0-3.el7                    docker-ce-stable 
docker-ce.x86_64            18.06.1.ce-3.el7                   docker-ce-stable 
docker-ce.x86_64            18.06.0.ce-3.el7                   docker-ce-stable 
docker-ce.x86_64            18.03.1.ce-1.el7.centos            docker-ce-stable 
docker-ce.x86_64            18.03.0.ce-1.el7.centos            docker-ce-stable 
docker-ce.x86_64            17.12.1.ce-1.el7.centos            docker-ce-stable 
docker-ce.x86_64            17.12.0.ce-1.el7.centos            docker-ce-stable 
docker-ce.x86_64            17.09.1.ce-1.el7.centos            docker-ce-stable 
docker-ce.x86_64            17.09.0.ce-1.el7.centos            docker-ce-stable 
docker-ce.x86_64            17.06.2.ce-1.el7.centos            docker-ce-stable 
docker-ce.x86_64            17.06.1.ce-1.el7.centos            docker-ce-stable 
docker-ce.x86_64            17.06.0.ce-1.el7.centos            docker-ce-stable 
docker-ce.x86_64            17.03.3.ce-1.el7                   docker-ce-stable 
docker-ce.x86_64            17.03.2.ce-1.el7.centos            docker-ce-stable 
docker-ce.x86_64            17.03.1.ce-1.el7.centos            docker-ce-stable 
docker-ce.x86_64            17.03.0.ce-1.el7.centos            docker-ce-stable 

to list all available versions, then run

yum install -y docker-ce-<VERSION STRING>

to install the version you want.
Once the installation is finished, run

[root@MiWiFi-R1CM-srv yum.repos.d]# systemctl start docker
[root@MiWiFi-R1CM-srv yum.repos.d]# systemctl enable docker
[root@MiWiFi-R1CM-srv yum.repos.d]# docker info
Containers: 24
 Running: 21
 Paused: 0
 Stopped: 3
Images: 11
Server Version: 18.09.1
Storage Driver: overlay2
 Backing Filesystem: xfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 9754871865f7fe2f4e74d43e2fc7ccd237edcbce
runc version: 96ec2177ae841256168fcf76954f7177af9446eb
init version: fec3683
Security Options:
 seccomp
  Profile: default
Kernel Version: 3.10.0-957.1.3.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 1.795GiB
Name: MiWiFi-R1CM-srv
ID: DSTM:KH2I:Y4UV:SUPX:WIP4:ZV4C:WTNO:VMZR:4OKK:HM3G:3YFS:FXMY
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
Product License: Community Engine

WARNING: bridge-nf-call-ip6tables is disabled

If the output looks like this, Docker is installed correctly.

Installing Kubernetes with kubeadm

You may still have doubts about the stability of a cluster built with kubeadm, so here is the maturity table from the official documentation:


[Image: kubeadm.png - feature maturity table from the official kubeadm documentation]

As you can see, the core features are all GA, so they can be used with confidence. HA, which is what most people ask about, is still in alpha and will take a little longer; for now a kubeadm cluster has a single master node, and for high availability you still have to build the etcd cluster yourself.

Since the Kubernetes yum repository is already set up, all we have to run is

yum install -y kubeadm

and yum installs the latest kubeadm automatically (1.13.1 when I installed it). Four packages are installed in total: kubelet, kubeadm, kubectl and kubernetes-cni. If you would rather pin the versions than take the latest, see the sketch right after this list.

  • kubeadm: the one-command deployment tool for Kubernetes; it simplifies installation by running the core components and add-ons as pods
  • kubelet: the node agent that runs on every node; it is what actually manages the containers on each host. Because it needs direct access to host resources it is not run inside a pod but installed as a system service
  • kubectl: the Kubernetes command-line tool, which connects to the api-server to perform every kind of operation on the cluster
  • kubernetes-cni: the Kubernetes virtual network plumbing, which creates a cni0 bridge on the host to carry pod-to-pod traffic, much like docker0
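
If you prefer to pin the package versions instead of taking whatever is latest in the repo, a minimal sketch (assuming 1.13.1 is the version you want) looks like this; enabling kubelet up front also removes one of the preflight warnings you will see later:

yum install -y kubelet-1.13.1 kubeadm-1.13.1 kubectl-1.13.1 kubernetes-cni
# make kubelet start on boot; kubeadm init will configure and start it
systemctl enable kubelet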

Once the packages are installed, run

kubeadm init --pod-network-cidr=10.244.0.0/16

to start initializing the master node. The --pod-network-cidr=10.244.0.0/16 flag is configuration needed by the network plugin; it is the range from which each node is given a pod subnet. The plugin I use is flannel, which expects exactly this value; other plugins have their own settings, all described in detail in the official add-ons documentation.

During initialization kubeadm runs a series of preflight checks to verify that the machine meets the requirements for installing Kubernetes. Findings are reported as either [WARNING] or [ERROR], similar to the output below (the first run almost always fails):

[root@MiWiFi-R1CM-srv ~]# kubeadm init
I0112 00:30:18.868179   13025 version.go:94] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://storage.googleapis.com/kubernetes-release/release/stable-1.txt: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
I0112 00:30:18.868645   13025 version.go:95] falling back to the local client version: v1.13.1
[init] Using Kubernetes version: v1.13.1
[preflight] Running pre-flight checks
    [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.1. Latest validated version: 18.06
    [WARNING Hostname]: hostname "miwifi-r1cm-srv" could not be reached
    [WARNING Hostname]: hostname "miwifi-r1cm-srv": lookup miwifi-r1cm-srv on 192.168.31.1:53: no such host
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
    [ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
    [ERROR Swap]: running with swap on is not supported. Please disable swap
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

[WARNING] items are things such as the Docker service not being enabled at boot, the Docker version not being on the validated list, or a non-standard hostname. These generally do not block the installation, though it is best to fix whatever it flags if you can.

[ERROR] items must be taken seriously. You can force your way past them with --ignore-preflight-errors, but to avoid all sorts of odd problems later I strongly recommend resolving every error before continuing. Typical examples are insufficient resources (the master needs at least 2 CPUs and 2 GB of RAM) and swap being enabled, which breaks kubelet start-up; swap can be switched off with swapoff -a. Also note the /proc/sys/net/bridge/bridge-nf-call-iptables kernel parameter: it must be set to 1 or the preflight check fails, and the network plugin apparently depends on it.
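
If you want to script those two fixes so they also survive a reboot, something along these lines works (a sketch; the k8s.conf file name is arbitrary):

# turn swap off now and comment out the swap entry in /etc/fstab
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab

# make sure bridged traffic goes through iptables
modprobe br_netfilter
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system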

After a round of fixes the preflight checks all pass and kubeadm starts installing. After a bit of a wait it fails again, unsurprisingly, for the reason everybody here knows: gcr.io (Google's own container registry) cannot be reached. The error messages are very useful though; let's take a look:

[root@MiWiFi-R1CM-srv ~]# kubeadm init
I0112 00:39:39.813145   13591 version.go:94] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://storage.googleapis.com/kubernetes-release/release/stable-1.txt: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
I0112 00:39:39.813263   13591 version.go:95] falling back to the local client version: v1.13.1
[init] Using Kubernetes version: v1.13.1
[preflight] Running pre-flight checks
    [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.1. Latest validated version: 18.06
    [WARNING Hostname]: hostname "miwifi-r1cm-srv" could not be reached
    [WARNING Hostname]: hostname "miwifi-r1cm-srv": lookup miwifi-r1cm-srv on 192.168.31.1:53: no such host
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
error execution phase preflight: [preflight] Some fatal errors occurred:
    [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.13.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
    [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.13.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
    [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.13.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
    [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.13.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
    [ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
    [ERROR ImagePull]: failed to pull image k8s.gcr.io/etcd:3.2.24: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
    [ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.2.6: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1

The output lists exactly which image names and tags the installation needs, so we just pull those images in advance and run the install again. You can also pre-download them with kubeadm config images pull before running kubeadm init.

Once we know the names the rest is easy. The major domestic cloud providers all mirror the Kubernetes images; on Alibaba Cloud, for example, I can run

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.2.24

to pull the etcd image, and then

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24

to retag it to the name kubeadm expects during installation; the other images are handled the same way. Note that the images and versions you need may differ from mine, since the Kubernetes project moves quickly; always go by what the error output lists when you run it, the procedure is identical. (Strictly speaking the renaming is not even required: kubeadm can be told through a YAML config file which image names to install from, but I will leave that for you to explore.)
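
To save typing the pull/tag pairs one by one, a small loop over the image list from the error output does the job. This is a sketch that assumes the versions shown above and the Aliyun mirror used earlier; adjust both to whatever your own error output lists:

MIRROR=registry.cn-hangzhou.aliyuncs.com/google_containers
for img in kube-apiserver:v1.13.1 kube-controller-manager:v1.13.1 kube-scheduler:v1.13.1 \
           kube-proxy:v1.13.1 pause:3.1 etcd:3.2.24 coredns:1.2.6; do
    docker pull ${MIRROR}/${img}
    # retag to the k8s.gcr.io name that kubeadm expects
    docker tag ${MIRROR}/${img} k8s.gcr.io/${img}
done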

With all the images in place, run init again:

[root@MiWiFi-R1CM-srv ~]# kubeadm init --pod-network-cidr=10.244.0.0/16
I0112 01:35:38.758110    4544 version.go:94] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: x509: certificate has expired or is not yet valid
I0112 01:35:38.758428    4544 version.go:95] falling back to the local client version: v1.13.1
[init] Using Kubernetes version: v1.13.1
[preflight] Running pre-flight checks
    [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.1. Latest validated version: 18.06
    [WARNING Hostname]: hostname "miwifi-r1cm-srv" could not be reached
    [WARNING Hostname]: hostname "miwifi-r1cm-srv": lookup miwifi-r1cm-srv on 192.168.31.1:53: no such host
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [miwifi-r1cm-srv localhost] and IPs [192.168.31.175 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [miwifi-r1cm-srv localhost] and IPs [192.168.31.175 127.0.0.1 ::1]
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [miwifi-r1cm-srv kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.31.175]
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 29.508735 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "miwifi-r1cm-srv" as an annotation
[mark-control-plane] Marking the node miwifi-r1cm-srv as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node miwifi-r1cm-srv as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: wde86i.tmjaf7d18v26zg03
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.31.175:6443 --token wde86i.tmjaf7d18v26zg03 --discovery-token-ca-cert-hash sha256:b05fa53d8f8c10fa4159ca499eb91cf11fbb9b27801b7ea9eb7d5066d86ae366

The installation finally succeeds. kubeadm has done an enormous amount of work for you: kubelet configuration, all the certificates, the kubeconfig files, the add-ons and more (doing this by hand takes forever; I doubt anyone who has used kubeadm ever goes back to a manual install). Note the last line: kubeadm tells you that any other node can join the cluster simply by running that command, which already contains the required token. It also reminds you that to complete the installation you still need to install a network plugin with kubectl apply -f [podnetwork].yaml, and even gives you the URL explaining how (very considerate). Finally it tells you to run

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

to copy the admin kubeconfig into the .kube directory; this is what authenticates kubectl against the api-server, and other nodes need the same file copied into the corresponding location. Now let's run:

[root@MiWiFi-R1CM-srv yum.repos.d]# kubectl get node
NAME              STATUS   ROLES    AGE     VERSION
miwifi-r1cm-srv   NotReady    master   4h56m   v1.13.1

The node shows as NotReady for the moment. No need to panic; first let's look at what kubeadm has installed for us.

Core components

As mentioned earlier, kubeadm's approach is to containerize the main Kubernetes components to simplify installation. That raises an obvious question: the cluster isn't up yet, so how are these pods deployed? Surely not with a bare docker run? Of course not. kubelet has a special start-up mechanism called static pods: put a pod manifest in a designated directory and, when kubelet starts on that node, it launches the pods defined there automatically. The name also tells you their nature: static pods are not scheduled, they can only run on that node, and the pod's IP address is simply the host's address. On a kubeadm cluster the directory for these predefined manifests is /etc/kubernetes/manifests; let's take a look:

[root@MiWiFi-R1CM-srv manifests]# ls -l
total 16
-rw-------. 1 root root 1999 Jan 12 01:35 etcd.yaml
-rw-------. 1 root root 2674 Jan 12 01:35 kube-apiserver.yaml
-rw-------. 1 root root 2547 Jan 12 01:35 kube-controller-manager.yaml
-rw-------. 1 root root 1051 Jan 12 01:35 kube-scheduler.yaml

These four are the Kubernetes core components, running on this node as static pods:

  • etcd: the cluster's database; all cluster configuration, keys, certificates and so on live in here, which is why production deployments always run it as a cluster; losing it is no joke
  • kube-apiserver: the RESTful API entry point of Kubernetes; every other component operates on cluster resources through the api-server, so you can think of it as the lowest-level component
  • kube-controller-manager: manages the lifecycle of pods and other resources
  • kube-scheduler: handles scheduling pods across the cluster


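You can try the static pod mechanism yourself by dropping an ordinary pod manifest into that directory; kubelet picks it up on its own, with no scheduler involved. A minimal sketch (the name and image are just examples):

cat <<EOF > /etc/kubernetes/manifests/static-nginx.yaml
apiVersion: v1
kind: Pod
metadata:
  name: static-nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.15
EOF
# kubelet starts it automatically and it shows up as static-nginx-<hostname>;
# deleting the file removes the pod again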

In practical terms, as covered in an earlier article, Docker's architecture has been split up and containerd is now a separate component, so container creation is driven through containerd rather than through the Docker daemon itself. You can see this in the process list:

[root@MiWiFi-R1CM-srv manifests]# ps -ef|grep containerd
root      3075     1  0 00:29 ?        00:00:55 /usr/bin/containerd
root      4740  3075  0 01:35 ?        00:00:01 containerd-shim -namespace moby -workdir /var/lib/containerd/io.containerd.runtime.v1.linux/moby/ec93247aeb737218908557f825344b33dd58f0c098bd750c71da1bc0ec9a49b0 -address /run/containerd/containerd.sock -containerd-binary /usr/bin/containerd -runtime-root /var/run/docker/runtime-runc
root      4754  3075  0 01:35 ?        00:00:01 containerd-shim -namespace moby -workdir /var/lib/containerd/io.containerd.runtime.v1.linux/moby/f738d56f65b9191a63243a1b239bac9c3924b5a2c7c98e725414c247fcffbb8f -address /run/containerd/containerd.sock -containerd-binary /usr/bin/containerd -runtime-root /var/run/docker/runtime-runc
root      4757  3

Process 3075 is the containerd daemon brought up when the Docker service starts, while 4740 and 4754 are containerd-shim child processes created by containerd, which actually supervise the container processes. As an aside, in earlier Docker releases these processes were called docker-containerd, docker-containerd-shim and docker-runc; the docker prefix has now vanished from the process names entirely, so the de-dockerization is getting more and more obvious.

Add-ons

  • CoreDNS: a CNCF project for service discovery; it has replaced kube-dns as the default DNS component in Kubernetes
  • kube-proxy: iptables-based load balancing used by Services; its performance is nothing special, just know that it exists

Now run:

[root@MiWiFi-R1CM-srv ~]# kubectl get pods -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
coredns-86c58d9df4-gbgzx                  0/1     Pending   0          5m28s
coredns-86c58d9df4-kzljk                  0/1     Pending   0          5m28s
etcd-miwifi-r1cm-srv                      1/1     Running   0          4m40s
kube-apiserver-miwifi-r1cm-srv            1/1     Running   0          4m52s
kube-controller-manager-miwifi-r1cm-srv   1/1     Running   0          5m3s
kube-proxy-9c8cs                          1/1     Running   0          5m28s
kube-scheduler-miwifi-r1cm-srv            1/1     Running   0          4m45s

These are exactly the components listed above, all installed as pods. You will also have noticed that both coredns pods are Pending: that is because the network plugin is not installed yet. Following the official page mentioned earlier, I install flannel, which is done in the standard declarative Kubernetes way:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml

After it is installed, check the pods again:

[root@MiWiFi-R1CM-srv ~]# kubectl get pods -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
coredns-86c58d9df4-gbgzx                  1/1     Running   0          11m
coredns-86c58d9df4-kzljk                  1/1     Running   0          11m
etcd-miwifi-r1cm-srv                      1/1     Running   0          11m
kube-apiserver-miwifi-r1cm-srv            1/1     Running   0          11m
kube-controller-manager-miwifi-r1cm-srv   1/1     Running   0          11m
kube-flannel-ds-amd64-kwx59               1/1     Running   0          57s
kube-proxy-9c8cs                          1/1     Running   0          11m
kube-scheduler-miwifi-r1cm-srv            1/1     Running   0          11m

Both coredns pods are now Running, and there is an additional pod, kube-flannel-ds-amd64-kwx59, which is the flannel network plugin we just installed.

Now let's check the status of the core components:

[root@MiWiFi-R1CM-srv yum.repos.d]# kubectl get componentstatus
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok                   
controller-manager   Healthy   ok                   
etcd-0               Healthy   {"health": "true"} 

All components report healthy. Next, the node:

[root@MiWiFi-R1CM-srv yum.repos.d]# kubectl get node
NAME              STATUS   ROLES    AGE     VERSION
miwifi-r1cm-srv   Ready    master   4h56m   v1.13.1

The node is Ready, which means the master has been installed successfully. Almost there!
By default the master node carries a taint that keeps application pods from being scheduled onto it, so for a single-node cluster we remove that taint from the master:

kubectl taint nodes --all node-role.kubernetes.io/master-
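
If you later add worker nodes and want the master to go back to running control-plane pods only, the taint can be put back on (a sketch; replace the node name with your own):

kubectl taint nodes miwifi-r1cm-srv node-role.kubernetes.io/master=:NoSchedule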

Installing the Dashboard

The Kubernetes project provides an official dashboard. The command line still gets most of the use day to day, but a UI never hurts, so let's see how to install it. It is just another standard declarative Kubernetes install:

kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended/kubernetes-dashboard.yaml

After installation, check the pods:

[root@MiWiFi-R1CM-srv yum.repos.d]# kubectl get po -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
coredns-86c58d9df4-gbgzx                  1/1     Running   0          4h45m
coredns-86c58d9df4-kzljk                  1/1     Running   0          4h45m
etcd-miwifi-r1cm-srv                      1/1     Running   0          4h44m
kube-apiserver-miwifi-r1cm-srv            1/1     Running   0          4h44m
kube-controller-manager-miwifi-r1cm-srv   1/1     Running   0          4h44m
kube-flannel-ds-amd64-kwx59               1/1     Running   0          4h34m
kube-proxy-9c8cs                          1/1     Running   0          4h45m
kube-scheduler-miwifi-r1cm-srv            1/1     Running   0          4h44m
kubernetes-dashboard-57df4db6b-bn5vn      1/1     Running   0          4h8m

There is a new pod, kubernetes-dashboard-57df4db6b-bn5vn, and it has started normally. For security reasons, however, the dashboard is not exposed outside the cluster, so we add a Service of type NodePort to provide external access. The Service configuration looks like this:

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2019-01-11T18:12:43Z"
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
  resourceVersion: "6015"
  selfLink: /api/v1/namespaces/kube-system/services/kubernetes-dashboard
  uid: 7dd0deb6-15cc-11e9-bb65-08002726d64d
spec:
  clusterIP: 10.102.157.202
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 30443
    port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
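
Rather than hand-editing the full Service object, one way to get the same result is to patch the existing kubernetes-dashboard Service in place (a sketch; 30443 is the same arbitrary NodePort choice as above):

kubectl -n kube-system patch service kubernetes-dashboard -p \
  '{"spec":{"type":"NodePort","ports":[{"port":443,"targetPort":8443,"nodePort":30443}]}}'

Running kubectl -n kube-system edit service kubernetes-dashboard and changing the type to NodePort by hand achieves the same thing, just with an auto-assigned nodePort.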

The dashboard listens on port 8443 by default; here we map it to NodePort 30443 as the external entry point, so it can now be reached at https://<node-ip>:30443. Note that if you log in with the ServiceAccount created by the official YAML you can do nothing at all: everything comes back forbidden, because the official manifest only grants it a minimal role... For convenience in this test setup we simply create a cluster-admin account, configured as follows:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: dashboard
subjects:
  - kind: ServiceAccount
    name: dashboard
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
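
Assuming you saved the two objects above into a file called dashboard-admin.yaml (the file name is hypothetical), creating and verifying them looks like this:

# dashboard-admin.yaml holds the ServiceAccount and ClusterRoleBinding above
kubectl apply -f dashboard-admin.yaml
# confirm both objects exist
kubectl -n kube-system get serviceaccount dashboard
kubectl get clusterrolebinding dashboard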

Once these are created, Kubernetes automatically creates a secret for the account; retrieve it with:

[root@MiWiFi-R1CM-srv yum.repos.d]# kubectl describe secret dashboard -n kube-system
Name:         dashboard-token-s9hqc
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard
              kubernetes.io/service-account.uid: 63c43e1e-15d6-11e9-bb65-08002726d64d

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtdG9rZW4tczlocWMiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkIiwia3Vi

Paste that token into the token field of the login screen and you can log in with full privileges.
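
If you only want the token itself, a jsonpath one-liner saves copying it out of the describe output; this assumes the ServiceAccount is named dashboard as above:

kubectl -n kube-system get secret \
  $(kubectl -n kube-system get serviceaccount dashboard -o jsonpath='{.secrets[0].name}') \
  -o jsonpath='{.data.token}' | base64 -d && echo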


[Image: dashboard.png - the Kubernetes Dashboard after logging in with the token]

And with that, a complete single-node Kubernetes cluster is up and running!

最后編輯于
?著作權(quán)歸作者所有,轉(zhuǎn)載或內(nèi)容合作請聯(lián)系作者
  • 序言:七十年代末砾省,一起剝皮案震驚了整個濱河市鸡岗,隨后出現(xiàn)的幾起案子,更是在濱河造成了極大的恐慌编兄,老刑警劉巖轩性,帶你破解...
    沈念sama閱讀 222,183評論 6 516
  • 序言:濱河連續(xù)發(fā)生了三起死亡事件,死亡現(xiàn)場離奇詭異狠鸳,居然都是意外死亡揣苏,警方通過查閱死者的電腦和手機(jī),發(fā)現(xiàn)死者居然都...
    沈念sama閱讀 94,850評論 3 399
  • 文/潘曉璐 我一進(jìn)店門件舵,熙熙樓的掌柜王于貴愁眉苦臉地迎上來卸察,“玉大人,你說我怎么就攤上這事芦圾《昱桑” “怎么了?”我有些...
    開封第一講書人閱讀 168,766評論 0 361
  • 文/不壞的土叔 我叫張陵个少,是天一觀的道長。 經(jīng)常有香客問我眯杏,道長夜焦,這世上最難降的妖魔是什么? 我笑而不...
    開封第一講書人閱讀 59,854評論 1 299
  • 正文 為了忘掉前任岂贩,我火速辦了婚禮茫经,結(jié)果婚禮上巷波,老公的妹妹穿的比我還像新娘。我一直安慰自己卸伞,他們只是感情好抹镊,可當(dāng)我...
    茶點(diǎn)故事閱讀 68,871評論 6 398
  • 文/花漫 我一把揭開白布。 她就那樣靜靜地躺著荤傲,像睡著了一般垮耳。 火紅的嫁衣襯著肌膚如雪。 梳的紋絲不亂的頭發(fā)上遂黍,一...
    開封第一講書人閱讀 52,457評論 1 311
  • 那天终佛,我揣著相機(jī)與錄音,去河邊找鬼雾家。 笑死铃彰,一個胖子當(dāng)著我的面吹牛,可吹牛的內(nèi)容都是我干的芯咧。 我是一名探鬼主播牙捉,決...
    沈念sama閱讀 40,999評論 3 422
  • 文/蒼蘭香墨 我猛地睜開眼,長吁一口氣:“原來是場噩夢啊……” “哼敬飒!你這毒婦竟也來了鹃共?” 一聲冷哼從身側(cè)響起,我...
    開封第一講書人閱讀 39,914評論 0 277
  • 序言:老撾萬榮一對情侶失蹤驶拱,失蹤者是張志新(化名)和其女友劉穎霜浴,沒想到半個月后,有當(dāng)?shù)厝嗽跇淞掷锇l(fā)現(xiàn)了一具尸體蓝纲,經(jīng)...
    沈念sama閱讀 46,465評論 1 319
  • 正文 獨(dú)居荒郊野嶺守林人離奇死亡阴孟,尸身上長有42處帶血的膿包…… 初始之章·張勛 以下內(nèi)容為張勛視角 年9月15日...
    茶點(diǎn)故事閱讀 38,543評論 3 342
  • 正文 我和宋清朗相戀三年,在試婚紗的時候發(fā)現(xiàn)自己被綠了税迷。 大學(xué)時的朋友給我發(fā)了我未婚夫和他白月光在一起吃飯的照片永丝。...
    茶點(diǎn)故事閱讀 40,675評論 1 353
  • 序言:一個原本活蹦亂跳的男人離奇死亡,死狀恐怖箭养,靈堂內(nèi)的尸體忽然破棺而出慕嚷,到底是詐尸還是另有隱情,我是刑警寧澤毕泌,帶...
    沈念sama閱讀 36,354評論 5 351
  • 正文 年R本政府宣布喝检,位于F島的核電站,受9級特大地震影響撼泛,放射性物質(zhì)發(fā)生泄漏挠说。R本人自食惡果不足惜,卻給世界環(huán)境...
    茶點(diǎn)故事閱讀 42,029評論 3 335
  • 文/蒙蒙 一愿题、第九天 我趴在偏房一處隱蔽的房頂上張望损俭。 院中可真熱鬧蛙奖,春花似錦、人聲如沸。這莊子的主人今日做“春日...
    開封第一講書人閱讀 32,514評論 0 25
  • 文/蒼蘭香墨 我抬頭看了看天上的太陽。三九已至攒砖,卻和暖如春,著一層夾襖步出監(jiān)牢的瞬間骆膝,已是汗流浹背祭衩。 一陣腳步聲響...
    開封第一講書人閱讀 33,616評論 1 274
  • 我被黑心中介騙來泰國打工, 沒想到剛下飛機(jī)就差點(diǎn)兒被人妖公主榨干…… 1. 我叫王不留阅签,地道東北人掐暮。 一個月前我還...
    沈念sama閱讀 49,091評論 3 378
  • 正文 我出身青樓,卻偏偏與公主長得像政钟,于是被迫代替她去往敵國和親路克。 傳聞我的和親對象是個殘疾皇子,可洞房花燭夜當(dāng)晚...
    茶點(diǎn)故事閱讀 45,685評論 2 360

推薦閱讀更多精彩內(nèi)容