Deploying Kubernetes on ARM Ubuntu

Environment

Huawei TaiShan server, IP 10.203.1.19

Node information

root@horatio:~# hostnamectl
   Static hostname: horatio
         Icon name: computer-server
           Chassis: server
        Machine ID: c9c709c6a0f04fe3a93b1368c361083a
           Boot ID: d76bd870c49a43649c90e4669bba79e6
  Operating System: Ubuntu 18.04.4 LTS
            Kernel: Linux 4.15.0-76-generic
      Architecture: arm64

Preparation

Disable swap

Edit /etc/fstab and comment out the line beginning with /swapfile, then reboot.
After the reboot, run top: the Swap row should show all zeros, confirming the change took effect.
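
If a reboot is inconvenient, the same result can be achieved immediately; a minimal sketch, assuming the swap entry really does begin with /swapfile as on this machine:

swapoff -a                                    # disable all active swap right away
sed -i.bak '/^\/swapfile/s/^/#/' /etc/fstab   # comment out the /swapfile line, keeping a .bak copy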

root@horatio:~# top
top - 11:19:41 up 5 min,  1 user,  load average: 0.00, 0.03, 0.00
Tasks: 905 total,   1 running, 438 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.0 us,  0.2 sy,  0.0 ni, 99.8 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem : 65689596 total, 64585336 free,   892840 used,   211420 buff/cache
KiB Swap:        0 total,        0 free,        0 used. 64368672 avail Mem 
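
As an alternative to reading top, either of these standard commands confirms the same thing:

swapon --show   # prints nothing when no swap is active
free -h         # the Swap row should read 0B across the board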

Install software

Docker

Command

curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun

Result

root@horatio:~# curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun
# Executing docker install script, commit: 3d8fe77c2c46c5b7571f94b42793905e5b3e42e4
+ sh -c 'apt-get update -qq >/dev/null'
W: Failed to fetch http://10.203.1.225/ubuntu/arm64/ubuntu-ports/dists/bionic/InRelease  Could not connect to 10.203.1.225:80 (10.203.1.225). - connect (113: No route to host)
W: Failed to fetch http://10.203.1.225/ubuntu/arm64/ubuntu-ports/dists/bionic-security/InRelease  Unable to connect to 10.203.1.225:http:
W: Failed to fetch http://10.203.1.225/ubuntu/arm64/ubuntu-ports/dists/bionic-updates/InRelease  Unable to connect to 10.203.1.225:http:
W: Failed to fetch http://10.203.1.225/ubuntu/arm64/ubuntu-ports/dists/bionic-proposed/InRelease  Unable to connect to 10.203.1.225:http:
W: Failed to fetch http://10.203.1.225/ubuntu/arm64/ubuntu-ports/dists/bionic-backports/InRelease  Unable to connect to 10.203.1.225:http:
W: Some index files failed to download. They have been ignored, or old ones used instead.
+ sh -c 'DEBIAN_FRONTEND=noninteractive apt-get install -y -qq apt-transport-https ca-certificates curl >/dev/null'
+ sh -c 'curl -fsSL "https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg" | apt-key add -qq - >/dev/null'
Warning: apt-key output should not be parsed (stdout is not a terminal)
+ sh -c 'echo "deb [arch=arm64] https://mirrors.aliyun.com/docker-ce/linux/ubuntu bionic stable" > /etc/apt/sources.list.d/docker.list'
+ sh -c 'apt-get update -qq >/dev/null'
W: Failed to fetch http://10.203.1.225/ubuntu/arm64/ubuntu-ports/dists/bionic/InRelease  Could not connect to 10.203.1.225:80 (10.203.1.225). - connect (113: No route to host)
W: Failed to fetch http://10.203.1.225/ubuntu/arm64/ubuntu-ports/dists/bionic-security/InRelease  Unable to connect to 10.203.1.225:http:
W: Failed to fetch http://10.203.1.225/ubuntu/arm64/ubuntu-ports/dists/bionic-updates/InRelease  Unable to connect to 10.203.1.225:http:
W: Failed to fetch http://10.203.1.225/ubuntu/arm64/ubuntu-ports/dists/bionic-proposed/InRelease  Unable to connect to 10.203.1.225:http:
W: Failed to fetch http://10.203.1.225/ubuntu/arm64/ubuntu-ports/dists/bionic-backports/InRelease  Unable to connect to 10.203.1.225:http:
W: Some index files failed to download. They have been ignored, or old ones used instead.
+ '[' -n '' ']'
+ sh -c 'apt-get install -y -qq --no-install-recommends docker-ce >/dev/null'
+ sh -c 'docker version'
Client: Docker Engine - Community
 Version:           20.10.1
 API version:       1.41
 Go version:        go1.13.15
 Git commit:        831ebea
 Built:             Tue Dec 15 04:34:49 2020
 OS/Arch:           linux/arm64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.1
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       f001486
  Built:            Tue Dec 15 04:32:48 2020
  OS/Arch:          linux/arm64
  Experimental:     false
 containerd:
  Version:          1.4.3
  GitCommit:        269548fa27e0089a8b8278fc4fc781d7f65a939b
 runc:
  Version:          1.0.0-rc92
  GitCommit:        ff819c7e9184c13b7c2607fe6c30ae19403a7aff
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
If you would like to use Docker as a non-root user, you should now consider
adding your user to the "docker" group with something like:

  sudo usermod -aG docker your-user

Remember that you will have to log out and back in for this to take effect!

WARNING: Adding a user to the "docker" group will grant the ability to run
         containers which can be used to obtain root privileges on the
         docker host.
         Refer to https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface
         for more information.

kubeadm, kubectl, kubelet

A specific version must be installed. Without a version pin, apt pulls the latest release (1.20.1 at the time), but the image repositories had no arm64 images for that version. After testing, version 1.18.0 was used.
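
Once the pinned packages are installed (below), it may also be worth holding them so a routine apt upgrade cannot silently move past a version whose arm64 images exist. This is a standard apt technique, not part of the original procedure:

apt-mark hold kubeadm kubectl kubelet   # refuse upgrades until apt-mark unhold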

Add the apt GPG key

Command

curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -

Result

root@horatio:~# curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1974  100  1974    0     0  11963      0 --:--:-- --:--:-- --:--:-- 11891
OK

Add the apt source

Command

cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF

Result

root@horatio:~# cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
> deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
> EOF
root@horatio:~#

Update the package index

root@horatio:~# apt update
Hit:1 https://mirrors.aliyun.com/docker-ce/linux/ubuntu bionic InRelease
Get:2 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial InRelease [9,383 B]                                                                                               
Hit:3 http://cn.ports.ubuntu.com/ubuntu-ports bionic InRelease                                                                                                                      
Hit:4 http://ports.ubuntu.com/ubuntu-ports bionic-security InRelease                          
Ign:5 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main arm64 Packages         
Hit:6 http://cn.ports.ubuntu.com/ubuntu-ports bionic-updates InRelease   
Get:5 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main arm64 Packages [41.9 kB]
Hit:7 http://cn.ports.ubuntu.com/ubuntu-ports bionic-backports InRelease            
Fetched 51.3 kB in 1s (49.3 kB/s)                  
Reading package lists... Done
Building dependency tree       
Reading state information... Done
173 packages can be upgraded. Run 'apt list --upgradable' to see them.
root@horatio:~#

Install the packages

Command

apt install kubeadm=1.18.0-00 kubectl=1.18.0-00 kubelet=1.18.0-00 

Result

root@horatio:~# apt install kubeadm=1.18.0-00 kubectl=1.18.0-00 kubelet=1.18.0-00  
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following additional packages will be installed:
  kubernetes-cni
The following NEW packages will be installed:
  kubeadm kubectl kubelet kubernetes-cni
0 upgraded, 4 newly installed, 0 to remove and 173 not upgraded.
Need to get 55.0 MB of archives.
After this operation, 258 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main arm64 kubernetes-cni arm64 0.8.7-00 [23.1 MB]
Get:2 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main arm64 kubelet arm64 1.18.0-00 [17.2 MB]
Get:3 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main arm64 kubectl arm64 1.18.0-00 [7,622 kB]
Get:4 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main arm64 kubeadm arm64 1.18.0-00 [7,073 kB]
Fetched 55.0 MB in 4s (14.5 MB/s)  
Selecting previously unselected package kubernetes-cni.
(Reading database ... 67551 files and directories currently installed.)
Preparing to unpack .../kubernetes-cni_0.8.7-00_arm64.deb ...
Unpacking kubernetes-cni (0.8.7-00) ...
Selecting previously unselected package kubelet.
Preparing to unpack .../kubelet_1.18.0-00_arm64.deb ...
Unpacking kubelet (1.18.0-00) ...
Selecting previously unselected package kubectl.
Preparing to unpack .../kubectl_1.18.0-00_arm64.deb ...
Unpacking kubectl (1.18.0-00) ...
Selecting previously unselected package kubeadm.
Preparing to unpack .../kubeadm_1.18.0-00_arm64.deb ...
Unpacking kubeadm (1.18.0-00) ...
Setting up kubernetes-cni (0.8.7-00) ...
Setting up kubelet (1.18.0-00) ...
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
Setting up kubectl (1.18.0-00) ...
Setting up kubeadm (1.18.0-00) ...
root@horatio:~#

Initialize the cluster

Prepare the images

The arm64 images must be pulled in advance: during initialization kubeadm does not pull arm64 images, and the x86 images it pulls instead leave the cluster unable to initialize.
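
For reference, the individual pull and tag commands in the next two subsections can be collapsed into one loop; a sketch assuming the same image list (note the coredns image comes from tkestack and, as below, keeps the -arm64 suffix in its target name):

for img in kube-apiserver:v1.18.1 kube-controller-manager:v1.18.1 \
           kube-scheduler:v1.18.1 kube-proxy:v1.18.1 etcd:3.4.3-0 pause:3.2; do
  name=${img%%:*}; tag=${img##*:}
  docker pull "mirrorgcrio/${name}-arm64:${tag}"                        # pull the arm64 build
  docker tag  "mirrorgcrio/${name}-arm64:${tag}" \
              "registry.aliyuncs.com/google_containers/${name}:${tag}"  # re-tag to the name kubeadm expects
done
docker pull tkestack/coredns-arm64:1.6.9
docker tag  tkestack/coredns-arm64:1.6.9 registry.aliyuncs.com/google_containers/coredns-arm64:1.6.9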

Pull images

Command

docker pull mirrorgcrio/kube-apiserver-arm64:v1.18.1
docker pull mirrorgcrio/kube-controller-manager-arm64:v1.18.1
docker pull mirrorgcrio/kube-scheduler-arm64:v1.18.1
docker pull mirrorgcrio/kube-proxy-arm64:v1.18.1
docker pull mirrorgcrio/etcd-arm64:3.4.3-0
docker pull mirrorgcrio/pause-arm64:3.2
docker pull tkestack/coredns-arm64:1.6.9

Result

root@horatio:~# docker pull mirrorgcrio/kube-apiserver-arm64:v1.18.1
docker pull mirrorgcrio/kube-controller-manager-arm64:v1.18.1
docker pull mirrorgcrio/kube-scheduler-arm64:v1.18.1
docker pull mirrorgcrio/kube-proxy-arm64:v1.18.1
docker pull mirrorgcrio/etcd-arm64:3.4.3-0
docker pull mirrorgcrio/pause-arm64:3.2
v1.18.1: Pulling from mirrorgcrio/kube-apiserver-arm64
ed2e7fd67416: Pull complete 
6df437f7efad: Pull complete 
Digest: sha256:29165d4e875c996bce3790226ac90cc8f7db50b2c952929522d81106a85f3226
Status: Downloaded newer image for mirrorgcrio/kube-apiserver-arm64:v1.18.1
docker.io/mirrorgcrio/kube-apiserver-arm64:v1.18.1
root@horatio:~# docker pull mirrorgcrio/kube-controller-manager-arm64:v1.18.1
v1.18.1: Pulling from mirrorgcrio/kube-controller-manager-arm64
ed2e7fd67416: Already exists 
8e08af3f3336: Pull complete 
Digest: sha256:a2150210ea0b5a62fbcae903467e4c20992c03e5a484ff3b9230f41a6507f39b
Status: Downloaded newer image for mirrorgcrio/kube-controller-manager-arm64:v1.18.1
docker.io/mirrorgcrio/kube-controller-manager-arm64:v1.18.1
root@horatio:~# docker pull mirrorgcrio/kube-scheduler-arm64:v1.18.1
v1.18.1: Pulling from mirrorgcrio/kube-scheduler-arm64
ed2e7fd67416: Already exists 
79c79f4c4434: Pull complete 
Digest: sha256:1aebd94ad45b5204a89f05313838352c4fc2861da7a9ab97f3c41a37aaaa7119
Status: Downloaded newer image for mirrorgcrio/kube-scheduler-arm64:v1.18.1
docker.io/mirrorgcrio/kube-scheduler-arm64:v1.18.1
root@horatio:~# docker pull mirrorgcrio/kube-proxy-arm64:v1.18.1
v1.18.1: Pulling from mirrorgcrio/kube-proxy-arm64
ed2e7fd67416: Already exists 
d033d9855b96: Pull complete 
7bd91d4a9747: Pull complete 
6c3c2821ac4d: Pull complete 
b8ac04191d92: Pull complete 
355857a7a906: Pull complete 
ea9711a0e51a: Pull complete 
Digest: sha256:1cd85e909859001b68022f269c6ce223370cdb7889d79debd9cb87626a8280fb
Status: Downloaded newer image for mirrorgcrio/kube-proxy-arm64:v1.18.1
docker.io/mirrorgcrio/kube-proxy-arm64:v1.18.1
root@horatio:~# docker pull mirrorgcrio/etcd-arm64:3.4.3-0
3.4.3-0: Pulling from mirrorgcrio/etcd-arm64
9f9ba9541db2: Pull complete 
6feb97f21dc3: Pull complete 
de473e163c10: Pull complete 
Digest: sha256:fbc0f8b4861d23c9989edf877df7ae2533083e98c05687eb22b00422b9825c2f
Status: Downloaded newer image for mirrorgcrio/etcd-arm64:3.4.3-0
docker.io/mirrorgcrio/etcd-arm64:3.4.3-0
root@horatio:~# docker pull mirrorgcrio/pause-arm64:3.2
3.2: Pulling from mirrorgcrio/pause-arm64
84f9968a3238: Pull complete 
Digest: sha256:31d3efd12022ffeffb3146bc10ae8beb890c80ed2f07363515580add7ed47636
Status: Downloaded newer image for mirrorgcrio/pause-arm64:3.2
docker.io/mirrorgcrio/pause-arm64:3.2
root@horatio:~# docker pull tkestack/coredns-arm64:1.6.9
1.6.9: Pulling from tkestack/coredns-arm64
c6568d217a00: Pull complete 
9ee498572cc0: Pull complete 
Digest: sha256:0b24ee66a96fb4142d4d0d7014f78507dda2a8da28567e858461eef5a0734402
Status: Downloaded newer image for tkestack/coredns-arm64:1.6.9
docker.io/tkestack/coredns-arm64:1.6.9

Re-tag the images

Command

docker tag mirrorgcrio/kube-apiserver-arm64:v1.18.1 registry.aliyuncs.com/google_containers/kube-apiserver:v1.18.1
docker tag mirrorgcrio/kube-scheduler-arm64:v1.18.1 registry.aliyuncs.com/google_containers/kube-scheduler:v1.18.1
docker tag mirrorgcrio/kube-controller-manager-arm64:v1.18.1 registry.aliyuncs.com/google_containers/kube-controller-manager:v1.18.1
docker tag mirrorgcrio/kube-proxy-arm64:v1.18.1 registry.aliyuncs.com/google_containers/kube-proxy:v1.18.1
docker tag mirrorgcrio/etcd-arm64:3.4.3-0 registry.aliyuncs.com/google_containers/etcd:3.4.3-0
docker tag mirrorgcrio/pause-arm64:3.2 registry.aliyuncs.com/google_containers/pause:3.2
docker tag tkestack/coredns-arm64:1.6.9 registry.aliyuncs.com/google_containers/coredns-arm64:1.6.9

Result

root@horatio:~# docker tag mirrorgcrio/kube-apiserver-arm64:v1.18.1 registry.aliyuncs.com/google_containers/kube-apiserver:v1.18.1
root@horatio:~# docker tag mirrorgcrio/kube-scheduler-arm64:v1.18.1 registry.aliyuncs.com/google_containers/kube-scheduler:v1.18.1
root@horatio:~# docker tag mirrorgcrio/kube-controller-manager-arm64:v1.18.1 registry.aliyuncs.com/google_containers/kube-controller-manager:v1.18.1
root@horatio:~# docker tag mirrorgcrio/kube-proxy-arm64:v1.18.1 registry.aliyuncs.com/google_containers/kube-proxy:v1.18.1
root@horatio:~# docker tag mirrorgcrio/etcd-arm64:3.4.3-0 registry.aliyuncs.com/google_containers/etcd:3.4.3-0
root@horatio:~# docker tag mirrorgcrio/pause-arm64:3.2 registry.aliyuncs.com/google_containers/pause:3.2
root@horatio:~# docker tag tkestack/coredns-arm64:1.6.9 registry.aliyuncs.com/google_containers/coredns-arm64:1.6.9
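
A quick sanity check that every target tag now exists; each registry.aliyuncs.com/google_containers entry should share an IMAGE ID with its mirrorgcrio or tkestack source:

docker images | grep -E 'google_containers|mirrorgcrio|tkestack'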

Initialize

Initialization must target the version of the images that were pulled in advance.

Run kubeadm init

Command

kubeadm init --apiserver-advertise-address=10.203.1.19 --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v1.18.1

Result

root@horatio:~# kubeadm init --apiserver-advertise-address=10.203.1.19 --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v1.18.1
W1223 16:41:24.406091   13803 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.1
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.1. Latest validated version: 19.03
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [horatio kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.203.1.19]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [horatio localhost] and IPs [10.203.1.19 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [horatio localhost] and IPs [10.203.1.19 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W1223 16:41:32.111535   13803 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W1223 16:41:32.112957   13803 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 22.503593 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node horatio as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node horatio as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: igj7rd.xlm267318e42bjt5
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.203.1.19:6443 --token igj7rd.xlm267318e42bjt5 \
    --discovery-token-ca-cert-hash sha256:0d7a42c18ddbe1a0cb1d97e9758904551cf2d5d546fb8f1175391173309865ac
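
The join token printed above expires after 24 hours by default. If a worker node joins later than that, a fresh join command can be generated on the control plane with standard kubeadm tooling:

kubeadm token create --print-join-command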

Configure the kubectl tool

Command

 mkdir -p /root/.kube && cp /etc/kubernetes/admin.conf /root/.kube/config

Result

root@horatio:~# mkdir -p /root/.kube && \
> cp /etc/kubernetes/admin.conf /root/.kube/config
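
kubectl should now reach the API server. Note that the node will report NotReady until a pod network (flannel, next section) is deployed:

kubectl get nodes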

Deploy the flannel network

Create a kube-flannel.yaml file with the following content:

---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.13.1-rc1
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.13.1-rc1
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg

Run kubectl apply

root@horatio:~# kubectl apply -f kube-flannel.yaml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

Check the status

root@horatio:~# kubectl get all -n kube-system                
NAME                                  READY   STATUS    RESTARTS   AGE
pod/coredns-7c98c5f7b9-n8l9g          1/1     Running   0          10m
pod/coredns-7c98c5f7b9-vqcqd          1/1     Running   0          10m
pod/etcd-horatio                      1/1     Running   0          32m
pod/kube-apiserver-horatio            1/1     Running   0          32m
pod/kube-controller-manager-horatio   1/1     Running   0          32m
pod/kube-flannel-ds-bjhzf             1/1     Running   0          26m
pod/kube-proxy-8rfxt                  1/1     Running   0          32m
pod/kube-scheduler-horatio            1/1     Running   0          32m

NAME               TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
service/kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   32m

NAME                             DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
daemonset.apps/kube-flannel-ds   1         1         1       1            1           <none>                   26m
daemonset.apps/kube-proxy        1         1         1       1            1           kubernetes.io/os=linux   32m

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/coredns   2/2     2            2           32m

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/coredns-7c98c5f7b9   2         2         2       10m
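
Beyond kubectl get all, two quick checks that flannel is actually wired up on the node; the flannel.1 VXLAN device is created by flanneld when it starts:

kubectl get nodes           # the node should now be Ready
ip -d link show flannel.1   # the VXLAN interface created by flanneld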

Problems

Running "kubeadm init" does not download arm64 images to deploy the cluster

Description

Initialization kept failing with the error below:

root@horatio:~# kubeadm init --apiserver-advertise-address=10.203.1.19 --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16  
[init] Using Kubernetes version: v1.20.1
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.1. Latest validated version: 19.03
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [horatio kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.203.1.19]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [horatio localhost] and IPs [10.203.1.19 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [horatio localhost] and IPs [10.203.1.19 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

        Unfortunately, an error has occurred:
                timed out waiting for the condition

        This error is likely caused by:
                - The kubelet is not running
                - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

        If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
                - 'systemctl status kubelet'
                - 'journalctl -xeu kubelet'

        Additionally, a control plane component may have crashed or exited when started by the container runtime.
        To troubleshoot, list all containers using your preferred container runtimes CLI.

        Here is one example how you may list all Kubernetes containers running in docker:
                - 'docker ps -a | grep kube | grep -v pause'
                Once you have found the failing container, you can inspect its logs with:
                - 'docker logs CONTAINERID'

error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

Solution

Check the kubelet status

It is running normally:

root@horatio:~# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Wed 2020-12-23 13:13:50 CST; 4min 42s ago
     Docs: https://kubernetes.io/docs/home/
 Main PID: 20359 (kubelet)
    Tasks: 62 (limit: 14745)

Check the pod status

The failing container's logs show an error:

root@horatio:~# docker logs 858c5d4a664c
standard_init_linux.go:219: exec user process caused: exec format error

After consulting the relevant material, the cause turned out to be that the downloaded images were x86 builds; the arm64 versions have to be pulled first and then tagged to the names initialization expects.
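
An exec format error on arm64 almost always means the binary inside the image was built for another CPU architecture. This can be confirmed directly, since docker image inspect exposes the Os and Architecture fields:

docker image inspect -f '{{.Os}}/{{.Architecture}}' \
    registry.aliyuncs.com/google_containers/kube-apiserver:v1.20.1
# prints linux/amd64 for the wrongly-pulled image, linux/arm64 for a correct one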

kubeadm version too new, so a lower cluster version cannot be specified for initialization

Description

The "kubeadm config images list" command shows the images the current kubeadm version needs to initialize a k8s cluster:

root@horatio:~# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.20.1
k8s.gcr.io/kube-controller-manager:v1.20.1
k8s.gcr.io/kube-scheduler:v1.20.1
k8s.gcr.io/kube-proxy:v1.20.1
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
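
kubeadm can also list the images for a specific target version, which is handy for checking availability before committing to one:

kubeadm config images list --kubernetes-version v1.18.1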

However, the repositories did not yet carry arm64 images for 1.20.1, so after comparing what was available, version 1.18 was chosen for deployment.
After pulling the images, init was run with version 1.18 specified.
It failed with an error: only versions >= 1.19.0 can be specified.

root@horatio:~# kubeadm init --apiserver-advertise-address=10.203.1.19 --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v1.18.1
this version of kubeadm only supports deploying clusters with the control plane version >= 1.19.0. Current version: v1.18.1
To see the stack trace of this error execute with --v=5 or higher

Solution

Further research showed that each kubeadm release can only initialize k8s clusters within a certain version range, so the matching kubeadm version has to be downloaded.

Uninstall the current kubeadm and other related packages

Command

apt --purge -y remove kubeadm kubectl kubelet

Result

root@horatio:~# apt --purge remove kubeadm
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following package was automatically installed and is no longer required:
  cri-tools
Use 'apt autoremove' to remove it.
The following packages will be REMOVED:
  kubeadm*
0 upgraded, 0 newly installed, 1 to remove and 173 not upgraded.
After this operation, 36.2 MB disk space will be freed.
Do you want to continue? [Y/n] y
(Reading database ... 67576 files and directories currently installed.)
Removing kubeadm (1.20.1-00) ...
(Reading database ... 67575 files and directories currently installed.)
Purging configuration files for kubeadm (1.20.1-00) ...
root@horatio:~# apt --purge remove kubectl
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following package was automatically installed and is no longer required:
  cri-tools
Use 'apt autoremove' to remove it.
The following packages will be REMOVED:
  kubectl*
0 upgraded, 0 newly installed, 1 to remove and 173 not upgraded.
After this operation, 37.2 MB disk space will be freed.
Do you want to continue? [Y/n] y
(Reading database ... 67573 files and directories currently installed.)
Removing kubectl (1.20.1-00) ...
root@horatio:~# apt --purge remove kubernetes-cni kubelet
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following packages were automatically installed and are no longer required:
  conntrack cri-tools socat
Use 'apt autoremove' to remove them.
The following packages will be REMOVED:
  kubelet* kubernetes-cni*
0 upgraded, 0 newly installed, 2 to remove and 173 not upgraded.
After this operation, 177 MB disk space will be freed.
Do you want to continue? [Y/n] y
(Reading database ... 67572 files and directories currently installed.)
Removing kubelet (1.20.1-00) ...
Warning: The unit file, source configuration file or drop-ins of kubelet.service changed on disk. Run 'systemctl daemon-reload' to reload units.
Removing kubernetes-cni (0.8.7-00) ...
dpkg: warning: while removing kubernetes-cni, directory '/opt' not empty so not removed
(Reading database ... 67551 files and directories currently installed.)
Purging configuration files for kubelet (1.20.1-00) ..
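
Per the warning in the removal output above, reload the systemd units before reinstalling:

systemctl daemon-reload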

Install the specified versions of kubeadm and related packages

Command

 apt install kubeadm=1.18.0-00 kubectl=1.18.0-00 kubelet=1.18.0-00

Result

root@horatio:~# apt install kubeadm=1.18.0-00 kubectl=1.18.0-00 kubelet=1.18.0-00  
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following additional packages will be installed:
  kubernetes-cni
The following NEW packages will be installed:
  kubeadm kubectl kubelet kubernetes-cni
0 upgraded, 4 newly installed, 0 to remove and 173 not upgraded.
Need to get 55.0 MB of archives.
After this operation, 258 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main arm64 kubernetes-cni arm64 0.8.7-00 [23.1 MB]
Get:2 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main arm64 kubelet arm64 1.18.0-00 [17.2 MB]
Get:3 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main arm64 kubectl arm64 1.18.0-00 [7,622 kB]
Get:4 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main arm64 kubeadm arm64 1.18.0-00 [7,073 kB]
Fetched 55.0 MB in 4s (14.5 MB/s)  
Selecting previously unselected package kubernetes-cni.
(Reading database ... 67551 files and directories currently installed.)
Preparing to unpack .../kubernetes-cni_0.8.7-00_arm64.deb ...
Unpacking kubernetes-cni (0.8.7-00) ...
Selecting previously unselected package kubelet.
Preparing to unpack .../kubelet_1.18.0-00_arm64.deb ...
Unpacking kubelet (1.18.0-00) ...
Selecting previously unselected package kubectl.
Preparing to unpack .../kubectl_1.18.0-00_arm64.deb ...
Unpacking kubectl (1.18.0-00) ...
Selecting previously unselected package kubeadm.
Preparing to unpack .../kubeadm_1.18.0-00_arm64.deb ...
Unpacking kubeadm (1.18.0-00) ...
Setting up kubernetes-cni (0.8.7-00) ...
Setting up kubelet (1.18.0-00) ...
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
Setting up kubectl (1.18.0-00) ...
Setting up kubeadm (1.18.0-00) ...

Miscellaneous

List currently installed packages

dpkg --list

List the kubeadm versions available for download

apt-cache show kubeadm | grep Version
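
apt-cache madison shows the same information more compactly, one line per available version:

apt-cache madison kubeadm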
