I. Server Architecture
Environment: CentOS 7, kernel 3.10.0-957.el7.x86_64
Name | IP | Services |
---|---|---|
master | 192.168.247.130 | kubelet, kubeadm, kubectl, kubernetes-cni, docker, flannel |
node1 | 192.168.247.131 | kubelet, kubeadm, kubectl, kubernetes-cni |
node2 | 192.168.247.132 | kubelet, kubeadm, kubectl, kubernetes-cni |
II. Installing and Configuring K8s (All Nodes)
1. Prerequisite: install and start Docker
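The Docker installation itself is not shown in this walkthrough; here is a minimal sketch, assuming Docker CE from the Aliyun docker-ce repo is acceptable (the repo URL and package choice are assumptions, not part of the original run):
# Install Docker CE from the Aliyun docker-ce mirror and start it
yum install -y yum-utils
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce
systemctl enable docker && systemctl start docker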
# Disable SELinux
[root@master ~]# setenforce 0
[root@master ~]# sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
# Disable swap
[root@master ~]# swapoff -a
[root@master ~]# yes | cp /etc/fstab /etc/fstab_bak
[root@master ~]# cat /etc/fstab_bak |grep -v swap > /etc/fstab
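An equivalent alternative to the cp/grep approach above is to comment the swap entry out in place (this assumes the only fstab lines mentioning "swap" are the swap mounts):
# Comment out every /etc/fstab line that mentions swap
sed -i '/swap/s/^/#/' /etc/fstab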
# Configure kernel parameters
[root@master ~]# vi /etc/sysctl.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.forwarding = 1
net.ipv6.conf.all.forwarding = 1
[root@master ~]# sysctl -p
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.forwarding = 1
net.ipv6.conf.all.forwarding = 1
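Note: the two bridge-nf-call settings only take effect when the br_netfilter kernel module is loaded; if sysctl -p reports "No such file or directory" for them, load the module first (the modules-load.d file is the standard way to persist it):
# Load the bridge netfilter module and persist it across reboots
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf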
# Configure the Aliyun yum mirror for Kubernetes packages
[root@master ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
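To confirm which package versions the mirror actually provides before pinning 1.16.2 below, the repo can be queried first:
# List all available kubelet/kubeadm/kubectl versions in the configured repos
yum list kubelet kubeadm kubectl --showduplicates | sort -r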
2. Master node
# Install kubelet, kubeadm, kubectl, kubernetes-cni
[root@master ~]# yum install -y kubelet-1.16.2 kubeadm-1.16.2 kubectl-1.16.2 kubernetes-cni-0.7.5
# Enable and start kubelet
[root@master ~]# systemctl enable kubelet && systemctl start kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
# List the required image versions
[root@master ~]# kubeadm config images list
W1105 09:44:45.595840 11838 version.go:101] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W1105 09:44:45.596854 11838 version.go:102] falling back to the local client version: v1.16.2
k8s.gcr.io/kube-apiserver:v1.16.2
k8s.gcr.io/kube-controller-manager:v1.16.2
k8s.gcr.io/kube-scheduler:v1.16.2
k8s.gcr.io/kube-proxy:v1.16.2
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.15-0
k8s.gcr.io/coredns:1.6.2
# Pull the images for the required versions directly from a domestic mirror and re-tag them (all nodes)
[root@master ~]# vi kubeadm.sh
Script contents:
#!/bin/bash
## Download the images from a domestic mirror and re-tag them with the k8s.gcr.io names
set -e
KUBE_VERSION=v1.16.2
KUBE_PAUSE_VERSION=3.1
ETCD_VERSION=3.3.15-0
CORE_DNS_VERSION=1.6.2
GCR_URL=k8s.gcr.io
ALIYUN_URL=registry.cn-hangzhou.aliyuncs.com/google_containers
images=(kube-proxy:${KUBE_VERSION}
kube-scheduler:${KUBE_VERSION}
kube-controller-manager:${KUBE_VERSION}
kube-apiserver:${KUBE_VERSION}
pause:${KUBE_PAUSE_VERSION}
pause-amd64:${KUBE_PAUSE_VERSION}
etcd:${ETCD_VERSION}
coredns:${CORE_DNS_VERSION})
for imageName in ${images[@]} ; do
  # Pull from the Aliyun mirror, re-tag to the k8s.gcr.io name kubeadm expects, then drop the mirror tag
  docker pull $ALIYUN_URL/$imageName
  docker tag $ALIYUN_URL/$imageName $GCR_URL/$imageName
  docker rmi $ALIYUN_URL/$imageName
done
# Run the script to pull the images
[root@master ~]# sh ./kubeadm.sh
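After the script finishes, it is worth verifying that every image now carries the k8s.gcr.io tag kubeadm expects:
# All seven images from "kubeadm config images list" should appear
docker images | grep k8s.gcr.io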
# Run on the master node to initialize k8s. Make sure the IP address below is this machine's IP.
# The --pod-network-cidr option sets the subnet range the cluster uses for pods. Note: 192.168.0.0/16 is kept here as in the original run, but the flannel manifest applied below defaults to 10.244.0.0/16 (as the flannel.1 addresses later confirm), so 10.244.0.0/16 would be the more consistent value for flannel.
[root@master ~]# sudo kubeadm init --apiserver-advertise-address 192.168.247.130 --kubernetes-version=v1.16.2 --pod-network-cidr=192.168.0.0/16
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.247.130:6443 --token nla9a9.wz320s15z4zopwgv \
--discovery-token-ca-cert-hash sha256:3168a2e3963d9f35e590d5459f59c85393b6b8a42abeb2377849886ab82d8ef0
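The join token printed here is valid for 24 hours by default; if it expires before a node joins (note that section IV below uses a different token), a fresh join command can be generated on the master:
# Create a new token and print the full kubeadm join command
kubeadm token create --print-join-command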
# Set up the kubectl configuration for the current (root) user so kubectl is authorized
[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
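Since these commands are run as root anyway, an equivalent per-shell alternative is to point kubectl straight at the admin kubeconfig:
# Alternative to copying admin.conf into ~/.kube
export KUBECONFIG=/etc/kubernetes/admin.conf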
III. Installing Flannel
# Install Flannel (master node only)
[root@master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
# Watch the pods; wait until every pod's STATUS is Running, then press Ctrl+C to exit
[root@master ~]# watch kubectl get pods --all-namespaces
Every 2.0s: kubectl get pods --all-namespaces Thu Nov 7 12:26:46 2019
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-5644d7b6d9-flr7l 1/1 Running 0 11m
kube-system coredns-5644d7b6d9-l79hw 1/1 Running 0 11m
kube-system etcd-master 1/1 Running 0 10m
kube-system kube-apiserver-master 1/1 Running 0 10m
kube-system kube-controller-manager-master 1/1 Running 0 10m
kube-system kube-flannel-ds-amd64-tppb8 1/1 Running 0 73s
kube-system kube-proxy-jgbv8 1/1 Running 0 11m
kube-system kube-scheduler-master 1/1 Running 0 10m
# Check the network interfaces
[root@master ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:53:5a:8e brd ff:ff:ff:ff:ff:ff
inet 192.168.247.130/24 brd 192.168.247.255 scope global noprefixroute dynamic ens33
valid_lft 5268044sec preferred_lft 5268044sec
inet6 fe80::7888:4525:c7b7:73e6/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:aa:20:6c:3b brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
link/ether da:d2:a6:6a:d8:c3 brd ff:ff:ff:ff:ff:ff
inet 10.244.0.0/32 scope global flannel.1
valid_lft forever preferred_lft forever
inet6 fe80::d8d2:a6ff:fe6a:d8c3/64 scope link
valid_lft forever preferred_lft forever
5: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
link/ether 0a:ad:78:04:25:0f brd ff:ff:ff:ff:ff:ff
inet 10.244.0.1/24 scope global cni0
valid_lft forever preferred_lft forever
inet6 fe80::8ad:78ff:fe04:250f/64 scope link
valid_lft forever preferred_lft forever
6: veth05805c5c@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default
link/ether 9a:c4:bf:89:55:9a brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::98c4:bfff:fe89:559a/64 scope link
valid_lft forever preferred_lft forever
7: vetha2ba003a@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default
link/ether 2a:7a:7f:04:c3:d1 brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet6 fe80::287a:7fff:fe04:c3d1/64 scope link
valid_lft forever preferred_lft forever
# Check the nodes
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 13m v1.16.2
The following components run on the k8s master node:
Kubernetes API Server: exposes the HTTP RESTful API and is the entry point for controlling the cluster
Kubernetes Controller Manager: the control center for resource objects
Kubernetes Scheduler: responsible for scheduling pods
The kubelet restarts every few seconds, because it is in a crash loop waiting for kubeadm to tell it what to do. This crash loop is normal; continue with the next steps and the kubelet will start running properly.
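To observe this loop (and confirm the kubelet settles down after kubeadm init), its logs can be followed through journald:
# Follow the kubelet unit logs
journalctl -u kubelet -f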
IV. Creating the Cluster
# node1 joins the cluster
[root@node1 ~]# kubeadm join 192.168.247.130:6443 --token hw3ejo.rsrdyi73hl7yixvs \
> --discovery-token-ca-cert-hash sha256:3b0c89163746d0a3f6b2c6dc190381def07963bc3637fa5e4e5ea9171b04aaa0
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.4. Latest validated version: 18.09
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 23m v1.16.2
node1 NotReady <none> 99s v1.16.2
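node1 stays NotReady until the flannel DaemonSet pod is scheduled and running on it; this can be followed until the status flips:
# Watch node status until node1 becomes Ready
kubectl get nodes -w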
# Copy the admin kubeconfig to the worker nodes
[root@master ~]# scp /etc/kubernetes/admin.conf node1:/etc/kubernetes/admin.conf
[root@master ~]# scp /etc/kubernetes/admin.conf node2:/etc/kubernetes/admin.conf
# node2 joins the cluster
[root@node2 ~]# kubeadm join 192.168.247.130:6443 --token hw3ejo.rsrdyi73hl7yixvs \
> --discovery-token-ca-cert-hash sha256:3b0c89163746d0a3f6b2c6dc190381def07963bc3637fa5e4e5ea9171b04aaa0
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.4. Latest validated version: 18.09
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
# Check the nodes
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 62m v1.16.2
node1 Ready <none> 40m v1.16.2
node2 Ready <none> 3m6s v1.16.2
# Check docker info for the cgroup driver
[root@master ~]# docker info
Cgroup Driver: systemd
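The join warnings above show nodes where Docker still uses the cgroupfs driver. To switch to the recommended systemd driver, set it in /etc/docker/daemon.json (a standard Docker option; Docker must be restarted afterwards):
# Configure Docker to use the systemd cgroup driver
cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker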
# Check component status
[root@master ~]# kubectl get cs
NAME AGE
scheduler <unknown>
controller-manager <unknown>
etcd-0 <unknown>
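The <unknown> AGE column is a known display regression of kubectl get cs in v1.16; the health conditions are still present in the full objects:
# The Healthy conditions are visible in the yaml output
kubectl get cs -o yaml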
# Check the currently available API versions
[root@master ~]# kubectl api-versions
admissionregistration.k8s.io/v1
admissionregistration.k8s.io/v1beta1
apiextensions.k8s.io/v1
apiextensions.k8s.io/v1beta1
apiregistration.k8s.io/v1
apiregistration.k8s.io/v1beta1
apps/v1
authentication.k8s.io/v1
authentication.k8s.io/v1beta1
authorization.k8s.io/v1
authorization.k8s.io/v1beta1
autoscaling/v1
autoscaling/v2beta1
autoscaling/v2beta2
batch/v1
batch/v1beta1
certificates.k8s.io/v1beta1
coordination.k8s.io/v1
coordination.k8s.io/v1beta1
events.k8s.io/v1beta1
extensions/v1beta1
networking.k8s.io/v1
networking.k8s.io/v1beta1
node.k8s.io/v1beta1
policy/v1beta1
rbac.authorization.k8s.io/v1
rbac.authorization.k8s.io/v1beta1
scheduling.k8s.io/v1
scheduling.k8s.io/v1beta1
storage.k8s.io/v1
storage.k8s.io/v1beta1
v1
# Worker node network interfaces
[root@node1 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:00:21:f4 brd ff:ff:ff:ff:ff:ff
inet 192.168.247.131/24 brd 192.168.247.255 scope global dynamic ens33
valid_lft 5264385sec preferred_lft 5264385sec
inet6 fe80::20c:29ff:fe00:21f4/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
link/ether 02:42:cd:49:46:98 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN
link/ether ba:33:87:c5:26:91 brd ff:ff:ff:ff:ff:ff
inet 10.244.1.0/32 scope global flannel.1
valid_lft forever preferred_lft forever
V. Common Issues
- Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")
This error indicates the certificates may not match. Re-copy the admin kubeconfig:
[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
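If the error persists, a stale kubeconfig left over from a previous kubeadm init is a likely cause; since cp -i will not overwrite an existing file, remove it before repeating the copy above:
# Remove the stale config, then re-copy admin.conf
rm -f $HOME/.kube/config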
- Master node: runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
The network plugin has not been installed:
[root@master ~]# kubectl create -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
- Worker node: Unable to update cni config: no networks found in /etc/cni/net.d
The required images have not been pulled.
# Run the image-pull script
[root@node2 ~]# sh ./kubeadm.sh
- (Worker node) The connection to the server localhost:8080 was refused - did you specify the right host or port?
This happens because kubectl needs the kubernetes-admin credentials to run. To fix it, copy /etc/kubernetes/admin.conf from the master node to the same path on the worker node, then set the environment variable:
[root@master ~]# scp /etc/kubernetes/admin.conf node1:/etc/kubernetes/admin.conf
root@node1's password:
admin.conf 100% 5455 1.9MB/s 00:00
# Set the environment variable
[root@node1 ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
# Make it take effect immediately
[root@node1 ~]# source ~/.bash_profile