I. Install Docker
Detailed steps are available at:
http://doc.loongnix.org/web/#/50?page_id=148
Install from the command line:
yum install docker-ce -y
Start the service:
systemctl start docker.service
Check the version:
docker version
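To have Docker start automatically at boot as well (the deployment script at the end of this article does the same), the service can be enabled and checked; a minimal sketch:
systemctl enable docker.service    # start Docker automatically at boot
systemctl status docker.service    # confirm the service is active (running)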
II. Deploy Kubernetes
Detailed steps are available at:
http://doc.loongnix.org/web/#/71?page_id=232
1. Package downloads
Obtain the following packages on both the master and the node:
kubeadm-1.18.3-0.lns7.mips64el.rpm
kubectl-1.18.3-0.lns7.mips64el.rpm
kubelet-1.18.3-0.lns7.mips64el.rpm
kubernetes-cni-0.8.6-0.lns7.mips64el.rpm
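The packages can be fetched with wget from the Loongnix mirror, using the same URLs as the deployment script at the end of this article; a sketch that downloads all four in one loop:
base=http://ftp.loongnix.org/os/loongnix-server/1.7/virt/mips64el/kubernetes118
for pkg in kubeadm-1.18.3-0.lns7.mips64el.rpm \
           kubectl-1.18.3-0.lns7.mips64el.rpm \
           kubelet-1.18.3-0.lns7.mips64el.rpm \
           kubernetes-cni-0.8.6-0.lns7.mips64el.rpm; do
    wget "$base/$pkg"    # download each package into the current directory
done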
2. Pull the images
docker pull loongnixk8s/node:v3.13.2
docker pull loongnixk8s/cni:v3.13.2
docker pull loongnixk8s/pod2daemon-flexvol:v3.13.2
docker pull loongnixk8s/kube-controllers:v3.13.2
docker pull loongnixk8s/kube-apiserver-mips64le:v1.18.3
docker pull loongnixk8s/kube-controller-manager-mips64le:v1.18.3
docker pull loongnixk8s/kube-proxy-mips64le:v1.18.3
docker pull loongnixk8s/kube-scheduler-mips64le:v1.18.3
docker pull loongnixk8s/pause:3.2
docker pull loongnixk8s/coredns:1.6.5
docker pull loongnixk8s/etcd:3.3.12
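The pulls can also be scripted; a minimal sketch that loops over the list above and flags any failure:
for img in node:v3.13.2 cni:v3.13.2 pod2daemon-flexvol:v3.13.2 kube-controllers:v3.13.2 \
           kube-apiserver-mips64le:v1.18.3 kube-controller-manager-mips64le:v1.18.3 \
           kube-proxy-mips64le:v1.18.3 kube-scheduler-mips64le:v1.18.3 \
           pause:3.2 coredns:1.6.5 etcd:3.3.12; do
    docker pull "loongnixk8s/$img" || echo "pull failed: $img"
done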
3. In /etc/hosts, add the physical IP and hostname of the master and of the node, for example:
10.130.0.125 master001
10.130.0.71 node001
Set the content of /etc/hostname on the master node to: master001
Set the content of /etc/hostname on the node to: node001
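Equivalently, hostnamectl writes the name into /etc/hostname and applies it immediately:
hostnamectl set-hostname master001    # run on the master
hostnamectl set-hostname node001      # run on the node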
4. Install the packages
[root@master001 ~]# cd /etc/kubernetes
[root@master001 kubernetes]# ls | grep rpm
kubeadm-1.18.3-0.mips64el.rpm
kubectl-1.18.3-0.mips64el.rpm
kubelet-1.18.3-0.mips64el.rpm
kubernetes-cni-0.8.6-0.mips64el.rpm
[root@master001 kubernetes]# rpm -ivh *.rpm
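It is worth confirming the installation and enabling kubelet at boot; a sketch (kubelet will crash-loop until kubeadm init or kubeadm join supplies its configuration, which is expected at this stage):
rpm -qa | grep kube                 # list the installed kubernetes packages
systemctl enable kubelet.service    # start kubelet automatically at boot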
5. Disable the firewall, swap, and SELinux
Run the following in a terminal to flush the firewall rules and check the result:
iptables -F && iptables -X && iptables -Z && iptables -L && systemctl stop iptables && systemctl status iptables
Run the following two commands to disable swap:
swapoff -a; sed -i -e /swap/d /etc/fstab
Run the following two commands to disable SELinux:
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
setenforce 0
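A quick check that all three changes took effect; a sketch:
free -m | grep -i swap    # the Swap line should read all zeroes
getenforce                # should print Permissive (or Disabled after a reboot)
iptables -L -n            # the chains should now be empty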
The steps above must be run on both the master and the node; the steps below are run on the master only.
6. Prepare the kubeadm configuration file
(1) Generate a configuration template:
kubeadm config print init-defaults > init_default.yaml
Edit the following fields in init_default.yaml so that they match the current deployment environment and version:
localAPIEndpoint:
  advertiseAddress: 10.130.0.125   # the master's host IP
  bindPort: 6443
...
imageRepository: loongnixk8s       # the private registry prefix
kind: ClusterConfiguration
kubernetesVersion: v1.18.3         # the Kubernetes version being deployed
networking:
  dnsDomain: cluster.local
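The same edits can be applied non-interactively; a hedged sketch with sed, assuming the template still contains kubeadm 1.18's stock defaults (advertiseAddress 1.2.3.4, imageRepository k8s.gcr.io, kubernetesVersion v1.18.0):
sed -i 's/advertiseAddress: 1.2.3.4/advertiseAddress: 10.130.0.125/' init_default.yaml
sed -i 's#imageRepository: k8s.gcr.io#imageRepository: loongnixk8s#' init_default.yaml
sed -i 's/kubernetesVersion: v1.18.0/kubernetesVersion: v1.18.3/' init_default.yaml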
(2) Run the following command to list the images and versions required by this kubeadm configuration:
[root@master001 kubernetes]# kubeadm config images list --config init_default.yaml
loongnixk8s/kube-apiserver:v1.18.3
loongnixk8s/kube-controller-manager:v1.18.3
loongnixk8s/kube-scheduler:v1.18.3
loongnixk8s/kube-proxy:v1.18.3
loongnixk8s/pause:3.2
loongnixk8s/etcd:3.4.3-0
loongnixk8s/coredns:1.6.7
(3) Rename the local images with docker tag so that they match the names kubeadm expects:
docker tag loongnixk8s/kube-apiserver-mips64le:v1.18.3 loongnixk8s/kube-apiserver:v1.18.3
docker tag loongnixk8s/kube-controller-manager-mips64le:v1.18.3 loongnixk8s/kube-controller-manager:v1.18.3
docker tag loongnixk8s/kube-scheduler-mips64le:v1.18.3 loongnixk8s/kube-scheduler:v1.18.3
docker tag loongnixk8s/kube-proxy-mips64le:v1.18.3 loongnixk8s/kube-proxy:v1.18.3
docker tag loongnixk8s/pause:3.2 loongnixk8s/pause:3.2
docker tag loongnixk8s/etcd:3.3.12 loongnixk8s/etcd:3.4.3-0
docker tag loongnixk8s/coredns:1.6.5 loongnixk8s/coredns:1.6.7
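To confirm that every required image is now present under the expected name, a sketch that cross-checks the kubeadm list against the local Docker images:
kubeadm config images list --config init_default.yaml | while read -r img; do
    docker image inspect "$img" >/dev/null 2>&1 || echo "missing: $img"
done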
7. Prepare the calico configuration file
Fetch the official calico manifest:
curl https://docs.projectcalico.org/archive/v3.13/manifests/calico.yaml -O
Edit the image references in calico.yaml so that they match the local images:
# It can be deleted if this is a fresh installation, or if you have already
# upgraded to use calico-ipam.
- name: upgrade-ipam
  image: loongnixk8s/cni:v3.13.2        # match the private registry prefix
--
# This container installs the CNI binaries
# and CNI network config file on each node.
- name: install-cni
  image: loongnixk8s/cni:v3.13.2        # match the private registry prefix
--
# Adds a Flex Volume Driver that creates a per-pod Unix Domain Socket to allow Dikastes
# to communicate with Felix over the Policy Sync API.
- name: flexvol-driver
  image: loongnixk8s/pod2daemon-flexvol:v3.13.2   # match the private registry prefix
--
# container programs network policy and routes on each
# host.
- name: calico-node
  image: loongnixk8s/node:v3.13.2       # match the private registry prefix
--
priorityClassName: system-cluster-critical
containers:
  - name: calico-kube-controllers
    image: loongnixk8s/kube-controllers:v3.13.2   # match the private registry prefix
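Rather than editing each image line by hand, the substitution can be scripted; a hedged sketch, assuming the upstream v3.13 manifest references all five images under the calico/ prefix:
sed -i 's#image: calico/#image: loongnixk8s/#' calico.yaml
grep 'image:' calico.yaml    # verify every image now uses the loongnixk8s prefix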
Once the master has been initialized and kubectl configured (step 8 below), apply the manifest:
kubectl apply -f calico.yaml
8. Initialize the master node
(1) Run kubeadm init:
[root@master001 kubernetes]# kubeadm init --config=init_default.yaml
The terminal output looks like this:
W0702 10:54:50.953310 24907 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [bogon kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.130.0.125]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [bogon localhost] and IPs [10.130.0.125 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [bogon localhost] and IPs [10.130.0.125 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0702 10:56:52.414997 24907 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0702 10:56:52.418399 24907 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 43.010877 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node bogon as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node bogon as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.130.0.125:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:6c2cb8a894e19f48c1b15c2440f9c150d9e8559df0147262d9223cc28a475975
Note: if initialization fails, you can run kubeadm reset to revert the changes kubeadm made (this removes the files and cluster state it created) and then try again.
(2) After initialization completes, run the following commands to copy the kubeconfig into place:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Note: if you rerun the initialization, delete the $HOME/.kube directory first, or these commands will report errors.
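Combining this with the earlier note on kubeadm reset, a sketch of a clean re-initialization:
kubeadm reset -f                           # revert everything the previous init created
rm -rf $HOME/.kube                         # remove the stale kubeconfig copy
kubeadm init --config=init_default.yaml    # initialize again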
(3) Check the current state of the master:
[root@master001 kubernetes]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master001 Ready master 8m45s v1.18.3
[root@master001 kubernetes]# kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-889c78476-c5dd7 0/1 Pending 0 8m45s <none> master001 <none> <none>
kube-system coredns-889c78476-sd9gd 0/1 Pending 0 8m45s <none> master001 <none> <none>
kube-system etcd-master001 1/1 Running 0 8m41s 10.130.0.125 master001 <none> <none>
kube-system kube-apiserver-master001 1/1 Running 0 8m41s 10.130.0.125 master001 <none> <none>
kube-system kube-controller-manager-master001 1/1 Running 0 8m41s 10.130.0.125 master001 <none> <none>
kube-system kube-proxy-dzzc9 1/1 Running 0 8m45s 10.130.0.125 master001 <none> <none>
kube-system kube-scheduler-master001 1/1 Running 0 8m41s 10.130.0.125 master001 <none> <none>
At this point the master node is fully deployed and worker nodes can be added.
(1) Join the cluster by running the following on the node (the token below was generated by the kubeadm init output in step 8 (1)):
kubeadm join 10.130.0.125:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:6c2cb8a894e19f48c1b15c2440f9c150d9e8559df0147262d9223cc28a475975
If the node cannot join, the token may have expired. On the master, run kubeadm token create --print-join-command
to generate a fresh join command, then run the printed command on the worker node.
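The master's current tokens and their expiry times can be inspected at any time; a sketch (tokens created by kubeadm init expire after 24 hours by default):
kubeadm token list                          # shows TOKEN, TTL and EXPIRES columns
kubeadm token create --print-join-command   # prints a complete, fresh join command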
The terminal output on the node looks like this:
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
(2) Check that the node joined the cluster successfully
Run kubectl get nodes on the master; the output is shown below:
[root@master001 kubernetes]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master001 Ready master 11m v1.18.3
node001 Ready <none> 12s v1.18.3
(3) Check the pod status from the master terminal.
The command and its output:
[root@master001 kubernetes]# kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system calico-kube-controllers-66dc75b87-lgqvn 1/1 Running 0 3m59s 192.168.152.129 node001 <none> <none>
kube-system calico-node-lxr6t 1/1 Running 0 3m59s 10.130.0.125 master001 <none> <none>
kube-system calico-node-sqhq8 1/1 Running 0 3m59s 10.130.0.71 node001 <none> <none>
kube-system coredns-889c78476-c5dd7 1/1 Running 0 16m 192.168.163.66 master001 <none> <none>
kube-system coredns-889c78476-sd9gd 1/1 Running 0 16m 192.168.163.64 master001 <none> <none>
kube-system etcd-master001 1/1 Running 0 15m 10.130.0.125 master001 <none> <none>
kube-system kube-apiserver-master001 1/1 Running 0 15m 10.130.0.125 master001 <none> <none>
kube-system kube-controller-manager-master001 1/1 Running 0 15m 10.130.0.125 master001 <none> <none>
kube-system kube-proxy-dzzc9 1/1 Running 0 16m 10.130.0.125 master001 <none> <none>
kube-system kube-proxy-hlv7s 1/1 Running 0 4m59s 10.130.0.71 node001 <none> <none>
kube-system kube-scheduler-master001 1/1 Running 0 15m 10.130.0.125 master001 <none> <none>
If every pod's READY and STATUS columns look like the above, the deployment succeeded.
Testing with an nginx pod
Pull the nginx image on the node:
[root@node001 kubernetes]# docker pull loongnixk8s/nginx:1.17.7
Create the nginx pod on the master.
(1) Create an nginx.yaml file with the following content (adjust to suit your environment):
# API version
apiVersion: apps/v1
# Resource kind, e.g. Pod/ReplicationController/Deployment/Service/Ingress
kind: Deployment
metadata:
  # Name of this Deployment
  name: nginx-app
spec:
  selector:
    matchLabels:
      # Pod label; the Service selector must match this when the app is exposed
      app: nginx
  # Number of replicas to deploy (two pods appear in the output below)
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      # Container list; more than one container may be configured
      containers:
        # Container name
        - name: nginx
          # Container image
          image: loongnixk8s/nginx:1.17.7
          # Pull the image only when it is not already present locally
          imagePullPolicy: IfNotPresent
          ports:
            # Container port
            - containerPort: 80
Run the following in a terminal:
[root@master001 kubernetes]# kubectl apply -f nginx.yaml
deployment.apps/nginx-app created
(2) Check that the pods are running correctly.
The command and its output:
[root@master001 kubernetes]# kubectl get po
NAME READY STATUS RESTARTS AGE
nginx-app-74ddf9865c-8fmwb 1/1 Running 0 91s
nginx-app-74ddf9865c-vrgvv 1/1 Running 0 91s
(3) Deploy a service.
The command and its output:
[root@master001 kubernetes]# kubectl expose deployment nginx-app --port=88 --target-port=80 --type=NodePort
service/nginx-app exposed
(4) Check the service.
The command and its output:
[root@master001 kubernetes]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 116m
nginx-app NodePort 10.101.225.240 <none> 88:31541/TCP 43s
(5) Access the nginx service.
Access it through the service's cluster IP plus the service port.
The command and its output:
[root@master001 kubernetes]# curl 10.101.225.240:88
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
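Because the service type is NodePort, the same page is also reachable from outside the cluster on any node's IP at the node port reported by kubectl get svc above (31541 in this run):
curl http://10.130.0.71:31541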
Note: the two-node cluster deployment is now complete. To add further nodes to the cluster, run the following on each new node:
kubeadm join 10.130.0.125:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:6c2cb8a894e19f48c1b15c2440f9c150d9e8559df0147262d9223cc28a475975
The script below completes the configuration of a node:
[root@master001 kubernetes]# cat k8s_dep.sh
#!/bin/bash
#kubernetes 1.18.3 environment setup (package and image downloads; applies to both master and node)
#Download the packages (node and master)
wget http://ftp.loongnix.org/os/loongnix-server/1.7/virt/mips64el/kubernetes118/kubeadm-1.18.3-0.lns7.mips64el.rpm
wget http://ftp.loongnix.org/os/loongnix-server/1.7/virt/mips64el/kubernetes118/kubectl-1.18.3-0.lns7.mips64el.rpm
wget http://ftp.loongnix.org/os/loongnix-server/1.7/virt/mips64el/kubernetes118/kubelet-1.18.3-0.lns7.mips64el.rpm
wget http://ftp.loongnix.org/os/loongnix-server/1.7/virt/mips64el/kubernetes118/kubernetes-cni-0.8.6-0.lns7.mips64el.rpm
#Install
yum install conntrack socat -y
rpm -ivh kubeadm-1.18.3-0.lns7.mips64el.rpm
rpm -ivh kubectl-1.18.3-0.lns7.mips64el.rpm
rpm -ivh kubernetes-cni-0.8.6-0.lns7.mips64el.rpm
rpm -ivh kubelet-1.18.3-0.lns7.mips64el.rpm
#Install docker, start it, and enable it at boot
yum install docker-ce -y
systemctl start docker.service
systemctl enable docker.service
#iptables setup
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
#Disable swap
swapoff -a
sed -i -e /swap/d /etc/fstab
#Disable SELINUX
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
setenforce 0
#Pull the required images
docker pull loongnixk8s/node:v3.13.2
docker pull loongnixk8s/cni:v3.13.2
docker pull loongnixk8s/pod2daemon-flexvol:v3.13.2
docker pull loongnixk8s/kube-controllers:v3.13.2
docker pull loongnixk8s/kube-apiserver-mips64le:v1.18.3
docker pull loongnixk8s/kube-controller-manager-mips64le:v1.18.3
docker pull loongnixk8s/kube-proxy-mips64le:v1.18.3
docker pull loongnixk8s/kube-scheduler-mips64le:v1.18.3
docker pull loongnixk8s/pause:3.2
docker pull loongnixk8s/coredns:1.6.5
docker pull loongnixk8s/etcd:3.3.12
#Retag so the names match what kubeadm expects
docker tag loongnixk8s/kube-apiserver-mips64le:v1.18.3 loongnixk8s/kube-apiserver:v1.18.3
docker tag loongnixk8s/kube-controller-manager-mips64le:v1.18.3 loongnixk8s/kube-controller-manager:v1.18.3
docker tag loongnixk8s/kube-scheduler-mips64le:v1.18.3 loongnixk8s/kube-scheduler:v1.18.3
docker tag loongnixk8s/kube-proxy-mips64le:v1.18.3 loongnixk8s/kube-proxy:v1.18.3
docker tag loongnixk8s/pause:3.2 loongnixk8s/pause:3.2
docker tag loongnixk8s/etcd:3.3.12 loongnixk8s/etcd:3.4.3-0
docker tag loongnixk8s/coredns:1.6.5 loongnixk8s/coredns:1.6.7
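A possible way to run the script on a fresh node, followed by the join command from above; a sketch:
chmod +x k8s_dep.sh    # make the script executable
./k8s_dep.sh           # download packages and images, install, and configure the node
kubeadm join 10.130.0.125:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:6c2cb8a894e19f48c1b15c2440f9c150d9e8559df0147262d9223cc28a475975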