Deploying Kubernetes + Calico on the Loongson Platform

I. Install Docker
Detailed steps are available at:
http://doc.loongnix.org/web/#/50?page_id=148
The commands are:
yum install docker-ce -y
Start the service:
systemctl start docker.service
Check the version:
docker version
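
To have Docker start automatically on boot, as the deployment script at the end of this article also does, enable the service:

systemctl enable docker.service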

II. Deploy Kubernetes
Detailed steps are available at:
http://doc.loongnix.org/web/#/71?page_id=232

1. Package downloads
The following packages are required on both the master and the node (download commands are shown after the list):
kubeadm-1.18.3-0.lns7.mips64el.rpm
kubectl-1.18.3-0.lns7.mips64el.rpm
kubelet-1.18.3-0.lns7.mips64el.rpm
kubernetes-cni-0.8.6-0.lns7.mips64el.rpm
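
The packages can be fetched with wget from the Loongnix mirror, as in the deployment script at the end of this article:

wget http://ftp.loongnix.org/os/loongnix-server/1.7/virt/mips64el/kubernetes118/kubeadm-1.18.3-0.lns7.mips64el.rpm
wget http://ftp.loongnix.org/os/loongnix-server/1.7/virt/mips64el/kubernetes118/kubectl-1.18.3-0.lns7.mips64el.rpm
wget http://ftp.loongnix.org/os/loongnix-server/1.7/virt/mips64el/kubernetes118/kubelet-1.18.3-0.lns7.mips64el.rpm
wget http://ftp.loongnix.org/os/loongnix-server/1.7/virt/mips64el/kubernetes118/kubernetes-cni-0.8.6-0.lns7.mips64el.rpm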

2. Pull the images

docker pull loongnixk8s/node:v3.13.2
docker pull loongnixk8s/cni:v3.13.2
docker pull loongnixk8s/pod2daemon-flexvol:v3.13.2
docker pull loongnixk8s/kube-controllers:v3.13.2
docker pull loongnixk8s/kube-apiserver-mips64le:v1.18.3
docker pull loongnixk8s/kube-controller-manager-mips64le:v1.18.3
docker pull loongnixk8s/kube-proxy-mips64le:v1.18.3
docker pull loongnixk8s/kube-scheduler-mips64le:v1.18.3
docker pull loongnixk8s/pause:3.2
docker pull loongnixk8s/coredns:1.6.5
docker pull loongnixk8s/etcd:3.3.12
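
After pulling, you can confirm that all the images are present locally (an optional sanity check, not part of the original steps):

docker images | grep loongnixk8s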

3. In the /etc/hosts file, add the physical IP and hostname of the master and node, for example:

10.130.0.125 master001
10.130.0.71 node001

On the master node, set the content of /etc/hostname to: master001
On the node, set the content of /etc/hostname to: node001
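
On systemd-based systems, hostnamectl achieves the same result (an equivalent alternative to editing /etc/hostname directly):

hostnamectl set-hostname master001   # on the master
hostnamectl set-hostname node001     # on the node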

4. Install the packages

[root@master001 ~]# cd /etc/kubernetes 
[root@master001 kubernetes]# ls | grep rpm
kubeadm-1.18.3-0.mips64el.rpm
kubectl-1.18.3-0.mips64el.rpm
kubelet-1.18.3-0.mips64el.rpm
kubernetes-cni-0.8.6-0.mips64el.rpm
[root@master001 kubernetes]# rpm -ivh *.rpm
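
If rpm reports missing dependencies, install conntrack and socat first, as the deployment script at the end of this article does:

yum install conntrack socat -y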

5. Disable the firewall, swap, and SELinux
Run the following in the terminal to flush the firewall rules and check the result:
iptables -F && iptables -X && iptables -Z && iptables -L && systemctl stop iptables && systemctl status iptables
Run the following two commands to disable swap:
swapoff -a; sed -i -e /swap/d /etc/fstab
Run the following two commands to disable SELinux:
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
setenforce 0
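
A quick sanity check for all three settings (optional, not part of the original steps):

getenforce              # should print Permissive (Disabled after a reboot)
free -m | grep -i swap  # swap total should be 0
iptables -L             # chains should be empty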

The steps above must be run on both the master and the node; the steps below are run on the master only.

6. Prepare the kubeadm configuration file
(1) Generate a configuration template with the following command:
kubeadm config print init-defaults > init_default.yaml
Edit init_default.yaml so that the following entries match the current deployment environment and version.
Locate the corresponding settings and change them to:

localAPIEndpoint:
  advertiseAddress: 10.130.0.125   # the master's host IP
  bindPort: 6443
........
imageRepository: loongnixk8s       # the private registry address
kind: ClusterConfiguration
kubernetesVersion: v1.18.3         # the current Kubernetes version
networking:
  dnsDomain: cluster.local
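
Depending on the environment, you may also want to set networking.podSubnet here so that it matches Calico's default IPv4 pool (192.168.0.0/16 in the v3.13 manifest); this is an optional addition not shown in the template excerpt above:

networking:
  dnsDomain: cluster.local
  podSubnet: 192.168.0.0/16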

(2) Run the following command to list the image versions kubeadm will need with this configuration:

[root@master001 kubernetes]# kubeadm config images list --config init_default.yaml
loongnixk8s/kube-apiserver:v1.18.3
loongnixk8s/kube-controller-manager:v1.18.3
loongnixk8s/kube-scheduler:v1.18.3
loongnixk8s/kube-proxy:v1.18.3
loongnixk8s/pause:3.2
loongnixk8s/etcd:3.4.3-0
loongnixk8s/coredns:1.6.7

(3) Re-tag the local images with the following commands so that their names match what kubeadm expects:

docker tag loongnixk8s/kube-apiserver-mips64le:v1.18.3 loongnixk8s/kube-apiserver:v1.18.3
docker tag loongnixk8s/kube-controller-manager-mips64le:v1.18.3 loongnixk8s/kube-controller-manager:v1.18.3
docker tag loongnixk8s/kube-scheduler-mips64le:v1.18.3 loongnixk8s/kube-scheduler:v1.18.3
docker tag loongnixk8s/kube-proxy-mips64le:v1.18.3 loongnixk8s/kube-proxy:v1.18.3
docker tag loongnixk8s/pause:3.2 loongnixk8s/pause:3.2
docker tag loongnixk8s/etcd:3.3.12 loongnixk8s/etcd:3.4.3-0
docker tag loongnixk8s/coredns:1.6.5 loongnixk8s/coredns:1.6.7

7. Prepare the Calico configuration file
Fetch the official Calico manifest with the following command:
curl https://docs.projectcalico.org/archive/v3.13/manifests/calico.yaml -O
Edit the image references in calico.yaml so that they match the local images; the relevant entries are shown below (a sed sketch that does the renaming in one step follows the excerpt).

        # It can be deleted if this is a fresh installation, or if you have already
        # upgraded to use calico-ipam.
        - name: upgrade-ipam
          image: loongnixk8s/cni:v3.13.2   # keep consistent with your private registry
--
        # This container installs the CNI binaries
        # and CNI network config file on each node.
        - name: install-cni
          image: loongnixk8s/cni:v3.13.2   # keep consistent with your private registry
--
        # Adds a Flex Volume Driver that creates a per-pod Unix Domain Socket to allow Dikastes
        # to communicate with Felix over the Policy Sync API.
        - name: flexvol-driver
          image: loongnixk8s/pod2daemon-flexvol:v3.13.2   # keep consistent with your private registry
--
        # container programs network policy and routes on each
        # host.
        - name: calico-node
          image: loongnixk8s/node:v3.13.2   # keep consistent with your private registry
--
      priorityClassName: system-cluster-critical
      containers:
        - name: calico-kube-controllers
          image: loongnixk8s/kube-controllers:v3.13.2   # keep consistent with your private registry
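
Assuming the upstream manifest references its images under the calico/ prefix (as the v3.13 manifest does), the renaming can be done in one step; a minimal sketch:

sed -i 's#image: calico/#image: loongnixk8s/#g' calico.yaml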

kubectl apply -f calico.yaml

Note: run this apply only after the master has been initialized in the next step and kubectl has been configured; before initialization there is no API server to apply the manifest to.

8. Initialize the master node
(1) Run the following command to initialize the cluster with kubeadm:

[root@master001 kubernetes]#  kubeadm init --config=init_default.yaml

The terminal output is as follows:

W0702 10:54:50.953310   24907 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [bogon kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.130.0.125]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [bogon localhost] and IPs [10.130.0.125 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [bogon localhost] and IPs [10.130.0.125 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0702 10:56:52.414997   24907 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0702 10:56:52.418399   24907 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 43.010877 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node bogon as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node bogon as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.130.0.125:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:6c2cb8a894e19f48c1b15c2440f9c150d9e8559df0147262d9223cc28a475975
Note: if initialization fails, you can run kubeadm reset to reset kubeadm's state (this deletes the files and cluster state it created) and then retry.
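
kubeadm reset does not clean up everything; as its own output advises, you may also need to flush iptables and remove the CNI configuration manually. A cleanup sketch (adjust to your environment):

kubeadm reset -f
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
rm -rf /etc/cni/net.d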

(2) After initialization completes, run the following commands in the terminal to copy the kubeconfig into place.

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
Note: if you re-run the initialization, delete the $HOME/.kube directory first, otherwise these commands will report an error.

(3) Check the current master status.

[root@master001 kubernetes]# kubectl get nodes
NAME        STATUS   ROLES    AGE     VERSION
master001   Ready    master   8m45s   v1.18.3
[root@master001 kubernetes]# kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                READY   STATUS              RESTARTS   AGE     IP             NODE        NOMINATED NODE   READINESS GATES
kube-system   coredns-889c78476-c5dd7             0/1     Pending             0          8m45s   <none>         master001   <none>           <none>
kube-system   coredns-889c78476-sd9gd             0/1     Pending             0          8m45s   <none>         master001   <none>           <none>
kube-system   etcd-master001                      1/1     Running             0          8m41s   10.130.0.125   master001   <none>           <none>
kube-system   kube-apiserver-master001            1/1     Running             0          8m41s   10.130.0.125   master001   <none>           <none>
kube-system   kube-controller-manager-master001   1/1     Running             0          8m41s   10.130.0.125   master001   <none>           <none>
kube-system   kube-proxy-dzzc9                    1/1     Running             0          8m45s   10.130.0.125   master001   <none>           <none>
kube-system   kube-scheduler-master001            1/1     Running             0          8m41s   10.130.0.125   master001   <none>           <none>
At this point the master node is fully deployed and worker nodes can be added.

9. Join the node to the cluster

(1) On the node, run the join command in the terminal as shown below (note: the token here comes from the kubeadm init output in step 8):

kubeadm join 10.130.0.125:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:6c2cb8a894e19f48c1b15c2440f9c150d9e8559df0147262d9223cc28a475975

If the node cannot join, the token may have expired. Run kubeadm token create --print-join-command on the master to generate a new join command, then run the printed command on the worker node.
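
For example (the token and hash printed will differ; the values below are placeholders):

[root@master001 ~]# kubeadm token create --print-join-command
kubeadm join 10.130.0.125:6443 --token <new-token> --discovery-token-ca-cert-hash sha256:<hash>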

The node's terminal output is as follows:

[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

(2) Verify that the node joined the cluster.
Run kubectl get nodes on the master; the output is as follows:

[root@master001 kubernetes]# kubectl get nodes
NAME        STATUS   ROLES    AGE   VERSION
master001   Ready    master   11m   v1.18.3
node001     Ready    <none>   12s   v1.18.3

(3) View the pod information from the master terminal.
The command and its output are shown below:

[root@master001 kubernetes]# kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE     IP                NODE        NOMINATED NODE   READINESS GATES
kube-system   calico-kube-controllers-66dc75b87-lgqvn   1/1     Running   0          3m59s   192.168.152.129   node001     <none>           <none>
kube-system   calico-node-lxr6t                         1/1     Running   0          3m59s   10.130.0.125      master001   <none>           <none>
kube-system   calico-node-sqhq8                         1/1     Running   0          3m59s   10.130.0.71       node001     <none>           <none>
kube-system   coredns-889c78476-c5dd7                   1/1     Running   0          16m     192.168.163.66    master001   <none>           <none>
kube-system   coredns-889c78476-sd9gd                   1/1     Running   0          16m     192.168.163.64    master001   <none>           <none>
kube-system   etcd-master001                            1/1     Running   0          15m     10.130.0.125      master001   <none>           <none>
kube-system   kube-apiserver-master001                  1/1     Running   0          15m     10.130.0.125      master001   <none>           <none>
kube-system   kube-controller-manager-master001         1/1     Running   0          15m     10.130.0.125      master001   <none>           <none>
kube-system   kube-proxy-dzzc9                          1/1     Running   0          16m     10.130.0.125      master001   <none>           <none>
kube-system   kube-proxy-hlv7s                          1/1     Running   0          4m59s   10.130.0.71       node001     <none>           <none>
kube-system   kube-scheduler-master001                  1/1     Running   0          15m     10.130.0.125      master001   <none>           <none>

If the READY and STATUS of every pod match the output above, the deployment succeeded.

Testing an nginx pod

Pull the nginx image on the node:

[root@node001 kubernetes]# docker pull loongnixk8s/nginx:1.17.7

Create the nginx pod on the master.

(1) Create an nginx.yaml file with the following content (adjust to your environment as needed):

# API version
apiVersion: apps/v1
# Resource type, e.g. Pod/ReplicationController/Deployment/Service/Ingress
kind: Deployment
metadata:
  # Name of this Deployment
  name: nginx-app
spec:
  selector:
    matchLabels:
      # Container label; the Service selector must match this when the Service is published
      app: nginx
  # Number of replicas to deploy (the sample output below shows two pods, hence 2)
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      # Container configuration; this is a list, so multiple containers can be configured
      containers:
      # Container name
      - name: nginx
        # Container image
        image: loongnixk8s/nginx:1.17.7
        # Pull the image only if it is not already present locally
        imagePullPolicy: IfNotPresent
        ports:
        # Pod port
        - containerPort: 80

Run the following command in the terminal:

[root@master001 kubernetes]# kubectl apply -f nginx.yaml
deployment.apps/nginx-app created

(2) Check that the pods are running normally.
The command and its output are shown below:

[root@master001 kubernetes]# kubectl get po
NAME                         READY   STATUS    RESTARTS   AGE
nginx-app-74ddf9865c-8fmwb   1/1     Running   0          91s
nginx-app-74ddf9865c-vrgvv   1/1     Running   0          91s

(3) Deploy the service.
The command and its output are shown below:

[root@master001 kubernetes]# kubectl expose deployment nginx-app --port=88  --target-port=80  --type=NodePort
service/nginx-app exposed
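
If you prefer to review the Service manifest before creating it, kubectl can render it without applying (--dry-run=client is available in kubectl 1.18):

kubectl expose deployment nginx-app --port=88 --target-port=80 --type=NodePort --dry-run=client -o yaml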

(4) View the service.
The command and its output are shown below:

[root@master001 kubernetes]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        116m
nginx-app    NodePort    10.101.225.240   <none>        88:31541/TCP   43s

(5) Access the nginx service.
Access it through the service IP and port.
The command and its output are shown below:


[root@master001 kubernetes]# curl 10.101.225.240:88
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width:  35em;
margin:  0  auto;
font-family:  Tahoma,  Verdana,  Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working.  Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for  using nginx.</em></p>
</body>
</html>
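
Because the Service is of type NodePort, it should also be reachable at any node's IP on the allocated node port (31541 in the output above), for example:

curl 10.130.0.71:31541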

Note: the two-node cluster deployment is complete. To join further nodes to the cluster, run the following command on each of them:

kubeadm join 10.130.0.125:6443  --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:6c2cb8a894e19f48c1b15c2440f9c150d9e8559df0147262d9223cc28a475975

A script that completes the per-node setup is shown below:

[root@master001 kubernetes]# cat k8s_dep.sh 
#!/bin/bash
#kubernetes 1.18.3 environment setup (package and image download; applies to both master and node)


#Download the packages (node and master)
wget http://ftp.loongnix.org/os/loongnix-server/1.7/virt/mips64el/kubernetes118/kubeadm-1.18.3-0.lns7.mips64el.rpm 
wget http://ftp.loongnix.org/os/loongnix-server/1.7/virt/mips64el/kubernetes118/kubectl-1.18.3-0.lns7.mips64el.rpm 
wget http://ftp.loongnix.org/os/loongnix-server/1.7/virt/mips64el/kubernetes118/kubelet-1.18.3-0.lns7.mips64el.rpm 
wget http://ftp.loongnix.org/os/loongnix-server/1.7/virt/mips64el/kubernetes118/kubernetes-cni-0.8.6-0.lns7.mips64el.rpm 

#Install the packages
yum install conntrack socat -y
rpm -ivh kubeadm-1.18.3-0.lns7.mips64el.rpm
rpm -ivh kubectl-1.18.3-0.lns7.mips64el.rpm
rpm -ivh kubernetes-cni-0.8.6-0.lns7.mips64el.rpm
rpm -ivh kubelet-1.18.3-0.lns7.mips64el.rpm


#Install docker, start it, and enable it at boot
yum install docker-ce -y
systemctl start docker.service
systemctl enable docker.service


#Flush iptables rules
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X


#Disable swap
swapoff -a
sed -i -e /swap/d /etc/fstab


#Disable SELinux
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
setenforce 0

#Pull the required images
docker pull loongnixk8s/node:v3.13.2
docker pull loongnixk8s/cni:v3.13.2
docker pull loongnixk8s/pod2daemon-flexvol:v3.13.2
docker pull loongnixk8s/kube-controllers:v3.13.2
docker pull loongnixk8s/kube-apiserver-mips64le:v1.18.3
docker pull loongnixk8s/kube-controller-manager-mips64le:v1.18.3
docker pull loongnixk8s/kube-proxy-mips64le:v1.18.3
docker pull loongnixk8s/kube-scheduler-mips64le:v1.18.3
docker pull loongnixk8s/pause:3.2
docker pull loongnixk8s/coredns:1.6.5
docker pull loongnixk8s/etcd:3.3.12


#Re-tag the images to match the names kubeadm expects
docker tag loongnixk8s/kube-apiserver-mips64le:v1.18.3 loongnixk8s/kube-apiserver:v1.18.3
docker tag loongnixk8s/kube-controller-manager-mips64le:v1.18.3 loongnixk8s/kube-controller-manager:v1.18.3
docker tag loongnixk8s/kube-scheduler-mips64le:v1.18.3 loongnixk8s/kube-scheduler:v1.18.3
docker tag loongnixk8s/kube-proxy-mips64le:v1.18.3 loongnixk8s/kube-proxy:v1.18.3
docker tag loongnixk8s/pause:3.2 loongnixk8s/pause:3.2
docker tag loongnixk8s/etcd:3.3.12 loongnixk8s/etcd:3.4.3-0
docker tag loongnixk8s/coredns:1.6.5 loongnixk8s/coredns:1.6.7
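
The script is meant to be run as root on every machine that will be part of the cluster, for example:

bash k8s_dep.sh

After it completes, continue with the master-only steps (kubeadm init) on the master, or run the kubeadm join command on a worker node.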