3.Kubernetes

1. How it works


master node (Control Plane): the master node controls the whole cluster

Core components on the master node:
Controller Manager: the controller manager
etcd: key-value store (similar to redis); the cluster's "ledger"
scheduler: the scheduler
api server: the API gateway (every control operation must go through the api-server)

node (worker) nodes:

kubelet (the "foreman"): must be installed on every node
kube-proxy: the proxy; proxies the network

How is an application deployed?
A developer calls the CLI to tell the master: we want to deploy a tomcat application
Every call from the developer goes first to the master's gateway, the api-server. It is the master's single entry point (the C layer of the MVC pattern)
Incoming requests go to the master's api-server first; the api-server hands them to the controller-manager
The controller-manager handles the application deployment
The controller-manager generates a deployment record, e.g. tomcat --image:tomcat6 --port 8080, but does not actually deploy anything
The deployment record is stored in etcd
The scheduler picks the pending application up from etcd and starts scheduling, deciding which node fits best
The scheduler writes the computed scheduling decision back into etcd
The kubelet on every node stays in touch with the master at all times (sending requests to the api-server to keep fetching the latest data), so every node's kubelet learns of it from the master
Suppose node2's kubelet finally receives the command to deploy
That kubelet runs the application on its own machine, keeps reporting the application's status to the master, and an IP is assigned
Nodes and the master communicate through the master's api-server
The kube-proxy on every machine knows the entire cluster network. Whenever the node calls others or others call the node, kube-proxy computes the route and forwards the traffic automatically


Worker node


Pod:

docker run starts a container; the container is docker's basic unit, and one application is one container
kubelet run starts an application as a Pod; the Pod is k8s's basic unit
A Pod is a further wrapper around containers
A single container often cannot represent a whole application, e.g. a blog made of php + mysql working together
So a Pod may contain multiple containers; one Pod represents one basic application
iPod (watch movies, listen to music, play games): one basic product, an "atom";
Pod (music container, movie container): likewise one basic product, an "atom"
Kubelet: the "foreman"; talks to the master's api-server and starts/stops the applications on its own machine; on the master machine it acts as the master's little helper. On every machine the component doing the real work is the Kubelet
Kube-proxy:
Others:

  • Cluster interaction flow


Want k8s to deploy a tomcat?

0. From boot, every node's kubelet, plus the master's scheduler and controller-manager, keeps watching the master's api-server for event changes (an endless for loop)
1. The developer uses the command-line tool kubectl: kubectl create deploy tomcat --image=tomcat8 (tell the master to deploy a tomcat application from the tomcat8 image)
2. kubectl sends the command to the api-server; the api-server saves the creation record to etcd
3. etcd reports an event to the api-server: someone just saved a record in me (deploy Tomcat [deploy])
4. The controller-manager, watching the api-server, sees the (deploy Tomcat [deploy]) event
5. The controller-manager handles the (deploy Tomcat [deploy]) event and generates the Pod deployment record [pod info]
6. The controller-manager hands [pod info] to the api-server, which saves it to etcd
7. etcd reports the [pod info] event to the api-server
8. The scheduler, which watches specifically for [pod info], picks it up, computes, and decides which node should run this Pod: [scheduled pod info (node: node-02)]
9. The scheduler hands [scheduled pod info (node: node-02)] to the api-server, which saves it to etcd
10. etcd reports the [scheduled pod info (node: node-02)] event to the api-server
11. Every node's kubelet watches for [scheduled pod info (node: node-02)] events, so all kubelets in the cluster receive the event from the api-server
12. Each node's kubelet checks whether the event is its own business; node-02's kubelet finds that it is
13. node-02's kubelet starts the pod and reports everything about the started pod back to the master

  • Installing a k8s cluster

Prepare 3 machines

Instance                Public IP       Private IP      Node type
i-wz9gesw0qtscfefs2xyl  47.106.12.54    172.27.228.2    master
i-wz9gesw0qtscfefs2xyn  120.79.96.89    172.27.209.121  worker1
i-wz9gesw0qtscfefs2xym  120.24.212.245  172.27.209.122  worker2

Installation methods
Binary install (recommended for production)
kubeadm bootstrap (officially recommended)
Rough flow:

Prepare 3 servers with private-network connectivity
Install Docker as the container environment
Install kubernetes: install the core components on all 3 machines (kubeadm (the cluster bootstrap tool), kubelet) plus kubectl (the developer's command line)

Set up the prerequisite environment

Turn off the firewall (on cloud servers, open the required ports in the security-group rules instead)

https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#check-required-ports
systemctl stop firewalld
systemctl disable firewalld

Change the hostname

IP               hostname command
47.106.12.54 hostnamectl set-hostname k8s-01
120.79.96.89 hostnamectl set-hostname k8s-02
120.24.212.245 hostnamectl set-hostname k8s-03

Check the result

[root@iZwz9gesw0qtscfefs2xynZ ~]# hostnamectl status
   Static hostname: k8s-02
         Icon name: computer-vm
           Chassis: vm
        Machine ID: 20200914151306980406746494236010
           Boot ID: 6b72c6ee16094f48b50f681a7f0110b5
    Virtualization: kvm
  Operating System: CentOS Linux 7 (Core)
       CPE OS Name: cpe:/o:centos:centos:7
            Kernel: Linux 3.10.0-1127.19.1.el7.x86_64
      Architecture: x86-64

Add a hosts entry for the hostname
echo "127.0.0.1 $(hostname)" >> /etc/hosts
Disable selinux
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0
Disable swap:
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab

Let iptables see bridged traffic
https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#%E5%85%81%E8%AE%B8-iptables-%E6%A3%80%E6%9F%A5%E6%A1%A5%E6%8E%A5%E6%B5%81%E9%87%8F
Enable br_netfilter
sudo modprobe br_netfilter
Verify it
lsmod | grep br_netfilter
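The kubeadm page linked above also asks for these sysctl settings so that bridged traffic is visible to iptables; following the official docs, write them to /etc/sysctl.d/k8s.conf and load with sudo sysctl --system:

```
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
```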

Install docker

sudo yum remove docker*
sudo yum install -y yum-utils
Configure the docker yum repo
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Install docker 19.03.9 (either form works)
yum install -y docker-ce-3:19.03.9-3.el7.x86_64 docker-ce-cli-3:19.03.9-3.el7.x86_64 containerd.io
yum install -y docker-ce-19.03.9-3 docker-ce-cli-19.03.9 containerd.io
Start and enable the service
systemctl start docker
systemctl enable docker
Configure a registry mirror (accelerator)
sudo mkdir -p /etc/docker

sudo tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://82m9ar63.mirror.aliyuncs.com"]
}
EOF

sudo systemctl daemon-reload
sudo systemctl restart docker

Install k8s

Configure the k8s yum repo
vim /etc/yum.repos.d/kubernetes.repo

[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

1. Remove old versions
yum remove -y kubelet kubeadm kubectl
2. List the installable versions
yum list kubelet --showduplicates | sort -r
3. Install pinned versions of kubelet, kubeadm and kubectl
yum install -y kubelet-1.21.0 kubeadm-1.21.0 kubectl-1.21.0
4. Enable kubelet at boot
systemctl enable kubelet && systemctl start kubelet
5. Check kubelet status (it keeps crash-looping until kubeadm init or join runs; that is expected):

[root@k8s-01 ~]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: activating (auto-restart) (Result: exit-code) since 五 2023-10-06 10:17:05 CST; 8s ago
     Docs: https://kubernetes.io/docs/
  Process: 3854 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=1/FAILURE)
 Main PID: 3854 (code=exited, status=1/FAILURE)

10月 06 10:17:05 k8s-01 systemd[1]: Unit kubelet.service entered failed state.
10月 06 10:17:05 k8s-01 systemd[1]: kubelet.service failed.

Initialize the master node (run on the master)

1. List the required images

[root@k8s-01 ~]# kubeadm  config images  list
W1006 10:22:19.236666    4076 version.go:102] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get "https://cdn.dl.k8s.io/release/stable-1.txt": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
W1006 10:22:19.236766    4076 version.go:103] falling back to the local client version: v1.21.0
k8s.gcr.io/kube-apiserver:v1.21.0
k8s.gcr.io/kube-controller-manager:v1.21.0
k8s.gcr.io/kube-scheduler:v1.21.0
k8s.gcr.io/kube-proxy:v1.21.0
k8s.gcr.io/pause:3.4.1
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns/coredns:v1.8.0

2. Prepare an image-pull script
vi images.sh

#!/bin/bash
images=(
  kube-apiserver:v1.21.0
  kube-proxy:v1.21.0
  kube-controller-manager:v1.21.0
  kube-scheduler:v1.21.0
  coredns:v1.8.0
  etcd:3.4.13-0
  pause:3.4.1
)
for imageName in ${images[@]} ; do
    docker pull registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/$imageName
done
3. Run the script: chmod +x images.sh && ./images.sh

4. Check the downloaded images

[root@k8s-01 ~]# docker images
REPOSITORY                                                                 TAG        IMAGE ID       CREATED       SIZE
registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/kube-apiserver            v1.21.0    4d217480042e   2 years ago   126MB
registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/kube-proxy                v1.21.0    38ddd85fe90e   2 years ago   122MB
registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/kube-controller-manager   v1.21.0    09708983cc37   2 years ago   120MB
registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/kube-scheduler            v1.21.0    62ad3129eca8   2 years ago   50.6MB
registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/pause                     3.4.1      0f8457a4c2ec   2 years ago   683kB
registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/coredns                   v1.8.0     296a6d5035e2   2 years ago   42.5MB
registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/etcd                      3.4.13-0   0369cf4303ff   3 years ago   253MB
Note: the coredns image for k8s 1.21.0 is special; when pulled from this aliyun mirror it needs an extra re-tag

docker tag registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/coredns:v1.8.0 registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/coredns/coredns:v1.8.0

######## kubeadm init: run on the one master ########
######## kubeadm join: run on the other workers ########
kubeadm init \
  --apiserver-advertise-address=172.27.228.2 \
  --image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images \
  --kubernetes-version v1.21.0 \
  --service-cidr=10.96.0.0/16 \
  --pod-network-cidr=192.168.0.0/16
Note: pod-network-cidr vs service-cidr
CIDR: Classless Inter-Domain Routing, a notation for a reachable network range.
The pod subnet range, the service (load-balancing) subnet range, and the machines' own IP subnet range must not overlap.
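As a sketch of that constraint, here is a small hypothetical bash helper (not part of kubeadm) that checks whether two CIDR ranges overlap; the node, service and pod ranges passed to kubeadm init must be pairwise disjoint:

```shell
#!/usr/bin/env bash
# Hypothetical helper: decide whether two CIDR ranges overlap.
# It only illustrates the rule that --service-cidr, --pod-network-cidr
# and the node subnet must not overlap; kubeadm does not ship this.

ip2int() {            # dotted quad -> 32-bit integer
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

cidr_overlap() {      # exit 0 (true) if the two CIDRs overlap
  local net1=${1%/*} len1=${1#*/} net2=${2%/*} len2=${2#*/}
  local len mask
  len=$(( len1 < len2 ? len1 : len2 ))   # compare under the shorter prefix
  mask=$(( (0xFFFFFFFF << (32 - len)) & 0xFFFFFFFF ))
  [ $(( $(ip2int "$net1") & mask )) -eq $(( $(ip2int "$net2") & mask )) ]
}

cidr_overlap 10.96.0.0/16 192.168.0.0/16 && echo overlap || echo disjoint  # disjoint
cidr_overlap 10.96.0.0/16 10.96.8.0/24  && echo overlap || echo disjoint   # overlap
```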

Output on success

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.27.228.2:6443 --token aw37ip.t9nsblzyxe49tsco \
    --discovery-token-ca-cert-hash sha256:3a74d9f5336c804276f1b7bc494027b26b7a498ae1a5a396b35a92ce0b3411a1

Follow the hints above
1. After init completes, copy the kubeconfig files
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

2. Export the environment variable (alternative for root)
export KUBECONFIG=/etc/kubernetes/admin.conf

3. Deploy a pod network
kubectl apply -f https://docs.projectcalico.org/v3.20/manifests/calico.yaml

4. List all application pods deployed in the cluster

[root@k8s-01 ~]# kubectl get pods -A
NAMESPACE     NAME                                       READY   STATUS             RESTARTS   AGE
kube-system   calico-kube-controllers-594649bd75-64s9q   0/1     CrashLoopBackOff   1          3m8s
kube-system   calico-node-bz9f8                          0/1     Init:2/3           0          3m20s
kube-system   coredns-b98666c6d-fz7cr                    1/1     Running            0          34h
kube-system   coredns-b98666c6d-g64zs                    1/1     Running            0          34h
kube-system   etcd-k8s-01                                1/1     Running            0          34h
kube-system   kube-apiserver-k8s-01                      1/1     Running            0          34h
kube-system   kube-controller-manager-k8s-01             1/1     Running            0          34h
kube-system   kube-proxy-tjsrb                           1/1     Running            0          34h
kube-system   kube-scheduler-k8s-01                      1/1     Running            0          34h
[root@k8s-01 ~]#

5. Check the status of all machines in the cluster

[root@k8s-01 ~]# kubectl get nodes
NAME     STATUS   ROLES                  AGE   VERSION
k8s-01   Ready    control-plane,master   34h   v1.21.0

Initialize the worker nodes

1. On the master, run: kubeadm token create --print-join-command

[root@k8s-01 ~]# kubeadm token create --print-join-command
kubeadm join 172.27.228.2:6443 --token r1hj55.nllrkk4irqwkgpl2 --discovery-token-ca-cert-hash sha256:3a74d9f5336c804276f1b7bc494027b26b7a498ae1a5a396b35a92ce0b3411a1

2. Run the printed command on each worker node

[root@k8s-02 ~]# kubeadm join 172.27.228.2:6443 --token r1hj55.nllrkk4irqwkgpl2 --discovery-token-ca-cert-hash sha256:3a74d9f5336c804276f1b7bc494027b26b7a498ae1a5a396b35a92ce0b3411a1
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

3. On the master, list the nodes (verify the cluster is up)

[root@k8s-01 ~]# kubectl get nodes
NAME     STATUS   ROLES                  AGE     VERSION
k8s-01   Ready    control-plane,master   34h     v1.21.0
k8s-02   Ready    <none>                 7m29s   v1.21.0
k8s-03   Ready    <none>                 5m16s   v1.21.0

4. Label the nodes
In k8s everything is an object. node: a machine; pod: an application container

[root@k8s-01 ~]# kubectl  label node  k8s-03  node-role.kubernetes.io/worker3='worker-03'
node/k8s-03 labeled
[root@k8s-01 ~]# kubectl  label node  k8s-02  node-role.kubernetes.io/worker2='worker-02'
node/k8s-02 labeled
[root@k8s-01 ~]# kubectl  get nodes
NAME     STATUS   ROLES                  AGE   VERSION
k8s-01   Ready    control-plane,master   34h   v1.21.0
k8s-02   Ready    worker2         23m   v1.21.0
k8s-03   Ready    worker3         21m   v1.21.0

5. Switch to ipvs mode
After a reboot, machines rejoin the cluster automatically, and a rebooted master returns as the control plane. k8s defaults to iptables mode, which degrades performance as the cluster grows (kube-proxy keeps syncing iptables rules across the cluster)
List all resources in the cluster

[root@k8s-01 ~]# kubectl get all -A
NAMESPACE     NAME                                           READY   STATUS    RESTARTS   AGE
kube-system   pod/calico-kube-controllers-594649bd75-64s9q   1/1     Running   5          75m
kube-system   pod/calico-node-59gnt                          1/1     Running   0          53m
kube-system   pod/calico-node-89b8t                          1/1     Running   0          55m
kube-system   pod/calico-node-bz9f8                          1/1     Running   0          75m
kube-system   pod/coredns-b98666c6d-fz7cr                    1/1     Running   0          35h
kube-system   pod/coredns-b98666c6d-g64zs                    1/1     Running   0          35h
kube-system   pod/etcd-k8s-01                                1/1     Running   0          35h
kube-system   pod/kube-apiserver-k8s-01                      1/1     Running   0          35h
kube-system   pod/kube-controller-manager-k8s-01             1/1     Running   0          35h
kube-system   pod/kube-proxy-74clv                           1/1     Running   0          53m
kube-system   pod/kube-proxy-rfth6                           1/1     Running   0          55m
kube-system   pod/kube-proxy-tjsrb                           1/1     Running   0          35h
kube-system   pod/kube-scheduler-k8s-01                      1/1     Running   0          35h

NAMESPACE     NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
default       service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP                  35h
kube-system   service/kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   35h

NAMESPACE     NAME                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   daemonset.apps/calico-node   3         3         3       3            3           kubernetes.io/os=linux   34h
kube-system   daemonset.apps/kube-proxy    3         3         3       3            3           kubernetes.io/os=linux   35h

NAMESPACE     NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/calico-kube-controllers   1/1     1            1           34h
kube-system   deployment.apps/coredns                   2/2     2            2           35h

NAMESPACE     NAME                                                 DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/calico-kube-controllers-594649bd75   1         1         1       75m
kube-system   replicaset.apps/calico-kube-controllers-5d4b78db86   0         0         0       34h
kube-system   replicaset.apps/coredns-b98666c6d                    2         2         2       35h

Edit the kube-proxy config and change mode to ipvs. The default (iptables) gets slow once the cluster is large: kubectl edit cm kube-proxy -n kube-system, set mode: "ipvs"
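In the editor that kubectl edit opens, the relevant fragment of the ConfigMap looks roughly like this (other fields omitted; the empty string means the default, iptables):

```yaml
# fragment of the kube-proxy ConfigMap in the kube-system namespace
data:
  config.conf: |
    ...
    mode: "ipvs"    # was: mode: ""
```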


Restart kube-proxy (delete the old pods; new ones with the updated config are created automatically)

[root@k8s-01 ~]# kubectl delete  pod kude-proxy-74clv  -n kube-system
Error from server (NotFound): pods "kude-proxy-74clv" not found
[root@k8s-01 ~]# kubectl delete  pod  kube-proxy-74clv   -n kube-system
pod "kube-proxy-74clv" deleted
[root@k8s-01 ~]# kubectl delete  pod   kube-proxy-rfth6   -n kube-system
pod "kube-proxy-rfth6" deleted
[root@k8s-01 ~]# kubectl delete  pod  kube-proxy-tjsrb   -n kube-system
pod "kube-proxy-tjsrb" deleted
[root@k8s-01 ~]# kubectl get pods -A | grep kube-proxy
kube-system   kube-proxy-gcxvl                           1/1     Running   0          3m22s
kube-system   kube-proxy-gqkcg                           1/1     Running   0          2m49s
kube-system   kube-proxy-jzj9p                           1/1     Running   0          3m4s

List all resource types in k8s
kubectl api-resources --namespaced=true

[root@k8s-01 ~]# kubectl api-resources
NAME                              SHORTNAMES   APIVERSION                             NAMESPACED   KIND
bindings                                       v1                                     true         Binding
componentstatuses                 cs           v1                                     false        ComponentStatus
configmaps                        cm           v1                                     true         ConfigMap
endpoints                         ep           v1                                     true         Endpoints
events                            ev           v1                                     true         Event
limitranges                       limits       v1                                     true         LimitRange
namespaces                        ns           v1                                     false        Namespace
nodes                             no           v1                                     false        Node
persistentvolumeclaims            pvc          v1                                     true         PersistentVolumeClaim
persistentvolumes                 pv           v1                                     false        PersistentVolume
pods                              po           v1                                     true         Pod
podtemplates                                   v1                                     true         PodTemplate
replicationcontrollers            rc           v1                                     true         ReplicationController
resourcequotas                    quota        v1                                     true         ResourceQuota

Print a detailed description of a k8s object

[root@k8s-01 ~]# kubectl describe pod my-nginx
Name:         my-nginx-6b74b79f57-grk69
Namespace:    default
Priority:     0
Node:         k8s-02/172.27.209.121
Start Time:   Sat, 07 Oct 2023 22:45:36 +0800
Labels:       app=my-nginx
              pod-template-hash=6b74b79f57
Annotations:  cni.projectcalico.org/containerID: ab0f6852275d9cdf43745b7b6af6bb846714b7ccdb874febe7544a8091bdd171
              cni.projectcalico.org/podIP: 192.168.179.1/32
              cni.projectcalico.org/podIPs: 192.168.179.1/32
Status:       Running
IP:           192.168.179.1
IPs:
  IP:           192.168.179.1
Controlled By:  ReplicaSet/my-nginx-6b74b79f57
Containers:
  nginx:
    Container ID:   docker://54cfa0f83b91f298427a8e4371ebdfcd7f9580bad4f7e4b65e4c36c1361db276
    Image:          nginx
    Image ID:       docker-pullable://nginx@sha256:0d17b565c37bcbd895e9d92315a05c1c3c9a29f762b011a10c54a66cd53c9b31
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Sat, 07 Oct 2023 22:45:46 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w9zms (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-w9zms:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      <none>

Create (deploy) a k8s application

[root@k8s-01 ~]# kubectl  create  deploy my-nginx  --image=nginx
deployment.apps/my-nginx created
[root@k8s-01 ~]# kubectl get  pod -o wide
NAME                        READY   STATUS    RESTARTS   AGE   IP              NODE     NOMINATED NODE   READINESS GATES
my-nginx-6b74b79f57-grk69   1/1     Running   0          16s   192.168.179.1   k8s-02   <none>           <none>
[root@k8s-01 ~]# kubectl get pods -o wide
NAME                        READY   STATUS    RESTARTS   AGE     IP              NODE     NOMINATED NODE   READINESS GATES
my-nginx-6b74b79f57-grk69   1/1     Running   0          2m16s   192.168.179.1   k8s-02   <none>           <none>
[root@k8s-01 ~]# kubectl  exec -it  my-nginx-6b74b79f57-grk69 -- /bin/bash

k8s basics

docker is the runtime environment of every worker node
kubelet controls the start and stop of all containers, keeps its node working properly, and handles the node's interaction with the master
Key components on the master node

1. kubelet (the "foreman"): required on every node; manages the lifecycle of all pods on its node and talks to the api-server
2. kube-api-server: receives all requests. Any change to the cluster, whether from the command line or a UI, must go through the api-server to take effect. It is the cluster's single entry point, both internal and external (not counting the ports we later expose for applications)
3. kube-proxy: responsible for all network traffic of the node
4. cri: the container runtime

Key components on a worker node

1. kubelet (the "foreman"): required on every node; manages the lifecycle of all pods on its node and talks to the api-server
2. kube-proxy: responsible for all network traffic of the node
3. cri: the container runtime

Deploying an application

1. kubectl create deploy xxxxxx: the command line asks the api-server to deploy xxx
2. the api-server saves this request to etcd
kubectl create creates objects in the k8s cluster
kubectl create --help
kubectl create deployment <deployment name> --image=<application image>
In the end some machine runs a pod; at its core that pod is just a container

k8s_nginx_my-nginx-6b74b79f57-snlr4_default_dbeac79e-1ce9-42c9-bc59-c8ca0412674b_0
i.e. k8s_<container (nginx)>_<pod (my-nginx-6b74b79f57-snlr4)>_<namespace (default)>_<pod uid (dbeac79e-1ce9-42c9-bc59-c8ca0412674b)>_<restart count (0)>
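As a sketch, the flat docker container name above can be split back into its fields with plain shell (the field layout k8s_container_pod_namespace_uid_restarts is assumed from the example):

```shell
#!/usr/bin/env bash
# Split a docker container name created by kubelet into its parts.
name="k8s_nginx_my-nginx-6b74b79f57-snlr4_default_dbeac79e-1ce9-42c9-bc59-c8ca0412674b_0"

# The separator is "_"; pod names and uids only contain "-", so splitting is safe.
IFS=_ read -r prefix container pod namespace uid restarts <<EOF
$name
EOF

echo "container=$container pod=$pod namespace=$namespace restarts=$restarts"
```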

Create a deployment that runs a command
kubectl create deployment my-nginx --image=nginx -- date
Create a deployment named my-nginx that runs the nginx image with 3 replicas
kubectl create deployment my-nginx --image=nginx --replicas=3
Create a deployment named my-nginx that runs the nginx image and exposes port 80
kubectl create deployment my-nginx --image=nginx --port=80
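The imperative commands above can also be written declaratively; a minimal sketch of an equivalent manifest (values taken from the examples; apply with kubectl apply -f my-nginx.yaml):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 3               # same as --replicas=3
  selector:
    matchLabels:
      app: my-nginx
  template:
    metadata:
      labels:
        app: my-nginx       # must match the selector above
    spec:
      containers:
      - name: nginx
        image: nginx        # same as --image=nginx
        ports:
        - containerPort: 80 # same as --port=80
```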

Deployment

1. In k8s, publishing a Deployment creates instances (docker containers) of an application (docker image); each instance is wrapped in a Pod, the smallest manageable unit in k8s
2. Once published to the cluster, the Deployment tells k8s how to create and update instances of the application, and the master schedules those instances onto concrete nodes in the cluster
3. After the instances are created, the Kubernetes Deployment Controller keeps watching them. If the worker node running an instance shuts down or is deleted, the controller recreates the instance on another worker node with optimal resources. This is a self-healing mechanism for machine failure and maintenance
4. Before container orchestration, install scripts were commonly used to start applications, but they could not recover an application from machine failure. By creating instances and keeping the desired number of them running across cluster nodes, Kubernetes Deployments provide a fundamentally different way to manage applications
5. The Deployment lives on the master node; when it is published, the master picks suitable worker nodes to create the Containers (the cubes in the figure), and each Container is wrapped in a Pod (the blue circle)

Scaling in k8s

[root@k8s-01 ~]# kubectl get deploy,pod
NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/my-nginx   1/1     1            1           36h

NAME                            READY   STATUS    RESTARTS   AGE
pod/my-nginx-6b74b79f57-grk69   1/1     Running   0          36h
[root@k8s-01 ~]# kubectl  get pod -o wide
NAME                        READY   STATUS    RESTARTS   AGE   IP              NODE     NOMINATED NODE   READINESS GATES
my-nginx-6b74b79f57-grk69   1/1     Running   0          36h   192.168.179.1   k8s-02   <none>           <none>
[root@k8s-01 ~]# kubectl scale --replicas=3 deploy  my-nginx
deployment.apps/my-nginx scaled
[root@k8s-01 ~]# kubectl get pod -o wide
NAME                        READY   STATUS    RESTARTS   AGE   IP                NODE     NOMINATED NODE   READINESS GATES
my-nginx-6b74b79f57-6lllw   1/1     Running   0          67s   192.168.179.2     k8s-02   <none>           <none>
my-nginx-6b74b79f57-grk69   1/1     Running   0          36h   192.168.179.1     k8s-02   <none>           <none>
my-nginx-6b74b79f57-w2gtf   1/1     Running   0          67s   192.168.165.193   k8s-03   <none>           <none>
[root@k8s-01 ~]#
[root@k8s-01 ~]# watch -n 1  kubectl get deploy,pod
Every 1.0s: kubectl get deploy,pod                                     Mon Oct  9 11:48:52 2023

NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/my-nginx   3/3     3            3           37h

NAME                            READY   STATUS    RESTARTS   AGE
pod/my-nginx-6b74b79f57-6lllw   1/1     Running   0          10m
pod/my-nginx-6b74b79f57-grk69   1/1     Running   0          37h
pod/my-nginx-6b74b79f57-w2gtf   1/1     Running   0          10m

# scale down
[root@k8s-01 ~]# kubectl scale  --replicas=1  deploy my-nginx

[root@k8s-01 ~]# watch -n 1  kubectl get deploy,pod
Every 1.0s: kubectl get deploy,pod                                     Mon Oct  9 11:50:53 2023

NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/my-nginx   1/1     1            1           37h

NAME                            READY   STATUS    RESTARTS   AGE
pod/my-nginx-6b74b79f57-w2gtf   1/1     Running   0          12m

service and label

A Service routes traffic across a set of Pods. Services are the abstraction that lets pods die and be replicated in kubernetes without affecting the application; discovery and routing among dependent Pods is handled by a Kubernetes Service. A Service matches its set of Pods with label selectors, the grouping primitive that allows logical operations on Kubernetes objects. Labels are key/value pairs attached to objects and can be used in many ways:

Designate objects for development, test, and production
Embed version tags
Classify objects with labels

[root@k8s-01 ~]# kubectl get pod --show-labels
NAME                        READY   STATUS    RESTARTS   AGE   LABELS
my-nginx-6b74b79f57-w2gtf   1/1     Running   0          64m   app=my-nginx,pod-template-hash=6b74b79f57
kubectl expose

1. type=ClusterIP (the default; here the service port 8081 maps to target port 80)

[root@k8s-01 ~]# kubectl  get all
NAME                            READY   STATUS    RESTARTS   AGE
pod/my-nginx-6b74b79f57-5lv6s   1/1     Running   0          11m
pod/my-nginx-6b74b79f57-p8rld   1/1     Running   0          11m
pod/my-nginx-6b74b79f57-w2gtf   1/1     Running   0          82m

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP    3d1h
service/my-nginx     ClusterIP   10.96.239.231   <none>        8081/TCP   22s

NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/my-nginx   3/3     3            3           38h

NAME                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/my-nginx-6b74b79f57   3         3         3       38h
[root@k8s-01 ~]# curl 10.96.239.231:8081
[root@k8s-01 ~]# kubectl delete service/my-nginx
service "my-nginx" deleted

2. type=NodePort

[root@k8s-01 ~]# kubectl expose  deploy my-nginx --port=8081 --target-port=80 --type=NodePort
service/my-nginx exposed
[root@k8s-01 ~]# kubectl get all
NAME                            READY   STATUS    RESTARTS   AGE
pod/my-nginx-6b74b79f57-5lv6s   1/1     Running   0          29m
pod/my-nginx-6b74b79f57-p8rld   1/1     Running   0          29m
pod/my-nginx-6b74b79f57-w2gtf   1/1     Running   0          100m

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
service/kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP          3d2h
service/my-nginx     NodePort    10.96.97.250   <none>        8081:31819/TCP   42s

NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/my-nginx   3/3     3            3           38h

NAME                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/my-nginx-6b74b79f57   3         3         3       38h
[root@k8s-01 ~]# netstat -nlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 127.0.0.1:10248         0.0.0.0:*               LISTEN      7201/kubelet
tcp        0      0 127.0.0.1:10249         0.0.0.0:*               LISTEN      9838/kube-proxy
tcp        0      0 0.0.0.0:31819           0.0.0.0:*               LISTEN      9838/kube-proxy
[root@k8s-01 ~]# netstat -nlpt | grep 31819
tcp        0      0 0.0.0.0:31819           0.0.0.0:*               LISTEN      9838/kube-proxy
[root@k8s-02 ~]# netstat -nlpt | grep 31819
tcp        0      0 0.0.0.0:31819           0.0.0.0:*               LISTEN      23576/kube-proxy
[root@k8s-03 ~]# netstat -nlpt | grep 31819
tcp        0      0 0.0.0.0:31819           0.0.0.0:*               LISTEN      12999/kube-proxy

[root@k8s-01 ~]# kubectl get deploy
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
my-nginx   3/3     3            3           38h
[root@k8s-01 ~]#

Access: public IP + node port, e.g. http://47.106.12.54:31819/
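The kubectl expose command used above corresponds roughly to this Service manifest (nodePort 31819 was auto-assigned in the example; when the field is omitted, one is picked from the 30000-32767 range):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
spec:
  type: NodePort
  selector:
    app: my-nginx       # the label the Deployment's pods carry
  ports:
  - port: 8081          # cluster-internal Service port
    targetPort: 80      # container port
    nodePort: 31819     # port opened by kube-proxy on every node
```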

Scaling

Goal

Scale an application with kubectl
Scale a Deployment

We created a Deployment and exposed its Pods through a Service. The Deployment we published created only one Pod to run the application; when traffic grows, we need to scale the application to keep up with demand


Scale up

Pods added by a scale-up automatically join the Service (load-balancing network) that already fronts them
kubectl scale --replicas=3 deployment tomcat6
Keep watching the result
watch kubectl get pods -o wide

Rolling updates

Goal: perform a rolling update with kubectl

Rolling updates let a Deployment be updated with zero downtime by incrementally replacing Pod instances with new ones

Application upgrade: tomcat:alpine, tomcat:jre8-alpine
kubectl set image deployment/my-nginx2 nginx=nginx:1.9.1
Combine with jenkins for continuous integration and canary (gray) releases
kubectl set image deployment.apps/tomcat6 tomcat=tomcat:jre8-alpine
Add the --record flag to record the change
Roll back an upgrade
View the history
kubectl rollout history deployment.apps/tomcat6
kubectl rollout history deploy tomcat6
Roll back to a given revision
kubectl rollout undo deployment.apps/tomcat6 --to-revision=1
kubectl rollout undo deploy tomcat6 --to-revision=1

Upgrade an image

[root@k8s-01 ~]# kubectl get deploy
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
my-nginx   3/3     3            3           2d12h
[root@k8s-01 ~]# kubectl get pod
NAME                        READY   STATUS    RESTARTS   AGE
my-nginx-6b74b79f57-5lv6s   1/1     Running   0          22h
my-nginx-6b74b79f57-p8rld   1/1     Running   0          22h
my-nginx-6b74b79f57-w2gtf   1/1     Running   0          24h
[root@k8s-01 ~]# kubectl get pod  my-nginx-6b74b79f57-w2gtf -o yaml | grep container
    cni.projectcalico.org/containerID: 00ec829ae6d6db562b71761b2035e9b0fc47128cb22e7237622f01686fe8bef5
  containers:
  containerStatuses:
  - containerID: docker://428d2e4fb3c615f46b0d15fd323de19d70a7b24507e131b3e6f759cfa9c8f116
[root@k8s-01 ~]# kubectl get pod  my-nginx-6b74b79f57-w2gtf -o yaml | grep name
  name: my-nginx-6b74b79f57-w2gtf
  namespace: default
    name: my-nginx-6b74b79f57
    name: nginx
      name: kube-api-access-lz4mb
  - name: kube-api-access-lz4mb
          name: kube-root-ca.crt
              fieldPath: metadata.namespace
            path: namespace
    name: nginx
[root@k8s-01 ~]# kubectl get pod  my-nginx-6b74b79f57-w2gtf -o yaml | grep image
  - image: nginx
    imagePullPolicy: Always
    image: nginx:latest
    imageID: docker-pullable://nginx@sha256:0d17b565c37bcbd895e9d92315a05c1c3c9a29f762b011a10c54a66cd53c9b31
[root@k8s-01 ~]# kubectl set image  deploy my-nginx nginx=nginx:1.9.2 --record
deployment.apps/my-nginx image updated

watch kubectl get pod

image.png

image.png

image.png

image.png

Rolling back to a previous version
1. View the revision history

[root@k8s-01 ~]# kubectl rollout history deploy my-nginx
deployment.apps/my-nginx
REVISION  CHANGE-CAUSE
1         <none>
2         <none>
3         kubectl set image deploy my-nginx nginx=nginx:1.9.2 --record=true

2. Roll back to a specific revision

[root@k8s-01 ~]# kubectl rollout  undo deploy my-nginx  --to-revision=1
deployment.apps/my-nginx rolled back

Kubernetes object description files

Declarative API: objects are described in files (Pod -> yaml, Deploy -> yaml, Service -> yaml) and applied with kubectl apply -f xxx.yaml
Deploy a Deployment

apiVersion: apps/v1 # depends on the cluster version; run kubectl api-versions to list the versions your cluster supports
kind: Deployment    # the resource type; here a Deployment
metadata:           # metadata: basic attributes and information about this Deployment
  name: nginx-deployment    # name of the Deployment
  labels:       # labels: freely defined key/value pairs used to locate one or more resources; multiple pairs are allowed
    app: nginx  # set the label app=nginx on this Deployment
spec:           # the desired state: how you expect this Deployment to behave in the cluster
  replicas: 1   # run one replica of the application
  selector:     # label selector; works together with the labels above
    matchLabels: # select resources carrying the label app: nginx
      app: nginx
  template:     # template for the Pods to select or create
    metadata:   # Pod metadata
      labels:   # Pod labels; the selector above matches Pods labeled app: nginx
        app: nginx
    spec:       # desired behavior of the Pod (what to deploy inside it)
      containers:   # containers to create; the same concept as a docker container
      - name: nginx # container name
        image: nginx:1.7.9  # create the container from image nginx:1.7.9; it serves on port 80 by default
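
The Deployment above only creates Pods; reaching them from outside the cluster additionally needs a Service, as in the earlier NodePort example. A minimal sketch of a matching Service follows; the name and port numbers are illustrative assumptions, mirroring the 8081:31819 mapping seen earlier:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service   # illustrative name (assumption)
spec:
  type: NodePort        # expose the Service on a port of every node
  selector:
    app: nginx          # matches the Pod label defined in the Deployment above
  ports:
  - port: 8081          # Service port inside the cluster
    targetPort: 80      # container port (nginx listens on 80)
    nodePort: 31819     # optional; must fall in the default 30000-32767 range
```

Saved alongside the Deployment (or in the same file separated by `---`), it is applied the same way with kubectl apply -f.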

Delete the old deployment before deploying from the yaml file

[root@k8s-01 ~]# kubectl get all
NAME                            READY   STATUS    RESTARTS   AGE
pod/my-nginx-6b74b79f57-2qftn   1/1     Running   0          7h25m
pod/my-nginx-6b74b79f57-7hs8g   1/1     Running   0          7h25m
pod/my-nginx-6b74b79f57-9q5q5   1/1     Running   0          7h25m

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
service/kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP          4d8h
service/my-nginx     NodePort    10.96.97.250   <none>        8081:31819/TCP   30h

NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/my-nginx   3/3     3            3           2d20h

NAME                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/my-nginx-697c5bb596   0         0         0       7h47m
replicaset.apps/my-nginx-6b74b79f57   3         3         3       2d20h
replicaset.apps/my-nginx-f56756f49    0         0         0       7h39m
[root@k8s-01 ~]# kubectl  delete  deployment.apps/my-nginx
deployment.apps "my-nginx" deleted
[root@k8s-01 ~]# kubectl delete service/my-nginx
service "my-nginx" deleted
[root@k8s-01 ~]# kubectl get all
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   4d8h

[root@k8s-01 ~]# kubectl apply -f deploy.yaml
deployment.apps/nginx-deployment created

[root@k8s-01 ~]# kubectl  get all
NAME                                    READY   STATUS    RESTARTS   AGE
pod/nginx-deployment-746fbb99df-z8tnb   1/1     Running   0          50s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   4d8h

NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-deployment   1/1     1            1           51s

NAME                                          DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-deployment-746fbb99df   1         1         1       51s

Change replicas to 3 in deploy.yaml, then re-apply

[root@k8s-01 ~]# kubectl apply -f  deploy.yaml
deployment.apps/nginx-deployment configured
[root@k8s-01 ~]# kubectl  get  deploy,pod
NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-deployment   2/3     3            2           12m

NAME                                    READY   STATUS    RESTARTS   AGE
pod/nginx-deployment-746fbb99df-bcbkb   1/1     Running   0          24s
pod/nginx-deployment-746fbb99df-r9p6t   1/1     Running   0          24s
pod/nginx-deployment-746fbb99df-z8tnb   1/1     Running   0          12m

Delete the resources defined in the yaml file

[root@k8s-01 ~]# kubectl delete -f deploy.yaml
deployment.apps "nginx-deployment" deleted
[root@k8s-01 ~]# kubectl  get  deploy,pod
No resources found in default namespace.

Deploying the Kubernetes Dashboard

1. Deploy the Dashboard UI
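
A common way to install the Dashboard is to apply the project's recommended.yaml manifest for your version (see the kubernetes/dashboard releases page for the exact URL), then create a ServiceAccount with admin rights to log in. Below is a minimal sketch of such an account; the names follow the convention used in the Dashboard docs and are an assumption, not the only option:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user                  # conventional name (assumption)
  namespace: kubernetes-dashboard   # namespace created by the recommended manifest
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin               # full admin; narrow this in production
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
```

A login token for this account can then be generated, e.g. with kubectl -n kubernetes-dashboard create token admin-user on Kubernetes v1.24+, and pasted into the Dashboard login page.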
