Chapter 1: Introduction to k8s
1. The evolution of application deployment
(1) Traditional deployment
(2) Virtualized deployment
(3) Containerized deployment
2. Three container orchestration tools
(1) Swarm: Docker's own orchestrator
(2) Mesos: from Apache; used together with Marathon
(3) K8s: from Google
3. k8s features
Self-healing, elastic scaling, service discovery, load balancing, version rollback, storage orchestration
4. k8s components
(1) Roles and their components
Master (control plane): apiserver / scheduler / controller-manager / etcd
Node (data plane): kubelet / kube-proxy / docker
Chapter 2: Cluster Setup
1. Cluster types
One master with many nodes (the layout used here), or many masters with many nodes
2. Installation methods
Minikube: quickly sets up a single-node k8s
Kubeadm: quickly sets up a k8s cluster (the method used here)
Binary packages: download each component's binary and install them one by one; good for understanding the individual k8s components
3. Installation steps
(1) Host preparation: 3 machines
10.186.61.124 master
10.186.61.125 node1
10.186.61.134 node2
(2) OS preparation
a. Hostname resolution
[root@master ~]# cat /etc/hosts
10.186.61.124 master
10.186.61.134 node2
10.186.61.125 node1
b. Time synchronization (enable the chronyd service)
systemctl enable chronyd
systemctl start chronyd
c. Disable iptables, firewalld, and swap
systemctl stop iptables
systemctl stop firewalld
systemctl disable iptables
systemctl disable firewalld
swapoff -a
d. Edit /etc/selinux/config to disable SELinux
[root@node2 ~]# cat /etc/selinux/config | grep SELINUX=disabled
SELINUX=disabled
e. Kernel tuning:
Enable bridge filtering and IP forwarding
[root@node2 ~]# cat /etc/sysctl.d/kubernetes.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
Reload the configuration
sysctl -p /etc/sysctl.d/kubernetes.conf
Load the bridge filtering module
modprobe br_netfilter
Check that it loaded
[root@node2 ~]# lsmod | grep br_netfilter
br_netfilter           22256  0
bridge                146976  1 br_netfilter
f. Enable ipvs
Install ipset and ipvsadm
yum install ipset ipvsadm -y
Load the modules
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
Verify the modules loaded
[root@node2 ~]# lsmod | grep -e ip_vs -e nf_conntrack_ipv4
nf_conntrack_ipv4      15053  0
nf_defrag_ipv4         12729  1 nf_conntrack_ipv4
ip_vs_sh               12688  0
ip_vs_wrr              12697  0
ip_vs_rr               12600  0
ip_vs                 141473  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          133053  2 ip_vs,nf_conntrack_ipv4
libcrc32c              12644  3 xfs,ip_vs,nf_conntrack
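Note: modules loaded with modprobe do not survive a reboot. A common way to make them persistent on CentOS 7 (a sketch; /etc/sysconfig/modules/ is a CentOS convention, adjust for other distros) is a module script:
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod +x /etc/sysconfig/modules/ipvs.modules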
g. Reboot the machine (reboot)
(3) Install docker
a. Configure a yum repo for docker; Aliyun's repo is used here, downloaded from:
https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
b. Install a specific version of docker-ce
yum install -y --setopt=obsoletes=0 docker-ce-18.06.3.ce-3.el7
c. Add a config file for docker at /etc/docker/daemon.json
[root@node1 docker]# cat daemon.json
{ "registry-mirrors": ["https://kn0t2bca.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"]
}
d. Start docker
systemctl restart docker
systemctl enable docker
e. Verify docker installed successfully
docker version
(4) Install kubeadm
a. Configure the k8s yum repo
[root@node1 yum.repos.d]# pwd
/etc/yum.repos.d
[root@node1 yum.repos.d]# cat kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
b. Install specific versions of the k8s components
yum install -y --setopt=obsoletes=0 kubeadm-1.17.4-0 kubectl-1.17.4-0 kubelet-1.17.4-0
c. Edit /etc/sysconfig/kubelet
[root@node1 yum.repos.d]# cat /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"
d. Start docker now, and enable kubelet so it starts on boot
systemctl start docker
systemctl enable kubelet
(5) Create the k8s cluster
a. List the images needed to deploy the cluster
[root@node2 /]# kubeadm config images list
I1206 08:33:30.013863    1787 version.go:251] remote version is much newer: v1.22.4; falling back to: stable-1.17
W1206 08:33:30.768607    1787 validation.go:28] Cannot validate kube-proxy config - no validator is available
W1206 08:33:30.768656    1787 validation.go:28] Cannot validate kubelet config - no validator is available
k8s.gcr.io/kube-apiserver:v1.17.17
k8s.gcr.io/kube-controller-manager:v1.17.17
k8s.gcr.io/kube-scheduler:v1.17.17
k8s.gcr.io/kube-proxy:v1.17.17
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.5
b. Pull the images above with docker pull
[root@master yum.repos.d]# images=(
> kube-apiserver:v1.17.17
> kube-controller-manager:v1.17.17
> kube-scheduler:v1.17.17
> kube-proxy:v1.17.17
> pause:3.1
> etcd:3.4.3-0
> coredns:1.6.5
> )
[root@master yum.repos.d]# for imageName in ${images[@]};do
> docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
> docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
> docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
> done
c. View the downloaded images
[root@node1 /]# docker images
REPOSITORY                           TAG             IMAGE ID       CREATED         SIZE
nginx                                latest          f652ca386ed1   3 days ago      141MB
tomcat                               latest          904a98253fbf   2 weeks ago     680MB
nginx                                <none>          ea335eea17ab   2 weeks ago     141MB
k8s.gcr.io/kube-proxy                v1.17.17        3ef67d180564   10 months ago   117MB
k8s.gcr.io/kube-apiserver            v1.17.17        38db32e0f351   10 months ago   171MB
k8s.gcr.io/kube-controller-manager   v1.17.17        0ddd96ecb9e5   10 months ago   161MB
k8s.gcr.io/kube-scheduler            v1.17.17        d415ebbf09db   10 months ago   94.4MB
quay.io/coreos/flannel               v0.12.0-amd64   4e9f801d2217   21 months ago   52.8MB
k8s.gcr.io/coredns                   1.6.5           70f311871ae1   2 years ago     41.6MB
k8s.gcr.io/etcd                      3.4.3-0         303ce5db0e90   2 years ago     288MB
nginx                                1.17.1          98ebf73aba75   2 years ago     109MB
nginx                                1.14-alpine     8a2fb25a19f5   2 years ago     16MB
k8s.gcr.io/pause                     3.1             da86e6ba6ca1   3 years ago     742kB
(6) Cluster initialization (run on the master)
a. kubeadm init --kubernetes-version=v1.17.17 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --apiserver-advertise-address=10.186.61.124
b. Create the required files:
mkdir -p /root/.kube
cp -i /etc/kubernetes/admin.conf /root/.kube/config
c. Join the nodes to the cluster (run on each node)
kubeadm join 10.186.61.124:6443 --token gym7ln.8jfdfgc8ef7ei816 --discovery-token-ca-cert-hash sha256:0f064be8b3df46a3af22ca8255200e5df6b14981db24909a7849caf87e160e3d
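Note: the join token printed by kubeadm init expires (24 hours by default). If it has expired, a fresh join command can be generated on the master with the standard kubeadm subcommand:
kubeadm token create --print-join-command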
d. Install the network plugin
[root@master k8s]# kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
Where to get kube-flannel.yml:
https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
e. Check the node status again; the nodes are now Ready, for example:
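Illustrative output (ages and versions will differ in your cluster):
[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   16h   v1.17.4
node1    Ready    <none>   16h   v1.17.4
node2    Ready    <none>   16h   v1.17.4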
f. If anything goes wrong during deployment, run kubeadm reset and delete the related files to remove the old cluster, then deploy a fresh one
4. Deploy an nginx application on the k8s cluster
a. Create a deploy named nginx (the pod name is auto-generated) with container image nginx:1.14-alpine
[root@master ~]# kubectl create deploy nginx --image=nginx:1.14-alpine
deployment.apps/nginx created
b. Expose the port
[root@master ~]# kubectl expose deploy nginx --port=80 --type=NodePort
service/nginx exposed
c. View the deploy, pods, and service
[root@master ~]# kubectl get deploy,pods,svc
NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx   1/1     1            1           2m36s
NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-6867cdf567-dcl2z   1/1     Running   0          2m36s
NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        16h
service/nginx        NodePort    10.110.82.228   <none>        80:32627/TCP   22s
d. Access nginx
[root@master ~]# curl http://10.186.61.124:32627
Chapter 3: Resource Management
13. The YAML language
(1) Notes on using YAML:
a. It is case-sensitive
b. Indentation expresses hierarchy; tabs are not allowed, only spaces (a restriction in older parsers); the number of spaces does not matter as long as elements at the same level are left-aligned
c. # marks a comment
(2) Tool for converting and checking between JSON and YAML:
http://json2yaml.com/
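A quick illustration of the rules above (hypothetical data): two-space indentation expresses the nesting, and # starts a comment; the same structure in JSON makes the hierarchy explicit.
# a person record
person:
  name: zhangsan
  age: 18
The equivalent JSON is {"person": {"name": "zhangsan", "age": 18}}.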
14. Resource management approaches
(1) Imperative commands: operate on k8s resources directly with commands
Create a namespace named dev
[root@master ~]# kubectl create ns dev
namespace/dev created
List all namespaces
[root@master ~]# kubectl get ns
NAME              STATUS   AGE
default           Active   16h
dev               Active   6s
kube-node-lease   Active   16h
kube-public       Active   16h
kube-system       Active   16h
In dev, create a deployment named pod; the pod's name is generated by the system
[root@master ~]# kubectl run pod --image=nginx -n dev
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/pod created
View the created deploy and pod
[root@master ~]# kubectl get deploy,pods -n dev
NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/pod   1/1     1            1           64s
NAME                       READY   STATUS    RESTARTS   AGE
pod/pod-864f9875b9-hrhs2   1/1     Running   0          64s
(2) Imperative object configuration: operate on k8s resources with commands plus config files; can both create and delete resources
a. Create a YAML file
[root@master ~]# cat nginxpod.yaml
# create a namespace dev
apiVersion: v1
kind: Namespace
metadata:
  name: dev
---
# in dev create a pod nginxpod containing one container nginx-containers
apiVersion: v1
kind: Pod
metadata:
  name: nginxpod
  namespace: dev
spec:
  containers:
  - name: nginx-containers
    image: nginx:latest
b. Generate the ns, pod, and container from the YAML file
[root@master ~]# kubectl create -f nginxpod.yaml
namespace/dev created
pod/nginxpod created
c. Verify the creation succeeded
[root@master ~]# kubectl get ns
NAME              STATUS   AGE
default           Active   16h
dev               Active   10s
kube-node-lease   Active   16h
kube-public       Active   16h
kube-system       Active   16h
[root@master ~]# kubectl get pods -n dev
NAME       READY   STATUS    RESTARTS   AGE
nginxpod   1/1     Running   0          18s
d. Delete the ns and pod using the config file
[root@master ~]# kubectl delete -f nginxpod.yaml
namespace "dev" deleted
e. Verify the deletion succeeded
[root@master ~]# kubectl get ns
NAME              STATUS   AGE
default           Active   16h
kube-node-lease   Active   16h
kube-public       Active   16h
kube-system       Active   16h
[root@master ~]# kubectl get pods -n dev
No resources found in dev namespace.
(3) Declarative object configuration: operate on k8s resources with the apply command plus config files; a resource that exists is updated, one that doesn't is created; resources cannot be deleted this way
a. First run
[root@master ~]# kubectl apply -f nginxpod.yaml
namespace/dev created
pod/nginxpod created
b. Second run
[root@master ~]# kubectl apply -f nginxpod.yaml
namespace/dev unchanged
pod/nginxpod unchanged
19. The namespace resource
a. Nearly every k8s resource lives in a namespace; namespaces isolate groups of pods from one another
b. The default namespaces
[root@master ~]# kubectl get ns
NAME              STATUS   AGE
default           Active   17h
kube-node-lease   Active   17h
kube-public       Active   17h
kube-system       Active   17h
c. Create a namespace
[root@master ~]# kubectl create ns dev
namespace/dev created
d. List the namespaces
[root@master ~]# kubectl get ns
NAME              STATUS   AGE
default           Active   17h
dev               Active   3s
kube-node-lease   Active   17h
kube-public       Active   17h
kube-system       Active   17h
e. Delete a namespace
[root@master ~]# kubectl delete ns dev
namespace "dev" deleted
f. Create a namespace from a config file
[root@master ~]# cat ns.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: dev2
[root@master ~]# kubectl create -f ns.yaml
namespace/dev2 created
20. Pods
a. The pod is the smallest unit k8s manages; a pod can hold multiple containers
b. A pod contains two kinds of containers: the user application containers (one or more) and the pause (root) container (exactly one per pod; the pod's IP is attached to it, which gives the pod its internal network connectivity)
c. Pods the cluster creates for itself
[root@master ~]# kubectl get pods -n kube-system
NAME                             READY   STATUS              RESTARTS   AGE
coredns-6955765f44-ffl6j         0/1     ContainerCreating   0          17h
coredns-6955765f44-v96qf         0/1     ContainerCreating   0          17h
etcd-master                      1/1     Running             0          17h
kube-apiserver-master            1/1     Running             0          17h
kube-controller-manager-master   1/1     Running             0          17h
kube-flannel-ds-amd64-djgsf      1/1     Running             0          16h
kube-flannel-ds-amd64-jskxd      1/1     Running             0          16h
kube-flannel-ds-amd64-rkrst      1/1     Running             0          16h
kube-proxy-gtdq8                 1/1     Running             0          17h
kube-proxy-nzkc4                 1/1     Running             0          17h
kube-proxy-whslc                 1/1     Running             0          17h
kube-scheduler-master            1/1     Running             0          17h
d. Create a pod
[root@master ~]# kubectl run nginx --image=nginx --port=80 --namespace dev
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/nginx created
e. View the pod
[root@master ~]# kubectl get pods -n dev
NAME                     READY   STATUS    RESTARTS   AGE
nginx-5578584966-jhsxc   1/1     Running   0          77s
f. View the pod's details
kubectl describe pod nginx-5578584966-jhsxc -n dev
g. Get the pod IP
[root@master ~]# kubectl get pod -n dev -o wide
NAME                     READY   STATUS    RESTARTS   AGE     IP           NODE    NOMINATED NODE   READINESS GATES
nginx-5578584966-jhsxc   1/1     Running   0          3m13s   10.244.2.8   node2   <none>           <none>
h. Access the pod
[root@master ~]# curl http://10.244.2.8:80
i. Delete the pod
[root@master ~]# kubectl delete pod nginx-5578584966-jhsxc -n dev
pod "nginx-5578584966-jhsxc" deleted
After the pod is deleted a new one appears; its controller recreates it, so the controller has to be deleted as well
[root@master ~]# kubectl get pods -n dev
NAME                     READY   STATUS              RESTARTS   AGE
nginx-5578584966-lmxtg   0/1     ContainerCreating   0          5s
Delete the nginx controller
[root@master ~]# kubectl delete deploy nginx -n dev
deployment.apps "nginx" deleted
The pod is gone
[root@master ~]# kubectl get pods -n dev
No resources found in dev namespace.
[root@master ~]#
21. Labels
a. Labels on k8s resources categorize them, and resources can then be selected by label
b. Label selectors
A label selector picks out the resources that carry particular labels.
Selector types: equality-based and set-based
Add a label: kubectl label pod nginxpod version=1.0 -n dev2
Overwrite a label: kubectl label pod nginxpod version=2.0 -n dev2 --overwrite
Show a pod's labels: kubectl get pod nginxpod -n dev2 --show-labels
Find pods by label: kubectl get pod -l version!=2.0 --show-labels -n dev2
Find pods by label: kubectl get pod -l version=2.0 --show-labels -n dev2
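The commands above are all equality-based; a set-based selector uses the standard in/notin syntax of kubectl -l, for example:
kubectl get pod -l 'version in (1.0,2.0)' --show-labels -n dev2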
Managing labels through config files
Create the config file
[root@master ~]# cat label.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: dev
  labels:
    version: "3.0"
    env: "test"
spec:
  containers:
  - image: nginx:latest
    name: pod
    ports:
    - name: nginx-port
      containerPort: 80
      protocol: TCP
Generate the resource from the config file
[root@master ~]# kubectl apply -f label.yaml
pod/nginx created
View the pod's labels
[root@master ~]# kubectl get pods -n dev --show-labels
NAME    READY   STATUS    RESTARTS   AGE   LABELS
nginx   1/1     Running   0          22s   env=test,version=3.0
22. The deployment controller
a. A Deployment manages pods, keeping them in the desired state
b. Create the controller and pods by command
[root@master ~]# kubectl run nginx --image=nginx --port=80 --replicas=3 -n dev
deployment.apps/nginx created
image sets the pod image, port the container port, replicas the number of pods, and -n the namespace
c. View the created resources
[root@master ~]# kubectl get deploy,pods -n dev
NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx   2/3     3            2           2m49s
NAME                         READY   STATUS              RESTARTS   AGE
pod/nginx-5578584966-glq9v   1/1     Running             0          2m49s
pod/nginx-5578584966-klx89   0/1     ContainerCreating   0          2m49s
pod/nginx-5578584966-lqjzg   1/1     Running             0          2m49s
d. View the deploy's details
kubectl describe deploy nginx -n dev
e. Delete the deploy
[root@master ~]# kubectl delete deploy nginx -n dev
deployment.apps "nginx" deleted
f. Manage the deploy through a config file
[root@master ~]# cat deploy-nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: dev
spec:
  replicas: 3
  selector:
    matchLabels:
      run: nginx
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - name: pod
        image: nginx:1.17.1
        ports:
        - name: nginx-port
          containerPort: 80
          protocol: TCP
[root@master ~]# kubectl apply -f deploy-nginx.yaml
deployment.apps/nginx created
[root@master ~]# kubectl get deploy -n dev -o wide --show-labels
NAME    READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES         SELECTOR    LABELS
nginx   2/3     3            2           3m58s   pod          nginx:1.17.1   run=nginx   <none>
[root@master ~]#
[root@master ~]# kubectl get pod -n dev --show-labels
NAME                     READY   STATUS              RESTARTS   AGE     LABELS
nginx-568b566f4c-pmplx   0/1     ContainerCreating   0          4m58s   pod-template-hash=568b566f4c,run=nginx
nginx-568b566f4c-znckp   1/1     Running             0          4m58s   pod-template-hash=568b566f4c,run=nginx
nginx-568b566f4c-ztktv   1/1     Running             0          4m58s   pod-template-hash=568b566f4c,run=nginx
23. Services
(1) A service is a single access point and load balancer for a group of pods
(2) Create a service reachable only inside the cluster
a.[root@master ~]# kubectl expose deploy nginx --name=svc-nginx --type=ClusterIP --port=80 --target-port=80 -n dev
service/svc-nginx exposed
b. View the service
[root@master ~]# kubectl get svc -n dev
NAME        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
svc-nginx   ClusterIP   10.98.12.250   <none>        80/TCP    14s
c. Test access from inside the cluster
[root@master ~]# curl 10.98.12.250:80
(3) Create a service reachable from outside the cluster
a.[root@master ~]# kubectl expose deploy nginx --name=svc-nginx1 --type=NodePort --port=80 --target-port=80 -n dev
service/svc-nginx1 exposed
b. View the service
[root@master ~]# kubectl get svc svc-nginx1 -n dev
NAME         TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
svc-nginx1   NodePort   10.99.39.25   <none>        80:31522/TCP   34s
c. Access the service via the host IP and node port
[root@master ~]# curl http://10.186.61.124:31522/
(4) Delete a service
[root@master ~]# kubectl delete svc svc-nginx1 -n dev
service "svc-nginx1" deleted
(5) Manage services through config files
a. Create the YAML file
[root@master ~]# cat svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: svc-nginx1
  namespace: dev
spec:
  clusterIP: 10.109.179.231
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: nginx
  type: ClusterIP
b. Run the command
[root@master ~]# kubectl apply -f svc.yaml
service/svc-nginx1 created
c. View the services
[root@master ~]# kubectl get svc -n dev
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
svc-nginx    ClusterIP   10.98.12.250     <none>        80/TCP    131m
svc-nginx1   ClusterIP   10.109.179.231   <none>        80/TCP    66s
24. Pods in detail
(1) Inspect the pod configuration schema
a.[root@master ~]# kubectl explain pod
b.[root@master ~]# kubectl explain pod.kind
c.[root@master ~]# kubectl explain pod.spec
(2) In k8s, nearly every resource shares the same top-level properties, namely:
apiVersion / kind / metadata / spec / status
(3) Create a pod from a config file
a.[root@master ~]# cat pod-xiangxi.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-base
  namespace: dev
  labels:
    user: heima
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  - name: busybox
    image: busybox:1.30
b.[root@master ~]# kubectl apply -f pod-xiangxi.yaml
pod/pod-base created
c.[root@master ~]# kubectl get pods -n dev
NAME       READY   STATUS    RESTARTS   AGE
pod-base   1/2     Running   3          72s
26. Pod image pull policy
(1) Create the config file
[root@master ~]# cat pod-xiangxi.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-base
  namespace: dev
  labels:
    user: heima
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
    imagePullPolicy: Always
  - name: busybox
    image: busybox:1.30
(2) Create the pod
[root@master ~]# kubectl apply -f pod-xiangxi.yaml
pod/pod-base created
(3) View the pod
[root@master ~]# kubectl get pods -n dev
NAME       READY   STATUS             RESTARTS   AGE
pod-base   1/2     CrashLoopBackOff   1          21s
(4) Image pull policies
imagePullPolicy sets the image pull policy and takes 3 values:
Always: always pull the image from the remote registry
IfNotPresent: use the local image if it exists, otherwise pull it
Never: only ever use the local image
27. Startup commands
(1) The busybox container above never started successfully: busybox is not a long-running program but a collection of tools, so it exits as soon as it starts. This is what command is for; it runs a command once the container has finished initializing
(2) Write the YAML config (the unchanged parts are omitted)
- name: busybox
  image: busybox:1.30
  command: ["/bin/sh","-c","touch /tmp/hello.txt;while true;do /bin/echo $(date +%T) >> /tmp/hello.txt;sleep 3;done"]
(3)[root@master ~]# kubectl apply -f pod-xiangxi.yaml
pod/pod-base created
(4) Both containers now start normally
[root@master ~]# kubectl get pods -n dev
NAME       READY   STATUS    RESTARTS   AGE
pod-base   2/2     Running   0          8s
(5) Enter the container and check the command's output
[root@master ~]# kubectl exec pod-base -n dev -it -c busybox /bin/sh
/ # tail -f /tmp/hello.txt
07:03:38
07:03:41
07:03:44
28. Environment variables
(1) Write the config file
- name: busybox
  image: busybox:1.30
  command: ["/bin/sh","-c","touch /tmp/hello.txt;while true;do /bin/echo $(date +%T) >> /tmp/hello.txt;sleep 3;done"]
  env: # environment variables set inside the container
  - name: "username"
    value: "admin"
(2) Create the pod
[root@master ~]# kubectl apply -f pod-xiangxi.yaml
pod/pod-base created
[root@master ~]#
(3) Enter the container and check the variable
[root@master ~]# kubectl exec pod-base -n dev -it -c busybox /bin/sh
/ # echo $username
admin
/ #
29. Port settings
(1) Create the config file
containers:
- name: nginx
  image: nginx:1.17.1
  imagePullPolicy: Never
  ports:
  - name: nginx-port
    containerPort: 80
    protocol: TCP
(2) Create the pod
[root@master ~]# kubectl apply -f pod-xiangxi.yaml
pod/pod-base created
[root@master ~]# kubectl get pods -n dev
NAME       READY   STATUS    RESTARTS   AGE
pod-base   2/2     Running   0          57s
(3) View the pod's port information
[root@master ~]# kubectl get pods -n dev -o yaml
30. Resource quotas
(1) Create the config file
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
    imagePullPolicy: Never
    ports:
    - name: nginx-port
      containerPort: 80
      protocol: TCP
    resources: # resource quota
      limits: # resource limits (upper bound)
        cpu: "2" # CPU limit, in cores
        memory: "10Gi" # memory limit
      requests: # resource requests (lower bound)
        cpu: "1"
        memory: "10Mi"
(2) Create and inspect with the same commands as above
31. Pod lifecycle
(1) Stages of a pod's lifecycle
a. Pod creation
b. Run the init containers
c. Run the main containers:
post-start hook and pre-stop hook
liveness probes and readiness probes
(2) The 5 pod phases:
Pending: the apiserver has created the pod object, but scheduling has not completed or images are still being pulled
Running: the pod has been scheduled onto a node and the kubelet has created all of its containers
Succeeded: every container in the pod terminated successfully and none will be restarted
Failed: every container has terminated and at least one terminated in failure
Unknown: the apiserver cannot obtain the pod's state, usually because of a network failure
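To read a pod's phase directly, the standard jsonpath output works, e.g. for the pod-base pod used earlier:
kubectl get pod pod-base -n dev -o jsonpath='{.status.phase}'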
33. Init containers
(1) Init containers run before the pod's main containers start, doing the main containers' preparatory work
(2) Create a config file with init containers
initContainers:
- name: test-mysql
  image: busybox:1.30
  command: ["/bin/sh","-c","until ping 10.244.2.21 -c 1;do echo waiting for mysql....;sleep 2;done"]
- name: test-redis
  image: busybox:1.30
  command: ["/bin/sh","-c","until ping 10.244.2.21 -c 1;do echo waiting for mysql....;sleep 2;done"]
(3)[root@master ~]# kubectl apply -f pod-chushihuarongqi.yaml
pod/pod-inintcontainer created
(4) Because the init containers never complete, the main container is not started
[root@master ~]# kubectl get pod pod-inintcontainer -n dev
NAME                 READY   STATUS     RESTARTS   AGE
pod-inintcontainer   0/1     Init:0/2   0          2m2s
34. Hook functions
(1) postStart: runs right after the container is created; if it fails, the container is restarted
a. Create the config file
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
    lifecycle:
      postStart:
        exec: # on container start, replace nginx's default index page
          command: ["/bin/sh","-c","echo postStart... > /usr/share/nginx/html/index.html"]
      preStop:
        exec: # stop nginx before the container stops
          command: ["/usr/sbin/nginx","-s","quit"]
b.[root@master ~]# kubectl apply -f pod-gouzi.yaml
pod/pod-base created
[root@master ~]#
[root@master ~]# kubectl get pods -n dev
NAME       READY   STATUS    RESTARTS   AGE
pod-base   1/1     Running   0          3s
c. Verify the effect
[root@master ~]# curl 10.244.2.24
postStart...
(2) preStop: runs just before the container terminates; the container only terminates after it completes, and it blocks the container's deletion until it finishes
35. Container probes
(1) Probing whether the application instance inside a container is actually working is a classic way to keep a service available: if a probe finds an instance unhealthy, the container is restarted or traffic stops being forwarded to it
(2) The 2 probe kinds:
livenessProbe: liveness probe; if it fails, the container is restarted
readinessProbe: readiness probe; if it fails, traffic is not forwarded to the container
(3) The 3 probe mechanisms:
exec: run a command inside the container once; if the command succeeds the instance is considered healthy, otherwise unhealthy
tcpSocket: try to connect to a port of the user container; if the connection can be established, the instance is considered healthy
httpGet: call a URL of the web application inside the container; a status code between 200 and 399 means healthy
(4) Container probing with livenessProbe + exec
a. Create the config file
[root@master ~]# cat pod-liveness-exec.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-liveness-exec
  namespace: dev
  labels:
    version: "3.0"
    env: "test"
spec:
  containers:
  - name: containers-liveness-exec
    image: nginx:1.17.1
    ports:
    - name: nginx-port
      containerPort: 80
      protocol: TCP
    livenessProbe:
      exec:
        command: ["/bin/cat","/tmp/hello.txt"]
b. Create the pod
[root@master ~]# kubectl create -f pod-liveness-exec.yaml
pod/pod-liveness-exec created
c. View the pod status
[root@master ~]# kubectl get pods -n dev
NAME                READY   STATUS    RESTARTS   AGE
pod-liveness-exec   1/1     Running   0          5s
d. Check again: the pod keeps being restarted. /tmp/hello.txt does not exist, so the liveness probe fails over and over and the container is restarted each time
[root@master ~]# kubectl get pods -n dev
NAME                READY   STATUS             RESTARTS   AGE
pod-liveness-exec   0/1     CrashLoopBackOff   4          2m56s
e. View the pod's details
[root@master ~]# kubectl describe pod pod-liveness-exec -n dev
(5) Container probing with livenessProbe + tcpSocket
a. The config looks like this
spec:
  containers:
  - name: containers-liveness-exec
    image: nginx:1.17.1
    ports:
    - name: nginx-port
      containerPort: 80
      protocol: TCP
    livenessProbe:
      tcpSocket:
        port: 8080
(6) Container probing with livenessProbe + httpGet
a. The config looks like this
spec:
  containers:
  - name: containers-liveness-exec
    image: nginx:1.17.1
    ports:
    - name: nginx-port
      containerPort: 80
      protocol: TCP
    livenessProbe:
      httpGet:
        scheme: HTTP
        port: 80
        path: /
(7) Other probe attributes
initialDelaySeconds: how many seconds to wait after container start before the first probe
timeoutSeconds: probe timeout; default 1s, minimum 1s
periodSeconds: probe interval; default 10s, minimum 1s
failureThreshold: how many consecutive failed probes count as a failure
successThreshold: how many consecutive successful probes count as a success
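A sketch of how these attributes sit alongside the probe mechanism (the values here are illustrative, not recommendations):
livenessProbe:
  httpGet:
    scheme: HTTP
    port: 80
    path: /
  initialDelaySeconds: 30   # wait 30s after container start before the first probe
  timeoutSeconds: 5         # each probe times out after 5s
  periodSeconds: 10         # probe every 10s
  failureThreshold: 3       # 3 consecutive failures count as failed
  successThreshold: 1       # 1 success counts as healthy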
37. Restart policies
(1) With a liveness probe, an unhealthy container gets restarted; exactly how is governed by the restart policy, which has 3 values:
Always: automatically restart the container whenever it fails; this is the default
OnFailure: restart only when the container terminates with a non-zero exit code
Never: never restart the container, whatever its state
(2) The restart policy applies to every container in the pod. The first restart happens immediately when needed; subsequent restarts are delayed by 10s/20s/40s/80s/160s/300s, with 300s the maximum delay
(3) The config looks like this
spec:
  containers:
  - name: containers-liveness-exec
    image: nginx:1.17.1
    ports:
    - name: nginx-port
      containerPort: 80
      protocol: TCP
    livenessProbe:
      httpGet:
        scheme: HTTP
        port: 80
        path: /ss
  restartPolicy: Never
38. Scheduling
(1) K8s offers these pod scheduling methods:
Automatic scheduling: handled entirely by the scheduler
Directed scheduling: via nodeName or nodeSelector; with nodeName the pod is forced onto the named node even if that node does not exist, in which case the pod is certain to fail
Affinity scheduling: nodeAffinity / podAffinity / podAntiAffinity
Taint scheduling: PreferNoSchedule / NoSchedule / NoExecute
Toleration scheduling: tolerations
(2) Directed scheduling
a. The config looks like this
spec:
  containers:
  - name: containers-liveness-exec
    image: nginx:1.17.1
    ports:
    - name: nginx-port
      containerPort: 80
      protocol: TCP
  nodeName: node1 # the node this pod is pinned to; if it does not exist, the pod is still forced onto that nonexistent node
b. Create the pod
[root@master ~]# kubectl create -f pod-diaodu-node.yaml
pod/pod-liveness-exec created
c. View the pod; it was scheduled onto the specified node, node1
[root@master ~]# kubectl get pods -n dev -o wide
NAME                READY   STATUS              RESTARTS   AGE   IP       NODE
pod-liveness-exec   0/1     ContainerCreating   0          61s   <none>   node1
(3) Directed scheduling via node labels
a. Label the 2 nodes
[root@master ~]# kubectl label nodes node1 nodeenv=pro
node/node1 labeled
[root@master ~]# kubectl label nodes node2 nodeenv=test
node/node2 labeled
b. The config looks like this
spec:
  containers:
  - name: containers-liveness-exec
    image: nginx:1.17.1
    ports:
    - name: nginx-port
      containerPort: 80
      protocol: TCP
  nodeSelector:
    nodeenv: test
c. Create the pod and view its details; it has been scheduled onto node2
[root@master ~]# kubectl apply -f pod-diaodu-node.yaml
pod/pod-liveness-exec created
[root@master ~]# kubectl get pods -n dev -o wide
NAME                READY   STATUS    RESTARTS   AGE   IP            NODE
pod-liveness-exec   1/1     Running   0          13s   10.244.2.29   node2
(4) Node affinity: hard requirement
a. The config looks like this
spec:
  containers:
  - name: pod
    image: nginx:1.17.1
    ports:
    - name: nginx-port
      containerPort: 80
      protocol: TCP
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution: # hard requirement; the conditions must be met
        nodeSelectorTerms:
        - matchExpressions:
          - key: nodeenv
            operator: In
            values: ["test","yyy"] # if no node carries a matching value, scheduling fails
b. Create the pod and view its details; it has been scheduled onto node2
[root@master ~]# kubectl apply -f pod-nodeaffinity-required.yaml
pod/nginx created
[root@master ~]# kubectl get pods -n dev -o wide
NAME    READY   STATUS    RESTARTS   AGE   IP            NODE
nginx   1/1     Running   0          9s    10.244.2.30   node2
(5) Node affinity: soft preference
a. The config looks like this
affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution: # soft preference
    - weight: 1
      preference:
        matchExpressions:
        - key: nodeenv
          operator: In
          values: ["xxx","yyy"] # nothing matches these values, yet the pod is still created normally
b. Create the pod and view its details; it was scheduled onto node2
[root@master ~]# kubectl apply -f pod-nodeaffinity-prefer.yaml
pod/nginx created
[root@master ~]# kubectl get pods -n dev -o wide
NAME    READY   STATUS    RESTARTS   AGE   IP            NODE
nginx   1/1     Running   0          61s   10.244.2.31   node2
(6) Pod affinity
a. Create a pod with labels
[root@master ~]# kubectl get pod -n dev --show-labels
NAME    READY   STATUS    RESTARTS   AGE    LABELS
nginx   1/1     Running   0          120m   env=test,podenv=test,version=3.0
b. Create the config file
affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: podenv
          operator: In
          values: ["test","yyy"]
      topologyKey: kubernetes.io/hostname
c. Create the pod and view its details; it was scheduled onto the same node as the nginx pod above
[root@master ~]# kubectl apply -f pod-podaffinity-required.yaml
pod/tomcat created
[root@master ~]# kubectl get pods -n dev -o wide
NAME     READY   STATUS    RESTARTS   AGE     IP            NODE
nginx    1/1     Running   0          130m    10.244.2.31   node2
tomcat   1/1     Running   0          2m38s   10.244.2.32   node2
(7) Pod anti-affinity
a. Create the config file
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: podenv
          operator: In
          values: ["test","yyy"]
      topologyKey: kubernetes.io/hostname
b. Create the pod and view its details; it was scheduled onto a different node from the nginx pod above
[root@master ~]# kubectl get pods -n dev -o wide
NAME     READY   STATUS              RESTARTS   AGE     IP            NODE
mysql    0/1     ContainerCreating   0          3m55s   <none>        node1
nginx    1/1     Running             0          149m    10.244.2.31   node2
(8) Taints
(1). The scheduling settings above go on the pod; taints are configured on the node
(2). A taint has the form key=value:effect
effect takes 3 values:
PreferNoSchedule: try not to schedule pods here unless there is no alternative
NoSchedule: no new pods are scheduled here; existing pods stay
NoExecute: no new pods are scheduled here, and existing pods are evicted
(3) Simulating taint effects
a. To make the effect clearer, first cordon node2
[root@master ~]# kubectl cordon node2
node/node2 cordoned
b. Give node1 a PreferNoSchedule taint
[root@master ~]# kubectl taint nodes node1 tag=zss:PreferNoSchedule
node/node1 tainted
c. Create a pod
The config file is:
[root@master ~]# cat pod-suiyi.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  namespace: dev
spec:
  containers:
  - name: nginx1
    image: nginx:1.17.1
The pod can still be placed on node1
[root@master ~]# kubectl get pods -n dev -o wide
NAME   READY   STATUS              RESTARTS   AGE   IP       NODE
pod1   0/1     ContainerCreating   0          22s   <none>   node1
d. Remove node1's PreferNoSchedule taint and set a NoSchedule taint
[root@master ~]# kubectl taint nodes node1 tag:PreferNoSchedule-
node/node1 untainted
[root@master ~]# kubectl taint nodes node1 tag=zss:NoSchedule
node/node1 tainted
e. Create another pod, pod2; it can no longer be scheduled onto node1
[root@master ~]# kubectl get pods -n dev -o wide
NAME   READY   STATUS              RESTARTS   AGE     IP       NODE
pod1   0/1     ContainerCreating   0          9m10s   <none>   node1
pod2   0/1     Pending             0          13s     <none>   <none>
f. Viewing taints
[root@master ~]# kubectl describe nodes node1 | grep Taints
Taints:             tag=zss:NoSchedule
[root@master ~]#
[root@master ~]# kubectl describe nodes master | grep Taints
Taints:             node-role.kubernetes.io/master:NoSchedule
(9) Tolerations
a. A pod can declare tolerations, allowing it to be scheduled onto tainted nodes
b. Create a pod with a toleration
Create the config file
[root@master ~]# cat pod-suiyi.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod3
  namespace: dev
spec:
  containers:
  - name: nginx3
    image: nginx:1.17.1
  tolerations:
  - key: "tag"
    operator: "Equal"
    value: "zss"
    effect: "NoSchedule"
[root@master ~]# kubectl apply -f pod-suiyi.yaml
pod/pod3 created
With the toleration added, pod3 can be scheduled onto node1
[root@master ~]# kubectl get pods -n dev -o wide
NAME   READY   STATUS              RESTARTS   AGE     IP       NODE
pod1   0/1     ContainerCreating   0          18m     <none>   node1
pod2   0/1     Pending             0          9m59s   <none>   <none>
pod3   0/1     ContainerCreating   0          19s     <none>   node1
46. Pod controllers
(1) Pods either have a controller or they don't; pods with a controller are managed by it
(2) The main pod controllers:
ReplicaSet: keeps the specified number of pods running; supports changing the replica count and the image version
Deployment: controls pods by controlling ReplicaSets; supports rolling upgrades and version rollback
Horizontal Pod Autoscaler: adjusts the number of pods automatically with cluster load, smoothing peaks and troughs
DaemonSet: runs one replica on every (or each selected) node in the cluster; typically for daemon-like tasks
Job: its pods exit as soon as their task completes; for one-off tasks
CronJob: its pods run on a schedule; for periodic tasks
StatefulSet: manages stateful applications
47. The ReplicaSet controller
a. Create the config file
[root@master ~]# cat deploy-replicaset.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: replicaset
  namespace: dev
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-pod
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.1
b. Create the rs; view it and its pods
[root@master ~]# kubectl apply -f deploy-replicaset.yaml
replicaset.apps/replicaset created
[root@master ~]# kubectl get rs -n dev -o wide
NAME         DESIRED   CURRENT   READY   AGE    CONTAINERS   IMAGES         SELECTOR
replicaset   3         3         0       3m6s   nginx        nginx:1.17.1   app=nginx-pod
[root@master ~]# kubectl get pods -n dev
NAME               READY   STATUS    RESTARTS   AGE
replicaset-7gq9s   0/1     Pending   0          2m32s
replicaset-sf4wg   0/1     Pending   0          2m32s
replicaset-z9h5j   0/1     Pending   0          2m32s
c. Change the rs replica count to 4
[root@master ~]# kubectl edit rs replicaset -n dev
replicaset.apps/replicaset edited
[root@master ~]# kubectl get pods -n dev
NAME               READY   STATUS    RESTARTS   AGE
replicaset-7gq9s   0/1     Pending   0          6m4s
replicaset-c8zwb   0/1     Pending   0          9s
replicaset-sf4wg   0/1     Pending   0          6m4s
replicaset-szbm7   0/1     Pending   0          9s
Scale the replicas to 2 non-interactively
[root@master ~]# kubectl scale rs replicaset --replicas=2 -n dev
replicaset.apps/replicaset scaled
[root@master ~]# kubectl get pods -n dev
NAME               READY   STATUS    RESTARTS   AGE
replicaset-7gq9s   0/1     Pending   0          8m15s
replicaset-z9h5j   0/1     Pending   0          8m15s
d. Two ways to upgrade the image
kubectl edit rs replicaset -n dev
kubectl set image rs replicaset nginx=nginx -n dev
e. Delete the rs
[root@master ~]# kubectl delete rs replicaset -n dev
replicaset.apps "replicaset" deleted
48. Deployment
(1) A deployment manages pods by managing a ReplicaSet
(2) Working with deployments
a. Create the config file
[root@master ~]# cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment1
  namespace: dev
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-pod
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.1
b. Create the deployment and view it
[root@master ~]# kubectl apply -f deployment.yaml --record=true
deployment.apps/deployment1 created
[root@master ~]# kubectl get deploy -n dev
NAME          READY   UP-TO-DATE   AVAILABLE   AGE
deployment1   0/3     3            0           2m35s
c. Creating the deploy also created an rs
[root@master ~]# kubectl get rs -n dev
NAME                     DESIRED   CURRENT   READY   AGE
deployment1-5d89bdfbf9   3         3         0       3m19s
d. Scale out non-interactively
[root@master ~]# kubectl scale deploy deployment1 --replicas=5 -n dev
deployment.apps/deployment1 scaled
The replica count went from 3 to 5
[root@master ~]# kubectl get pods -n dev
NAME                           READY   STATUS    RESTARTS   AGE
deployment1-5d89bdfbf9-62jvx   1/1     Running   0          9s
deployment1-5d89bdfbf9-gdmdm   1/1     Running   0          9s
deployment1-5d89bdfbf9-lwqmj   1/1     Running   0          8m25s
deployment1-5d89bdfbf9-szjc9   1/1     Running   0          8m25s
deployment1-5d89bdfbf9-v9dg2   1/1     Running   0          8m25s
e. Interactively change the replica count to 2
[root@master ~]# kubectl edit deploy deployment1 -n dev
deployment.apps/deployment1 edited
[root@master ~]# kubectl get pods -n dev
NAME                           READY   STATUS    RESTARTS   AGE
deployment1-5d89bdfbf9-szjc9   1/1     Running   0          12m
deployment1-5d89bdfbf9-v9dg2   1/1     Running   0          12m
50. Deployment image updates
(1) There are 2 image update strategies: recreate and rolling update
(2) Recreate
a. Create the config file
[root@master ~]# cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment1
  namespace: dev
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-pod
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.1
b. Create the deploy, update the image, and watch the update
[root@master ~]# kubectl apply -f deployment.yaml
deployment.apps/deployment1 created
[root@master ~]# kubectl set image deploy deployment1 nginx=nginx:1.17.2 -n dev
deployment.apps/deployment1 image updated
(3) Rolling update
a. Create the config file
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
b. Create the deploy, update the image, and watch the update
[root@master ~]# kubectl apply -f deployment.yaml
deployment.apps/deployment1 created
[root@master ~]# kubectl set image deploy deployment1 nginx=nginx:1.17.2 -n dev
deployment.apps/deployment1 image updated
The pods' start times end up different
[root@master ~]# kubectl get pods -n dev
NAME                           READY   STATUS    RESTARTS   AGE
deployment1-675d469f8b-6kpk6   1/1     Running   0          2m9s
deployment1-675d469f8b-dbzrv   1/1     Running   0          2m8s
deployment1-675d469f8b-n769z   1/1     Running   0          2m11s
51. Deployment version rollback
(1) The update created a new rs; the old rs is kept around for version rollback
[root@master ~]# kubectl get rs -n dev
NAME                     DESIRED   CURRENT   READY   AGE
deployment1-5d89bdfbf9   0         0         0       10m
deployment1-675d469f8b   3         3         3       8m20s
(2) Check whether the upgrade succeeded
[root@master ~]# kubectl rollout status ?deploy deployment1 -n dev
deployment "deployment1" successfully rolled out
(3) View the upgrade history; one upgrade is recorded here
[root@master ~]# kubectl rollout history deploy deployment1 -n dev
deployment.apps/deployment1
REVISION  CHANGE-CAUSE
1         kubectl apply --filename=deployment.yaml --record=true
2         kubectl apply --filename=deployment.yaml --record=true
View the current image version
[root@master ~]# kubectl get deployment -n dev -o wide
NAME          READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES         SELECTOR
deployment1   3/3     3            3           21m   nginx        nginx:1.17.3   app=nginx-pod
Roll back to a specific revision
[root@master ~]# kubectl rollout undo deployment deployment1 --to-revision=1 -n dev
deployment.apps/deployment1 rolled back
The rollback succeeded
[root@master ~]# kubectl get deployment -n dev -o wide
NAME          READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES         SELECTOR
deployment1   3/3     3            3           22m   nginx        nginx:1.17.1
The revision history has been updated
[root@master ~]# kubectl rollout history deploy deployment1 -n dev
deployment.apps/deployment1
REVISION  CHANGE-CAUSE
2         kubectl apply --filename=deployment.yaml --record=true
3         kubectl apply --filename=deployment.yaml --record=true
[root@master ~]#
(4) kubectl rollout subcommands: status/history/pause/resume/restart/undo
52. Canary releases
(1) When updating the images of a batch of pods, first update only some of them, route part of the traffic to the updated pods, and check whether they respond correctly; if they do, update the remaining pods. This is a canary release
(2) Performing a canary release
a. View the current deploy image version
[root@master ~]# kubectl get deploy deployment1 -n dev -o wide
NAME          READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES
deployment1   6/6     6            6           16h   nginx        nginx:1.17.1
b. Update the deploy's image, then immediately pause the rollout
[root@master ~]# kubectl set image deploy deployment1 nginx=nginx:1.17.4 -n dev && kubectl rollout pause deploy deployment1 -n dev
deployment.apps/deployment1 image updated
deployment.apps/deployment1 paused
c. Check the rollout: 3 replicas have been updated, 3 have not
[root@master ~]# kubectl rollout status deploy deployment1 -n dev
Waiting for deployment "deployment1" rollout to finish: 3 out of 6 new replicas have been updated...
d. View the pods
[root@master ~]# kubectl get pods -n dev
NAME                           READY   STATUS    RESTARTS   AGE
deployment1-5d89bdfbf9-4nxpb   1/1     Running   0          16h
deployment1-5d89bdfbf9-87z52   1/1     Running   0          10m
deployment1-5d89bdfbf9-d9mxn   1/1     Running   0          10m
deployment1-5d89bdfbf9-mx7wz   1/1     Running   0          16h
deployment1-5d89bdfbf9-smc9z   1/1     Running   0          16h
deployment1-6c9f56fcfb-7w755   1/1     Running   0          4m46s
deployment1-6c9f56fcfb-h54cg   1/1     Running   0          4m46s
deployment1-6c9f56fcfb-m8jsh   1/1     Running   0          4m46s
e. Once the updated pods prove healthy, resume the rollout
[root@master ~]# kubectl rollout resume deploy deployment1 -n dev
deployment.apps/deployment1 resumed
f. Everything has now been updated
[root@master ~]# kubectl get deploy deployment1 -n dev -o wide
NAME          READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES
deployment1   6/6     6            6           16h   nginx        nginx:1.17.4
53. HPA
(1) The HPA (Horizontal Pod Autoscaler) scales the system up and down on top of deployments and ReplicaSets
(2) Install metrics-server to collect resource usage across the cluster
a. [root@master ~]# yum install git
[root@master ~]# git clone -b v0.3.6 https://github.com/kubernetes-incubator/metrics-server
[root@master ~]# cd /root/metrics-server/deploy/1.8+/
b. Edit metrics-server-deployment.yaml
    spec:
      hostNetwork: true
      serviceAccountName: metrics-server
      volumes:
      # mount in tmp so we can safely use from-scratch images and/or read-only containers
      - name: tmp-dir
        emptyDir: {}
      containers:
      - name: metrics-server
        #image: k8s.gcr.io/metrics-server-amd64:v0.3.6
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-server-amd64:v0.3.6
        imagePullPolicy: Always
        args:
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname,ExternalDNS
        volumeMounts:
        - name: tmp-dir
          mountPath: /tmp
c. Create the required pods
[root@master 1.8+]# kubectl apply -f ./
d. View the pod
[root@master 1.8+]# kubectl get pod -n kube-system
metrics-server-54645cfcfb-bftqc   1/1     Running   0          35s
e. View node resource usage
[root@master 1.8+]# kubectl top node
NAME     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
master   135m         3%     1669Mi          45%
node1    1426m        35%    1866Mi          50%
node2    43m          1%     794Mi           21%
f. View pod resource usage
[root@master 1.8+]# kubectl top pod -n kube-system
NAME                              CPU(cores)   MEMORY(bytes)
etcd-master                       15m          312Mi
kube-apiserver-master             35m          419Mi
kube-controller-manager-master    13m          42Mi
kube-flannel-ds-amd64-djgsf       2m           14Mi
kube-flannel-ds-amd64-jskxd       2m           12Mi
kube-flannel-ds-amd64-rkrst       2m           10Mi
kube-proxy-gtdq8                  1m           13Mi
kube-proxy-nzkc4                  1m           12Mi
kube-proxy-whslc                  1m           15Mi
kube-scheduler-master             3m           20Mi
metrics-server-54645cfcfb-bftqc   1m           11Mi
(3) Using metrics-server
a. Create a deploy and an svc
[root@master 1.8+]# kubectl run nginx --image=nginx:1.17.1 --requests=cpu=100m -n dev
deployment.apps/nginx created
[root@master 1.8+]# kubectl expose deployment nginx --type=NodePort --port=80 -n dev
service/nginx exposed
[root@master 1.8+]#
[root@master 1.8+]# kubectl get deploy,pod,svc -n dev
NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx   1/1     1            1           67s
NAME                        READY   STATUS    RESTARTS   AGE
pod/nginx-cd84c9547-jtsr4   1/1     Running   0          67s
NAME            TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/nginx   NodePort   10.97.183.181   <none>        80:30641/TCP   25s
b. Create an HPA
The config file is:
[root@master ~]# cat kzq-hpa.yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: hpa
  namespace: dev
spec:
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 3
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
[root@master ~]# kubectl apply -f kzq-hpa.yaml
horizontalpodautoscaler.autoscaling/hpa created
[root@master ~]# kubectl get hpa -n dev
NAME   REFERENCE          TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
hpa    Deployment/nginx   0%/3%     1         10        1          3m45s
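To watch the HPA react, the service can be put under load, for example with a simple request loop against the NodePort shown above (illustrative; run it from another machine and watch REPLICAS in kubectl get hpa grow):
while true; do curl http://10.186.61.124:30641; done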
55. DaemonSet
(1) Ensures that every node (or each selected node) in the cluster runs one replica; typically used for node monitoring and log collection
(2) Working with a DaemonSet
a. Create the config file
[root@master ~]# cat kzq-daemonSet.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemonset
  namespace: dev
spec:
  selector:
    matchLabels:
      app: nginx-pod
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.1
b. Create the daemonset and view the pods; a pod has been created on both nodes
[root@master ~]# kubectl apply -f kzq-daemonSet.yaml
daemonset.apps/daemonset created
[root@master ~]# kubectl get pods -n dev -o wide
NAME              READY   STATUS              RESTARTS   AGE   IP            NODE
daemonset-8fbnd   1/1     Running             0          4s    10.244.2.82   node2
daemonset-tmw94   0/1     ContainerCreating   0          7s    <none>        node1
56. The Job controller
(1) Handles short-lived, one-off batch tasks
(2) Working with a Job
a. Create the config file
[root@master ~]# cat kzq-job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: job
  namespace: dev
spec:
  manualSelector: true
  selector:
    matchLabels:
      app: counter-pod
  template:
    metadata:
      labels:
        app: counter-pod
    spec:
      restartPolicy: Never
      containers:
      - name: counter
        image: busybox:1.30
        command: ["/bin/sh","-c","for i in 9 8 7 6; do echo $i; sleep 3; done"]
b. With the original command (which was missing the semicolon before do), the shell syntax was invalid and the pods failed:
[root@master ~]# kubectl get pods -n dev
NAME        READY   STATUS   RESTARTS   AGE
job-bc9d4   0/1     Error    0          11m
job-ftn9j   0/1     Error    0          10m
57. The CronJob controller
(1) A CronJob runs Job tasks repeatedly at specific times
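The section gives no example, so here is a minimal CronJob sketch (names are assumed; batch/v1beta1 is the CronJob API version on k8s 1.17):
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: cronjob
  namespace: dev
spec:
  schedule: "*/1 * * * *" # standard cron format: run every minute
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: counter
            image: busybox:1.30
            command: ["/bin/sh","-c","for i in 9 8 7 6; do echo $i; sleep 3; done"]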
58. Services
(1) Service overview
a. A service gives a group of pods a single access point with load balancing; when pod IPs change, access is unaffected. Under the hood it is implemented by kube-proxy
b. kube-proxy's three working modes: userspace / iptables / ipvs
The ipvs mode requires ipvs to be installed:
wget http://www.linuxvirtualserver.org/software/kernel-2.6/ipvsadm-1.26.tar.gz
yum -y install kernel-devel make gcc openssl-devel libnl* popt*
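In a kubeadm cluster the mode is switched in kube-proxy's configmap (a sketch of the usual procedure):
kubectl edit cm kube-proxy -n kube-system
# in the editor, find the config section and change:  mode: ""  ->  mode: "ipvs"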
c. After switching the mode, delete the kube-proxy pods so they restart with the new configuration:
[root@master ~]# kubectl get pod -n kube-system --show-labels | grep k8s-app=kube-proxy
kube-proxy-76rpt                  1/1     Running   0          12s    controller-revision-hash=69bdcfb59b,k
kube-proxy-8gf2c                  1/1     Running   0          3s     controller-revision-hash=69bdcfb59b,k
kube-proxy-l62wn                  1/1     Running   0          11s    controller-revision-hash=69bdcfb59b,k
[root@master ~]# kubectl delete pod -l k8s-app=kube-proxy -n kube-system
pod "kube-proxy-gtdq8" deleted
pod "kube-proxy-nzkc4" deleted
pod "kube-proxy-whslc" deleted
(2) Service: ClusterIP
a. Prepare the environment: create 3 pods
[root@master ~]# kubectl get pod -n dev
NAME                           READY   STATUS    RESTARTS   AGE
deployment1-6696798b78-4xzvs   1/1     Running   0          14m
deployment1-6696798b78-rbkjq   1/1     Running   0          14m
deployment1-6696798b78-tlgbv   1/1     Running   0          14m
Change each of the 3 pods' index pages like this:
[root@master ~]# kubectl exec -it deployment1-6696798b78-rbkjq -n dev /bin/sh
# echo "10.244.1.10" > /usr/share/nginx/html/index.html
# exit
[root@master ~]# curl 10.244.1.9:80
10.244.1.9
[root@master ~]# curl 10.244.1.10:80
10.244.1.10
[root@master ~]# curl 10.244.2.83:80
10.244.2.83
b. Create a service for these 3 pods
The service config file is:
[root@master ~]# cat service-clusterip.yaml
apiVersion: v1
kind: Service
metadata:
  name: service-cluster
  namespace: dev
spec:
  selector:
    app: nginx-pod
  clusterIP: 10.97.97.97
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 80
[root@master ~]# kubectl apply -f service-clusterip.yaml
service/service-cluster created
[root@master ~]# kubectl get svc -n dev
NAME              TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
service-cluster   ClusterIP   10.97.97.97   <none>        80/TCP    11s
Access the service; requests are load-balanced across the pods
[root@master ~]# curl 10.97.97.97:80
10.244.2.83
[root@master ~]# curl 10.97.97.97:80
10.244.1.9
[root@master ~]# curl 10.97.97.97:80
10.244.1.10
[root@master ~]# curl 10.97.97.97:80
10.244.2.83
[root@master ~]# curl 10.97.97.97:80
10.244.1.9
[root@master ~]# curl 10.97.97.97:80
10.244.1.10
View the ipvs mapping rules:
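For example (requires ipvsadm on the node; illustrative output for the service created above):
[root@master ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.97.97.97:80 rr
  -> 10.244.1.9:80                Masq    1      0          0
  -> 10.244.1.10:80               Masq    1      0          0
  -> 10.244.2.83:80               Masq    1      0          0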
(3) Headless services
This kind of service gets no cluster IP and does no load balancing; clients resolve the pod addresses through the service's DNS records instead
a. Create the config file
[root@master ~]# cat service-headliness.yaml
apiVersion: v1
kind: Service
metadata:
  name: service-headliness
  namespace: dev
spec:
  selector:
    app: nginx-pod
  clusterIP: None
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 80
b. Create and view it
[root@master ~]# kubectl apply -f service-headliness.yaml
service/service-headliness created
[root@master ~]# kubectl get svc service-headliness -n dev -o wide
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE   SELECTOR
service-headliness   ClusterIP   None         <none>        80/TCP    26s   app=nginx-pod
[root@master ~]# kubectl exec -it deployment1-6696798b78-4xzvs -n dev /bin/sh
# cat /etc/resolve.conf
cat: /etc/resolve.conf: No such file or directory
# cat /etc/resolv.conf
nameserver 10.96.0.10
search dev.svc.cluster.local svc.cluster.local cluster.local localdomain
options ndots:5
# exit
(4) NodePort services
a. The config file is:
[root@master ~]# cat svc-nodeport.yaml
apiVersion: v1
kind: Service
metadata:
  name: service-nodeport
  namespace: dev
spec:
  selector:
    app: nginx-pod
  type: NodePort
  ports:
  - port: 80
    nodePort: 30002
    targetPort: 80
b. Create it and view the svc
[root@master ~]# kubectl apply -f svc-nodeport.yaml
service/service-nodeport created
[root@master ~]# kubectl get svc -n dev
NAME               TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
service-nodeport   NodePort   10.101.105.100   <none>        80:30002/TCP   17s
c. Test access
66. Ingress
a. Ingress exists so that large numbers of services do not each have to occupy their own node port; one entry point routes traffic to many services
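A minimal Ingress sketch (assumes an ingress controller such as ingress-nginx is already installed; networking.k8s.io/v1beta1 is the Ingress API version on k8s 1.17, and the hostname is hypothetical):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-http
  namespace: dev
spec:
  rules:
  - host: nginx.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: service-nodeport # the service created above
          servicePort: 80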
70. Data storage