We can scale pods up and down with the kubectl scale command or from the Dashboard, but each of those is a manual, one-off operation, and you never know when a surge of traffic will arrive. Because load changes are frequent and unpredictable, purely manual scaling is not practical; what we really want is for the Kubernetes system to scale pods automatically based on their current load.
Kubernetes provides a resource object for exactly this purpose: the Horizontal Pod Autoscaler (HPA). The basic principle of the HPA is that it monitors and analyzes the load of all the Pods controlled by an RC or a Deployment, and from that decides whether the pod replica count needs to be adjusted.
HPA
Inside a Kubernetes cluster the HPA is implemented as a controller, and we can create an HPA resource object with a single kubectl autoscale command. The HPA controller polls every 30 seconds by default (configurable via the kube-controller-manager flag --horizontal-pod-autoscaler-sync-period; see the sketch after the list below), queries the resource utilization of the pods belonging to the target resource (an RC or a Deployment), and compares it with the target value set when the HPA was created, scaling the replica count up or down accordingly.
When you create an HPA, it fetches the average utilization (or raw value) of every pod from Heapster or from a user-defined REST client, compares that with the metric defined in the HPA, computes the concrete replica count to scale to, and performs the scaling. Currently the HPA can obtain its data from two sources:
* Heapster: only CPU utilization is supported
* Custom monitoring: covered in a later lesson
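For reference, on a kubeadm cluster such as the one built in the earlier lessons, kube-controller-manager runs as a static pod, so the sync period can be changed by editing its manifest. A minimal sketch, assuming the standard kubeadm layout; the 10s value is only an example:

# Fragment of /etc/kubernetes/manifests/kube-controller-manager.yaml (kubeadm static pod)
apiVersion: v1
kind: Pod
metadata:
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - name: kube-controller-manager
    image: gcr.io/google_containers/kube-controller-manager-amd64:v1.8.2   # keep whatever image your cluster already uses
    command:
    - kube-controller-manager
    # ... keep all existing flags, and add:
    - --horizontal-pod-autoscaler-sync-period=10s   # default is 30s

The kubelet watches the static manifests directory, so saving the file restarts the controller manager with the new flag.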
Heapster
First we install Heapster. In the earlier lesson on building a cluster with kubeadm we already pulled the Heapster-related images onto the nodes, so all that remains is to deploy them. We use Heapster v1.4.2 here; the manifests can be found on the Heapster GitHub page:
[https://github.com/kubernetes/heapster/tree/v1.4.2/deploy/kube-config/influxdb](https://github.com/kubernetes/heapster/tree/v1.4.2/deploy/kube-config/influxdb)
Note that the manifests below pull images from a private registry (172.20.139.17:5000); adjust the image references to match your own environment.
[root@app-139-42 HPA]# vim heapster.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: heapster
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: heapster-admin
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: heapster
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: heapster
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      task: monitoring
      k8s-app: heapster
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: heapster
    spec:
      serviceAccountName: heapster
      containers:
      - name: heapster
        image: 172.20.139.17:5000/heapster-amd64:v1.3.0
        imagePullPolicy: IfNotPresent
        command:
        - /heapster
        # port 10255 (the kubelet read-only port) is not reachable in this cluster; use the Kubernetes default 10250
        - --source=kubernetes:https://kubernetes.default?useServiceAccount=true&kubeletHttps=true&kubeletPort=10250&insecure=true
        - --sink=influxdb:http://monitoring-influxdb.kube-system.svc:8086
---
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: Heapster
  name: heapster
  namespace: kube-system
spec:
  ports:
  - port: 80
    targetPort: 8082
  selector:
    k8s-app: heapster
[root@app-139-42 HPA]# vim grafana.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: monitoring-grafana
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      task: monitoring
      k8s-app: grafana
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: grafana
    spec:
      containers:
      - name: grafana
        image: 172.20.139.17:5000/heapster-grafana-amd64:v4.2.0
        ports:
        - containerPort: 3000
          protocol: TCP
        volumeMounts:
        - mountPath: /var
          name: grafana-storage
        env:
        - name: INFLUXDB_HOST
          value: monitoring-influxdb
        - name: GF_SERVER_HTTP_PORT
          value: "3000"
        # The following env variables are required to make Grafana accessible via
        # the kubernetes api-server proxy. On production clusters, we recommend
        # removing these env variables, setup auth for grafana, and expose the grafana
        # service using a LoadBalancer or a public IP.
        - name: GF_AUTH_BASIC_ENABLED
          value: "false"
        - name: GF_AUTH_ANONYMOUS_ENABLED
          value: "true"
        - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          value: Admin
        - name: GF_SERVER_ROOT_URL
          # If you're only using the API Server proxy, set this value instead:
          # value: /api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/
          value: /
      volumes:
      - name: grafana-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-grafana
  name: monitoring-grafana
  namespace: kube-system
spec:
  # In a production setup, we recommend accessing Grafana through an external Loadbalancer
  # or through a public IP.
  # type: LoadBalancer
  # You could also use NodePort to expose the service at a randomly-generated port
  # type: NodePort
  ports:
  - port: 80
    targetPort: 3000
  selector:
    k8s-app: grafana
[root@app-139-42 HPA]# vim influxdb.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: monitoring-influxdb
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      task: monitoring
      k8s-app: influxdb
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: influxdb
    spec:
      containers:
      - name: influxdb
        image: 172.20.139.17:5000/heapster-influxdb-amd64:v1.1.1
        volumeMounts:
        - mountPath: /data
          name: influxdb-storage
      volumes:
      - name: influxdb-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-influxdb
  name: monitoring-influxdb
  namespace: kube-system
spec:
  ports:
  - port: 8086
    targetPort: 8086
  selector:
    k8s-app: influxdb
We can copy the yaml files from that page into our cluster and create them with kubectl. After they are created, if we also want to see the monitoring charts in the Dashboard, we need to configure our heapster-host in the Dashboard.
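For the v1.x Dashboard this is done with the --heapster-host argument on the dashboard container. A minimal sketch of the relevant fragment of the kubernetes-dashboard Deployment, assuming the heapster Service defined above (the image tag is an example, use your own):

    spec:
      containers:
      - name: kubernetes-dashboard
        image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3     # use your own dashboard image
        args:
        - --heapster-host=http://heapster.kube-system.svc:80    # the heapster Service created above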
As before, let's create a Deployment to manage some pods and then use an HPA to autoscale them. The Deployment yaml is defined as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hpa-nginx-deploy
  labels:
    app: nginx-demo
spec:
  replicas: 3
  revisionHistoryLimit: 4
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: web
        image: nginx
        ports:
        - containerPort: 80
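One detail the Deployment above omits: the HPA computes CPU utilization as a percentage of the pod's CPU request, so the container should declare resources.requests.cpu, otherwise the HPA has nothing to compare actual usage against and cannot scale. A minimal sketch of the container section with a request added (the 100m value is an assumption, pick what fits your workload):

      containers:
      - name: web
        image: nginx
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 100m   # HPA utilization = actual CPU usage / this request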
# Then create the Deployment:
$ kubectl create -f hpa-deploy-demo.yaml
# Now let's create an HPA, which we can do with the kubectl autoscale command:
$ kubectl autoscale deployment hpa-nginx-deploy --cpu-percent=10 --min=1 --max=10
deployment "hpa-nginx-deploy" autoscaled

$ kubectl get hpa
NAME               REFERENCE                     TARGET   CURRENT   MINPODS   MAXPODS   AGE
hpa-nginx-deploy   Deployment/hpa-nginx-deploy   10%      0%        1         10        13s
This command creates an HPA bound to the resource hpa-nginx-deploy, with a minimum of 1 pod replica and a maximum of 10. The HPA will dynamically add or remove pods based on the configured CPU utilization target (10%).
Besides the kubectl autoscale command, we can also create an HPA resource object from a yaml file. If you are not sure how to write one, look at the YAML of an existing HPA (the sample below was captured from a different HPA named nginxtest, but the structure is identical):
$ kubectl get hpa nginxtest -o yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  creationTimestamp: 2017-06-29T08:04:08Z
  name: nginxtest
  namespace: default
  resourceVersion: "951016361"
  selfLink: /apis/autoscaling/v1/namespaces/default/horizontalpodautoscalers/nginxtest
  uid: 86febb63-5ca1-11e7-aaef-5254004e79a3
spec:
  maxReplicas: 5                        # maximum replica count
  minReplicas: 1                        # minimum replica count
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment                    # kind of the resource to scale
    name: nginxtest                     # name of the resource to scale
  targetCPUUtilizationPercentage: 50    # CPU utilization that triggers scaling
status:
  currentCPUUtilizationPercentage: 48   # current average CPU utilization of the pods
  currentReplicas: 1                    # current replica count
  desiredReplicas: 2                    # desired replica count
  lastScaleTime: 2017-07-03T06:32:19Z
Based on the sample above, you can now write your own YAML-based HPA description file.
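Here is a minimal sketch targeting the Deployment from this section; the thresholds simply mirror the kubectl autoscale flags used above, and the file name hpa-nginx.yaml is just an example:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-nginx-deploy
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hpa-nginx-deploy            # the Deployment created above
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 10  # same target as --cpu-percent=10

Create it with kubectl create -f hpa-nginx.yaml just like any other resource.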
Now let's increase the load and test the behaviour. We start a busybox pod and request the service above in a loop:
$ kubectl run -i --tty load-generator --image=busybox /bin/sh
If you don't see a command prompt, try pressing enter.
/ # while true; do wget -q -O- http://172.16.255.60:4000; done
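The address 172.16.255.60:4000 is specific to the original environment and assumes a Service is already exposing the hpa-nginx-deploy pods. If you need to create one, here is a minimal sketch (the name hpa-nginx-svc and port 4000 are assumptions):

apiVersion: v1
kind: Service
metadata:
  name: hpa-nginx-svc
spec:
  selector:
    app: nginx       # matches the pod labels of hpa-nginx-deploy
  ports:
  - port: 4000       # the port used by the load generator above
    targetPort: 80   # the nginx container port

Inside the cluster the load generator could then request http://hpa-nginx-svc:4000 instead of a hard-coded IP.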
After a short while we can see that the HPA has started to work:
$ kubectl get hpa
NAME               REFERENCE                     TARGET   CURRENT   MINPODS   MAXPODS   AGE
hpa-nginx-deploy   Deployment/hpa-nginx-deploy   10%      29%       1         10
At the same time, if we check the replica count of hpa-nginx-deploy, it has grown from 1 to 3:
$ kubectl get deployment hpa-nginx-deploy
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
hpa-nginx-deploy   3         3         3            3           4d
Checking the HPA again, now that there are more replicas the utilization has settled at around 10%:
$ kubectl get hpa
NAME               REFERENCE                     TARGET   CURRENT   MINPODS   MAXPODS   AGE
hpa-nginx-deploy   Deployment/hpa-nginx-deploy   10%      9%        1         10        35m
Likewise, we can now stop the busybox pod to remove the load, wait a while, and look at the HPA and Deployment objects again:
$ kubectl get hpa
NAME               REFERENCE                     TARGET   CURRENT   MINPODS   MAXPODS   AGE
hpa-nginx-deploy   Deployment/hpa-nginx-deploy   10%      0%        1         10        48m
$ kubectl get deployment hpa-nginx-deploy
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
hpa-nginx-deploy   1         1         1            1           4d
We can see that the replica count has gone back from 3 to 1.
For now, though, this HPA supports only a single metric, CPU utilization, which is not very flexible; in a later lesson we will autoscale Pods based on our own custom monitoring metrics.