1. Deployment Strategy Comparison
A comparison of rolling deployment, blue-green deployment, and canary deployment.
Rolling deployment: the new version of the application gradually replaces the old one. The actual rollout happens over a period of time, during which old and new versions coexist without affecting functionality or user experience. This makes it easier to roll back any new component that turns out to be incompatible with the old ones.
Blue-green deployment: the new version of the application is deployed into the green environment, where functional and performance testing takes place. Once the tests pass, application traffic is routed from the blue environment to the green one, and the green environment becomes the new production. In this approach, two identical production environments run in parallel.
Canary deployment: with a canary deployment, you roll out the new application code to a small slice of the production infrastructure. Once the release is signed off, only a small number of users are routed to it, minimizing the impact. If no errors occur, the new version is gradually rolled out to the rest of the infrastructure. The name comes from the old mining practice of sending a canary into the shaft before the miners went down: if the canary survived, the air was free of toxic gas. The figure below illustrates a canary deployment:
A canary deployment gradually shifts production traffic from version A to version B. Traffic is usually split by weight; for example, 90% of requests go to version A and 10% go to version B.
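To build intuition for the weighted split, it can be simulated locally with a short loop, no cluster involved. A minimal sketch, assuming bash (for `$RANDOM`), routing each simulated request to version B with probability 1/10:

```shell
# Simulate a 90/10 weighted split between version A and version B
count_a=0; count_b=0
for i in $(seq 1 1000); do
  # RANDOM % 10 yields 0-9; send one slot in ten to version B
  if [ $((RANDOM % 10)) -eq 0 ]; then
    count_b=$((count_b + 1))
  else
    count_a=$((count_a + 1))
  fi
done
echo "version A: $count_a  version B: $count_b"
```

Over 1000 simulated requests, version B receives roughly 100 of them, mirroring the 10% weight.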
2. Implementing Canary Deployment with Kubernetes
Main steps:
1. Deploy v1; at this point the Service routes all traffic to v1.
2. Deploy v2 with x/10 replicas while scaling v1 down by x/10, so that roughly one tenth of the traffic reaches v2.
3. Gradually scale v1 down and v2 up until v2 has fully replaced v1.
2.1 Setting Up the Demo Service
app-v1.yaml : https://github.com/ContainerSolutions/k8s-deployment-strategies/blob/master/canary/native/app-v1.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: http
  selector:
    app: my-app
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-v1
  labels:
    app: my-app
spec:
  replicas: 10
  selector:
    matchLabels:
      app: my-app
      version: v1.0.0
  template:
    metadata:
      labels:
        app: my-app
        version: v1.0.0
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9101"
    spec:
      containers:
      - name: my-app
        image: containersol/k8s-deployment-strategies
        ports:
        - name: http
          containerPort: 8080
        - name: probe
          containerPort: 8086
        env:
        - name: VERSION
          value: v1.0.0
        livenessProbe:
          httpGet:
            path: /live
            port: probe
          initialDelaySeconds: 5
          periodSeconds: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: probe
          periodSeconds: 5
$kubectl apply -f app-v1.yaml
service/my-app created
deployment.apps/my-app-v1 created
$kubectl get service my-app
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-app NodePort 10.98.198.198 <none> 80:30969/TCP 23m
$curl 10.98.198.198:80
Host: my-app-v1-c9b7f9985-5qvz4, Version: v1.0.0
2.2 Upgrading the Application with a Canary Deployment
Next, we upgrade my-app-v1 to my-app-v2:
app-v2.yaml : https://github.com/ContainerSolutions/k8s-deployment-strategies/blob/master/canary/native/app-v2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-v2
  labels:
    app: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
      version: v2.0.0
  template:
    metadata:
      labels:
        app: my-app
        version: v2.0.0
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9101"
    spec:
      containers:
      - name: my-app
        image: containersol/k8s-deployment-strategies
        ports:
        - name: http
          containerPort: 8080
        - name: probe
          containerPort: 8086
        env:
        - name: VERSION
          value: v2.0.0
        livenessProbe:
          httpGet:
            path: /live
            port: probe
          initialDelaySeconds: 5
          periodSeconds: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: probe
          periodSeconds: 5
Open watches to monitor changes to the pods and deployments:
$kubectl get --watch deployment
$kubectl get --watch pod
Upgrade:
$kubectl apply -f app-v2.yaml
deployment.apps/my-app-v2 created
Now we can see that one my-app-v2 replica has started:
$kubectl get --watch deployment
NAME READY UP-TO-DATE AVAILABLE AGE
my-app-v1 10/10 10 10 45m
my-app-v2 1/1 1 1 46s
$kubectl scale --replicas=9 deploy my-app-v1
deployment.apps/my-app-v1 scaled
$kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
my-app-v1 9/9 9 9 47m
my-app-v2 1/1 1 1 2m48s
Because my-app-v1 has been scaled down to 9 replicas, the Service's load balancing now sends about 10% (1/10) of the traffic to my-app-v2:
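This expected share follows directly from the replica counts, since the Service balances across all ready pods of both Deployments. A quick sketch of the arithmetic:

```shell
# 9 v1 replicas and 1 v2 replica behind the same Service
v1_replicas=9
v2_replicas=1
# v2's expected share of requests, in percent
share=$((100 * v2_replicas / (v1_replicas + v2_replicas)))
echo "v2 share: ${share}%"
```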
$service=10.98.198.198:80
$while sleep 0.1; do curl "$service"; done
Host: my-app-v1-c9b7f9985-mqnmr, Version: v1.0.0
Host: my-app-v1-c9b7f9985-bl4g7, Version: v1.0.0
Host: my-app-v1-c9b7f9985-rmng9, Version: v1.0.0
Host: my-app-v1-c9b7f9985-mz9hc, Version: v1.0.0
Host: my-app-v1-c9b7f9985-bl4g7, Version: v1.0.0
Host: my-app-v1-c9b7f9985-mz9hc, Version: v1.0.0
Host: my-app-v1-c9b7f9985-mm6fp, Version: v1.0.0
Host: my-app-v2-77fc8c9499-m6n9j, Version: v2.0.0
Host: my-app-v1-c9b7f9985-l69pf, Version: v1.0.0
Host: my-app-v1-c9b7f9985-mqnmr, Version: v1.0.0
Host: my-app-v1-c9b7f9985-mz9hc, Version: v1.0.0
Host: my-app-v1-c9b7f9985-62zb4, Version: v1.0.0
After verification passes, we gradually scale my-app-v2 up to 10 replicas and my-app-v1 down to 0:
$kubectl scale --replicas=10 deploy my-app-v2
$kubectl delete deploy my-app-v1
Checking the service again, my-app-v2 now receives all of the traffic:
$while sleep 0.1; do curl "$service"; done
Clean up once testing is done:
$kubectl delete all -l app=my-app
3. Implementing Blue-Green Deployment with Kubernetes
Main steps:
1. Deploy v1; at this point the Service routes all traffic to v1.
2. Deploy v2 and wait until the deployment completes.
3. Switch the Service's traffic from v1 to v2.
4. Tear down v1.
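The traffic switch in step 3 is a one-line patch of the Service's selector (the full kubectl command appears below). The patch body can be sanity-checked locally before touching the cluster; a minimal sketch, assuming python3 is available:

```shell
# The JSON merge patch that repoints the Service at the v2 pods
patch='{"spec":{"selector":{"version":"v2.0.0"}}}'
# Validate the JSON and extract the target version
echo "$patch" | python3 -c 'import json,sys; print(json.load(sys.stdin)["spec"]["selector"]["version"])'
```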
First, monitor the pods in real time with the following command:
$watch kubectl get pod
3.1 Setting Up the Demo Service
apiVersion: v1
kind: Service
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: http
  # Note here that we match both the app and the version
  selector:
    app: my-app
    version: v1.0.0
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-v1
  labels:
    app: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
      version: v1.0.0
  template:
    metadata:
      labels:
        app: my-app
        version: v1.0.0
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9101"
    spec:
      containers:
      - name: my-app
        image: containersol/k8s-deployment-strategies
        ports:
        - name: http
          containerPort: 8080
        - name: probe
          containerPort: 8086
        env:
        - name: VERSION
          value: v1.0.0
        livenessProbe:
          httpGet:
            path: /live
            port: probe
          initialDelaySeconds: 5
          periodSeconds: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: probe
          periodSeconds: 5
Deploy the Service and v1:
$kubectl apply -f app-v1.yaml
service/my-app created
deployment.apps/my-app-v1 created
$kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 14d
my-app NodePort 10.111.231.242 <none> 80:31540/TCP 18s
$while sleep 0.1;do curl 10.111.231.242:80;done
Host: my-app-v1-c9b7f9985-wqpf5, Version: v1.0.0
Host: my-app-v1-c9b7f9985-wqpf5, Version: v1.0.0
Host: my-app-v1-c9b7f9985-gnhr4, Version: v1.0.0
3.2 Deploying v2
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-v2
  labels:
    app: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
      version: v2.0.0
  template:
    metadata:
      labels:
        app: my-app
        version: v2.0.0
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9101"
    spec:
      containers:
      - name: my-app
        image: containersol/k8s-deployment-strategies
        ports:
        - name: http
          containerPort: 8080
        - name: probe
          containerPort: 8086
        env:
        - name: VERSION
          value: v2.0.0
        livenessProbe:
          httpGet:
            path: /live
            port: probe
          initialDelaySeconds: 5
          periodSeconds: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: probe
          periodSeconds: 5
Once the rollout finishes, two Deployments exist, one per version: 3 pods run v1 and another 3 run v2, while the Service still routes all traffic to v1.
Next, switch the Service's traffic over to v2:
$kubectl patch service my-app -p '{"spec":{"selector":{"version":"v2.0.0"}}}'
At this point, all of the Service's traffic goes to v2.
After verifying that everything works, delete v1:
$kubectl delete deploy my-app-v1
If a problem shows up, roll back:
$kubectl patch service my-app -p '{"spec":{"selector":{"version":"v1.0.0"}}}'
4. Implementing Rolling Deployment with Kubernetes
Main steps:
1. Deploy v1; at this point the Service routes all traffic to v1.
2. Define v2 with a rolling update strategy.
3. Deploy v2.
4.1 Setting Up the Demo Service
app-v1.yaml: https://github.com/ContainerSolutions/k8s-deployment-strategies/blob/master/ramped/app-v1.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: http
  selector:
    app: my-app
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  replicas: 10
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
        version: v1.0.0
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9101"
    spec:
      containers:
      - name: my-app
        image: containersol/k8s-deployment-strategies
        ports:
        - name: http
          containerPort: 8080
        - name: probe
          containerPort: 8086
        env:
        - name: VERSION
          value: v1.0.0
        livenessProbe:
          httpGet:
            path: /live
            port: probe
          initialDelaySeconds: 5
          periodSeconds: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: probe
          periodSeconds: 5
Deploy app-v1.yaml:
$kubectl apply -f app-v1.yaml
service/my-app created
deployment.apps/my-app created
$kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 14d
my-app NodePort 10.100.43.22 <none> 80:32725/TCP 45s
$curl 10.100.43.22:80
Host: my-app-c9b7f9985-ph2fz, Version: v1.0.0
4.2 Performing the Rolling Upgrade
Monitor the pod changes with the following command:
$watch kubectl get pod
app-v2.yaml : https://github.com/ContainerSolutions/k8s-deployment-strategies/blob/master/ramped/app-v2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  replicas: 10
  # Here we define the rolling update strategy
  # - maxSurge defines how many pods can be added at a time
  # - maxUnavailable defines how many pods can be unavailable
  #   during the rolling update
  #
  # Setting maxUnavailable to 0 makes sure we keep the appropriate
  # capacity during the rolling update.
  # You can also use percentage based values instead of integers.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  # The selector field tells the Deployment which pods to update to
  # the new version. Because the "version" label is unique to each
  # pod template, matchLabels keeps only the "app" label and leaves
  # "version" out, so pods of both versions are selected.
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
        version: v2.0.0
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9101"
    spec:
      containers:
      - name: my-app
        image: containersol/k8s-deployment-strategies
        ports:
        - name: http
          containerPort: 8080
        - name: probe
          containerPort: 8086
        env:
        - name: VERSION
          value: v2.0.0
        livenessProbe:
          httpGet:
            path: /live
            port: probe
          initialDelaySeconds: 5
          periodSeconds: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: probe
          # Initial delay is set to a high value to have a better
          # visibility of the ramped deployment
          initialDelaySeconds: 15
          periodSeconds: 5
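With replicas=10, maxSurge=1, and maxUnavailable=0, the pod-count bounds during the rollout follow directly from the strategy fields; a quick sketch of the arithmetic:

```shell
replicas=10
max_surge=1
max_unavailable=0
# Upper bound on total pods while the old and new ReplicaSets overlap
max_pods=$((replicas + max_surge))
# Lower bound on ready pods at any moment during the rollout
min_ready=$((replicas - max_unavailable))
echo "during rollout: at most $max_pods pods, at least $min_ready ready"
```

So the cluster briefly runs up to 11 pods, but never drops below 10 ready pods, which is why full capacity is preserved throughout the upgrade.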
Start the upgrade:
$kubectl apply -f app-v2.yaml
deployment.apps/my-app configured
Meanwhile, the pods can be seen being replaced step by step.
To verify traffic during the replacement, the rollout can be paused at any time and resumed later:
$kubectl rollout pause deploy my-app
deployment.apps/my-app paused
$kubectl rollout resume deploy my-app
deployment.apps/my-app resumed
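If the new version misbehaves mid-rollout, the Deployment's revision history also allows a full rollback; a sketch, assuming the my-app Deployment above:

```shell
# Watch the rollout until it completes or gets stuck
kubectl rollout status deploy my-app
# Revert to the previous revision if the new version misbehaves
kubectl rollout undo deploy my-app
```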