1. Envoy xDS Examples
Clone the example code as described in the Envoy examples blog post.
- eds-filesystem
cd servicemesh_in_practise/Dynamic-Configuration/eds-filesystem
# Edit the docker-compose configuration file
vim docker-compose.yaml
services:
envoy:
image: envoyproxy/envoy-alpine:v1.21-latest
environment: # add environment variable
- ENVOY_UID=0 # add environment variable
# Start
docker-compose up
# Verify
curl 172.31.11.2
Note: the docker-compose file in this example references the envoy image in three places, so the environment variable has to be added three times. Also, the port the Envoy listener binds to and the endpoint-discovery port may be the same; they do not conflict.
Modify the configuration file to simulate adding a node and let EDS discover it dynamically
docker exec -it edsfilesystem_envoy_1 sh
cd /etc/envoy/eds.conf.d
cat eds.yaml.v2 > eds.yaml
mv eds.yaml bak && mv bak eds.yaml # only needed when editing inside the container (not on the host); the rename forces a new inode so that Envoy's inode-based file watch fires and it discovers the newly added node
curl 172.31.11.2:9901/clusters # shows the newly added endpoint; before the mv the endpoint is already in the file but Envoy has not discovered it, after the mv it shows up (a sketch of such an EDS resource follows at the end of this example)
# Verify
curl 172.31.11.2
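For reference, a minimal sketch of what a filesystem-based EDS resource such as eds.yaml could contain; the cluster name, address, and port below are illustrative placeholders, not necessarily the repo's actual values:
resources:
- "@type": type.googleapis.com/envoy.config.endpoint.v3.ClusterLoadAssignment
  cluster_name: webcluster            # must match the EDS cluster/service name in the bootstrap
  endpoints:
  - lb_endpoints:
    - endpoint:
        address:
          socket_address:
            address: 172.31.11.11     # placeholder endpoint address
            port_value: 8080          # placeholder endpoint port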
- lds-cds-filesystem
cd servicemesh_in_practise/Dynamic-Configuration/lds-cds-filesystem
# Edit the docker-compose configuration file
vim docker-compose.yaml
services:
envoy:
image: envoyproxy/envoy-alpine:v1.21-latest
environment: # add environment variable
- ENVOY_UID=0 # add environment variable
...
- webserver01-sidecar
- webserver # add an alias; used later to verify the domain-name change in cds.yaml
...
- webserver02-sidecar
- webserver # add an alias; used later to verify the domain-name change in cds.yaml
# Start
docker-compose up
# Verify
curl 172.31.12.2
Add the environment variable in all three places
Modify the configuration file to verify the listener port change
docker exec -it ldscdsfilesystem_envoy_1 sh
cd /etc/envoy/conf.d
# change the listener port from 80 to 10080
vi lds.yaml
resources:
- "@type": type.googleapis.com/envoy.config.listener.v3.Listener
name: listener_http
address:
socket_address: { address: 0.0.0.0, port_value: 10080 }
mv lds.yaml bak && mv bak lds.yaml
curl 172.31.12.2:9901/listeners # view the listening ports
# Verify
curl 172.31.12.2:10080
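For context, in this filesystem-based example the bootstrap's dynamic_resources section points LDS and CDS at files on disk, which is why editing the files (plus the inode-changing mv) is enough to reconfigure Envoy. A minimal sketch, assuming the /etc/envoy/conf.d paths used above (the repo's actual bootstrap may differ):
dynamic_resources:
  lds_config:
    resource_api_version: V3
    path: /etc/envoy/conf.d/lds.yaml   # Envoy watches this file for changes
  cds_config:
    resource_api_version: V3
    path: /etc/envoy/conf.d/cds.yaml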
Modify the configuration file to verify node removal
vi cds.yaml
resources:
- "@type": type.googleapis.com/envoy.config.cluster.v3.Cluster
name: webcluster
connect_timeout: 1s
type: STRICT_DNS
load_assignment:
cluster_name: webcluster
endpoints:
- lb_endpoints:
- endpoint:
address:
socket_address:
address: webserver01
port_value: 80
mv cds.yaml bak && mv bak cds.yaml
curl 172.31.12.2:9901/clusters # check the cluster endpoint information
# Verify
curl 172.31.12.2:10080
Modify the DNS name to verify name-based discovery (change the endpoint address to the webserver alias added in docker-compose; a sketch follows at the end of this example)
vi cds.yaml
mv cds.yaml bak && mv bak cds.yaml
curl 172.31.12.2:9901/clusters # check the cluster endpoint information
# Verify
curl 172.31.12.2:10080
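A hedged sketch of what the endpoint in cds.yaml might look like after switching to the alias: with type STRICT_DNS, the name webserver resolves to both sidecar containers, so both appear as endpoints (only the address line changes relative to the file shown earlier):
load_assignment:
  cluster_name: webcluster
  endpoints:
  - lb_endpoints:
    - endpoint:
        address:
          socket_address:
            address: webserver   # alias added in docker-compose; resolves to both sidecars
            port_value: 80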
- lds-cds-grpc
cd servicemesh_in_practise/Dynamic-Configuration/lds-cds-grpc
# Edit the docker-compose configuration file
vim docker-compose.yaml
services:
envoy:
image: envoyproxy/envoy-alpine:v1.21-latest
environment: # add environment variable
- ENVOY_UID=0 # add environment variable
# Start
docker-compose up
curl 172.31.15.2:9901/clusters # view cluster information
curl 172.31.15.2:9901/listeners # view listener information
# Verify
curl 172.31.15.2
Modify the configuration file to verify the listener port change and the newly added endpoints
docker exec -it ldscdsgrpc_xdsserver_1 sh
cd /etc/envoy-xds-server/config
cat config.yaml-v2 > config.yaml # the configuration takes effect in real time, so avoid editing config.yaml directly; vi's auto-save would make the xDS server push a half-edited configuration
curl 172.31.15.2:9901/clusters # view cluster information
curl 172.31.15.2:9901/listeners # view listener information
yum install jq -y # install the jq command
# Use jq to filter the dynamically discovered clusters out of the config_dump output
curl -s 172.31.15.2:9901/config_dump | jq '.configs[1].dynamic_active_clusters'
[
{
"version_info": "411",
"cluster": {
"@type": "type.googleapis.com/envoy.config.cluster.v3.Cluster",
"name": "webcluster", # 對(duì)應(yīng)集群的名字
"type": "EDS",
"eds_cluster_config": {
"eds_config": {
"api_config_source": {
"api_type": "GRPC", # api協(xié)議
"grpc_services": [
{
"envoy_grpc": {
"cluster_name": "xds_cluster" # 發(fā)現(xiàn)webcluster對(duì)應(yīng)的上游動(dòng)態(tài)服務(wù)(xDS server)集群名字
}
}
],
"set_node_on_first_message_only": true,
"transport_api_version": "V3" # api版本
},
"resource_api_version": "V3"
}
},
"connect_timeout": "5s",
"dns_lookup_family": "V4_ONLY"
},
"last_updated": "20xx-04-25Txx:13:45.024Z"
}
]
# View the listener details
curl -s 172.31.15.2:9901/config_dump?resource=dynamic_listeners | jq '.configs[0].active_state.listener'
{
"@type": "type.googleapis.com/envoy.config.listener.v3.Listener",
"name": "listener_http",
"address": {
"socket_address": {
"address": "0.0.0.0",
"port_value": 10080
}
},
"filter_chains": [
{
"filters": [
{
"name": "envoy.filters.network.http_connection_manager",
"typed_config": {
"@type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager",
"stat_prefix": "http",
"rds": {
"config_source": {
"api_config_source": {
"api_type": "GRPC",
"grpc_services": [
{
"envoy_grpc": {
"cluster_name": "xds_cluster"
}
}
],
"set_node_on_first_message_only": true,
"transport_api_version": "V3"
},
"resource_api_version": "V3"
},
"route_config_name": "local_route"
},
"http_filters": [
{
"name": "envoy.filters.http.router"
}
]
}
}
]
}
]
}
# Verify
curl 172.31.15.2:10080
Modify the configuration file to verify a listener name change
vi config.yaml
name: myconfig
spec:
listeners:
- name: listener_http1 # change the name
address: 0.0.0.0
port: 10081 # change the port
curl 172.31.15.2:9901/listeners # view listener information
# Verify
curl 172.31.15.2:10080
curl 172.31.15.2:10081
Changing the listener name creates a new listener; the original listener still exists and remains reachable, as the check below shows.
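As a quick check, the names of all dynamic listeners can be pulled from the admin config_dump (the jq path here is an assumption about the dump's layout); after the change it should list both listener_http and listener_http1:
curl -s 172.31.15.2:9901/config_dump?resource=dynamic_listeners | jq '.configs[].name'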
- Health checking (health-check)
cd servicemesh_in_practise/Cluster-Manager/health-check
# Edit the docker-compose configuration file
vim docker-compose.yaml
services:
envoy:
image: envoyproxy/envoy-alpine:v1.21-latest
environment: # add environment variable
- ENVOY_UID=0 # add environment variable
# Start
docker-compose up
curl 172.31.18.2:9901/clusters # view cluster information
curl 172.31.18.2:9901/listeners # view listener information
# Verify
curl 172.31.18.2
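For reference, active HTTP health checking is configured with a health_checks block on the cluster; a minimal sketch assuming the /livez path used in this example (intervals and thresholds are illustrative, not necessarily the repo's values):
health_checks:
- timeout: 1s
  interval: 2s
  unhealthy_threshold: 2
  healthy_threshold: 2
  http_health_check:
    path: /livez
    expected_statuses:
    - start: 200
      end: 399        # statuses outside 200-398 (e.g. 506) mark the host unhealthy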
Set the livez check value to FAIL and verify
curl -XPOST -d "livez=FAIL" 172.31.18.11/livez # change the livez value on the web service's sidecar
curl -I 172.31.18.11/livez # verify that the response code is now 506
HTTP/1.1 506 Variant Also Negotiates
content-type: text/html; charset=utf-8
content-length: 4
server: envoy
date: Tue, 26 Apr 20xx xx:54:29 GMT
x-envoy-upstream-service-time: 1
# Verify
curl 172.31.18.2
Set the livez check value back to OK and verify
curl -XPOST -d "livez=OK" 172.31.18.11/livez
curl -I 172.31.18.11/livez
HTTP/1.1 200 OK
content-type: text/html; charset=utf-8
content-length: 2
server: envoy
date: Tue, 26 Apr 20xx xx:57:38 GMT
x-envoy-upstream-service-time: 1
# Verify
curl 172.31.18.2
- Outlier detection (outlier-detection)
cd servicemesh_in_practise/Cluster-Manager/outlier-detection
# Edit the docker-compose configuration file
vim docker-compose.yaml
services:
envoy:
image: envoyproxy/envoy-alpine:v1.21-latest
environment: # add environment variable
- ENVOY_UID=0 # add environment variable
# Start
docker-compose up
# Verify
curl 172.31.20.2
Simulate a failure: set livez to FAIL and verify
curl -XPOST -d 'livez=FAIL' 172.31.20.12/livez
curl -I 172.31.20.12/livez
# Verify
while true; do curl 172.31.20.2/livez; sleep 0.5; done
Clear the failure and verify recovery
curl -XPOST -d 'livez=OK' 172.31.20.12/livez
curl -I 172.31.20.12/livez
# Verify
while true; do curl 172.31.20.2/livez; sleep 0.5; done
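For reference, passive health checking is configured with an outlier_detection block on the cluster; a minimal sketch with illustrative values (the repo's thresholds may differ):
outlier_detection:
  consecutive_5xx: 3           # eject a host after 3 consecutive 5xx responses
  interval: 10s                # how often the ejection analysis sweep runs
  base_ejection_time: 30s      # how long an ejected host stays out
  max_ejection_percent: 50     # never eject more than half of the hosts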
- least-requests
cd servicemesh_in_practise/Cluster-Manager/least-requests
# Edit the docker-compose configuration file
vim docker-compose.yaml
services:
envoy:
image: envoyproxy/envoy-alpine:v1.21-latest
environment: # add environment variable
- ENVOY_UID=0 # add environment variable
# Start
docker-compose up
# Verify
./send-request.sh 172.31.22.2 # send test requests with the provided script
Send 300 requests, and print the result. This will take a while.
Weight of all endpoints:
Red:Blue:Green = 1:3:5
Response from:
Red:Blue:Green = 35:92:173
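The 1:3:5 spread comes from combining the LEAST_REQUEST policy with per-endpoint weights; a hedged sketch of how such a cluster could be declared (the host names and port are placeholders):
lb_policy: LEAST_REQUEST
load_assignment:
  cluster_name: webcluster
  endpoints:
  - lb_endpoints:
    - endpoint:
        address:
          socket_address: { address: red, port_value: 80 }
      load_balancing_weight: 1
    - endpoint:
        address:
          socket_address: { address: blue, port_value: 80 }
      load_balancing_weight: 3
    - endpoint:
        address:
          socket_address: { address: green, port_value: 80 }
      load_balancing_weight: 5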
- ring-hash
cd servicemesh_in_practise/Cluster-Manager/ring-hash
# Edit the docker-compose configuration file
vim docker-compose.yaml
services:
envoy:
image: envoyproxy/envoy-alpine:v1.21-latest
environment: # add environment variable
- ENVOY_UID=0 # add environment variable
# Start
docker-compose up
# Verify
while true; do index=$[$RANDOM%3]; curl -H "User-Agent: Browser_${index}" 172.31.25.2/user-agent && curl -H "User-Agent: Browser_${index}" 172.31.25.2/hostname && echo; sleep .1; done # loop test with a random browser ID
User-Agent: Browser_0
ServerName: green
User-Agent: Browser_0
ServerName: green
User-Agent: Browser_2
ServerName: red
User-Agent: Browser_0
ServerName: green
User-Agent: Browser_2
ServerName: red
User-Agent: Browser_0
ServerName: green
User-Agent: Browser_2
ServerName: red
User-Agent: Browser_1
ServerName: blue
Requests carrying the same browser ID consistently land on the same host.
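Consistent hashing here needs two pieces: the cluster uses RING_HASH as its lb_policy, and the route defines a hash_policy on the User-Agent header, so the same browser ID always hashes to the same backend. A minimal sketch of both pieces (values are illustrative, not necessarily the repo's):
# cluster side
lb_policy: RING_HASH
ring_hash_lb_config:
  minimum_ring_size: 512
# route side
route:
  cluster: webcluster
  hash_policy:
  - header:
      header_name: User-Agent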
- priority-levels
cd servicemesh_in_practise/Cluster-Manager/priority-levels
# Edit the docker-compose configuration file
vim docker-compose.yaml
services:
envoy:
image: envoyproxy/envoy-alpine:v1.21-latest
environment: # add environment variable
- ENVOY_UID=0 # add environment variable
# Start
docker-compose up
# Verify
while true; do curl 172.31.29.2; sleep .5;done # all five endpoints are healthy, but traffic goes only to the three endpoints configured with the higher priority
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: green, ServerIP: 172.31.29.13!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: red, ServerIP: 172.31.29.11!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: blue, ServerIP: 172.31.29.12!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: green, ServerIP: 172.31.29.13!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: green, ServerIP: 172.31.29.13!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: red, ServerIP: 172.31.29.11!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: blue, ServerIP: 172.31.29.12!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: red, ServerIP: 172.31.29.11!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: green, ServerIP: 172.31.29.13!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: blue, ServerIP: 172.31.29.12!
Simulate a failure and verify
curl -XPOST -d 'livez=FAIL' 172.31.29.13/livez
while true; do curl 172.31.29.2; sleep .5;done # with the default overprovisioning factor of 1.4, priority-0 health becomes 2/3 * 1.4 ≈ 93.3%, so roughly 6.7% of the traffic spills over to the two lower-priority endpoints
Priority-based scheduling has not been made to work with the v3 API in this lab yet; a sketch of the intended configuration follows.
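For reference, priority levels and the overprovisioning factor are declared inside load_assignment; a hedged sketch of the intended layout (addresses below are placeholders, and only one endpoint per priority is shown):
load_assignment:
  cluster_name: webcluster
  policy:
    overprovisioning_factor: 140   # i.e. 1.4, the default value
  endpoints:
  - priority: 0                    # preferred endpoints (red, blue, green)
    lb_endpoints:
    - endpoint:
        address:
          socket_address: { address: 172.31.29.11, port_value: 80 }
  - priority: 1                    # fallback endpoints, used as priority 0 degrades
    lb_endpoints:
    - endpoint:
        address:
          socket_address: { address: 172.31.29.14, port_value: 80 }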
- lb-subsets
Environment overview
endpoint | stage | version | type | xlarge |
---|---|---|---|---|
e1 | prod | 1.0 | std | true |
e2 | prod | 1.0 | std | |
e3 | prod | 1.1 | std | |
e4 | prod | 1.1 | std | |
e5 | prod | 1.0 | bigmem | |
e6 | prod | 1.1 | bigmem | |
e7 | dev | 1.2-pre | std | |
Label-based subset classes
keys: [stage,type], subsets as follows
[prod,std] - e1,e2,e3,e4
[prod,bigmem] - e5,e6
[dev,std] - e7
[dev,bigmem] - none
keys: [stage,version]
[prod,1.0] - e1,e2,e5
[prod,1.1] - e3,e4,e6
[prod,1.2-pre] - none
[dev,1.0] - none
[dev,1.1] - none
[dev,1.2-pre] - e7
keys: [version]
[1.0] - e1,e2,e5
[1.1] - e3,e4,e6
[1.2-pre] - e7
keys: [xlarge,version]
[true,1.0] - e1
[true,1.1] - none
[true,1.2-pre] - none
The 10 non-empty subsets above are generated, plus one default subset (an lb_subset_config sketch follows):
stage=prod,type=std,version=1.0 - e1,e2
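A hedged sketch of the lb_subset_config that would produce the subsets enumerated above, with stage=prod,type=std,version=1.0 as the default subset (the repo's actual file may differ in detail):
lb_subset_config:
  fallback_policy: DEFAULT_SUBSET
  default_subset:
    stage: "prod"
    type: "std"
    version: "1.0"
  subset_selectors:
  - keys: ["stage", "type"]
  - keys: ["stage", "version"]
  - keys: ["version"]
  - keys: ["xlarge", "version"]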
cd servicemesh_in_practise/Cluster-Manager/lb-subsets
# Edit the docker-compose configuration file
vim docker-compose.yaml
services:
envoy:
image: envoyproxy/envoy-alpine:v1.21-latest
environment: # add environment variable
- ENVOY_UID=0 # add environment variable
# Replace the routes section with the following
routes:
- match:
prefix: "/"
headers:
- name: x-custom-version
exact_match: pre-release
- name: x-environment-state
exact_match: dev
route:
cluster: webcluster1
metadata_match:
filter_metadata:
envoy.lb:
version: "1.2-pre"
stage: "dev"
- match:
prefix: "/"
headers:
- name: x-custom-version
exact_match: v1.0
- name: x-environment-state
exact_match: prod
route:
cluster: webcluster1
metadata_match:
filter_metadata:
envoy.lb:
version: "1.0"
stage: "prod"
- match:
prefix: "/"
headers:
- name: x-custom-version
exact_match: v1.1
- name: x-environment-state
exact_match: prod
route:
cluster: webcluster1
metadata_match:
filter_metadata:
envoy.lb:
version: "1.1"
stage: "prod"
- match:
prefix: "/"
headers:
- name: x-hardware-test
exact_match: memory
- name: x-environment-state
exact_match: prod
route:
cluster: webcluster1
metadata_match:
filter_metadata:
envoy.lb:
type: "bigmem"
stage: "prod"
- match:
prefix: "/"
headers:
- name: x-hardware-test
exact_match: std
- name: x-environment-state
exact_match: prod
route:
cluster: webcluster1
metadata_match:
filter_metadata:
envoy.lb:
type: "std"
stage: "prod"
- match:
prefix: "/"
headers:
- name: x-hardware-test
exact_match: std
- name: x-environment-state
exact_match: dev
route:
cluster: webcluster1
metadata_match:
filter_metadata:
envoy.lb:
type: "std"
stage: "dev"
- match:
prefix: "/"
headers:
- name: x-custom-version
exact_match: v1.0
route:
cluster: webcluster1
metadata_match:
filter_metadata:
envoy.lb:
version: "1.0"
- match:
prefix: "/"
headers:
- name: x-custom-version
exact_match: v1.1
route:
cluster: webcluster1
metadata_match:
filter_metadata:
envoy.lb:
version: "1.1"
- match:
prefix: "/"
headers:
- name: x-custom-version
exact_match: pre-release
route:
cluster: webcluster1
metadata_match:
filter_metadata:
envoy.lb:
version: "1.2-pre"
- match:
prefix: "/"
headers:
- name: x-custom-xlarge
exact_match: isTrue
- name: x-custom-version
exact_match: pre-release
route:
cluster: webcluster1
metadata_match:
filter_metadata:
envoy.lb:
version: "1.0"
xlarge: "true"
- match:
prefix: "/"
route:
weighted_clusters:
clusters:
- name: webcluster1
weight: 90
metadata_match:
filter_metadata:
envoy.lb:
version: "1.0"
- name: webcluster1
weight: 10
metadata_match:
filter_metadata:
envoy.lb:
version: "1.1"
metadata_match:
filter_metadata:
envoy.lb:
stage: "prod"
http_filters:
- name: envoy.filters.http.router
# Start
docker-compose up
# Verify
./test.sh 172.31.33.2 # send 200 requests; per the configured 90/10 weights the v1.0:v1.1 split is roughly 9:1
Requests: v1.0:v1.1 = 184:16
curl -H "x-custom-version: v1.0" -H "x-environment-state: prod" 172.31.33.2/hostname # 調(diào)用1.0,prod子集
ServerName: e1
ServerName: e2
ServerName: e5
curl -H "x-custom-version: v1.1" -H "x-environment-state: prod" 172.31.33.2/hostname # 調(diào)用1.1,prod子集
curl -H "x-custom-version: pre-release" -H "x-environment-state: dev" 172.31.33.2/hostname # 調(diào)用1.2-pre,dev子集
ServerName: e7
curl -H "x-environment-state: prod" -H "x-hardware-test: memory" 172.31.33.2/hostname # 調(diào)用prod,bigmem子集
ServerName: e5
ServerName: e6
curl -H "x-environment-state: prod" -H "x-hardware-test: std" 172.31.33.2/hostname # 調(diào)用prod,std子集
ServerName: e1
ServerName: e2
ServerName: e3
ServerName: e4
curl -H "x-environment-state: dev" -H "x-hardware-test: std" 172.31.33.2/hostname # 調(diào)用dev,std子集
ServerName: e7
curl -H "x-custom-version: v1.0" 172.31.33.2/hostname # 調(diào)用1.0子集
ServerName: e1
ServerName: e2
ServerName: e5
curl -H "x-custom-version: v1.1" 172.31.33.2/hostname # 調(diào)用1.1子集
ServerName: e3
ServerName: e4
ServerName: e6
curl -H "x-custom-version: pre-release" 172.31.33.2/hostname # 調(diào)用1.2-pre子集
ServerName: e7
curl -H "x-custom-version: pre-release" -H "x-custom-xlarge: isTrue" 172.31.33.2/hostname # 調(diào)用true,1.0子集
ServerName: e1
curl 172.31.33.2/hostname # selects the default subset
ServerName: e1
ServerName: e2
- circuit-breaker
cd servicemesh_in_practise/Cluster-Manager/circuit-breaker
# Edit the docker-compose configuration file
vim docker-compose.yaml
services:
envoy:
image: envoyproxy/envoy-alpine:v1.21-latest
environment: # add environment variable
- ENVOY_UID=0 # add environment variable
# Start
docker-compose up
# Verify
./send-requests.sh http://172.31.35.2 300 # send 300 requests to the first cluster; once the maximum connection count is exceeded, 503 responses start to appear
./send-requests.sh http://172.31.35.2/livez 300 # send 300 requests to the second cluster; once the maximum connection count is exceeded, 503 responses start to appear
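For reference, the connection and request limits behind those 503s are defined in the cluster's circuit_breakers block; a minimal sketch with illustrative thresholds (not necessarily the repo's values):
circuit_breakers:
  thresholds:
  - priority: DEFAULT
    max_connections: 1           # upstream connections allowed
    max_pending_requests: 1      # requests allowed to wait in the queue
    max_requests: 1              # concurrent requests (HTTP/2)
    max_retries: 3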
Simulate a failure and verify
curl -XPOST -d 'livez=FAIL' 172.31.35.14/livez
./send-requests.sh http://172.31.35.2/livez 300 # these requests now trigger the outlier_detection settings; the logs show that after three consecutive 506 errors the faulty host is ejected from the cluster, and a larger share of the requests return 503