27. Kubernetes (k8s) Notes: Ingress (Part 2): Envoy

Preface:
As the front door of Internet-facing systems, traffic ingress proxies come in many flavors: from the veteran proxies HAProxy and Nginx, to microservice API gateways such as Kong and Zuul, to the containerized Ingress specification and its implementations. These options differ widely in features, performance, extensibility, and target scenarios. As the cloud-native wave arrived, Envoy, a CNCF-graduated data-plane component, became known to a much wider audience. So: can this outstanding "graduate" become the standard traffic-ingress component of the cloud-native era?

Background: the many choices and scenarios for traffic ingress

In the Internet world, almost every externally exposed system needs a network proxy. The early arrivals HAProxy and Nginx are still popular today; in the microservice era, API gateways, with richer features and stronger management capabilities, became a must-have at the traffic entrance; and in the container era, Kubernetes Ingress serves as the entry point of the container cluster and is the standard for microservice ingress proxying. The core capabilities of these three typical layer-7 proxies compare as follows:

[Comparison table of HAProxy/Nginx, API gateways, and Ingress omitted; it appeared as an image in the original.]
  • From the capability comparison above:
  1. HAProxy and Nginx provide the basic routing features, and their performance and stability have been proven over many years. Nginx's downstream community OpenResty offers a mature Lua extension mechanism, which lets Nginx be applied and extended far more broadly; the API gateway Kong, for example, is built on Nginx plus OpenResty.
  2. API gateways, as the basic component for exposing microservice APIs, provide fairly rich features and dynamic management capabilities.
  3. Ingress is the standard specification for Kubernetes ingress traffic, and its concrete capabilities depend on the implementation. An Nginx-based Ingress implementation behaves much like Nginx itself, while Istio Ingress Gateway is built on Envoy plus the Istio control plane and is functionally richer (Istio Ingress Gateway is essentially more capable than a typical Ingress implementation, but it does not follow the Ingress specification).
  • So the question arises: for the same traffic entrance, under the cloud-native trend, can we find one comprehensive technical solution and standardize the ingress layer?

  • Envoy core capabilities
    Envoy is an open source edge and service proxy, designed for cloud-native applications (envoyproxy.io). It is the third project to graduate from the Cloud Native Computing Foundation (CNCF) and currently has 13k+ stars on GitHub.

  • Envoy's main features:

  1. High-performance L4/L7 proxy written in modern C++.
  2. Transparent proxying.
  3. Traffic management: routing, traffic mirroring, traffic splitting, and more.
  4. Resilience features: health checking, circuit breaking, rate limiting, timeouts, retries, and fault injection.
  5. Multi-protocol support: proxying and management of HTTP/1.1, HTTP/2, gRPC, WebSocket, and other protocols.
  6. Load balancing: weighted round robin, weighted least request, ring hash, Maglev, and random algorithms, plus zone-aware routing and failover.
  7. Dynamic configuration APIs: robust interfaces for controlling proxy behavior, enabling hot updates of Envoy's configuration.
  8. Designed for observability: high observability of layer-7 traffic, with native support for distributed tracing.
  9. Hot restart: enables seamless Envoy upgrades.
  10. Custom extensions: Lua scripting and the WebAssembly multi-language sandbox.
  • Overall, Envoy is a "double-A student" that excels in both features and performance. For real-world ingress proxy scenarios, Envoy has inherent advantages and can serve as the standard technical solution for cloud-native traffic ingress:
  1. Richer features than HAProxy and Nginx
    Whereas HAProxy and Nginx provide the basic features needed for traffic proxying (with most advanced features added via extension plugins), Envoy itself already implements, in C++, a large share of the advanced features a proxy needs: advanced load balancing, circuit breaking, rate limiting, fault injection, traffic mirroring, observability, and so on. This richer feature set makes Envoy suitable for many scenarios out of the box, and the native C++ implementations hold a clearer performance edge than plugin-based extensions.

  2. Performance on par with Nginx, far above traditional API gateways
    In terms of performance, Envoy is comparable to Nginx when proxying common protocols such as HTTP, and holds a clear advantage over traditional API gateways.
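To make the feature list above concrete, here is a minimal static Envoy v3 bootstrap sketch: one HTTP listener that routes every request to one upstream cluster. This is illustrative only; the listener port, cluster name, and backend address are assumptions, not taken from the original article.

# envoy-minimal.yaml: minimal static bootstrap (illustrative; names and ports are assumptions)
static_resources:
  listeners:
  - name: listener_http
    address:
      socket_address: { address: 0.0.0.0, port_value: 8080 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: backend
              domains: ["*"]                      # match every Host header
              routes:
              - match: { prefix: "/" }
                route: { cluster: service_backend }
          http_filters:
          - name: envoy.filters.http.router
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
  clusters:
  - name: service_backend
    type: STRICT_DNS
    connect_timeout: 1s
    lb_policy: LEAST_REQUEST                      # one of the balancing algorithms listed above
    load_assignment:
      cluster_name: service_backend
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: demoapp.default.svc.cluster.local, port_value: 80 }

Started with `envoy -c envoy-minimal.yaml`, this proxies http://localhost:8080/ to the assumed backend; Contour, used below, generates equivalent configuration dynamically via Envoy's xDS APIs.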

Service Mesh has now entered its second generation, represented by Istio, which consists of a Data Plane (the proxy) and a Control Plane. Istio is a productized implementation of the Service Mesh idea and helps decouple microservices into layers. (Architecture diagram from the original not reproduced.)


HTTPProxy resource specification
apiVersion: projectcontour.io/v1  # API group and version
kind: HTTPProxy  # name of the CRD resource
metadata:
  name <string>
  namespace <string>  # HTTPProxy is a namespaced resource
spec:
  virtualhost <VirtualHost>  # defines an FQDN-style virtual host, similar to host in Ingress
    fqdn <string>  # FQDN-style name of the virtual host
    tls <TLS>  # enables HTTPS; by default, HTTP requests are redirected to HTTPS with a 301
      secretName <string>  # name of the Secret storing the certificate and private key
      minimumProtocolVersion <string>  # minimum supported SSL/TLS protocol version
      passthrough <boolean>  # whether passthrough mode is enabled; if so, the controller does not terminate the HTTPS session
      clientValidation <DownstreamValidation>  # validates client certificates; optional
        caSecret <string>  # CA certificate used to validate client certificates
  routes <[]Route>  # routing rules
    conditions <[]Condition>  # traffic-matching conditions; PATH-prefix and header matching are supported
      prefix <string>  # PATH prefix match, similar to the path field in Ingress
    permitInsecure <boolean>  # whether to disable the default HTTP-to-HTTPS redirect
    services <[]Service>  # backend services, translated into Envoy Cluster definitions
      name <string>  # service name
      port <integer>  # service port
      protocol <string>  # protocol used to reach the backend service; one of tls, h2, or h2c
      validation <UpstreamValidation>  # whether to validate the server certificate
        caSecret <string>
        subjectName <string>  # required Subject value in the certificate
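As a sketch of the TLS fields above, the following manifest enables HTTPS and additionally requires client certificates (mutual TLS). The Secret names ik8s-tls and ca-cert and the Service demoapp are hypothetical; per Contour's documented behavior, requests without a valid client certificate would be rejected.

apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: httpproxy-mtls-sketch
  namespace: default
spec:
  virtualhost:
    fqdn: www.ik8s.io
    tls:
      secretName: ik8s-tls          # server certificate and key (assumed to exist)
      minimumProtocolVersion: "tlsv1.1"
      clientValidation:
        caSecret: ca-cert           # CA bundle used to verify client certificates (assumed)
  routes:
  - conditions:
    - prefix: /
    services:
    - name: demoapp                 # hypothetical backend Service
      port: 80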
HTTPProxy advanced routing resource specification
spec:
  routes <[]Route>  # routing rules
    conditions <[]Condition>
      prefix <string>
      header <HeaderCondition>  # request header match
        name <string>  # header name
        present <boolean>  # true means the condition is met if the header exists; false is meaningless
        contains <string>  # substring the header value must contain
        notcontains <string>  # substring the header value must not contain
        exact <string>  # exact match of the header value
        notexact <string>  # exact negative match, i.e. the value must not equal the given string
    services <[]Service>  # backend services, translated into Envoy Clusters
      name <string>
      port <integer>
      protocol <string>
      weight <int64>  # service weight, used for traffic splitting
      mirror <boolean>  # traffic mirroring
      requestHeadersPolicy <HeadersPolicy>  # header policy for requests sent upstream
        set <[]HeaderValue>  # add a header or set the value of an existing one
          name <string>
          value <string>
        remove <[]string>  # remove the named headers
      responseHeadersPolicy <HeadersPolicy>  # header policy for responses sent downstream
    loadBalancerPolicy <LoadBalancerPolicy>  # load-balancing policy to use
      strategy <string>  # concrete strategy; Random, RoundRobin, Cookie,
                         # and WeightedLeastRequest are supported; default is RoundRobin
    requestHeadersPolicy <HeadersPolicy>  # route-level request header policy
    responseHeadersPolicy <HeadersPolicy>  # route-level response header policy
    pathRewritePolicy <PathRewritePolicy>  # URL rewriting
      replacePrefix <[]ReplacePrefix>
        prefix <string>  # PATH prefix to match
        replacement <string>  # target path to substitute
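The examples later in this article exercise header matching, weights, and mirroring, but not pathRewritePolicy, so here is a small sketch. It strips an assumed /api prefix before forwarding to a hypothetical Service named demoapp.

apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: httpproxy-rewrite-sketch
  namespace: default
spec:
  virtualhost:
    fqdn: www.ik8s.io
  routes:
  - conditions:
    - prefix: /api            # match requests under /api
    services:
    - name: demoapp           # hypothetical backend Service
      port: 80
    pathRewritePolicy:
      replacePrefix:
      - prefix: /api          # rewrite /api/... to /...
        replacement: /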

HTTPProxy service resilience and health check resource specification
spec:
  routes <[]Route>
    timeoutPolicy <TimeoutPolicy>  # timeout policy
      response <string>  # how long to wait for the server's response
      idle <string>  # how long Envoy keeps the client connection idle after a timeout
    retryPolicy <RetryPolicy>  # retry policy
      count <int64>  # number of retries, default 1
      perTryTimeout <string>  # timeout for each retry attempt
    healthCheckPolicy <HTTPHealthCheckPolicy>  # active health checking
      path <string>  # path (HTTP endpoint) probed by the check
      host <string>  # virtual host to request during the check
      intervalSeconds <int64>  # interval between checks, i.e. check frequency; default 5 seconds
      timeoutSeconds <int64>  # probe timeout, default 2 seconds
      unhealthyThresholdCount <int64>  # threshold for marking an endpoint unhealthy, i.e. consecutive failures
      healthyThresholdCount <int64>  # threshold for marking an endpoint healthy again
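Example 5 below covers timeoutPolicy and retryPolicy but not healthCheckPolicy. As a sketch, active health checks could look like this; the Service name demoapp and the /livez probe path are assumptions, not from the original.

apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: httpproxy-healthcheck-sketch
  namespace: default
spec:
  virtualhost:
    fqdn: www.ik8s.io
  routes:
  - conditions:
    - prefix: /
    healthCheckPolicy:
      path: /livez                  # endpoint probed by Envoy (assumed to exist on the backend)
      intervalSeconds: 5            # probe every 5 seconds
      timeoutSeconds: 2             # each probe times out after 2 seconds
      unhealthyThresholdCount: 3    # 3 consecutive failures mark the endpoint unhealthy
      healthyThresholdCount: 5      # 5 consecutive successes mark it healthy again
    services:
    - name: demoapp                 # hypothetical backend Service
      port: 80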
Deploying Envoy (via the Contour quickstart)
$ kubectl apply -f https://projectcontour.io/quickstart/contour.yaml

[root@k8s-master Ingress]# kubectl get ns
NAME                   STATUS   AGE
default                Active   14d
dev                    Active   13d
ingress-nginx          Active   29h
kube-node-lease        Active   14d
kube-public            Active   14d
kube-system            Active   14d
kubernetes-dashboard   Active   21h
longhorn-system        Active   21h
projectcontour         Active   39m   # newly created namespace
test                   Active   12d
[root@k8s-master Ingress]# kubectl get pod -n projectcontour
NAME                            READY   STATUS      RESTARTS   AGE
contour-5449c4c94d-mqp9b        1/1     Running     3          37m
contour-5449c4c94d-xgvqm        1/1     Running     5          37m
contour-certgen-v1.18.1-82k8k   0/1     Completed   0          39m
envoy-n2bs9                     2/2     Running     0          37m
envoy-q777l                     2/2     Running     0          37m
envoy-slt49                     1/2     Running     2          37m

[root@k8s-master Ingress]# kubectl get svc -n projectcontour
NAME      TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
contour   ClusterIP      10.100.120.94   <none>        8001/TCP                     39m
envoy     LoadBalancer   10.97.48.41     <pending>     80:32668/TCP,443:32278/TCP   39m  # <pending> because this cluster is not on an IaaS platform: the LoadBalancer request hangs waiting for resources, which does not affect access via the NodePort

[root@k8s-master Ingress]# kubectl api-resources
NAME                              SHORTNAMES                           APIGROUP                       NAMESPACED   KIND
...
extensionservices                 extensionservice,extensionservices   projectcontour.io              true         ExtensionService
httpproxies                       proxy,proxies                        projectcontour.io              true         HTTPProxy
tlscertificatedelegations         tlscerts                             projectcontour.io              true         TLSCertificateDelegation
[root@k8s-master Ingress]# cat httpproxy-demo.yaml 
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: httpproxy-demo
  namespace: default
spec:
  virtualhost:
    fqdn: www.ik8s.io  # virtual host
    tls:
      secretName: ik8s-tls
      minimumProtocolVersion: "tlsv1.1"  # lowest TLS protocol version accepted
  routes:
  - conditions:
    - prefix: /
    services:
    - name: demoapp-deploy   # backend Service
      port: 80
    permitInsecure: true  # whether plaintext HTTP is redirected to HTTPS; true disables the redirect
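The manifest references a Secret named ik8s-tls. If it does not yet exist, a self-signed certificate is enough for testing; this is a sketch, and the file names and certificate subject are assumptions:

openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout ik8s.key -out ik8s.crt -subj "/CN=www.ik8s.io"   # self-signed cert for the FQDN
kubectl create secret tls ik8s-tls --cert=ik8s.crt --key=ik8s.key -n default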

[root@k8s-master Ingress]# kubectl apply -f httpproxy-demo.yaml 
httpproxy.projectcontour.io/httpproxy-demo configured
  • View the proxy with either httpproxy or httpproxies
[root@k8s-master Ingress]# kubectl get httpproxy 
NAME             FQDN          TLS SECRET   STATUS   STATUS DESCRIPTION
httpproxy-demo   www.ik8s.io   ik8s-tls     valid    Valid HTTPProxy

[root@k8s-master Ingress]# kubectl get httpproxies
NAME             FQDN          TLS SECRET   STATUS   STATUS DESCRIPTION
httpproxy-demo   www.ik8s.io   ik8s-tls     valid    Valid HTTPProxy

[root@k8s-master Ingress]# kubectl describe httpproxy  httpproxy-demo 
...
Spec:
  Routes:
    Conditions:
      Prefix:         /
    Permit Insecure:  true
    Services:
      Name:  demoapp-deploy
      Port:  80
  Virtualhost:
    Fqdn:  www.ik8s.io
    Tls:
      Minimum Protocol Version:  tlsv1.1
      Secret Name:               ik8s-tls
Status:
  Conditions:
    Last Transition Time:  2021-09-13T08:44:00Z
    Message:               Valid HTTPProxy
    Observed Generation:   2
    Reason:                Valid
    Status:                True
    Type:                  Valid
  Current Status:          valid
  Description:             Valid HTTPProxy
  Load Balancer:
Events:  <none>


[root@k8s-master Ingress]# kubectl get svc -n projectcontour
NAME      TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
contour   ClusterIP      10.100.120.94   <none>        8001/TCP                     39m
envoy     LoadBalancer   10.97.48.41     <pending>     80:32668/TCP,443:32278/TCP   39m
  • Add hosts entries and test access
[root@bigyong ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
...
192.168.54.171   www.ik8s.io

[root@bigyong ~]# curl  www.ik8s.io:32668
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: deployment-demo-867c7d9d55-9lnpq, ServerIP: 192.168.12.39!

[root@bigyong ~]# curl  www.ik8s.io:32668
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: deployment-demo-867c7d9d55-gw6qp, ServerIP: 192.168.113.39!
  • HTTPS access
[root@bigyong ~]# curl https://www.ik8s.io:32278
curl: (60) Peer's certificate issuer has been marked as not trusted by the user.
More details here: http://curl.haxx.se/docs/sslcerts.html

curl performs SSL certificate verification by default, using a "bundle"
 of Certificate Authority (CA) public keys (CA certs). If the default
 bundle file isn't adequate, you can specify an alternate file
 using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
 the bundle, the certificate verification probably failed due to a
 problem with the certificate (it might be expired, or the name might
 not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
 the -k (or --insecure) option.

[root@bigyong ~]# curl -k https://www.ik8s.io:32278   # ignore the untrusted certificate; access succeeds
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: deployment-demo-867c7d9d55-9lnpq, Se
Example 1: Access control (header-based routing)
  • Create two Deployments running different image versions
[root@k8s-master Ingress]# kubectl create deployment demoappv11 --image='ikubernetes/demoapp:v1.1' -n dev
deployment.apps/demoappv11 created

[root@k8s-master Ingress]# kubectl create deployment demoappv12 --image='ikubernetes/demoapp:v1.2' -n dev
deployment.apps/demoappv12 created
  • Create the matching Services
[root@k8s-master Ingress]# kubectl create service clusterip demoappv11 --tcp=80 -n dev
service/demoappv11 created
[root@k8s-master Ingress]# kubectl create service clusterip demoappv12 --tcp=80 -n dev
service/demoappv12 created
[root@k8s-master Ingress]# kubectl get svc -n dev
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
demoappv11   ClusterIP   10.99.204.65   <none>        80/TCP    19s
demoappv12   ClusterIP   10.97.211.38   <none>        80/TCP    17s

[root@k8s-master Ingress]# kubectl describe svc demoappv11 -n dev
Name:              demoappv11
Namespace:         dev
Labels:            app=demoappv11
Annotations:       <none>
Selector:          app=demoappv11
Type:              ClusterIP
IP:                10.99.204.65
Port:              80  80/TCP
TargetPort:        80/TCP
Endpoints:         192.168.12.53:80
Session Affinity:  None
Events:            <none>

[root@k8s-master Ingress]# kubectl describe svc demoappv12 -n dev
Name:              demoappv12
Namespace:         dev
Labels:            app=demoappv12
Annotations:       <none>
Selector:          app=demoappv12
Type:              ClusterIP
IP:                10.97.211.38
Port:              80  80/TCP
TargetPort:        80/TCP
Endpoints:         192.168.51.79:80
Session Affinity:  None
Events:            <none>
  • Access test
[root@k8s-master Ingress]# curl 10.99.204.65
iKubernetes demoapp v1.1 !! ClientIP: 192.168.4.170, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
[root@k8s-master Ingress]# curl 10.97.211.38
iKubernetes demoapp v1.2 !! ClientIP: 192.168.4.170, ServerName: demoappv12-64c664955b-lkchk, ServerIP: 192.168.51.79!
  • Deploy the Envoy HTTPProxy
[root@k8s-master Ingress]# cat httpproxy-headers-routing.yaml 
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: httpproxy-headers-routing
  namespace: dev
spec:
  virtualhost:
    fqdn: www.ilinux.io
  routes:  # routing rules
  - conditions:
    - header:
        name: X-Canary  # the X-Canary header must be present
        present: true
    - header:
        name: User-Agent # the User-Agent header must contain "curl"
        contains: curl
    services:  # requests matching both conditions are routed to demoappv11
    - name: demoappv11
      port: 80
  - services:  # everything else is routed to demoappv12
    - name: demoappv12
      port: 80

[root@k8s-master Ingress]# kubectl apply -f httpproxy-headers-routing.yaml 
httpproxy.projectcontour.io/httpproxy-headers-routing unchanged
[root@k8s-master Ingress]# kubectl get httpproxy -n dev
NAME                        FQDN            TLS SECRET   STATUS   STATUS DESCRIPTION
httpproxy-headers-routing   www.ilinux.io                valid    Valid HTTPProxy

[root@k8s-master Ingress]# kubectl  get svc -n projectcontour
NAME      TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
contour   ClusterIP      10.100.120.94   <none>        8001/TCP                     114m
envoy     LoadBalancer   10.97.48.41     <pending>     80:32668/TCP,443:32278/TCP   114m
  • Access test
[root@bigyong ~]# cat /etc/hosts   # add hosts entries
...
192.168.54.171   www.ik8s.io  www.ilinux.io


[root@bigyong ~]# curl http://www.ilinux.io  # the default route serves v1.2
iKubernetes demoapp v1.2 !! ClientIP: 192.168.113.54, ServerName: demoappv12-64c664955b-lkchk, ServerIP: 192.168.51.79!
[root@bigyong ~]# curl http://www.ilinux.io
iKubernetes demoapp v1.2 !! ClientIP: 192.168.113.54, ServerName: demoappv12-64c664955b-lkchk, ServerIP: 192.168.51.79!
  • 因?yàn)橥ㄟ^curl訪問 所以在添加信息頭中添加 X-Canary:true即可滿足條件 為1.1版本
[root@bigyong ~]# curl -H "X-Canary:true" http://www.ilinux.io 
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
[root@bigyong ~]# curl -H "X-Canary:true" http://www.ilinux.io
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!


[root@k8s-master Ingress]# kubectl delete -f httpproxy-headers-routing.yaml 
httpproxy.projectcontour.io "httpproxy-headers-routing" deleted

Example 2: Traffic splitting (canary release)
  • Release to a small share of traffic first; once it proves healthy, roll it out to everyone
  • Deploy an Envoy HTTPProxy with a 90%/10% traffic split between v1.1 and v1.2
[root@k8s-master Ingress]# cat httpproxy-traffic-splitting.yaml 
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: httpproxy-traffic-splitting
  namespace: dev
spec:
  virtualhost:
    fqdn: www.ilinux.io
  routes:
  - conditions:
    - prefix: /
    services:
    - name: demoappv11 
      port: 80
      weight: 90   # 90% of traffic goes to v1.1
    - name: demoappv12
      port: 80
      weight: 10  # 10% of traffic goes to v1.2
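The original session jumps straight to checking the resource; applying the manifest first is implied:

kubectl apply -f httpproxy-traffic-splitting.yaml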

[root@k8s-master Ingress]# kubectl get httpproxy -n dev
NAME                          FQDN            TLS SECRET   STATUS   STATUS DESCRIPTION
httpproxy-traffic-splitting   www.ilinux.io                valid    Valid HTTPProxy
  • Access test
[root@bigyong ~]# while true; do curl http://www.ilinux.io; sleep .1; done   # the v1.1:v1.2 ratio is roughly 9:1
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.2 !! ClientIP: 192.168.113.54, ServerName: demoappv12-64c664955b-lkchk, ServerIP: 192.168.51.79!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.2 !! ClientIP: 192.168.113.54, ServerName: demoappv12-64c664955b-lkchk, ServerIP: 192.168.51.79!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!

Example 3: Traffic mirroring
[root@k8s-master Ingress]# cat httpproxy-traffic-mirror.yaml 
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: httpproxy-traffic-mirror
  namespace: dev
spec:
  virtualhost:
    fqdn: www.ilinux.io
  routes:
  - conditions:
    - prefix: /
    services:
    - name: demoappv11
      port: 80
    - name: demoappv12
      port: 80
      mirror: true  # mirror traffic to this service

[root@k8s-master Ingress]# kubectl apply -f httpproxy-traffic-mirror.yaml

[root@k8s-master Ingress]# kubectl get httpproxy -n dev
NAME                       FQDN            TLS SECRET   STATUS   STATUS DESCRIPTION
httpproxy-traffic-mirror   www.ilinux.io                valid    Valid HTTPProxy
[root@k8s-master Ingress]# kubectl get pod -n dev
NAME                          READY   STATUS    RESTARTS   AGE
demoappv11-59544d568d-5gg72   1/1     Running   0          74m
demoappv12-64c664955b-lkchk   1/1     Running   0          74m
  • Access test
# all responses come from v1.1
[root@bigyong ~]# while true; do curl http://www.ilinux.io; sleep .1; done
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
  • Check the logs of the v1.2 Pod: it receives the same mirrored traffic, and the requests complete normally
[root@k8s-master Ingress]# kubectl get pod -n dev
NAME                          READY   STATUS    RESTARTS   AGE
demoappv11-59544d568d-5gg72   1/1     Running   0          74m
demoappv12-64c664955b-lkchk   1/1     Running   0          74m
[root@k8s-master Ingress]# kubectl logs demoappv12-64c664955b-lkchk -n dev
 * Running on http://0.0.0.0:80/ (Press CTRL+C to quit)
192.168.4.170 - - [13/Sep/2021 09:35:01] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 09:46:24] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 09:46:28] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 09:46:29] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 09:47:12] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 09:47:25] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 09:50:50] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 09:50:51] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 09:50:51] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 09:50:52] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:03:49] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:03:49] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:03:49] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:03:50] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:03:51] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:03:51] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:03:52] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:03:53] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:03:53] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:03:56] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:03:56] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:04:07] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:04:14] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:04:28] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:05:14] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:05:16] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:57] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:57] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:57] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:58] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:58] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:58] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:58] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:58] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:58] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:58] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:58] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:59] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:59] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:59] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:59] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:59] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:59] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:59] "GET / HTTP/1.1" 200 -

[root@k8s-master Ingress]# kubectl delete  -f httpproxy-traffic-mirror.yaml 
httpproxy.projectcontour.io "httpproxy-traffic-mirror" deleted

Example 4: Custom load-balancing strategy
[root@k8s-master Ingress]# cat httpproxy-lb-strategy.yaml 
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: httpproxy-lb-strategy
  namespace: dev
spec:
  virtualhost:
    fqdn: www.ilinux.io
  routes:
    - conditions:
      - prefix: /
      services:
        - name: demoappv11
          port: 80
        - name: demoappv12
          port: 80
      loadBalancerPolicy:
        strategy: Random  # random load-balancing strategy
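The original stops at the manifest; to observe the Random strategy, one could apply it and watch responses interleave between the two versions (this assumes the hosts entry and Services from the earlier examples):

kubectl apply -f httpproxy-lb-strategy.yaml
while true; do curl -s http://www.ilinux.io; sleep .2; done   # v1.1 and v1.2 responses should alternate randomly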
Example 5: HTTPProxy service resilience (timeouts and retries)
[root@k8s-master Ingress]# cat  httpproxy-retry-timeout.yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: httpproxy-retry-timeout
  namespace: dev
spec:
  virtualhost:
    fqdn: www.ilinux.io
  routes:
  - timeoutPolicy:
      response: 2s  # wait up to 2s for a response; anything longer is a timeout
      idle: 5s  # Envoy keeps the idle client connection open for 5s after a timeout
    retryPolicy:
      count: 3  # retry up to 3 times
      perTryTimeout: 500ms  # timeout for each individual retry
    services:
    - name: demoappv12
      port: 80
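A quick way to exercise the policy, as a sketch: Envoy's standard behavior is to answer HTTP 504 "upstream request timeout" when the upstream exceeds the response timeout; whether demoapp exposes a deliberately slow endpoint to trigger this is an assumption, so the second command may simply succeed against a healthy backend.

kubectl apply -f httpproxy-retry-timeout.yaml
curl -i http://www.ilinux.io/   # a backend response slower than 2s would surface as HTTP 504 from Envoy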

References:

https://baijiahao.baidu.com/s?id=1673615010327758104&wfr=spider&for=pc
