APISIX in practice on Kubernetes

1. Reference documents

# 1. Helm chart for installing apisix, apisix-dashboard, and apisix-ingress-controller
https://github.com/apache/apisix-helm-chart
# 2. Errors hit during installation can be searched in the issues here
https://github.com/apache/apisix-helm-chart/issues
# 3. apisix-ingress-controller documentation and usage
https://apisix.apache.org/zh/docs/ingress-controller/practices/the-hard-way/
# 4. Canary release documentation
https://apisix.apache.org/zh/docs/ingress-controller/concepts/apisix_route

二冀泻、環(huán)境

IP              Role
192.168.13.12   k8s-master-01
192.168.13.211  k8s-node-01
192.168.13.58   k8s-node-02, NFS server

Install Helm in advance.

三蜡饵、安裝

3.1道宅、storageclass安裝配置

3.1.1 On the NFS node

yum install rpcbind nfs-utils -y
systemctl enable rpcbind
systemctl enable nfs-server
systemctl start rpcbind
systemctl start nfs-server
mkdir -p /nfs/data
[root@k8s-node-02 ~]# cat /etc/exports
/nfs/data/ *(insecure,rw,sync,no_root_squash)
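
After editing /etc/exports, reload the export table and confirm the share is visible (adjust the IP to your NFS node):

exportfs -r
showmount -e 192.168.13.58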

3.1.2 nfs-subdir-external-provisioner

helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm repo update
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner --set nfs.server=192.168.13.58 --set nfs.path=/nfs/data
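
A quick sanity check that the provisioner pod is running (the label below is the chart default; adjust it if your release differs):

kubectl get pods -l app=nfs-subdir-external-provisioner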

3.1.3 StorageClass

[root@k8s-master-01 apisix]# cat storageclass.yaml 
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"  # set as the default StorageClass
provisioner: cluster.local/nfs-subdir-external-provisioner
parameters:
  server: 192.168.13.58
  path: /nfs/data
  readOnly: "false"
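
Apply it and confirm the class is registered and marked as default:

kubectl apply -f storageclass.yaml
kubectl get storageclass   # nfs-storage should be flagged "(default)"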

3.2 APISIX installation and configuration

Note: the chart needs a few modifications.

helm pull apisix/apisix
tar -xf apisix-0.9.3.tgz
Make the following additions in apisix/values.yaml:
ingress-controller:                 ## enable the bundled ingress-controller
  enabled: true
storageClass: "nfs-storage"         ## use the nfs-storage class created above
accessMode:
  - ReadWriteOnce
helm package apisix
helm install apisix apisix-0.9.3.tgz --create-namespace  --namespace apisix
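
Watch the rollout; the etcd StatefulSet needs the StorageClass above to bind its PVCs, so both are worth checking:

kubectl -n apisix get pods -w
kubectl -n apisix get pvc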

After the installation you will find that the pod apisix-ingress-controller-6697f4849d-wdrr5 stays stuck in the Init state. You can search https://github.com/apache/apisix-helm-chart/issues for this (see https://github.com/apache/apisix-helm-chart/pull/284), or diagnose it from the pod logs. The cause:

apisix-ingress-controller watches the Kubernetes apiserver for CRD resources (ApisixRoute) and talks to apisix through the svc apisix-admin on port 9180; apisix then writes the rules into etcd. The logs, however, show the controller polling apisix-admin.ingress-apisix.svc.cluster.local:9180, while the svc and pods are actually deployed in the apisix namespace. So the address must be changed to apisix-admin.apisix.svc.cluster.local:9180 in two places:
kubectl edit deployment apisix-ingress-controller -n apisix
kubectl edit configmap apisix-configmap -n apisix
Then delete the apisix-ingress-controller pod so it is recreated with the new configuration.
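
For example, using the pod name from above (the Deployment recreates it automatically):

kubectl -n apisix delete pod apisix-ingress-controller-6697f4849d-wdrr5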

Later, when checking the apisix-ingress-controller pod logs, errors like the following often appear:


[Figure: apisix-ingress-controller error log]

This is usually caused by misconfigured ServiceAccount permissions. Here is my working configuration; when new errors show up in the logs, extend the rules accordingly:

[root@k8s-master-01 apisix]# cat 12-sa.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: apisix-ingress-controller
  namespace: apisix
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: apisix-clusterrole
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - persistentvolumeclaims
      - pods
      - replicationcontrollers
      - replicationcontrollers/scale
      - serviceaccounts
      - services
      - secrets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - bindings
      - events
      - limitranges
      - namespaces/status
      - pods/log
      - pods/status
      - replicationcontrollers/status
      - resourcequotas
      - resourcequotas/status
    verbs:
      - get
      - list
      - watch
      - create
      - delete
      - update
  - apiGroups:
      - ""
    resources:
      - namespaces
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - apps
    resources:
      - controllerrevisions
      - daemonsets
      - deployments
      - deployments/scale
      - replicasets
      - replicasets/scale
      - statefulsets
      - statefulsets/scale
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - autoscaling
    resources:
      - horizontalpodautoscalers
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - batch
    resources:
      - cronjobs
      - jobs
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - daemonsets
      - deployments
      - deployments/scale
      - ingresses
      - networkpolicies
      - replicasets
      - replicasets/scale
      - replicationcontrollers/scale
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - policy
    resources:
      - poddisruptionbudgets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
      - networkpolicies
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - metrics.k8s.io
    resources:
      - pods
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - apisix.apache.org
    resources:
      - apisixroutes
      - apisixroutes/status
      - apisixupstreams
      - apisixupstreams/status
      - apisixtlses
      - apisixtlses/status
      - apisixclusterconfigs
      - apisixclusterconfigs/status
      - apisixconsumers
      - apisixconsumers/status
      - apisixpluginconfigs
      - apisixpluginconfigs/status
    verbs:
      - get
      - list
      - watch
      - create
      - update
  - apiGroups:
      - coordination.k8s.io
    resources:
      - leases
    verbs:
      - '*'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: apisix-clusterrolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: apisix-clusterrole
subjects:
  - kind: ServiceAccount
    name: apisix-ingress-controller
    namespace: apisix
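
Apply the manifest and spot-check a permission the controller needs, impersonating the ServiceAccount:

kubectl apply -f 12-sa.yaml
kubectl auth can-i watch apisixroutes.apisix.apache.org --as=system:serviceaccount:apisix:apisix-ingress-controller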

3.3摇庙、dashboard安裝

helm install apisix-dashboard apisix/apisix-dashboard --create-namespace --namespace apisix
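
The dashboard service is ClusterIP by default, so a port-forward is the quickest way to reach it (service name and port assumed from the chart defaults):

kubectl -n apisix port-forward svc/apisix-dashboard 8080:80
# then browse to http://127.0.0.1:8080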

3.4 Inspecting the resources

Since the ingress-controller was already enabled in section 3.2, it is not installed a second time as the tutorial does.


APISIX services:
svc apisix-admin    9180       -> pod apisix:9180   Admin API port for managing routes, streams, consumers, etc.
svc apisix-gateway  80:30761   -> pod apisix:9080   gateway port that serves application URLs
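
The same mapping can be confirmed directly from the cluster:

kubectl -n apisix get svc apisix-admin apisix-gateway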

3.5夕凝、使用

3.5.1 Create a pod

kubectl run httpbin --image-pull-policy=IfNotPresent --image kennethreitz/httpbin --port 80
kubectl expose pod httpbin --port 80

3.5.2 Create the custom resource (ApisixRoute)

[root@k8s-master-01 apisix]# cat ApisixRoute.yaml
apiVersion: apisix.apache.org/v2beta3 
kind: ApisixRoute
metadata:
  name: httpserver-route
spec:
  http:
  - name: httpbin
    match:
      hosts:
      - local.httpbin.org
      paths:
      - /*
    backends:
      - serviceName: httpbin
        servicePort: 80
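
Apply the route and check that the controller picked it up:

kubectl apply -f ApisixRoute.yaml
kubectl get apisixroute httpserver-route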

Note that the apiVersion used here is "apisix.apache.org/v2beta3". If the controller is configured for a different version, the apisix-ingress-controller logs show errors like the following:

Failed to watch *v2beta1.ApisixRoute: failed to list *v2beta1.ApisixRoute: the server could not find the requested resource (get apisixroutes.apisix.apache.org)

This is fixed by editing the ConfigMap:
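
A sketch of the change, assuming the chart's default ConfigMap layout and the controller's apisix_route_version setting:

kubectl -n apisix edit configmap apisix-configmap
# in the embedded config.yaml, align the version with the manifest above:
#   kubernetes:
#     apisix_route_version: "apisix.apache.org/v2beta3"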


[Figure: apisix-ingress-controller log error]

The field highlighted in the figure was v2beta1 before the change, which does not match the apiVersion in our manifest, hence the error; changing it to v2beta3 resolves it.

3.5.3 Testing

[root@k8s-master-01 apisix]# kubectl -n apisix exec -it $(kubectl get pods -n apisix -l app.kubernetes.io/name=apisix -o name) -- curl "http://127.0.0.1:9080/get" -H 'Host: local.httpbin.org'
{
  "args": {}, 
  "headers": {
    "Accept": "*/*", 
    "Host": "local.httpbin.org", 
    "User-Agent": "curl/7.79.1", 
    "X-Forwarded-Host": "local.httpbin.org"
  }, 
  "origin": "127.0.0.1", 
  "url": "http://local.httpbin.org/get"
}
[root@k8s-master-01 apisix]# curl http://192.168.13.12:30761/get -H "Host: local.httpbin.org"
{
  "args": {}, 
  "headers": {
    "Accept": "*/*", 
    "Host": "local.httpbin.org", 
    "User-Agent": "curl/7.29.0", 
    "X-Forwarded-Host": "local.httpbin.org"
  }, 
  "origin": "20.10.151.128", 
  "url": "http://local.httpbin.org/get"
}

3.6 Canary releases

# Reference used for this section
https://api7.ai/blog/traffic-split-in-apache-apisix-ingress-controller
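
The manifests below all target a canary namespace, so create it first:

kubectl create namespace canary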

3.6.1 Stable version

[root@k8s-master-01 canary]# cat 1-stable.yaml 
apiVersion: v1
kind: Service
metadata:
  name: myapp-stable-service
  namespace: canary
spec:
  ports:
  - port: 80
    targetPort: 80
    name: http-port
  selector:
    app: myapp
    version: stable
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-stable
  namespace: canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
      version: stable
  template:
    metadata:
      labels:
        app: myapp
        version: stable
    spec:
      containers:
      - image: registry.cn-hangzhou.aliyuncs.com/public-registry-fzh/myapp:v1
        imagePullPolicy: IfNotPresent
        name: myapp-stable
        ports:
        - name: http-port
          containerPort: 80
        env:
        - name: APP_ENV
          value: stable

3.6.2 Canary version

[root@k8s-master-01 canary]# cat 2-canary.yaml 
apiVersion: v1
kind: Service
metadata:
  name: myapp-canary-service
  namespace: canary
spec:
  ports:
  - port: 80
    targetPort: 80
    name: http-port
  selector:
    app: myapp
    version: canary
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-canary
  namespace: canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
      version: canary
  template:
    metadata:
      labels:
        app: myapp
        version: canary
    spec:
      containers:
      - image: registry.cn-hangzhou.aliyuncs.com/public-registry-fzh/myapp:v2
        imagePullPolicy: IfNotPresent
        name: myapp-canary
        ports:
        - name: http-port
          containerPort: 80
        env:
        - name: APP_ENV
          value: canary
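
Apply both versions and make sure each Deployment has a ready pod before wiring up any routes:

kubectl apply -f 1-stable.yaml -f 2-canary.yaml
kubectl -n canary get pods -l app=myapp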

3.6.3 Weight-based canary release

[root@k8s-master-01 canary]# cat 3-apisixroute-weight.yaml 
apiVersion: apisix.apache.org/v2beta3
kind: ApisixRoute
metadata:
  name: myapp-canary-apisixroute
  namespace: canary 
spec:
  http:
  - name: myapp-canary-rule
    match:
      hosts:
      - myapp.fengzhihai.cn
      paths:
      - /
    backends:
    - serviceName: myapp-stable-service
      servicePort: 80
      weight: 10
    - serviceName: myapp-canary-service
      servicePort: 80
      weight: 5

Testing:


[Figure: weight-based canary release]

Stable and canary receive traffic at roughly 2:1, matching the 10:5 weights above.
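
A rough way to observe the split from the command line, assuming the NodePort 30761 from section 3.4 and that the v1/v2 images return different response bodies:

kubectl apply -f 3-apisixroute-weight.yaml
for i in $(seq 1 30); do curl -s http://192.168.13.12:30761/ -H "Host: myapp.fengzhihai.cn"; done | sort | uniq -c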

3.6.4 Priority-based canary release

Traffic is routed preferentially to the pods behind the rule with the higher priority value.

[root@k8s-master-01 canary]# kubectl apply -f priority.yaml
apisixroute.apisix.apache.org/myapp-canary-apisixroute2 created
[root@k8s-master-01 canary]# cat 4-ap.yaml 
apiVersion: apisix.apache.org/v2beta3
kind: ApisixRoute
metadata:
  name: myapp-canary-apisixroute2
  namespace: canary
spec:
  http:
  - name: myapp-stable-rule2
    priority: 1
    match:
      hosts:
      - myapp.fengzhihai.cn
      paths:
      - /
    backends:
    - serviceName: myapp-stable-service
      servicePort: 80
  - name: myapp-canary-rule2
    priority: 2
    match:
      hosts:
      - myapp.fengzhihai.cn
      paths:
      - /
    backends:
    - serviceName: myapp-canary-service
      servicePort: 80

Testing:


[Figure: priority-based canary release]

Traffic preferentially goes to myapp-canary-service, whose rule has the higher priority (2 vs. 1).

3.6.5 Parameter-based canary release

[root@k8s-master-01 canary]# cat vars.yaml 
apiVersion: apisix.apache.org/v2beta3
kind: ApisixRoute
metadata:
  name: myapp-canary-apisixroute3
  namespace: canary 
spec:
  http:
  - name: myapp-stable-rule3
    priority: 1
    match:
      hosts:
      - myapp.fengzhihai.cn
      paths:
      - /
    backends:
    - serviceName: myapp-stable-service
      servicePort: 80
  - name: myapp-canary-rule3
    priority: 2
    match:
      hosts:
      - myapp.fengzhihai.cn
      paths:
      - /
      exprs:
      - subject:
          scope: Query
          name: id
        op: In
        set:
        - "3"
        - "13"
        - "23"
        - "33"
    backends:
    - serviceName: myapp-canary-service
      servicePort: 80

Testing:


[Figure: condition-based canary release]

Traffic matching the condition (query parameter id in the set {3, 13, 23, 33}) goes to myapp-canary-service; everything else goes to myapp-stable-service.
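
For example, assuming the NodePort 30761 from section 3.4:

curl "http://192.168.13.12:30761/?id=3" -H "Host: myapp.fengzhihai.cn"   # id in the set -> canary
curl "http://192.168.13.12:30761/?id=4" -H "Host: myapp.fengzhihai.cn"   # no match      -> stable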

3.6.6 Header-based canary release

[root@k8s-master-01 canary]# cat canary-header.yaml 
apiVersion: apisix.apache.org/v2beta3
kind: ApisixRoute
metadata:
  name: myapp-canary-apisixroute3
  namespace: canary 
spec:
  http:
  - name: myapp-stable-rule3
    priority: 1
    match:
      hosts:
      - myapp.fengzhihai.cn
      paths:
      - /
    backends:
    - serviceName: myapp-stable-service
      servicePort: 80
  - name: myapp-canary-rule3
    priority: 2
    match:
      hosts:
      - myapp.fengzhihai.cn
      paths:
      - /
      exprs:
      - subject:
          scope: Header
          name: canary
        op: RegexMatch
        value: ".*myapp.*"
    backends:
    - serviceName: myapp-canary-service
      servicePort: 80

Testing:


[Figure: header-based canary release]
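
For example, a request whose canary header matches the regex lands on the canary service (NodePort assumed as above):

curl http://192.168.13.12:30761/ -H "Host: myapp.fengzhihai.cn" -H "canary: test-myapp-request"   # -> canary
curl http://192.168.13.12:30761/ -H "Host: myapp.fengzhihai.cn"                                   # -> stable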