1. Related documentation
# 1. Helm charts for installing apisix, apisix-dashboard, and apisix-ingress-controller
https://github.com/apache/apisix-helm-chart
# 2. Errors hit during installation can be searched for in the issues below
https://github.com/apache/apisix-helm-chart/issues
# 3. apisix-ingress-controller documentation and usage
https://apisix.apache.org/zh/docs/ingress-controller/practices/the-hard-way/
# 4. Canary release
https://apisix.apache.org/zh/docs/ingress-controller/concepts/apisix_route
2. Environment
IP | Notes |
---|---|
192.168.13.12 | k8s-master-01 |
192.168.13.211 | k8s-node-01 |
192.168.13.58 | k8s-node-02, NFS server |
Helm needs to be installed in advance.
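A minimal sketch of installing Helm with the official installer script (any other installation method works just as well; this assumes the node can reach the internet):
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
helm version   # confirm the client is installed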
3. Installation
3.1 Installing and configuring the StorageClass
3.1.1 On the NFS node
yum install rpcbind nfs-utils -y
systemctl enable rpcbind
systemctl enable nfs-server
systemctl start rpcbind
systemctl start nfs-server
mkdir /nfs/data
[root@k8s-node-02 ~]# cat /etc/exports
/nfs/data/ *(insecure,rw,sync,no_root_squash)
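After editing /etc/exports, reload the export table and confirm the share is visible (a quick sanity check, not part of the original article):
exportfs -arv                # re-export everything listed in /etc/exports
showmount -e 192.168.13.58   # should list /nfs/data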
3.1.2 nfs-subdir-external-provisioner
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm repo update
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner --set nfs.server=192.168.13.58 --set nfs.path=/nfs/data
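Before moving on, it is worth confirming that the provisioner pod is running (a sketch; the label below follows the chart's default naming and may differ if you overrode it):
kubectl get pods -l app=nfs-subdir-external-provisioner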
3.1.3 StorageClass
[root@k8s-master-01 apisix]# cat storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: nfs-storage
namespace: default
annotations:
    storageclass.kubernetes.io/is-default-class: "true" # set as the default StorageClass
provisioner: cluster.local/nfs-subdir-external-provisioner
parameters:
server: 192.168.13.58
path: /nfs/data
readOnly: "false"
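Apply the manifest and check that nfs-storage is registered as the default StorageClass (sketch):
kubectl apply -f storageclass.yaml
kubectl get storageclass   # nfs-storage should be marked "(default)"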
3.2 Installing and configuring APISIX
Note: the chart needs a few modifications before installing.
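The steps below assume the apisix chart repository has already been added to Helm; if it has not, add it first:
helm repo add apisix https://charts.apiseven.com
helm repo update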
helm pull apisix/apisix
tar -xf apisix-0.9.3.tgz
Make the following changes to apisix/values.yaml (additions only):
ingress-controller: ## enable the ingress-controller
  enabled: true
storageClass: "nfs-storage" ## use the nfs-storage StorageClass created above
accessMode:
- ReadWriteOnce
helm package apisix
helm install apisix apisix-0.9.3.tgz --create-namespace --namespace apisix
After the installation finishes, you will notice that the pod apisix-ingress-controller-6697f4849d-wdrr5 stays stuck in the Init state. You can search https://github.com/apache/apisix-helm-chart/issues for this (see https://github.com/apache/apisix-helm-chart/pull/284), or work it out from the pod logs. The cause is as follows:
apisix-ingress-controller watches the Kubernetes apiserver for CRD resources (ApisixRoute and friends) and talks to APISIX through the apisix-admin service on port 9180; APISIX then writes the rules into etcd. The logs, however, show the controller waiting on apisix-admin.ingress-apisix.svc.cluster.local:9180, while the service and pods are actually deployed in the apisix namespace. The address therefore has to be changed to apisix-admin.apisix.svc.cluster.local:9180 in the following two places:
kubectl edit deployment apisix-ingress-controller -n apisix
kubectl edit configmap apisix-configmap -n apisix
Then delete the apisix-ingress-controller pod so that it is recreated with the new configuration.
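To double-check the change before the new pod starts (a sketch; apisix-configmap is the configmap edited above, and the exact key holding the admin address differs between controller versions):
kubectl -n apisix get configmap apisix-configmap -o yaml | grep apisix-admin
# the address should now read apisix-admin.apisix.svc.cluster.local:9180
kubectl -n apisix delete pod <apisix-ingress-controller-pod-name>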
Afterwards, the apisix-ingress-controller pod logs often still show permission-related errors.
These are generally caused by a misconfigured ServiceAccount. I am pasting my RBAC configuration below; whenever such an error shows up in the logs, adjust the corresponding rule:
[root@k8s-master-01 apisix]# cat 12-sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: apisix-ingress-controller
namespace: apisix
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: apisix-clusterrole
namespace: apisix
rules:
- apiGroups:
- ""
resources:
- configmaps
- endpoints
- persistentvolumeclaims
- pods
- replicationcontrollers
- replicationcontrollers/scale
- serviceaccounts
- services
- secrets
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- bindings
- events
- limitranges
- namespaces/status
- pods/log
- pods/status
- replicationcontrollers/status
- resourcequotas
- resourcequotas/status
verbs:
- get
- list
- watch
- create
- delete
- update
- apiGroups:
- ""
resources:
- namespaces
verbs:
- get
- list
- watch
- apiGroups:
- apps
resources:
- controllerrevisions
- daemonsets
- deployments
- deployments/scale
- replicasets
- replicasets/scale
- statefulsets
- statefulsets/scale
verbs:
- get
- list
- watch
- apiGroups:
- autoscaling
resources:
- horizontalpodautoscalers
verbs:
- get
- list
- watch
- apiGroups:
- batch
resources:
- cronjobs
- jobs
verbs:
- get
- list
- watch
- apiGroups:
- extensions
resources:
- daemonsets
- deployments
- deployments/scale
- ingresses
- networkpolicies
- replicasets
- replicasets/scale
- replicationcontrollers/scale
verbs:
- get
- list
- watch
- apiGroups:
- policy
resources:
- poddisruptionbudgets
verbs:
- get
- list
- watch
- apiGroups:
- networking.k8s.io
resources:
- ingresses
- networkpolicies
verbs:
- get
- list
- watch
- apiGroups:
- metrics.k8s.io
resources:
- pods
verbs:
- get
- list
- watch
- apiGroups:
- apisix.apache.org
resources:
- apisixroutes
- apisixroutes/status
- apisixupstreams
- apisixupstreams/status
- apisixtlses
- apisixtlses/status
- apisixclusterconfigs
- apisixclusterconfigs/status
- apisixconsumers
- apisixconsumers/status
- apisixpluginconfigs
- apisixpluginconfigs/status
verbs:
- get
- list
- watch
- create
- update
- apiGroups:
- coordination.k8s.io
resources:
- leases
verbs:
- '*'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: apisix-clusterrolebinding
namespace: apisix
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: apisix-clusterrole
subjects:
- kind: ServiceAccount
name: apisix-ingress-controller
namespace: apisix
3.3 Installing the dashboard
helm install apisix-dashboard apisix/apisix-dashboard --create-namespace --namespace apisix
3.4 Checking the resources
Since the ingress-controller was already enabled in 3.2, it is not installed a second time here the way the tutorial does.
svc apisix-admin:9180 -> pod apisix:9180, the Admin API port used to manage routes, streams, consumers, and so on
svc apisix-gateway 80:30761 -> pod apisix:9080, the port application URLs are accessed through
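Both can be checked with kubectl (output abridged; the NodePort 30761 is specific to this cluster and will differ on yours):
kubectl -n apisix get svc
# apisix-admin     ClusterIP   ...   9180/TCP
# apisix-gateway   NodePort    ...   80:30761/TCP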
3.5 Usage
3.5.1 Create a pod
kubectl run httpbin --image-pull-policy=IfNotPresent --image kennethreitz/httpbin --port 80
kubectl expose pod httpbin --port 80
3.5.2 Create the CRD resource
[root@k8s-master-01 apisix]# cat ApisixRoute.yaml
apiVersion: apisix.apache.org/v2beta3
kind: ApisixRoute
metadata:
name: httpserver-route
spec:
http:
- name: httpbin
match:
hosts:
- local.httpbin.org
paths:
- /*
backends:
- serviceName: httpbin
servicePort: 80
Note that the apiVersion used here is "apisix.apache.org/v2beta3". Looking at the apisix-ingress-controller logs, you will find the following error:
Failed to watch *v2beta1.ApisixRoute: failed to list *v2beta1.ApisixRoute: the server could not find the requested resource (get apisixroutes.apisix.apache.org)
The fix is to modify the configmap: the ApisixRoute version configured there was v2beta1 before the change, which does not match the apiVersion we are using and therefore causes the error; change it to v2beta3 (see the sketch below).
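A sketch of the change (the relevant key is the ApisixRoute version setting inside the controller's config.yaml; I am assuming the common key name, which may vary slightly between controller versions):
kubectl -n apisix edit configmap apisix-configmap
#   apisix_route_version: "apisix.apache.org/v2beta1"  ->  "apisix.apache.org/v2beta3"
# then delete the apisix-ingress-controller pod so the new config is loaded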
3.5.3 Test
[root@k8s-master-01 apisix]# kubectl -n apisix exec -it $(kubectl get pods -n apisix -l app.kubernetes.io/name=apisix -o name) -- curl "http://127.0.0.1:9080/get" -H 'Host: local.httpbin.org'
{
"args": {},
"headers": {
"Accept": "*/*",
"Host": "local.httpbin.org",
"User-Agent": "curl/7.79.1",
"X-Forwarded-Host": "local.httpbin.org"
},
"origin": "127.0.0.1",
"url": "http://local.httpbin.org/get"
}
[root@k8s-master-01 apisix]# curl http://192.168.13.12:30761/get -H "Host: local.httpbin.org"
{
"args": {},
"headers": {
"Accept": "*/*",
"Host": "local.httpbin.org",
"User-Agent": "curl/7.29.0",
"X-Forwarded-Host": "local.httpbin.org"
},
"origin": "20.10.151.128",
"url": "http://local.httpbin.org/get"
}
3.6 Canary release
# Documentation used for this section
https://api7.ai/blog/traffic-split-in-apache-apisix-ingress-controller
3.6.1 The stable version
[root@k8s-master-01 canary]# cat 1-stable.yaml
apiVersion: v1
kind: Service
metadata:
name: myapp-stable-service
namespace: canary
spec:
ports:
- port: 80
targetPort: 80
name: http-port
selector:
app: myapp
version: stable
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: myapp-stable
namespace: canary
spec:
replicas: 1
selector:
matchLabels:
app: myapp
version: stable
template:
metadata:
labels:
app: myapp
version: stable
spec:
containers:
- image: registry.cn-hangzhou.aliyuncs.com/public-registry-fzh/myapp:v1
imagePullPolicy: IfNotPresent
name: myapp-stable
ports:
- name: http-port
containerPort: 80
env:
- name: APP_ENV
value: stable
3.6.2 The canary version
[root@k8s-master-01 canary]# cat 2-canary.yaml
apiVersion: v1
kind: Service
metadata:
name: myapp-canary-service
namespace: canary
spec:
ports:
- port: 80
targetPort: 80
name: http-port
selector:
app: myapp
version: canary
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: myapp-canary
namespace: canary
spec:
replicas: 1
selector:
matchLabels:
app: myapp
version: canary
template:
metadata:
labels:
app: myapp
version: canary
spec:
containers:
- image: registry.cn-hangzhou.aliyuncs.com/public-registry-fzh/myapp:v2
imagePullPolicy: IfNotPresent
name: myapp-canary
ports:
- name: http-port
containerPort: 80
env:
- name: APP_ENV
value: canary
3.6.3 Weight-based canary release
[root@k8s-master-01 canary]# cat 3-apisixroute-weight.yaml
apiVersion: apisix.apache.org/v2beta3
kind: ApisixRoute
metadata:
name: myapp-canary-apisixroute
namespace: canary
spec:
http:
- name: myapp-canary-rule
match:
hosts:
- myapp.fengzhihai.cn
paths:
- /
backends:
- serviceName: myapp-stable-service
servicePort: 80
weight: 10
- serviceName: myapp-canary-service
servicePort: 80
weight: 5
Test:
With the weights above (10 for stable, 5 for canary), traffic is split between stable and canary at roughly 2:1.
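One way to observe the split (a sketch reusing the NodePort and host from this environment; it assumes the myapp image reports its version in the response body):
for i in $(seq 1 30); do
  curl -s http://192.168.13.12:30761/ -H "Host: myapp.fengzhihai.cn"
done | sort | uniq -c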
3.6.4 Priority-based canary release
Traffic is routed preferentially to the backend whose rule has the higher priority.
[root@k8s-master-01 canary]# kubectl apply -f priority.yaml
apisixroute.apisix.apache.org/myapp-canary-apisixroute2 created
[root@k8s-master-01 canary]# cat 4-ap.yaml
apiVersion: apisix.apache.org/v2beta3
kind: ApisixRoute
metadata:
name: myapp-canary-apisixroute2
namespace: canary
spec:
http:
- name: myapp-stable-rule2
priority: 1
match:
hosts:
- myapp.fengzhihai.cn
paths:
- /
backends:
- serviceName: myapp-stable-service
servicePort: 80
- name: myapp-canary-rule2
priority: 2
match:
hosts:
- myapp.fengzhihai.cn
paths:
- /
backends:
- serviceName: myapp-canary-service
servicePort: 80
Test:
Traffic goes to myapp-canary-service first, because both rules match the same host and path and the canary rule has the higher priority value (2 vs 1).
3.6.5 Canary release based on request parameters
[root@k8s-master-01 canary]# cat vars.yaml
apiVersion: apisix.apache.org/v2beta3
kind: ApisixRoute
metadata:
name: myapp-canary-apisixroute3
namespace: canary
spec:
http:
- name: myapp-stable-rule3
priority: 1
match:
hosts:
- myapp.fengzhihai.cn
paths:
- /
backends:
- serviceName: myapp-stable-service
servicePort: 80
- name: myapp-canary-rule3
priority: 2
match:
hosts:
- myapp.fengzhihai.cn
paths:
- /
exprs:
- subject:
scope: Query
name: id
op: In
set:
- "3"
- "13"
- "23"
- "33"
backends:
- serviceName: myapp-canary-service
servicePort: 80
Test:
Requests matching the condition (query parameter id in 3, 13, 23, 33) are routed to myapp-canary-service; all other requests go to myapp-stable-service.
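For example (a sketch, again using the NodePort and host from this environment):
# id=3 is in the configured set -> routed to myapp-canary-service
curl -s "http://192.168.13.12:30761/?id=3" -H "Host: myapp.fengzhihai.cn"
# id=4 is not in the set -> routed to myapp-stable-service
curl -s "http://192.168.13.12:30761/?id=4" -H "Host: myapp.fengzhihai.cn"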
3.6.6 Header-based canary release
[root@k8s-master-01 canary]# cat canary-header.yaml
apiVersion: apisix.apache.org/v2beta3
kind: ApisixRoute
metadata:
name: myapp-canary-apisixroute3
namespace: canary
spec:
http:
- name: myapp-stable-rule3
priority: 1
match:
hosts:
- myapp.fengzhihai.cn
paths:
- /
backends:
- serviceName: myapp-stable-service
servicePort: 80
- name: myapp-canary-rule3
priority: 2
match:
hosts:
- myapp.fengzhihai.cn
paths:
- /
exprs:
- subject:
scope: Header
name: canary
op: RegexMatch
value: ".*myapp.*"
backends:
- serviceName: myapp-canary-service
servicePort: 80
Test:
Requests whose canary header matches the regex ".*myapp.*" are routed to myapp-canary-service; all other requests go to myapp-stable-service.
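For example (a sketch; any header value matching the regex works):
# the canary header matches ".*myapp.*" -> routed to myapp-canary-service
curl -s http://192.168.13.12:30761/ -H "Host: myapp.fengzhihai.cn" -H "canary: test-myapp-user"
# no canary header -> routed to myapp-stable-service
curl -s http://192.168.13.12:30761/ -H "Host: myapp.fengzhihai.cn"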