1. Ingress Concepts
Kubernetes exposes services mainly through NodePort: a port on the host is bound, and requests are forwarded and load-balanced across Pods. The drawbacks of this approach are:
There may be many Services; if each one binds its own node port, the host must open a sprawling set of external ports for service calls. This is messy to manage and makes it impossible to apply the firewall rules many companies require.
The ideal approach is an external load balancer bound to a fixed port, say 80, that forwards to the backing Service IPs based on the domain or service name. Nginx solves this need well, but a question remains: when a new service is added, how do we modify Nginx's configuration and reload it? Kubernetes' answer is Ingress, which comprises two major components: the Ingress Controller and the Ingress.
The Ingress solves the mapping between domain names and services as new services are added; it is essentially an Ingress object, created and updated via yaml, through which the mapping is loaded.
The Ingress Controller turns these Ingress changes into a snippet of Nginx configuration, writes that configuration into the Nginx Pod via the Kubernetes API, and then reloads it. (Note: what gets written into nginx.conf is not the Service's address but the addresses of the Pods backing the Service, which avoids an extra layer of load-balanced forwarding at the Service.)
[Figure: request flow through the load balancer, Ingress Controller, Ingress, and the Kubernetes API]
The figure above makes the flow clear: incoming requests are still intercepted first by a load balancer such as nginx. The Ingress Controller learns from the Ingress which domain maps to which service, queries the Kubernetes API for details such as the service's address, combines all of this into a configuration file written to the load balancer in real time, and the load balancer reloads those rules. That achieves service discovery, i.e. dynamic mapping.
With that understood, it is also clear why I like to deploy the load balancer as a DaemonSet. Since requests are always intercepted by the load balancer first, deploying one on every node and listening on port 80 via hostPort solves the problem, present in other deployment modes, of not knowing which node the load balancer lands on; port 80 on every node then resolves requests correctly. Putting an nginx in front adds yet another layer of load balancing.
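The DaemonSet approach just described can be sketched as a manifest. This is only an illustrative sketch, not an official manifest: the names, namespace, and image are assumptions mirroring the Deployment shown later in this article, and hostPort is what binds port 80 on every node.

```yaml
# Sketch: run the ingress controller on every node, binding host port 80.
# Names/namespace/image are assumptions taken from the manifests later in
# this article; adapt them to your cluster.
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  template:
    metadata:
      labels:
        app: ingress-nginx
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.14.0
          args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
              hostPort: 80   # every node now answers on port 80
```

A DaemonSet needs no replica count: the scheduler places exactly one controller Pod per node, which is what makes "access any node on port 80" work.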
2. Deploying and Configuring Ingress
Note: the deployment method below is out of date. Newer releases (0.9-beta1 and later) no longer require deploying default-backend; for the current procedure, see: https://kubernetes.github.io/ingress-nginx/deploy/
2.1 Deployment Files: Introduction and Preparation
Step 1: Locate the configuration files
https://github.com/kubernetes/ingress-nginx/tree/nginx-0.20.0/deploy
Step 2: Download the deployment file
Two options are provided:
- download the latest yaml (the default)
- download the yaml for a specific version

Download the latest yaml:
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml
Download the yaml for a specific version:
For example, to download the yaml for ingress-nginx 0.17.0:
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.17.0/deploy/mandatory.yaml
Deployment file overview
- namespace.yaml
Creates a dedicated namespace, ingress-nginx.
- configmap.yaml
A ConfigMap stores general configuration variables, much like a configuration file; it lets users consolidate the environment variables used by different modules of a distributed system into a single managed object. It differs from a configuration file in that it lives in the cluster's "environment" and supports all of the standard operations of the K8S cluster.
From a data perspective, a ConfigMap is simply a set of key-value pairs that store information accessed by Pods or other resource objects (such as an RC). The design is much like that of a Secret; the main difference is that a ConfigMap normally stores plain text rather than sensitive information.
A ConfigMap can hold environment-variable values as well as configuration files.
When a Pod is created it binds to the ConfigMap, and the application inside the Pod can reference the ConfigMap's configuration directly; in effect, the ConfigMap packages configuration for the application and its runtime environment.
Pods typically use ConfigMaps to set environment-variable values, set command-line arguments, and create configuration files.
- default-backend.yaml
If a requested domain does not exist, traffic is forwarded by default to the default-http-backend Service, which simply returns 404.
- rbac.yaml
Handles RBAC authorization for Ingress; it creates the ServiceAccount, ClusterRole, Role, RoleBinding, and ClusterRoleBinding that Ingress uses.
- with-rbac.yaml
The core of the deployment; it creates the ingress-controller. As mentioned earlier, the ingress-controller's job is to translate newly added Ingress resources into Nginx configuration.
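The three typical ConfigMap usages (environment variables, command-line arguments, configuration files) can be illustrated with a small sketch. The names here (app-config, configmap-demo, the keys) are hypothetical, invented for this example:

```yaml
# Sketch only: a hypothetical ConfigMap consumed both as an env var
# and as a mounted configuration file.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config               # hypothetical name
data:
  LOG_LEVEL: "info"              # consumed as an environment variable
  app.properties: |              # consumed as a mounted file
    cache.size=128
---
apiVersion: v1
kind: Pod
metadata:
  name: configmap-demo           # hypothetical name
spec:
  containers:
    - name: demo
      image: busybox
      command: ["sh", "-c", "echo $LOG_LEVEL; cat /etc/config/app.properties"]
      env:
        - name: LOG_LEVEL
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: LOG_LEVEL
      volumeMounts:
        - name: config
          mountPath: /etc/config
  volumes:
    - name: config
      configMap:
        name: app-config
```

The same pattern is what ingress-nginx's own nginx-configuration ConfigMap relies on: tuning keys placed in the ConfigMap reach the controller without rebuilding the image.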
Here, we first give a brief introduction to some of the important files.
default-backend.yaml
The role of default-backend is to act as a fallback: if the requested domain does not exist, traffic is forwarded to the default-http-backend Service, which returns 404:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app: default-http-backend
  namespace: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: default-http-backend
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
        - name: default-http-backend
          # Any image is permissible as long as:
          # 1. It serves a 404 page at /
          # 2. It serves 200 on a /healthz endpoint
          image: gcr.io/google_containers/defaultbackend:1.4
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 30
            timeoutSeconds: 5
          ports:
            - containerPort: 8080
          resources:
            limits:
              cpu: 10m
              memory: 20Mi
            requests:
              cpu: 10m
              memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: ingress-nginx
  labels:
    app: default-http-backend
spec:
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app: default-http-backend
rbac.yaml
rbac.yaml handles RBAC authorization for Ingress: it creates the ServiceAccount, ClusterRole, Role, RoleBinding, and ClusterRoleBinding used by Ingress. These concepts were briefly introduced in the earlier article "Building a Kubernetes Cluster from Scratch (Part 4: Deploying the K8S Dashboard)".
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
with-rbac.yaml
with-rbac.yaml is the core of the deployment; it creates the ingress-controller. As mentioned earlier, the ingress-controller's job is to translate newly added Ingress resources into Nginx configuration.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
      annotations:
        prometheus.io/port: '10254'
        prometheus.io/scrape: 'true'
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.14.0
          args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --annotations-prefix=nginx.ingress.kubernetes.io
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          securityContext:
            runAsNonRoot: false
As shown above, nginx-ingress-controller is launched with arguments referencing the default-backend-service and the configmap created earlier.
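Two of those arguments, --tcp-services-configmap and --udp-services-configmap, deserve a note: Ingress rules describe HTTP routing only, so plain TCP/UDP services are exposed through these ConfigMaps instead, with entries of the form "external-port: namespace/service:service-port". A hedged sketch, where example-tcp-service is a hypothetical Service:

```yaml
# Sketch: expose a non-HTTP service through the controller.
# example-tcp-service is a hypothetical Service name.
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # forward controller port 9000 to default/example-tcp-service port 8080
  "9000": "default/example-tcp-service:8080"
```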
2.2 Deploying Ingress
Step 1: Prepare the images. Check mandatory.yaml to see which images are required.
Mirrored copies are already available and can be pulled directly:
| Image | Version | Mirror |
| --- | --- | --- |
| k8s.gcr.io/defaultbackend-amd64 | 1.5 | registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/defaultbackend-amd64 |
| quay.io/kubernetes-ingress-controller/nginx-ingress-controller | 0.20.0 | registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/nginx-ingress-controller |
For example:
docker pull registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/nginx-ingress-controller:0.20.0
Push the images to your own private registry for use in the steps below (or simply retag them).
Step 2: Update the image addresses in mandatory.yaml
Replace them with your own image addresses (or retag the images):
Replace the defaultbackend-amd64 image address:
sed -i 's#k8s.gcr.io/defaultbackend-amd64#registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/defaultbackend-amd64#g' mandatory.yaml
Replace the nginx-ingress-controller image address:
sed -i 's#quay.io/kubernetes-ingress-controller/nginx-ingress-controller#registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/nginx-ingress-controller#g' mandatory.yaml
Step 3: Deploy nginx-ingress-controller
kubectl apply -f mandatory.yaml
Step 4: Check the status of the ingress-nginx components
Check pod status:
kubectl get pods -n ingress-nginx -o wide

[root@master ingress-nginx]# kubectl get pods -n ingress-nginx -o wide
NAME                                       READY   STATUS    RESTARTS   AGE     IP              NODE     NOMINATED NODE
default-http-backend-7f594549df-nzthj      1/1     Running   0          3m59s   192.168.1.90    slave1   <none>
nginx-ingress-controller-9fc7f4c5f-dr722   1/1     Running   0          3m59s   192.168.2.110   slave2   <none>
Check service status:

[root@master ingress-nginx]# kubectl get service -n ingress-nginx
NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
default-http-backend   ClusterIP   10.104.146.218   <none>        80/TCP    5m37s
Test whether default-http-backend works: a default-http-backend pod is installed automatically; it is the fallback HTTP backend service and returns a 404 for any request that matches no rule.
三夕冲、創(chuàng)建自定義Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kibana-ingress
  namespace: default
spec:
  rules:
    - host: myk8s.com
      http:
        paths:
          - path: /
            backend:
              serviceName: kibana
              servicePort: 5601
Where:
- The host under rules must be a domain name, not an IP; it is the domain of the host where the Ingress-controller Pod runs, i.e. the domain that resolves to the Ingress-controller's IP.
- The path under paths is the mapped path. Mapping / means that a request to myk8s.com is forwarded to the Kibana service on port 5601.
After the Ingress is created, check it:
[root@k8s-node1 ingress]# kubectl get ingress -o wide
NAME             HOSTS       ADDRESS   PORTS   AGE
kibana-ingress   myk8s.com             80      6s
Next, run kubectl exec nginx-ingress-controller-5b79cbb5c6-2zr7f -it cat /etc/nginx/nginx.conf -n ingress-nginx to inspect the generated nginx configuration. It is lengthy, so only the relevant server block is shown:
## start server myk8s.com
server {
    server_name myk8s.com ;
    listen 80;
    listen [::]:80;
    set $proxy_upstream_name "-";

    location /kibana {
        log_by_lua_block {
        }
        port_in_redirect off;
        set $proxy_upstream_name "";
        set $namespace    "kube-system";
        set $ingress_name "dashboard-ingress";
        set $service_name "kibana";
        client_max_body_size "1m";
        proxy_set_header Host $best_http_host;
        # Pass the extracted client certificate to the backend
        # Allow websocket connections
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header X-Real-IP $the_real_ip;
        proxy_set_header X-Forwarded-For $the_real_ip;
        proxy_set_header X-Forwarded-Host $best_http_host;
        proxy_set_header X-Forwarded-Port $pass_port;
        proxy_set_header X-Forwarded-Proto $pass_access_scheme;
        proxy_set_header X-Original-URI $request_uri;
        proxy_set_header X-Scheme $pass_access_scheme;
        # Pass the original X-Forwarded-For
        proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;
        # mitigate HTTPoxy Vulnerability
        # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
        proxy_set_header Proxy "";
        # Custom headers to proxied server
        proxy_connect_timeout 5s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;
        proxy_buffering "off";
        proxy_buffer_size "4k";
        proxy_buffers 4 "4k";
        proxy_request_buffering "on";
        proxy_http_version 1.1;
        proxy_cookie_domain off;
        proxy_cookie_path off;
        # In case of errors try the next upstream server before returning an error
        proxy_next_upstream error timeout invalid_header http_502 http_503 http_504;
        proxy_next_upstream_tries 0;
        # No endpoints available for the request
        return 503;
    }

    location / {
        log_by_lua_block {
        }
        port_in_redirect off;
        set $proxy_upstream_name "";
        set $namespace    "default";
        set $ingress_name "kibana-ingress";
        set $service_name "kibana";
        client_max_body_size "1m";
        proxy_set_header Host $best_http_host;
        # Pass the extracted client certificate to the backend
        # Allow websocket connections
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header X-Real-IP $the_real_ip;
        proxy_set_header X-Forwarded-For $the_real_ip;
        proxy_set_header X-Forwarded-Host $best_http_host;
        proxy_set_header X-Forwarded-Port $pass_port;
        proxy_set_header X-Forwarded-Proto $pass_access_scheme;
        proxy_set_header X-Original-URI $request_uri;
        proxy_set_header X-Scheme $pass_access_scheme;
        # Pass the original X-Forwarded-For
        proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;
        # mitigate HTTPoxy Vulnerability
        # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
        proxy_set_header Proxy "";
        # Custom headers to proxied server
        proxy_connect_timeout 5s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;
        proxy_buffering "off";
        proxy_buffer_size "4k";
        proxy_buffers 4 "4k";
        proxy_request_buffering "on";
        proxy_http_version 1.1;
        proxy_cookie_domain off;
        proxy_cookie_path off;
        # In case of errors try the next upstream server before returning an error
        proxy_next_upstream error timeout invalid_header http_502 http_503 http_504;
        proxy_next_upstream_tries 0;
        # No endpoints available for the request
        return 503;
    }
}
## end server myk8s.com
Setting up hosts
First, on the host where the Ingress-controller Pod runs (here, k8s-node1), append the domain myk8s.com mentioned above to /etc/hosts:
192.168.56.101 myk8s.com
In addition, to access Kibana from a browser on your Windows machine, add the same line to C:\Windows\System32\drivers\etc\hosts. Once set, verify on both k8s-node1 and the physical machine.
Testing
On the Windows machine, visit myk8s.com in Chrome, which is equivalent to accessing 192.168.56.101:80.