Kubernetes Ingress

Kubernetes offers three ways to expose a Service: LoadBalancer Services, NodePort Services, and Ingress. The official documentation defines Ingress as a collection of rules that manage external access to Services inside the cluster; in plain terms, it defines the rules by which incoming requests are routed to the corresponding in-cluster Services, thereby exposing them. An Ingress can give in-cluster Services externally reachable URLs, load-balance traffic, terminate SSL/TLS, and provide name-based virtual hosting. Compared with the Traefik ingress controller, the Nginx ingress controller is more feature-rich and generally performs better.
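For comparison, a NodePort Service simply opens a fixed port on every node and forwards it to the pods behind the Service. The sketch below uses hypothetical names (demo-web, container port 8080, nodePort 30080) purely as an illustration:

apiVersion: v1
kind: Service
metadata:
  name: demo-web            # hypothetical Service name
spec:
  type: NodePort
  selector:
    app: demo-web           # hypothetical pod label
  ports:
    - port: 80              # port inside the cluster
      targetPort: 8080      # container port
      nodePort: 30080       # must fall inside the apiserver node-port range

With plain NodePort every HTTP application consumes its own high port, whereas an Ingress lets many applications share ports 80/443 behind a single entry point, which is what the rest of this section sets up.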

Deploy Nginx Ingress

cat ingress-nginx.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses/status
    verbs:
      - update

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      # wait up to five minutes for the drain of connections
      terminationGracePeriodSeconds: 300
      serviceAccountName: nginx-ingress-serviceaccount
      nodeSelector:
        kubernetes.io/os: linux
      containers:
        - name: nginx-ingress-controller
          image: ges.harbor.in/tools/nginx-ingress-controller:0.30.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 101
            runAsUser: 101
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown

---
apiVersion: v1
kind: LimitRange
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  limits:
  - min:
      memory: 90Mi
      cpu: 100m
    type: Container
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      # add a fixed nodePort
      nodePort: 80
      protocol: TCP
    - name: https
      port: 443
      targetPort: 443
      # add a fixed nodePort
      nodePort: 443
      protocol: TCP
  selector:
    app.kubernetes.io/name: ingress-nginx

Before applying this Service, note that the apiserver's default NodePort range is 30000-32767, while the nodePorts we want (80 and 443) fall outside that range, so the apiserver's NodePort range has to be widened. Edit the following file:

vim /etc/kubernetes/manifests/kube-apiserver.yaml

Add the following flag under the command section:

- --service-node-port-range=0-65535
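The kubelet recreates the kube-apiserver static pod automatically once the manifest changes. Assuming the manifest above was saved as ingress-nginx.yaml (as shown by the cat command), applying it and checking the controller looks like:

kubectl apply -f ingress-nginx.yaml
kubectl get pods -n ingress-nginx -o wide
kubectl get svc -n ingress-nginx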

Test Ingress Nginx

cat ingress.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  namespace: es-backend-pre
  name: nginx-web
  annotations:
    # Select the ingress controller class
    kubernetes.io/ingress.class: "nginx"
    # Allow regular expressions in the rules' path
    nginx.ingress.kubernetes.io/use-regex: "true"
    # Upstream connect timeout (default 5s)
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "600"
    # Timeout for sending data to the backend (default 60s)
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
    # Timeout waiting for the backend response (default 60s)
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
    # Maximum allowed client request body size (upload limit)
    nginx.ingress.kubernetes.io/proxy-body-size: "10m"
    # URL rewrite
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  # Routing rules
  rules:
  # Host name; must be a domain name, change it to your own
  - host: k8s.test.com
    http:
      paths:
      - path: /es/api
        backend:
          # Name of the backend Service
          serviceName: es-platform-portal-api-svc
          # Port of the backend Service
          servicePort: 8080

Add a resolution entry to the hosts file:

<your node IP>  k8s.test.com
[root@pre-k8s-app-master ~]# kubectl get ingress -A
NAMESPACE        NAME        CLASS    HOSTS          ADDRESS        PORTS   AGE
es-backend-pre   nginx-web   <none>   k8s.test.com   10.107.51.25   80      2d18h
[root@pre-k8s-app-master ~]# curl k8s.test.com/es/api
{"timestamp":1652664604167,"status":404,"error":"Not Found","message":"No message available","path":"/"} 訪問API成功 由于rewrite重寫規(guī)則導致not found

Kubernetes Autoscaling

HPA is short for Horizontal Pod Autoscaler (abbreviated as HPA below). An HPA automatically scales the number of pods in a replication controller, deployment or replica set based on CPU utilization (besides CPU utilization, it can also scale on other application-provided custom metrics). Horizontal pod autoscaling does not apply to objects that cannot be scaled, such as DaemonSets. The HPA is implemented as a Kubernetes API resource and a controller: the resource determines the controller's behaviour, and the controller periodically obtains the average CPU utilization, compares it with the target value, and adjusts the replica count of the replication controller or deployment accordingly.

The HPA is implemented as a control loop whose period is set by the controller manager's --horizontal-pod-autoscaler-sync-period flag (default 15 seconds). In each period, the controller manager queries resource utilization against the metrics specified in each HorizontalPodAutoscaler definition. It can obtain metrics from the resource metrics API (pod resource metrics) and from the custom metrics API (custom metrics).
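Once metrics-server (deployed later in this section) is running, the resource metrics API that the HPA controller reads can also be queried by hand, which is a quick way to confirm the data the controller will see (namespace taken from this document):

kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/namespaces/es-backend-pre/pods"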

1) For per-pod resource metrics (such as CPU), the controller fetches the metrics of every pod targeted by the HorizontalPodAutoscaler from the resource metrics API. If a target utilization is set, the controller takes the resource usage of the containers in each pod and computes the utilization as a percentage of the resource requests; if a raw target value is used, the raw metric values are used directly (no percentage is computed). From the average utilization or raw values the controller derives a scaling ratio and, from that, the desired replica count (the exact formula is sketched right after this list). Note that if some containers of a pod do not report the resource metric, that pod's CPU utilization is not used by the controller.

2) For per-pod custom metrics, the controller works in the same way as for resource metrics, except that it only uses raw values rather than utilization percentages.

3) For object metrics and external metrics (each metric describes a single object), the metric is compared directly with the target value to produce the scaling ratio mentioned above. In the autoscaling/v2beta2 API this value can also be divided by the number of pods before the comparison. In general, the controller fetches metric data from a set of aggregated APIs (metrics.k8s.io, custom.metrics.k8s.io and external.metrics.k8s.io). The metrics.k8s.io API is normally served by metrics-server, which has to be deployed separately.
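The scaling ratio in 1) boils down to the formula documented for the HPA algorithm; the numbers below are only an illustrative example, not taken from this cluster:

desiredReplicas = ceil( currentReplicas * currentMetricValue / targetMetricValue )

For instance, with 2 current replicas, an average CPU utilization of 90% of requests and a target of 50%, the controller computes ceil(2 * 90 / 50) = ceil(3.6) = 4 desired replicas, clamped to the HPA's minReplicas/maxReplicas bounds.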

Metrics Server

[root@pre-k8s-app-master ~]# cat metrics.yaml 
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metrics-server:system:auth-delegator
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: metrics-server-auth-reader
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:metrics-server
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - "extensions"
  resources:
  - deployments
  verbs:
  - get
  - list
  - update
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:metrics-server
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: metrics-server-config
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  NannyConfiguration: |-
    apiVersion: nannyconfig/v1alpha1
    kind: NannyConfiguration
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    version: v0.3.6
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
      version: v0.3.6
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
        version: v0.3.6
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      containers:
      - name: metrics-server
        image: ges.harbor.in/tools/metrics-server-amd64:v0.3.6
        command:
        - /metrics-server
        - --metric-resolution=30s
        - --kubelet-preferred-address-types=InternalIP
        - --kubelet-insecure-tls
        ports:
        - containerPort: 443
          name: https
          protocol: TCP
      - name: metrics-server-nanny
        image: ges.harbor.in/tools/addon-resizer:1.8.4
        resources:
          limits:
            cpu: 100m
            memory: 300Mi
          requests:
            cpu: 5m
            memory: 50Mi
        env:
          - name: MY_POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: MY_POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
        volumeMounts:
        - name: metrics-server-config-volume
          mountPath: /etc/config
        command:
          - /pod_nanny
          - --config-dir=/etc/config
          - --cpu=300m
          - --extra-cpu=20m
          - --memory=200Mi
          - --extra-memory=10Mi
          - --threshold=5
          - --deployment=metrics-server
          - --container=metrics-server
          - --poll-period=300000
          - --estimator=exponential
          - --minClusterSize=2
      volumes:
        - name: metrics-server-config-volume
          configMap:
            name: metrics-server-config
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
---
apiVersion: v1
kind: Service
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "Metrics-server"
spec:
  selector:
    k8s-app: metrics-server
  ports:
  - port: 443
    protocol: TCP
    targetPort: https
---
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  service:
    name: metrics-server
    namespace: kube-system
  group: metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100
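
Assuming the manifest above is saved as metrics.yaml (as in the cat command), it can be applied and the rollout checked before testing:

kubectl apply -f metrics.yaml
kubectl -n kube-system rollout status deployment/metrics-server
kubectl get apiservice v1beta1.metrics.k8s.io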

Verify that Metrics Server is working

[root@pre-k8s-app-master ~]# kubectl top node
NAME                 CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
pre-k8s-app-master   336m         8%     3039Mi          39%       
pre-k8s-app-nd01     545m         6%     8739Mi          55%       
pre-k8s-app-nd02     521m         6%     9646Mi          61%       
pre-k8s-app-nd03     788m         9%     7980Mi          50%       
[root@pre-k8s-app-master ~]# kubectl top pod -A
NAMESPACE                 NAME                                          CPU(cores)   MEMORY(bytes)   
cattle-system             cattle-cluster-agent-66dc65c4fd-rzjmn         4m           65Mi            
cattle-system             cattle-node-agent-7f26j                       1m           41Mi            
cattle-system             cattle-node-agent-clnpr                       1m           32Mi            
cattle-system             cattle-node-agent-gh8f5                       1m           32Mi            
cattle-system             cattle-node-agent-vzql6                       1m           28Mi            
cephfs                    cephfs-provisioner-5cdfc6d89f-v6fbt           3m           10Mi            
es-backend-pre            es-dashboard-portal-api-5486dbfdd9-gmczg      26m          1581Mi          
es-backend-pre            es-education-portal-api-584f48c8c7-mrcdz      19m          513Mi           
es-backend-pre            es-gis-portal-api-6f6ccf4d49-54fcn            31m          579Mi           
es-backend-pre            es-hse-portal-api-79f98fddd5-hcnbj            42m          754Mi           
es-backend-pre            es-hse-svc-ds-alarm-67f9b566df-dmgt6          124m         667Mi           
es-backend-pre            es-hse-svc-opc-alarm-6bbdd8fdd9-zrtbr         76m          543Mi           
es-backend-pre            es-license-api-5fd9bbcb48-dk5r6               13m          456Mi           
es-backend-pre            es-platform-iot-portal-api-79547569bd-7qrfz   40m          603Mi           
es-backend-pre            es-platform-portal-api-8477dff74f-zlh89       65m          775Mi           
es-backend-pre            es-sso-portal-api-545b986d97-5tnvr            15m          492Mi           
es-backend-pre            es-svc-gis-alarm-c6454fd69-zqbv2              329m         1085Mi          
es-backend-pre            es-svc-gis-pos-68f758bf4f-qb9rh               103m         575Mi           
es-backend-pre            es-svc-iot-data-55d4d64b94-tsv6x              58m          580Mi           
es-backend-pre            es-svc-report-generate-6f747c5589-cks9q       50m          488Mi           
es-backend-pre            es-svc-sys-msg-broker-66d69dc445-kl8pk        80m          505Mi           
es-backend-pre            es-svc-system-tool-79dc8d64c7-tnbwx           101m         888Mi           
es-backend-pre            es-svc-workflow-daemon-6db955cf8f-b8qrc       16m          580Mi           
es-backend-pre            es-svc-workorder-trigger-fd98ccc6f-2lnd9      75m          568Mi           
es-backend-pre            es-terminal-portal-api-5cf6bd9f78-fcjk4       24m          591Mi           
es-frontend-pre           es-dashboard-portal-5c7b85588-v7bd2           0m           4Mi             
es-frontend-pre           es-education-web-portal-7d5cb97c56-pg7tb      0m           6Mi             
es-frontend-pre           es-gis-web-portal-b54475b6d-8mjfz             0m           5Mi             
es-frontend-pre           es-hse-web-portal-6cbd5ff67b-svcg6            0m           6Mi             
es-frontend-pre           es-main-web-portal-799ff4d7b8-lplsp           0m           6Mi             
es-frontend-pre           es-platform-web-portal-78989688c9-v72z6       0m           7Mi             
es-frontend-pre           es-sso-web-portal-7b4d67d54b-v4ttd            0m           5Mi             
es-frontend-pre           es-terminal-web-portal-6cd99d6b49-wn5xf       0m           4Mi             
ingress-nginx             nginx-ingress-controller-2hzg8                3m           135Mi           
ingress-nginx             nginx-ingress-controller-6vpcl                3m           88Mi            
ingress-nginx             nginx-ingress-controller-lh6qs                7m           169Mi           
ingress-nginx             nginx-ingress-controller-r2wf7                4m           133Mi           
kube-system               calico-kube-controllers-7f4f5bf95d-bldcp      3m           40Mi            
kube-system               calico-node-jgrsr                             43m          154Mi           
kube-system               calico-node-ps2bv                             49m          155Mi           
kube-system               calico-node-s4hx9                             43m          131Mi           
kube-system               calico-node-wsd74                             42m          153Mi           
kube-system               coredns-f9fd979d6-jhtrv                       3m           24Mi            
kube-system               coredns-f9fd979d6-ndvjv                       3m           22Mi            
kube-system               etcd-pre-k8s-app-master                       29m          307Mi           
kube-system               kube-apiserver-pre-k8s-app-master             112m         455Mi           
kube-system               kube-controller-manager-pre-k8s-app-master    16m          68Mi            
kube-system               kube-proxy-4lxfn                              8m           43Mi            
kube-system               kube-proxy-lntrq                              1m           43Mi            
kube-system               kube-proxy-lr8vc                              9m           28Mi            
kube-system               kube-proxy-nmdvv                              1m           45Mi            
kube-system               kube-scheduler-pre-k8s-app-master             4m           33Mi            
kube-system               kube-sealyun-lvscare-pre-k8s-app-nd01         3m           19Mi            
kube-system               kube-sealyun-lvscare-pre-k8s-app-nd02         3m           19Mi            
kube-system               kube-sealyun-lvscare-pre-k8s-app-nd03         1m           18Mi            
kube-system               metrics-server-76f5687466-fdvbs               2m           41Mi            
prometheus-exporter-app   kube-state-metrics-6d4c97fdd9-f4sbk           2m           38Mi            
prometheus-exporter-app   node-exporter-6x9m7                           4m           23Mi            
prometheus-exporter-app   node-exporter-8p2jw                           0m           13Mi            
prometheus-exporter-app   node-exporter-tm2bz                           3m           25Mi            
prometheus-exporter-app   node-exporter-wqlzc                           6m           24Mi            
prometheus-exporter-app   prometheus-pre-app-869d859896-bj2zz           93m          780Mi           
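kubectl top also supports sorting (on reasonably recent kubectl versions), which is convenient when looking for the pods that will drive HPA decisions:

kubectl top pod -A --sort-by=cpu
kubectl top pod -n es-backend-pre --sort-by=memory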

Create an HPA

kubectl autoscale deployment es-platform-portal-api -n es-backend-pre --cpu-percent=50 --min=1 --max=10
[root@pre-k8s-app-master ~]# kubectl describe hpa es-platform-portal-api -n es-backend-pre
Name:                                                  es-platform-portal-api
Namespace:                                             es-backend-pre
Labels:                                                <none>
Annotations:                                           <none>
CreationTimestamp:                                     Mon, 16 May 2022 09:44:12 +0800
Reference:                                             Deployment/es-platform-portal-api
Metrics:                                               ( current / target )
  resource cpu on pods  (as a percentage of request):  156% (78m) / 50%
Min replicas:                                          1
Max replicas:                                          10
Deployment pods:                                       3 current / 3 desired
Conditions:
  Type            Status  Reason              Message
  ----            ------  ------              -------
  AbleToScale     True    ReadyForNewScale    recommended size matches current size
  ScalingActive   True    ValidMetricFound    the HPA was able to successfully calculate a replica count from cpu resource utilization (percentage of request)
  ScalingLimited  False   DesiredWithinRange  the desired count is within the acceptable range
Events:
  Type    Reason             Age   From                       Message
  ----    ------             ----  ----                       -------
  Normal  SuccessfulRescale  24s   horizontal-pod-autoscaler  New size: 3; reason: cpu resource utilization (percentage of request) above target
[root@pre-k8s-app-master ~]# kubectl get hpa -A
NAMESPACE        NAME                     REFERENCE                           TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
es-backend-pre   es-platform-portal-api   Deployment/es-platform-portal-api   156%/50%   1         10        3          45s
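To watch the HPA react, you can generate artificial load against the backend Service; a common pattern is a throwaway busybox pod hammering the ClusterIP in a loop. The sketch below reuses the Service name and port from the Ingress above and is only meant for a quick test:

kubectl run load-gen -n es-backend-pre --rm -it --image=busybox -- \
  /bin/sh -c "while true; do wget -q -O- http://es-platform-portal-api-svc:8080/ > /dev/null; done"

Stop it with Ctrl+C; with --rm the pod is deleted automatically.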

You can see that the replica count of es-platform-portal-api has been scaled to 3.
If you instead see an error such as failed to get cpu utilization: missing request for cpu, it means the Pod spec declares no resource requests, so the HPA cannot compute a CPU utilization percentage. For the HPA to take effect, the corresponding Pod template must declare requests; update the deployment YAML accordingly:

        resources:
          requests:
            cpu: 0.01
            memory: 25Mi
          limits:
            cpu: 0.05
            memory: 60Mi
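Equivalently, the requests and limits can be applied without editing the manifest by hand; kubectl set resources does the same thing (values mirror the snippet above):

kubectl set resources deployment es-platform-portal-api -n es-backend-pre \
  --requests=cpu=10m,memory=25Mi --limits=cpu=50m,memory=60Mi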
cat hpa.yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: es-platform-portal-api
  namespace: es-backend-pre
spec:
  maxReplicas: 10
  minReplicas: 1
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: es-platform-portal-api
  metrics:
  - type: Resource
    resource:
      name: memory
      targetAverageUtilization: 60
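
This manifest targets average memory utilization using the autoscaling/v2beta1 schema. On clusters where autoscaling/v2beta2 is available (mentioned earlier), the equivalent metric block is expressed slightly differently; a sketch, not verified against this cluster:

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: es-platform-portal-api
  namespace: es-backend-pre
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: es-platform-portal-api
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 60

Apply with kubectl apply -f hpa.yaml and watch the result with kubectl get hpa -n es-backend-pre.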