Deploying a Nebula Graph Database Cluster on Kubernetes

What Is Kubernetes

Kubernetes is an open-source system for managing containerized applications across multiple hosts in a cloud environment. Its goal is to make deploying containerized applications simple and efficient, and it provides mechanisms for application deployment, planning, updating, and maintenance.

In its design, Kubernetes defines a series of building blocks whose purpose is to provide a mechanism for deploying, maintaining, and scaling applications. The components that make up Kubernetes are deliberately loosely coupled and extensible, so the system can serve many different workloads. This extensibility is largely provided by the Kubernetes API, which is used both by internal components acting as extensions and by containers running on Kubernetes.


Kubernetes consists of the following core components:

  • etcd stores the state of the entire cluster
  • apiserver is the single entry point for resource operations and provides authentication, authorization, access control, API registration, and discovery
  • controller manager maintains the cluster state, handling things such as fault detection, auto scaling, and rolling updates
  • scheduler handles resource scheduling, placing Pods onto appropriate machines according to the configured scheduling policies
  • kubelet maintains the container lifecycle and also manages Volumes and networking
  • Container runtime manages images and actually runs Pods and containers (CRI)
  • kube-proxy provides in-cluster service discovery and load balancing for Services
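
Most of these control-plane components run as Pods in the kube-system namespace (the exact Pod names depend on how the cluster was installed), so a quick way to see them on an existing cluster is:

[root@nebula ~]# kubectl get pods -n kube-system -o wide
[root@nebula ~]# kubectl get componentstatuses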

In addition to the core components, there are several recommended Add-ons:

  • kube-dns provides DNS service for the entire cluster
  • Ingress Controller provides an external entry point for services
  • Heapster provides resource monitoring
  • Dashboard provides a GUI
  • Federation provides clusters spanning availability zones
  • Fluentd-elasticsearch provides cluster log collection, storage, and querying

Kubernetes and Databases

Containerizing databases has become a hot topic recently. What can Kubernetes do for a database?

  • Fault recovery: if a database instance goes down, Kubernetes can restart it automatically or migrate the instance to another node in the cluster
  • Storage management: Kubernetes offers a rich set of storage options, so a database can use different kinds of storage systems transparently
  • Load balancing: a Kubernetes Service provides load balancing and spreads external traffic evenly across the database replicas
  • Horizontal scaling: Kubernetes can scale the number of replicas according to the current resource utilization of the database cluster, improving resource utilization (a small example follows this list)
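
As a minimal sketch of the last point: once a database is managed by a Deployment or StatefulSet, scaling it is a single kubectl command. The workload names below (mysql, mysql-proxy) are placeholders for illustration, not something created in this article:

# Manually scale a hypothetical "mysql" StatefulSet to 5 replicas
[root@nebula ~]# kubectl scale statefulset mysql --replicas=5
# Or let Kubernetes autoscale a stateless tier based on CPU utilization
[root@nebula ~]# kubectl autoscale deployment mysql-proxy --cpu-percent=80 --min=2 --max=5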

Many databases, such as MySQL, MongoDB, and TiDB, already run well on Kubernetes clusters.

Nebula Graph in Practice on Kubernetes

Nebula Graph is a distributed, open-source graph database. Its main components are graphd (the query engine), storaged (data storage), and metad (metadata). Running it on Kubernetes brings Nebula Graph the following benefits:

  • Kubernetes 能分?jǐn)?nebula graphd澎办,metad 和 storaged 不副本之間的負(fù)載嘲碱。graphd,metad 和 storaged 可以通過 Kubernetes 的域名服務(wù)自動(dòng)發(fā)現(xiàn)彼此局蚀。
  • 通過 storageclass悍汛,pvc 和 pv 可以屏蔽底層存儲(chǔ)細(xì)節(jié),無論使用本地卷還是云盤至会,Kubernetes 均可以屏蔽這些細(xì)節(jié)。
  • 通過 Kubernetes 可以在幾秒內(nèi)成功部署一套 Nebula 集群谱俭,Kubernetes 也可以無感知地實(shí)現(xiàn) Nebula 集群的升級(jí)奉件。
  • Nebula 集群通過 Kubernetes 可以做到自我恢復(fù),單體副本 crash昆著,Kubernetes 可以重新將其拉起县貌,無需運(yùn)維人員介入。
  • Kubernetes 可以根據(jù)當(dāng)前 Nebula 集群的資源利用率情況水平伸縮 Nebula 集群凑懂,從而提供集群的性能煤痕。
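
On the first point, discovery relies on the standard StatefulSet DNS pattern <pod>.<headless-service>.<namespace>.svc.cluster.local. The names used below (a headless service called nebula-metad in the default namespace) are an assumption for illustration; the real names are defined by the helm chart:

# Resolve a metad replica from a throwaway pod inside the cluster
[root@nebula ~]# kubectl run -it --rm dns-test --image=busybox --restart=Never -- \
    nslookup nebula-metad-0.nebula-metad.default.svc.cluster.local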

The following sections walk through the deployment in detail.

Cluster Deployment

Hardware and Software Requirements

The machines and operating system parameters used for the deployment in this article are listed below:

  • Operating system: CentOS-7.6.1810 x86_64
  • Virtual machine configuration
    • 4 CPU
    • 8 GB memory
    • 50 GB system disk
    • 50 GB data disk A
    • 50 GB data disk B
  • Kubernetes cluster version v1.16
  • Nebula version v1.0.0-rc3
  • Local PVs are used for data storage

Kubernetes Cluster Plan

The cluster inventory is as follows:

Server IP      Nebula instances               Role
192.168.0.1    (none)                         k8s-master
192.168.0.2    graphd, metad-0, storaged-0    k8s-slave
192.168.0.3    graphd, metad-1, storaged-1    k8s-slave
192.168.0.4    graphd, metad-2, storaged-2    k8s-slave

Components to Deploy on Kubernetes

  • Install Helm
  • Prepare the local disks and install the local volume plugin
  • Install the nebula cluster
  • Install the ingress-controller

Install Helm

Helm is the package manager for Kubernetes, much like yum on CentOS or apt-get on Ubuntu. Helm greatly lowers the barrier to deploying applications on Kubernetes. This article does not cover Helm in detail; interested readers can refer to 《Helm 入門指南》 (a Helm getting-started guide).

Download and install Helm

Run the following commands in a terminal to install Helm:

[root@nebula ~]# wget https://get.helm.sh/helm-v3.0.1-linux-amd64.tar.gz 
[root@nebula ~]# tar -zxvf helm-v3.0.1-linux-amd64.tar.gz
[root@nebula ~]# mv linux-amd64/helm /usr/bin/helm
[root@nebula ~]# chmod +x /usr/bin/helm

Check the Helm version

Run helm version to check the installed Helm version. For this article, the output is as follows:

version.BuildInfo{
    Version:"v3.0.1", 
    GitCommit:"7c22ef9ce89e0ebeb7125ba2ebf7d421f3e82ffa", 
    GitTreeState:"clean", 
    GoVersion:"go1.13.4"
}

Set Up the Local Disks

Apply the following configuration on every machine.

Create the mount directory

[root@nebula ~]# sudo mkdir -p /mnt/disks

Format the data disks

[root@nebula ~]# sudo mkfs.ext4 /dev/diskA 
[root@nebula ~]# sudo mkfs.ext4 /dev/diskB

Mount the data disks

[root@nebula ~]# DISKA_UUID=$(blkid -s UUID -o value /dev/diskA) 
[root@nebula ~]# DISKB_UUID=$(blkid -s UUID -o value /dev/diskB) 
[root@nebula ~]# sudo mkdir /mnt/disks/$DISKA_UUID
[root@nebula ~]# sudo mkdir /mnt/disks/$DISKB_UUID
[root@nebula ~]# sudo mount -t ext4 /dev/diskA /mnt/disks/$DISKA_UUID
[root@nebula ~]# sudo mount -t ext4 /dev/diskB /mnt/disks/$DISKB_UUID

[root@nebula ~]# echo UUID=`sudo blkid -s UUID -o value /dev/diskA` /mnt/disks/$DISKA_UUID ext4 defaults 0 2 | sudo tee -a /etc/fstab
[root@nebula ~]# echo UUID=`sudo blkid -s UUID -o value /dev/diskB` /mnt/disks/$DISKB_UUID ext4 defaults 0 2 | sudo tee -a /etc/fstab
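
Before moving on, it is worth verifying that both data disks are formatted, mounted under /mnt/disks, and recorded in /etc/fstab; the following commands only read state:

[root@nebula ~]# lsblk -f
[root@nebula ~]# df -h | grep /mnt/disks
[root@nebula ~]# grep /mnt/disks /etc/fstab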

Deploy the local volume plugin

[root@nebula ~]# curl -L -O https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/archive/v2.3.3.zip
[root@nebula ~]# unzip v2.3.3.zip

Modify v2.3.3/helm/provisioner/values.yaml:

#
# Common options.
#
common:
  #
  # Defines whether to generate service account and role bindings.
  #
  rbac: true
  #
  # Defines the namespace where provisioner runs
  #
  namespace: default
  #
  # Defines whether to create provisioner namespace
  #
  createNamespace: false
  #
  # Beta PV.NodeAffinity field is used by default. If running against pre-1.10
  # k8s version, the `useAlphaAPI` flag must be enabled in the configMap.
  #
  useAlphaAPI: false
  #
  # Indicates if PVs should be dependents of the owner Node.
  #
  setPVOwnerRef: false
  #
  # Provisioner clean volumes in process by default. If set to true, provisioner
  # will use Jobs to clean.
  #
  useJobForCleaning: false
  #
  # Provisioner name contains Node.UID by default. If set to true, the provisioner
  # name will only use Node.Name.
  #
  useNodeNameOnly: false
  #
  # Resync period in reflectors will be random between minResyncPeriod and
  # 2*minResyncPeriod. Default: 5m0s.
  #
  #minResyncPeriod: 5m0s
  #
  # Defines the name of configmap used by Provisioner
  #
  configMapName: "local-provisioner-config"
  #
  # Enables or disables Pod Security Policy creation and binding
  #
  podSecurityPolicy: false
#
# Configure storage classes.
#
classes:
- name: fast-disks # Defines the name of the storage class.
  # Path on the host where local volumes of this storage class are mounted
  # under.
  hostDir: /mnt/fast-disks
  # Optionally specify mount path of local volumes. By default, we use same
  # path as hostDir in container.
  # mountDir: /mnt/fast-disks
  # The volume mode of created PersistentVolume object. Default to Filesystem
  # if not specified.
  volumeMode: Filesystem
  # Filesystem type to mount.
  # It applies only when the source path is a block device,
  # and desire volume mode is Filesystem.
  # Must be a filesystem type supported by the host operating system.
  fsType: ext4
  blockCleanerCommand:
  #  Do a quick reset of the block device during its cleanup.
  #  - "/scripts/quick_reset.sh"
  #  or use dd to zero out block dev in two iterations by uncommenting these lines
  #  - "/scripts/dd_zero.sh"
  #  - "2"
  # or run shred utility for 2 iterations.
     - "/scripts/shred.sh"
     - "2"
  # or blkdiscard utility by uncommenting the line below.
  #  - "/scripts/blkdiscard.sh"
  # Uncomment to create storage class object with default configuration.
  # storageClass: true
  # Uncomment to create storage class object and configure it.
  # storageClass:
    # reclaimPolicy: Delete # Available reclaim policies: Delete/Retain, defaults: Delete.
    # isDefaultClass: true # set as default class

#
# Configure DaemonSet for provisioner.
#
daemonset:
  #
  # Defines the name of a Provisioner
  #
  name: "local-volume-provisioner"
  #
  # Defines Provisioner's image name including container registry.
  #
  image: quay.io/external_storage/local-volume-provisioner:v2.3.3
  #
  # Defines Image download policy, see kubernetes documentation for available values.
  #
  #imagePullPolicy: Always
  #
  # Defines a name of the service account which Provisioner will use to communicate with API server.
  #
  serviceAccount: local-storage-admin
  #
  # Defines a name of the Pod Priority Class to use with the Provisioner DaemonSet
  #
  # Note that if you want to make it critical, specify "system-cluster-critical"
  # or "system-node-critical" and deploy in kube-system namespace.
  # Ref: https://k8s.io/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical
  #
  #priorityClassName: system-node-critical
  # If configured, nodeSelector will add a nodeSelector field to the DaemonSet PodSpec.
  #
  # NodeSelector constraint for local-volume-provisioner scheduling to nodes.
  # Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
  nodeSelector: {}
  #
  # If configured KubeConfigEnv will (optionally) specify the location of kubeconfig file on the node.
  #  kubeConfigEnv: KUBECONFIG
  #
  # List of node labels to be copied to the PVs created by the provisioner in a format:
  #
  #  nodeLabels:
  #    - failure-domain.beta.kubernetes.io/zone
  #    - failure-domain.beta.kubernetes.io/region
  #
  # If configured, tolerations will add a toleration field to the DaemonSet PodSpec.
  #
  # Node tolerations for local-volume-provisioner scheduling to nodes with taints.
  # Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
  tolerations: []
  #
  # If configured, resources will set the requests/limits field to the Daemonset PodSpec.
  # Ref: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
  resources: {}
#
# Configure Prometheus monitoring
#
prometheus:
  operator:
    ## Are you using Prometheus Operator?
    enabled: false

    serviceMonitor:
      ## Interval at which Prometheus scrapes the provisioner
      interval: 10s

      # Namespace Prometheus is installed in
      namespace: monitoring

      ## Defaults to whats used if you follow CoreOS [Prometheus Install Instructions](https://github.com/coreos/prometheus-operator/tree/master/helm#tldr)
      ## [Prometheus Selector Label](https://github.com/coreos/prometheus-operator/blob/master/helm/prometheus/templates/prometheus.yaml#L65)
      ## [Kube Prometheus Selector Label](https://github.com/coreos/prometheus-operator/blob/master/helm/kube-prometheus/values.yaml#L298)
      selector:
        prometheus: kube-prometheus

Change hostDir: /mnt/fast-disks to hostDir: /mnt/disks, and change # storageClass: true to storageClass: true, then run:

# Install
[root@nebula ~]# helm install local-static-provisioner v2.3.3/helm/provisioner
# Check the local-static-provisioner deployment
[root@nebula ~]# helm list
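
With storageClass: true enabled, the chart should also create the fast-disks StorageClass, and the provisioner should publish one PersistentVolume per disk mounted under /mnt/disks (2 disks x 3 workers = 6 PVs in this setup). The object names below come from the values.yaml shown above:

# The provisioner runs as a DaemonSet in the default namespace
[root@nebula ~]# kubectl get daemonset local-volume-provisioner
# The storage class created by the chart
[root@nebula ~]# kubectl get storageclass fast-disks
# The local PVs discovered by the provisioner
[root@nebula ~]# kubectl get pv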

Deploy the Nebula Cluster

Download the nebula helm chart package

# Download nebula
[root@nebula ~]# wget https://github.com/vesoft-inc/nebula/archive/master.zip 
# Unpack
[root@nebula ~]# unzip master.zip 

Set up the Kubernetes slave nodes

The Kubernetes node list is shown below. We need to add a scheduling label to the slave nodes: label 192.168.0.2, 192.168.0.3, and 192.168.0.4 with nebula: "yes".

Server IP      Kubernetes role   nodeName
192.168.0.1    master            192.168.0.1
192.168.0.2    worker            192.168.0.2
192.168.0.3    worker            192.168.0.3
192.168.0.4    worker            192.168.0.4

The commands are as follows:

[root@nebula ~]# kubectl  label node 192.168.0.2 nebula="yes" --overwrite 
[root@nebula ~]# kubectl  label node 192.168.0.3 nebula="yes" --overwrite
[root@nebula ~]# kubectl  label node 192.168.0.4 nebula="yes" --overwrite
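
You can confirm that the label was applied with a label selector:

[root@nebula ~]# kubectl get nodes -l nebula=yes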

Adjust the default values of the nebula helm chart

The directory layout of the nebula helm chart is as follows:

master/kubernetes/
└── helm
    ├── Chart.yaml
    ├── templates
    │   ├── configmap.yaml
    │   ├── deployment.yaml
    │   ├── _helpers.tpl
    │   ├── ingress-configmap.yaml
    │   ├── NOTES.txt
    │   ├── pdb.yaml
    │   ├── service.yaml
    │   └── statefulset.yaml
    └── values.yaml

2 directories, 10 files

We need to change the value of MetadHosts in master/kubernetes/helm/values.yaml, replacing this IP list with the IPs of the 3 k8s workers in this environment:

MetadHosts:
  - 192.168.0.2:44500
  - 192.168.0.3:44500
  - 192.168.0.4:44500

Install nebula with helm

# Install
[root@nebula ~]# helm install nebula master/kubernetes/helm 
# Check the release
[root@nebula ~]# helm status nebula
# Check the nebula deployment on the k8s cluster
[root@nebula ~]# kubectl get pod  | grep nebula
nebula-graphd-579d89c958-g2j2c                   1/1     Running            0          1m
nebula-graphd-579d89c958-p7829                   1/1     Running            0          1m
nebula-graphd-579d89c958-q74zx                   1/1     Running            0          1m
nebula-metad-0                                   1/1     Running            0          1m
nebula-metad-1                                   1/1     Running            0          1m
nebula-metad-2                                   1/1     Running            0          1m
nebula-storaged-0                                1/1     Running            0          1m
nebula-storaged-1                                1/1     Running            0          1m
nebula-storaged-2                                1/1     Running            0          1m
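
Besides the Pods, the chart also creates Services and PVCs; their exact names are defined by the chart, so treat the following as a rough check rather than an exact listing:

# Services used for discovery and client access
[root@nebula ~]# kubectl get svc | grep nebula
# Each metad/storaged replica should have a Bound PVC backed by a local PV
[root@nebula ~]# kubectl get pvc | grep nebula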

Deploy the Ingress-controller

Ingress-controller is one of the Kubernetes Add-Ons. Kubernetes uses an ingress-controller to expose services deployed inside the cluster to external users. The Ingress-controller also provides load balancing, spreading external traffic across the different replicas of an application in k8s.


Pick a node on which to deploy the Ingress-controller

[root@nebula ~]# kubectl get node 
NAME              STATUS     ROLES    AGE   VERSION
192.168.0.1       Ready      master   82d   v1.16.1
192.168.0.2       Ready      <none>   82d   v1.16.1
192.168.0.3       Ready      <none>   82d   v1.16.1
192.168.0.4       Ready      <none>   82d   v1.16.1
[root@nebula ~]# kubectl label node 192.168.0.4 ingress=yes

Write the ingress-nginx.yaml deployment file

apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses/status
    verbs:
      - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      hostNetwork: true
      tolerations:
        - key: "node-role.kubernetes.io/master"
          operator: "Exists"
          effect: "NoSchedule"
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app.kubernetes.io/name
                    operator: In
                    values:
                      - ingress-nginx
              topologyKey: "ingress-nginx.kubernetes.io/master"
      nodeSelector:
        ingress: "yes"
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller-amd64:0.26.1
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=default/graphd-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
            - --http-port=8000
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
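
Note the --tcp-services-configmap=default/graphd-services argument above: the nebula chart (its ingress-configmap.yaml template) is expected to create that ConfigMap so that ingress-nginx forwards the graphd port as plain TCP. If you ever need to create it by hand, a minimal sketch could look like the following, assuming the chart exposes graphd through a Service named nebula-graphd on port 3699 in the default namespace:

apiVersion: v1
kind: ConfigMap
metadata:
  name: graphd-services
  namespace: default
data:
  # "<external port>": "<namespace>/<service name>:<service port>"
  "3699": "default/nebula-graphd:3699"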

Deploy ingress-nginx

# Deploy
[root@nebula ~]# kubectl create -f ingress-nginx.yaml
# Check the deployment
[root@nebula ~]# kubectl get pod -n ingress-nginx 
NAME                             READY   STATUS    RESTARTS   AGE
nginx-ingress-controller-mmms7   1/1     Running   2          1m

Access the Nebula Cluster

Find the node where ingress-nginx is running:

[root@nebula ~]# kubectl get node -l ingress=yes -owide 
NAME            STATUS   ROLES    AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION          CONTAINER-RUNTIME
192.168.0.4     Ready    <none>   1d   v1.16.1    192.168.0.4    <none>        CentOS Linux 7 (Core)   7.6.1810.el7.x86_64     docker://19.3.3

Connect to the nebula cluster:

[root@nebula ~]# docker run --rm -ti --net=host vesoft/nebula-console:nightly --addr=192.168.0.4 --port=3699
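
Once the console connects, you can run a couple of nGQL statements to confirm that the storaged instances have registered with metad and that the cluster accepts writes (the host list you see depends on your environment):

SHOW HOSTS;
CREATE SPACE test(partition_num=3, replica_factor=3);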

FAQ

How do I set up a Kubernetes cluster?

To set up a highly available Kubernetes cluster, see the community documentation: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/
You can also use minikube to set up a local Kubernetes cluster; see: https://kubernetes.io/docs/setup/learning-environment/minikube/

How do I adjust the deployment parameters of the nebula cluster?

When running helm install, use --set to set deployment parameters, which override the variables in the chart's values.yaml. See: https://helm.sh/docs/intro/using_helm/ An example follows.
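
For instance, the MetadHosts edit from the earlier section could also be passed on the command line instead of editing values.yaml, using Helm's curly-brace syntax for list values:

[root@nebula ~]# helm install nebula master/kubernetes/helm \
    --set "MetadHosts={192.168.0.2:44500,192.168.0.3:44500,192.168.0.4:44500}"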

How do I check the status of the nebula cluster?

Use the kubectl get pod | grep nebula command, or check the nebula cluster directly on the Kubernetes dashboard.
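
If a pod is not Running, kubectl describe and kubectl logs are usually enough to find the cause; the pod name below is taken from the kubectl get pod output earlier in this article:

[root@nebula ~]# kubectl describe pod nebula-storaged-0
[root@nebula ~]# kubectl logs nebula-storaged-0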

How do I use other types of storage?

See the documentation: https://kubernetes.io/zh/docs/concepts/storage/storage-classes/
