What Is Kubernetes
Kubernetes is an open-source system for managing containerized applications across multiple hosts in a cloud environment. Its goal is to make deploying containerized applications simple and efficient, and it provides mechanisms for application deployment, scheduling, updating, and maintenance.
Architecturally, Kubernetes defines a set of building blocks whose purpose is to provide mechanisms for deploying, maintaining, and scaling applications. The components that make up Kubernetes are designed to be loosely coupled and extensible, so that they can support many different kinds of workloads. This extensibility is provided in large part by the Kubernetes API, which is used both by internal components and by extensions and containers running on Kubernetes.
Kubernetes consists of the following core components (a quick way to inspect them on a running cluster is sketched right after this list):

- etcd: stores the state of the entire cluster
- apiserver: the single entry point for resource operations; provides authentication, authorization, access control, API registration, and discovery
- controller manager: maintains the cluster state, handling things such as failure detection, automatic scaling, and rolling updates
- scheduler: schedules resources, placing Pods onto appropriate machines according to the configured scheduling policies
- kubelet: maintains the container lifecycle and also manages Volumes and networking
- Container runtime: manages images and actually runs Pods and containers (via the CRI)
- kube-proxy: provides in-cluster service discovery and load balancing for Services
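As a rough sanity check, on a kubeadm-built cluster (an assumption about how the cluster was installed; other distributions may run these as host services instead) most of these components show up as Pods in the kube-system namespace, while kubelet runs as a systemd unit on each node:

# Control-plane components, kube-proxy, and DNS usually run as Pods in kube-system
[root@nebula ~]# kubectl get pods -n kube-system -o wide
# kubelet itself is a host service on every node
[root@nebula ~]# systemctl status kubelet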
Besides the core components, there are also some recommended add-ons:

- kube-dns: provides DNS services for the whole cluster
- Ingress Controller: provides an external entry point for services
- Heapster: provides resource monitoring
- Dashboard: provides a GUI
- Federation: provides clusters spanning availability zones
- Fluentd-elasticsearch: provides cluster log collection, storage, and querying
Kubernetes and Databases
Containerizing databases has become a hot topic recently, so what can Kubernetes do for databases?
- Failure recovery: if a database application goes down, Kubernetes can restart it automatically or migrate the database instance to another node in the cluster
- Storage management: Kubernetes offers a rich set of storage integrations, so database applications can use different kinds of storage systems transparently
- Load balancing: a Kubernetes Service provides load balancing, spreading external traffic across the different database replicas
- Horizontal scaling: Kubernetes can scale the number of replicas based on the current resource utilization of the database cluster, improving resource utilization (see the sketch after this list)
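As an illustration of the horizontal-scaling point, a replicated database running as a StatefulSet can be resized with a single command. This is only a sketch: `mysql` is a hypothetical StatefulSet name, and whether extra replicas actually help depends on the database's own replication model:

# Hypothetical StatefulSet named "mysql": grow it from 3 to 5 replicas
[root@nebula ~]# kubectl scale statefulset mysql --replicas=5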
Many databases, such as MySQL, MongoDB, and TiDB, already run quite well on Kubernetes clusters.
Nebula Graph in Practice on Kubernetes
Nebula Graph is a distributed, open-source graph database. Its main components are graphd (the query engine), storaged (data storage), and metad (metadata). In our practice, Kubernetes brings the graph database Nebula Graph the following benefits:
- Kubernetes 能分?jǐn)?nebula graphd澎办,metad 和 storaged 不副本之間的負(fù)載嘲碱。graphd,metad 和 storaged 可以通過 Kubernetes 的域名服務(wù)自動(dòng)發(fā)現(xiàn)彼此局蚀。
- 通過 storageclass悍汛,pvc 和 pv 可以屏蔽底層存儲(chǔ)細(xì)節(jié),無論使用本地卷還是云盤至会,Kubernetes 均可以屏蔽這些細(xì)節(jié)。
- 通過 Kubernetes 可以在幾秒內(nèi)成功部署一套 Nebula 集群谱俭,Kubernetes 也可以無感知地實(shí)現(xiàn) Nebula 集群的升級(jí)奉件。
- Nebula 集群通過 Kubernetes 可以做到自我恢復(fù),單體副本 crash昆著,Kubernetes 可以重新將其拉起县貌,無需運(yùn)維人員介入。
- Kubernetes 可以根據(jù)當(dāng)前 Nebula 集群的資源利用率情況水平伸縮 Nebula 集群凑懂,從而提供集群的性能煤痕。
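For the stateless graphd layer, which the chart used later in this post deploys as a Deployment, this scaling can even be automated with a HorizontalPodAutoscaler. The thresholds below are made-up examples, and an HPA additionally assumes that metrics-server is installed in the cluster:

# Sketch: autoscale the nebula-graphd Deployment between 3 and 6 replicas based on CPU usage
[root@nebula ~]# kubectl autoscale deployment nebula-graphd --cpu-percent=80 --min=3 --max=6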
The rest of this post walks through the practice in detail.
Cluster Deployment
Hardware and Software Requirements
The machines and operating system parameters involved in this deployment are listed below:
- Operating system: CentOS-7.6.1810 x86_64
- Virtual machine configuration
  - 4 CPUs
  - 8 GB memory
  - 50 GB system disk
  - 50 GB data disk A
  - 50 GB data disk B
- Kubernetes cluster version: v1.16
- Nebula version: v1.0.0-rc3
- Local PVs are used as data storage
Kubernetes Cluster Planning
The cluster inventory is as follows:
Server IP | nebula instances | role |
---|---|---|
192.168.0.1 | | k8s-master |
192.168.0.2 | graphd, metad-0, storaged-0 | k8s-slave |
192.168.0.3 | graphd, metad-1, storaged-1 | k8s-slave |
192.168.0.4 | graphd, metad-2, storaged-2 | k8s-slave |
Components to Deploy on Kubernetes
- Install Helm
- Prepare the local disks and install the local volume plugin
- Install the nebula cluster
- Install the ingress-controller
Install Helm
Helm is the package manager for Kubernetes clusters, similar to yum on CentOS or apt-get on Ubuntu. Helm greatly lowers the barrier to deploying applications on Kubernetes. This post does not cover Helm in detail; interested readers can refer to the "Helm 入門指南" (a Helm getting-started guide) on their own.
Download and Install Helm
Run the following commands in a terminal to install Helm:
[root@nebula ~]# wget https://get.helm.sh/helm-v3.0.1-linux-amd64.tar.gz
[root@nebula ~]# tar -zxvf helm-v3.0.1-linux-amd64.tar.gz
[root@nebula ~]# mv linux-amd64/helm /usr/bin/helm
[root@nebula ~]# chmod +x /usr/bin/helm
Check the Helm Version
Run the `helm version` command to check the installed Helm version. For the setup in this post, the output is as follows:
version.BuildInfo{
Version:"v3.0.1",
GitCommit:"7c22ef9ce89e0ebeb7125ba2ebf7d421f3e82ffa",
GitTreeState:"clean",
GoVersion:"go1.13.4"
}
Set Up the Local Disks
Apply the following configuration on every machine.
Create the mount directory
[root@nebula ~]# sudo mkdir -p /mnt/disks
Format the data disks
[root@nebula ~]# sudo mkfs.ext4 /dev/diskA
[root@nebula ~]# sudo mkfs.ext4 /dev/diskB
Mount the data disks
[root@nebula ~]# DISKA_UUID=$(blkid -s UUID -o value /dev/diskA)
[root@nebula ~]# DISKB_UUID=$(blkid -s UUID -o value /dev/diskB)
[root@nebula ~]# sudo mkdir /mnt/disks/$DISKA_UUID
[root@nebula ~]# sudo mkdir /mnt/disks/$DISKB_UUID
[root@nebula ~]# sudo mount -t ext4 /dev/diskA /mnt/disks/$DISKA_UUID
[root@nebula ~]# sudo mount -t ext4 /dev/diskB /mnt/disks/$DISKB_UUID
[root@nebula ~]# echo UUID=`sudo blkid -s UUID -o value /dev/diskA` /mnt/disks/$DISKA_UUID ext4 defaults 0 2 | sudo tee -a /etc/fstab
[root@nebula ~]# echo UUID=`sudo blkid -s UUID -o value /dev/diskB` /mnt/disks/$DISKB_UUID ext4 defaults 0 2 | sudo tee -a /etc/fstab
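To confirm that the disks are mounted where the provisioner will later look for them, the following quick checks can be used (purely a sanity check; they are not required by the provisioner):

# Each data disk should show up as a mount under /mnt/disks/<UUID>
[root@nebula ~]# df -h /mnt/disks/*
[root@nebula ~]# lsblk -f /dev/diskA /dev/diskB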
Deploy the local volume plugin
[root@nebula ~]# curl -L -o v2.3.3.zip https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/archive/v2.3.3.zip
[root@nebula ~]# unzip v2.3.3.zip
Modify `v2.3.3/helm/provisioner/values.yaml`:
#
# Common options.
#
common:
  #
  # Defines whether to generate service account and role bindings.
  #
  rbac: true
  #
  # Defines the namespace where provisioner runs
  #
  namespace: default
  #
  # Defines whether to create provisioner namespace
  #
  createNamespace: false
  #
  # Beta PV.NodeAffinity field is used by default. If running against pre-1.10
  # k8s version, the `useAlphaAPI` flag must be enabled in the configMap.
  #
  useAlphaAPI: false
  #
  # Indicates if PVs should be dependents of the owner Node.
  #
  setPVOwnerRef: false
  #
  # Provisioner clean volumes in process by default. If set to true, provisioner
  # will use Jobs to clean.
  #
  useJobForCleaning: false
  #
  # Provisioner name contains Node.UID by default. If set to true, the provisioner
  # name will only use Node.Name.
  #
  useNodeNameOnly: false
  #
  # Resync period in reflectors will be random between minResyncPeriod and
  # 2*minResyncPeriod. Default: 5m0s.
  #
  #minResyncPeriod: 5m0s
  #
  # Defines the name of configmap used by Provisioner
  #
  configMapName: "local-provisioner-config"
  #
  # Enables or disables Pod Security Policy creation and binding
  #
  podSecurityPolicy: false
#
# Configure storage classes.
#
classes:
- name: fast-disks # Defines name of storage classe.
  # Path on the host where local volumes of this storage class are mounted
  # under.
  hostDir: /mnt/fast-disks
  # Optionally specify mount path of local volumes. By default, we use same
  # path as hostDir in container.
  # mountDir: /mnt/fast-disks
  # The volume mode of created PersistentVolume object. Default to Filesystem
  # if not specified.
  volumeMode: Filesystem
  # Filesystem type to mount.
  # It applies only when the source path is a block device,
  # and desire volume mode is Filesystem.
  # Must be a filesystem type supported by the host operating system.
  fsType: ext4
  blockCleanerCommand:
  # Do a quick reset of the block device during its cleanup.
  # - "/scripts/quick_reset.sh"
  # or use dd to zero out block dev in two iterations by uncommenting these lines
  # - "/scripts/dd_zero.sh"
  # - "2"
  # or run shred utility for 2 iteration.s
    - "/scripts/shred.sh"
    - "2"
  # or blkdiscard utility by uncommenting the line below.
  # - "/scripts/blkdiscard.sh"
  # Uncomment to create storage class object with default configuration.
  # storageClass: true
  # Uncomment to create storage class object and configure it.
  # storageClass:
  #   reclaimPolicy: Delete # Available reclaim policies: Delete/Retain, defaults: Delete.
  #   isDefaultClass: true # set as default class
#
# Configure DaemonSet for provisioner.
#
daemonset:
  #
  # Defines the name of a Provisioner
  #
  name: "local-volume-provisioner"
  #
  # Defines Provisioner's image name including container registry.
  #
  image: quay.io/external_storage/local-volume-provisioner:v2.3.3
  #
  # Defines Image download policy, see kubernetes documentation for available values.
  #
  #imagePullPolicy: Always
  #
  # Defines a name of the service account which Provisioner will use to communicate with API server.
  #
  serviceAccount: local-storage-admin
  #
  # Defines a name of the Pod Priority Class to use with the Provisioner DaemonSet
  #
  # Note that if you want to make it critical, specify "system-cluster-critical"
  # or "system-node-critical" and deploy in kube-system namespace.
  # Ref: https://k8s.io/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical
  #
  #priorityClassName: system-node-critical
  # If configured, nodeSelector will add a nodeSelector field to the DaemonSet PodSpec.
  #
  # NodeSelector constraint for local-volume-provisioner scheduling to nodes.
  # Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
  nodeSelector: {}
  #
  # If configured KubeConfigEnv will (optionally) specify the location of kubeconfig file on the node.
  # kubeConfigEnv: KUBECONFIG
  #
  # List of node labels to be copied to the PVs created by the provisioner in a format:
  #
  #   nodeLabels:
  #     - failure-domain.beta.kubernetes.io/zone
  #     - failure-domain.beta.kubernetes.io/region
  #
  # If configured, tolerations will add a toleration field to the DaemonSet PodSpec.
  #
  # Node tolerations for local-volume-provisioner scheduling to nodes with taints.
  # Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
  tolerations: []
  #
  # If configured, resources will set the requests/limits field to the Daemonset PodSpec.
  # Ref: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
  resources: {}
#
# Configure Prometheus monitoring
#
prometheus:
  operator:
    ## Are you using Prometheus Operator?
    enabled: false
    serviceMonitor:
      ## Interval at which Prometheus scrapes the provisioner
      interval: 10s
      # Namespace Prometheus is installed in
      namespace: monitoring
      ## Defaults to whats used if you follow CoreOS [Prometheus Install Instructions](https://github.com/coreos/prometheus-operator/tree/master/helm#tldr)
      ## [Prometheus Selector Label](https://github.com/coreos/prometheus-operator/blob/master/helm/prometheus/templates/prometheus.yaml#L65)
      ## [Kube Prometheus Selector Label](https://github.com/coreos/prometheus-operator/blob/master/helm/kube-prometheus/values.yaml#L298)
      selector:
        prometheus: kube-prometheus
Change `hostDir: /mnt/fast-disks` to `hostDir: /mnt/disks`, and change `# storageClass: true` to `storageClass: true`. Then run:
# Install
[root@nebula ~]# helm install local-static-provisioner v2.3.3/helm/provisioner
# Check the deployment status of local-static-provisioner
[root@nebula ~]# helm list
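Once the provisioner is running, it should discover the disks mounted under /mnt/disks and create one local PV per mount. A quick way to verify this (assuming storageClass was enabled as described above; the PV names are auto-generated and will differ in your environment):

# The storage class defined in values.yaml
[root@nebula ~]# kubectl get storageclass fast-disks
# One local PV should appear for every disk mounted under /mnt/disks on each node
[root@nebula ~]# kubectl get pv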
Deploy the Nebula Cluster
Download the nebula helm-chart package
# Download nebula
[root@nebula ~]# wget https://github.com/vesoft-inc/nebula/archive/master.zip
# Unzip
[root@nebula ~]# unzip master.zip
Set Up the Kubernetes Slave Nodes
Below is the list of Kubernetes nodes. We need to set scheduling labels on the slave nodes, labeling 192.168.0.2, 192.168.0.3, and 192.168.0.4 with nebula: "yes".
Server IP | kubernetes roles | nodeName |
---|---|---|
192.168.0.1 | master | 192.168.0.1 |
192.168.0.2 | worker | 192.168.0.2 |
192.168.0.3 | worker | 192.168.0.3 |
192.168.0.4 | worker | 192.168.0.4 |
The commands are as follows:
[root@nebula ~]# kubectl label node 192.168.0.2 nebula="yes" --overwrite
[root@nebula ~]# kubectl label node 192.168.0.3 nebula="yes" --overwrite
[root@nebula ~]# kubectl label node 192.168.0.4 nebula="yes" --overwrite
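Before installing the chart, you can verify that the labels were applied:

[root@nebula ~]# kubectl get node -l nebula=yes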
Adjust the Default Values of the nebula Helm Chart
The directory layout of the nebula helm-chart package is as follows:
master/kubernetes/
└── helm
    ├── Chart.yaml
    ├── templates
    │   ├── configmap.yaml
    │   ├── deployment.yaml
    │   ├── _helpers.tpl
    │   ├── ingress-configmap.yaml
    │   ├── NOTES.txt
    │   ├── pdb.yaml
    │   ├── service.yaml
    │   └── statefulset.yaml
    └── values.yaml

2 directories, 10 files
We need to adjust the value of `MetadHosts` in `master/kubernetes/helm/values.yaml`, replacing this IP list with the IPs of the three k8s workers in this environment.
MetadHosts:
- 192.168.0.2:44500
- 192.168.0.3:44500
- 192.168.0.4:44500
Install nebula via Helm
# Install
[root@nebula ~]# helm install nebula master/kubernetes/helm
# Check the release
[root@nebula ~]# helm status nebula
# Check how nebula is deployed on the k8s cluster
[root@nebula ~]# kubectl get pod | grep nebula
nebula-graphd-579d89c958-g2j2c 1/1 Running 0 1m
nebula-graphd-579d89c958-p7829 1/1 Running 0 1m
nebula-graphd-579d89c958-q74zx 1/1 Running 0 1m
nebula-metad-0 1/1 Running 0 1m
nebula-metad-1 1/1 Running 0 1m
nebula-metad-2 1/1 Running 0 1m
nebula-storaged-0 1/1 Running 0 1m
nebula-storaged-1 1/1 Running 0 1m
nebula-storaged-2 1/1 Running 0 1m
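Besides the Pods, the chart also creates the corresponding controllers and Services. The exact resource names depend on the chart version, so treat the following as a sketch:

# StatefulSets for metad/storaged, a Deployment for graphd, and their Services
[root@nebula ~]# kubectl get statefulset,deployment,svc | grep nebula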
Deploy the Ingress Controller
The ingress-controller is one of the Kubernetes add-ons. Kubernetes uses an ingress-controller to expose services deployed inside the cluster to external users. The ingress-controller also provides load balancing, spreading external traffic across the different replicas of an application in k8s.
Pick a node on which to deploy the ingress-controller:
[root@nebula ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
192.168.0.1 Ready master 82d v1.16.1
192.168.0.2 Ready <none> 82d v1.16.1
192.168.0.3 Ready <none> 82d v1.16.1
192.168.0.4 Ready <none> 82d v1.16.1
[root@nebula ~]# kubectl label node 192.168.0.4 ingress=yes
Write the ingress-nginx.yaml deployment file
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses/status
    verbs:
      - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      hostNetwork: true
      tolerations:
        - key: "node-role.kubernetes.io/master"
          operator: "Exists"
          effect: "NoSchedule"
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app.kubernetes.io/name
                    operator: In
                    values:
                      - ingress-nginx
              topologyKey: "ingress-nginx.kubernetes.io/master"
      nodeSelector:
        ingress: "yes"
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller-amd64:0.26.1
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=default/graphd-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
            - --http-port=8000
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
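Note that the `--tcp-services-configmap=default/graphd-services` argument above refers to a ConfigMap that maps a TCP port on the ingress node to the graphd Service; the `ingress-configmap.yaml` template in the nebula chart is intended to provide it. If you ever need to create it by hand, a minimal sketch could look like the following, where the Service name `nebula-graphd` is an assumption and 3699 is the graphd client port used later in this post:

apiVersion: v1
kind: ConfigMap
metadata:
  name: graphd-services
  namespace: default
data:
  # Expose TCP port 3699 on the ingress-controller and forward it to port 3699
  # of the (assumed) nebula-graphd Service in the default namespace.
  "3699": default/nebula-graphd:3699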
Deploy ingress-nginx
# Deploy
[root@nebula ~]# kubectl create -f ingress-nginx.yaml
# Check the deployment status
[root@nebula ~]# kubectl get pod -n ingress-nginx
NAME READY STATUS RESTARTS AGE
nginx-ingress-controller-mmms7 1/1 Running 2 1m
Access the Nebula Cluster
Check which node ingress-nginx is running on:
[root@nebula ~]# kubectl get node -l ingress=yes -owide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
192.168.0.4 Ready <none> 1d v1.16.1 192.168.0.4 <none> CentOS Linux 7 (Core) 7.6.1810.el7.x86_64 docker://19.3.3
Access the nebula cluster:
[root@nebula ~]# docker run --rm -ti --net=host vesoft/nebula-console:nightly --addr=192.168.0.4 --port=3699
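Once the console is connected, a quick way to confirm that all storaged instances have registered with the metad service is to run the following nGQL statement in the console:

SHOW HOSTS;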
FAQ
How do I set up a Kubernetes cluster?
To set up a highly available Kubernetes cluster, refer to the community documentation: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/
You can also set up a local Kubernetes cluster with minikube; see: https://kubernetes.io/docs/setup/learning-environment/minikube/
How do I adjust the deployment parameters of the nebula cluster?
When running helm install, use --set to set deployment parameters, which overrides the variables in the chart's values.yaml. Reference: https://helm.sh/docs/intro/using_helm/
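For example, the MetadHosts list edited earlier could also be overridden on the command line instead of editing values.yaml (assuming MetadHosts is exposed as a top-level value, as in the chart's values.yaml shown above; the braces are Helm's --set list syntax):

[root@nebula ~]# helm install nebula master/kubernetes/helm \
    --set MetadHosts="{192.168.0.2:44500,192.168.0.3:44500,192.168.0.4:44500}"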
How do I check the status of the nebula cluster?
Use the `kubectl get pod | grep nebula` command, or check the running status of the nebula cluster directly on the Kubernetes dashboard.
How do I use other types of storage?
Reference: https://kubernetes.io/zh/docs/concepts/storage/storage-classes/
Appendix
- Nebula Graph: an open-source distributed graph database
- GitHub: https://github.com/vesoft-inc/nebula
- Official blog: https://nebula-graph.io/cn/posts/
- Weibo: weibo.com/nebulagraph