Kubernetes: Highly Available Dynamic Storage Management with GlusterFS + Heketi (StorageClass)

Overview

To use GlusterFS as persistent storage in Kubernetes through a StorageClass, you need the Heketi tool. Heketi is a GlusterFS management service with a RESTful interface that acts as the external provisioner for Kubernetes storage. Heketi provides a RESTful management interface that can be used to manage the lifecycle of GlusterFS volumes. With Heketi, cloud services such as OpenStack Manila, Kubernetes and OpenShift can dynamically provision GlusterFS volumes with any of the supported durability types. Heketi automatically decides where to place bricks across the cluster, making sure that a brick and its replicas end up in different failure domains. Heketi also supports any number of GlusterFS clusters, so a cloud service can offer network file storage without being tied to a single GlusterFS cluster.
In short, Heketi exposes a RESTful API for managing GlusterFS: it makes it easy to create clusters and to manage GlusterFS nodes, devices and volumes, and combined with Kubernetes it enables dynamic PV creation and dynamic management of GlusterFS storage. It mainly manages the lifecycle of GlusterFS volumes; the raw (unformatted) disk devices it will use must be handed to it at initialization time.

Prerequisites:

Install the GlusterFS client: every Kubernetes node needs the GlusterFS client packages (glusterfs-cli, glusterfs-fuse), which are used to mount volumes on the node (see the sketch after this list).
Load the kernel module: run modprobe dm_thin_pool on every Kubernetes node.
High availability (at least three nodes): at least three nodes are needed to deploy the GlusterFS cluster, and each of these nodes needs at least one spare disk.
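A minimal per-node preparation sketch, assuming CentOS with yum (the modules-load.d file name is an assumption, used only to make the module load persist across reboots):

# run on every Kubernetes node
yum install -y glusterfs-cli glusterfs-fuse                    # client tools used to mount gluster volumes
modprobe dm_thin_pool                                          # load the thin-provisioning module now
echo dm_thin_pool > /etc/modules-load.d/dm_thin_pool.conf      # assumed file name; reloads the module on reboot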

Infrastructure requirements:

A running GlusterFS cluster with at least three nodes, each with at least one available raw block device (for example an EBS volume or a local disk, i.e. unformatted).
The nodes running GlusterFS must have the relevant ports open for GlusterFS communication (only needed if a firewall is enabled; see the sketch after this list).
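If firewalld is in use, the ports usually needed are 24007-24008/tcp for glusterd plus one port per brick starting at 49152; port ranges vary by GlusterFS version, so treat this as a sketch and verify against your release:

firewall-cmd --permanent --add-port=24007-24008/tcp   # glusterd / management traffic
firewall-cmd --permanent --add-port=49152-49251/tcp   # brick ports, one per brick from 49152 upwards
firewall-cmd --reload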

Installing GlusterFS

Install dependencies and common tools:
yum -y install flex bison openssl openssl-devel libxml2-devel gcc lrzsz vim*
Search for the latest Gluster release repository:
yum search centos-release-gluster
Install the latest Gluster repository package:
yum install centos-release-gluster7.noarch -y
With the repository in place, install GlusterFS and the related packages:
yum install glusterfs glusterfs-server glusterfs-cli glusterfs-geo-replication glusterfs-rdma -y
On client machines, install the GlusterFS client packages:
yum install glusterfs-fuse glusterfs-cli
Start the glusterd service:
systemctl start glusterd
systemctl enable glusterd --now    # enable at boot and start the service immediately

On any one node, add the other nodes to the trusted pool:
gluster peer probe node99
gluster peer probe node110
gluster peer probe node145
gluster peer probe node108

Check peer status:
gluster peer status

Number of Peers: 3
Hostname: node110
Uuid: 5f13e231-25cf-475b-81ca-22122b1bfe55
State: Peer in Cluster (Connected)
Other names:
10.14.151.110

Hostname: node99
Uuid: 7988f095-db67-47ee-913b-c232d1d4e954
State: Peer in Cluster (Connected)

Hostname: node145
Uuid: 36e84b55-dab7-4551-9b44-490039d1bdf6
State: Peer in Cluster (Connected)
Other names:
10.14.151.145

Create a replicated volume

mkdir /glusterfs/storage1/rep_vol1
gluster volume create rep_vol1 replica 2 node99:/glusterfs/storage1/rep_vol1 node108:/glusterfs/storage1/rep_vol1

Create a distributed volume

gluster volume create vdisk1 node108:/brick1 node110:/brick1 node145:/brick1 force

Create a distributed replicated volume

gluster volume create fbfz replica 2 transport tcp node108:/gluster/fbfz1 node110:/gluster/fbfz1 node145:/gluster/fbfz1 node108:/gluster/fbfz2 node110:/gluster/fbfz2 node145:/gluster/fbfz2 force

Start the volume:
gluster volume start rep_vol1
Check volume status:
gluster volume status
gluster volume info
Mount the volumes from a client for testing:
mount -t glusterfs node108:/rep_vol1 /tmp/aaa
mount -t glusterfs node145:/fbfz /mnt/fbfz
Write some test data from the client:
for i in `seq -w 1 3`;do cp -rp /var/log/messages /tmp/aaa/test-$i;done

ls /tmp/aaa
111  1.txt  2.txt  anaconda-ks.cfg  test-1  test-2  test-3
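Because rep_vol1 is a replicated volume, every file written through the client mount should also show up in the brick directory on each replica node; a quick sanity check (brick paths taken from the volume-create command above):

# run on node99 and on node108
ls /glusterfs/storage1/rep_vol1/
# both bricks should list the same test-1 test-2 test-3 files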

Other useful operations

Stop a volume:
gluster volume stop vdisk2
Delete a volume:
gluster volume delete vdisk2
Remove a storage node from the trusted pool:
gluster peer detach node2

Installing Heketi

Heketi is written in Go; you can run the statically compiled binary directly, install it via yum, or deploy it with Docker. It keeps a db file in which it stores information about clusters, nodes, devices and volumes.

Install the binary package
wget -c https://github.com/heketi/heketi/releases/download/v8.0.0/heketi-v8.0.0.linux.amd64.tar.gz
tar zxvf heketi-v8.0.0.linux.amd64.tar.gz
mkdir -pv /opt/heketi/{bin,conf,data}
cp heketi/heketi.json /opt/heketi/conf/
cp heketi/{heketi,heketi-cli} /opt/heketi/bin/

Create an SSH key

Our GlusterFS cluster is deployed outside the Kubernetes cluster, so Heketi manages GlusterFS over SSH. We therefore need passwordless SSH login to all GlusterFS nodes (sshd listens on port 2222 in this environment).
ssh-keygen -f /opt/heketi/conf/heketi_key -t rsa -N ''
ssh-copy-id -i /opt/heketi/conf/heketi_key.pub -p 2222 root@node108
ssh-copy-id -i /opt/heketi/conf/heketi_key.pub -p 2222 root@node110
ssh-copy-id -i /opt/heketi/conf/heketi_key.pub -p 2222 root@node145
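It is worth confirming that the key really gives Heketi passwordless access before continuing (2222 is the sshd port used in this environment):

ssh -i /opt/heketi/conf/heketi_key -p 2222 root@node108 "gluster --version"
# should print the GlusterFS version without prompting for a password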

Edit the configuration file /opt/heketi/conf/heketi.json:

{
  "_port_comment": "Heketi Server Port Number",
  "port": "18080",
  "_use_auth": "Enable JWT authorization. Please enable for deployment",
  "use_auth": true,
  "_jwt": "Private keys for access",
  "jwt": {
    "_admin": "Admin has access to all APIs",
    "admin": {
      "key": "adminkey"
    },
    "_user": "User only has access to /volumes endpoint",
    "user": {
      "key": "userkey"
    }
  },
  "_glusterfs_comment": "GlusterFS Configuration",
  "glusterfs": {
    "_executor_comment": [
      "Execute plugin. Possible choices: mock, ssh",
      "mock: This setting is used for testing and development.",
      "      It will not send commands to any node.",
      "ssh:  This setting will notify Heketi to ssh to the nodes.",
      "      It will need the values in sshexec to be configured.",
      "kubernetes: Communicate with GlusterFS containers over",
     "            Kubernetes exec api."
    ],
    "executor": "ssh",
    "_sshexec_comment": "SSH username and private key file information",
    "sshexec": {
      "keyfile": "/opt/heketi/conf/heketi_key",
      "user": "root",
      "port": "2222",
      "fstab": "/etc/fstab"
    },
    "_kubeexec_comment": "Kubernetes configuration",
    "kubeexec": {
      "host" :"https://kubernetes.host:8443",
      "cert" : "/path/to/crt.file",
      "insecure": false,
      "user": "kubernetes username",
      "password": "password for kubernetes user",
      "namespace": "OpenShift project or Kubernetes namespace",
      "fstab": "Optional: Specify fstab file on node.  Default is /etc/fstab"
    },
    "_db_comment": "Database file name",
    "db": "/opt/heketi/data/heketi.db",
    "_loglevel_comment": [
      "Set log level. Choices are:",
      "  none, critical, error, warning, info, debug",
      "Default is warning"
    ],
    "loglevel" : "debug"
  }
}

Note:
Heketi supports three executors: mock, ssh and kubernetes. mock is meant for testing and development, ssh for production, and kubernetes only when GlusterFS itself runs as containers on Kubernetes. Here GlusterFS and Heketi are deployed independently, so we use ssh.
When deploying Heketi with Docker, you also need to mount /var/lib/heketi/mounts into the container; Heketi uses this directory as the mount point for gluster volumes (see the sketch below).
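A minimal sketch of such a Docker deployment, assuming the heketi/heketi image and that it reads its config from /etc/heketi/heketi.json by default (container-side paths are assumptions; adjust them to your image). The conf and data directories are mounted at the same paths as on the host so the keyfile and db paths in heketi.json stay valid:

docker run -d --name heketi \
  --restart unless-stopped \
  -p 18080:18080 \                                            # heketi.json above sets port 18080
  -v /opt/heketi/conf/heketi.json:/etc/heketi/heketi.json \   # assumed default config path in the image
  -v /opt/heketi/conf:/opt/heketi/conf \                      # keyfile path referenced by sshexec.keyfile
  -v /opt/heketi/data:/opt/heketi/data \                      # db path referenced by "db"
  -v /var/lib/heketi/mounts:/var/lib/heketi/mounts \          # gluster volume mount point used by heketi
  heketi/heketi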

systemd configuration
cat /usr/lib/systemd/system/heketi.service

[Unit]
Description=RESTful based volume management framework for GlusterFS
Before=network-online.target
After=network-online.target
Documentation=https://github.com/heketi/heketi
[Service]
Type=simple
LimitNOFILE=65536
ExecStart=/opt/heketi/bin/heketi --config=/opt/heketi/conf/heketi.json
KillMode=process
Restart=on-failure
RestartSec=5
SuccessExitStatus=15
StandardOutput=syslog
StandardError=syslog
[Install]
WantedBy=multi-user.target

Start the Heketi service
systemctl start heketi
systemctl enable heketi
systemctl status heketi
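A quick check that the service is answering; Heketi's /hello endpoint needs no authentication (the IP is the Heketi host used throughout this document):

curl http://10.143.143.111:18080/hello
# expected response: Hello from Heketi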

Managing GlusterFS with Heketi

Add a cluster
/opt/heketi/bin/heketi-cli --user admin --server http://10.143.143.111:18080 --secret adminkey --json cluster create

{"id":"bb1362c3360d80419c822b9994381608","nodes":[],"volumes":[],"block":true,"file":true,"blockvolumes":[]}

Add the three GlusterFS nodes to the cluster as nodes.

Add nodes

/opt/heketi/bin/heketi-cli --user admin --server http://10.143.143.111:18080 --secret adminkey --json node add --cluster "bb1362c3360d80419c822b9994381608" --management-host-name node108 --storage-host-name 10.14.151.108 --zone 1

{"zone":1,"hostnames":{"manage":["node108"],"storage":["10.14.151.108"]},"cluster":"bb1362c3360d80419c822b9994381608","id":"227fd34c519f0a2c9d5a5b7f3d048745","state":"online","devices":[]}

/opt/heketi/bin/heketi-cli --user admin --server http://10.143.143.111:18080 --secret adminkey --json node add --cluster "bb1362c3360d80419c822b9994381608" --management-host-name node110 --storage-host-name 10.14.151.110 --zone 1

{"zone":1,"hostnames":{"manage":["node110"],"storage":["10.14.151.110"]},"cluster":"bb1362c3360d80419c822b9994381608","id":"5f2d7412f0c874634aa8ee18865533bf","state":"online","devices":[]}

/opt/heketi/bin/heketi-cli --user admin --server http://10.143.143.111:18080 --secret adminkey --json node add --cluster "bb1362c3360d80419c822b9994381608" --management-host-name node145 --storage-host-name 10.14.151.145 --zone 1

{"zone":1,"hostnames":{"manage":["node145"],"storage":["10.14.151.145"]},"cluster":"bb1362c3360d80419c822b9994381608","id":"e20c47a9d9a31a300ed85ccc37441608","state":"online","devices":[]}

List the nodes:
/opt/heketi/bin/heketi-cli --user admin --server http://10.143.143.111:18080 --secret adminkey node list

Id:227fd34c519f0a2c9d5a5b7f3d048745     Cluster:bb1362c3360d80419c822b9994381608
Id:5f2d7412f0c874634aa8ee18865533bf     Cluster:bb1362c3360d80419c822b9994381608
Id:e20c47a9d9a31a300ed85ccc37441608     Cluster:bb1362c3360d80419c822b9994381608

Add devices

The machines themselves are just GlusterFS execution units; volumes are built on top of devices. Note that Heketi currently only accepts raw partitions or raw (unformatted) disks as devices; devices that already carry a filesystem are not supported.

/opt/heketi/bin/heketi-cli --user admin --server http://10.143.143.111:18080 --secret adminkey --json device add --name="/dev/sdb" --node "227fd34c519f0a2c9d5a5b7f3d048745"
/opt/heketi/bin/heketi-cli --user admin --server http://10.143.143.111:18080 --secret adminkey --json device add --name="/dev/sdb" --node "5f2d7412f0c874634aa8ee18865533bf"
/opt/heketi/bin/heketi-cli --user admin --server http://10.143.143.111:18080 --secret adminkey --json device add --name="/dev/sdb" --node "e20c47a9d9a31a300ed85ccc37441608"

Note: the id passed to --node is the one generated when the node was added in the previous step. In a real deployment, add every disk that will be used for storage on every node; the resulting topology can then be checked as shown below.
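Once the devices are added, the whole topology can be dumped to confirm that every node and disk was registered (same authentication flags as the commands above):

/opt/heketi/bin/heketi-cli --user admin --server http://10.143.143.111:18080 --secret adminkey topology info
# lists each cluster with its nodes and the devices (and sizes) registered on each node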

Production-style configuration

All of the ad-hoc commands above can also be described in a topology file and loaded in one step:

$ sudo cat /data/heketi/conf/topology.json

{
    "clusters": [
        {
            "nodes": [
                {
                    "node": {
                        "hostnames": {
                            "manage": [
                                "k8s1"
                            ],
                            "storage": [
                                "10.111.209.188"
                            ]
                        },
                        "zone": 1
                    },
                    "devices": [
                        "/dev/vdc1"
                    ]
                },
                {
                    "node": {
                        "hostnames": {
                            "manage": [
                                "k8s2"
                            ],
                            "storage": [
                                "10.111.209.189"
                            ]
                        },
                        "zone": 1
                    },
                    "devices": [
                        "/dev/vdc1"
                    ]
                },
                {
                    "node": {
                        "hostnames": {
                            "manage": [
                                "k8s3"
                            ],
                            "storage": [
                                "10.111.209.190"
                            ]
                        },
                        "zone": 1
                    },
                    "devices": [
                        "/dev/vdc1"
                    ]
                }             
            ]
        }
    ]
}

Load it:

$ sudo heketi-cli topology load --json topology.json

Add a volume

This is only a manual test; in real use Kubernetes will create PVs automatically.
Create a volume of 3 GB with 2 replicas:

/opt/heketi/bin/heketi-cli --user admin --server http://10.143.143.111:18080 --secret adminkey volume create --size 3 --replica 2

Name: vol_61055753aa935407b80e1137647733f6
Size: 3
Volume Id: 61055753aa935407b80e1137647733f6
Cluster Id: bb1362c3360d80419c822b9994381608
Mount: 10.14.151.108:vol_61055753aa935407b80e1137647733f6
Mount Options: backup-volfile-servers=10.14.151.110,10.14.151.145
Block: false
Free Size: 0
Reserved Size: 0
Block Hosting Restriction: (none)
Block Volumes: []
Durability Type: replicate
Distribute Count: 1
Replica Count: 2
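Because this volume was only created by hand as a test, it can be removed again using the Volume Id printed above, so it does not linger next to the volumes Kubernetes will provision later:

/opt/heketi/bin/heketi-cli --user admin --server http://10.143.143.111:18080 --secret adminkey volume delete 61055753aa935407b80e1137647733f6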

Kubernetes StorageClass configuration
Create the StorageClass
Create a storageclass-glusterfs.yaml file with the following content:
cat storageclass-glusterfs.yaml

apiVersion: v1
kind: Secret
metadata:
  name: heketi-secret
  namespace: default
data:
  # base64 encoded password. E.g.: echo -n "mypassword" | base64
  key: YWRtaW5rZXk=
type: kubernetes.io/glusterfs
---
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: glusterfs
provisioner: kubernetes.io/glusterfs
allowVolumeExpansion: true
parameters:
  resturl: "http://10.143.143.111:18080"
  clusterid: "bb1362c3360d80419c822b9994381608"
  restauthenabled: "true"
  restuser: "admin"
  #secretNamespace: "default"
  #secretName: "heketi-secret"
  restuserkey: "adminkey"
  gidMin: "40000"
  gidMax: "50000"
  volumetype: "replicate:2"
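The key field of the Secret is simply the base64 encoding of the Heketi admin key from heketi.json, which you can reproduce with:

echo -n "adminkey" | base64
# prints YWRtaW5rZXk=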

kubectl apply -f storageclass-glusterfs.yaml

secret/heketi-secret created
storageclass.storage.k8s.io/glusterfs created    

kubectl get sc

NAME         PROVISIONER               AGE
csi-cephfs   cephfs.csi.ceph.com       260d
glusterfs    kubernetes.io/glusterfs   1m

Note:
storageclass.beta.kubernetes.io/is-default-class: "true" marks a StorageClass as the default, so PVCs that do not specify a StorageClass will use it.
reclaimPolicy: Retain keeps the PV (and its data) when the PVC is deleted.
Neither setting is used in the YAML above; a variant showing both follows below. For more details see https://kubernetes.io/docs/concepts/storage/storage-classes/#glusterfs
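A sketch of such a variant, reusing the cluster and the heketi-secret created above; this is illustrative only and is not the StorageClass applied in this document (it also references the Secret instead of the inline restuserkey):

apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: glusterfs
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"   # make this the default StorageClass
provisioner: kubernetes.io/glusterfs
reclaimPolicy: Retain            # keep the PV when its PVC is deleted
allowVolumeExpansion: true
parameters:
  resturl: "http://10.143.143.111:18080"
  clusterid: "bb1362c3360d80419c822b9994381608"
  restauthenabled: "true"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-secret"
  gidMin: "40000"
  gidMax: "50000"
  volumetype: "replicate:2"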

Create a PVC
cat glusterfs-pvc.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfs-mysql1
  namespace: default
  annotations:
    volume.beta.kubernetes.io/storage-class: "glusterfs"
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

kubectl create -f glusterfs-pvc.yaml

persistentvolumeclaim/glusterfs-mysql1 created  

kubectl get pv

NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM                      STORAGECLASS   REASON   AGE
pvc-5ae5026d-91d2-11ea-bd25-fa157e638e00   1Gi        RWX            Delete           Bound      default/glusterfs-mysql1   glusterfs                5m

kubectl get pvc

NAME                                     STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
glusterfs-mysql1                         Bound     pvc-5ae5026d-91d2-11ea-bd25-fa157e638e00   1Gi        RWX            glusterfs      4m

Create a pod that uses the PVC
cat mysql-deployment.yaml

---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: mysql
  namespace: default
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.7
        imagePullPolicy: IfNotPresent
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: root123456
        ports:
          - containerPort: 3306
        volumeMounts:
        - name: gluster-mysql-data
          mountPath: "/var/lib/mysql"
      volumes:
        - name: gluster-mysql-data
          persistentVolumeClaim:
            claimName: glusterfs-mysql1

kubectl create -f mysql-deployment.yaml

deployment.extensions/mysql created

kubectl get deploy

NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
mysql     1         1         1            1           4m

kubectl get pods

NAME                          READY     STATUS    RESTARTS   AGE
mysql-b75b5dcfb-cb7qm         1/1       Running   0          5m

kubectl exec -ti mysql-b75b5dcfb-cb7qm sh
df -Th

Filesystem                                          Type            Size  Used Avail Use% Mounted on
overlay                                             overlay         120G   22G   99G  18% /
tmpfs                                               tmpfs            64M     0   64M   0% /dev
tmpfs                                               tmpfs            16G     0   16G   0% /sys/fs/cgroup
/dev/mapper/centos-root                             xfs             120G   22G   99G  18% /etc/hosts
shm                                                 tmpfs            64M     0   64M   0% /dev/shm
10.14.151.108:vol_00de560fab819d81ade9aae98fcdd4d1 fuse.glusterfs 1016M  254M  763M  25% /var/lib/mysql
tmpfs                                               tmpfs            16G   12K   16G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs                                               tmpfs            16G     0   16G   0% /proc/scsi
tmpfs                                               tmpfs            16G     0   16G   0% /sys/firmware

Create a StatefulSet

---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
  serviceName: "nginx"
  replicas: 3 # by default is 1
  template:
    metadata:
      labels:
        app: nginx # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "glusterfs"
      resources:
        requests:
          storage: 1Gi

kubectl apply -f nginx-statefulset.yml

service/nginx created
statefulset.apps/nginx created

kubectl get pod,pv,pvc

NAME           READY   STATUS    RESTARTS   AGE
pod/nginx-0    1/1     Running   0          116s
pod/nginx-1    1/1     Running   0          98s
pod/nginx-2    1/1     Running   0          91s
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                 STORAGECLASS   REASON   AGE
persistentvolume/pvc-5ac3eba9-fe12-11e8-a19e-00163e14b18f   1Gi        RWO            Retain           Bound    default/www-nginx-0   glusterfs               99s
persistentvolume/pvc-65f27519-fe12-11e8-a19e-00163e14b18f   1Gi        RWO            Retain           Bound    default/www-nginx-1   glusterfs               93s
persistentvolume/pvc-69b31512-fe12-11e8-a19e-00163e14b18f   1Gi        RWO            Retain           Bound    default/www-nginx-2   glusterfs               86s
NAME                                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/www-nginx-0   Bound    pvc-5ac3eba9-fe12-11e8-a19e-00163e14b18f   1Gi        RWO            glusterfs      116s
persistentvolumeclaim/www-nginx-1   Bound    pvc-65f27519-fe12-11e8-a19e-00163e14b18f   1Gi        RWO            glusterfs      98s
persistentvolumeclaim/www-nginx-2   Bound    pvc-69b31512-fe12-11e8-a19e-00163e14b18f   1Gi        RWO            glusterfs      91s

We can see RECLAIM POLICY: Retain. Testing shows:
Deleting a PVC moves the PV to the Released state; the PV is not deleted.
Deleting a PV does not delete the underlying volume, as heketi-cli volume list confirms.
If Kubernetes PVs and Gluster volumes get out of sync, use Heketi as the single place to manage volumes. In this document both Heketi and GlusterFS are deployed outside the Kubernetes cluster. For disks backed by AWS EBS, GlusterFS and Heketi can instead be deployed in containers and managed through an EBS StorageClass; see https://github.com/gluster/gluster-kubernetes
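For reference, the volume listing mentioned above uses the same authentication flags as the earlier Heketi commands:

/opt/heketi/bin/heketi-cli --user admin --server http://10.143.143.111:18080 --secret adminkey volume list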

References
https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/
https://kubernetes.io/docs/concepts/storage/storage-classes/#glusterfs
https://github.com/heketi/heketi/blob/master/docs/admin/readme.md
https://www.cnblogs.com/breezey/p/8849466.html

Verification test:

kubectl exec -ti mysql-b75b5dcfb-cb7qm sh

[screenshot: 圖片17.png]

Inside the container, the mounted volume is served from 10.14.151.108.
[screenshot: 圖片18.png]

Stop the gluster service on 10.14.151.108 to simulate a node failure.
[screenshot: 圖片19.png]

Then write data from inside the container:
[screenshot: 圖片20.png]

[screenshot: 圖片21.png]

This shows that losing one node does not affect users of the volume.
