Environment
Three machines, all running CentOS 7.4:
hanyu-210 10.20.0.210
hanyu-211 10.20.0.211
hanyu-212 10.20.0.212
Prerequisites:
A Kubernetes cluster is already running (1 master and 2 worker nodes).
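The steps below address the GlusterFS nodes by hostname, so every machine (including the Kubernetes nodes that will mount the volume) must be able to resolve hanyu-210/211/212. A minimal sketch, assuming there is no internal DNS and /etc/hosts is used instead (run on every node):
cat >> /etc/hosts <<'EOF'
10.20.0.210 hanyu-210
10.20.0.211 hanyu-211
10.20.0.212 hanyu-212
EOF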
1. Set up the GlusterFS cluster (run on all three nodes unless noted otherwise)
Install the GlusterFS packages:
[root@hanyu-210 k8s_glusterfs]# yum install centos-release-gluster
[root@hanyu-210 k8s_glusterfs]# yum install -y glusterfs glusterfs-server glusterfs-fuse glusterfs-rdma
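A quick sanity check that the packages landed (the exact version reported will depend on the centos-release-gluster repository in use):
glusterfs --version
gluster --version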
Configure the GlusterFS cluster
[root@hanyu-210 k8s_glusterfs]# systemctl start glusterd.service
[root@hanyu-210 k8s_glusterfs]# systemctl enable glusterd.service
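Before probing peers, it is worth confirming the daemon is actually up on each node:
systemctl is-active glusterd.service    # should print "active" on all three nodes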
Run on the hanyu-210 node only (probing the local node is a no-op, so strictly only the other two peers need to be probed):
[root@hanyu-210 k8s_glusterfs]# gluster peer probe hanyu-210
[root@hanyu-210 k8s_glusterfs]# gluster peer probe hanyu-211
[root@hanyu-210 k8s_glusterfs]# gluster peer probe hanyu-212
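Verify that the trusted storage pool now contains all three nodes; on hanyu-210 the other two hosts should report "State: Peer in Cluster (Connected)":
gluster peer status
gluster pool list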
Create the brick data directory (on all three nodes):
[root@hanyu-210 k8s_glusterfs]# mkdir -p /opt/gfs_data
Create a 3-way replicated volume (on hanyu-210 only; force is needed here because the brick directories sit on the system root partition, which GlusterFS refuses by default):
[root@hanyu-210 k8s_glusterfs]# gluster volume create k8s-volume replica 3 hanyu-210:/opt/gfs_data hanyu-211:/opt/gfs_data hanyu-212:/opt/gfs_data force
Start the volume:
[root@hanyu-210 k8s_glusterfs]# gluster volume start k8s-volume
Check the volume status:
[root@hanyu-210 k8s_glusterfs]# gluster volume status
Status of volume: k8s-volume
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick hanyu-210:/opt/gfs_data               49152     0          Y       29445
Brick hanyu-212:/opt/gfs_data               49152     0          Y       32098
Self-heal Daemon on localhost               N/A       N/A        Y       29466
Self-heal Daemon on hanyu-212               N/A       N/A        Y       32119

Task Status of Volume k8s-volume
------------------------------------------------------------------------------
There are no active volume tasks
[root@hanyu-210 k8s_glusterfs]# gluster volume info
Volume Name: k8s-volume
Type: Replicate
Volume ID: 7d7ecba3-7bc9-4e09-89ed-493b3a6a2454
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: hanyu-210:/opt/gfs_data
Brick2: hanyu-211:/opt/gfs_data
Brick3: hanyu-212:/opt/gfs_data
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
Verify that the GlusterFS cluster is usable
Pick any one host and run:
yum install -y glusterfs glusterfs-fuse
mkdir -p /root/test
mount -t glusterfs hanyu-210:k8s-volume /root/test
df -h
umount /root/test
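To additionally confirm that writes are replicated to every brick, a minimal sketch (test.txt is just an arbitrary example file):
mount -t glusterfs hanyu-210:k8s-volume /root/test
echo "hello gluster" > /root/test/test.txt
ls -l /opt/gfs_data/    # run on each of the three nodes; the file should appear in every brick
umount /root/test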
2. Use GlusterFS from Kubernetes (all of the following is run on the k8s master node)
Create the GlusterFS Endpoints object and its matching Service: kubectl apply -f glusterfs-cluster.yaml
[root@hanyu-210 k8s_glusterfs]# cat glusterfs-cluster.yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
subsets:
- addresses:
  - ip: 10.20.0.210
  ports:
  - port: 49152
- addresses:
  - ip: 10.20.0.211
  ports:
  - port: 49152
- addresses:
  - ip: 10.20.0.212
  ports:
  - port: 49152
---
apiVersion: v1
kind: Service
metadata:
  name: glusterfs-cluster
spec:
  ports:
  - port: 49152
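To confirm that both objects exist and that the Endpoints carries the three brick addresses:
kubectl get endpoints glusterfs-cluster
kubectl get service glusterfs-cluster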
Create an application that uses GlusterFS directly as its storage volume: kubectl apply -f nginx_deployment_test.yaml
[root@hanyu-210 k8s_glusterfs]# cat nginx_deployment_test.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-test
spec:
  replicas: 3
  selector:
    matchLabels:
      name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: storage001
          mountPath: "/usr/share/nginx/html"
      volumes:
      - name: storage001
        glusterfs:
          endpoints: glusterfs-cluster
          path: k8s-volume
          readOnly: false
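Once the pods are running, the GlusterFS mount can be checked from inside any of them (replace <pod-name> with a pod reported by the first command; the path matches the mountPath above):
kubectl get pods -l name=nginx
kubectl exec <pod-name> -- df -h /usr/share/nginx/html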
Create a PV backed by GlusterFS: kubectl apply -f glusterfs-pv.yaml
[root@hanyu-210 k8s_glusterfs]# cat glusterfs-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: glusterfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  glusterfs:
    endpoints: glusterfs-cluster
    path: k8s-volume
    readOnly: false
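After applying, the PV should be listed as Available (it becomes Bound once the PVC below claims it):
kubectl get pv glusterfs-pv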
Create the PVC claim: kubectl apply -f glusterfs-pvc.yaml
[root@hanyu-210 k8s_glusterfs]# cat glusterfs-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: glusterfs-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
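The claim should bind to glusterfs-pv, assuming the cluster has no default StorageClass that would dynamically provision a volume instead (if one exists, setting storageClassName: "" on the PVC would force static binding to class-less PVs):
kubectl get pvc glusterfs-pvc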
Create an application that uses the PVC: kubectl apply -f nginx_deployment.yaml
[root@hanyu-210 k8s_glusterfs]# cat nginx_deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: storage001
          mountPath: "/usr/share/nginx/html"
      volumes:
      - name: storage001
        persistentVolumeClaim:
          claimName: glusterfs-pvc
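Finally, an end-to-end sketch of the shared storage: write an index.html through one pod and read it back through another (replace <pod-a> and <pod-b> with names reported by kubectl get pods):
kubectl get pods -l name=nginx
kubectl exec <pod-a> -- sh -c 'echo "hello from glusterfs" > /usr/share/nginx/html/index.html'
kubectl exec <pod-b> -- cat /usr/share/nginx/html/index.html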