1. GlusterFS Overview
GlusterFS is a scalable network file system. Compared with other distributed file systems, it offers high scalability, high availability, high performance, and horizontal scale-out, and because it has no metadata server, the design removes that single point of failure from the service.
When a client accesses GlusterFS storage, the application reads and writes data through a mount point. To users and applications the cluster file system is transparent; they cannot tell whether files are local or on a remote server. Read and write operations are handed to the VFS (Virtual File System), the VFS passes the request to the FUSE kernel module, and FUSE hands the data to the GlusterFS client through the device /dev/fuse. The GlusterFS client then does its own processing and finally sends the request or data over the network to a GlusterFS server.
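A quick way to observe this data path on a client is to look at the mount table and the client-side process. This is only an illustrative check and assumes a GlusterFS volume is already mounted at /logs, as in section 5:
# mount -l | grep fuse.glusterfs    #the mount shows up with type fuse.glusterfs
# ps -ef | grep glusterfs           #the user-space GlusterFS client that talks to /dev/fuse
# ls -l /dev/fuse                   #the character device used by the FUSE kernel module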
2. Common GlusterFS Volume Types
A distributed volume, also called a hash volume, stores whole files across multiple bricks, placing each file on one brick chosen by a hash algorithm.
Use case: large numbers of small files
Advantage: good read/write performance
Disadvantage: if a brick's disk or server fails, the data on that brick is lost
If no volume type is specified, a distributed volume is created by default
There is no limit on the number of bricks
Create a distributed volume:
gluster volume create volume_name node1:/data/br1 node2:/data/br1
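The hash placement can be seen by writing a few files through the mount and then listing each brick directly. A minimal sketch, assuming the volume above is mounted at /mnt/dist on a client:
# for i in $(seq 1 10); do touch /mnt/dist/file$i; done    #create 10 test files through the mount
# ls /data/br1    #on node1: only part of the files appear on this brick
# ls /data/br1    #on node2: the remaining files appear here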
A replicated volume stores multiple copies of every file across multiple bricks; the number of bricks must equal the number of copies, and the bricks should sit on different servers.
Use case: scenarios with high reliability requirements and heavy read workloads
Advantage: good read performance and high reliability
Disadvantage: write performance is lower, since every write must go to all replicas
replica = brick count
Create a replicated volume:
gluster volume create volume_name replica 2 node1:/data/br1 node2:/data/br1
replica: the number of copies kept of each file
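Replication can be verified by writing a file through the mount and checking both bricks, or by querying the heal status. A minimal sketch, assuming the volume above is mounted at /mnt/rep on a client:
# touch /mnt/rep/test.txt
# ls /data/br1    #on node1 and node2: the same file is present on both bricks
# gluster volume heal volume_name info    #lists files that still need to be synchronized between replicas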
A striped volume splits each file into stripes stored across multiple bricks; the default stripe size is 128 KB.
Use case: large files
Advantage: well suited to large-file storage
Disadvantage: low reliability, a brick failure loses all of the data
stripe = brick count
Create a striped volume:
gluster volume create volume_name stripe 2 node1:/data/br1 node2:/data/br1
stripe: the number of stripes
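Because stripes are fixed-size chunks handed out to bricks in round-robin order, the brick that holds a given byte offset is simple arithmetic. A worked example with the default 128 KB (131072-byte) stripe size and stripe 2, assuming chunks are assigned starting from the first brick:
# offset=1000000    #an arbitrary byte offset inside a large file
# echo $(( (offset / 131072) % 2 ))    #prints 1: the offset falls in chunk 7, which lives on the second brick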
A distributed striped volume hashes files across multiple nodes, and each file is further split into stripes stored across multiple bricks.
Use case: large numbers of big files that need high read/write performance
Advantage: good support for high concurrency
Disadvantage: no redundancy, so reliability is poor
The brick count must be a multiple of the stripe count
Create a distributed striped volume:
gluster volume create volume_name stripe 2 node1:/data/br1 node2:/data/br1 node3:/data/br1 node4:/data/br1
A distributed replicated volume hashes files across multiple nodes and stores multiple copies of each file on multiple bricks.
Use case: heavy file reads combined with high reliability requirements
Advantages: high reliability and high read performance
Disadvantages: storage capacity is sacrificed and write performance is lower
The brick count must be a multiple of the replica count
Create a distributed replicated volume:
gluster volume create volume_name replica 2 node1:/data/br1 node2:/data/br1 node3:/data/br1 node4:/data/br1
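With replica 2, every two consecutive bricks on the command line form one replica pair. For the sample command above the grouping looks like this (an illustration, not extra commands to run):
# replica pair 1: node1:/data/br1 and node2:/data/br1 hold identical copies
# replica pair 2: node3:/data/br1 and node4:/data/br1 hold identical copies
# files are hashed across the two pairs, and each file lands on both bricks of its pair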
A striped replicated volume splits a large file into stripes when storing it and also keeps multiple copies of each stripe.
Use case: very large files with high reliability requirements
Advantages: suits very large files and provides high reliability
Disadvantages: storage capacity is sacrificed and write performance is lower
The brick count must equal stripe × replica
Create a striped replicated volume:
gluster volume create volume_name stripe 2 replica 2 node1:/data/br1 node2:/data/br1 node3:/data/br1 node4:/data/br1
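The brick-count rule here is just the product of the two parameters; for the sample command above:
# echo $(( 2 * 2 ))    #stripe 2 x replica 2 = 4, matching the 4 bricks on the command line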
3. GlusterFS Environment
The log storage cluster uses a distributed replicated volume: files are hashed across multiple nodes and each file is stored as multiple copies on multiple bricks. There are five servers with 90 TB of disk space in total, so with this distributed replicated layout only 45 TB is usable. The layout also requires an even number of bricks, so two bricks are created on each server. In this layout, 10.102.23.4:/data_01/node and 10.102.23.44:/data_01/node are a replica pair, and the remaining bricks are paired the same way. 10.102.23.44 serves as the management node of the log storage cluster; the nfs-ganesha service only needs to be installed on this control node, and clients can then mount the storage over NFS.
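The usable-capacity figure follows directly from the replica count. A quick sanity check of the arithmetic (5 servers, 2 bricks per server, roughly 9 TB per brick, with replica 2 as used later in the volume-create command):
# echo $(( 5 * 2 * 9 ))        #90 TB of raw space across all bricks
# echo $(( 5 * 2 * 9 / 2 ))    #45 TB usable, because every file is stored twice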
# sed -i 's#SELINUX=enforcing#SELINUX=disabled#' /etc/sysconfig/selinux    #disable SELinux
# iptables -F    #flush firewall rules
Install GlusterFS (nodes 01-05):
# yum install userspace-rcu-*
# yum install python2-gluster-3.13.2-2.el7.x86_64.rpm
# yum install tcmu-runner-* libtcmu-*
# yum install gluster*
# yum install nfs-ganesha-*
#nfs-ganesha only needs to be installed on the server that exports the volume over NFS (10.102.23.44)
# systemctl start glusterd.service    #start glusterd on all servers
# systemctl start rpcbind
# systemctl enable glusterd.service
# systemctl enable rpcbind
# ss -lnt    #check that port 24007 is listening; if it is, glusterd is running properly
Create the cluster (run the following on node 10.102.23.44 to add nodes to the trusted pool):
[root@admin-node ~]# gluster peer probe 10.102.23.44
peer probe: success.
[root@admin-node ~]# gluster peer probe 10.102.23.45
peer probe: success.
[root@admin-node ~]# gluster peer probe 10.102.23.46
peer probe: success.
[root@admin-node ~]# gluster peer probe 10.102.23.47
peer probe: success.
[root@admin-node ~]# gluster peer probe 10.102.23.4
peer probe: success.
Check that the nodes have been added to the trusted pool:
[root@admin-node ~]# gluster peer status
Number of Peers: 4
Hostname: 10.102.23.46
Uuid: 31b5ecd4-c49c-4fa7-8757-c01604ffcc7e
State: Peer in Cluster (Connected)
Hostname: 10.102.23.47
Uuid: 38a7fda9-ad4a-441a-b28f-a396b09606af
State: Peer in Cluster (Connected)
Hostname: 10.102.23.45
Uuid: 9e3cfb56-1ed4-4daf-9d20-ad4bf2cefb37
State: Peer in Cluster (Connected)
Hostname: 10.102.23.4
Uuid: 1836ae9a-eca5-444f-bb9c-20f032247bcb
State: Peer in Cluster (Connected)
Perform the following disk operations on all nodes:
[root@admin-node ~]# fdisk /dev/sdb    #create one partition per data disk; repeat for /dev/sdc through /dev/sdk
Create the volume groups:
[root@admin-node ~]# vgcreate vg_data01 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
[root@admin-node ~]# vgcreate vg_data02 /dev/sdg1 /dev/sdh1 /dev/sdi1 /dev/sdj1 /dev/sdk1
View the volume groups:
[root@admin-node ~]# vgdisplay
Create the logical volumes:
[root@admin-node ~]# lvcreate -n lv_data01 -L 9T vg_data01
[root@admin-node ~]# lvcreate -n lv_data02 -L 9T vg_data02
View the logical volumes:
[root@admin-node ~]# lvdisplay
Format the logical volumes:
[root@admin-node ~]# mkfs.xfs /dev/vg_data01/lv_data01
[root@admin-node ~]# mkfs.xfs /dev/vg_data02/lv_data02
Mount the logical volumes:
[root@admin-node ~]# mkdir -p /data_01/node /data_02/node
[root@admin-node ~]# vim /etc/fstab
/dev/vg_data01/lv_data01    /data_01    xfs    defaults    0 0
/dev/vg_data02/lv_data02    /data_02    xfs    defaults    0 0
[root@admin-node ~]# mount /data_01
[root@admin-node ~]# mount /data_02
The distributed replicated mode (a combined type) needs at least 4 servers to create.
Create the volume:
[root@admin-node ~]# gluster volume create data-volume replica 2 10.102.23.4:/data_01/node 10.102.23.44:/data_01/node 10.102.23.44:/data_02/node 10.102.23.45:/data_02/node 10.102.23.45:/data_01/node 10.102.23.4:/data_02/node 10.102.23.46:/data_01/node 10.102.23.47:/data_01/node 10.102.23.46:/data_02/node 10.102.23.47:/data_02/node force
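Because replica 2 pairs consecutive bricks, the order of bricks above decides which two bricks mirror each other (for example, 10.102.23.4:/data_01/node and 10.102.23.44:/data_01/node form the first pair). After the volume is created, the pairing can be read back from the volume information:
# gluster volume info data-volume | grep -E 'Type|Brick'    #bricks are listed in replica-pair order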
Start the volume:
[root@admin-node ~]# gluster volume start data-volume
volume start: data-volume: success
The volume information can be viewed on any node:
[root@admin-node ~]# gluster volume info
Check the status of the distributed volume:
[root@admin-node ~]# gluster volume status
With the deployment above, the GlusterFS distributed replicated volume is complete.
4. nfs-ganesha Environment Setup
GlusterFS itself also supports NFS mounts, but the production environment spans several network segments, some of which cannot reach the GlusterFS storage network, so NFS mounts have to be proxied through nginx. The built-in GlusterFS NFS server only supports NFSv3 mounts, and proxying it through nginx is inconvenient because of the many ports involved, so GlusterFS combined with NFS-Ganesha is a much better fit. NFS-Ganesha uses an FSAL (File System Abstraction Layer) to abstract a backend store behind a unified API, exposes it through the Ganesha server over the NFS protocol, and clients mount the exported space and work on it directly. NFS-Ganesha can also pin the NFS protocol version.
Install nfs-ganesha on the management node 10.102.23.44. It was already installed at the start of the GlusterFS deployment, so the installation is not repeated here; the configuration file is explained briefly below.
[root@admin-node ~]# vim /etc/ganesha/ganesha.conf
.....................................
EXPORT
{
        ## Export Id (mandatory, each EXPORT must have a unique Export_Id)
        #Export_Id = 12345;
        Export_Id = 10;
        ## Exported path (mandatory)
        #Path = /nonexistant;
        Path = /data01;
        ## Pseudo Path (required for NFSv4 or if mount_path_pseudo = true)
        #Pseudo = /nonexistant;
        Pseudo = /data01;            #root path that clients mount over NFS
        ## Restrict the protocols that may use this export.  This cannot allow
        ## access that is denied in NFS_CORE_PARAM.
        #Protocols = 3,4;
        Protocols = 4;               #NFS protocol version offered to clients
        ## Access type for clients.  Default is None, so some access must be
        ## given. It can be here, in the EXPORT_DEFAULTS, or in a CLIENT block
        #Access_Type = RW;
        Access_Type = RW;            #read/write access
        ## Whether to squash various users.
        #Squash = root_squash;
        Squash = No_root_squash;     #do not map root to an unprivileged user
        ## Allowed security types for this export
        #Sectype = sys,krb5,krb5i,krb5p;
        Sectype = sys;               #security flavor
        ## Exporting FSAL
        #FSAL {
                #Name = VFS;
        #}
        FSAL {
                Name = GLUSTER;
                hostname = "10.102.23.44";   #GlusterFS management node IP
                volume = "data-volume";      #GlusterFS volume name
        }
}
...................
[root@admin-node ~]# systemctl restart nfs-ganesha
[root@admin-node ~]# systemctl enable nfs-ganesha
[root@admin-node ~]# showmount -e 10.102.23.44
Export list for 10.102.23.44:    #nfs-ganesha has been set up successfully
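As an extra check, nfs-ganesha should be listening on the standard NFS port on the management node; an illustrative verification:
# ss -lnt | grep 2049    #nfs-ganesha serves NFSv4 clients on TCP port 2049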
5. Client Mounts
Mount with the GlusterFS native client:
[root@admin-node ~]# mkdir /logs
[root@admin-node ~]# mount -t glusterfs 10.102.23.44:data-volume /logs/
Mount over NFS:
On the client (in the 10.1.99.x network segment):
[root@moban-00 ~]# yum -y install nfs-utils rpcbind
[root@moban-00 ~]# systemctl start rpcbind
[root@moban-00 ~]# systemctl enable rpcbind
[root@moban-00 ~]# mkdir /home/dwweiyinwen/logs/
[root@moban-00 ~]# mount -t nfs -o vers=4,proto=tcp,port=2049 10.102.23.44:/data01 /home/dwweiyinwen/logs/
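To make the NFS mount survive a reboot, an entry can be added to /etc/fstab on the client. A minimal sketch reusing the options from the mount command above (the _netdev option, an addition here, delays the mount until the network is up):
10.102.23.44:/data01    /home/dwweiyinwen/logs    nfs    vers=4,proto=tcp,port=2049,_netdev    0 0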