1) Set up the yum repository (see https://www.gluster.org/install/)
CentOS 7: yum install centos-release-gluster
CentOS 6: yum install xfsprogs
2) Add a disk, then format and mount it
# mkfs.xfs -i size=512 /dev/sdb1
# mkdir -p /bricks/brick1
# vi /etc/fstab
/dev/sdb1 /bricks/brick1 xfs defaults 0 0
# mount -a && mount
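Assuming the /dev/sdb1 brick from above, a quick check that the XFS filesystem is mounted where expected:
# df -hT /bricks/brick1
# xfs_info /bricks/brick1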
3) Install the software
yum install glusterfs-server
4) Start the service
systemctl enable glusterd
systemctl start glusterd
systemctl status glusterd
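As a quick sanity check that the daemon is really up (plain systemd and gluster CLI calls):
systemctl is-active glusterd
gluster --version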
5) Configure on the swarm-manager node: join the nodes to the cluster.
On the primary node:
# gluster peer probe server2
# gluster peer probe server1
Note: hosts resolution must be set up first, or replace the server names with IP addresses.
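To confirm the peers actually joined the trusted pool:
# gluster peer status
# gluster pool list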
6) Create the data directory (on every node)
mkdir /bricks/brick1/gv0
7) Create the volume
Data storage modes:
1. Default mode, i.e. DHT, also called a distributed volume: files are distributed across the server nodes by a hash algorithm, each file stored on a single node
gluster volume create gv0 server1:/bricks/brick1/gv0 server2:/bricks/brick1/gv0
2. Replicated mode, i.e. AFR; create the volume with replica N: each file is replicated to the N replica nodes
gluster volume create gv0 replica 2 server1:/bricks/brick1/gv0 server2:/bricks/brick1/gv0
3. Striped mode, i.e. Striped; create the volume with stripe N: files are split into chunks stored across the N stripe nodes (similar to RAID 0)
gluster volume create gv0 stripe 2 server1:/bricks/brick1/gv0 server2:/bricks/brick1/gv0
4. Distributed striped mode (combined), requires at least 4 servers. Create the volume with stripe 2 and 4 server nodes: a combination of DHT and Striped.
gluster volume create gv0 stripe 2 transport tcp server1:/bricks/brick1/gv0 server2:/bricks/brick1/gv0 server3:/bricks/brick1/gv0 server4:/bricks/brick1/gv0
5. Distributed replicated mode (combined), requires at least 4 servers. Create the volume with replica 2 and 4 server nodes: a combination of DHT and AFR.
gluster volume create gv0 replica 2 transport tcp server1:/bricks/brick1/gv0 server2:/bricks/brick1/gv0 server3:/bricks/brick1/gv0 server4:/bricks/brick1/gv0
6. Striped replicated mode (combined), requires at least 4 servers. Create the volume with stripe 2 replica 2 and 4 server nodes: a combination of Striped and AFR.
gluster volume create gv0 stripe 2 replica 2 transport tcp server1:/bricks/brick1/gv0 server2:/bricks/brick1/gv0 server3:/bricks/brick1/gv0 server4:/bricks/brick1/gv0
7. All three modes combined, requires at least 8 servers. stripe 2 replica 2, with every 4 nodes forming one group.
gluster volume create gv0 stripe 2 replica 2 transport tcp server1:/bricks/brick1/gv0 server2:/bricks/brick1/gv0 server3:/bricks/brick1/gv0 server4:/bricks/brick1/gv0 server5:/bricks/brick1/gv0 server6:/bricks/brick1/gv0 server7:/bricks/brick1/gv0 server8:/bricks/brick1/gv0
Note: if you create the data directory on the root partition, the error below is reported; you can append the force parameter to the command,
e.g.: gluster volume create gv0 replica 2 server1:/bricks/brick1/gv0 server2:/bricks/brick1/gv0 force
報錯內容: Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to avoid this. See: http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/.
Do you still want to continue?
(y/n) y
volume create: gv0: failed: The brick server1:/bricks/brick1/gv0 is being created in the root partition. It is recommended that you don't use the system's root partition for storage backend. Or use 'force' at the end of the command if you want to override this behavior.
gluster volume create gv0 replica 2 server1:/bricks/brick1/gv0 server2:/bricks/brick1/gv0 force
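Given the split-brain warning above, a replica 3 arbiter layout is the safer choice; a minimal sketch, assuming a third node server3 with the same brick path:
gluster volume create gv0 replica 3 arbiter 1 server1:/bricks/brick1/gv0 server2:/bricks/brick1/gv0 server3:/bricks/brick1/gv0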
8) Start the volume
gluster volume start gv0
9) Inspect the volume
gluster volume info
Stop a volume:   gluster volume stop <VOLNAME>
Delete a volume: gluster volume delete <VOLNAME>   Note: after deleting a volume, you must also delete the ( .glusterfs/ .trashcan/ ) directories inside the brick (data directory), as sketched below; otherwise a new volume created on the same brick will see files fail to distribute, or the volume type get confused.
Remove a node:   gluster peer detach <NODENAME>
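A cleanup sketch for reusing a brick directory, following the gv0 paths from above; the setfattr calls (an addition beyond the original note) clear the volume-id/gfid extended attributes Gluster leaves on the brick root:
rm -rf /bricks/brick1/gv0/.glusterfs /bricks/brick1/gv0/.trashcan
setfattr -x trusted.glusterfs.volume-id /bricks/brick1/gv0
setfattr -x trusted.gfid /bricks/brick1/gv0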
馍忽。
Add a GlusterFS node:
gluster peer probe swarm-node-3
gluster volume add-brick models swarm-node-3:/opt/gluster/data
Note: for replicated or striped volumes, the number of bricks added each time must be a multiple of the replica or stripe count.
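After adding bricks, existing files are not moved automatically; a rebalance (the same command family covered in the rebalance section below) spreads data onto the new bricks:
gluster volume rebalance models start
gluster volume rebalance models status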
Configure a volume:
gluster volume set <VOLNAME> <OPTION> <VALUE>
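For example, with a standard option such as network.ping-timeout (the value here is purely illustrative):
gluster volume set models network.ping-timeout 10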
Shrink a volume:
First migrate the data to other available bricks; remove the brick only after the migration finishes:
gluster volume remove-brick models swarm-node-2:/opt/gluster/data swarm-node-3:/opt/gluster/data start
After running start, the status command shows the removal progress:
gluster volume remove-brick models swarm-node-2:/opt/gluster/data swarm-node-3:/opt/gluster/data status
To delete the brick directly without migrating the data (note: newer Gluster releases require 'force' instead of 'commit' for this):
gluster volume remove-brick models swarm-node-2:/opt/gluster/data swarm-node-3:/opt/gluster/data commit
注意徒探,如果是復制卷或者條帶卷瓦呼,則每次移除的Brick數(shù)必須是replica或者stripe的整數(shù)倍。
Expand a volume:
gluster volume add-brick models swarm-node-2:/opt/gluster/data
Repair command:
gluster volume replace-brick models swarm-node-2:/opt/gluster/data swarm-node-3:/opt/gluster/data commit force
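On a replicated volume, self-heal onto the new brick can then be triggered and monitored with the standard heal subcommands:
gluster volume heal models full
gluster volume heal models info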
Migrate a volume:
gluster volume replace-brick models swarm-node-2:/opt/gluster/data swarm-node-3:/opt/gluster/data start
pause pauses the migration:
gluster volume replace-brick models swarm-node-2:/opt/gluster/data swarm-node-3:/opt/gluster/data pause
abort aborts the migration:
gluster volume replace-brick models swarm-node-2:/opt/gluster/data swarm-node-3:/opt/gluster/data abort
status shows the migration status:
gluster volume replace-brick models swarm-node-2:/opt/gluster/data swarm-node-3:/opt/gluster/data status
After migration finishes, use commit to make it take effect:
gluster volume replace-brick models swarm-node-2:/opt/gluster/data swarm-node-3:/opt/gluster/data commit
Rebalance a volume:
gluster volume rebalance models fix-layout start
gluster volume rebalance models start
gluster volume rebalance models start force
gluster volume rebalance models status
gluster volume rebalance models stop
gluster performance tuning:
Enable quota on a given volume (models is the volume name):
gluster volume quota models enable
Limit / (i.e. the volume root) of models to at most 80GB of space:
gluster volume quota models limit-usage / 80GB
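To review the configured limits and current usage:
gluster volume quota models list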
Set the cache to 4GB:
gluster volume set models performance.cache-size 4GB
Enable asynchronous, background flush operations:
gluster volume set models performance.flush-behind on
Set the number of IO threads to 32:
gluster volume set models performance.io-thread-count 32
Enable write-behind (data is written to the cache first, then to disk):
gluster volume set models performance.write-behind on
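Options set this way appear under 'Options Reconfigured' in the volume info output, which is a quick way to verify them:
gluster volume info models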
Deploy the GlusterFS client and mount the GlusterFS filesystem (the client must have the glusterfs hosts entries configured, otherwise it errors out):
yum install -y glusterfs glusterfs-fuse
mkdir -p /opt/gfsmnt
mount -t glusterfs swarm-manager:models /opt/gfsmnt/
Verify the mount:
mount -t fuse.glusterfs
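To make the client mount survive reboots, an fstab entry along these lines is the usual approach (_netdev defers mounting until the network is up):
swarm-manager:models /opt/gfsmnt glusterfs defaults,_netdev 0 0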
View volumes:
gluster volume list          (list all volumes in the cluster)
gluster volume info [all]    (show information for the cluster's volumes)
gluster volume status [all]  (show status of the cluster's volumes)
Change the volume transport type
1. First unmount the mounted directory:
umount /mnt
2. Stop the volume:
gluster volume stop test-volume
3. Change the volume's transport type
Syntax: gluster volume set test-volume config.transport tcp,rdma OR tcp OR rdma
Example: gluster volume set test-volume config.transport tcp
Rebalance a volume
Syntax: gluster volume rebalance <VOLNAME> fix-layout start
Example: gluster volume rebalance test-volume fix-layout start
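A fix-layout run can be monitored and stopped with the matching subcommands (assuming the same test-volume):
gluster volume rebalance test-volume status
gluster volume rebalance test-volume stop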