CephFS
Ceph Filesystem: Ceph's distributed file system, mainly used for file sharing, similar to NFS.
MDS: Metadata Service. CephFS depends on MDS to run; the MDS daemon is ceph-mds.
Roles of ceph-mds:
    manages the ceph-mds process itself
    stores the metadata of the files kept on CephFS
    coordinates access to the Ceph storage cluster
Deploy the MDS service
MDS can run on mgr or mon nodes; here we install ceph-mds on ceph-mgr1:
ceph@ceph-mgr1:~$ sudo apt -y install ceph-mds
On the ceph-deploy node, create the MDS on ceph-mgr1:
ceph@ceph-deploy:~/ceph-cluster$ ceph-deploy mds create ceph-mgr1
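To confirm the daemon actually started on ceph-mgr1, a quick check on that node is shown below; this is a minimal sketch, and the unit name simply follows the ceph-mds@<hostname> pattern used later in this setup.
#check that the ceph-mds daemon is running
ceph@ceph-mgr1:~$ sudo systemctl status ceph-mds@ceph-mgr1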
Create the metadata pool and the data pool that CephFS will use. Below, a metadata pool named cephfs-metadata and a data pool named cephfs-data are created; the two trailing numbers are the pg count (placement groups) and the pgp count (placement groups for placement).
ceph@ceph-deploy:~/ceph-cluster$ ceph osd pool create cephfs-metadata 32 32
ceph@ceph-deploy:~/ceph-cluster$ ceph osd pool create cephfs-data 64 64
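Before building the file system, the pools and their pg/pgp settings can be double-checked from the deploy node; a minimal verification sketch:
#list the pools together with pg_num, pgp_num and other settings
ceph@ceph-deploy:~/ceph-cluster$ ceph osd pool ls detail
#summary of placement group states
ceph@ceph-deploy:~/ceph-cluster$ ceph pg stat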
Check the Ceph cluster status:
ceph@ceph-deploy:~/ceph-cluster$ ceph -s
  cluster:
    id:     98762d01-8474-493a-806e-fcb0dfc5fdb2
    health: HEALTH_WARN
            1 pool(s) do not have an application enabled
  services:
    mon: 1 daemons, quorum ceph-mon1 (age 9d)
    mgr: ceph-mgr1(active, since 9d)
    mds: 1/1 daemons up
    osd: 11 osds: 11 up (since 9d), 11 in (since 11d)
    rgw: 1 daemon active (1 hosts, 1 zones)
  data:
    volumes: 1/1 healthy
    pools:   10 pools, 329 pgs
    objects: 650 objects, 1.4 GiB
    usage:   8.5 GiB used, 211 GiB / 220 GiB avail
    pgs:     329 active+clean
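The HEALTH_WARN above is caused by a pool that has no application tag; it is unrelated to CephFS, since ceph fs new (run below) tags the metadata and data pools itself. If you want to clear the warning, a hedged sketch -- the pool name and the application to tag depend on what ceph health detail reports (rbd1-data and rbd are assumptions here):
#find the pool that triggers the warning
ceph@ceph-deploy:~/ceph-cluster$ ceph health detail
#tag it with the application it is actually used for
ceph@ceph-deploy:~/ceph-cluster$ ceph osd pool application enable rbd1-data rbd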
Create the CephFS file system
ceph@ceph-deploy:~/ceph-cluster$ ceph fs new mycephfs cephfs-metadata cephfs-data
Check the file system status
ceph@ceph-deploy:~/ceph-cluster$ ceph fs ls
name: mycephfs, metadata pool: cephfs-metadata, data pools: [cephfs-data ]
ceph@ceph-deploy:~/ceph-cluster$ ceph fs status mycephfs
mycephfs - 1 clients
========
RANK  STATE      MDS        ACTIVITY     DNS    INOS   DIRS   CAPS
 0    active  ceph-mgr1  Reqs:    0 /s    65     43     21     30
      POOL         TYPE     USED  AVAIL
cephfs-metadata  metadata  1776k  66.3G
  cephfs-data      data    1364M  66.3G
MDS version: ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a) pacific (stable)
Verify the MDS state; it should be active:
ceph@ceph-deploy:~/ceph-cluster$ ceph mds stat
mycephfs:1 {0=ceph-mgr1=up:active}
Create an account with CephFS permissions
ceph@ceph-deploy:~/ceph-cluster$ ceph auth add client.huahaulincephfs mon "allow rw" osd "allow rwx pool=cephfs-dada"
added key for client.huahaulincephfs
Verify:
ceph@ceph-deploy:~/ceph-cluster$ ceph auth get client.huahaulincephfs
[client.huahaulincephfs]
key = AQDtrzJhUzNSOBAAnepZKifX1VAGoj31qAfjbw==
caps mon = "allow rw"
caps osd = "allow rwx pool=cephfs-dada"
exported keyring for client.huahaulincephfs
Create the keyring file
ceph@ceph-deploy:~/ceph-cluster$ ceph auth get client.huahaulincephfs -o ceph.client.huahaulincephfs.keyring
exported keyring for client.huahaulincephfs
Create the key file
[root@ceph-client1 ceph]# ceph auth print-key client.huahaulincephfs > huahaulincephfs.key
Verify the keyring file
ceph@ceph-deploy:~/ceph-cluster$ cat ceph.client.huahaulincephfs.keyring
[client.huahaulincephfs]
key = AQDtrzJhUzNSOBAAnepZKifX1VAGoj31qAfjbw==
caps mon = "allow rw"
caps osd = "allow rwx pool=cephfs-dada"
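As an aside, the client could also have been created in a single step with ceph fs authorize, which generates the caps and prints the keyring, avoiding hand-typed pool names. A hedged sketch, assuming the client does not already exist:
#grant client.huahaulincephfs rw access to the root of mycephfs and save the keyring
ceph@ceph-deploy:~/ceph-cluster$ ceph fs authorize mycephfs client.huahaulincephfs / rw > ceph.client.huahaulincephfs.keyring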
Mount mycephfs on the client
Install the client package ceph-common (the matching yum repository must be configured); CentOS 7 is used as the client here.
yum install ceph-common -y
Authorization
From ceph-deploy, distribute the keyring and key of the huahaulincephfs client created above to the client:
ceph@ceph-deploy:~/ceph-cluster$ sudo scp ceph.conf ceph.client.huahaulincephfs.keyring huahaulincephfs.key root@ceph-client1:/etc/ceph/
Verify the client's permissions
[root@ceph-client1 ceph]# ceph --user huahaulincephfs -s
  cluster:
    id:     98762d01-8474-493a-806e-fcb0dfc5fdb2
    health: HEALTH_WARN
            1 pool(s) do not have an application enabled
  services:
    mon: 1 daemons, quorum ceph-mon1 (age 9d)
    mgr: ceph-mgr1(active, since 9d)
    mds: 1/1 daemons up
    osd: 11 osds: 11 up (since 9d), 11 in (since 11d)
    rgw: 1 daemon active (1 hosts, 1 zones)
  data:
    volumes: 1/1 healthy
    pools:   10 pools, 329 pgs
    objects: 650 objects, 1.4 GiB
    usage:   8.5 GiB used, 211 GiB / 220 GiB avail
    pgs:     329 active+clean
Mount CephFS
There are two mount methods: kernel-space and user-space (FUSE). The kernel-space mount is recommended; it requires the kernel's ceph module, while the user-space mount requires ceph-fuse. Use the kernel mount unless the kernel is too old and lacks the ceph module, in which case install ceph-fuse and mount that way.
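Before attempting the kernel mount, you can check whether the client kernel actually ships the ceph module; if it does not, ceph-fuse is the fallback. A hedged sketch for this CentOS 7 client (the ceph-fuse package is assumed to come from the same ceph repo; the mount point and client name match the ones used below):
#check for the kernel CephFS client
[root@ceph-client1 ~]# modprobe ceph && lsmod | grep '^ceph'
#user-space fallback: install ceph-fuse and mount with the same client identity
[root@ceph-client1 ~]# yum install -y ceph-fuse
[root@ceph-client1 ~]# ceph-fuse -n client.huahaulincephfs -m 192.168.241.12:6789 /datafs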
The following demonstrates the kernel-space mount. The credential can be passed in two ways: as a secret key file or as the secret key itself.
#CephFS is mounted via port 6789 on the mon nodes, and only nodes that have joined the mon cluster can be used as mount addresses. After adding the extra mon nodes to the cluster, the first mount attempt fails:
[root@ceph-client1 ceph]# mount -t ceph 192.168.241.12:6789,192.168.241.13:6789,192.168.241.14:6789:/ /datafs -o name=huahaulincephfs,secret=AQDtrzJhUzNSOBAAnepZKifX1VAGoj31qAfjbw==
mount error 13 = Permission denied
The cause is that the client has no mds caps (and its osd cap also points at the misspelled pool cephfs-dada), so update the caps to add mds permission and the correct pool name:
[root@ceph-client1 ceph]# ceph auth get client.huahaulincephfs
exported keyring for client.huahaulincephfs
[client.huahaulincephfs]
key = AQDtrzJhUzNSOBAAnepZKifX1VAGoj31qAfjbw==
caps mon = "allow rw"
caps osd = "allow rwx pool=cephfs-dada"
[root@ceph-client1 ceph]# ceph auth caps client.huahaulincephfs mon "allow r" mds "allow rw" osd "allow rwx pool=cephfs-data"
updated caps for client.huahaulincephfs
[root@ceph-client1 ceph]# ceph auth get client.huahaulincephfs
exported keyring for client.huahaulincephfs
[client.huahaulincephfs]
key = AQDtrzJhUzNSOBAAnepZKifX1VAGoj31qAfjbw==
caps mds = "allow rw"
caps mon = "allow r"
caps osd = "allow rwx pool=cephfs-data"
[root@ceph-client1 /]# mount -t ceph 192.168.241.12:6789,192.168.241.13:6789,192.168.241.14:6789:/ /datafs/ -o name=huahaulincephfs,secret=AQDtrzJhUzNSOBAAnepZKifX1VAGoj31qAfjbw==
Or mount using the key file:
mount -t ceph 192.168.241.12:6789,192.168.241.13:6789,192.168.241.14:6789:/ /datafs/ -o name=huahaulincephfs,secretfile=/etc/ceph/huahaulincephfs.key
The mount succeeds!
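A quick sanity check that the kernel mount is really in place (a minimal sketch):
#confirm the cephfs mount, its options and the reported capacity
[root@ceph-client1 /]# mount | grep /datafs
[root@ceph-client1 /]# df -h /datafs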
#Verify the data. Before writing, cephfs-data already uses 1.6 GiB:
[root@ceph-client1 /]# ceph df
--- RAW STORAGE ---
CLASS    SIZE     AVAIL    USED    RAW USED  %RAW USED
hdd    220 GiB  206 GiB   14 GiB    14 GiB       6.41
TOTAL  220 GiB  206 GiB   14 GiB    14 GiB       6.41
--- POOLS ---
POOL                   ID  PGS   STORED  OBJECTS     USED  %USED  MAX AVAIL
device_health_metrics   1    1      0 B        0      0 B      0     64 GiB
mypool                  2   32  1.2 MiB        1  3.5 MiB      0     64 GiB
.rgw.root               3   32  1.3 KiB        4   48 KiB      0     64 GiB
default.rgw.log         4   32  3.6 KiB      209  408 KiB      0     64 GiB
default.rgw.control     5   32      0 B        8      0 B      0     64 GiB
default.rgw.meta        6    8      0 B        0      0 B      0     64 GiB
myrbd1                  7   64  829 MiB      223  2.4 GiB   1.25     64 GiB
cephfs-metadata         8   32  640 KiB       23  2.0 MiB      0     64 GiB
cephfs-data             9   64  563 MiB      179  1.6 GiB   0.85     64 GiB
rbd1-data              10   32  538 MiB      158  1.6 GiB   0.81     64 GiB
#Write 200 MB of data
[root@ceph-client1 /]# dd if=/dev/zero of=/datafs/test bs=1M count=200
200+0 records in
200+0 records out
209715200 bytes (210 MB) copied, 1.03247 s, 203 MB/s
#After writing 200 MB, the cephfs-data pool grows to 1.8 GiB:
[root@ceph-client1 /]# ceph df
--- RAW STORAGE ---
CLASS    SIZE     AVAIL    USED    RAW USED  %RAW USED
hdd    220 GiB  205 GiB   15 GiB    15 GiB       6.93
TOTAL  220 GiB  205 GiB   15 GiB    15 GiB       6.93
--- POOLS ---
POOL                   ID  PGS   STORED  OBJECTS     USED  %USED  MAX AVAIL
device_health_metrics   1    1      0 B        0      0 B      0     63 GiB
mypool                  2   32  1.2 MiB        1  3.5 MiB      0     63 GiB
.rgw.root               3   32  1.3 KiB        4   48 KiB      0     63 GiB
default.rgw.log         4   32  3.6 KiB      209  408 KiB      0     63 GiB
default.rgw.control     5   32      0 B        8      0 B      0     63 GiB
default.rgw.meta        6    8      0 B        0      0 B      0     63 GiB
myrbd1                  7   64  829 MiB      223  2.4 GiB   1.26     63 GiB
cephfs-metadata         8   32  667 KiB       23  2.0 MiB      0     63 GiB
cephfs-data             9   64  627 MiB      179  1.8 GiB   0.96     63 GiB
rbd1-data              10   32  538 MiB      158  1.6 GiB   0.82     63 GiB
#Check the status of the mount point /datafs
[root@ceph-client1 ceph]# stat -f /datafs/
  File: "/datafs/"
    ID: b1d1181888b4b15b Namelen: 255     Type: ceph
Block size: 4194304    Fundamental block size: 4194304
Blocks: Total: 16354      Free: 16191      Available: 16191
Inodes: Total: 179        Free: -1
#Configure mounting at boot
[root@ceph-client1 ceph]# vi /etc/fstab
192.168.241.12:6789,192.168.241.13:6789,192.168.241.14:6789:/ /datafs ceph defaults,name=huahaulincephfs,secretfile=/etc/ceph/huahaulincephfs.key,_netdev 0 0
[root@ceph-client1 ceph]# mount -a
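If /datafs is still mounted from the manual test above, mount -a changes nothing; to exercise the fstab entry itself before trusting it across a reboot, unmount and remount (a minimal sketch):
[root@ceph-client1 ceph]# umount /datafs
[root@ceph-client1 ceph]# mount -a
[root@ceph-client1 ceph]# df -h /datafs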
MDS high availability
#Check the MDS status: currently a single node
[root@ceph-client1 /]# ceph mds stat
mycephfs:1 {0=ceph-mgr1=up:active}
#Add more MDS roles. ceph-mgr1 is currently the only MDS; next, add ceph-mgr2, ceph-mon2 and ceph-mon3 as MDS roles to build a two-active / two-standby, high-performance layout.
Install the ceph-mds package on ceph-mgr2, ceph-mon2 and ceph-mon3, running the same install command on each node:
dyl@ceph-mgr2:~$ sudo apt install -y ceph-mds
dyl@ceph-mon2:~$ sudo apt -y install ceph-mds
dyl@ceph-mon3:/etc/ceph$ sudo apt -y install ceph-mds
#Add the MDS daemons; run on the ceph-deploy node
ceph@ceph-deploy:~/ceph-cluster$ ceph-deploy mds create ceph-mgr2
ceph@ceph-deploy:~/ceph-cluster$ ceph-deploy mds create ceph-mon2
ceph@ceph-deploy:~/ceph-cluster$ ceph-deploy mds create ceph-mon3
#Check the MDS status; four standby daemons are now up alongside the active one:
ceph@ceph-deploy:~/ceph-cluster$ ceph mds stat
mycephfs:1 {0=ceph-mgr1=up:active} 4 up:standby
#Verify the MDS cluster state: four standby and one active. ceph-mon1 had apparently been added as an MDS earlier, which is why it also appears as a standby.
ceph@ceph-deploy:~/ceph-cluster$ ceph fs status
mycephfs - 2 clients
========
RANK  STATE      MDS        ACTIVITY     DNS    INOS   DIRS   CAPS
 0    active  ceph-mgr1  Reqs:    0 /s    66     44     21     42
      POOL         TYPE     USED  AVAIL
cephfs-metadata  metadata  2108k  63.2G
  cephfs-data      data    1964M  63.2G
STANDBY MDS
ceph-mgr2
ceph-mon1
ceph-mon2
ceph-mon3
MDS version: ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a) pacific (stable)
#Current file system state
ceph@ceph-deploy:~/ceph-cluster$ ceph fs get mycephfs
Filesystem 'mycephfs' (1)
fs_name mycephfs
epoch 4
flags 12
created 2021-08-25T08:46:24.916762-0700
modified 2021-08-25T08:46:25.923608-0700
tableserver 0
root 0
session_timeout 60
session_autoclose 300
max_file_size 1099511627776
required_client_features {}
last_failure 0
last_failure_osd_epoch 0
compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
max_mds 1
in 0
up {0=14145}
failed
damaged
stopped
data_pools [9]
metadata_pool 8
inline_data disabled
balancer
standby_count_wanted 1
[mds.ceph-mgr1{0:14145} state up:active seq 68 addr [v2:192.168.241.15:6802/203148310,v1:192.168.241.15:6803/203148310]]
#To run two active plus two standby, set the number of active MDS daemons to 2. The four MDS daemons intended for this layout are ceph-mgr1, ceph-mgr2, ceph-mon2 and ceph-mon3.
ceph@ceph-deploy:~/ceph-cluster$ ceph fs set mycephfs max_mds 2
#Verify that two MDS daemons are now active: ceph-mgr1 and ceph-mon3.
ceph@ceph-deploy:~/ceph-cluster$ ceph fs status
mycephfs - 2 clients
========
RANK  STATE      MDS        ACTIVITY     DNS    INOS   DIRS   CAPS
 0    active  ceph-mgr1  Reqs:    0 /s    66     44     21     42
 1    active  ceph-mon3  Reqs:    0 /s    10     13     11      0
      POOL         TYPE     USED  AVAIL
cephfs-metadata  metadata  2180k  63.2G
  cephfs-data      data    1964M  63.2G
STANDBY MDS
ceph-mgr2
ceph-mon1
ceph-mon2
MDS version: ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a) pacific (stable)
#Set up MDS high availability. The active daemons are ceph-mgr1 and ceph-mon3; the standbys are ceph-mgr2, ceph-mon1 and ceph-mon2 (leave ceph-mon1 aside for now). The next step is to assign a dedicated standby to each of the two active MDS daemons, so that every active has its own backup.
Since ceph-mon1 should not be part of the MDS cluster, remove it by simply stopping its mds service; run on ceph-mon1:
ceph@ceph-mon1:/etc/ceph$ sudo systemctl stop ceph-mds@ceph-mon1.service
After this, ceph-mon1 no longer shows up in the ceph fs status output.
#Configure MDS high availability on the ceph-deploy node
ceph@ceph-deploy:~/ceph-cluster$ cd /var/lib/ceph/ceph-cluster
#Append to ceph.conf: ceph-mon2 as the standby for ceph-mon3, and ceph-mgr2 as the standby for ceph-mgr1
ceph@ceph-deploy:~/ceph-cluster$ vi ceph.conf
[mds.ceph-mon2]
mds_standby_for_name = ceph-mon3
mds_standby_replay = true
[mds.ceph-mgr2]
mds_standby_for_name = ceph-mgr1
mds_standby_replay = true
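Note that these per-daemon mds_standby_for_name / mds_standby_replay settings are the legacy ceph.conf way of expressing a hot standby; on recent releases the usual approach is a per-filesystem flag that lets a standby follow the active MDS journal. A hedged sketch of that alternative:
#enable standby-replay for the mycephfs file system
ceph@ceph-deploy:~/ceph-cluster$ ceph fs set mycephfs allow_standby_replay true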
#Push this configuration to every MDS node to keep the configs consistent
ceph@ceph-deploy:~/ceph-cluster$ ceph-deploy --overwrite-conf config push ceph-mgr1
ceph@ceph-deploy:~/ceph-cluster$ ceph-deploy --overwrite-conf config push ceph-mgr2
ceph@ceph-deploy:~/ceph-cluster$ ceph-deploy --overwrite-conf config push ceph-mon2
ceph@ceph-deploy:~/ceph-cluster$ ceph-deploy --overwrite-conf config push ceph-mon3
#Restart the mds service on each MDS node
ceph@ceph-mgr1:~$ sudo systemctl restart ceph-mds@ceph-mgr1.service
ceph@ceph-mgr2:/etc/ceph$ sudo systemctl restart ceph-mds@ceph-mgr2.service
ceph@ceph-mon2:/etc/ceph$ sudo systemctl restart ceph-mds@ceph-mon2.service
ceph@ceph-mon3:/etc/ceph$ sudo systemctl restart ceph-mds@ceph-mon3.service
#Check the CephFS status again: the MDS cluster now has two active and two standby daemons. Note that the active members have changed; because the daemons were restarted one after another, a standby was promoted to active while its peer was restarting.
ceph@ceph-deploy:~/ceph-cluster$ ceph fs status
mycephfs - 2 clients
========
RANK  STATE      MDS        ACTIVITY     DNS    INOS   DIRS   CAPS
 0    active  ceph-mgr2  Reqs:    0 /s    91     44     21      2
 1    active  ceph-mon2  Reqs:    0 /s    10     13     11      0
      POOL         TYPE     USED  AVAIL
cephfs-metadata  metadata  2228k  63.2G
  cephfs-data      data    1964M  63.2G
STANDBY MDS
ceph-mgr1
ceph-mon3
MDS version: ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a) pacific (stable)
#Test and verify high availability
Stop the mds service on any active MDS node, then check the status.
For example, the active daemons are currently ceph-mgr2 and ceph-mon2, paired with ceph-mgr1 and ceph-mon3 as their respective standbys.
ceph@ceph-deploy:~/ceph-cluster$ ceph fs status
mycephfs - 2 clients
========
RANK  STATE      MDS        ACTIVITY     DNS    INOS   DIRS   CAPS
 0    active  ceph-mgr2  Reqs:    0 /s    91     44     21      2
 1    active  ceph-mon2  Reqs:    0 /s    10     13     11      0
      POOL         TYPE     USED  AVAIL
cephfs-metadata  metadata  2228k  63.2G
  cephfs-data      data    1964M  63.2G
STANDBY MDS
ceph-mon3
ceph-mgr1
MDS version: ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a) pacific (stable)
Stop the mds service on ceph-mon2 and see whether ceph-mon3 takes over as active. On ceph-mon2:
ceph@ceph-mon2:/etc/ceph$ sudo systemctl stop ceph-mds@ceph-mon2.service
Check the fs cluster status: ceph-mon3 was not promoted; instead, ceph-mgr1 was promoted to active, and ceph-mon2 is no longer in the cluster.
ceph@ceph-deploy:~/ceph-cluster$ ceph fs status
mycephfs - 2 clients
========
RANK  STATE      MDS        ACTIVITY     DNS    INOS   DIRS   CAPS
 0    active  ceph-mgr2  Reqs:    0 /s    91     44     21      2
 1    active  ceph-mgr1  Reqs:    0 /s    10     13     11      0
      POOL         TYPE     USED  AVAIL
cephfs-metadata  metadata  2228k  63.2G
  cephfs-data      data    1964M  63.2G
STANDBY MDS
ceph-mon3
MDS version: ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a) pacific (stable)
Start the mds service on ceph-mon2 again; it rejoins the cluster, but only in the standby state.
ceph@ceph-mon2:/etc/ceph$ sudo systemctl start ceph-mds@ceph-mon2.service
ceph@ceph-deploy:~/ceph-cluster$ ceph fs status
mycephfs - 2 clients
========
RANK  STATE      MDS        ACTIVITY     DNS    INOS   DIRS   CAPS
 0    active  ceph-mgr2  Reqs:    0 /s    91     44     21      2
 1    active  ceph-mgr1  Reqs:    0 /s    10     13     11      0
      POOL         TYPE     USED  AVAIL
cephfs-metadata  metadata  2228k  63.2G
  cephfs-data      data    1964M  63.2G
STANDBY MDS
ceph-mon3
ceph-mon2
MDS version: ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a) pacific (stable)
Now stop the mds on ceph-mgr2 and see what happens: this time ceph-mon2 is promoted to active.
ceph@ceph-mgr2:/etc/ceph$ sudo systemctl stop ceph-mds@ceph-mgr2.service
ceph@ceph-deploy:~/ceph-cluster$ ceph fs status
mycephfs - 2 clients
========
RANK  STATE      MDS        ACTIVITY     DNS    INOS   DIRS   CAPS
 0    active  ceph-mon2  Reqs:    0 /s    91     44     21      2
 1    active  ceph-mgr1  Reqs:    0 /s    10     13     11      0
      POOL         TYPE     USED  AVAIL
cephfs-metadata  metadata  2264k  63.2G
  cephfs-data      data    1964M  63.2G
STANDBY MDS
ceph-mon3
MDS version: ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a) pacific (stable)
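To return the cluster to the intended two-active / two-standby layout after these tests, it should be enough to start the stopped mds on ceph-mgr2 again and re-check (a minimal sketch):
ceph@ceph-mgr2:/etc/ceph$ sudo systemctl start ceph-mds@ceph-mgr2.service
ceph@ceph-deploy:~/ceph-cluster$ ceph fs status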