Outline
I. Installation and Deployment
- Prepare the cluster's base configuration
- Configure ceph-osd
- Start the ceph-osd service
All steps below are performed on the VM osd2 (192.168.10.43).
1. Prepare the cluster's base configuration
Copy the configuration files already created on monosd (192.168.10.42) to this machine:
bash> scp root@monosd:/etc/ceph/ceph.conf /etc/ceph/ceph.conf
bash> scp root@monosd:/var/lib/ceph/bootstrap-osd/ceph.keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
bash> scp root@monosd:/etc/ceph/ceph.client.admin.keyring /etc/ceph/ceph.client.admin.keyring
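Before moving on, it is worth confirming that all three files actually arrived. A minimal sketch; check_files is a hypothetical helper written for this note, not a ceph tool:

```shell
# Hypothetical helper (not part of ceph): fail fast if any of the copied
# files is missing before moving on to the OSD configuration step.
check_files() {
  missing=0
  for f in "$@"; do
    [ -f "$f" ] || { echo "missing: $f"; missing=1; }
  done
  [ "$missing" -eq 0 ]
}

# On the OSD node you would call it with the three paths copied above:
#   check_files /etc/ceph/ceph.conf \
#               /var/lib/ceph/bootstrap-osd/ceph.keyring \
#               /etc/ceph/ceph.client.admin.keyring && echo "all present"
```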
2. Configure ceph-osd
bash> ceph-volume lvm prepare --data /dev/sdb
bash> ceph-volume lvm list
---
====== osd.0 =======

  [block]       /dev/ceph-920942d0-38fc-4b38-9193-81677af5d5e5/osd-block-da930cdb-d64c-481f-ac0c-e0d434adf2d8

      block device              /dev/ceph-920942d0-38fc-4b38-9193-81677af5d5e5/osd-block-da930cdb-d64c-481f-ac0c-e0d434adf2d8
      block uuid                2CJorX-iXz0-Tqjn-Zrnz-3hdf-hOEi-1eJWtQ
      cephx lockbox secret
      cluster fsid              611b25ed-0794-43a5-954c-26e2ba4191a3
      cluster name              ceph
      crush device class        None
      encrypted                 0
      osd fsid                  da930cdb-d64c-481f-ac0c-e0d434adf2d8
      osd id                    0
      osdspec affinity
      type                      block
      vdo                       0
      devices                   /dev/sdb
bash> ceph-volume lvm activate 0 da930cdb-d64c-481f-ac0c-e0d434adf2d8
#syntax: ceph-volume lvm activate {ID} {FSID}
- /dev/sdb is a free disk, found with lsblk or fdisk -l; substitute the device that applies to your machine.
- {ID} and {FSID} are the "osd id" and "osd fsid" values reported by ceph-volume lvm list above.
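Instead of copying {ID} and {FSID} by hand, they can be pulled out of the listing with awk. A sketch; the sample string below stands in for real ceph-volume lvm list output (on a live node you would pipe the command itself into awk):

```shell
# Extract "osd id" and "osd fsid" from `ceph-volume lvm list` output.
# `sample` mimics the relevant lines of the output shown above.
sample='====== osd.0 =======
  osd fsid                  da930cdb-d64c-481f-ac0c-e0d434adf2d8
  osd id                    0'
OSD_ID=$(printf '%s\n' "$sample" | awk '$1=="osd" && $2=="id" {print $3}')
OSD_FSID=$(printf '%s\n' "$sample" | awk '$1=="osd" && $2=="fsid" {print $3}')
# Build the activate command from the extracted values:
echo "ceph-volume lvm activate $OSD_ID $OSD_FSID"
# prints: ceph-volume lvm activate 0 da930cdb-d64c-481f-ac0c-e0d434adf2d8
```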
3. Start the ceph-osd service
Enable the service at boot, then start it.
#the number after @ is the OSD {ID}
bash> systemctl enable ceph-osd@0
bash> systemctl start ceph-osd@0
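ceph-osd@ is a systemd template unit: the text after the @ selects the instance, here the OSD id. A small sketch that builds the unit name from a variable (OSD_ID=0 matches the example above) instead of hard-coding it:

```shell
# Derive the systemd unit name from the OSD id rather than typing it twice.
OSD_ID=0
unit="ceph-osd@${OSD_ID}"
# On a real node (as root) you would then run:
#   systemctl enable "$unit" && systemctl start "$unit"
echo "$unit"
# prints: ceph-osd@0
```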
#ceph -s now shows the status of the newly started OSD
bash> ceph -s
---
  cluster:
    id:     611b25ed-0794-43a5-954c-26e2ba4191a3
    health: HEALTH_WARN
            Reduced data availability: 1 pg inactive
            Degraded data redundancy: 1 pg undersized
            OSD count 1 < osd_pool_default_size 3

  services:
    mon: 1 daemons, quorum monosd (age 46m)
    mgr: monosd_mgr(active, starting, since 0.556561s)
    osd: 1 osds: 1 up (since 2m), 1 in (since 2m)

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   1.2 GiB used, 19 GiB / 20 GiB avail
    pgs:     100.000% pgs not active
             1 undersized+peered
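The HEALTH_WARN above is expected at this point: the default pool replica count is 3, so with a single OSD every PG stays undersized and inactive, and it clears once three OSDs are in. For illustration, the default this warning refers to lives in the [global] section of ceph.conf (values shown are the upstream defaults, not something this deployment needs to set):

```ini
; /etc/ceph/ceph.conf fragment (illustration only)
[global]
; each PG wants this many replicas; with only 1 OSD up, every PG is undersized
osd_pool_default_size = 3
```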
Repeat the same steps on the other two machines. Once done, ceph -s shows:
bash> ceph -s
---
  cluster:
    id:     611b25ed-0794-43a5-954c-26e2ba4191a3
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum monosd (age 95s)
    mgr: monosd_mgr(active, since 83s)
    osd: 3 osds: 3 up (since 79s), 3 in (since 7m)

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   4.2 GiB used, 56 GiB / 60 GiB avail
    pgs:     1 active+clean
With that, the stripped-down base Ceph cluster deployment is complete.
REF. Ceph 15.25 Manual Deployment Series Notes