Manually Deploying a Ceph Cluster (Hammer)

1. Environment Preparation

1.1 Install the Operating System

[root@node242 ~]# cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)

1.2 Disable SELinux

[root@node242 ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
[root@node242 ~]# setenforce 0
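
A quick sanity check: setenforce 0 switches the running kernel to permissive mode immediately, and the config edit keeps SELinux disabled after the next reboot.

[root@node242 ~]# getenforce
Permissive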

1.3 Configure the yum Repository

[root@node242 ~]# curl -o /etc/yum.repos.d/openstack-mitaka-ceph-hammer-7.2-x86_64.repo http://172.30.120.146/repo.files/mitaka.x86.repo
[root@node242 ~]# yum clean all
[root@node242 ~]# yum makecache

1.4 Time Synchronization

[root@node242 ~]# yum install ntp -y
[root@node242 ~]# ntpdate cn.pool.ntp.org
 1 Aug 09:46:23 ntpdate[23508]: step time server 202.108.6.95 offset 973.598833 sec
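
ntpdate only corrects the clock once. To keep time synchronized across reboots, also enable the ntpd service that ships with the ntp package installed above:

[root@node242 ~]# systemctl enable ntpd
[root@node242 ~]# systemctl start ntpd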

2. Deploying the Ceph Cluster

2.1 Install Ceph

[root@node242 ~]# yum install ceph -y

2.2 Create the ceph.conf Configuration File

[root@node242 ~]# uuidgen
346436e3-a7f4-4fd9-9694-e9830faec4d5

[root@node242 ~]# vim /etc/ceph/ceph.conf
[global]
fsid = 346436e3-a7f4-4fd9-9694-e9830faec4d5
mon initial members = node242
mon host = 192.168.100.242
public network = 192.168.100.0/24
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
osd journal size = 1024
osd pool default size = 1
osd pool default min size = 1
osd pool default pg num = 128
osd pool default pgp num = 128
osd crush chooseleaf type = 1
osd mkfs type = xfs
mon clock drift allowed = 2
mon clock drift warn backoff = 30

[mon.node242]
host = node242
mon addr = 192.168.100.242:6789
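
Note that osd pool default size = 1 disables replication, which is only acceptable on a single-node test cluster like this one; the pg defaults are revisited in section 3. As an optional check that the file parses, ceph-conf can look up individual keys:

[root@node242 ~]# ceph-conf --lookup fsid
346436e3-a7f4-4fd9-9694-e9830faec4d5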

2.3 Deploy and Configure the Ceph Monitor

[root@node242 ~]# mkdir ceph
[root@node242 ~]# cd ceph/
[root@node242 ceph]# ls

Create a ceph.mon.keyring:

[root@node242 ceph]# ceph-authtool --create-keyring ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
creating ceph.mon.keyring

Create a ceph.client.admin.keyring:

[root@node242 ceph]# ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --set-uid=0 --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow'
creating /etc/ceph/ceph.client.admin.keyring

Import ceph.client.admin.keyring into ceph.mon.keyring:

[root@node242 ceph]# ceph-authtool ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
importing contents of /etc/ceph/ceph.client.admin.keyring into ceph.mon.keyring
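
An optional check: listing the combined keyring should now show both the mon. and client.admin entries with their keys and caps.

[root@node242 ceph]# ceph-authtool -l ceph.mon.keyring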

Generate a monitor map containing one Mon named ${HOSTNAME} (you can also write the hostname literally, e.g. node242):

[root@node242 ceph]# monmaptool --create --add ${HOSTNAME} 192.168.100.242 --fsid 346436e3-a7f4-4fd9-9694-e9830faec4d5 ./monmap
monmaptool: monmap file monmap
monmaptool: set fsid to 346436e3-a7f4-4fd9-9694-e9830faec4d5
monmaptool: writing epoch 0 to monmap (1 monitors)
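
Before using the map, monmaptool --print can confirm it contains the fsid set above and a single monitor entry (output abridged):

[root@node242 ceph]# monmaptool --print ./monmap
monmaptool: monmap file ./monmap
epoch 0
fsid 346436e3-a7f4-4fd9-9694-e9830faec4d5
0: 192.168.100.242:6789/0 mon.node242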

Create the directory that stores the monitor data; it will mainly contain the keyring and store.db:

[root@node242 ceph]# mkdir /var/lib/ceph/mon/ceph-${HOSTNAME}

Use the monmap and ceph.mon.keyring to initialize the data files (keyring and store.db) that the ceph-mon daemon needs when it starts:

[root@node242 ceph]# ceph-mon --mkfs -i ${HOSTNAME} --monmap ./monmap --keyring ./ceph.mon.keyring
2019-07-31 19:27:03.659441 7f2f416b0880 -1 did not load config file, using default settings.
ceph-mon: created monfs at /var/lib/ceph/mon/ceph-node242 for mon.node242

Create a done file in the monitor data directory to prevent the MON from being re-initialized:

[root@node242 ceph]# touch /var/lib/ceph/mon/ceph-${HOSTNAME}/done

Start ceph-mon:

[root@node242 ceph]# /etc/init.d/ceph start mon.node242
=== mon.node242 ===
Starting Ceph mon.node242 on node242...
Running as unit run-10346.service.
Starting ceph-create-keys on node242...
[root@node242 ceph]# /etc/init.d/ceph status mon.node242
=== mon.node242 ===
mon.node242: running {"version":"0.94.5"}

Check the cluster status:

[root@node242 ceph]# ceph -s
    cluster 346436e3-a7f4-4fd9-9694-e9830faec4d5
     health HEALTH_ERR
            64 pgs stuck inactive
            64 pgs stuck unclean
            no osds
     monmap e1: 1 mons at {node242=192.168.100.242:6789/0}
            election epoch 2, quorum 0 node242
     osdmap e1: 0 osds: 0 up, 0 in
      pgmap v2: 64 pgs, 1 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                  64 creating
[root@node242 ceph]# ceph osd tree
ID WEIGHT TYPE NAME    UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1      0 root default
[root@node242 ~]# ceph osd pool ls detail
pool 0 'rbd' replicated size 1 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 flags hashpspool stripe_width 0

2.4 Deploy and Configure the Ceph OSDs

List the disks on the OSD node:

[root@node242 ~]# lsblk 
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda             252:0    0   80G  0 disk 
├─vda1          252:1    0  500M  0 part /boot
└─vda2          252:2    0 79.5G  0 part 
  ├─centos-root 253:0    0   75G  0 lvm  /
  └─centos-swap 253:1    0  3.9G  0 lvm  [SWAP]
vdb             252:16   0    8G  0 disk 
vdc             252:32   0    8G  0 disk 
vdd             252:48   0    8G  0 disk

Check the disk's partition table:

[root@node242 ~]# parted /dev/vdb print
Error: /dev/vdb: unrecognised disk label
Model: Virtio Block Device (virtblk)                                      
Disk /dev/vdb: 8590MB
Sector size (logical/physical): 512B/512B
Partition Table: unknown
Disk Flags:

Label the disk with a GPT partition table:

[root@node242 ~]# parted /dev/vdb mktable gpt
Information: You may need to update /etc/fstab.

Check the partition table again:

[root@node242 ~]# parted /dev/vdb print
Model: Virtio Block Device (virtblk)
Disk /dev/vdb: 8590MB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start  End  Size  File system  Name  Flags

Create partition 1 as the journal:

[root@node242 ~]# parted /dev/vdb  mkpart primary xfs 0% 20%
Information: You may need to update /etc/fstab.

Create partition 2 as the data partition:

[root@node242 ~]# parted /dev/vdb  mkpart primary xfs 20% 100%
Information: You may need to update /etc/fstab.

Check the resulting partitions:

[root@node242 ~]# lsblk /dev/vdb
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vdb    252:16   0    8G  0 disk 
├─vdb1 252:17   0  1.6G  0 part 
└─vdb2 252:18   0  6.4G  0 part

Create the OSD and let the cluster allocate an OSD ID:

[root@node242 ~]# ceph osd tree
ID WEIGHT TYPE NAME    UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1      0 root default
[root@node242 ~]# ceph osd create
0
[root@node242 ~]# ceph osd tree
ID WEIGHT  TYPE NAME        UP/DOWN REWEIGHT PRIMARY-AFFINITY 
-1 2.00000 root default                                       
 0       0 osd.0               down        0          1.00000 

Create the directory for the OSD data, then format and mount the data partition:

[root@node242 ~]# mkdir /var/lib/ceph/osd/ceph-0
[root@node242 ~]# mkfs.xfs -f /dev/vdb2 
meta-data=/dev/vdb2              isize=256    agcount=4, agsize=419392 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=1677568, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@node242 ~]# mount /dev/vdb2 /var/lib/ceph/osd/ceph-0/
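
Note that this mount is not persistent. A minimal sketch of surviving reboots via /etc/fstab, keyed on the filesystem UUID that blkid reports (substitute the UUID printed on your system for the placeholder):

[root@node242 ~]# blkid -s UUID -o value /dev/vdb2
[root@node242 ~]# echo "UUID=<uuid-from-blkid> /var/lib/ceph/osd/ceph-0 xfs defaults,noatime 0 0" >> /etc/fstab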

Configure the OSD journal, linking it by partition UUID so the path stays stable across reboots:

[root@node242 ~]# ll /dev/disk/by-partuuid/
total 0
lrwxrwxrwx 1 root root 10 Aug  1 11:18 b0d06586-00e8-42fe-bdad-9b47abee374e -> ../../vdb1
lrwxrwxrwx 1 root root 10 Aug  1 11:12 dce5f167-fc11-45b0-8cf2-d16b8923a1b9 -> ../../vdb2
[root@node242 ~]# ln -s /dev/disk/by-partuuid/b0d06586-00e8-42fe-bdad-9b47abee374e /var/lib/ceph/osd/ceph-0/journal
[root@node242 ~]# ls -l /var/lib/ceph/osd/ceph-0/
total 0
lrwxrwxrwx 1 root root 58 Aug  1 11:19 journal -> /dev/disk/by-partuuid/b0d06586-00e8-42fe-bdad-9b47abee374e

Create osd.0's keyring (ceph-osd --mkfs also initializes the OSD data directory):

[root@node242 ~]# ceph-osd -i 0 --mkfs --mkkey
 HDIO_DRIVE_CMD(identify) failed: Inappropriate ioctl for device
2019-08-01 11:19:48.919616 7fd45e81d880 -1 journal check: ondisk fsid d1721d05-684d-483b-becb-5c07564f364c doesn't match expected 8558e48b-5363-4383-94c2-7f303676ddd5, invalid (someone else's?) journal
 HDIO_DRIVE_CMD(identify) failed: Inappropriate ioctl for device
 HDIO_DRIVE_CMD(identify) failed: Inappropriate ioctl for device
 HDIO_DRIVE_CMD(identify) failed: Inappropriate ioctl for device
2019-08-01 11:19:48.936239 7fd45e81d880 -1 filestore(/var/lib/ceph/osd/ceph-0) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
2019-08-01 11:19:48.947600 7fd45e81d880 -1 created object store /var/lib/ceph/osd/ceph-0 journal /var/lib/ceph/osd/ceph-0/journal for osd.0 fsid 346436e3-a7f4-4fd9-9694-e9830faec4d5
2019-08-01 11:19:48.947663 7fd45e81d880 -1 auth: error reading file: /var/lib/ceph/osd/ceph-0/keyring: can't open /var/lib/ceph/osd/ceph-0/keyring: (2) No such file or directory
2019-08-01 11:19:48.947757 7fd45e81d880 -1 created new key in keyring /var/lib/ceph/osd/ceph-0/keyring

Inspect the result; the error-level messages above (the missing keyring, the mismatched journal fsid) are normal on a freshly created OSD:

[root@node242 ~]# ls -l /var/lib/ceph/osd/ceph-0/
total 32
-rw-r--r-- 1 root root 37 Aug  1 11:19 ceph_fsid
drwxr-xr-x 4 root root 61 Aug  1 11:19 current
-rw-r--r-- 1 root root 37 Aug  1 11:19 fsid
lrwxrwxrwx 1 root root 58 Aug  1 11:19 journal -> /dev/disk/by-partuuid/b0d06586-00e8-42fe-bdad-9b47abee374e
-rw------- 1 root root 56 Aug  1 11:19 keyring
-rw-r--r-- 1 root root 21 Aug  1 11:19 magic
-rw-r--r-- 1 root root  6 Aug  1 11:19 ready
-rw-r--r-- 1 root root  4 Aug  1 11:19 store_version
-rw-r--r-- 1 root root 53 Aug  1 11:19 superblock
-rw-r--r-- 1 root root  2 Aug  1 11:19 whoami

Register osd.0's key with the cluster:

[root@node242 ~]# ceph auth add osd.0 osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-0/keyring
added key for osd.0

Start the daemon:

[root@node242 ~]# ceph-osd -i 0 
starting osd.0 at :/0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
[root@node242 ~]# ps -ef | grep osd
root     19995     1  1 11:23 ?        00:00:00 ceph-osd -i 0
root     20090  9959  0 11:23 pts/0    00:00:00 grep --color=auto osd
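
Starting ceph-osd by hand works, but the init script will not manage a daemon it does not know about. On Hammer's SysV layout the /etc/init.d/ceph script picks up an OSD whose data directory contains a sysvinit marker file (there is no [osd.0] section in ceph.conf for it to discover), so a common alternative is:

[root@node242 ~]# touch /var/lib/ceph/osd/ceph-0/sysvinit
[root@node242 ~]# /etc/init.d/ceph start osd.0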

Add osd.0 to the CRUSH map:

[root@node242 ~]# ceph osd tree
ID WEIGHT TYPE NAME    UP/DOWN REWEIGHT PRIMARY-AFFINITY 
-1      0 root default                                   
 0      0 osd.0             up  1.00000          1.00000
[root@node242 ~]# ceph osd crush add osd.0  1.0 root=default host=node242
add item id 0 name 'osd.0' weight 1 at location {host=node242,root=default} to crush map
[root@node242 ~]# ceph osd tree
ID WEIGHT  TYPE NAME        UP/DOWN REWEIGHT PRIMARY-AFFINITY 
-1 1.00000 root default                                       
-2 1.00000     host node242                                   
 0 1.00000         osd.0         up  1.00000          1.00000

Repeat the OSD creation process above for the remaining disks to add osd.1 and osd.2; a scripted sketch of the whole procedure follows the tree output below.

[root@node242 ~]# ceph osd tree
ID WEIGHT  TYPE NAME        UP/DOWN REWEIGHT PRIMARY-AFFINITY 
-1 3.00000 root default                                       
-2 3.00000     host node242                                   
 0 1.00000         osd.0         up  1.00000          1.00000 
 1 1.00000         osd.1         up  1.00000          1.00000 
 2 1.00000         osd.2         up  1.00000          1.00000 
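
For reference, the per-disk procedure can be condensed into a loop. A sketch for the two remaining disks, assuming the same 20%/80% split, that ceph osd create hands out IDs sequentially, and that the /dev/disk/by-partuuid symlinks exist before the journal link is made; treat it as an outline, not a hardened script:

for dev in vdc vdd; do
    parted -s /dev/${dev} mktable gpt
    parted -s /dev/${dev} mkpart primary xfs 0% 20%    # partition 1: journal
    parted -s /dev/${dev} mkpart primary xfs 20% 100%  # partition 2: data
    id=$(ceph osd create)                              # allocate the next OSD id
    mkdir /var/lib/ceph/osd/ceph-${id}
    mkfs.xfs -f /dev/${dev}2
    mount /dev/${dev}2 /var/lib/ceph/osd/ceph-${id}/
    # link the journal by GPT partition uuid so the name is stable
    ln -s /dev/disk/by-partuuid/$(blkid -s PARTUUID -o value /dev/${dev}1) \
        /var/lib/ceph/osd/ceph-${id}/journal
    ceph-osd -i ${id} --mkfs --mkkey
    ceph auth add osd.${id} osd 'allow *' mon 'allow rwx' \
        -i /var/lib/ceph/osd/ceph-${id}/keyring
    ceph-osd -i ${id}
    ceph osd crush add osd.${id} 1.0 root=default host=node242
done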

3. Configuring the Ceph Cluster

Check the cluster status:

[root@node242 ~]# ceph -s
    cluster 346436e3-a7f4-4fd9-9694-e9830faec4d5
     health HEALTH_WARN
            too few PGs per OSD (21 < min 30)
     monmap e1: 1 mons at {node242=192.168.100.242:6789/0}
            election epoch 2, quorum 0 node242
     osdmap e13: 3 osds: 3 up, 3 in
      pgmap v21: 64 pgs, 1 pools, 0 bytes data, 0 objects
            100416 kB used, 19530 MB / 19629 MB avail
                  64 active+clean
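
The warning is simple arithmetic: the default rbd pool has 64 PGs and the pool size is 1, so each of the 3 OSDs holds roughly 64 / 3 ≈ 21 PGs, below the mon_pg_warn_min_per_osd default of 30. Raising pg_num to 128 gives 128 / 3 ≈ 42 PGs per OSD, which clears the threshold.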

Check the pool details:

[root@node242 ~]# ceph  osd pool ls detail
pool 0 'rbd' replicated size 1 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 29 flags hashpspool stripe_width 0

Adjust pg_num and pgp_num for the rbd pool (note the order dependency that the errors below illustrate):

[root@node242 ~]# ceph osd pool set rbd pgp_num 128
Error EINVAL: specified pgp_num 128 > pg_num 64
[root@node242 ~]# ceph osd pool set rbd pg_num 128
set pool 0 pg_num to 128
[root@node242 ~]# ceph osd pool set rbd pgp_num 128
Error EBUSY: currently creating pgs, wait
[root@node242 ~]# ceph osd pool set rbd pgp_num 128
set pool 0 pgp_num to 128
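
The EINVAL and EBUSY errors above are expected: pgp_num may not exceed pg_num, and it cannot be raised until the new placement groups finish creating. A small sketch that polls instead of retrying by hand (it greps ceph -s for the creating state shown while PGs are being made; the interval is arbitrary):

[root@node242 ~]# while ceph -s | grep -q creating; do sleep 5; done
[root@node242 ~]# ceph osd pool set rbd pgp_num 128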

Check the cluster status again:

[root@node242 ~]# ceph -s
    cluster 346436e3-a7f4-4fd9-9694-e9830faec4d5
     health HEALTH_OK
     monmap e1: 1 mons at {node242=192.168.100.242:6789/0}
            election epoch 2, quorum 0 node242
     osdmap e18: 3 osds: 3 up, 3 in
      pgmap v32: 128 pgs, 1 pools, 0 bytes data, 0 objects
            101180 kB used, 19530 MB / 19629 MB avail
                 128 active+clean

The cluster reports HEALTH_OK and the deployment is complete.
