Ceph can simultaneously provide object storage (RADOSGW), block storage (RBD), and file system storage (CephFS).
RBD is short for RADOS Block Device. RBD block storage is the most stable and most commonly used storage type; an RBD block device behaves like a disk and can be mounted. RBD devices support snapshots, multiple replicas, cloning, and consistency, and their data is striped across multiple OSDs in the Ceph cluster.
A block is an ordered sequence of bytes; a typical block size is 512 bytes. Block-based storage is the most common form of storage, a hard disk being the classic example.
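The fixed-block model is easy to see with ordinary tools: writing N blocks of 512 bytes yields exactly N x 512 bytes of data (the path /tmp/blockdemo below is arbitrary):

```shell
# Write 4 blocks of 512 bytes each; the resulting file is exactly 2048 bytes.
dd if=/dev/zero of=/tmp/blockdemo bs=512 count=4 2>/dev/null
stat -c %s /tmp/blockdemo   # prints 2048
rm -f /tmp/blockdemo
```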
一羽嫡、實驗環(huán)境
操作系統(tǒng):CentOS7.5 Minimal
cephServer(ceph01):192.168.1.106? ? /dev/sda??/dev/sdb? ?/dev/sdc
cephClient:192.168.1.104? ?/dev/sda?
我們實驗環(huán)境的ceph是用ceph-deploy部署的單機版杭棵,也就是說存儲并不具備高可用性魂爪,主要用于實驗cephFS艰管。
我們后續(xù)在此基礎(chǔ)上牲芋,將ceph存儲做成集群缸浦,再測試ceph的其他存儲類型餐济。
本次安裝的ceph版本為:ceph version 12.2.11? luminous (stable)
二絮姆、安裝配置cephServer
更改主機名篙悯,添加主機名映射
# hostnamectl? set-hostname? ceph01
#? echo "192.168.1.106? ceph01" >>/etc/hosts
Partition /dev/sdc to serve as the OSD journal disk:
# parted -s /dev/sdc "mklabel gpt"
# parted -s /dev/sdc "mkpart primary 0% 100%"
設(shè)置本機免密登錄
# ssh-keygen
# ssh-copy-id root@192.168.1.106
關(guān)閉selinux和firewalld
# setenforce 0
# sed? -i? 's/^SELINUX=.*/SELINUX=permissive/g'? /etc/selinux/config
# systemctl? stop firewalld
# systemctl disable firewalld?
Add the Ceph yum repository:
# vim /etc/yum.repos.d/ceph.repo
####################################################
[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/$basearch
enabled=1
gpgcheck=0
[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/noarch
enabled=1
gpgcheck=0
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/SRPMS
enabled=1
gpgcheck=0
#####################################################
# yum -y install epel-release
# yum clean all
# yum repolist
Install the Ceph packages and initialize the cluster configuration:
# yum -y install ceph-deploy
# ceph-deploy --version
# yum -y install ceph-mds ceph-mgr ceph-osd ceph-mon
# mkdir mycluster
# cd mycluster
# ceph-deploy new ceph01
# vim ceph.conf
Add the following settings:
#############################
osd_pool_default_size = 1
osd_pool_default_min_size = 1
public_network = 192.168.1.0/24
cluster_network = 192.168.1.0/24
################################
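For reference, after these edits the [global] section of ceph.conf should look roughly like the sketch below. The fsid is a placeholder (ceph-deploy new generates a unique one), and the auth lines are the defaults it writes:

```ini
[global]
fsid = 5a7c5e7b-9d3a-4f1e-b8c2-1f2e3d4c5b6a   ; placeholder, generated by ceph-deploy new
mon_initial_members = ceph01
mon_host = 192.168.1.106
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd_pool_default_size = 1
osd_pool_default_min_size = 1
public_network = 192.168.1.0/24
cluster_network = 192.168.1.0/24
```

A size/min_size of 1 is only acceptable for a throwaway single-OSD lab like this one; a real cluster should keep the default of 3 replicas.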
# ceph-deploy mon create ceph01
# ceph-deploy mon create-initial
# ceph-deploy admin ceph01
# ceph-deploy disk list ceph01
# ceph-deploy disk zap ceph01 /dev/sdb
# ceph-deploy osd create --data /dev/sdb --journal /dev/sdc ceph01
# ceph-disk list
# ceph-deploy mgr create ceph01
# ceph-deploy mds create ceph01
# cd mycluster/
# ll
# lsblk
# ll /etc/ceph/
# systemctl status ceph*
Configure the MGR dashboard:
# ceph mgr module enable dashboard
# vim /etc/ceph/ceph.conf
Add the following section:
#######################
[mgr]
mgr modules = dashboard
########################
設(shè)置dashboard的IP和端口
# ceph config-key put mgr/dashboard/server_addr 192.168.1.106
# ceph config-key put mgr/dashboard/server_port 7000
# systemctl restart ceph-mgr@ceph01.service
# systemctl status ceph-mgr@ceph01.service
# ss -tan
Open http://192.168.1.106:7000 in a browser.
3. Install and Configure cephClient
Disable SELinux:
# setenforce 0
# sed -i 's/^SELINUX=.*/SELINUX=permissive/g' /etc/selinux/config
Install ceph-common:
# yum -y install epel-release
# yum -y install ceph-common
Copy the configuration file and admin keyring from the server:
# scp root@192.168.1.106:/etc/ceph/ceph.client.admin.keyring /etc/ceph
# scp root@192.168.1.106:/etc/ceph/ceph.conf /etc/ceph
創(chuàng)建一個新的存儲池,而不是使用默認的rbd
#??ceph osd pool create test 128
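The trailing 128 is the pool's placement-group (PG) count. A common rule of thumb is (OSDs x 100 / replicas), rounded up to the next power of two; the numbers below are this lab's single OSD with size 1:

```shell
# PG count rule of thumb: (osds * 100 / replicas), rounded up to a power of two.
osds=1; replicas=1
pgs=$(( (osds * 100) / replicas ))
p=1; while [ "$p" -lt "$pgs" ]; do p=$((p * 2)); done
echo "$p"   # prints 128
```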
創(chuàng)建一個塊
# rbd create --size 10G disk01 --pool test
查看塊信息
# rbd info --pool test disk01
將塊disk01映射到本地
# rbd map --pool test disk01
# dmesg | tail
由于內(nèi)核不支持,需要禁止一些特性号杏,只保留layering
#?rbd --pool test feature disable disk01 exclusive-lock, object-map, fast-diff, deep-flatten?
# rbd map --pool test disk01
# lsblk
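`rbd info` also reports the enabled features as a numeric bitmask. The bit values below match Ceph's librbd constants; a small sketch for decoding the mask (the function name is ours):

```shell
# Decode an RBD feature bitmask, e.g. 61 = layering + exclusive-lock
# + object-map + fast-diff + deep-flatten (the luminous default).
decode_features() {
  mask=$1; out=""
  for pair in 1:layering 2:striping 4:exclusive-lock 8:object-map \
              16:fast-diff 32:deep-flatten 64:journaling; do
    bit=${pair%%:*}; name=${pair#*:}
    if [ $(( mask & bit )) -ne 0 ]; then out="$out $name"; fi
  done
  echo "${out# }"
}
decode_features 61   # prints: layering exclusive-lock object-map fast-diff deep-flatten
```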
Format the block device:
# mkfs.ext4 /dev/rbd0
Mount rbd0 on a local directory:
# mount /dev/rbd0 /mnt
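The mapping and mount above do not survive a reboot. ceph-common ships an rbdmap service that re-maps images listed in /etc/ceph/rbdmap at boot; a sketch of the two entries needed for disk01 (assuming the admin keyring copied earlier), after which the service is enabled with `systemctl enable rbdmap`:

```
# /etc/ceph/rbdmap
test/disk01  id=admin,keyring=/etc/ceph/ceph.client.admin.keyring

# /etc/fstab
/dev/rbd/test/disk01  /mnt  ext4  noauto,_netdev  0 0
```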
On cephServer, check the cluster status; it reports HEALTH_WARN because no application has been enabled on the new pool yet:
# ceph health
# ceph health detail
# ceph osd pool application enable test rbd
# ceph health detail
Create a second image and map it; this time exclusive-lock can stay enabled, so only the remaining unsupported features are disabled:
# rbd create --size 10G disk02 --pool test
# rbd map --pool test disk02
# rbd feature disable test/disk02 object-map fast-diff deep-flatten
# rbd map --pool test disk02
On cephClient, test by writing a large file:
# dd if=/dev/zero of=/mnt/test bs=1M count=5000
# df -hT
四蜈块、參考
三個關(guān)于ceph的博客
http://www.zphj1987.com
http://xiaqunfeng.cc
https://bloq.frognew.com
Ceph Luminous安裝指南
https://www.centos.bz/2018/01/ceph-luminous%E5%AE%89%E8%A3%85%E6%8C%87%E5%8D%97/
升級Ceph集群從Kraken到Luminous
https://blog.frognew.com/2017/11/upgrade-ceph-from-kraken-to-luminous.html
https://www.cnblogs.com/sisimi/p/7753310.html
Ceph塊存儲之RBD
https://blog.frognew.com/2017/02/ceph-rbd.html
Ceph v12.2 Luminous 塊存儲(RBD)搭建
http://www.voidcn.com/article/p-ttvgrhdn-bpb.html
https://codertw.com/%E7%A8%8B%E5%BC%8F%E8%AA%9E%E8%A8%80/101799/
Ceph Pool操作總結(jié)
http://int32bit.me/2016/05/19/Ceph-Pool%E6%93%8D%E4%BD%9C%E6%80%BB%E7%BB%93/
使用ceph-deploy 部署集群
https://blog.csdn.net/mailjoin/article/details/79695016
存儲集群快速入門
http://docs.ceph.org.cn/start/quick-ceph-deploy/
http://docs.ceph.com/docs/mimic/rados/deployment/ceph-deploy-osd/
增加/刪除 OSD
http://docs.ceph.org.cn/rados/deployment/ceph-deploy-osd/
ceph-deploy 2.0.0 部署 Ceph Luminous 12.2.4?
http://www.yangguanjun.com/2018/04/06/ceph-deploy-latest-luminous/
ceph v12.2.4 (luminous)命令變動
https://blog.51cto.com/3168247/2088865
ceph luminous 新功能之內(nèi)置dashboard
http://www.zphj1987.com/2017/06/25/ceph-luminous-new-dashboard/
ceph luminous 新功能之內(nèi)置dashboard
https://ceph.com/planet/ceph-luminous-%E6%96%B0%E5%8A%9F%E8%83%BD%E4%B9%8B%E5%86%85%E7%BD%AEdashboard/
https://ceph.com/community/new-luminous-dashboard/
Ceph: Show OSD to Journal Mapping
https://fatmin.com/2015/08/11/ceph-show-osd-to-journal-mapping/
https://serverfault.com/questions/828882/ceph-osds-and-journal-drives
Ceph中OSD的Journal日志問題
https://www.zhihu.com/question/266181191
影響性能的關(guān)鍵部分-ceph的osd journal寫
http://www.cnblogs.com/rodenpark/p/6223320.html
Ceph OSD Journal notes
https://gist.github.com/mbukatov/86f1a2cc480d0deae32a9e48805a4115
如何替換Ceph的Journal
http://www.zphj1987.com/2016/07/26/%E5%A6%82%E4%BD%95%E6%9B%BF%E6%8D%A2Ceph%E7%9A%84Journal/
How to identify the journal disk for a Ceph?OSD?
https://arvimal.blog/2015/08/05/how-to-check-the-journal-disk-for-any-particular-osd/
Ceph: Troubleshooting Failed OSD Creation
https://fatmin.com/2015/08/06/ceph-troubleshooting-failed-osd-creation/
CEPH: How to Restart an Install, or How to Reset a Cluster
https://fatmin.com/2015/08/18/ceph-how-to-restart-an-install-or-reset-a-cluster/