I. Background
1. In an intranet environment with no Internet access, we need to set up Ceph to provide a Ceph file system for a distributed cluster.
2. The installation must be automatable with scripts (shell or an Ansible playbook), without using the ceph-deploy tool.
On an Internet-connected lab machine we download the main Ceph packages together with all of their dependencies in one pass and write the install scripts; on the target machines we then build a local yum repository, giving us a fully offline installation.
In this article we first build the local repository and install by hand on the target machines.
II. Environment
OS: CentOS 7.5 Minimal
Internet-connected lab machine: 192.168.1.101
cephServer (node01): 192.168.1.103
cephServer (node01) data disk: /dev/sdb, 100 GB
cephClient: 192.168.1.106
III. Download the Ceph packages and their dependencies on the Internet-connected machine
Add a yum repository pointing at a Ceph mirror
# vi /etc/yum.repos.d/ceph.repo
##################################################
[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.163.com/ceph/rpm-luminous/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1
[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.163.com/ceph/rpm-luminous/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.163.com/ceph/rpm-luminous/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
##################################################
# yum clean all
# yum repolist
# yum list all | grep ceph
# yum -y install epel-release
# yum -y install yum-utils
# yum -y install createrepo
# mkdir /root/cephDeps
# repotrack ceph ceph-mgr ceph-mon ceph-mds ceph-osd ceph-fuse ceph-radosgw -p /root/cephDeps
# createrepo -v /root/cephDeps
# tar -zcf cephDeps.tar.gz -C /root cephDeps
(The -C flag stores a relative cephDeps/ path in the archive, so it unpacks next to the build_localrepo.sh script used below.)
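If you want to script the whole download phase, a minimal sketch bundling the steps above (assuming the ceph.repo file is already in place) could look like this:
##################################################
#!/bin/bash
# Download the Ceph packages plus every dependency, then build a repo tree.
yum -y install epel-release yum-utils createrepo
mkdir -p /root/cephDeps
repotrack ceph ceph-mgr ceph-mon ceph-mds ceph-osd ceph-fuse ceph-radosgw -p /root/cephDeps
createrepo -v /root/cephDeps
tar -zcf cephDeps.tar.gz -C /root cephDeps
##################################################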
IV. Build a local yum repository on cephServer (node01)
Copy cephDeps.tar.gz to the cephServer (node01) machine.
# tar -zxf cephDeps.tar.gz
# vim build_localrepo.sh
##################################################
#!/bin/bash
# Resolve the directory this script lives in, so ./cephDeps is found
# no matter where the script is invoked from.
parent_path=$( cd "$(dirname "${BASH_SOURCE[0]}")" ; pwd -P )
cd "$parent_path"
# Move the existing repo files out of the way.
mkdir -p /etc/yum.repos.d/backup
mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/backup
# Create the local repository.
rm -rf /tmp/localrepo
mkdir -p /tmp/localrepo
cp -rf ./cephDeps/* /tmp/localrepo
echo "
[localrepo]
name=Local Repository
baseurl=file:///tmp/localrepo
gpgcheck=0
enabled=1" > /etc/yum.repos.d/ceph.repo
yum clean all
##################################################
# sh -x build_localrepo.sh
# yum repolist
五罗洗、在cephServer(node01)上離線安裝單機ceph
關(guān)閉selinux
# setenforce 0
# sed? -i? 's/^SELINUX=.*/SELINUX=permissive/g'? /etc/selinux/config
設(shè)置防火墻愉舔,放行相關(guān)端口
# systemctl? start? firewalld
# systemctl enable firewalld?
# firewall-cmd --zone=public --add-port=6789/tcp?--permanent
# firewall-cmd --zone=public --add-port=6800-7300/tcp?--permanent
# firewall-cmd --reload
用本地yum源安裝ceph組件
#? yum -y install ceph ceph-mds ceph-mgr ceph-osd ceph-mon
# yum list installed | grep ceph
# ll /etc/ceph/
# ll /var/lib/ceph/
Configure the Ceph components
Generate a cluster id (fsid)
# uuidgen
Use uuidgen to generate a UUID, e.g. ee741368-4233-4cbc-8607-5d36ab314dab.
Create the main Ceph configuration file
# vim /etc/ceph/ceph.conf
######################################
[global]
fsid = ee741368-4233-4cbc-8607-5d36ab314dab
mon_initial_members = node01
mon_host = 192.168.1.103
mon_max_pg_per_osd = 300
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd_pool_default_size = 1
osd_pool_default_min_size = 1
osd_journal_size = 1024
osd_crush_chooseleaf_type = 0
public_network = 192.168.1.0/24
cluster_network = 192.168.1.0/24
[mon]
mon_allow_pool_delete = true
###################################
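If you are scripting this step, one way (a sketch; it assumes the fsid line already exists in a config template) to drop the freshly generated UUID into the file:
# FSID=$(uuidgen)
# sed -i "s/^fsid = .*/fsid = ${FSID}/" /etc/ceph/ceph.conf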
1. Deploy the mon
Create the mon keyring
# ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
# cat /tmp/ceph.mon.keyring
Create the admin and bootstrap-osd keyrings
# ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
# ceph-authtool --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring --gen-key -n client.bootstrap-osd --cap mon 'profile bootstrap-osd'
# cat /etc/ceph/ceph.client.admin.keyring
# cat /var/lib/ceph/bootstrap-osd/ceph.keyring
Import both keyrings into the mon keyring
# ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
# ceph-authtool /tmp/ceph.mon.keyring --import-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
# cat /tmp/ceph.mon.keyring
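After the imports, the combined keyring should look roughly like this (the key values are placeholders; yours will differ):
##################################################
[mon.]
        key = <generated mon key>
        caps mon = "allow *"
[client.admin]
        key = <generated admin key>
        caps mds = "allow *"
        caps mgr = "allow *"
        caps mon = "allow *"
        caps osd = "allow *"
[client.bootstrap-osd]
        key = <generated bootstrap key>
        caps mon = "profile bootstrap-osd"
##################################################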
Create the monitor map
# monmaptool --create --add node01 192.168.1.103 --fsid ee741368-4233-4cbc-8607-5d36ab314dab /tmp/monmap
Create the mon data directory and initialize the mon
# mkdir /var/lib/ceph/mon/ceph-node01
# chown -R ceph:ceph /var/lib/ceph/
# chown ceph:ceph /tmp/monmap /tmp/ceph.mon.keyring
# sudo -u ceph ceph-mon --mkfs -i node01 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
# ll /var/lib/ceph/mon/ceph-node01/
Start the mon service
# systemctl start ceph-mon@node01.service
# systemctl enable ceph-mon@node01.service
# systemctl status ceph-mon@node01.service
# ceph -s
2. Deploy the OSD
cephServer (node01) data disk: /dev/sdb, 100 GB
# lsblk
Create the OSD
# ceph-volume lvm create --data /dev/sdb
# ll /dev/mapper/
# ll /var/lib/ceph/osd/ceph-0/
# ceph auth list
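To inspect the logical volume and OSD metadata that ceph-volume just created, you can also run:
# ceph-volume lvm list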
Make sure the OSD service is running and enabled (ceph-volume lvm create normally activates it already)
# systemctl start ceph-osd@0.service
# systemctl enable ceph-osd@0.service
# systemctl status ceph-osd@0.service
# ceph -s
3. Deploy the mgr
Create the mgr keyring
# mkdir /var/lib/ceph/mgr/ceph-node01
# ceph auth get-or-create mgr.node01 mon 'allow profile mgr' osd 'allow *' mds 'allow *' > /var/lib/ceph/mgr/ceph-node01/keyring
# chown -R ceph:ceph /var/lib/ceph/mgr
Start the mgr service
# systemctl start ceph-mgr@node01.service
# systemctl enable ceph-mgr@node01.service
# systemctl status ceph-mgr@node01.service
# ceph -s
List the mgr modules
# ceph mgr module ls
4. Deploy the mds
Create the mds data directory
# mkdir -p /var/lib/ceph/mds/ceph-node01
Create the mds keyring
# ceph-authtool --create-keyring /var/lib/ceph/mds/ceph-node01/keyring --gen-key -n mds.node01
Register the key with the cluster
# ceph auth add mds.node01 osd "allow rwx" mds "allow" mon "allow profile mds" -i /var/lib/ceph/mds/ceph-node01/keyring
# chown -R ceph:ceph /var/lib/ceph/mds
# ceph auth list
Start the mds service
# systemctl start ceph-mds@node01.service
# systemctl enable ceph-mds@node01.service
# systemctl status ceph-mds@node01.service
# ceph osd tree
5. Create the Ceph pools
A Ceph cluster can hold multiple pools. Each pool is a logical unit of isolation, and different pools can be configured completely differently: replica size, number of placement groups, CRUSH rules, snapshots, ownership, and so on.
For choosing pg_num, see https://ceph.com/pgcalc
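As a rough sanity check on the value used below: the calculator's rule of thumb is (target PGs per OSD x number of OSDs) / replica size, rounded up to the next power of two; with a target of about 100 PGs per OSD, one OSD, and replica size 1, this gives 100 * 1 / 1 = 100, rounded up to 128.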
# ceph osd pool create cephfs_data 128
# ceph osd pool create cephfs_metadata 128
# ceph fs new cephfs cephfs_metadata cephfs_data
# ceph fs ls
# ceph -s
# ceph --show-config | grep mon_max_pg_per_osd
With only a few OSDs in the cluster, creating many pools quickly exhausts the PG budget: by default Ceph allows each OSD to carry at most 250 PGs (mon_max_pg_per_osd). The limit is tunable, but pushing it far too high or too low hurts cluster performance. Here our two 128-PG pools place 256 PGs on the single OSD, so we raise the limit to 300:
# vim /etc/ceph/ceph.conf
################################
mon_max_pg_per_osd = 300
################################
# systemctl restart ceph-mgr@node01.service
# systemctl status ceph-mgr@node01.service
# ceph --show-config | grep "mon_max_pg_per_osd"
# ceph osd lspools
Once the services are up on the cephServer node, review the service status, processes, log files, and listening ports:
# ll /etc/ceph/
# ll /var/lib/ceph/
# tree /var/lib/ceph/
# cd /var/lib/ceph/
# ll bootstrap-*
六税肪、安裝配置cephClient
客戶端要掛載使用cephfs的目錄,有兩種方式:
1. 使用linux kernel client
2.? 使用ceph-fuse
這兩種方式各有優(yōu)劣勢榜田,kernel client的特點在于它與ceph通信大部分都在內(nèi)核態(tài)進行益兄,因此性能要更好,缺點是L版本的cephfs要求客戶端支持一些高級特性箭券,ceph FUSE就是簡單一些净捅,還支持配額,缺點就是性能比較差辩块,實測全ssd的集群蛔六,性能差不多為kernel client的一半。
關(guān)閉selinux
# setenforce 0
# sed -i 's/^SELINUX=.*/SELINUX=permissive/g' /etc/selinux/config
方式一:使用linux kernel client
在cephSever服務(wù)器上獲取admin認證key
# cat /etc/ceph/ceph.client.admin.keyring
默認采用ceph-deploy部署ceph集群是開啟了cephx認證庆捺,需要掛載secret-keyring古今,即集群mon節(jié)點/etc/ceph/ceph.client.admin.keyring文件中的”key”值,采用secretfile可不用暴露keyring滔以,但有1個bug捉腥,始終報錯:libceph: bad option at 'secretfile=/etc/ceph/admin.secret'
Bug地址:https://bugzilla.redhat.com/show_bug.cgi?id=1030402
# mount -t ceph 192.168.1.103:6789:/? /mnt -o name=admin,secret=AQDZRfJcn4i0BRAAAHXMjFmkEZX2oO/ron1mRA==
# mount -l? | grep ceph?
# df -hT?
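To make the kernel mount survive reboots, an /etc/fstab entry along these lines should work (a sketch, not tested here; _netdev defers the mount until the network is up):
##################################################
192.168.1.103:6789:/  /mnt  ceph  name=admin,secret=AQDZRfJcn4i0BRAAAHXMjFmkEZX2oO/ron1mRA==,noatime,_netdev  0 0
##################################################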
Option 2: ceph-fuse
Build the local yum repository on cephClient
Copy cephDeps.tar.gz to the cephClient machine.
# tar -zxf cephDeps.tar.gz
# vim build_localrepo.sh
##################################################
#!/bin/bash
# Resolve the directory this script lives in, so ./cephDeps is found
# no matter where the script is invoked from.
parent_path=$( cd "$(dirname "${BASH_SOURCE[0]}")" ; pwd -P )
cd "$parent_path"
# Move the existing repo files out of the way.
mkdir -p /etc/yum.repos.d/backup
mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/backup
# Create the local repository.
rm -rf /tmp/localrepo
mkdir -p /tmp/localrepo
cp -rf ./cephDeps/* /tmp/localrepo
echo "
[localrepo]
name=Local Repository
baseurl=file:///tmp/localrepo
gpgcheck=0
enabled=1" > /etc/yum.repos.d/ceph.repo
yum clean all
##################################################
# sh -x build_localrepo.sh
# yum repolist
Install ceph-fuse
# yum -y install ceph-fuse
# rpm -ql ceph-fuse
Create the ceph configuration directory and copy the config file and keyring over from cephServer
# mkdir /etc/ceph
# scp root@192.168.1.103:/etc/ceph/ceph.client.admin.keyring /etc/ceph
# scp root@192.168.1.103:/etc/ceph/ceph.conf /etc/ceph
# chmod 600 /etc/ceph/ceph.client.admin.keyring
Create a systemd service file for ceph-fuse
# cp /usr/lib/systemd/system/ceph-fuse@.service /etc/systemd/system/ceph-fuse.service
# vim /etc/systemd/system/ceph-fuse.service
##############################################
[Unit]
Description=Ceph FUSE client
After=network-online.target local-fs.target time-sync.target
Wants=network-online.target local-fs.target time-sync.target
Conflicts=umount.target
PartOf=ceph-fuse.target
[Service]
EnvironmentFile=-/etc/sysconfig/ceph
Environment=CLUSTER=ceph
ExecStart=/usr/bin/ceph-fuse -f -o rw,noexec,nosuid,nodev /mnt
TasksMax=infinity
Restart=on-failure
StartLimitInterval=30min
StartLimitBurst=3
[Install]
WantedBy=ceph-fuse.target
########################################################
This mounts CephFS at /mnt on the client.
# systemctl daemon-reload
# systemctl start ceph-fuse.service
# systemctl enable ceph-fuse.service
# systemctl status ceph-fuse.service
# systemctl start ceph-fuse.target
# systemctl enable ceph-fuse.target
# systemctl status ceph-fuse.target
# df -hT
Test by writing a large file
# dd if=/dev/zero of=/mnt/test bs=1M count=10000
# df -hT
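Since ceph-fuse enforces CephFS quotas (mentioned above), you can, for example, cap a directory's size. A sketch, assuming the attr package (setfattr/getfattr) is installed and /mnt/limited is an illustrative directory:
# mkdir /mnt/limited
# setfattr -n ceph.quota.max_bytes -v 10000000000 /mnt/limited
# getfattr -n ceph.quota.max_bytes /mnt/limited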
Mounting a CephFS subdirectory
As shown above, we mounted CephFS with / as the source directory. Dedicating the whole file system to a single user would be wasteful; can we carve it into several directories and have each user mount and write only their own?
# ceph-fuse --help
After mounting / as admin, simply create directories under /; each one becomes a CephFS subtree, and other users, given the right configuration, can mount those subtree directories directly (a restricted-client sketch follows the steps below). The steps:
1. Mount / as admin and create /ceph
# mkdir -p /opt/tmp
# ceph-fuse /opt/tmp
# mkdir /opt/tmp/ceph
# umount /opt/tmp
# rm -rf /opt/tmp
2. Point ceph-fuse.service at the subdirectory
# vim /etc/systemd/system/ceph-fuse.service
################################################
[Unit]
Description=Ceph FUSE client
After=network-online.target local-fs.target time-sync.target
Wants=network-online.target local-fs.target time-sync.target
Conflicts=umount.target
PartOf=ceph-fuse.target
[Service]
EnvironmentFile=-/etc/sysconfig/ceph
Environment=CLUSTER=ceph
ExecStart=/usr/bin/ceph-fuse -f -o rw,noexec,nosuid,nodev /mnt -r /ceph
TasksMax=infinity
Restart=on-failure
StartLimitInterval=30min
StartLimitBurst=3
[Install]
WantedBy=ceph-fuse.target
###################################################################
# systemctl daemon-reload
# systemctl start ceph-fuse.service
# systemctl enable ceph-fuse.service
# systemctl status ceph-fuse.service
# systemctl start ceph-fuse.target
# systemctl enable ceph-fuse.target
# systemctl status ceph-fuse.target
# df -hT
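Rather than handing every user the admin key, you can create a client that is only allowed into the /ceph subtree. A sketch, assuming Luminous's ceph fs authorize command is available (run the first command on the mon node; client.user1 is an illustrative name):
# ceph fs authorize cephfs client.user1 /ceph rw > /etc/ceph/ceph.client.user1.keyring
# ceph-fuse -n client.user1 -r /ceph /mnt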
Once the services are up on the cephClient node, review the service status, processes, log files, and listening ports.
This article only covers CephFS, Ceph's file system; for the other two storage types, block storage and object storage, please consult the references and explore on your own.
VII. References
Ceph basics
https://www.cnblogs.com/zywu-king/p/9064032.html
Offline deployment of Ceph block and object storage on CentOS 7
https://pianzong.club/2018/11/05/install-ceph-offline/
The Ceph distributed file system
https://blog.csdn.net/dapao123456789/article/category/2197933
Initializing disks with ceph-deploy v2.0.0
https://blog.51cto.com/3168247/2088865
Handling the Ceph warning "too many PGs per OSD"
http://www.reibang.com/p/f2b20a175702
Pool management in Ceph Luminous
https://blog.csdn.net/signmem/article/details/78594340
OSD status stays down after adding an OSD to the cluster
https://blog.51cto.com/xiaowangzai/2173309
Single-node Ceph deployment and CephFS usage on CentOS 7.x
http://www.reibang.com/p/736fc03bd164
Ceph BlueStore and ceph-volume
http://xcodest.me/ceph-bluestore-and-ceph-volume.html
Ceph PGs per Pool Calculator
https://ceph.com/pgcalc
Manual Deployment
http://docs.ceph.com/docs/master/install/manual-deployment/#manager-daemon-configuration
Ceph-mgr Administrator's Guide
http://docs.ceph.com/docs/master/mgr/administrator/#mgr-administrator-guide
Create a Ceph Filesystem
http://docs.ceph.com/docs/master/cephfs/createfs
http://docs.ceph.org.cn/cephfs/createfs
Red Hat: Manually Installing Red Hat Ceph Storage
https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html/installation_guide_for_red_hat_enterprise_linux/manually-installing-red-hat-ceph-storage
Red Hat: What is Red Hat Ceph Storage?
https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html/installation_guide_for_red_hat_enterprise_linux/what_is_red_hat_ceph_storage