1 Resources and version info
cpu: Intel(R) Core(TM) i7-6500U CPU @ 2.50GHz // 4 cores
mem: total 7686, swap 7935 (MiB)
os: Linux promote.cache-dns.local 3.10.0-957.el7.x86_64
ceph: rh-luminous
2 Ceph overview
- distributed storage
- Ceph's layered architecture
- minimal deployment: one admin node, one mon node, two OSD nodes
三 環(huán)境準(zhǔn)備
- 部署KVM虛擬環(huán)境,參考基于KVM的虛擬機環(huán)境搭建
- 鏡像選擇
CentOS-7-x86_64-Minimal-1810.iso
- 資源配置
mem:1G
disk:50G
cpu:1core
- 虛擬機名稱
ceph
- 虛擬網(wǎng)絡(luò)選擇
NAT:default
4 Tool installation on the ceph node
The steps in this section are run inside the VM created in the previous section.
- Install common network tools
yum install net-tools -y
- Switch the network to a static IP
Check the current address and routing table:
ifconfig
netstat -rn
The output looks like: (screenshot omitted)
Persist this information in the config files:
Set the DNS servers
echo "NETWORKING=yes" >> /etc/sysconfig/network
echo "DNS1=114.114.114.114" >> /etc/sysconfig/network
echo "DNS2=8.8.8.8" >> /etc/sysconfig/network
Set the static IP
vi /etc/sysconfig/network-scripts/ifcfg-eth0
Change the following settings and leave the rest untouched
#BOOTPROTO="dhcp" // comment this line out
BOOTPROTO="static"
NM_CONTROLLED=no
IPADDR=192.168.122.122 // keeping the IP the VM already has is also fine
NETMASK=255.255.255.0
GATEWAY=192.168.122.1
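The hand edits above can also be scripted. A minimal sketch (the addresses are this section's lab values; adjust them for your network) that renders just the settings this section changes — in real use, paste the output into the existing ifcfg-eth0 rather than replacing the whole file, since DEVICE, ONBOOT, etc. must be kept:

```shell
# Render the static-IP settings for an ifcfg-style file.
# IFCFG defaults to a temp file so the sketch can be dry-run;
# point it at /etc/sysconfig/network-scripts/ifcfg-eth0 for real use.
IFCFG="${IFCFG:-$(mktemp)}"
IP="${IP:-192.168.122.122}"
cat > "$IFCFG" <<EOF
BOOTPROTO="static"
NM_CONTROLLED=no
IPADDR=$IP
NETMASK=255.255.255.0
GATEWAY=192.168.122.1
EOF
echo "rendered $IFCFG for $IP"
```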
Add host names
echo "192.168.122.122 node" >> /etc/hosts
echo "192.168.122.123 node1" >> /etc/hosts
echo "192.168.122.124 node2" >> /etc/hosts
echo "192.168.122.125 node3" >> /etc/hosts
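The four echo lines follow a simple pattern (consecutive IPs starting at .122), so they can equally be generated in a loop — a sketch; HOSTS_FILE defaults to a temp file so it can be tried safely before pointing it at /etc/hosts:

```shell
# Append name/IP pairs for the cluster to a hosts file.
HOSTS_FILE="${HOSTS_FILE:-$(mktemp)}"
i=122
for name in node node1 node2 node3; do
  echo "192.168.122.$i $name" >> "$HOSTS_FILE"
  i=$((i + 1))
done
cat "$HOSTS_FILE"
```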
Restart the networking service
service network restart
NOTE: if you are connected to this VM over SSH you will lose the connection; close the terminal and reconnect
- yum setup
- Install third-party repo tooling
yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
yum install -y yum-plugin-priorities
yum install -y yum-utils
- Repo configuration
Create the Ceph repo file and open it for editing
touch /etc/yum.repos.d/ceph.repo
vi /etc/yum.repos.d/ceph.repo
Write the following into the file
[ceph-noarch]
name=Ceph noarch packages
baseurl=http://download.ceph.com/rpm-luminous/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
Update yum
yum update -y
- Time synchronization
yum install -y ntp ntpdate ntp-doc
- 關(guān)閉防火墻
firewall-cmd --zone=public --add-service=ceph-mon --permanent
firewall-cmd --zone=public --add-service=ceph --permanent
firewall-cmd --reload
iptables -A INPUT -i eth0 -p tcp -s 192.168.122.0/24 -d 192.168.122.254 -j ACCEPT
iptables-save
sudo setenforce 0
- Ceph deployment user
Create the user
useradd -g root -m cephD -d /home/cephD
passwd cephD
Grant passwordless sudo
echo "cephD ALL=(ALL) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/cephD
chmod 0440 /etc/sudoers.d/cephD
5 Cloning the ceph nodes
The steps in this section run on the host machine.
- Become root
sudo su
- Shut down the ceph VM
virsh shutdown ceph
- Clone the ceph-1, ceph-2 and ceph-3 nodes
virt-clone -o ceph -n ceph-1 -f /home/data/ceph-1.qcow2
virt-clone -o ceph -n ceph-2 -f /home/data/ceph-2.qcow2
virt-clone -o ceph -n ceph-3 -f /home/data/ceph-3.qcow2
Note: ceph is the admin node, ceph-1 the mon node, ceph-2 and ceph-3 the OSD nodes
- Attach data disks
- Create the disk images
qemu-img create -f qcow2 /home/data/osd1.qcow2 50g
qemu-img create -f qcow2 /home/data/osd2.qcow2 50g
qemu-img create -f qcow2 /home/data/osd3.qcow2 50g
- Edit the domain config to attach the disk to the VM (ceph-2 as the example)
virsh edit ceph-2
Add the following under the domain's devices node
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='none'/>
  <source file='/home/data/osd2.qcow2'/>
  <target dev='vdb' bus='virtio'/>
</disk>
- Start the VMs
virsh start ceph
virsh start ceph-1
virsh start ceph-2
virsh start ceph-3
Check the VM states
virsh list --all
- Fix the VM IPs
After a KVM clone the guest keeps the original IP, so the clones conflict on the same subnet; the IPs must be changed by hand (ceph-1 as the example)
virt-viewer -c qemu:///system ceph-1
On the console, log in as root and edit eth0's IP
vi /etc/sysconfig/network-scripts/ifcfg-eth0
Change IPADDR so it does not clash with the other VMs
IPADDR=192.168.122.123
Restart the networking service
service network restart
Change ceph-2's and ceph-3's IPs to 124 and 125 the same way
6 Deploying the Ceph cluster with ceph-deploy
The steps in this section run on the ceph VM (the admin node).
- Install ceph-deploy
yum install -y ceph-deploy
- Passwordless login from the deploy user to the other nodes
su - cephD
Generate an SSH key pair; leave the passphrase empty (press [enter] at every prompt)
ssh-keygen
Copy the key to each node
ssh-copy-id cephD@node1
ssh-copy-id cephD@node2
ssh-copy-id cephD@node3
cd ~;
touch ~/.ssh/config;
vi ~/.ssh/config
Put the following into the file
Host node1
Hostname node1
User cephD
Host node2
Hostname node2
User cephD
Host node3
Hostname node3
User cephD
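The three identical Host blocks above can be generated in one pass. A sketch, assuming the cephD user created in section 4 (CONFIG defaults to a temp file so it can be dry-run; use ~/.ssh/config for real):

```shell
# Generate per-host ssh_config blocks for the Ceph nodes.
CONFIG="${CONFIG:-$(mktemp)}"
for host in node1 node2 node3; do
  cat >> "$CONFIG" <<EOF
Host $host
    Hostname $host
    User cephD
EOF
done
chmod 600 "$CONFIG"   # ssh refuses group/world-writable config files
```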
- 創(chuàng)建集群
cd ~;
mkdir my-cluster;cd my-cluster;
ceph-deploy new node1
結(jié)果如下:
修改OSD默認(rèn)數(shù)量為2
echo "osd pool default size = 2" >> ceph.conf
echo "public_network = 192.168.122.0/24" >> ceph.conf
- Install ceph on all nodes
ceph-deploy install --release luminous node node1 node2 node3
- Bootstrap the monitor
ceph-deploy mon create-initial
- Push the admin config and keyring to every node
ceph-deploy admin node node1 node2 node3
- Create a manager daemon
ceph-deploy mgr create node1
NOTE: what is the relationship between mgr and mon? (see section 10)
- Add the OSD nodes
ceph-deploy osd create --data /dev/vdb node2
ceph-deploy osd create --data /dev/vdb node3
- Check the cluster status
ssh node1 sudo ceph health
ssh node2 sudo ceph health
ssh node3 sudo ceph health
ssh node1 sudo ceph -s
- Expanding the cluster
- Add metadata servers
ceph-deploy mds create node1
ceph-deploy mds create node2
- Add more mons
ceph-deploy mon add node2
ceph-deploy mon add node3
NOTE: all three cluster nodes now run a mon
- Add more manager daemons
ceph-deploy mgr create node2 node3
- Add RGW instances
ceph-deploy rgw create node1
ceph-deploy rgw create node2
- Pool operations
ceph osd pool create mytest 8 // create
ceph osd pool rm mytest mytest --yes-i-really-really-mean-it // delete (the name must be given twice, and mon_allow_pool_delete must be enabled)
- Object operations
[cephD@node my-cluster]$ rados put test-object-1 ceph.log --pool=mytest
[cephD@node my-cluster]$ rados -p mytest ls
test-object-1
[cephD@node my-cluster]$ ceph osd map mytest test-object-1
osdmap e26 pool 'mytest' (5) object 'test-object-1' -> pg 5.74dc35e2 (5.2) -> up ([1,0], p1) acting ([1,0], p1)
[cephD@node my-cluster]$ rados rm test-object-1 --pool=mytest
7 Deploying the Ceph cluster with ansible
This section runs on the ceph host as the cephD user.
- Preparation
- Tear down the ceph-deploy cluster
cd ~/my-cluster;
ceph-deploy purge node node1 node2 node3
ceph-deploy purgedata node node1 node2 node3
ceph-deploy forgetkeys
rm ceph.*
- Install python-pip
cd ~;
sudo yum update -y;
sudo yum install -y python-pip;
- Install ceph-ansible
- Install ansible-2.6.4
sudo yum install -y PyYAML
sudo yum install -y python-jinja2
sudo yum install -y python-paramiko
sudo yum install -y python-six
sudo yum install -y python2-cryptography
sudo yum install -y sshpass
wget https://releases.ansible.com/ansible/rpm/release/epel-7-x86_64/ansible-2.6.4-1.el7.ans.noarch.rpm
sudo rpm -ivh ansible-2.6.4-1.el7.ans.noarch.rpm
ansible --version
- Fetch ceph-ansible
cd ~;
sudo yum install -y git;
git clone https://github.com/ceph/ceph-ansible.git
cd ceph-ansible;
git branch -a|grep stable
The output looks like: (screenshot omitted)
- 發(fā)行版說明
ceph-ansible分支 | ceph版本 | ansible版本 |
---|---|---|
stable-3.0 | jewel 和 luminous | 2.4 |
stable-3.1 | luminous 和 mimic | 2.4 |
stable-3.2 | luminous 和 mimic | 2.6 |
master | luminous 和 mimic | 2.7 |
- Check out stable-3.2 and resolve the python dependencies
git checkout stable-3.2
sudo pip install -r requirements.txt
sudo pip install --upgrade pip
- Configure the Inventory hosts
sudo chmod 0660 /etc/ansible/hosts
sudo echo "[mons]">>/etc/ansible/hosts
sudo echo "node1">>/etc/ansible/hosts
sudo echo "node2">>/etc/ansible/hosts
sudo echo "[osds]">>/etc/ansible/hosts
sudo echo "node2">>/etc/ansible/hosts
sudo echo "node3">>/etc/ansible/hosts
sudo echo "[mgrs]">>/etc/ansible/hosts
sudo echo "node1">>/etc/ansible/hosts
sudo echo "node2">>/etc/ansible/hosts
sudo echo "node3">>/etc/ansible/hosts
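A note on the lines above: in `sudo echo "x" >> file` the redirection is performed by the calling shell, not by sudo, so the `sudo` is redundant — the writes succeed here only because the file was made group-writable first and cephD was created in group root. A cleaner equivalent is a single heredoc write; a sketch (HOSTS defaults to a temp file — pipe the heredoc through `sudo tee /etc/ansible/hosts` for real use):

```shell
# Build the three-group Ansible inventory in a single write.
HOSTS="${HOSTS:-$(mktemp)}"
cat > "$HOSTS" <<'EOF'
[mons]
node1
node2

[osds]
node2
node3

[mgrs]
node1
node2
node3
EOF
cat "$HOSTS"
```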
- Configure the Playbook
cp site.yml.sample site.yml
- Configure the ceph deployment
cp group_vars/all.yml.sample group_vars/all.yml
vi group_vars/all.yml
------
###########
# INSTALL #
###########
ceph_origin: repository
ceph_repository: community
ceph_stable_release: luminous
ceph_stable_repo: "{{ ceph_mirror }}/rpm-{{ ceph_stable_release }}/el7/x86_64"
......
monitor_interface: eth0
......
public_network: 192.168.122.0/24
osd_objectstore: filestore
devices:
  - '/dev/vdb'
osd_scenario: collocated
------
- 安裝執(zhí)行
ansible-playbook site.yml -vv
ceph -s
NOTE:-vv 提示更多錯誤信息
PLAY RECAP ********************************************************************************************************************************************************************************************************
node1 : ok=165 changed=26 unreachable=0 failed=0
node2 : ok=248 changed=35 unreachable=0 failed=0
node3 : ok=176 changed=26 unreachable=0 failed=0
INSTALLER STATUS **************************************************************************************************************************************************************************************************
Install Ceph Monitor : Complete (0:07:34)
Install Ceph Manager : Complete (0:07:58)
Install Ceph OSD : Complete (0:01:09)
Wednesday 27 March 2019 02:50:32 -0400 (0:00:00.065) 0:17:19.385 *******
===============================================================================
ceph-common : install redhat ceph packages --------------------------------------------------------------------------------------------------------------------------------------------------------------- 274.13s
/home/cephD/ceph-ansible/roles/ceph-common/tasks/installs/install_redhat_packages.yml:16 -------------------------------------------------------------------------------------------------------------------------
ceph-common : install redhat ceph packages --------------------------------------------------------------------------------------------------------------------------------------------------------------- 230.22s
/home/cephD/ceph-ansible/roles/ceph-common/tasks/installs/install_redhat_packages.yml:16 -------------------------------------------------------------------------------------------------------------------------
ceph-common : install centos dependencies ---------------------------------------------------------------------------------------------------------------------------------------------------------------- 104.34s
/home/cephD/ceph-ansible/roles/ceph-common/tasks/installs/install_redhat_packages.yml:9 --------------------------------------------------------------------------------------------------------------------------
ceph-common : install centos dependencies ----------------------------------------------------------------------------------------------------------------------------------------------------------------- 93.92s
/home/cephD/ceph-ansible/roles/ceph-common/tasks/installs/install_redhat_packages.yml:9 --------------------------------------------------------------------------------------------------------------------------
ceph-mgr : install ceph-mgr package on RedHat or SUSE ----------------------------------------------------------------------------------------------------------------------------------------------------- 78.47s
/home/cephD/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml:2 ------------------------------------------------------------------------------------------------------------------------------------------------
ceph-mon : create ceph mgr keyring(s) when mon is not containerized --------------------------------------------------------------------------------------------------------------------------------------- 18.35s
/home/cephD/ceph-ansible/roles/ceph-mon/tasks/ceph_keys.yml:61 ---------------------------------------------------------------------------------------------------------------------------------------------------
ceph-osd : manually prepare ceph "filestore" non-containerized osd disk(s) with collocated osd data and journal ------------------------------------------------------------------------------------------- 12.11s
/home/cephD/ceph-ansible/roles/ceph-osd/tasks/scenarios/collocated.yml:53 ----------------------------------------------------------------------------------------------------------------------------------------
ceph-osd : activate osd(s) when device is a disk ----------------------------------------------------------------------------------------------------------------------------------------------------------- 9.93s
/home/cephD/ceph-ansible/roles/ceph-osd/tasks/activate_osds.yml:5 ------------------------------------------------------------------------------------------------------------------------------------------------
ceph-config : generate ceph configuration file: ceph.conf -------------------------------------------------------------------------------------------------------------------------------------------------- 7.68s
/home/cephD/ceph-ansible/roles/ceph-config/tasks/main.yml:77 -----------------------------------------------------------------------------------------------------------------------------------------------------
ceph-mon : collect admin and bootstrap keys ---------------------------------------------------------------------------------------------------------------------------------------------------------------- 6.42s
/home/cephD/ceph-ansible/roles/ceph-mon/tasks/ceph_keys.yml:2 ----------------------------------------------------------------------------------------------------------------------------------------------------
ceph-mon : create monitor initial keyring ------------------------------------------------------------------------------------------------------------------------------------------------------------------ 5.64s
/home/cephD/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:22 ---------------------------------------------------------------------------------------------------------------------------------------------
ceph-mgr : disable ceph mgr enabled modules ---------------------------------------------------------------------------------------------------------------------------------------------------------------- 5.45s
/home/cephD/ceph-ansible/roles/ceph-mgr/tasks/main.yml:32 --------------------------------------------------------------------------------------------------------------------------------------------------------
ceph-config : generate ceph configuration file: ceph.conf -------------------------------------------------------------------------------------------------------------------------------------------------- 4.88s
/home/cephD/ceph-ansible/roles/ceph-config/tasks/main.yml:77 -----------------------------------------------------------------------------------------------------------------------------------------------------
ceph-common : configure red hat ceph community repository stable key --------------------------------------------------------------------------------------------------------------------------------------- 4.35s
/home/cephD/ceph-ansible/roles/ceph-common/tasks/installs/redhat_community_repository.yml:2 ----------------------------------------------------------------------------------------------------------------------
ceph-common : configure red hat ceph community repository stable key --------------------------------------------------------------------------------------------------------------------------------------- 4.07s
/home/cephD/ceph-ansible/roles/ceph-common/tasks/installs/redhat_community_repository.yml:2 ----------------------------------------------------------------------------------------------------------------------
ceph-config : create ceph initial directories -------------------------------------------------------------------------------------------------------------------------------------------------------------- 4.06s
/home/cephD/ceph-ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml:18 ---------------------------------------------------------------------------------------------------------------------------------
ceph-common : purge yum cache ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 3.59s
/home/cephD/ceph-ansible/roles/ceph-common/tasks/installs/configure_redhat_repository_installation.yml:23 --------------------------------------------------------------------------------------------------------
ceph-common : configure red hat ceph community repository stable key --------------------------------------------------------------------------------------------------------------------------------------- 3.27s
/home/cephD/ceph-ansible/roles/ceph-common/tasks/installs/redhat_community_repository.yml:2 ----------------------------------------------------------------------------------------------------------------------
ceph-config : create ceph initial directories -------------------------------------------------------------------------------------------------------------------------------------------------------------- 3.27s
/home/cephD/ceph-ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml:18 ---------------------------------------------------------------------------------------------------------------------------------
ceph-config : create ceph initial directories -------------------------------------------------------------------------------------------------------------------------------------------------------------- 3.12s
/home/cephD/ceph-ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml:18 ---------------------------------------------------------------------------------------------------------------------------------
Check the cluster state
cephD@node ceph-ansible (stable-3.2) $ ssh node1 sudo ceph -s
cluster:
id: bb653ada-5753-4672-9d3b-b5e92846b897
health: HEALTH_OK
services:
mon: 2 daemons, quorum node1,node2
mgr: node2(active), standbys: node3, node1
osd: 2 osds: 2 up, 2 in
data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0B
usage: 214MiB used, 89.7GiB / 90.0GiB avail
pgs:
For further operations, see the later steps of section 6 (the ceph-deploy deployment).
NOTE: this install has no ceph-admin node, so ceph itself is not installed on node; all ceph commands have to be run on node1:
ssh node1
8 Offline deployment
This chapter runs on the ceph host as the cephD user.
- Set up a local repository
See the post "CentOS7搭建本地倉庫--CEPH" on building a local yum repo, then deploy with ceph-ansible.
- Follow section 7 ("Deploying the Ceph cluster with ansible"); only the differences are listed below
- python-pip installation difference
sudo pip install -r /home/cephD/ceph-ansible/requirements.txt --find-links=http://192.168.232.129/repo/python/deps/ --trusted-host 192.168.232.129
- ceph deployment config differences
cp group_vars/all.yml.sample group_vars/all.yml
vi group_vars/all.yml
------
###########
# INSTALL #
###########
ceph_origin: repository
ceph_repository: custom
ceph_stable_release: luminous
ceph_stable_repo: "http://192.168.232.129/repo/ceph/luminous/"
......
monitor_interface: eth0
......
public_network: 192.168.122.0/24
osd_objectstore: filestore
devices:
  - '/dev/sdb'
osd_scenario: collocated
------
- Reminder
The cephD user setup steps are still required when deploying with ceph-ansible.
9 Operating the cluster
- Start all daemons
sudo systemctl start ceph.target
- Stop all daemons
sudo systemctl stop ceph\*.service ceph\*.target
10 Problems & solutions
- [ceph_deploy][ERROR ] RuntimeError: NoSectionError: No section: 'ceph'
Solution: wait ~20 minutes and run the command again (sometimes, because of network problems, `yum install -y ceph ceph-radosgw` takes longer than 300 s and times out)
- [node1][WARNIN] Another app is currently holding the yum lock; waiting for it to exit...
Solution: wait, or find the process holding the lock with `ps -ef|grep yum`, kill it, and rerun the yum command
- Installation is very slow
Solution: the nodes do not all have to be installed by one command; parallel installs work, e.g.:
ceph-deploy install --release luminous node &
ceph-deploy install --release luminous node1 &
ceph-deploy install --release luminous node2 &
ceph-deploy install --release luminous node3 &
- auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring ---- ceph quorum_status --format json-pretty
Solution: copy the keyrings from the my-cluster directory and fix the ownership:
sudo cp * /etc/ceph/
sudo chown cephD:root /etc/ceph/*
- [ceph_deploy.rgw][ERROR ] RuntimeError: config file /etc/ceph/ceph.conf exists with different content; ...
Solution:
ceph-deploy --overwrite-conf rgw create node1
- [ERROR ] admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
Solution:
echo "public_network = 192.168.122.0/24" >> ceph.conf
ceph-deploy --overwrite-conf config push node node1 node2 node3
- What is the difference between mgr and mon?
Before luminous the mgr functionality lived inside the mon daemon; starting with the L release it was split out into its own daemon.
11 References
http://docs.ceph.com/ceph-ansible/master/
http://docs.ceph.com/docs/master/start/