Series links
- http://www.reibang.com/p/f18a1b3a4920 How to deploy a containerized Ceph cluster with Kolla
- http://www.reibang.com/p/a39f226d5dfb Fixing some problems encountered during deployment
- http://www.reibang.com/p/d520fed237c0 Introducing the device classes feature into Kolla Ceph
- http://www.reibang.com/p/d6e047e1ad06 Supporting bcache devices, iSCSI, and multipath devices
- http://www.reibang.com/p/ab8251fc991a A Ceph containerized deployment orchestration project
簡(jiǎn)介
kolla 是openstack的容器化部署項(xiàng)目宗弯,主要目的是實(shí)現(xiàn)生產(chǎn)級(jí)別容器化openstack平臺(tái)的部署炒瘸,做到開(kāi)箱即用才菠。kolla利用ansible來(lái)編排相關(guān)容器的部署。
ceph作為開(kāi)源的分布式存儲(chǔ)虱饿,與openstack的聯(lián)系很緊密,kolla實(shí)現(xiàn)了簡(jiǎn)單的ceph集群部署以及與openstack組件比如cinder,manila,nova,glance之間的交互悼嫉。
kolla項(xiàng)目包含兩個(gè)代碼:
- kolla 主要負(fù)責(zé)鏡像構(gòu)建(https://github.com/openstack/kolla)
- kolla-ansible 主要負(fù)責(zé)部署及升級(jí)(https://github.com/openstack/kolla-ansible)
下面就簡(jiǎn)單說(shuō)下用kolla來(lái)部署ceph的優(yōu)缺點(diǎn):
kolla部署ceph的優(yōu)缺點(diǎn)
- 優(yōu)點(diǎn)
1. kolla-ansible的部署過(guò)程會(huì)比較環(huán)境的差異化解孙,比如鏡像的tag發(fā)生了變化,相關(guān)的配置發(fā)生了變化双戳,這些變化的部分才會(huì)被應(yīng)用到環(huán)境上虹蒋,而沒(méi)有發(fā)生的部分則會(huì)保持不變。反應(yīng)到ceph的部署中,則是可以很方便的來(lái)部署ceph和升級(jí)ceph魄衅。
2. kolla中初始化ceph osd的過(guò)程很巧妙峭竣,主要工作是圍繞disk的partname來(lái)實(shí)現(xiàn),而后ceph-osd容器各自對(duì)應(yīng)相應(yīng)的磁盤晃虫,所以添加新的osd或者修復(fù)損壞的osd都很方便皆撩。
3. ansible自身的優(yōu)點(diǎn)(方便定制化開(kāi)發(fā))
- 缺點(diǎn)
1. 任意改動(dòng)會(huì)對(duì)整個(gè)ceph集群的所有服務(wù)進(jìn)行應(yīng)用, 比如升級(jí)ceph的鏡像,所有的osd會(huì)被重啟(最小可以做到只升級(jí)一個(gè)節(jié)點(diǎn)的組件哲银,使用--limit或者ANSIBLE_SERIAL=1,但是因?yàn)椴渴疬^(guò)程有一些問(wèn)題,這些特性并不能很好的適用)扛吞。
2. 對(duì)ceph的新特性支持不足,kolla ceph現(xiàn)在支持luminous的部署荆责,但是對(duì)于一些新特性比如device class和支持device class的pool創(chuàng)建這些都不支持滥比。
3. 對(duì)于ceph的bluestore部署上,不支持bcache磁盤(磁盤分區(qū)名不支持)以及多路徑磁盤(初始化流程不支持)
也就是說(shuō),kolla目前對(duì)ceph的支持做院,作為一個(gè)測(cè)試集群來(lái)說(shuō)足夠了盲泛,但是對(duì)于生產(chǎn)化的ceph集群,還是有許多方面需要改進(jìn)键耕∷鹿觯可能社區(qū)的本來(lái)目的就是作為一個(gè)測(cè)試集群,所以在cinder/manila等組件的部署中推出了對(duì)外接ceph集群的支持郁竟。
我接觸kolla兩年多了玛迄,主要是用kolla來(lái)部署openstack組件和ceph在生產(chǎn)環(huán)境,所以針對(duì)使用kolla來(lái)部署生產(chǎn)環(huán)境標(biāo)準(zhǔn)的ceph集群做了一些工作棚亩,下面我會(huì)寫一些文章來(lái)講解一些相關(guān)的改進(jìn)工作蓖议。我也提交過(guò)一些改進(jìn)到社區(qū),可能是因?yàn)楦膭?dòng)太大讥蟆,而社區(qū)本身也缺乏對(duì)ceph了解的人勒虾,所以到最后就是一些改動(dòng)大的commits一直擱置。
Kolla Ceph deployment
Let's first deploy a cluster with the community's stable Rocky release. kolla-ansible can deploy Ceph together with the other OpenStack components, but that approach makes the Ceph cluster harder to maintain, so the recommended way is to deploy Ceph on its own and connect it to the OpenStack components as an external Ceph cluster (external_ceph).
Node initialization
Kolla needs a deployment node, ideally kept separate from the Ceph nodes.
- Node overview
Node requirements: each node needs at least one NIC.
ps: it is best to start with two or three mons and add more later; with more than three, the deployment hangs. I submitted a commit upstream to fix this.
commit url : https://review.openstack.org/652606
usage | hostname | ip | disks | roles |
---|---|---|---|---|
deploy | deploy-node | 192.168.0.10 | | deploy, docker_registry |
ceph | ceph-node1 | 192.168.0.11 | sdb,sdc,sdd | mon, mgr, osd, mds, rgw |
ceph | ceph-node2 | 192.168.0.12 | sdb,sdc,sdd | osd |
ceph | ceph-node3 | 192.168.0.13 | sdb,sdc,sdd | osd |
- Common setup (required on all nodes)
# yum repos and required packages
yum install epel-release -y
yum install python-pip -y
yum install -y python-devel libffi-devel gcc openssl-devel git
# install the Docker Python API
pip install docker
# install docker
sudo yum install -y https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-18.03.0.ce-1.el7.centos.x86_64.rpm
# docker configuration
Configure the Docker storage driver and the registry to trust (not needed if you use a public registry).
For example:
tee /etc/docker/daemon.json <<-'EOF'
{
"storage-driver": "devicemapper",
"insecure-registries":["192.168.0.10:4000"]
}
EOF
# create the deployment user kollasu
useradd -d /home/kollasu -m kollasu
passwd kollasu # set the password
echo "kollasu ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/kollasu
chmod 0440 /etc/sudoers.d/kollasu
# nscd (kolla defines its own users such as manila and ceph to keep them separate from host users; the following configuration is required)
yum install -y nscd
sed -i 's/\(^[[:space:]]*enable-cache[[:space:]]*passwd[[:space:]]*\)yes/\1no/g' /etc/nscd.conf
sed -i 's/\(^[[:space:]]*enable-cache[[:space:]]*group[[:space:]]*\)yes/\1no/g' /etc/nscd.conf
systemctl restart nscd
- 部署節(jié)點(diǎn)
# 安裝ansible
pip install -U ansible==2.4.1 (rocky版本需要2.4以上版本ansible)
# 建立部署節(jié)點(diǎn)到其他節(jié)點(diǎn)的無(wú)密碼訪問(wèn),并且ssh用戶具有對(duì)應(yīng)節(jié)點(diǎn)的root權(quán)限(如kollasu用戶)
# 建立自己的registry
docker run -d -p 4000:5000 --restart=always --name registry registry:2
- Ceph OSD disk initialization
systemctl daemon-reload
sudo sgdisk --zap-all -- /dev/sdb
sudo sgdisk --zap-all -- /dev/sdc
sudo sgdisk --zap-all -- /dev/sdd
sudo /sbin/parted /dev/sdb -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP_BS_FOO1 1 -1
sudo /sbin/parted /dev/sdc -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP_BS_FOO2 1 -1
sudo /sbin/parted /dev/sdd -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP_BS_FOO3 1 -1
ps: during initialization, Kolla decides how to turn a disk into an OSD based on its partition name. The filestore prefix is KOLLA_CEPH_OSD_BOOTSTRAP and the bluestore prefix is KOLLA_CEPH_OSD_BOOTSTRAP_BS; the name after the prefix identifies the OSD, so KOLLA_CEPH_OSD_BOOTSTRAP_BS_FOO1 and KOLLA_CEPH_OSD_BOOTSTRAP_BS_FOO2 correspond to two different OSDs. The name itself is arbitrary; Kolla later renames the partitions according to the OSD id it obtains.
For BlueStore, there are four kinds of partitions in total: the OSD data partition, the block partition, the WAL partition, and the DB partition:
A small partition is formatted with XFS and contains basic metadata for the OSD. This data directory includes information about the OSD (its identifier, which cluster it belongs to, and its private keyring).
The rest of the device is normally a large partition, managed directly by BlueStore, that contains all of the actual data. This primary device is normally identified by a block symlink in the data directory.
It is also possible to deploy BlueStore across two additional devices:
A WAL device can be used for BlueStore’s internal journal or write-ahead log. It is identified by the block.wal symlink in the data directory. It is only useful to use a WAL device if the device is faster than the primary device (e.g., when it is on an SSD and the primary device is an HDD).
A DB device can be used for storing BlueStore’s internal metadata. BlueStore (or rather, the embedded RocksDB) will put as much metadata as it can on the DB device to improve performance. If the DB device fills up, metadata will spill back onto the primary device (where it would have been otherwise). Again, it is only helpful to provision a DB device if it is faster than the primary device.
kolla是通過(guò)磁盤名稱的后綴來(lái)判斷作為osd的哪個(gè)分區(qū)的, 在filestore中, "J" 用來(lái)表示日志盤,在bluestore中,"B"表示block,"W"表示wal,"D"表示db. 例如:
sudo /sbin/parted /dev/sdb -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP_BS_FOO1 1 200
sudo /sbin/parted /dev/sdb -s mkpart KOLLA_CEPH_OSD_BOOTSTRAP_BS_FOO1_W 201 2249
sudo /sbin/parted /dev/sdb -s mkpart KOLLA_CEPH_OSD_BOOTSTRAP_BS_FOO1_D 2250 4298
sudo /sbin/parted /dev/sdb -s mkpart KOLLA_CEPH_OSD_BOOTSTRAP_BS_FOO1_B 4299 100%
You can define the size and disk of each partition yourself; Kolla matches them automatically by partname. If no dedicated block partition is specified, the disk holding KOLLA_CEPH_OSD_BOOTSTRAP_BS_FOO1 is automatically formatted into two partitions, the OSD data partition and the block partition, and any other partitions on that disk are wiped.
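The prefix/suffix matching described above can be sketched as a small shell function. This is a simplified illustration of the naming convention, not Kolla's actual code; the partition labels are the ones used in the examples above:

```shell
#!/bin/bash
# Classify a GPT partition label the way Kolla's bootstrap logic does:
# the prefix selects the store type, the optional one-letter suffix
# selects the partition role. (Illustrative sketch only.)
classify_partname() {
    local name="$1"
    case "$name" in
        KOLLA_CEPH_OSD_BOOTSTRAP_BS_*_B) echo "bluestore block" ;;
        KOLLA_CEPH_OSD_BOOTSTRAP_BS_*_W) echo "bluestore wal" ;;
        KOLLA_CEPH_OSD_BOOTSTRAP_BS_*_D) echo "bluestore db" ;;
        KOLLA_CEPH_OSD_BOOTSTRAP_BS_*)   echo "bluestore data" ;;
        KOLLA_CEPH_OSD_BOOTSTRAP_*_J)    echo "filestore journal" ;;
        KOLLA_CEPH_OSD_BOOTSTRAP_*)      echo "filestore data" ;;
        *)                               echo "not a kolla osd partition" ;;
    esac
}

classify_partname "KOLLA_CEPH_OSD_BOOTSTRAP_BS_FOO1"    # bluestore data
classify_partname "KOLLA_CEPH_OSD_BOOTSTRAP_BS_FOO1_W"  # bluestore wal
```

Note that the bluestore patterns must be matched before the filestore ones, since the bluestore prefix is an extension of the filestore prefix.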
構(gòu)建ceph鏡像
使用配置文件來(lái)單獨(dú)構(gòu)建ceph鏡像.
[root@deploy-node rocky-ceph]# tree -L 1
.
├── build-test
├── ceph-test
├── kolla
└── kolla-ansible
First, look at the layout of our deployment workspace. Here we downloaded the kolla and kolla-ansible source directly, both on the stable/rocky branch; the advantage of deploying from source is that custom development becomes easy.
build-test holds the image-build configuration
.
├── build-test
│ └── kolla-build.conf
內(nèi)容如下:
[DEFAULT]
base = centos
profile = image_ceph
namespace = kolla
install_type = source
retries = 1
push_threads = 4
maintainer = kolla Project
[profiles]
image_ceph = cron,kolla-toolbox,fluentd,ceph
cron, kolla-toolbox, and fluentd are shared base images.
Build command
python kolla/kolla/cmd/build.py --config-file build-test/kolla-build.conf --push --registry 192.168.0.10:4000 --tag cephRocky-7.0.2.0001 --type source
Deploying Ceph
During a kolla-ansible deployment you can use --tags to select which projects to deploy; to deploy Ceph on its own, we also disable every other project in globals.yml and keep only the Ceph-related options.
├── ceph-test
│ ├── custom
│ ├── globals.yml
│ ├── multinode-inventory
│ └── passwords.yml
- globals.yml is as follows:
---
# The directory containing custom config files to merge with kolla's config files
node_custom_config: "{{ CONFIG_DIR }}/custom"
# The project to generate configuration files for
project: ""
# The directory to store the config files on the destination node
node_config_directory: "/home/kollasu/kolla/{{ project }}"
# The group which own node_config_directory, you can use a non-root
# user to deploy kolla
config_owner_user: "kollasu"
config_owner_group: "kollasu"
###################
# Kolla options
###################
# Valid options are ['centos', 'debian', 'oraclelinux', 'rhel', 'ubuntu']
kolla_base_distro: "centos"
# Valid options are [ binary, source ]
kolla_install_type: "source"
kolla_internal_vip_address: ""
####################
# Docker options
####################
### Example: Private repository with authentication
docker_registry: "192.168.0.10:4000"
docker_namespace: "kolla"
docker_registry_username: ""
####################
# OpenStack options
####################
openstack_release: "auto"
openstack_logging_debug: "False"
enable_glance: "no"
enable_haproxy: "no"
enable_keystone: "no"
enable_mariadb: "no"
enable_memcached: "no"
enable_neutron: "no"
enable_nova: "no"
enable_rabbitmq: "no"
enable_ceph: "yes"
enable_ceph_mds: "yes"
enable_ceph_rgw: "yes"
enable_ceph_nfs: "no"
enable_ceph_dashboard: "{{ enable_ceph | bool }}"
enable_chrony: "no"
enable_cinder: "no"
enable_fluentd: "yes"
enable_heat: "no"
enable_horizon: "no"
enable_manila: "no"
###################
# Ceph options
###################
# Valid options are [ erasure, replicated ]
ceph_pool_type: "replicated"
# Integrate Ceph Rados Object Gateway with OpenStack keystone
enable_ceph_rgw_keystone: "no"
ceph_erasure_profile: "k=2 m=1 ruleset-failure-domain=osd"
ceph_pool_pg_num: 32
ceph_pool_pgp_num: 32
osd_initial_weight: "auto"
# Set the store type for ceph OSD
# Valid options are [ filestore, bluestore]
ceph_osd_store_type: "bluestore"
- multinode-inventory is as follows:
[storage-mon]
ceph-node1 ansible_user=kollasu network_interface=eth0 api_interface=eth0 storage_interface=eth0 cluster_interface=eth0
ceph-node2 ansible_user=kollasu network_interface=eth0 api_interface=eth0 storage_interface=eth0 cluster_interface=eth0
[storage-osd]
ceph-node1 ansible_user=kollasu network_interface=eth0 api_interface=eth0 storage_interface=eth0 cluster_interface=eth0
ceph-node2 ansible_user=kollasu network_interface=eth0 api_interface=eth0 storage_interface=eth0 cluster_interface=eth0
ceph-node3 ansible_user=kollasu network_interface=eth0 api_interface=eth0 storage_interface=eth0 cluster_interface=eth0
[storage-rgw]
ceph-node1 ansible_user=kollasu network_interface=eth0 api_interface=eth0 storage_interface=eth0 cluster_interface=eth0
[storage-mgr]
ceph-node1 ansible_user=kollasu network_interface=eth0 api_interface=eth0 storage_interface=eth0 cluster_interface=eth0
[storage-mds]
ceph-node1 ansible_user=kollasu network_interface=eth0 api_interface=eth0 storage_interface=eth0 cluster_interface=eth0
[storage-nfs]
[ceph-mon:children]
storage-mon
[ceph-rgw:children]
storage-rgw
[ceph-osd:children]
storage-osd
[ceph-mgr:children]
storage-mgr
[ceph-mds:children]
storage-mds
[ceph-nfs:children]
storage-nfs
Defining the groups this way lets you change which nodes run which services freely, and specifying the user and interfaces per node accommodates more complex setups, such as a cluster whose nodes have differently named NICs; with a single default NIC name configured for all nodes, the installation would fail.
- passwords.yml is as follows (these are set because the config files generated from the templates reference them during deployment; missing values cause errors):
ceph_cluster_fsid: 804effd3-1013-4e57-93ca-983a13cfa133
docker_registry_password:
keystone_admin_password:
- The custom directory holds your own ceph.conf. It is merged with the ceph.conf that kolla-ansible generates from its templates, and the result becomes the cluster's config file (your custom ceph.conf takes precedence over the generated one).
[global]
rbd_default_features = 1
public_network = 192.168.0.0/24
cluster_network = 192.168.0.0/24
osd_pool_default_size = 2
osd_pool_default_min_size = 1
osd_crush_update_on_start = false
osd_class_update_on_start = false
mon_max_pg_per_osd = 500
mon_allow_pool_delete = true
...
- Deploy Ceph
chmod +x kolla-ansible/tools/kolla-ansible
# pull鏡像到具體節(jié)點(diǎn)
kolla-ansible/tools/kolla-ansible pull --configdir ceph-test -i ceph-test/multinode-inventory --passwords ceph-test/passwords.yml --tags ceph -e openstack_release=cephRocky-7.0.2.0001
# deploy the Ceph cluster
kolla-ansible/tools/kolla-ansible deploy --configdir ceph-test -i ceph-test/multinode-inventory --passwords ceph-test/passwords.yml --tags ceph -e openstack_release=cephRocky-7.0.2.0001
升級(jí)ceph集群
kolla-ansible對(duì)ceph升級(jí)既有方便的地方,即按順序自動(dòng)升級(jí)所有組件,mon-->mgr-->osd-->rgw-->mds-->nfs,可以自動(dòng)化升級(jí)所有容器的鏡像.
缺點(diǎn)就是升級(jí)是針對(duì)所有服務(wù),不能具體指定升級(jí)某一項(xiàng).而在osd的升級(jí)過(guò)程中缺乏一些對(duì)ceph集群狀態(tài)的檢測(cè),kolla-ansible升級(jí)osd是同時(shí)升級(jí)所有節(jié)點(diǎn)上(一次最多可執(zhí)行ANSIBLE_FORKS規(guī)定的節(jié)點(diǎn)數(shù))的osd,單個(gè)節(jié)點(diǎn)上的osd是順序升級(jí)的. 理論上來(lái)說(shuō)只要鏡像沒(méi)有問(wèn)題,osd升級(jí)過(guò)程中的重啟是很快的,不會(huì)影響集群的狀態(tài).但是不怕一萬(wàn)就怕萬(wàn)一,同時(shí)升級(jí)好幾個(gè)節(jié)點(diǎn)的osd出現(xiàn)問(wèn)題的概率當(dāng)然也大,我們理想的狀態(tài)是升級(jí)的服務(wù)可以自動(dòng)指定,osd的升級(jí)過(guò)程可以做到單節(jié)點(diǎn)上順序升級(jí),然后升級(jí)中伴隨著ceph狀態(tài)的檢測(cè).
- 構(gòu)建新鏡像
python kolla/kolla/cmd/build.py --config-file build-test/kolla-build.conf --push --registry 192.168.0.10:4000 --tag cephRocky-7.0.2.0002 --type source
- 升級(jí)ceph
kolla-ansible/tools/kolla-ansible upgrade --configdir ceph-test -i ceph-test/multinode-inventory --passwords ceph-test/passwords.yml --tags ceph -e openstack_release=cephRocky-7.0.2.0002
OSD repair
- OSD mount relationships
After an OSD's partitions are initialized, the OSD data partition is mounted by its uuid at /var/lib/ceph/osd/${uuid}, and when the container starts, that directory is handed to the container as a Docker volume: /var/lib/ceph/osd/${uuid}:/var/lib/ceph/osd/ceph-${osd_id}.
Mounting by uuid suits cache-backed (bcache) disks and multipath disks particularly well: their device names change easily, but the uuid never does.
- OSD repair
OSD containers started by kolla-ansible sometimes go down because of a faulty disk. The fix is simple: reformat the disk and redeploy. Of course, the OSD must be removed from the cluster before the repair; afterwards, set the OSD's weight to 0 first, then raise it gradually.
Example
- Take osd.7 as an example
(ceph-mon)[root@ceph-node1 /]# ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 0.44989 root default
-4 0.14996 host 192.168.0.11
11 0.04999 osd.11 up 1.00000 1.00000
12 0.04999 osd.12 up 1.00000 1.00000
13 0.04999 osd.13 up 1.00000 1.00000
-2 0.14996 host 192.168.0.12
0 0.04999 osd.0 up 1.00000 1.00000
3 0.04999 osd.3 up 1.00000 1.00000
6 0.04999 osd.6 up 1.00000 1.00000
-3 0.14996 host 192.168.0.13
1 0.04999 osd.1 up 1.00000 1.00000
4 0.04999 osd.4 up 1.00000 1.00000
7 0.04999 osd.7 down 1.00000 1.00000
- On ceph-node3, inspect the OSD; the disk is sdb
Disk /dev/sdb: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt
Disk identifier: FA31FD88-190E-4CA4-AF0D-E31AB1FCADDC
# Start End Size Type Name
1 2048 206847 100M unknown KOLLA_CEPH_DATA_BS_7
2 206848 104857566 49.9G unknown KOLLA_CEPH_DATA_BS_7_B
- Unmount the disk
# inspect with df -h
/dev/sdb1 97M 5.3M 92M 6% /var/lib/ceph/osd/0ffdd2fc-41cd-429c-84ee-8150467c06ed
# unmount
umount /var/lib/ceph/osd/0ffdd2fc-41cd-429c-84ee-8150467c06ed
# clean up /etc/fstab
# delete the mount entry for sdb1
UUID=0ffdd2fc-41cd-429c-84ee-8150467c06ed /var/lib/ceph/osd/0ffdd2fc-41cd-429c-84ee-8150467c06ed xfs defaults,noatime 0 0
- Remove the OSD
# remove the old osd
osd_number=7
ceph osd out ${osd_number}
ceph osd crush remove osd.${osd_number}
ceph auth del osd.${osd_number}
ceph osd rm ${osd_number}
docker stop ceph_osd_7
docker rm ceph_osd_7
- Re-initialize the disk
systemctl daemon-reload
sudo sgdisk --zap-all -- /dev/sdb
sudo /sbin/parted /dev/sdb -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP_BS_FOO1 1 -1
- Redeploy the OSD
kolla-ansible/tools/kolla-ansible deploy --configdir ceph-test -i ceph-test/multinode-inventory --passwords ceph-test/passwords.yml --tags ceph -e openstack_release=cephRocky-7.0.2.0001
確認(rèn)鏡像的tag沒(méi)有發(fā)生變化,則這次部署只會(huì)重新添加一個(gè)新的osd,并不會(huì)影響之前的osd.
Summary
This article covered the community's Ceph deployment. The next one will look at the problems in deploying and maintaining a Ceph cluster and the improvements that address them.