I. Preparation
1. Environment:
Role | Hostname | IP |
---|---|---|
ceph node 1 | ceph-node1 | 10.0.0.2 |
ceph node 2 | ceph-node2 | 10.0.0.3 |
ceph node 3 | ceph-node3 | 10.0.0.4 |
ceph-deploy admin node | ceph-deploy | 10.0.0.5 |
client (for mount testing) | whatever | 10.0.0.6 |
Each of the three ceph nodes needs one spare disk (not mounted, not formatted).
OS: CentOS 7.5 (firewall stopped, SELinux disabled)
2. Install NTP on the ceph nodes and set up time synchronization (recommended)
sudo yum install ntp ntpdate ntp-doc
Disable SELinux and the firewall (or open the required ports) on every node.
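Concretely, the disable steps look like the sketch below (run as root on every node). The sed edit is demonstrated against a temporary copy of the SELinux config so it is easy to verify; on real nodes the target file is /etc/selinux/config.

```shell
# Demonstrate the persistent SELinux switch on a temp copy of the config.
cfg=$(mktemp)
echo 'SELINUX=enforcing' > "$cfg"            # stand-in for /etc/selinux/config
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' "$cfg"
cat "$cfg"

# On the real nodes (needs root):
#   setenforce 0                             # immediate, lasts until reboot
#   sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
#   systemctl disable --now firewalld
```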
3. Switch the yum repositories on all nodes
Replace them with the Aliyun mirrors and add the EPEL repository:
cp -a /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.bak
cp -a /etc/yum.repos.d/epel.repo /etc/yum.repos.d/epel.repo.bak
curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
curl -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
Add the Ceph yum repository:
cat << EOM > /etc/yum.repos.d/ceph.repo
[Ceph]
name=Ceph packages for \$basearch
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/\$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
[Ceph-noarch]
name=Ceph noarch packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
[ceph-source]
name=Ceph source packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
EOM
4. Create the cephdeploy user on every ceph node and on the ceph-deploy admin node
groupadd cephdeploy -g 1024
useradd cephdeploy -u 1024 -g 1024
Grant it sudo privileges:
echo "cephdeploy ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
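A slightly safer variant (a sketch, not part of the original steps) is to drop the rule into its own file under /etc/sudoers.d rather than appending to /etc/sudoers; the snippet below writes to a temp directory for illustration:

```shell
# Write the NOPASSWD rule as a standalone sudoers drop-in; sudo requires
# mode 0440. On the real nodes the target is /etc/sudoers.d/cephdeploy.
d=$(mktemp -d)
echo 'cephdeploy ALL=(ALL) NOPASSWD: ALL' > "$d/cephdeploy"
chmod 440 "$d/cephdeploy"
cat "$d/cephdeploy"
```

With a drop-in file, a bad rule can be removed by deleting one file, and `visudo -cf <file>` can validate it before it is installed.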
5. Configure /etc/hosts on every ceph node and on the ceph-deploy admin node
Each entry must match that machine's hostname exactly, and a hostname must not start with a digit.
vim /etc/hosts
10.0.0.2 ceph-node1
10.0.0.3 ceph-node2
10.0.0.4 ceph-node3
10.0.0.5 ceph-deploy
6. Set up passwordless SSH login
As the cephdeploy user on the ceph-deploy admin node, generate an SSH key pair:
su - cephdeploy
ssh-keygen
On each ceph node, allow the ceph-deploy admin node to log in as cephdeploy without a password:
sed -i 's/#PubkeyAuthentication yes/PubkeyAuthentication yes/g' /etc/ssh/sshd_config
systemctl restart sshd
mkdir /home/cephdeploy/.ssh
Copy ~/.ssh/id_rsa.pub from the admin node into /home/cephdeploy/.ssh/authorized_keys (or simply run ssh-copy-id cephdeploy@ceph-nodeX from the admin node), then fix ownership and permissions:
chown -R cephdeploy:cephdeploy /home/cephdeploy/.ssh
chmod 600 /home/cephdeploy/.ssh/authorized_keys
7. Configure ~/.ssh/config on the ceph-deploy admin node
Switch to the cephdeploy user, then edit the file:
vim .ssh/config
Host ceph-node1
Hostname ceph-node1
User cephdeploy
Port 22
Host ceph-node2
Hostname ceph-node2
User cephdeploy
Port 22
Host ceph-node3
Hostname ceph-node3
User cephdeploy
Port 22
Fix the permissions:
chmod 600 .ssh/config
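The three stanzas above are identical except for the host name, so they can also be generated with one loop (a sketch; it writes to a temp file here rather than ~/.ssh/config):

```shell
# Emit one Host stanza per ceph node; the real target is ~/.ssh/config.
cfg=$(mktemp)
for n in ceph-node1 ceph-node2 ceph-node3; do
  printf 'Host %s\n  Hostname %s\n  User cephdeploy\n  Port 22\n' "$n" "$n" >> "$cfg"
done
chmod 600 "$cfg"
cat "$cfg"
```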
8. Install ceph-deploy on the ceph-deploy admin node
sudo yum install ceph-deploy -y
9. Install Ceph on every ceph node
sudo yum -y install ceph ceph-radosgw
II. The Ceph Cluster
1. Bootstrap the cluster
On the admin node, log in as cephdeploy and create a directory to hold the configuration files and keys that ceph-deploy generates for the cluster:
mkdir my-cluster
cd my-cluster
ceph-deploy new ceph-node1 ceph-node2 ceph-node3
Deploy the initial monitor(s) and gather the keys:
ceph-deploy mon create-initial
Use ceph-deploy to copy the configuration file and admin key to your admin node and your Ceph nodes, so that you can use the ceph CLI without having to specify the monitor address and ceph.client.admin.keyring each time you execute a command:
ceph-deploy admin ceph-node1 ceph-node2 ceph-node3
Deploy a manager daemon.
ceph-deploy mgr create ceph-node1 ceph-node2 ceph-node3
Add three OSDs. For the purposes of these instructions, we assume you have an unused disk in each node called /dev/vdb. Be sure that the device is not currently in use and does not contain any important data.
ceph-deploy osd create --data {device} {ceph-node}
For example:
ceph-deploy osd create --data /dev/vdb ceph-node1
ceph-deploy osd create --data /dev/vdb ceph-node2
ceph-deploy osd create --data /dev/vdb ceph-node3
Once all three OSDs are up, ceph -s should report HEALTH_OK with 3 osds: 3 up, 3 in.
2. CephFS
Deploy the metadata servers (MDS), create the data and metadata pools, then create the filesystem:
ceph-deploy mds create ceph-node1 ceph-node2 ceph-node3
ceph osd pool create cephfs_data 32 32
ceph osd pool create cephfs_metadata 32 32
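The pg_num of 32 used above matches the usual rule of thumb for small clusters: aim for roughly 100 PGs per OSD divided by the replica count, split that across the pools, then round down to a power of two. A quick sketch of the arithmetic for this 3-OSD, 3-replica, 2-pool layout:

```shell
# Rule-of-thumb PG count: (OSDs * 100 / replicas) / pools, rounded down
# to a power of two. For 3 OSDs, 3 replicas, 2 pools this yields 32.
osds=3; replicas=3; pools=2
per_pool=$(( osds * 100 / replicas / pools ))   # 50
pg=1
while [ $(( pg * 2 )) -le "$per_pool" ]; do
  pg=$(( pg * 2 ))
done
echo "$pg"
```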
ceph fs new mycephfs cephfs_metadata cephfs_data
View the admin key and mount the filesystem from the client with it:
cat /etc/ceph/ceph.client.admin.keyring
mount.ceph ceph-node1:6789:/ /mnt/ -o name=admin,secret="xxxxx"
Alternatively, create a user restricted to /testdir (the command prints the new user's keyring; use its key as the mount secret):
ceph fs authorize mycephfs client.testuser /testdir rw
mount.ceph ceph-node1:6789,ceph-node2:6789,ceph-node3:6789:/testdir /mnt/ -o name=testuser,secret="pass"
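For a persistent mount it is more common to keep the key in a secret file than on the command line. A hypothetical /etc/fstab entry, with paths and monitor list assumed from the setup above:

```
# /etc/ceph/testuser.secret contains only the base64 key of client.testuser
ceph-node1:6789,ceph-node2:6789,ceph-node3:6789:/testdir /mnt ceph name=testuser,secretfile=/etc/ceph/testuser.secret,noatime,_netdev 0 0
```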
3. Deploy RGW instances
ceph-deploy rgw create ceph-node1 ceph-node2 ceph-node3
Each instance listens on port 7480 by default.
4. Block storage
Create a pool for RBD, initialize it, then create and map an image:
ceph osd pool create rbdpool01 32 32
rbd pool init rbdpool01
rbd create foo --size 4096 --image-feature layering -m ceph-node1,ceph-node2,ceph-node3 -k /etc/ceph/ceph.client.admin.keyring -p rbdpool01
sudo rbd map foo --name client.admin -m ceph-node1,ceph-node2,ceph-node3 -k /etc/ceph/ceph.client.admin.keyring -p rbdpool01
mkfs.ext4 -m0 /dev/rbd/rbdpool01/foo
mkdir /mnt/ceph-block-device
mount /dev/rbd/rbdpool01/foo /mnt/ceph-block-device
5. Ceph Dashboard
Official documentation: https://docs.ceph.com/docs/master/mgr/dashboard/
yum -y install ceph-mgr-dashboard
ceph config set mgr mgr/dashboard/ssl false
ceph mgr module enable dashboard
ceph dashboard ac-user-create <username> <password> administrator
Enable Object Gateway management in the dashboard:
radosgw-admin user create --uid=CephDashboard --display-name=CephDashboard --system
radosgw-admin user info --uid=CephDashboard
ceph dashboard set-rgw-api-access-key <access_key>
ceph dashboard set-rgw-api-secret-key <secret_key>
System tuning
Widen the ephemeral port range, then apply the setting:
echo "net.ipv4.ip_local_port_range = 1024 65535" >> /etc/sysctl.conf
sysctl -p
References:
https://docs.ceph.com/docs/master/start/quick-ceph-deploy/
Li Hang, "Distributed Storage: An Introduction to Ceph and Its Architecture" (分布式存儲 Ceph 介紹及原理架構(gòu)分享)