Install Ceph
Install the Ceph packages
- Install Ceph on all Ceph nodes
yum -y install librados2-14.2.10 ceph-14.2.10
- On the ceph1 node, additionally install ceph-deploy
yum -y install ceph-deploy
- Check the version on each node
ceph -v
The output should look like:
ceph version 14.2.10 (b340acf629a010a74d90da5782a2c5fe0b54ac20) nautilus (stable)
Deploy the MON nodes
These steps only need to be run on the primary node, ceph1.
- Create the cluster
cd /etc/ceph
ceph-deploy new ceph1 ceph2 ceph3
- In the ceph.conf file generated automatically under /etc/ceph, configure mon_host, public_network, and cluster_network
vi /etc/ceph/ceph.conf
Edit ceph.conf so that its contents look like the following:
[global]
fsid = f6b3c38c-7241-44b3-b433-52e276dd53c6
mon_initial_members = ceph1, ceph2, ceph3
mon_host = 192.168.3.166,192.168.3.167,192.168.3.168
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
# The network layout is simple, so a single network is used; public and cluster traffic are not separated
public_network = 192.168.3.0/24
cluster_network = 192.168.3.0/24
[mon]
mon_allow_pool_delete = true
- Initialize the monitors and collect the keys
ceph-deploy mon create-initial
- Copy ceph.client.admin.keyring to every node
ceph-deploy --overwrite-conf admin ceph1 ceph2 ceph3
- Check whether the configuration succeeded
ceph -s
The output should look like:
  cluster:
    id:     f6b3c38c-7241-44b3-b433-52e276dd53c6
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph1,ceph2,ceph3 (age 25h)
Deploy the MGR nodes
- Deploy the MGR nodes
ceph-deploy mgr create ceph1 ceph2 ceph3
- Check whether the MGRs were deployed successfully
ceph -s
The output should look like:
  cluster:
    id:     f6b3c38c-7241-44b3-b433-52e276dd53c6
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph1,ceph2,ceph3 (age 25h)
    mgr: ceph1(active, since 2d), standbys: ceph2, ceph3
Deploy the OSD nodes
- Prepare the data disks
List the disks available on the data nodes:
ceph-deploy disk list ceph2 ceph3
Zap the data disks:
ceph-deploy disk zap ceph2 /dev/sdb    # adjust the disk device name to your environment
ceph-deploy disk zap ceph3 /dev/sdb
- Create the OSDs
ceph-deploy osd create ceph2 --data /dev/sdb
ceph-deploy osd create ceph3 --data /dev/sdb
- Check the cluster status
After the OSDs are created, verify that the cluster is healthy, i.e. that both OSDs are up:
ceph -s
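Besides ceph -s, the OSD state can also be checked directly; the following commands are an optional sanity check (the exact output depends on your topology):
ceph osd stat    # should report 2 osds: 2 up, 2 in
ceph osd tree    # lists each OSD and the host it runs on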
Ceph For Kubernetes
The APP cluster uses a StorageClass backed by cephfs, and the DB cluster uses a StorageClass backed by rbd.
Cephfs
Notes:
- CephFS needs two pools to store data and metadata separately; below we create the fs_data and fs_metadata pools.
- The two numbers at the end of the pool-creation command, e.g. the two 1024s in ceph osd pool create fs_data 1024 1024, are the pool's pg_num and pgp_num, i.e. the number of PGs in the pool. The Ceph documentation recommends that the total number of PGs across all pools in the cluster be roughly (number of OSDs * 100) / data redundancy factor, where the redundancy factor is the replica count for replicated pools and the sum of data and coding chunks for EC pools; for example, it is 3 for three-way replication and 6 for EC 4+2.
- This cluster has 3 servers with 12 OSDs each, 36 OSDs in total, so the formula gives about 1200 PGs; it is generally recommended to round the PG count to a power of two. Since fs_data will hold far more data than the other pools, it is allocated a proportionally larger share of the PGs.
In summary, fs_data gets 1024 PGs and fs_metadata gets 128 or 256.
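For reference, the arithmetic behind these numbers, assuming the three-way replicated layout described above; the pool names here follow this explanation (the commands further below create smaller cephfs-data/cephfs-metadata pools instead):
# total PGs ≈ (36 OSDs * 100) / 3 replicas = 1200, rounded down to the power of two 1024
# most of the budget goes to the data pool, the rest to metadata:
ceph osd pool create fs_data 1024 1024        # pg_num and pgp_num for the data pool
ceph osd pool create fs_metadata 128 128      # a much smaller metadata pool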
Create Cephfs
The MDS (Metadata Server) is responsible for managing files and directories in a Ceph FS cluster. cephfs needs at least one mds (Ceph Metadata Server) service to store the metadata the cephfs service depends on; if resources allow, you can create two, and they will automatically act as active and standby.
Note: install the MDS inside the Ceph cluster
- Create the mds service with ceph-deploy
sudo ceph-deploy mds create ceph2 ceph3
- Create the storage pools
A cephfs file system needs at least two RADOS pools, one for data and one for metadata. When configuring these pools, consider the following:
- Use a higher replication level for the metadata pool, because losing any data in this pool can render the whole file system unusable;
- Place the metadata pool on low-latency storage (e.g. SSDs), because it directly affects the operation latency seen by clients.
sudo ceph osd pool create cephfs-data 64 64
sudo ceph osd pool create cephfs-metadata 16 16
sudo ceph fs new cephfs cephfs-metadata cephfs-data
After creation, check the status of the mds and the fs:
# sudo ceph mds stat
e6: 1/1/1 up {0=ceph2=up:active}, 1 up:standby
# sudo ceph fs ls
name: cephfs, metadata pool: cephfs-metadata, data pools: [cephfs-data ]
- Get the Ceph auth key
On the ceph-deploy host, view the cephfs key (base64-encoded):
sudo ceph auth get-key client.admin | base64
K8S integration with Cephfs
Note: Cephfs is integrated in the Kubernetes APP cluster
- Install ceph-common
sudo yum install -y ceph-common
- Create the namespace
cephfs-ns.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: cephfs
  labels:
    name: cephfs
- Create ceph-secret
ceph-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
  namespace: cephfs
data:
  key: QVFEY2hYaFlUdGp3SEJBQWsyL0gxWXBhMjNXeEt2NGpBMU5GV3c9PQo=    # put the base64 key obtained above here
- Cluster role
clusterRole.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cephfs-provisioner
  namespace: cephfs
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["kube-dns","coredns"]
    verbs: ["list", "get"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["list", "get", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "create", "delete"]
- Cluster role binding
clusterRoleBinding.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cephfs-provisioner
subjects:
  - kind: ServiceAccount
    name: cephfs-provisioner
    namespace: cephfs
roleRef:
  kind: ClusterRole
  name: cephfs-provisioner
  apiGroup: rbac.authorization.k8s.io
- Role binding
roleBinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cephfs-provisioner
  namespace: cephfs
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cephfs-provisioner
subjects:
  - kind: ServiceAccount
    name: cephfs-provisioner
- Service account
serviceAccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cephfs-provisioner
  namespace: cephfs
- Deployment
cephfs-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cephfs-provisioner
  namespace: cephfs
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cephfs-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: cephfs-provisioner
    spec:
      containers:
        - name: cephfs-provisioner
          image: "quay.io/external_storage/cephfs-provisioner:latest"
          env:
            - name: PROVISIONER_NAME
              value: ceph.com/cephfs
            - name: PROVISIONER_SECRET_NAMESPACE
              value: cephfs
          command:
            - "/usr/local/bin/cephfs-provisioner"
          args:
            - "-id=cephfs-provisioner-1"
      serviceAccount: cephfs-provisioner
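To bring the provisioner up, the manifests above can be applied and the pod checked; this is a sketch that assumes the file names used in this section:
kubectl apply -f cephfs-ns.yaml
kubectl apply -f ceph-secret.yaml
kubectl apply -f serviceAccount.yaml
kubectl apply -f clusterRole.yaml
kubectl apply -f clusterRoleBinding.yaml
kubectl apply -f roleBinding.yaml
kubectl apply -f cephfs-deployment.yaml
kubectl -n cephfs get pods    # the cephfs-provisioner pod should reach Running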
- Create the StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cephfs
  namespace: cephfs
provisioner: ceph.com/cephfs
parameters:
  monitors: 10.20.20.200:6789,10.20.20.201:6789,10.20.20.202:6789    # list every monitor you have
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: cephfs
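To confirm dynamic provisioning works end to end, a minimal test PVC can be created against this StorageClass (the claim name and size below are illustrative):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-test-claim    # illustrative name
  namespace: cephfs
spec:
  storageClassName: cephfs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
After applying it, kubectl -n cephfs get pvc should show the claim as Bound.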
Kubernetes integration with Ceph rbd
Initialize the storage pool in the Ceph cluster
ceph osd pool create esdb 64 64    # create the pool
rbd pool init esdb    # initialize the pool
rbd create esdb/img --size 4096 --image-feature layering -k /etc/ceph/ceph.client.admin.keyring    # create an image
rbd map esdb/img --name client.admin -k /etc/ceph/ceph.client.admin.keyring    # map the image
cd /etc/ceph
ceph auth get-or-create client.esdb mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=esdb' -o ceph.client.esdb.keyring    # create the esdb auth key
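As a quick sanity check (names follow the commands above), the pool, the image, and the client can be inspected with:
rbd ls esdb                  # should list the img image
rbd info esdb/img            # shows the image size and enabled features
ceph auth get client.esdb    # shows the capabilities granted to client.esdb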
Install ceph rbd in the Kubernetes DB cluster
rpm -Uvh https://download.ceph.com/rpm-mimic/el7/noarch/ceph-release-1-1.el7.noarch.rpm
sudo yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
yum repolist && yum install ceph-common -y
yum -y install librbd1 && modprobe rbd
Get the ceph keys used to create the secrets
cd /etc/ceph
cat ceph.client.admin.keyring | grep key #admin key
cat ceph.client.esdb.keyring | grep key # esdb key
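Since the data field of a Kubernetes Secret holds base64-encoded values, one way to produce the values for the two secrets below is (a sketch based on the keyrings created above):
ceph auth get-key client.admin | base64    # value for ceph-secret
ceph auth get-key client.esdb | base64     # value for ceph-secret-esdb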
Create the secrets
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
  namespace: k8s-ceph
data:
  key: **admin key**    # base64-encoded admin key
type: kubernetes.io/rbd
---
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret-esdb
  namespace: k8s-ceph
data:
  key: **esdb key**    # base64-encoded esdb key
type: kubernetes.io/rbd
Create the rbd-provisioner
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rbd-provisioner
  namespace: k8s-ceph
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rbd-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: rbd-provisioner
    spec:
      containers:
        - name: rbd-provisioner
          image: "quay.io/external_storage/rbd-provisioner:latest"
          env:
            - name: PROVISIONER_NAME
              value: ceph.com/rbd
      serviceAccount: rbd-provisioner
Create the Storage class
#1: Create the ClusterRole
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["kube-dns","coredns"]
    verbs: ["list", "get"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]
#2: Create the ClusterRoleBinding
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
subjects:
  - kind: ServiceAccount
    name: rbd-provisioner
    namespace: k8s-ceph
roleRef:
  kind: ClusterRole
  name: rbd-provisioner
  apiGroup: rbac.authorization.k8s.io
#3: Create the StorageClass
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rbd
  annotations:
    storageclass.kubernetes.io/is-default-class: "false"
provisioner: ceph.com/rbd
reclaimPolicy: Delete
parameters:
  monitors: <your-mon-node-IP>:6789    # list all monitor nodes
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: k8s-ceph
  pool: esdb
  userId: esdb
  userSecretName: ceph-secret-esdb
  userSecretNamespace: k8s-ceph
  imageFormat: "2"
  imageFeatures: layering
#4: Create the ServiceAccount
---
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: k8s-ceph
  name: rbd-provisioner
Add a PVC in Rancher; if its status becomes Bound, the integration is successful.
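Equivalently, from the command line, a minimal test PVC against the rbd StorageClass looks like the following (name, namespace, and size are illustrative); it should likewise reach the Bound state:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-test-claim    # illustrative name
  namespace: k8s-ceph
spec:
  storageClassName: rbd
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi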