I spent two solid days setting up a Zookeeper cluster on Kubernetes and hit plenty of pitfalls along the way, but the setup finally succeeded.
Preparation
- Zookeeper image: in my testing, a Zookeeper cluster on Kubernetes needs the kubernetes-zookeeper image rather than the official Zookeeper image (I may revisit the official one later). The kubernetes-zookeeper image is hard to pull from inside China, so I built it on Docker Hub with the following Dockerfile:
FROM k8s.gcr.io/kubernetes-zookeeper:1.0-3.4.10
MAINTAINER leo.lee <lis85@163.com>
Alternatively, you can pull the image I have already built:
docker pull leolee32/kubernetes-library:kubernetes-zookeeper1.0-3.4.10
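The StatefulSet further down pulls the image from a private registry at 192.168.242.132/library. If you follow the same approach, you can retag the pulled image and push it to your own registry; the address here is simply the one used later in this post, so adjust it to your environment. A minimal sketch:
# retag the public image for the private registry referenced by the StatefulSet
docker tag leolee32/kubernetes-library:kubernetes-zookeeper1.0-3.4.10 192.168.242.132/library/kubernetes-zookeeper:1.0-3.4.10
# push it so the cluster nodes can pull it
docker push 192.168.242.132/library/kubernetes-zookeeper:1.0-3.4.10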
- The Zookeeper cluster needs storage, so PersistentVolumes (PVs) have to be prepared. Here I create three PVs from a yaml file; they will be bound by the PersistentVolumeClaims (PVCs) created for the three Zookeeper pods in a moment.
persistent-volume.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: k8s-pv-zk1
  annotations:
    volume.beta.kubernetes.io/storage-class: "anything"
  labels:
    type: local
spec:
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/var/lib/zookeeper"
  persistentVolumeReclaimPolicy: Recycle
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: k8s-pv-zk2
  annotations:
    volume.beta.kubernetes.io/storage-class: "anything"
  labels:
    type: local
spec:
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/var/lib/zookeeper"
  persistentVolumeReclaimPolicy: Recycle
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: k8s-pv-zk3
  annotations:
    volume.beta.kubernetes.io/storage-class: "anything"
  labels:
    type: local
spec:
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/var/lib/zookeeper"
  persistentVolumeReclaimPolicy: Recycle
Create them with the following command:
kubectl create -f persistent-volume.yaml
Check the PVs:
kubectl get pv -o wide
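If you want a tighter view than the default output, kubectl's custom-columns output can show just the name, capacity and phase; the three PV names are the ones defined above, and at this point each should report Available. A quick sketch:
kubectl get pv k8s-pv-zk1 k8s-pv-zk2 k8s-pv-zk3 -o custom-columns=NAME:.metadata.name,CAPACITY:.spec.capacity.storage,STATUS:.status.phase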
Deploying the Zookeeper cluster
zookeeper.yaml
apiVersion: v1
kind: Service
metadata:
  name: zk-hs
  labels:
    app: zk
spec:
  ports:
  - port: 2888
    name: server
  - port: 3888
    name: leader-election
  clusterIP: None
  selector:
    app: zk
---
apiVersion: v1
kind: Service
metadata:
  name: zk-cs
  labels:
    app: zk
spec:
  ports:
  - port: 2181
    name: client
  selector:
    app: zk
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  selector:
    matchLabels:
      app: zk
  maxUnavailable: 1
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zk
spec:
  selector:
    matchLabels:
      app: zk
  serviceName: zk-hs
  replicas: 3
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: Parallel
  template:
    metadata:
      labels:
        app: zk
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: "app"
                    operator: In
                    values:
                    - zk
              topologyKey: "kubernetes.io/hostname"
      containers:
      - name: kubernetes-zookeeper
        imagePullPolicy: Always
        image: "192.168.242.132/library/kubernetes-zookeeper:1.0-3.4.10"
        resources:
          requests:
            memory: "1Gi"
            cpu: "0.5"
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: server
        - containerPort: 3888
          name: leader-election
        command:
        - sh
        - -c
        - "start-zookeeper \
          --servers=3 \
          --data_dir=/var/lib/zookeeper/data \
          --data_log_dir=/var/lib/zookeeper/data/log \
          --conf_dir=/opt/zookeeper/conf \
          --client_port=2181 \
          --election_port=3888 \
          --server_port=2888 \
          --tick_time=2000 \
          --init_limit=10 \
          --sync_limit=5 \
          --heap=512M \
          --max_client_cnxns=60 \
          --snap_retain_count=3 \
          --purge_interval=12 \
          --max_session_timeout=40000 \
          --min_session_timeout=4000 \
          --log_level=INFO"
        readinessProbe:
          exec:
            command:
            - sh
            - -c
            - "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        volumeMounts:
        - name: datadir
          mountPath: /var/lib/zookeeper
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
  volumeClaimTemplates:
  - metadata:
      name: datadir
      annotations:
        volume.beta.kubernetes.io/storage-class: "anything"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 3Gi
Create it with the following command:
kubectl create -f zookeeper.yaml
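While the StatefulSet starts, you can watch its pods with a label selector (app: zk is the label applied in the manifest above); press Ctrl+C to stop watching:
kubectl get pods -w -l app=zk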
After creating it you will hit a problem: none of the Zookeeper pods can start. The logs show that the container user has no permission on the /var/lib/zookeeper directory, because the hostPath directory on the node is owned by root.
How to solve this as part of the installation itself still needs investigation; for now you can manually change the ownership of /var/lib/zookeeper to the non-root user the container runs as (see the sketch below), after which the Zookeeper pods start normally.
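A minimal sketch of that manual fix, assuming the container runs as UID/GID 1000 (the runAsUser and fsGroup set in the StatefulSet) and that you repeat it on every node that hosts a Zookeeper pod:
# create the hostPath directory and hand it to the container user (UID/GID 1000 per the manifest)
sudo mkdir -p /var/lib/zookeeper
sudo chown -R 1000:1000 /var/lib/zookeeper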
Check the pods:
kubectl get pod -o wide
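To confirm each replica was assigned a distinct server id, you can read back the myid file from the data directory; the path assumes --data_dir=/var/lib/zookeeper/data as configured in the manifest (this check is borrowed from the upstream Zookeeper StatefulSet tutorial):
for i in 0 1 2; do kubectl exec zk-$i -- cat /var/lib/zookeeper/data/myid; done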
Check the PVs again; the PersistentVolumeClaims are now bound:
kubectl get pv -o wide
Check the PVCs:
kubectl get pvc -o wide
Finally, verify that the Zookeeper cluster is healthy by checking the status of each node:
for i in 0 1 2; do kubectl exec zk-$i -- zkServer.sh status; done
One leader and two followers. Success!
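As one more sanity check, you can write a znode through one pod and read it back through another. zkCli.sh ships with the Zookeeper distribution inside the image, but treat this as a sketch, since the exact client invocation can differ between images:
# create a znode via zk-0, then read it back via zk-1
kubectl exec zk-0 -- zkCli.sh create /hello world
kubectl exec zk-1 -- zkCli.sh get /hello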