1. References
2. Getting Started
Our cluster plan is as follows:
kafka01[172.173.16.19] | kafka02[172.173.16.20] | kafka03[172.173.16.21]
---|---|---
Image preparation
- Download from Docker Hub
docker pull caiserkaiser/centos-ssh
- Build the caiser/centos-ssh:7.8 image
Create a custom network
docker network create -d bridge --subnet "172.173.16.0/24" --gateway "172.173.16.1" datastore_net
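To confirm the subnet and gateway were applied, you can inspect the network afterwards (an optional sanity check, not part of the original steps):
docker network inspect datastore_net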
Start a container
docker run -it -d --network datastore_net --ip 172.173.16.19 --name kafka01 caiser/centos-ssh:7.8
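If you want to verify that the container really got the fixed IP, docker inspect can print it (an optional check):
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' kafka01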
Download Kafka
- Download the kafka_2.11-0.11.0.2.tgz release
- Copy it into the container
docker cp ~/Downloads/kafka_2.11-0.11.0.2.tgz df4410d638e8:/opt/envs
- Extract it inside the container, under /opt/envs
tar -zxvf kafka_2.11-0.11.0.2.tgz
Configure server.properties
- a. Back it up
cp /opt/envs/kafka_2.11-0.11.0.2/config/server.properties /opt/envs/kafka_2.11-0.11.0.2/config/server.properties.bak
- b. Edit server.properties
vi /opt/envs/kafka_2.11-0.11.0.2/config/server.properties
- c. Make the following changes
①. Uncomment the line below so that topics can be deleted
delete.topic.enable=true
②. Configure log.dirs (the specified directory must be created first)
log.dirs=/opt/logs
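Since Kafka expects this directory to exist, create it inside the container before starting the broker:
mkdir -p /opt/logs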
③. Configure the ZooKeeper connection
zookeeper.connect=zookeeper01:2181,zookeeper02:2181,zookeeper03:2181
④. Configure listeners
listeners=PLAINTEXT://kafka01:9092
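After editing, you can quickly confirm the changed keys with grep (an optional check; broker.id is adjusted per node in a later step):
grep -E '^(delete.topic.enable|log.dirs|zookeeper.connect|listeners)' /opt/envs/kafka_2.11-0.11.0.2/config/server.properties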
Kafka node configuration
- Edit /etc/hosts and add the following hostnames
172.173.16.13 zookeeper01
172.173.16.14 zookeeper02
172.173.16.15 zookeeper03
172.173.16.19 kafka01
172.173.16.20 kafka02
172.173.16.21 kafka03
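You can check that these names resolve from /etc/hosts with getent (a quick check that works even before the other containers exist):
getent hosts kafka02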
Commit the container as an image and remove it
docker commit df4410d638e8 caiser/kafka:0.11.0.2
docker rm df4410d638e8
Start the containers
docker run -it -d --network datastore_net --ip 172.173.16.19 --name kafka01 caiser/kafka:0.11.0.2 /bin/bash
docker run -it -d --network datastore_net --ip 172.173.16.20 --name kafka02 caiser/kafka:0.11.0.2 /bin/bash
docker run -it -d --network datastore_net --ip 172.173.16.21 --name kafka03 caiser/kafka:0.11.0.2 /bin/bash
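A quick docker ps confirms all three containers are up (optional):
docker ps --filter name=kafka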
【Important】Configure a different broker.id on each node
In config/server.properties, the broker.id must be unique per node, so we assign them as follows:
Node | kafka01[172.173.16.19] | kafka02[172.173.16.20] | kafka03[172.173.16.21] |
---|---|---|---|
broker.id | 0 | 1 | 2 |
vi /opt/envs/kafka_2.11-0.11.0.2/config/server.properties
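If you prefer not to open vi on every node, a sed one-liner can set the value; this is just a sketch, shown here for kafka02 (adjust the id per the table above):
sed -i 's/^broker.id=.*/broker.id=1/' /opt/envs/kafka_2.11-0.11.0.2/config/server.properties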
【Important】Configure different listeners on each node
In config/server.properties, each node's listeners must use that node's own hostname:
Node | Setting |
---|---|
kafka01[172.173.16.19] | listeners=PLAINTEXT://kafka01:9092 |
kafka02[172.173.16.20] | listeners=PLAINTEXT://kafka02:9092 |
kafka03[172.173.16.21] | listeners=PLAINTEXT://kafka03:9092 |
vi /opt/envs/kafka_2.11-0.11.0.2/config/server.properties
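The same sed approach works for listeners; for example on kafka02 (a sketch, assuming the listeners line is already uncommented from the earlier step; adjust the hostname per node):
sed -i 's|^listeners=.*|listeners=PLAINTEXT://kafka02:9092|' /opt/envs/kafka_2.11-0.11.0.2/config/server.properties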
Configure passwordless SSH login
- Enter the container
docker exec -it kafka01 /bin/bash
- Generate a key pair under ~/.ssh
ssh-keygen -t rsa
- Copy the key to kafka01, kafka02 and kafka03
a. If the SSH service is not yet running in the three containers (check with ps -ef | grep ssh), run the following in each of them first:
/usr/sbin/sshd -D &
b. Copy the key
ssh-copy-id kafka01
ssh-copy-id kafka02
ssh-copy-id kafka03
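To verify that passwordless login works, a single remote command is enough (an optional check):
ssh kafka02 hostname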
Repeat steps 1-3 above on kafka02 and kafka03.
Start Kafka
Run the following commands on kafka01, kafka02 and kafka03 in turn:
cd /opt/envs/kafka_2.11-0.11.0.2/
bin/kafka-server-start.sh config/server.properties &
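Once all three brokers are up, a quick smoke test is to create and list a topic (this assumes the ZooKeeper ensemble from the plan is already running; the topic name test is just an example):
bin/kafka-topics.sh --create --zookeeper zookeeper01:2181 --replication-factor 3 --partitions 1 --topic test
bin/kafka-topics.sh --list --zookeeper zookeeper01:2181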
Configure a cluster start/stop script
With this you no longer need to start and stop Kafka on every machine one by one...
#! /bin/bash
case $1 in
"start") {
for i in kafka01 kafka02 kafka03; do
echo "----- start $i -----"
ssh $i "export JMX_PORT=9988 && /opt/envs/kafka_2.11-0.11.0.2/bin/kafka-server-start.sh -daemon /opt/envs/kafka_2.11-0.11.0.2/config/server.properties"
done
};;
"stop") {
for i in kafka01 kafka02 kafka03; do
echo "----- stop $i -----"
ssh $i "/opt/envs/kafka_2.11-0.11.0.2/bin/kafka-server-stop.sh stop"
done
};;
esac
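Save the script, for example as kafka-cluster.sh (the file name is just an example), make it executable, and call it with start or stop:
chmod +x kafka-cluster.sh
./kafka-cluster.sh start
./kafka-cluster.sh stop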