Kafka Ecosystem
Environment
kafka: 3.3.1
OS: CentOS Linux 7
JDK: 1.8.0_291
Note: Kafka 3.3.1 requires Java 8 or later; this document uses Java 11. Kafka 4.0 will drop support for Java 8.
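Before installing, it helps to confirm the local Java major version. A minimal sketch of parsing it out of a `java -version`-style string (the sample line is illustrative; in practice feed in the real output of `java -version 2>&1 | head -n1`):

```shell
# Extract the Java major version; handles both the old "1.8.x"
# and the new "11.x" numbering schemes.
ver_line='openjdk version "11.0.2" 2019-01-15'   # sample line (assumption)
major=$(echo "$ver_line" | sed -E 's/[^"]*"(1\.)?([0-9]+).*/\2/')
echo "$major"   # prints 11
```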
Install Kafka
- Download the Kafka distribution
Official download page: https://kafka.apache.org/downloads
- Extract the archive
sudo tar zxvf kafka_2.12-3.3.1.tgz -C /opt/module/
- Rename the directory
sudo mv /opt/module/kafka_2.12-3.3.1 /opt/module/kafka
- Create a logs directory under /opt/module/kafka (the log path is referenced in the configuration below)
mkdir logs
- Edit the configuration file
cd config/
vi server.properties
A reference configuration:
# Globally unique broker ID; must not be duplicated (in a cluster, give every node a different value)
broker.id=0
# Enable topic deletion
delete.topic.enable=true
# Number of threads handling network requests
num.network.threads=3
# Number of threads handling disk I/O
num.io.threads=8
# Send buffer size of the socket server
socket.send.buffer.bytes=102400
# Receive buffer size of the socket server
socket.receive.buffer.bytes=102400
# Maximum size of a request the socket server will accept
socket.request.max.bytes=104857600
# Path where Kafka stores its log (data) files
log.dirs=/opt/module/kafka/logs
# Default number of partitions per topic on this broker
num.partitions=1
# Number of threads per data directory used to recover and clean up data at startup
num.recovery.threads.per.data.dir=1
# Maximum time a log segment is retained before deletion
log.retention.hours=168
# ZooKeeper cluster connection string
zookeeper.connect=hadoop102:2181,hadoop103:2181,hadoop104:2181/kafka
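On a cluster, each node needs a distinct broker.id, which can be stamped in with sed instead of editing every file by hand. A minimal sketch against a throwaway copy of the file (the path and the id value are illustrative):

```shell
# Write a sample config, then set broker.id=2 in place with sed.
cfg=/tmp/server.properties            # illustrative path (assumption)
printf 'broker.id=0\ndelete.topic.enable=true\n' > "$cfg"
sed -i 's/^broker\.id=.*/broker.id=2/' "$cfg"
grep '^broker.id' "$cfg"              # prints broker.id=2
```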
Next, start the broker. ZooKeeper must be started before the Kafka server.
- Start ZooKeeper in the background (later versions may no longer require ZooKeeper)
bin/zookeeper-server-start.sh -daemon config/zookeeper.properties # run from the kafka directory
- Then start the Kafka server in the background
bin/kafka-server-start.sh -daemon config/server.properties # run from the kafka directory
- Create a topic
Events published by a producer are persisted in a topic, from which they are routed to the right consumers, so a topic must be created before reading or writing events.
Run the following commands:
# create a topic
bin/kafka-topics.sh --create --topic quickstart-events --bootstrap-server localhost:9092
# inspect topics
bin/kafka-topics.sh --describe --topic quickstart-events --bootstrap-server localhost:9092
bin/kafka-topics.sh --bootstrap-server localhost:9092 --list
# delete a topic
bin/kafka-topics.sh --delete --topic quickstart-events --bootstrap-server localhost:9092 # requires delete.topic.enable=true in server.properties; otherwise the topic is only marked for deletion (a broker restart then completes it)
# fields in the --describe output:
# Partition: the partition index (note: the partition count can only be increased, never decreased)
# Leader: the broker.id of the partition leader
# Replicas: the broker.ids of the brokers holding replicas of the partition
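When scripting against these fields, the describe output can be parsed with awk. A sketch over one hard-coded sample line (in practice, pipe in the real `kafka-topics.sh --describe` output):

```shell
# Print "partition -> leader" for each partition line of a describe dump.
describe_out='Topic: quickstart-events Partition: 0 Leader: 1 Replicas: 1 Isr: 1'  # sample line (assumption)
echo "$describe_out" | awk '/Partition:/ {
  for (i = 1; i <= NF; i++) {
    if ($i == "Partition:") p = $(i+1)   # value follows the label field
    if ($i == "Leader:")    l = $(i+1)
  }
  print p " -> " l
}'
# prints: 0 -> 1
```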
- Read and write events
Next, we read and write events with the console-producer and console-consumer tools that ship with Kafka.
With the console producer, every line typed (followed by Enter) is written to the topic as one event.
bin/kafka-console-producer.sh --topic quickstart-events --bootstrap-server localhost:9092
Press Ctrl+C to exit.
Then read the events back with the console consumer. The events just written should appear.
# --from-beginning reads from the start of the topic, so events written before the consumer started are also read
bin/kafka-console-consumer.sh --topic quickstart-events --from-beginning --bootstrap-server localhost:9092
If the producer and consumer are kept open in two sessions, events written by the producer are read by the consumer in real time.
- Stop Kafka
bin/kafka-server-stop.sh
Kafka Commands
kafka-topics.sh --zookeeper hadoop102:2181 --list # legacy form; the --zookeeper flag was removed in Kafka 3.0, use --bootstrap-server instead
Installing with Docker
See the Kafka image on Docker Hub for reference.
- Create a network
app-tier: the network name
--driver: the network type, bridge
docker network create app-tier --driver bridge
- Install ZooKeeper
Kafka depends on ZooKeeper, so install ZooKeeper first.
-p: port mapping (default 2181)
-d: run in the background
docker run -d --name zookeeper-server \
--network app-tier \
-e ALLOW_ANONYMOUS_LOGIN=yes \
bitnami/zookeeper:latest
- Install Kafka
--name: container name
-p: port mapping (default 9092)
-d: run in the background
ALLOW_PLAINTEXT_LISTENER: allow plaintext (unauthenticated) connections
KAFKA_CFG_ZOOKEEPER_CONNECT: the ZooKeeper address to connect to
KAFKA_CFG_ADVERTISED_LISTENERS: the address advertised to clients (important: on a server deployment, set the server's IP or domain name here, otherwise clients get an address error when consuming), e.g.
-e KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://192.168.0.101:9092 \
docker run -d --name kafka-server \
--network app-tier \
-p 9092:9092 \
-e ALLOW_PLAINTEXT_LISTENER=yes \
-e KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper-server:2181 \
-e KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://192.168.172.131:9092 \
bitnami/kafka:latest
- Enter the kafka container to run Kafka commands; the installation directory is
/opt/bitnami/kafka
- kafka-map graphical management tool
Access URL: http://<server IP>:9020/
DEFAULT_USERNAME: default account, admin
DEFAULT_PASSWORD: default password, admin
Git repo: https://github.com/dushixiang/kafka-map/blob/master/README-zh_CN.md
docker run -d --name kafka-map \
--network app-tier \
-p 9020:8080 \
-v /home/yzj/kafka-map/data:/usr/local/kafka-map/data \
-e DEFAULT_USERNAME=admin \
-e DEFAULT_PASSWORD=admin \
--restart always dushixiang/kafka-map:latest
Kafka Cluster
- kafka.yml
version: "3.6"
services:
  zookeeper:
    container_name: zookeeper
    image: 'bitnami/zookeeper:3.8.0'
    user: root
    ports:
      - '2181:2181'
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
    networks:
      netkafka:
        ipv4_address: 172.23.0.10
  kafka1:
    container_name: kafka1
    image: 'bitnami/kafka:3.3.1'
    user: root
    depends_on:
      - zookeeper
    ports:
      - '19092:9092'
    environment:
      # enable KRaft mode
      - KAFKA_ENABLE_KRAFT=yes
      # ZooKeeper address (not consulted when running in KRaft mode)
      - KAFKA_ZOOKEEPER_CONNECT=192.168.1.21:2181
      - KAFKA_CFG_PROCESS_ROLES=broker,controller
      - KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
      # the broker's socket listeners (Docker-internal address and port)
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,CONTROLLER://:9093
      # security protocol for each listener
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT
      # address advertised to external clients (host IP and port)
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://192.168.1.21:19092
      - KAFKA_BROKER_ID=1
      - KAFKA_KRAFT_CLUSTER_ID=iZWRiSqjZAlYwlKEqHFQWI
      - KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=1@172.23.0.11:9093,2@172.23.0.12:9093,3@172.23.0.13:9093
      - ALLOW_PLAINTEXT_LISTENER=yes
      # maximum and initial broker heap size
      - KAFKA_HEAP_OPTS=-Xmx512M -Xms256M
    volumes:
      - /home/vagrant/kafka/volume/broker01:/bitnami/kafka:rw
    networks:
      netkafka:
        ipv4_address: 172.23.0.11
  kafka2:
    container_name: kafka2
    image: 'bitnami/kafka:3.3.1'
    user: root
    ports:
      - '29092:9092'
      - '29093:9093'
    environment:
      - KAFKA_ENABLE_KRAFT=yes
      - KAFKA_ZOOKEEPER_CONNECT=192.168.1.21:2181
      - KAFKA_CFG_PROCESS_ROLES=broker,controller
      - KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,CONTROLLER://:9093
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://192.168.1.21:29092 # change to the host IP
      - KAFKA_BROKER_ID=2
      - KAFKA_KRAFT_CLUSTER_ID=iZWRiSqjZAlYwlKEqHFQWI # cluster ID; must be identical on all three nodes
      - KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=1@172.23.0.11:9093,2@172.23.0.12:9093,3@172.23.0.13:9093
      - ALLOW_PLAINTEXT_LISTENER=yes
      - KAFKA_HEAP_OPTS=-Xmx512M -Xms256M
    volumes:
      - /home/vagrant/kafka/volume/broker02:/bitnami/kafka:rw
    networks:
      netkafka:
        ipv4_address: 172.23.0.12
  kafka3:
    container_name: kafka3
    image: 'bitnami/kafka:3.3.1'
    user: root
    ports:
      - '39092:9092'
      - '39093:9093'
    environment:
      - KAFKA_ENABLE_KRAFT=yes
      - KAFKA_ZOOKEEPER_CONNECT=192.168.1.21:2181
      - KAFKA_CFG_PROCESS_ROLES=broker,controller
      - KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,CONTROLLER://:9093
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://192.168.1.21:39092 # change to the host IP
      - KAFKA_BROKER_ID=3
      - KAFKA_KRAFT_CLUSTER_ID=iZWRiSqjZAlYwlKEqHFQWI
      - KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=1@172.23.0.11:9093,2@172.23.0.12:9093,3@172.23.0.13:9093
      - ALLOW_PLAINTEXT_LISTENER=yes
      - KAFKA_HEAP_OPTS=-Xmx512M -Xms256M
    volumes:
      - /home/vagrant/kafka/volume/broker03:/bitnami/kafka:rw
    networks:
      netkafka:
        ipv4_address: 172.23.0.13
networks:
  netkafka:
    driver: bridge
    name: netkafka
    ipam:
      driver: default
      config:
        - subnet: 172.23.0.0/25
          gateway: 172.23.0.1
1. Change the host IP (192.168.1.21) in every KAFKA_CFG_ADVERTISED_LISTENERS entry
2. Change the volume mount path /home/vagrant/kafka/volume
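The IP substitution can also be scripted instead of edited by hand. A sketch that rewrites the advertised-listener IP in a throwaway copy of the file (the IPs and path are illustrative):

```shell
# Replace the placeholder host IP with the real one in a copy of kafka.yml.
host_ip=10.0.0.5                                   # your host IP (assumption)
printf -- '- KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://192.168.1.21:19092\n' > /tmp/kafka.yml
sed -i "s/192\.168\.1\.21/${host_ip}/g" /tmp/kafka.yml
grep ADVERTISED /tmp/kafka.yml   # now advertises PLAINTEXT://10.0.0.5:19092
```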
- Start
Change into the directory containing kafka.yml and run:
docker-compose -f kafka.yml up -d
- Run commands
# 3.1 enter the container
docker exec -it kafka1 bash
# 3.2 go to the kafka bin directory
cd /opt/bitnami/kafka/bin
# 3.3 create a topic
# create a topic with 5 partitions and a replication factor of 3
./kafka-topics.sh --create --topic foo --partitions 5 --replication-factor 3 --bootstrap-server kafka1:9092,kafka2:9092,kafka3:9092
Created topic foo.
# view topic details (note: no spaces in the --bootstrap-server list)
kafka-topics.sh --describe --topic foo --bootstrap-server kafka1:9092,kafka2:9092,kafka3:9092
Topic: foo TopicId: 1tHGERe8QA6z24abtVKCLg PartitionCount: 5 ReplicationFactor: 3 Configs:
Topic: foo Partition: 0 Leader: 3 Replicas: 3,2,1 Isr: 3,2,1
Topic: foo Partition: 1 Leader: 2 Replicas: 2,3,1 Isr: 2,3,1
Topic: foo Partition: 2 Leader: 2 Replicas: 2,3,1 Isr: 2,3,1
Topic: foo Partition: 3 Leader: 3 Replicas: 3,2,1 Isr: 3,2,1
Topic: foo Partition: 4 Leader: 3 Replicas: 3,2,1 Isr: 3,2,1
# 3.4 verify producing and consuming
# open two terminals and enter the same container and directory
produce on kafka1:
kafka-console-producer.sh --topic foo --bootstrap-server kafka1:9092,kafka2:9092,kafka3:9092
consume on kafka2 and kafka3:
kafka-console-consumer.sh --topic foo --bootstrap-server kafka1:9092,kafka2:9092,kafka3:9092
# 3.5 delete the topic
kafka-topics.sh --delete --topic foo --bootstrap-server kafka1:9092,kafka2:9092,kafka3:9092