Pull the ZooKeeper image
docker pull wurstmeister/zookeeper
Pull the Kafka image
docker pull wurstmeister/kafka
Start a ZooKeeper container from the image
docker run -d --restart=always --log-driver json-file --log-opt max-size=100m --log-opt max-file=2 --name zookeeper -p 2181:2181 -v /etc/localtime:/etc/localtime wurstmeister/zookeeper
Start the first Kafka container (broker 0)
docker run -d --restart=always --log-driver json-file --log-opt max-size=100m --log-opt max-file=2 --name kafka -p 9092:9092 -e KAFKA_BROKER_ID=0 -e KAFKA_ZOOKEEPER_CONNECT=192.168.31.131:2181 -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://192.168.31.131:9092 -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092 -v /etc/localtime:/etc/localtime wurstmeister/kafka
Start the second Kafka container (broker 1; the container name must differ from the first, so this one is kafka2)
docker run -d --restart=always --log-driver json-file --log-opt max-size=100m --log-opt max-file=2 --name kafka2 -p 9093:9093 -e KAFKA_BROKER_ID=1 -e KAFKA_ZOOKEEPER_CONNECT=192.168.31.131:2181 -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://192.168.31.131:9093 -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9093 -v /etc/localtime:/etc/localtime wurstmeister/kafka
List the Docker containers
docker ps -a
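At this point it is worth confirming that both brokers came up and joined the cluster. Below is a minimal sketch using Kafka's AdminClient (it assumes a kafka-clients dependency on the classpath and the same 192.168.31.131 address used above; the ClusterCheck class name is made up for illustration). It prints the registered brokers and pre-creates the test topic with two partitions and replication factor 2 so both brokers carry data; pre-creating is optional, since brokers auto-create topics on first use with default settings.

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

import java.util.Collections;
import java.util.Properties;

public class ClusterCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.31.131:9092,192.168.31.131:9093");
        try (AdminClient admin = AdminClient.create(props)) {
            // should print two nodes, one per broker container
            admin.describeCluster().nodes().get().forEach(node ->
                    System.out.println("broker " + node.id() + " at " + node.host() + ":" + node.port()));
            // create "test" with 2 partitions, replication factor 2 (throws if it already exists)
            admin.createTopics(Collections.singletonList(new NewTopic("test", 2, (short) 2))).all().get();
        }
    }
}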
Copy a test data log file into the kafka container
docker cp /home/test/test.log kafka:/opt
Enter the kafka container, where Kafka's command-line tools are available
docker exec -it kafka bash
Inside the kafka container, run the console producer to write the test.log data into the test topic (run it from the Kafka installation directory, e.g. /opt/kafka_2.13-2.7.0)
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test < /opt/test.log
Working with Kafka from code
Producing messages
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class KafkaProducerService {
    public static Properties props = new Properties();
    public final static String topic = "test";

    static {
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.31.131:9092,192.168.31.131:9093");
        // wait for all in-sync replicas to acknowledge each write
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        props.put(ProducerConfig.RETRIES_CONFIG, "3");
        // batch up to 16 KB per partition, lingering at most 1 ms before sending
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, "16384");
        props.put(ProducerConfig.LINGER_MS_CONFIG, "1");
        // 32 MB of buffer memory for records awaiting transmission
        props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, "33554432");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
    }

    public static Runnable runnable = () -> {
        try {
            Producer<String, String> producer = new KafkaProducer<>(props);
            for (int i = 0; i < 1000; i++) {
                ProducerRecord<String, String> record =
                        new ProducerRecord<>(topic, "key-" + i, "kafka-value-" + i);
                producer.send(record, (recordMetadata, e) -> {
                    if (e == null) {
                        System.out.println("message sent successfully");
                        System.out.println("partition: " + recordMetadata.partition()
                                + ", offset: " + recordMetadata.offset()
                                + ", topic: " + recordMetadata.topic());
                    } else {
                        System.out.println("message send failed");
                    }
                });
            }
            // close() flushes any buffered records and releases resources
            producer.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    };

    public static void runService() {
        int producerNum = 10;
        ExecutorService executor = Executors.newFixedThreadPool(producerNum);
        for (int i = 0; i < producerNum; i++) {
            executor.submit(runnable);
        }
        // stop accepting new tasks so the JVM can exit once all producers finish
        executor.shutdown();
    }
}
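Nothing above starts the threads by itself; a small driver is needed. A minimal sketch, with a made-up class name:

public class ProducerMain {
    public static void main(String[] args) {
        // spins up 10 producer threads, each sending 1000 records to "test"
        KafkaProducerService.runService();
    }
}

Because runService() shuts the pool down after submitting the tasks, the process exits once every producer has closed.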
Consuming messages
import lombok.extern.slf4j.Slf4j;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.Arrays;
import java.util.List;
import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

@Slf4j
public class KafkaConsumerService {
    public static Properties props = new Properties();
    public final static String topic = "test";

    static {
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.31.131:9092,192.168.31.131:9093");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "test_consumer");
        // commit offsets automatically at the default interval
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // with no committed offset yet, start from the newest records
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
        // charset used by StringDeserializer for keys and values
        props.put("deserializer.encoding", "UTF-8");
    }

    public static Runnable runnable = () -> {
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Arrays.asList(topic));
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(10));
            records.partitions().forEach(topicPartition -> {
                List<ConsumerRecord<String, String>> partitionRecords = records.records(topicPartition);
                partitionRecords.forEach(record -> log.info("consumed record: {}", record.toString()));
            });
        }
    };

    public static void runService() {
        int consumerNum = 2;
        ExecutorService executor = Executors.newFixedThreadPool(consumerNum);
        for (int i = 0; i < consumerNum; i++) {
            executor.submit(runnable);
        }
    }
}
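A matching driver can run both services end to end (again, the class name is made up for illustration). Since auto.offset.reset=latest means a fresh group only sees records produced after it subscribes, start the consumers before the producers:

public class KafkaDemo {
    public static void main(String[] args) throws Exception {
        KafkaConsumerService.runService();  // 2 consumers in group "test_consumer"
        Thread.sleep(5000);                 // crude wait for the group rebalance to settle
        KafkaProducerService.runService();  // 10 producers writing to "test"
        // the consumer threads poll forever; stop the process manually
    }
}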
Kafka GUI client: Offset Explorer (formerly Kafka Tool)
Download: http://www.kafkatool.com/download.html
Kafka monitoring tool: Kafka Eagle
Download: http://download.kafka-eagle.org/
Extracted path: /usr/local/kafka-eagle-web-2.0.6
Edit the configuration
vim /usr/local/kafka-eagle-web-2.0.6/conf/system-config.properties
The settings to change are cluster1.zk.list and kafka.eagle.url:
kafka.eagle.zk.cluster.alias=cluster1
cluster1.zk.list=192.168.31.131:2181
....
kafka.eagle.webui.port=8048
kafka.eagle.url=jdbc:sqlite:/usr/local/kafka-eagle-web-2.0.6/db/ke.db
Add environment variables
vim ~/.bash_profile
export KE_HOME=/usr/local/kafka-eagle-web-2.0.6
export PATH=$KE_HOME/bin:$PATH
source ~/.bash_profile
Enter each Kafka container and edit kafka-server-start.sh to enable JMX
docker exec -it kafka bash
docker exec -it kafka2 bash
cd /opt/kafka_2.13-2.7.0/bin/
vim kafka-server-start.sh
Add the JMX_PORT export inside the existing KAFKA_HEAP_OPTS block:
if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
# 這里的端口不一定非要設(shè)置成9999,端口只要可用,均可。
export JMX_PORT="9999"
export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
fi
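The script change only takes effect after the containers are restarted (e.g. docker restart kafka kafka2). Once the brokers are back up, the JMX endpoint can be sanity-checked before pointing Kafka Eagle at it. Below is a hypothetical sketch using the standard JMX client API; it assumes port 9999 is reachable from the machine running it, which with Docker also requires publishing the port when the container is created (e.g. -p 9999:9999).

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class JmxCheck {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://192.168.31.131:9999/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            // a standard broker metric: incoming message rate
            ObjectName name = new ObjectName(
                    "kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec");
            System.out.println("MessagesInPerSec OneMinuteRate = "
                    + mbs.getAttribute(name, "OneMinuteRate"));
        }
    }
}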
Start Kafka Eagle
chmod a+x /usr/local/kafka-eagle-web-2.0.6/bin/*
./ke.sh start
- Web UI: http://host:8048
- Default username: admin
- Default password: 123456