First, you need Docker installed. That's your launchpad.
How do you install Docker? Just install the package for your platform (Docker Desktop on macOS/Windows, the engine package on Linux). It's super simple, so I won't walk through it here.
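For instance, since the examples below are on macOS, one option (assuming you already use Homebrew) is to install Docker Desktop with:
brew install --cask docker
Any other official install route works just as well.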
Before we start playing, let's check that Docker itself is healthy:
ljpMacBookPro:~ liangjiapeng$ docker version
Client: Docker Engine - Community
 Version:           18.09.0
 API version:       1.39
 Go version:        go1.10.4
 Git commit:        4d60db4
 Built:             Wed Nov 7 00:47:43 2018
 OS/Arch:           darwin/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.0
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.4
  Git commit:       4d60db4
  Built:            Wed Nov 7 00:55:00 2018
  OS/Arch:          linux/amd64
  Experimental:     false
ljpMacBookPro:~ liangjiapeng$
ljpMacBookPro:~ liangjiapeng$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
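If the Server section errors out instead (typically a complaint about not being able to connect to the Docker daemon), the daemon just isn't running yet; start Docker Desktop and retry. docker info is another quick sanity check:
docker info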
Docker looks good, so let's take off.
Just copy and run this command:
docker run --rm -it \
-p 2181:2181 -p 3030:3030 -p 8081:8081 \
-p 8082:8082 -p 8083:8083 -p 9092:9092 \
-e ADV_HOST=127.0.0.1 \
landoop/fast-data-dev
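For reference, those ports map to ZooKeeper (2181), the web UI (3030), Schema Registry (8081), the REST Proxy (8082), Kafka Connect (8083), and the Kafka broker (9092). If you would rather not tie up this terminal, an optional variation (not what the log below shows) is to run the same container detached under a name and tail its logs:
docker run -d --rm --name fast-data-dev \
  -p 2181:2181 -p 3030:3030 -p 8081:8081 \
  -p 8082:8082 -p 8083:8083 -p 9092:9092 \
  -e ADV_HOST=127.0.0.1 \
  landoop/fast-data-dev
docker logs -f fast-data-dev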
Below is what a healthy run looks like. The first time you run it, Docker has to pull the image, so be patient; once you see output like the following, you're set.
ljpMacBookPro:~ liangjiapeng$ docker run --rm -it \
> -p 2181:2181 -p 3030:3030 -p 8081:8081 \
> -p 8082:8082 -p 8083:8083 -p 9092:9092 \
> -e ADV_HOST=127.0.0.1 \
> landoop/fast-data-dev
Setting advertised host to 127.0.0.1.
Operating system RAM available is 3455 MiB, which is less than the lowest
recommended of 4096 MiB. Your system performance may be seriously impacted.
Starting services.
This is Landoop’s fast-data-dev. Kafka 1.1.1-L0 (Landoop's Kafka Distribution).
You may visit http://127.0.0.1:3030 in about a minute.
2018-12-09 15:16:01,639 INFO Included extra file "/etc/supervisord.d/01-zookeeper.conf" during parsing
2018-12-09 15:16:01,639 INFO Included extra file "/etc/supervisord.d/02-broker.conf" during parsing
2018-12-09 15:16:01,639 INFO Included extra file "/etc/supervisord.d/03-schema-registry.conf" during parsing
2018-12-09 15:16:01,640 INFO Included extra file "/etc/supervisord.d/04-rest-proxy.conf" during parsing
2018-12-09 15:16:01,640 INFO Included extra file "/etc/supervisord.d/05-connect-distributed.conf" during parsing
2018-12-09 15:16:01,640 INFO Included extra file "/etc/supervisord.d/06-caddy.conf" during parsing
2018-12-09 15:16:01,640 INFO Included extra file "/etc/supervisord.d/07-smoke-tests.conf" during parsing
2018-12-09 15:16:01,640 INFO Included extra file "/etc/supervisord.d/08-logs-to-kafka.conf" during parsing
2018-12-09 15:16:01,640 INFO Included extra file "/etc/supervisord.d/99-supervisord-sample-data.conf" during parsing
2018-12-09 15:16:01,640 INFO Set uid to user 0 succeeded
2018-12-09 15:16:01,658 INFO RPC interface 'supervisor' initialized
2018-12-09 15:16:01,658 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2018-12-09 15:16:01,659 INFO supervisord started with pid 6
2018-12-09 15:16:02,664 INFO spawned: 'sample-data' with pid 164
2018-12-09 15:16:02,668 INFO spawned: 'zookeeper' with pid 165
2018-12-09 15:16:02,673 INFO spawned: 'caddy' with pid 166
2018-12-09 15:16:02,677 INFO spawned: 'broker' with pid 168
2018-12-09 15:16:02,686 INFO spawned: 'smoke-tests' with pid 169
2018-12-09 15:16:02,689 INFO spawned: 'connect-distributed' with pid 170
2018-12-09 15:16:02,693 INFO spawned: 'logs-to-kafka' with pid 171
2018-12-09 15:16:02,715 INFO spawned: 'schema-registry' with pid 177
2018-12-09 15:16:02,750 INFO spawned: 'rest-proxy' with pid 184
2018-12-09 15:16:03,767 INFO success: sample-data entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-12-09 15:16:03,767 INFO success: zookeeper entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-12-09 15:16:03,767 INFO success: caddy entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-12-09 15:16:03,768 INFO success: broker entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-12-09 15:16:03,768 INFO success: smoke-tests entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-12-09 15:16:03,769 INFO success: connect-distributed entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-12-09 15:16:03,769 INFO success: logs-to-kafka entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-12-09 15:16:03,770 INFO success: schema-registry entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-12-09 15:16:03,770 INFO success: rest-proxy entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
With that, Kafka is starting up. The fast-data-dev image also ships a web UI we can check, so let's take a look.
The fast-data-dev web UI
Right after startup, the COYOTE HEALTH CHECKS panel in the web UI runs a round of checks; wait for them to finish before using the stack.
The UI after the checks have completed
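If you prefer the command line, you can also poke the individual services directly from the host. These are the standard REST endpoints of Schema Registry, the Kafka REST Proxy, and Kafka Connect:
curl http://127.0.0.1:8081/subjects      # Schema Registry: registered subjects
curl http://127.0.0.1:8082/topics        # REST Proxy: topics
curl http://127.0.0.1:8083/connectors    # Kafka Connect: connectors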
Once the checks are done we can actually play with it. How? Keep reading.
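The commands below are run from a shell inside the fast-data-dev container, which is presumably why the prompt reads root@fast-data-dev. One way to get such a shell is docker exec; grab the container ID or name from docker ps first:
docker ps
docker exec -it <container-id-or-name> bash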
First, create a topic:
root@fast-data-dev / $ kafka-topics --zookeeper 127.0.0.1:2181 --create --topic my_topic --partitions 3 --replication-factor 1
WARNING: Due to limitations in metric names, topics with a period ('.') or underscore ('_') could collide. To avoid issues it is best to use either, but not both.
Created topic "my_topic".
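Optionally, you can confirm how the partitions and replicas were laid out with the same tool's standard --describe flag, from the same shell:
kafka-topics --zookeeper 127.0.0.1:2181 --describe --topic my_topic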
Produce some data:
root@fast-data-dev / $ kafka-console-producer --broker-list 127.0.0.1:9092 --topic my_topic
>111
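Every line you type becomes one message. If you want keyed messages, the console producer's standard parse.key and key.separator properties work here too (the colon separator is just an example):
kafka-console-producer --broker-list 127.0.0.1:9092 --topic my_topic \
  --property parse.key=true --property key.separator=:
Then type lines like somekey:somevalue.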
Consume the data. Open a second terminal, get another shell into the container, and run:
root@fast-data-dev / $ kafka-console-consumer --bootstrap-server 127.0.0.1:9092 --topic my_topic --from-beginning
111
From here on, anything you write into the producer terminal is picked up automatically on the consumer side.
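When you are done, Ctrl+C the producer and consumer. Because the container was started with --rm, stopping it cleans everything up: Ctrl+C in its terminal for the foreground run, or, for the named detached variant sketched earlier, something like:
docker stop fast-data-dev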