Publishing Applications with Docker from Scratch (5): Building an ELK Environment (Kibana + Logstash + Kafka + Elasticsearch)

一稠歉、Elasticsearch docker安裝[1]

Prepare the Dockerfile and resource files:

elasticsearch.yml (Elasticsearch configuration file)
elasticsearch-6.2.4.tar.gz (installation package, downloaded from the official site)
elasticsearch-analysis-ik-6.2.4.zip (IK analyzer plugin, downloaded from the official site)
supervisord.conf (supervisord startup configuration; supervisord is optional — you can also start with ./elasticsearch directly)

1. The elasticsearch.yml configuration file

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what you are trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#

cluster.name: elasticsearch-application

#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1

# Allow only a single node per data path.
# Avoids: "failed to obtain node locks, tried ..., maybe these locations are not
# writable or multiple nodes were started without increasing [node.max_local_storage_nodes]"
node.max_local_storage_nodes: 1

# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
# Path where index data is stored
path.data: /usr/local/elasticsearch/data
# Path where log files are stored
path.logs: /usr/local/elasticsearch/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
#

http.port: 9200

network.host: 0.0.0.0

#
# Set a custom port for HTTP:
#
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.zen.ping.unicast.hosts: ["host1", "host2"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes: 
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true

2. supervisord.conf

[supervisord]
nodaemon=true
[program:elasticsearch]
command=/opt/soft/elasticsearch-6.2.4/bin/elasticsearch
user=elsearch
stdout_logfile_maxbytes = 20MB
stdout_logfile = /usr/local/elasticsearch/logs/elasticsearch-application.log

3. Dockerfile

#
# elasticsearch-6.2.4
#

FROM centos7-base
MAINTAINER xuchang
ADD elasticsearch-6.2.4.tar.gz /opt/soft
ADD elasticsearch-analysis-ik-6.2.4.zip /opt/soft/
COPY elasticsearch.yml /opt/soft/elasticsearch-6.2.4/config/


# Install libs
WORKDIR /opt/soft/

RUN groupadd elsearch \
    && useradd elsearch -g elsearch -p elasticsearch \
    && unzip /opt/soft/elasticsearch-analysis-ik-6.2.4.zip \
    && mkdir -p /usr/local/elasticsearch/data \
    && mkdir -p /usr/local/elasticsearch/logs \
    && chown -R elsearch:elsearch /usr/local/elasticsearch/ \
    && chown -R elsearch:elsearch  elasticsearch-6.2.4 \
    && chown -R elsearch:elsearch /usr/bin/supervisord \
    && touch /opt/soft/supervisord.log \
    && chown -R elsearch:elsearch /opt/soft/supervisord.log \
    && mkdir -p /opt/soft/elasticsearch-6.2.4/plugins/analysis-ik/ \
    && cp -r /opt/soft/elasticsearch/* /opt/soft/elasticsearch-6.2.4/plugins/analysis-ik/

USER elsearch

EXPOSE 9200
EXPOSE 9300

COPY supervisord.conf /etc/supervisord.conf
CMD ["/usr/bin/supervisord"]
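To sanity-check the image before wiring it into docker-compose, it can be built and probed on its own. This is a sketch: the es-app tag and es-smoke container name are illustrative, not part of the original setup, and it requires a running Docker daemon.

```shell
# Build the image from the directory containing the Dockerfile above
docker build -t es-app:6.2.4 .

# Run it detached, publishing the HTTP port
docker run -d --name es-smoke -p 9200:9200 es-app:6.2.4

# After startup (allow ~30s), the root endpoint should report the
# cluster_name "elasticsearch-application" set in elasticsearch.yml
curl -s http://localhost:9200
```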

Error 1:

You probably need to set vm.max_map_count in /etc/sysctl.conf on the host itself, so that Elasticsearch does not attempt to do that from inside the container. 
If you don't know the desired value, try doubling the current setting and keep going until Elasticsearch starts successfully. 
Documentation recommends at least 262144.

Solution:

# On the host, switch to the root user and raise the kernel parameter to 262144
# Takes effect immediately (until reboot)
sysctl -w vm.max_map_count=262144

# Persist across reboots
echo "vm.max_map_count=262144" >> /etc/sysctl.conf
sysctl -p
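A quick way to confirm the value before and after the change is to read it from /proc, which reports the same kernel parameter that sysctl manages:

```shell
# Compare the live kernel value against the documented minimum
required=262144
current=$(cat /proc/sys/vm/max_map_count)
if [ "$current" -ge "$required" ]; then
    echo "vm.max_map_count is sufficient ($current)"
else
    echo "vm.max_map_count is too low ($current), raise it to $required"
fi
```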

Error 2:

failed to obtain node locks, tried, maybe these locations are not writable or multiple nodes were started without increasing [node.max_local_storage_nodes]...

Solution: this guide uses a simple single-node configuration, so add the following to elasticsearch.yml:

node.max_local_storage_nodes: 1

二绎橘、 docker Kafka和zookeeper安裝[2]

Here we use the wurstmeister/kafka-docker images directly, rather than the ZooKeeper bundled with Kafka.
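For reference, the two images can also be tried standalone before composing them. The environment variables below mirror the ones used in the docker-compose.yml later in this post; the zk and kafka container names are illustrative, and the commands need a running Docker daemon.

```shell
docker run -d --name zk -p 2181:2181 wurstmeister/zookeeper

docker run -d --name kafka -p 9092:9092 \
  -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://192.168.243.195:9092 \
  -e KAFKA_LISTENERS=PLAINTEXT://:9092 \
  -e KAFKA_ZOOKEEPER_CONNECT=192.168.243.195:2181 \
  wurstmeister/kafka:2.11-0.11.0.3
```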

三胁孙、 docker Logstash安裝[3]

Logstash 6.3 official documentation: https://www.elastic.co/guide/en/logstash/6.3/index.html

Prepare the Dockerfile and resource files:

Dockerfile
logstash.conf (input sources, output sources, filter rules, etc.)
logstash.yml (Logstash environment configuration)
logstash-6.3.0.tar.gz (installation package)

1. The logstash.conf file

input{
     kafka{
        bootstrap_servers => ["192.168.243.195:9092"]
        group_id => "es-consumer-group"
        auto_offset_reset => "latest"   # start consuming from the latest offset
        consumer_threads => 5
        decorate_events => true   # also attaches the current topic, offset, group, partition, etc. to the message
        topics => ["Microservice"]   # the Kafka topic
     }
   # multiple kafka inputs can be configured:
   #   kafka{
   #      bootstrap_servers => ["192.168.243.195:9092"]
   #      client_id => "test2"
   #      group_id => "test2"
   #      auto_offset_reset => "latest"
   #      consumer_threads => 5
   #      decorate_events => true
   #      topics => ["logq"]
   #      type => "student"
   #    }
}

output {
   elasticsearch{
     hosts=> ["192.168.243.195:9200"]
     index=> "microservice-%{+YYYY.MM.dd}"  # microservice index; created automatically if it does not exist, one per day based on the date, e.g. 20190802 yields microservice-2019.08.02
   }
} 
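The %{+YYYY.MM.dd} part of the index name is a Logstash date pattern evaluated per event (against the event's @timestamp, not the wall clock). As a rough preview, the shape of the resulting daily index name can be reproduced with an ordinary date command:

```shell
# Preview what the daily index name looks like for the current date
date +microservice-%Y.%m.%d
```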

2. The logstash.yml file (X-Pack is not configured for monitoring here, so there is no login screen or Monitoring menu)

http.host: "0.0.0.0"
log.level: info # defaults to info; switch to debug when data consumption fails and the error is otherwise invisible in the logs
path.logs: /opt/logs/logstash
# xpack.monitoring.elasticsearch.url: http://192.168.243.195:9200 # monitor ES health
# xpack.monitoring.elasticsearch.username: elastic
# xpack.monitoring.elasticsearch.password: changeme
# xpack.monitoring.enabled: false
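Before baking logstash.conf into the image, the pipeline definition can be validated without actually starting the pipeline; Logstash ships a --config.test_and_exit flag for this. A sketch, assuming the paths used in the Dockerfile below:

```shell
# Parse and validate the pipeline config, then exit (no events are processed)
/opt/soft/logstash-6.3.0/bin/logstash \
  -f /opt/soft/logstash-6.3.0/bin/logstash.conf \
  --config.test_and_exit
```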

3. Dockerfile

#
# logstash-6.3.0 service
#
FROM centos7-base
MAINTAINER xuchang

ADD logstash-6.3.0.tar.gz /opt/soft/
WORKDIR /opt/soft/


COPY logstash.conf  /opt/soft/logstash-6.3.0/bin/logstash.conf
COPY logstash.yml  /opt/soft/logstash-6.3.0/config/logstash.yml
RUN mkdir -p /opt/soft/logs /opt/logs/logstash

ENTRYPOINT  /opt/soft/logstash-6.3.0/bin/logstash -f /opt/soft/logstash-6.3.0/bin/logstash.conf

四周霉、 docker Kibana安裝[4]

Prepare the Dockerfile and resource files:

Dockerfile
kibana.yml (Kibana environment configuration)
kibana-6.2.4-linux-x86_64.tar.gz (installation package)
supervisord.conf (supervisord startup configuration; supervisord is optional — you can also start with ./kibana directly)

1. kibana.yml

# Kibana is served by a back end server. This setting specifies the port to use.

server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.

server.host: "0.0.0.0"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy. This only affects
# the URLs generated by Kibana, your proxy is expected to remove the basePath value before forwarding requests
# to Kibana. This setting cannot end in a slash.
#server.basePath: ""

# The maximum payload size in bytes for incoming server requests.
#server.maxPayloadBytes: 1048576

# The Kibana server's name.  This is used for display purposes.
#server.name: "your-hostname"

# The URL of the Elasticsearch instance to use for all your queries.

elasticsearch.url: "http://192.168.243.195:9200"

2. supervisord.conf

[supervisord]
nodaemon=true
[program:kibana]

command=/opt/soft/kibana-6.2.4-linux-x86_64/bin/kibana

user=kibana

3. Dockerfile

#
# kibana-6.2.4
#


FROM centos7-base
MAINTAINER xuchang
ADD  kibana-6.2.4-linux-x86_64.tar.gz /opt/soft/
COPY kibana.yml /opt/soft/kibana-6.2.4-linux-x86_64/config/

# Install libs
WORKDIR /opt/soft/
RUN groupadd kibana \
    && useradd kibana -g kibana -p kibana \
    && chown -R kibana:kibana kibana-6.2.4-linux-x86_64 \
    && chown -R kibana:kibana /usr/bin/supervisord \
    && touch /opt/soft/supervisord.log \
    && chown -R kibana:kibana /opt/soft/supervisord.log

USER kibana

EXPOSE 5601

COPY supervisord.conf /etc/supervisord.conf
CMD ["/usr/bin/supervisord"]
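As with the Elasticsearch image, a quick standalone smoke test is possible (the kibana-app tag and kibana-smoke name are illustrative, and a Docker daemon is required). Kibana exposes a status API that reports whether it is healthy and can reach Elasticsearch:

```shell
docker build -t kibana-app:6.2.4 .
docker run -d --name kibana-smoke -p 5601:5601 kibana-app:6.2.4

# Once started, the status endpoint reports overall health
curl -s http://localhost:5601/api/status
```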

5. Installing the ELK Environment with docker-compose [5]

docker-compose.yml

version: '2'
services:
  #Elasticsearch 6.2.4
  elasticsearch:
    build: /opt/soft/bclz/Elasticsearch/
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    mem_limit: 1g
    ports:
     - "9200:9200"
     - "9300:9300"
  #Zookeeper Kafka
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka:2.11-0.11.0.3
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.243.195:9092
      KAFKA_LISTENERS: PLAINTEXT://:9092
      KAFKA_ZOOKEEPER_CONNECT: 192.168.243.195:2181
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock     
  #Kibana
  kibana:
    build: /opt/soft/bclz/Kibana/
    mem_limit: 300M
    ports:
     - "5601:5601" 
  #Logstash
  logstash:
    build: /opt/soft/bclz/Logstash/
    mem_limit: 300M

Build and start all services:

[xuchang@localhost bclz]$ ls
docker-compose.yml  docker-install.sh  Elasticsearch  Kibana  Logstash
[xuchang@localhost bclz]$ docker-compose up -d

六俱箱、 測試ELK環(huán)境[6]

[xuchang@localhost bclz]$ docker ps
CONTAINER ID        IMAGE                              COMMAND                  CREATED             STATUS              PORTS                                                    NAMES
caff410f5d5e        logstash:latest                    "/bin/sh -c '/opt/..."   About an hour ago   Up About an hour    22/tcp                                                   inspiring_jennings
a285ad408112        bclz_nexus                         "/usr/bin/supervisord"   About an hour ago   Up About an hour    22/tcp, 0.0.0.0:8081->8081/tcp                           bclz_nexus_1
19ef13106a52        bclz_kibana                        "/usr/bin/supervisord"   About an hour ago   Up About an hour    22/tcp, 0.0.0.0:5601->5601/tcp                           bclz_kibana_1
26f4df5d2c85        wurstmeister/zookeeper             "/bin/sh -c '/usr/..."   About an hour ago   Up About an hour    22/tcp, 2888/tcp, 3888/tcp, 0.0.0.0:2181->2181/tcp       bclz_zookeeper_1
61798e283d47        bclz_tengine                       "/usr/bin/supervisord"   About an hour ago   Up About an hour    22/tcp, 0.0.0.0:8888->80/tcp                             bclz_tengine_1
9960ace092e2        bclz_elasticsearch                 "/usr/bin/supervisord"   About an hour ago   Up About an hour    0.0.0.0:9200->9200/tcp, 22/tcp, 0.0.0.0:9300->9300/tcp   bclz_elasticsearch_1
0ba972877a7e        wurstmeister/kafka:2.11-0.11.0.3   "start-kafka.sh"         About an hour ago   Up About an hour    0.0.0.0:9092->9092/tcp                                   bclz_kafka_1

1. Push data via the Kafka container ID

docker exec -ti 0ba972877a7e kafka-topics.sh --create --zookeeper 192.168.243.195:2181 --replication-factor 1 --partitions 1 --topic Microservice
docker exec -ti 0ba972877a7e kafka-console-producer.sh --broker-list 192.168.243.195:9092 --topic Microservice
> sasasasasasa
> test1111111

After the two messages sasasasasasa and test1111111 are sent, Logstash consumes them and, with Elasticsearch as the output, automatically creates the index defined in logstash.conf. Because the index name uses a date pattern, sending on 20190802 creates an index named microservice-2019.08.02.

Data flow: Kafka -> Logstash -> Elasticsearch
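Each hop of that flow can be checked independently. The commands below reuse the container ID and addresses from the session above and need the stack to be running; the exact output depends on your data:

```shell
# Kafka hop: replay everything in the topic from the beginning
docker exec -ti 0ba972877a7e kafka-console-consumer.sh \
  --bootstrap-server 192.168.243.195:9092 --topic Microservice --from-beginning

# Elasticsearch hop: search the daily indices for one of the test messages
curl -s "http://192.168.243.195:9200/microservice-*/_search?q=test1111111&pretty"
```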

2. Creating the index pattern in Kibana

1. Log in to the Kibana web UI at http://{local-ip}:5601/

2. Click Management in the left-hand menu

    1. Click [Index Patterns] at the bottom left

    2. Click [Create Index Pattern] in the list on the left

3. The screen shows:
Step 1 of 2: Define index pattern
Index pattern
<enter the index pattern to match here>
You can use a * as a wildcard in your index pattern.

You can't use empty spaces or the characters \, /, ?, ", <, >, |.


4. Click Next step to save and finish
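The * wildcard in an index pattern matches index names much like a shell glob. As a rough illustration, the loop below checks which names the pattern microservice-* would select; the first two index names are the ones the Logstash config above would generate, and otherindex-2019.08.02 is a made-up non-matching name:

```shell
# Which index names would the pattern "microservice-*" select?
for idx in microservice-2019.08.02 microservice-2019.08.03 otherindex-2019.08.02; do
  case "$idx" in
    microservice-*) echo "$idx matched" ;;
    *)              echo "$idx skipped" ;;
  esac
done
```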

3. Viewing data in Kibana

Click Discover in Kibana's left-hand menu.

Resource files download: https://code.aliyun.com/792453741/docker-compose.git


  1. Installing Elasticsearch with Docker ↩

  2. Installing Kafka with Docker ↩

  3. Installing Logstash with Docker ↩

  4. Installing Kibana with Docker ↩

  5. Installing the ELK environment with docker-compose ↩

  6. Testing the ELK environment ↩
