I. Installing Elasticsearch with Docker [1]
Prepare the Dockerfile and resource files:
elasticsearch.yml (Elasticsearch configuration file)
elasticsearch-6.2.4.tar.gz (installation package, downloaded from the official site)
elasticsearch-analysis-ik-6.2.4.zip (IK analyzer plugin, downloaded from the official site)
supervisord.conf (supervisord startup configuration; you can also skip supervisord and start directly with ./elasticsearch)
1. Configuration file: elasticsearch.yml
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: elasticsearch-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
# Run only a single node on this data path.
# Avoids: "failed to obtain node locks, tried ..., maybe these locations are not
# writable or multiple nodes were started without increasing [node.max_local_storage_nodes]"
node.max_local_storage_nodes: 1
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
# Storage path for index data
path.data: /usr/local/elasticsearch/data
# Storage path for log files
path.logs: /usr/local/elasticsearch/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
#
http.port: 9200
network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.zen.ping.unicast.hosts: ["host1", "host2"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes:
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
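Once the container is running, a quick way to confirm the node picked up this configuration is to query it over HTTP (this assumes port 9200 is published to the host, as in the docker-compose file in section V):
curl http://localhost:9200                    # should report cluster_name elasticsearch-application and version 6.2.4
curl 'http://localhost:9200/_cat/health?v'    # one-line cluster health overview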
2. supervisord.conf
[supervisord]
; run in the foreground so the container's main process keeps running
nodaemon=true
[program:elasticsearch]
command=/opt/soft/elasticsearch-6.2.4/bin/elasticsearch
user=elsearch
stdout_logfile_maxbytes = 20MB
stdout_logfile = /usr/local/elasticsearch/logs/elasticsearch-application.log
3. Dockerfile
#
# elasticsearch-6.2.4
#
FROM centos7-base
MAINTAINER xuchang
ADD elasticsearch-6.2.4.tar.gz /opt/soft
ADD elasticsearch-analysis-ik-6.2.4.zip /opt/soft/
COPY elasticsearch.yml /opt/soft/elasticsearch-6.2.4/config/
# Install libs
WORKDIR /opt/soft/
RUN groupadd elsearch \
&& useradd elsearch -g elsearch -p elasticsearch \
&& unzip /opt/soft/elasticsearch-analysis-ik-6.2.4.zip \
&& mkdir -p /usr/local/elasticsearch/data \
&& mkdir -p /usr/local/elasticsearch/logs \
&& chown -R elsearch:elsearch /usr/local/elasticsearch/ \
&& chown -R elsearch:elsearch elasticsearch-6.2.4 \
&& chown -R elsearch:elsearch /usr/bin/supervisord \
&& touch /opt/soft/supervisord.log \
&& chown -R elsearch:elsearch /opt/soft/supervisord.log \
&& mkdir -p /opt/soft/elasticsearch-6.2.4/plugins/analysis-ik/ \
&& cp -r /opt/soft/elasticsearch/* /opt/soft/elasticsearch-6.2.4/plugins/analysis-ik/
USER elsearch
EXPOSE 9200
EXPOSE 9300
COPY supervisord.conf /etc/supervisord.conf
CMD ["/usr/bin/supervisord"]
You probably need to set vm.max_map_count in /etc/sysctl.conf on the host itself, so that Elasticsearch does not attempt to do so from inside the container. If you don't know the desired value, try doubling the current setting and keep going until Elasticsearch starts successfully; the documentation recommends at least 262144.
Fix:
# On the host, switch to root and set the kernel parameter to 262144
# Temporary (effective immediately, lost on reboot)
sysctl vm.max_map_count=262144
# Permanent (written to /etc/sysctl.conf, applied with sysctl -p)
echo "vm.max_map_count=262144" >> /etc/sysctl.conf
sysctl -p
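To verify which value is actually in effect:
sysctl vm.max_map_count
# expected: vm.max_map_count = 262144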
If startup then fails with:
failed to obtain node locks, tried ..., maybe these locations are not writable or multiple nodes were started without increasing [node.max_local_storage_nodes]
Fix: this guide uses a simple single-node setup, so add the following to elasticsearch.yml:
node.max_local_storage_nodes: 1
II. Installing Kafka and ZooKeeper with Docker [2]
Here we use the wurstmeister/kafka-docker images directly, rather than the ZooKeeper bundled with Kafka.
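The two images can be pulled ahead of time so that docker-compose (section V) does not have to download them on first startup:
docker pull wurstmeister/zookeeper
docker pull wurstmeister/kafka:2.11-0.11.0.3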
III. Installing Logstash with Docker [3]
Logstash 6.3 official documentation: https://www.elastic.co/guide/en/logstash/6.3/index.html
Prepare the Dockerfile and resource files:
Dockerfile
logstash.conf (input sources, output sources, filter rules, and other pipeline configuration)
logstash.yml (Logstash environment configuration)
logstash-6.3.0.tar.gz (installation package)
1. logstash.conf
input {
  kafka {
    bootstrap_servers => ["192.168.243.195:9092"]
    group_id => "es-consumer-group"
    auto_offset_reset => "latest"   # start consuming from the latest offset
    consumer_threads => 5
    decorate_events => true         # adds the current topic, offset, group, partition, etc. to the event
    topics => ["Microservice"]      # the Kafka topic
  }
  # multiple kafka inputs may be configured:
  # kafka {
  #   bootstrap_servers => ["192.168.243.195:9092"]
  #   client_id => "test2"
  #   group_id => "test2"
  #   auto_offset_reset => "latest"
  #   consumer_threads => 5
  #   decorate_events => true
  #   topics => ["logq"]
  #   type => "student"
  # }
}
output {
  elasticsearch {
    hosts => ["192.168.243.195:9200"]
    # microservice index; created automatically if it does not exist. The date
    # pattern means data sent on 2019-08-02 lands in index microservice-2019.08.02.
    index => "microservice-%{+YYYY.MM.dd}"
  }
}
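A filter block can also sit between input and output to implement the filter rules mentioned above. A minimal sketch, assuming the Kafka messages are JSON (the json filter plugin ships with Logstash):
filter {
  json {
    source => "message"   # parse the JSON payload in "message" into top-level event fields
  }
}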
2. logstash.yml (the X-Pack plugin is not configured for monitoring, so there will be no login prompt or Monitoring menu)
http.host: "0.0.0.0"
log.level: info # defaults to info; switch to debug when data consumption fails and the error cannot be found in the logs
path.logs: /opt/logs/logstash
# xpack.monitoring.elasticsearch.url: http://192.168.243.195:9200 # monitor ES health
# xpack.monitoring.elasticsearch.username: elastic
# xpack.monitoring.elasticsearch.password: changeme
# xpack.monitoring.enabled: false
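Before starting the pipeline for real, the configuration can be validated with the check-only flag available in Logstash 6.x:
/opt/soft/logstash-6.3.0/bin/logstash -f /opt/soft/logstash-6.3.0/bin/logstash.conf --config.test_and_exit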
3. Dockerfile
#
# logstash-6.3.0 service
#
FROM centos7-base
MAINTAINER xuchang
ADD logstash-6.3.0.tar.gz /opt/soft/
WORKDIR /opt/soft/
COPY logstash.conf /opt/soft/logstash-6.3.0/bin/logstash.conf
COPY logstash.yml /opt/soft/logstash-6.3.0/config/logstash.yml
# create the log directory referenced by path.logs in logstash.yml
RUN mkdir -p /opt/logs/logstash
ENTRYPOINT /opt/soft/logstash-6.3.0/bin/logstash -f /opt/soft/logstash-6.3.0/bin/logstash.conf
IV. Installing Kibana with Docker [4]
Prepare the Dockerfile and resource files:
Dockerfile
kibana.yml (Kibana environment configuration)
kibana-6.2.4-linux-x86_64.tar.gz (installation package)
supervisord.conf (supervisord startup configuration; you can also skip supervisord and start directly with ./kibana)
1. kibana.yml
# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601
# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "0.0.0.0"
# Enables you to specify a path to mount Kibana at if you are running behind a proxy. This only affects
# the URLs generated by Kibana, your proxy is expected to remove the basePath value before forwarding requests
# to Kibana. This setting cannot end in a slash.
#server.basePath: ""
# The maximum payload size in bytes for incoming server requests.
#server.maxPayloadBytes: 1048576
# The Kibana server's name. This is used for display purposes.
#server.name: "your-hostname"
# The URL of the Elasticsearch instance to use for all your queries.
elasticsearch.url: "http://192.168.243.195:9200"
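Once Kibana is up, its health can be checked without a browser; Kibana 6.x exposes a status API on the same port:
curl http://localhost:5601/api/status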
2. supervisord.conf
[supervisord]
nodaemon=true
[program:kibana]
command=/opt/soft/kibana-6.2.4-linux-x86_64/bin/kibana
user=kibana
3. Dockerfile
#
# kibana-6.2.4
#
FROM centos7-base
MAINTAINER xuchang
ADD kibana-6.2.4-linux-x86_64.tar.gz /opt/soft/
COPY kibana.yml /opt/soft/kibana-6.2.4-linux-x86_64/config/
# Install libs
WORKDIR /opt/soft/
RUN groupadd kibana \
&& useradd kibana -g kibana -p kibana \
&& chown -R kibana:kibana kibana-6.2.4-linux-x86_64 \
&& chown -R kibana:kibana /usr/bin/supervisord \
&& touch /opt/soft/supervisord.log \
&& chown -R kibana:kibana /opt/soft/supervisord.log
USER kibana
EXPOSE 5601
COPY supervisord.conf /etc/supervisord.conf
CMD ["/usr/bin/supervisord"]
V. Installing the ELK environment with docker-compose [5]
docker-compose.yml
version: '2'
services:
  # Elasticsearch 6.2.4
  elasticsearch:
    build: /opt/soft/bclz/Elasticsearch/
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    mem_limit: 1g
    ports:
      - "9200:9200"
      - "9300:9300"
  # ZooKeeper and Kafka
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka:2.11-0.11.0.3
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.243.195:9092
      KAFKA_LISTENERS: PLAINTEXT://:9092
      KAFKA_ZOOKEEPER_CONNECT: 192.168.243.195:2181
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  # Kibana
  kibana:
    build: /opt/soft/bclz/Kibana/
    mem_limit: 300M
    ports:
      - "5601:5601"
  # Logstash
  logstash:
    build: /opt/soft/bclz/Logstash/
    mem_limit: 300M
[xuchang@localhost bclz]$ ls
docker-compose.yml docker-install.sh Elasticsearch Kibana Logstash
[xuchang@localhost bclz]$ docker-compose up -d
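After the stack comes up, it is worth confirming that every service is running and tailing Elasticsearch while it boots, since it is usually the slowest to start:
docker-compose ps
docker-compose logs -f elasticsearch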
VI. Testing the ELK environment [6]
[xuchang@localhost bclz]$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
caff410f5d5e logstash:latest "/bin/sh -c '/opt/..." About an hour ago Up About an hour 22/tcp inspiring_jennings
a285ad408112 bclz_nexus "/usr/bin/supervisord" About an hour ago Up About an hour 22/tcp, 0.0.0.0:8081->8081/tcp bclz_nexus_1
19ef13106a52 bclz_kibana "/usr/bin/supervisord" About an hour ago Up About an hour 22/tcp, 0.0.0.0:5601->5601/tcp bclz_kibana_1
26f4df5d2c85 wurstmeister/zookeeper "/bin/sh -c '/usr/..." About an hour ago Up About an hour 22/tcp, 2888/tcp, 3888/tcp, 0.0.0.0:2181->2181/tcp bclz_zookeeper_1
61798e283d47 bclz_tengine "/usr/bin/supervisord" About an hour ago Up About an hour 22/tcp, 0.0.0.0:8888->80/tcp bclz_tengine_1
9960ace092e2 bclz_elasticsearch "/usr/bin/supervisord" About an hour ago Up About an hour 0.0.0.0:9200->9200/tcp, 22/tcp, 0.0.0.0:9300->9300/tcp bclz_elasticsearch_1
0ba972877a7e wurstmeister/kafka:2.11-0.11.0.3 "start-kafka.sh" About an hour ago Up About an hour 0.0.0.0:9092->9092/tcp bclz_kafka_1
1. Push data through the Kafka container (using its container ID)
docker exec -ti 0ba972877a7e bin/kafka-topics.sh --create --zookeeper 192.168.243.195:2181 --replication-factor 1 --partitions 1 --topic Microservice
docker exec -ti 0ba972877a7e kafka-console-producer.sh --broker-list 192.168.243.195:9092 --topic Microservice
> sasasasasasa
> test1111111
Two messages are sent: sasasasasasa and test1111111.
Once the data is sent, Logstash writes to Elasticsearch as its output, and the index configured in logstash.conf is created automatically if it does not exist; because the name uses a date pattern, sending on 2019-08-02 creates the index microservice-2019.08.02.
Data flow: kafka -> logstash -> elasticsearch
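To confirm the index really was created, Elasticsearch can be queried directly; a line for microservice-2019.08.02 should appear in the output:
curl 'http://192.168.243.195:9200/_cat/indices?v'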
2. Creating an index pattern in Kibana
1. Open the Kibana web UI at http://{local-ip}:5601/
2. Click Management in the left-hand menu
3. Click [Index Patterns]
4. Click [Create Index Pattern] in the list on the left
5. The screen shows:
Step 1 of 2: Define index pattern
Index pattern
<enter the pattern matching the index you need here>
You can use a * as a wildcard in your index pattern.
You can't use empty spaces or the characters \, /, ?, ", <, >, |.
6. Click Next step; once saved, the pattern is complete
3. Viewing the data in Kibana
Click Discover in Kibana's left-hand menu.
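If nothing shows up in Discover, the documents can also be fetched straight from Elasticsearch to rule Kibana out; by default the kafka input stores the raw payload in the message field:
curl 'http://192.168.243.195:9200/microservice-*/_search?q=message:test1111111&pretty'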
Resource files download: https://code.aliyun.com/792453741/docker-compose.git