This article is intended only as a personal note.
Elastic official site
Kafka official site
ZooKeeper official site
Official Elasticsearch yum install docs
Official Logstash yum install docs
Official Kibana yum install docs
Official Filebeat yum install docs
Official download for the Kafka 2.6.0 package
Official download for the ZooKeeper 3.6.2 package
Official ZooKeeper getting-started guide
First, the install steps for each tool. You can install everything without starting it, then start each service once it is configured.
------------------Installation----------------------
-
Install Java
- yum install java-1.8.0-openjdk.x86_64 java-1.8.0-openjdk-devel.x86_64 -y
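A quick check that the JDK actually landed on the PATH:
java -version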
-
Install ElasticSearch
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
-
vim /etc/yum.repos.d/elasticsearch.repo
[elasticsearch]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=0
autorefresh=1
type=rpm-md
yum install --enablerepo=elasticsearch elasticsearch -y
-
Install Logstash
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
-
vim /etc/yum.repos.d/logstash.repo
[logstash-7.x]
name=Elastic repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
yum install logstash -y
-
Install Kibana
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
-
vim /etc/yum.repos.d/kibana.repo
[kibana-7.x]
name=Kibana repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
yum install kibana -y
chkconfig --add kibana
service kibana start
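Kibana takes a little while to come up; once it has, its status API should answer with JSON:
curl http://localhost:5601/api/status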
-
Install Filebeat
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
-
vim /etc/yum.repos.d/filebeat.repo
[elastic-7.x]
name=Elastic repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
yum install filebeat -y
systemctl enable filebeat   # or chkconfig --add filebeat on SysV-init systems
-
Install Kafka
- Kafka's service depends on Java and ZooKeeper
- Install ZooKeeper
- wget https://mirror.bit.edu.cn/apache/zookeeper/zookeeper-3.6.2/apache-zookeeper-3.6.2-bin.tar.gz
- tar -zxvf apache-zookeeper-3.6.2-bin.tar.gz
- cd apache-zookeeper-3.6.2-bin
- Install Kafka
- wget https://mirror.bit.edu.cn/apache/kafka/2.6.0/kafka_2.13-2.6.0.tgz
- tar -zxvf kafka_2.13-2.6.0.tgz
- cd kafka_2.13-2.6.0
------------------Configuration----------------------
-
Configure ElasticSearch (the stock yum install also works without any changes; adjust to your own needs)
-
vim /etc/elasticsearch/elasticsearch.yml
Find each of the following settings in the config file, uncomment it, and set it as shown:

cluster.name: demon                       # cluster name
node.name: elk-1                          # node name
cluster.initial_master_nodes: ["elk-1"]   # fixes the bootstrap error on startup; must match node.name
bootstrap.memory_lock: true               # lock memory so it is never swapped out
network.host: 0.0.0.0                     # network address to listen on
http.port: 9200                           # port to listen on
# extra parameters so the head plugin can reach ES (5.x-era settings; add them by hand if absent)
http.cors.enabled: true
http.cors.allow-origin: "*"
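With bootstrap.memory_lock: true the service user also needs permission to lock memory, or ES fails its bootstrap checks. On a systemd host this is usually granted with a unit override; a minimal sketch, assuming the stock elasticsearch.service unit:
mkdir -p /etc/systemd/system/elasticsearch.service.d
cat > /etc/systemd/system/elasticsearch.service.d/override.conf <<'EOF'
[Service]
LimitMEMLOCK=infinity
EOF
systemctl daemon-reload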
-
Cluster configuration (skip this for a single node)
discovery.zen.ping.unicast.hosts: ["192.168.60.201", "192.168.60.202", "192.168.60.203"]   # IPs of the cluster nodes; names such as els or els.demo.com also work if every node can resolve them
discovery.zen.minimum_master_nodes: 2   # to avoid split-brain, set this to half the master-eligible nodes + 1
Note that the discovery.zen.* settings are the legacy names; on 7.x the equivalents are discovery.seed_hosts together with cluster.initial_master_nodes, and minimum_master_nodes is ignored.
Configure the service to start on boot
chkconfig --add elasticsearch
service elasticsearch start
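If everything is wired up, ES answers on port 9200 with a JSON banner carrying the cluster name and version:
curl http://localhost:9200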
-
-
Configure Kibana
-
vi /etc/kibana/kibana.yml
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://localhost:9200"]
kibana.index: ".kibana"
service kibana start
-
-
Configure Kafka (Kafka depends on ZooKeeper, so configure and start ZooKeeper first)
- Configure ZooKeeper
- cp conf/zoo_sample.cfg conf/zoo.cfg
- bin/zkServer.sh start
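To confirm ZooKeeper is up before moving on (a standalone node reports Mode: standalone):
bin/zkServer.sh status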
- Configure Kafka (the paths below are relative to the kafka_2.13-2.6.0 directory extracted above)
-
vim config/zookeeper.properties
server.1=192.168.1.190:2888:3888   # one server.N line per cluster node: ip:peerPort:electionPort
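Each server.N entry must be matched by a myid file containing N under that node's dataDir; a sketch for node 1, assuming the /tmp/zookeeper default from Kafka's bundled zookeeper.properties:
mkdir -p /tmp/zookeeper
echo 1 > /tmp/zookeeper/myid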
-
vim config/server.properties
broker.id=0                       # must be unique per broker in the cluster
listeners=PLAINTEXT://192.168.1.190:9092
zookeeper.connect=192.168.1.190:2181,192.168.1.191:2181,192.168.1.192:2181
Start the service
./bin/kafka-server-start.sh config/server.properties
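kafka-server-start.sh stays in the foreground by default; passing -daemon sends it to the background instead:
./bin/kafka-server-start.sh -daemon config/server.properties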
Create a topic (test)
./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic testtopic
List topics (test)
./bin/kafka-topics.sh --zookeeper localhost:2181 --list
Send messages (test)
./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic testtopic
Consume messages (test)
./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic testtopic --from-beginning
-
-
Configure Logstash (input is the source we pull from; output is the Elasticsearch address we write to)
Set up the input
-
vim /etc/logstash/conf.d/input.conf
input {
  kafka {
    type => "nginx_kafka"
    codec => "json"
    topics => ["nginx"]
    decorate_events => true
    bootstrap_servers => "localhost:9092"
  }
}
-
Set up the output (e.g. in /etc/logstash/conf.d/output.conf)
output {
  if [type] == "nginx_kafka" {
    elasticsearch {
      hosts => ["localhost"]
      index => "logstash-nginx-%{+YYYY-MM-dd}"
    }
  }
}
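Before starting, the pipeline syntax can be checked (paths assume the standard yum layout):
/usr/share/logstash/bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/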
service logstash start
-
Configure Filebeat
-
vim /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  paths:
    - /var/log/nginx/access.log
  json.keys_under_root: true
  json.add_error_key: true
  json.message_key: log
output.kafka:
  hosts: ["localhost:9092"]
  topic: "nginx"
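Filebeat can validate both the config file and the connection to Kafka before the service is started:
filebeat test config
filebeat test output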
service filebeat start
-
With that, the whole ELK pipeline is deployed.
Open http://localhost:5601 -> Stack Management -> Index Patterns, click Create index pattern, set the key to filter on, and save the pattern.
Then open http://localhost:5601 -> Discover to view the matching logs, narrowing them down with filters.