Deploying an Elastic Stack cluster
Host 1: 192.168.31.200 (also runs Kibana)
Host 2: 192.168.31.201
Host 3: 192.168.31.203
First, download the corresponding packages from the official site; here we choose the RPM install:
https://www.elastic.co/downloads
The Elasticsearch program environment (as laid out by the RPM):
Configuration files:
/etc/elasticsearch/elasticsearch.yml
/etc/elasticsearch/jvm.options
/etc/elasticsearch/log4j2.properties
Unit file: elasticsearch.service
Program files:
/usr/share/elasticsearch/bin/elasticsearch
/usr/share/elasticsearch/bin/elasticsearch-keystore: manages the keystore for secure settings
/usr/share/elasticsearch/bin/elasticsearch-plugin: manages plugins
Edit the configuration file:
vim /etc/elasticsearch/elasticsearch.yml
cluster.name: myels
node.name: node1
path.data: /els/data
path.logs: /els/log
network.host: 192.168.31.200
discovery.zen.ping.unicast.hosts: ["node1", "node2","node3"]
discovery.zen.minimum_master_nodes: 2
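The same configuration goes on the other two nodes; only node.name and network.host change (this assumes node1/node2/node3 resolve to the three IPs, e.g. via /etc/hosts, since the unicast host list uses hostnames). A sketch for node2:

```yaml
# /etc/elasticsearch/elasticsearch.yml on 192.168.31.201 (node2)
cluster.name: myels                       # must be identical on every node
node.name: node2                          # unique per node
path.data: /els/data
path.logs: /els/log
network.host: 192.168.31.201              # this node's own address
discovery.zen.ping.unicast.hosts: ["node1", "node2", "node3"]
discovery.zen.minimum_master_nodes: 2     # quorum for 3 master-eligible nodes: 3/2 + 1 = 2
```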
vim /etc/elasticsearch/jvm.options
-Xms2g
-Xmx2g
# give the JVM an initial (and maximum) heap of 2 GB
Then create the corresponding directories, fix their ownership, and start the service:
mkdir -pv /els/{data,log}
chown -R elasticsearch.elasticsearch /els
systemctl start elasticsearch
Module reference for Elasticsearch:
https://www.elastic.co/guide/en/elasticsearch/reference/current/modules.html
Once the installation is complete, the cluster state can be inspected through curl http://192.168.31.200:9200/_cat, for example:
curl http://192.168.31.200:9200/_cat/nodes?h=name,ip,port,uptime,jdk
node2 192.168.31.201 9300 26.5m 1.8.0_131
node3 192.168.31.203 9300 7.1m 1.8.0_131
node1 192.168.31.200 9300 34.5m 1.8.0_131
List installed plugins:
/usr/share/elasticsearch/bin/elasticsearch-plugin list
Since version 5, plugins can run as standalone services; here we download head from GitHub and install it.
https://github.com/mobz/elasticsearch-head.git
create a fork of elasticsearch-head on github
clone your fork to your machine
cd elasticsearch-head
npm install # downloads node dev dependencies
grunt dev # builds the distribution files, then watches the src directory for changes (if you get a warning like "Warning: Task 'clean' failed. Use --force to continue.", well, use --force ;) )
Running npm run start directly occupies the foreground; use nohup npm run start & to run it in the background.
Then the Elasticsearch configuration file needs to be modified (these lines go in /etc/elasticsearch/elasticsearch.yml; restart Elasticsearch afterwards) so head can reach the cluster:
http.cors.enabled: true
http.cors.allow-origin: "*"
Upload a document yourself to test; note that the index is created automatically.
curl -XPUT 'node1:9200/students/major/1?pretty' -H 'Content-Type: application/json' -d '
{"name": "jerry", "age": 17, "course": "Pixie jianfa"}'
List the indices and run some queries:
curl 'node1:9200/_cat/indices'
curl -XGET 'node1:9200/students/_search?pretty'
curl -XGET 'node1:9200/_search/?q=course:shiba&pretty'
Install the Kibana interface:
rpm -ivh kibana-6.5.4-x86_64.rpm
vim /etc/kibana/kibana.yml
server.host: "192.168.31.200"
server.port: 5601
server.name: "node1"
elasticsearch.url: "http://node1:9200"
Then just start it:
systemctl start kibana
We then add another host running nginx, and install the Filebeat and Logstash software on it.
rpm -ivh filebeat-6.5.4-x86_64.rpm
vim /etc/filebeat/filebeat.yml
hosts: ["node1:9200", "node2:9200"]
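For reference, that hosts line sits under output.elasticsearch; a minimal filebeat.yml sketch (the input path is an assumption, matching the nginx access log that appears in the Logstash output later):

```yaml
# /etc/filebeat/filebeat.yml -- minimal sketch; the input path is an assumption
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/nginx/access.log   # the nginx access log seen later in this walkthrough

output.elasticsearch:
  hosts: ["node1:9200", "node2:9200"]
```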
You can see that Filebeat has already pushed the data to Elasticsearch,
so the data can now be processed in Kibana as well.
However, Filebeat's data-processing ability is not as good as Logstash's, so we add a Logstash node; here we simply reuse the nginx host.
Note: if Logstash is run as a user other than the logstash user, some permission conflicts may arise.
Official documentation for the various Logstash plugin configurations: https://www.elastic.co/guide/en/logstash/current/index.html
rpm -ivh logstash-6.5.4.rpm
vim /etc/logstash/conf.d/test.conf
input {
stdin{}
}
output {
stdout { codec => rubydebug }
}
Check the syntax; drop -t to run it for real:
/usr/share/logstash/bin/logstash --path.settings /etc/logstash/ -t -f /etc/logstash/conf.d/test.conf
Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
Now we can type data directly into the terminal:
hello logstash
{
"@version" => "1",
"@timestamp" => 2019-01-15T12:35:51.470Z,
"host" => "node4.lvqing.com",
"message" => "hello logstash"
}
Next we configure Logstash to read data from Beats; output still goes to the screen for now, and later we will configure output to Elasticsearch.
input {
beats{
host => '0.0.0.0'
port => 5044
}
}
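Assembled into one file, the test pipeline (Beats in, screen out) could look like this; the filename matches the one started below:

```
# /etc/logstash/conf.d/ceshi.conf -- listen for Beats events, print them to the screen
input {
  beats {
    host => '0.0.0.0'   # listen on all interfaces
    port => 5044        # default Beats port
  }
}
output {
  stdout { codec => rubydebug }   # pretty-print each event for debugging
}
```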
Then we change Filebeat's output target to Logstash (replacing the output.elasticsearch section, since Filebeat supports only one output at a time):
output.logstash:
hosts: ["192.168.31.204:5044"]
Then start Logstash:
/usr/share/logstash/bin/logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/ceshi.conf
Logstash now receives the logs sent over by Filebeat, already split into fields for us:
{
"prospector" => {
"type" => "log"
},
"input" => {
"type" => "log"
},
"host" => {
"os" => {
"version" => "7 (Core)",
"family" => "redhat",
"platform" => "centos",
"codename" => "Core"
},
"name" => "node4.lvqing.com",
"id" => "98b754e309454154b76d44862ecc843e",
"containerized" => true,
"architecture" => "x86_64"
},
"@timestamp" => 2019-01-15T13:51:36.416Z,
"beat" => {
"name" => "node4.lvqing.com",
"version" => "6.5.4",
"hostname" => "node4.lvqing.com"
},
"source" => "/var/log/nginx/access.log",
"tags" => [
[0] "beats_input_codec_plain_applied"
],
"@version" => "1",
"offset" => 2527,
"message" => "192.168.31.242 - - [15/Jan/2019:21:51:30 +0800] \"GET /dsa HTTP/1.1\" 404 3650 \"-\" \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.77 Safari/537.36\" \"-\""
}
What makes Logstash more powerful than Filebeat is its filters. Here we introduce the grok plugin, which comes with predefined regular expressions that we can simply call when needed.
input {
beats{
host => '0.0.0.0'
port => 5044
}
}
filter {
grok {
match => {
"message" => "%{IPORHOST:clientip} %{USER:ident} %{USER:auth} \[%{HTTPDATE:datetime}\] \"%{WORD:verb} %{URIPATHPARAM:request} HTTP/%{NUMBER:httpversion}\" %{NUMBER:http_status_code} %{NUMBER:bytes} \"(?<http_referer>\S+)\" \"(?<http_user_agent>\S+)\" \"(?<http_x_forwarded_for>\S+)\""
}
}
}
output {
elasticsearch {
hosts => ["192.168.31.200:9200", "192.168.31.201:9200", "192.168.31.203:9200"]
index => "logstash-ngxaccesslog-%{+YYYY.MM.dd}"
}
}
You can see that the fields we defined have been split out, but message is still retained; we can hide it with remove_field => "message".
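For instance, remove_field can be placed right inside the grok block, so that the raw message is dropped only after the fields have been extracted (a sketch reusing the pattern above):

```
filter {
  grok {
    match => {
      "message" => "%{IPORHOST:clientip} %{USER:ident} %{USER:auth} \[%{HTTPDATE:datetime}\] \"%{WORD:verb} %{URIPATHPARAM:request} HTTP/%{NUMBER:httpversion}\" %{NUMBER:http_status_code} %{NUMBER:bytes} \"(?<http_referer>\S+)\" \"(?<http_user_agent>\S+)\" \"(?<http_x_forwarded_for>\S+)\""
    }
    remove_field => "message"   # applied only when the grok match succeeds
  }
}
```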
If the scale grows further and Elasticsearch cannot keep up, we can also have Logstash write to Redis, and let Redis feed Elasticsearch in a unified way. Some may wonder: could Filebeat send the data to Redis directly instead? In fact Filebeat itself also has many modules for tagging logs, so we will send from Filebeat straight to Redis.
Reference documentation: https://www.elastic.co/guide/en/beats/filebeat/6.4/redis-output.html
Edit the Redis configuration file:
bind 0.0.0.0
requirepass lvqing
systemctl start redis
Edit the Filebeat configuration file and add a config section:
output.redis:
hosts: ["localhost"]
password: "lvqing"
key: "filebeat" # creates a list named "filebeat" in Redis; multiple Filebeat instances should all send to the same key
db: 0
timeout: 5
The data has been successfully pushed into Redis:
127.0.0.1:6379> LINDEX filebeat 0
"{\"@timestamp\":\"2019-01-16T16:04:31.749Z\",\"@metadata\":{\"beat\":\"filebeat\",\"type\":\"doc\",\"version\":\"6.5.4\"},\"prospector\":{\"type\":\"log\"},\"beat\":
{\"name\":\"node4.lvqing.com\",\"hostname\":\"node4.lvqing.com\",\"version\":\"6.5.4\"},\"host\":
{\"name\":\"node4.lvqing.com\",\"architecture\":\"x86_64\",\"os\":{\"platform\":\"centos\",\"version\":\"7 (Core)\",\"family\":\"redhat\",\"codename\"
:\"Core\"},\"id\":\"98b754e309454154b76d44862ecc843e\",\"containerized\":true},\"source\":\"/var/log/nginx/access.log\",\"offset\":6258,
\"message\":\"192.168.31.201 - - [17/Jan/2019:00:04:30 +0800] \\\"GET /11dasd HTTP/1.1\\\" 404 3650 \\\"-\\\" \\\"curl/7.29.0\\\" \\\"-\\\"\",\"input\":{\"type\":\"log\"}}
Then we have Logstash read the data from Redis and forward it to Elasticsearch.
Reference documentation: https://www.elastic.co/guide/en/logstash/6.4/plugins-inputs-redis.html
input {
redis {
key => "filebeat"
data_type => "list"
password => "lvqing"
}
}
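Adding the Elasticsearch output back gives the complete pipeline. The redis input connects to localhost by default, which works here because Logstash runs on the same machine as Redis. A sketch:

```
# Sketch: pull events off the Redis list and index them into the cluster
input {
  redis {
    key       => "filebeat"   # the list Filebeat pushes to
    data_type => "list"
    password  => "lvqing"
    # host defaults to localhost, where Redis runs in this setup
  }
}
output {
  elasticsearch {
    hosts => ["192.168.31.200:9200", "192.168.31.201:9200", "192.168.31.203:9200"]
    index => "logstash-ngxaccesslog-%{+YYYY.MM.dd}"
  }
}
```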
The experiment is complete; the data pulled over can be seen in Kibana.