This setup addresses a growing need in production: turning the access logs of our platform applications into data that can be queried and visualized.
1. Logstash client (shipper)

The client reads from /data/logs/access.log and writes each event into Kafka.

Logstash configuration:
input {
  file {
    type => "apachelogs"
    path => "/data/logs/access.log"
    start_position => "beginning"
  }
}

filter {
  if [type] == "apachelogs" {
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
    geoip {
      source => "clientip"
    }
    date {
      match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
    }
  }
}

output {
  kafka {
    bootstrap_servers => "192.168.158.129:9092,192.168.158.130:9092,192.168.158.132:9092"
    topic_id => "apachelogs"
    compression_type => "snappy"
  }
}
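To see what the filter stage produces, here is a minimal Python sketch (not Logstash itself) that mimics the parts of %{COMBINEDAPACHELOG} this pipeline relies on: it extracts the clientip field (the field the geoip filter reads) and parses the timestamp with the same dd/MMM/yyyy:HH:mm:ss Z layout the date filter uses. The regex is a simplified stand-in for the full grok pattern, and the sample log line is made up for illustration:

```python
import re
from datetime import datetime

# Simplified stand-in for grok's %{COMBINEDAPACHELOG}; it captures
# only the fields this pipeline actually uses downstream.
LOG_RE = re.compile(
    r'(?P<clientip>\S+) \S+ \S+ \[(?P<timestamp>[^\]]+)\] '
    r'"(?P<verb>\S+) (?P<request>\S+) \S+" '
    r'(?P<response>\d{3}) (?P<bytes>\S+)'
)

line = ('203.0.113.7 - - [12/Mar/2016:19:20:30 +0800] '
        '"GET /index.html HTTP/1.1" 200 2326 '
        '"http://example.com/" "Mozilla/5.0"')

event = LOG_RE.match(line).groupdict()
print(event["clientip"])  # -> 203.0.113.7, consumed by the geoip filter

# Joda-time "dd/MMM/yyyy:HH:mm:ss Z" maps to this strptime format:
ts = datetime.strptime(event["timestamp"], "%d/%b/%Y:%H:%M:%S %z")
print(ts.isoformat())     # -> 2016-03-12T19:20:30+08:00
```

Note that grok names the client address field clientip, which is why the geoip filter's source must reference exactly that name.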
2. Logstash server (indexer)

The server consumes the topic from Kafka and indexes events into Elasticsearch:
input {
  kafka {
    zk_connect => "192.168.158.130:2181,192.168.158.129:2181,192.168.158.132:2181"
    topic_id => "apachelogs"
    codec => json
    reset_beginning => false
    consumer_threads => 5
    decorate_events => true
  }
}
output {
  elasticsearch {
    hosts => ["192.168.158.128:9200","192.168.158.131:9200"]
    index => "apachelogs-%{+YYYY-MM-dd}"
  }
}
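The %{+YYYY-MM-dd} suffix in the index setting is expanded from each event's @timestamp, so Elasticsearch gets one index per day. A small Python sketch of that naming scheme (the Joda pattern YYYY-MM-dd corresponds to strftime's %Y-%m-%d; the function name is mine, not Logstash's):

```python
from datetime import datetime, timezone

def daily_index(ts: datetime, prefix: str = "apachelogs") -> str:
    """Mirror Logstash's "apachelogs-%{+YYYY-MM-dd}" index naming."""
    return f"{prefix}-{ts.strftime('%Y-%m-%d')}"

event_time = datetime(2016, 3, 12, 19, 20, 30, tzinfo=timezone.utc)
print(daily_index(event_time))  # -> apachelogs-2016-03-12
```

Daily indices make it cheap to expire old log data by dropping whole indices instead of deleting individual documents.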
3. Verifying the results

1) Open the Elasticsearch head plugin:
http://192.168.158.128:9200/_plugin/head/
As shown in the screenshot above, the indexed events expose many individual parsed fields.
2) Check the Kafka monitoring view:
You can see that the apachelogs topic has been created in Kafka.
3) Open Kibana:
http://192.168.158.128:5601/
You can look up the corresponding error details by id.
Selecting the pie chart visualization lets you aggregate the data into summary statistics.
One caveat worth highlighting: /data/logs/access.log must carry a current modification time. If the file looks old, the Logstash file input apparently treats it as a stale log and never ships it, which is why Kafka received no data at first. Because I had copied the log over from another server, its timestamp was old, so I had to refresh the file's modification time.
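The quickest fix is `touch /data/logs/access.log`. The same idea in Python, using a temporary file as a stand-in for the real log path so the sketch is self-contained:

```python
import os
import tempfile
import time

# Stand-in for /data/logs/access.log copied from another server.
fd, log_path = tempfile.mkstemp(suffix=".log")
os.close(fd)

# Simulate the stale copy: set atime/mtime 30 days into the past.
month_ago = time.time() - 30 * 86400
os.utime(log_path, (month_ago, month_ago))

# Refresh atime/mtime to "now", exactly what `touch` does, so the
# Logstash file input no longer considers the log old.
os.utime(log_path, None)
print(time.time() - os.path.getmtime(log_path))  # small, near zero
```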