Collecting Spring Boot Logs with ELK
Logstash best practices
Three ways to ship Spring Boot logs to ELK
ELK is short for Elasticsearch, Logstash, and Kibana. These three are the core components, though not the whole stack.
Elasticsearch
is a real-time full-text search and analytics engine that provides three major functions: collecting, analyzing, and storing data. It is a scalable distributed system exposing REST and Java APIs for efficient search, built on top of the Apache Lucene search engine library.
Logstash
is a tool for collecting, parsing, and filtering logs. It supports almost any type of log, including system logs, error logs, and custom application logs. It can receive logs from many sources, including syslog, message brokers (such as RabbitMQ), and JMX, and can output data in many ways, including email, WebSockets, and Elasticsearch.
Kibana
is a web-based graphical interface for searching, analyzing, and visualizing log data stored in Elasticsearch indices. It uses the Elasticsearch REST interface to retrieve data, letting users not only build custom dashboard views of their own data but also query and filter it in ad-hoc ways.
1.環(huán)境
系統(tǒng):
CentOS7.3
安裝jdk:
jdk:
安裝java環(huán)境(java環(huán)境必須是1.8版本以上的)
查看jdk軟件包列表:
yum search java | grep -i --color JDK
選擇自己需要的版本進行安裝
- 安裝命令:
yum install java-1.8.0-openjdk-devel.x86_64
2. Install Elasticsearch
Import the GPG key for the Elasticsearch yum repository (this must be done on every server):
# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
Configure the Elasticsearch yum repository.
Create the following file on each server:
# vim /etc/yum.repos.d/elasticsearch.repo
Add the following content to elasticsearch.repo:
[elasticsearch-5.x]
name=Elasticsearch repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
Install Elasticsearch:
yum install -y elasticsearch
Create the Elasticsearch data directory and change its owner and group:
mkdir -p /data/es-data    (a custom directory for the data)
chown -R elasticsearch:elasticsearch /data/es-data
Change the owner and group of the Elasticsearch log directory:
chown -R elasticsearch:elasticsearch /var/log/elasticsearch/
Edit the Elasticsearch configuration file:
vim /etc/elasticsearch/elasticsearch.yml
Find cluster.name in the config file, uncomment it, and set the cluster name:
cluster.name: demon
Find node.name, uncomment it, and set the node name:
node.name: elk-1
Set the data path:
path.data: /data/es-data
Set the log path:
path.logs: /var/log/elasticsearch/
配置內(nèi)存使用用交換分區(qū)
bootstrap.memory_lock: true
監(jiān)聽的網(wǎng)絡(luò)地址
network.host: 0.0.0.0
開啟監(jiān)聽的端口
http.port: 9200
Add these new parameters so the head plugin can access ES (for 5.x; add them manually if they are missing):
http.cors.enabled: true
http.cors.allow-origin: "*"
One more parameter is needed before starting the Elasticsearch service (it disables the seccomp system-call filter bootstrap check, which fails on older kernels):
bootstrap.system_call_filter: false
By default Elasticsearch uses a 2 GB heap; shrink it a little.
Edit the JVM options:
vim /etc/elasticsearch/jvm.options
-Xms512m
-Xmx512m
Caveats
A few limits must be raised, or startup will fail:
vim /etc/security/limits.conf
Append the following at the end (elk is the user that starts the service; you can also use *):
elk soft nofile 65536
elk hard nofile 65536
elk soft nproc 2048
elk hard nproc 2048
elk soft memlock unlimited
elk hard memlock unlimited
繼續(xù)再修改一個參數(shù)
vim /etc/security/limits.d/20-nproc.conf
將里面的
soft nproc 4096
改為:
soft nproc 2048
At this point startup may still fail with:
memory locking requested for elasticsearch process but memory is not locked
Fix: resource limits for systemd services now live in /etc/systemd/system.conf
vim /etc/systemd/system.conf
Append the following at the end:
DefaultLimitNOFILE=65536
DefaultLimitNPROC=32000
DefaultLimitMEMLOCK=infinity
Start the service:
/etc/init.d/elasticsearch start
Check the service status:
service elasticsearch status
Log location (the log file is named after the cluster):
/var/log/elasticsearch/demon.log
Enable the service at boot:
chkconfig elasticsearch on
netstat is part of the net-tools package; install it with:
yum install net-tools
Request port 9200 from a browser or curl to confirm Elasticsearch is up.
First check that port 9200 is listening:
netstat -antp | grep 9200
tcp        0      0 0.0.0.0:9200      0.0.0.0:*      LISTEN      2833/java
Then test the HTTP endpoint (the output below indicates a healthy node):
[root@zpf ~]# curl http://127.0.0.1:9200/
{
  "name" : "elk-1",
  "cluster_name" : "demon",
  "cluster_uuid" : "t8-0566XQuaCsp_V3Q315A",
  "version" : {
    "number" : "5.6.16",
    "build_hash" : "3a740d1",
    "build_date" : "2019-03-13T15:33:36.565Z",
    "build_snapshot" : false,
    "lucene_version" : "6.6.1"
  },
  "tagline" : "You Know, for Search"
}
A quick recap:
Elasticsearch
Elasticsearch is a real-time full-text search and analytics engine that provides three major functions: collecting, analyzing, and storing data. It is a scalable distributed system exposing REST and Java APIs for efficient search, built on top of the Apache Lucene search engine library.
How to interact with Elasticsearch:
[root@zpf ~]# curl -i -XGET 'localhost:9200/_count?pretty'
HTTP/1.1 200 OK
content-type: application/json; charset=UTF-8
content-length: 114
{
  "count" : 0,
  "_shards" : {
    "total" : 0,
    "successful" : 0,
    "skipped" : 0,
    "failed" : 0
  }
}
Install plugins
Install the elasticsearch-head plugin
You can either use the Docker image or clone the elasticsearch-head project from GitHub; pick one of the two methods below.
1. Use the pre-built elasticsearch-head Docker image
Pull the image:
docker pull mobz/elasticsearch-head:5
List images:
docker images
Start a container:
docker run -d -p 9100:9100 docker.io/mobz/elasticsearch-head:5
List running containers:
docker ps
Once the container is downloaded and running, open http://localhost:9100/ in a browser.
2. Install elasticsearch-head from Git
# yum install -y npm
If git is not installed on the server, the clone will fail; install it first:
yum install git -y
# git clone git://github.com/mobz/elasticsearch-head.git
# cd elasticsearch-head
npm install phantomjs-prebuilt@2.1.16 --ignore-scripts
# npm install
# npm run start            (to run in the background: nohup npm run start &)
Check that the port is listening:
netstat -antp | grep 9100
Test from a browser:
http://IP:9100/
If http://IP:9100/ is unreachable, check the firewall:
firewall-cmd --state
Stop firewalld:
systemctl stop firewalld.service
Disable firewalld at boot:
systemctl disable firewalld.service
To run elasticsearch-head in the background:
nohup grunt server &
3. Using Logstash
Logstash configuration files
Install Logstash:
Official installation guide:
https://www.elastic.co/guide/en/logstash/current/installing-logstash.html
Import the GPG key for the yum repository:
# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
Install Logstash with yum:
# yum install -y logstash
Check where Logstash was installed:
# rpm -ql logstash
Create a symlink so you don't have to type the full install path every time (the default install location is under /usr/share):
ln -s /usr/share/logstash/bin/logstash /bin/
Run Logstash:
# logstash -e 'input { stdin { } } output { stdout {} }'
Once it is running, type:
nihao
and the event is echoed back on stdout.
The JDK warning below just means the JVM expects more CPU cores:
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
Check Logstash status:
sudo service logstash status
Start Logstash:
sudo service logstash start
Stop Logstash:
sudo service logstash stop
Enable Logstash at boot:
chkconfig logstash on
One issue worth explaining:
Method 1:
sudo service logstash start
The service starts, and the status looks healthy:
[root@zpf bin]# sudo service logstash status
Redirecting to /bin/systemctl status logstash.service
● logstash.service - logstash
Loaded: loaded (/etc/systemd/system/logstash.service; disabled; vendor preset: disabled)
Active: active (running) since Tue 2019-07-09 17:07:29 CST; 3s ago
Main PID: 8622 (java)
Tasks: 15
Memory: 220.2M
CGroup: /system.slice/logstash.service
└─8622 /usr/bin/java -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+DisableExplicitG...
Method 2: start Logstash with
nohup ./logstash -f /etc/logstash/conf.d/elk.conf &
[root@zpf bin]# sudo service logstash status
Redirecting to /bin/systemctl status logstash.service
● logstash.service - logstash
Loaded: loaded (/etc/systemd/system/logstash.service; disabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Tue 2019-07-09 17:07:46 CST; 4min 31s ago
Process: 8744 ExecStart=/usr/share/logstash/bin/logstash --path.settings /etc/logstash (code=exited, status=143)
Main PID: 8744 (code=exited, status=143)
Here `service logstash status` reports a failure:
Active: failed
even though the process actually started successfully.
Both methods start Logstash as a Java process, but systemd only tracks processes it launched itself; a Logstash started manually with nohup is invisible to `systemctl status`, which simply shows the result of the unit's last run:
[root@zpf bin]# ps -ef|grep java
elastic+ 3177 1 1 15:48 ? 00:00:57 /bin/java -Xms512m -Xmx512m -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+AlwaysPreTouch -server -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -Djdk.io.permissionsUseCanonicalPath=true -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Dlog4j.skipJansi=true -XX:+HeapDumpOnOutOfMemoryError -Des.path.home=/usr/share/elasticsearch -cp /usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch -p /var/run/elasticsearch/elasticsearch.pid --quiet -Edefault.path.logs=/var/log/elasticsearch -Edefault.path.data=/var/lib/elasticsearch -Edefault.path.conf=/etc/elasticsearch
root 26464 4378 7 16:35 pts/0 00:01:03 /usr/bin/java -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+DisableExplicitGC -Djava.awt.headless=true -Dfile.encoding=UTF-8 -XX:+HeapDumpOnOutOfMemoryError -Xmx1g -Xms256m -Xss2048k -Djffi.boot.library.path=/usr/share/logstash/vendor/jruby/lib/jni -Xbootclasspath/a:/usr/share/logstash/vendor/jruby/lib/jruby.jar -classpath : -Djruby.home=/usr/share/logstash/vendor/jruby -Djruby.lib=/usr/share/logstash/vendor/jruby/lib -Djruby.script=jruby -Djruby.shell=/bin/sh org.jruby.Main /usr/share/logstash/lib/bootstrap/environment.rb logstash/runner.rb -f /etc/logstash/conf.d/elk.conf
Problem:
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
Fix:
cd /usr/share/logstash
ln -s /etc/logstash ./config
Logstash log location:
/var/log/logstash
Problem:
Logstash is not able to start since configuration auto reloading was enabled but the configuration contains plugins that don't support it. Quitting...
Fix: in /etc/logstash/logstash.yml set
config.test_and_exit: false
Problem:
Logstash could not be started because there is already another instance
using the configured data directory. If you wish to run multiple
instances, you must change the "path.data" setting.
Fix: stop the already-running instance first:
sudo service logstash stop
Using a Logstash config file
Official guide:
https://www.elastic.co/guide/en/logstash/current/configuration.html
Create a config file:
# vim /etc/logstash/conf.d/elk.conf
Add the following to the file:
input { stdin { } }
output {
    elasticsearch { hosts => ["192.168.1.202:9200"] }
    stdout { codec => rubydebug }
}
Run Logstash with the config file:
# logstash -f /etc/logstash/conf.d/elk.conf
Once it is running, type something on stdin and watch the structured output.
4. Installing and using Kibana
The Kibana version must match the Elasticsearch version.
This guide uses:
elasticsearch 5.6.16
Download the Kibana tar.gz package:
wget https://artifacts.elastic.co/downloads/kibana/kibana-5.6.16-linux-x86_64.tar.gz
Extract the tarball:
# tar -xzf kibana-5.6.16-linux-x86_64.tar.gz
Move the extracted directory into place:
# mv kibana-5.6.16-linux-x86_64 /usr/local
Create a symlink for Kibana:
# ln -s /usr/local/kibana-5.6.16-linux-x86_64/ /usr/local/kibana
Edit the Kibana config file:
# vim /usr/local/kibana/config/kibana.yml
Uncomment and set the following options:
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.url: "http://192.168.1.202:9200"
kibana.index: ".kibana"
Start Kibana:
nohup /usr/local/kibana/bin/kibana &
netstat -antp | grep 5601
tcp        0      0 0.0.0.0:5601      0.0.0.0:*      LISTEN      17007/node
Open a browser and create the corresponding index pattern:
http://IP:5601
Kibana 設(shè)置
注意時間控件(一開始沒注意辈毯,一直找不到日志)
總算完了。
好搜贤,現(xiàn)在索引也可以創(chuàng)建了谆沃,現(xiàn)在可以來輸出nginx、apache仪芒、message唁影、secrue的日志到前臺展示(Nginx有的話直接修改,沒有自行安裝)
編輯nginx配置文件掂名,修改以下內(nèi)容(在http模塊下添加)
log_format json '{"@timestamp":"$time_iso8601",'
                '"@version":"1",'
                '"client":"$remote_addr",'
                '"url":"$uri",'
                '"status":"$status",'
                '"domain":"$host",'
                '"host":"$server_addr",'
                '"size":"$body_bytes_sent",'
                '"responsetime":"$request_time",'
                '"referer":"$http_referer",'
                '"ua":"$http_user_agent"'
                '}';
access_log logs/elk.access.log json;
These lines must stay in this position in the nginx config (inside the http block).
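Since the log_format above emits each access-log entry as a JSON object, anything downstream can parse it without grok patterns. A quick sanity check in Python, using a made-up sample line with the same keys (all field values here are illustrative):

```python
import json

# A hypothetical line as the log_format above would emit it
# (field values are made up for illustration).
sample = ('{"@timestamp":"2019-07-09T17:07:29+08:00",'
          '"@version":"1",'
          '"client":"10.0.0.5",'
          '"url":"/index.html",'
          '"status":"200",'
          '"domain":"example.com",'
          '"host":"192.168.1.202",'
          '"size":"612",'
          '"responsetime":"0.004",'
          '"referer":"-",'
          '"ua":"curl/7.29.0"}')

event = json.loads(sample)            # valid JSON, so no grok parsing is needed
print(event["status"], event["url"])  # -> 200 /index.html
```

Note that nginx does not escape quotes inside variables like $http_user_agent here, so exotic user agents can still break the JSON; this check only validates the happy path.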
Edit the Logstash config to collect these logs:
vim /etc/logstash/conf.d/elk.conf
input {
    file {
        path => "/var/log/messages"
        type => "system"
        start_position => "beginning"
    }
    file {
        path => "/var/log/secure"
        type => "secure"
        start_position => "beginning"
    }
    file {
        path => "/usr/local/nginx/logs/elk.access.log"
        type => "nginx"
        start_position => "beginning"
    }
}
output {
    if [type] == "system" {
        elasticsearch {
            hosts => ["192.168.1.202:9200"]
            index => "nagios-system-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "secure" {
        elasticsearch {
            hosts => ["192.168.1.202:9200"]
            index => "nagios-secure-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "nginx" {
        elasticsearch {
            hosts => ["192.168.1.202:9200"]
            index => "nagios-nginx-%{+YYYY.MM.dd}"
        }
    }
}
Run it and check the result:
logstash -f /etc/logstash/conf.d/elk.conf
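The `%{+YYYY.MM.dd}` part of each index name is expanded by Logstash from the event's @timestamp, so every day gets its own index. A rough Python equivalent of that expansion (the index_name helper is illustrative, not a Logstash API):

```python
from datetime import date

def index_name(prefix: str, day: date) -> str:
    # Logstash expands %{+YYYY.MM.dd} from the event's @timestamp;
    # strftime gives the equivalent expansion for a given day.
    return "%s-%s" % (prefix, day.strftime("%Y.%m.%d"))

print(index_name("nagios-system", date(2019, 7, 9)))  # -> nagios-system-2019.07.09
```

Daily indices are what make the date-based cleanup shown later possible: deleting a day's logs is just deleting that day's index.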
ELK, the final chapter
Install Redis:
# yum install -y redis
Edit the Redis config:
# vim /etc/redis.conf
Change the following:
daemonize yes
bind 192.168.1.202
Start the Redis service:
# /etc/init.d/redis restart
Test whether Redis came up:
# redis-cli -h 192.168.1.202
Type info; if it responds without an error, Redis is working:
redis 192.168.1.202:6379> info
redis_version:2.4.10
....
Create the redis-out.conf config, which stores standard input into Redis:
# vim /etc/logstash/conf.d/redis-out.conf
Add the following:
input {
    stdin {}
}
output {
    redis {
        host => "192.168.1.202"
        port => "6379"
        password => 'test'
        db => '1'
        data_type => "list"
        key => 'elk-test'
    }
}
Run Logstash with the redis-out.conf config:
# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/redis-out.conf
Once it is running, type something into Logstash and check the effect.
Now edit the config again so that the data stored in Redis is read out and sent to Elasticsearch:
# vim /etc/logstash/conf.d/redis-out.conf
Replace the contents with:
input {
    redis {
        host => "192.168.1.202"
        port => "6379"
        password => 'test'
        db => '1'
        data_type => "list"
        key => 'elk-test'
        batch_count => 1    # events to pull from the queue per read; the default is 125, and if Redis holds fewer than 125 entries it errors out, so set this to 1 while testing
    }
}
output {
    elasticsearch {
        hosts => ['192.168.1.202:9200']
        index => 'redis-test-%{+YYYY.MM.dd}'
    }
}
Run Logstash with the redis-out.conf config:
# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/redis-out.conf
Next, modify the earlier config so that every monitored log source is written to Redis first, and then flows from Redis into Elasticsearch.
Edit elk.conf as follows:
input {
    file {
        path => "/var/log/httpd/access_log"
        type => "http"
        start_position => "beginning"
    }
    file {
        path => "/usr/local/nginx/logs/elk.access.log"
        type => "nginx"
        start_position => "beginning"
    }
    file {
        path => "/var/log/secure"
        type => "secure"
        start_position => "beginning"
    }
    file {
        path => "/var/log/messages"
        type => "system"
        start_position => "beginning"
    }
}
output {
    if [type] == "http" {
        redis {
            host => "192.168.1.202"
            password => 'test'
            port => "6379"
            db => "6"
            data_type => "list"
            key => 'nagios_http'
        }
    }
    if [type] == "nginx" {
        redis {
            host => "192.168.1.202"
            password => 'test'
            port => "6379"
            db => "6"
            data_type => "list"
            key => 'nagios_nginx'
        }
    }
    if [type] == "secure" {
        redis {
            host => "192.168.1.202"
            password => 'test'
            port => "6379"
            db => "6"
            data_type => "list"
            key => 'nagios_secure'
        }
    }
    if [type] == "system" {
        redis {
            host => "192.168.1.202"
            password => 'test'
            port => "6379"
            db => "6"
            data_type => "list"
            key => 'nagios_system'
        }
    }
}
Run Logstash with this shipper config:
# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/elk.conf
Check in Redis whether data is being written (if the watched log files produce no new lines, nothing will show up in Redis either).
Now read the data back out of Redis and write it into Elasticsearch.
Edit the config file:
# vim /etc/logstash/conf.d/redis-out.conf
Add the following:
input {
    redis {
        type => "system"
        host => "192.168.1.202"
        password => 'test'
        port => "6379"
        db => "6"
        data_type => "list"
        key => 'nagios_system'
        batch_count => 1
    }
    redis {
        type => "http"
        host => "192.168.1.202"
        password => 'test'
        port => "6379"
        db => "6"
        data_type => "list"
        key => 'nagios_http'
        batch_count => 1
    }
    redis {
        type => "nginx"
        host => "192.168.1.202"
        password => 'test'
        port => "6379"
        db => "6"
        data_type => "list"
        key => 'nagios_nginx'
        batch_count => 1
    }
    redis {
        type => "secure"
        host => "192.168.1.202"
        password => 'test'
        port => "6379"
        db => "6"
        data_type => "list"
        key => 'nagios_secure'
        batch_count => 1
    }
}
output {
    if [type] == "system" {
        elasticsearch {
            hosts => ["192.168.1.202:9200"]
            index => "nagios-system-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "http" {
        elasticsearch {
            hosts => ["192.168.1.202:9200"]
            index => "nagios-http-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "nginx" {
        elasticsearch {
            hosts => ["192.168.1.202:9200"]
            index => "nagios-nginx-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "secure" {
        elasticsearch {
            hosts => ["192.168.1.202:9200"]
            index => "nagios-secure-%{+YYYY.MM.dd}"
        }
    }
}
Note:
The input side collects from the clients.
The output side likewise writes to Elasticsearch on 192.168.1.202. To store data on the current host instead, change hosts in the output to localhost; if you also want the data visible in Kibana, deploy Kibana locally as well. The point of this arrangement is loose coupling.
Put simply: collect logs on the client, write them to Redis (on the server or locally), and point the final output at the ES server.
Run the command and check the result:
# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/redis-out.conf
The effect is the same as writing directly to the ES server, except that logs are first buffered in Redis and then read back out of it.
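The buffering described above can be sketched without a live Redis: a Python deque stands in for the Redis list, the shipper pushes events at one end, and the indexer pops them at the other. The function names and the in-memory queue are illustrative only; the real pipeline uses the Logstash redis plugins shown above.

```python
from collections import deque
from typing import Optional

redis_list = deque()  # stands in for a Redis list such as 'nagios_system'

def shipper(event: dict) -> None:
    """Shipper side: append the event to the tail of the queue."""
    redis_list.append(event)

def indexer() -> Optional[dict]:
    """Indexer side: pop the oldest event from the head, or None if empty."""
    return redis_list.popleft() if redis_list else None

shipper({"type": "system", "message": "Jul  9 17:07:29 host sshd[1]: ..."})
shipper({"type": "system", "message": "Jul  9 17:07:30 host kernel: ..."})

# Events come out in arrival order, so the ES side sees the same
# stream it would have seen without the buffer.
print(indexer()["message"])
```

Because the queue preserves order and absorbs bursts, the indexer can fall behind temporarily (for example during an ES restart) without losing events.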
Because ES keeps logs forever, old indices need periodic cleanup. The command below deletes indices older than a given date:
curl -X DELETE http://xx.xx.com:9200/logstash-*-`date +%Y-%m-%d -d "-$n days"`
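The backtick expression in the command above expands to the date n days ago, producing an index pattern for that day. A Python sketch of the same computation (the old_index_pattern helper is illustrative):

```python
from datetime import date, timedelta

def old_index_pattern(days_back: int, today: date) -> str:
    # Mirrors `date +%Y-%m-%d -d "-$n days"` in the curl command above.
    cutoff = today - timedelta(days=days_back)
    return "logstash-*-%s" % cutoff.strftime("%Y-%m-%d")

print(old_index_pattern(7, date(2019, 7, 9)))  # -> logstash-*-2019-07-02
```

Note this deletes only the single day exactly n days back; to purge everything older, run it in a loop or use a retention tool.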
Using Filebeat
Install Filebeat
Download Filebeat:
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.2.4-linux-x86_64.tar.gz
Extract it:
tar -zxvf filebeat-6.2.4-linux-x86_64.tar.gz
Enter the main directory and edit the config:
vi filebeat.yml
Find the settings similar to the following and modify them:
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /var/xxx/*.log
    - /var/xxx/*.out
  multiline.pattern: ^\[
  multiline.negate: true
  multiline.match: after
setup.kibana:
  host: "localhost:5601"
output.elasticsearch:
  hosts: ["localhost:9200"]
Be careful with the YAML format: each child level is indented two spaces. All of these settings already exist in the config file; the lines above are only the parts to change. enabled defaults to false and must be set to true before any logs are collected. Replace /var/xxx/*.log with your own log path, and note the space after the -.
To watch more paths, add another line (mind the four leading spaces). Uncommenting the multiline.* settings handles multi-line log entries. Uncomment host under setup.kibana and set it for your environment; do the same for hosts under output.elasticsearch.
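The three multiline settings mean: any line that does not match `^\[` (negate: true) is appended after the previous event (match: after), so stack traces stay attached to the log line that produced them. A rough Python model of that grouping (group_events is an illustrative helper, not Filebeat code):

```python
import re

PATTERN = re.compile(r"^\[")  # corresponds to multiline.pattern

def group_events(lines):
    """Mimic multiline.negate: true / multiline.match: after -
    any line NOT starting with '[' is appended to the previous event."""
    events = []
    for line in lines:
        if PATTERN.match(line) or not events:
            events.append(line)
        else:
            events[-1] += "\n" + line
    return events

log = [
    "[2019-07-09 17:07:29] ERROR something failed",
    "java.lang.NullPointerException",          # stack-trace line, no leading '['
    "    at com.example.Foo.bar(Foo.java:42)",
    "[2019-07-09 17:07:30] INFO recovered",
]
print(len(group_events(log)))  # -> 2
```

The four input lines collapse into two events: the error plus its stack trace, and the recovery message.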
Start Filebeat
Filebeat start commands:
# foreground
./filebeat -e -c filebeat.yml
# background, discarding output / writing output to a file
nohup ./filebeat -e -c filebeat.yml >/dev/null 2>&1 &
nohup ./filebeat -e -c filebeat.yml > filebeat.log &
Filebeat with multiple indices
Filebeat is small and handy.
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/nginx/access.log
  fields:
    type: "nginx"
- input_type: log
  paths:
    - /mnt/www/bi.xxxxx.com/app/runtime/tasklog/task_*.log
  fields:
    type: "task"
  json.message_key: log
  json.keys_under_root: true
output.elasticsearch:
  hosts: ["127.0.0.1:9200"]
  #index: "logs-%{[beat.version]}-%{+yyyy.MM.dd}"
  indices:
    - index: "www-f-nginx-log"
      when.equals:
        fields.type: "nginx"
    - index: "www-f-task-log"
      when.equals:
        fields.type: "task"
#./filebeat -e -c filebeat.yml -d "Publish"
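The `indices` / `when.equals` section above routes each event to an index based on its fields.type. A rough Python sketch of that routing decision (the RULES table, the route helper, and the fallback index name are all illustrative, not Filebeat internals):

```python
# Each rule pairs a condition (flattened field -> required value)
# with the index to use when the condition matches.
RULES = [
    ({"fields.type": "nginx"}, "www-f-nginx-log"),
    ({"fields.type": "task"}, "www-f-task-log"),
]

def route(event: dict, default: str = "filebeat") -> str:
    """Return the index for the first rule whose condition matches."""
    for condition, index in RULES:
        if all(event.get(k) == v for k, v in condition.items()):
            return index
    return default  # no rule matched: fall back to a default index

print(route({"fields.type": "nginx"}))  # -> www-f-nginx-log
print(route({"fields.type": "other"}))  # -> filebeat
```

This is why the per-prospector `fields: type: ...` entries matter: they are the only thing the routing conditions look at.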