- Example: filebeat.yml
Configure the file to collect Tomcat's catalina.out log (the original comments are kept for reference).
[vagrant@localhost filebeat-7.7.1]$ vi filebeat.yml
###################### Filebeat Configuration Example #########################
#=========================== Filebeat inputs =============================
filebeat.inputs:
# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.
- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    # Multiple paths can be configured.
    - /home/vagrant/apache-tomcat-9.0.20/logs/catalina.*.out
    #- c:\programdata\elasticsearch\logs\*
  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^INFO','^ERR', '^WARN']
  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']
  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1
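The `include_lines` / `exclude_lines` semantics can be sketched in Python (a minimal illustration, not Filebeat's actual code; per the Filebeat docs, `include_lines` is evaluated before `exclude_lines`):

```python
import re

def keep_line(line, include_patterns=None, exclude_patterns=None):
    """Sketch of Filebeat's line filtering: include_lines runs first,
    then exclude_lines; patterns are unanchored regular expressions."""
    if include_patterns and not any(re.search(p, line) for p in include_patterns):
        return False  # no include pattern matched -> drop the line
    if exclude_patterns and any(re.search(p, line) for p in exclude_patterns):
        return False  # an exclude pattern matched -> drop the line
    return True

lines = ["INFO server started", "DBG heartbeat", "WARN low disk"]
print([l for l in lines
       if keep_line(l,
                    include_patterns=['^INFO', '^ERR', '^WARN'],
                    exclude_patterns=['^DBG'])])
# ['INFO server started', 'WARN low disk']
```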
  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  multiline.pattern: ^\[

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  multiline.negate: true

  # Match can be set to "after" or "before". It is used to define if lines should be appended to a pattern
  # that was (not) matched before or after, or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to next in Logstash
  multiline.match: after
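With the settings above (pattern `^\[`, negate `true`, match `after`), a line matching `^\[` starts a new event and every following non-matching line is appended to it, so a Java stack trace becomes a single event. A minimal Python sketch of just this case (not Filebeat's implementation):

```python
import re

def merge_multiline(lines, pattern=r'^\['):
    """Sketch of multiline merging for negate: true, match: after only:
    a line matching the pattern starts a new event; non-matching lines
    are appended to the event that is currently open."""
    events, current = [], None
    for line in lines:
        if re.search(pattern, line) or current is None:
            if current is not None:
                events.append(current)
            current = line          # this line starts a new event
        else:
            current += "\n" + line  # continuation line, append after
    if current is not None:
        events.append(current)
    return events

log = [
    "[2020-06-01 10:00:00] ERROR something failed",
    "java.lang.NullPointerException",
    "    at com.example.App.main(App.java:13)",
    "[2020-06-01 10:00:05] INFO recovered",
]
print(len(merge_multiline(log)))  # 2 -- the stack trace folds into the first event
```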
#============================= Filebeat modules ===============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s
#==================== Elasticsearch template setting ==========================
setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false
#================================ General =====================================
# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging
#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:
#============================== Kibana =====================================
# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "192.168.0.140:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:
#============================= Elastic Cloud ==================================
# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).
# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:
# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:
#================================ Outputs =====================================
# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["192.168.0.140:9200"]

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"
#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["192.168.0.140:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"
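For this output to work, the receiving Logstash instance needs a `beats` input listening on port 5044. A minimal pipeline sketch (the elasticsearch output address and index pattern here are illustrative assumptions, not part of this tutorial's setup):

```conf
input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["192.168.0.140:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}
```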
#================================ Processors =====================================
# Configure processors to enhance or manipulate events generated by the beat.
processors:
- add_host_metadata: ~
- add_cloud_metadata: ~
- add_docker_metadata: ~
- add_kubernetes_metadata: ~
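Besides the metadata enrichers listed above, processors can also trim events. A hedged sketch using Filebeat's `drop_fields` and `drop_event` processors (the field name and the `^DBG` condition are illustrative, not part of this setup):

```yaml
processors:
  # remove a field that is rarely useful downstream
  - drop_fields:
      fields: ["agent.ephemeral_id"]
  # discard whole events whose message starts with DBG
  - drop_event:
      when:
        regexp:
          message: "^DBG"
```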
#================================ Logging =====================================
# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug
# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]
#============================== X-Pack Monitoring ===============================
# filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster. This requires xpack monitoring to be enabled in Elasticsearch. The
# reporting is disabled by default.
# Set to true to enable the monitoring reporter.
#monitoring.enabled: false
# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:
# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:
#================================= Migration ==================================
# This allows enabling 6.7 migration aliases
#migration.6_to_7.enabled: true