- Introduction to search engines
- Using Elasticsearch
- Using Logstash
- Using Filebeat
- Using Kibana
- A comprehensive Elastic Stack application example
I. Introduction to Search Engines
(1) Main components of a search engine
Indexing component: acquire data --> build documents --> analyze documents --> index documents (inverted index)
Search component: user search interface --> build the query (turn the user's input into a processable query object) --> run the search --> present the results
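To make the indexing step concrete, here is a toy sketch of an inverted index in plain shell (the two-document corpus is made up for illustration): every term is mapped to the list of documents containing it, which is essentially the structure Lucene builds when it indexes documents.
```
# Toy inverted index: map each term to the documents that contain it.
printf 'doc1 elasticsearch is a search engine\ndoc2 lucene is a search library\n' |
awk '{ for (i = 2; i <= NF; i++) postings[$i] = postings[$i] " " $1 }
     END { for (t in postings) print t ":" postings[t] }' | sort
# Output is a term -> posting-list table, e.g. "search: doc1 doc2";
# answering a term query is then just a lookup in this table.
```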
(2) Mainstream open-source search software
Indexing components: Lucene, Solr, Elasticsearch
Lucene: a class library providing the indexing functionality
Solr: a complete indexing component built on top of Lucene
Elasticsearch: a distributed indexing component, likewise built on top of Lucene
Search component: Kibana
(3) Components of the Elastic Stack
The Elastic Stack bundles a series of tools; the ones in common use today are Elasticsearch, Logstash, Beats, and Kibana.
Elasticsearch: the core of the Elastic Stack; the indexing component
Logstash: extracts, processes, and outputs data; it is resource-hungry, and its data-collection role has largely been taken over by Beats
Beats: a lightweight data-collection platform with many single-purpose tools: Filebeat, Metricbeat, Packetbeat, Winlogbeat, Heartbeat
Kibana: the search component; a visual interface that accepts search queries and presents the results
II. Using Elasticsearch
(1) Core components of ES
Elasticsearch cluster: distributed storage through the shard mechanism
Cluster states:
green: every primary and replica shard is allocated
yellow: every primary shard is allocated, but one or more replica shards are missing
red: one or more primary shards are missing along with their replicas
Core concepts, by analogy with a relational database:
Index: analogous to a database
Type: analogous to a table
Document: analogous to a row
Program environment of Elasticsearch 5:
Configuration files:
/etc/elasticsearch/elasticsearch.yml: main configuration file
/etc/elasticsearch/jvm.options: JVM configuration file
/etc/elasticsearch/log4j2.properties: logging configuration file
Unit file: elasticsearch.service
Program files:
/usr/share/elasticsearch/bin/elasticsearch
/usr/share/elasticsearch/bin/elasticsearch-keystore: keystore management tool
/usr/share/elasticsearch/bin/elasticsearch-plugin: plugin management tool
Ports: REST/search service: 9200/tcp; cluster transport: 9300/tcp
How an ES cluster works:
Node discovery: multicast or unicast over 9300/tcp
The key factor is cluster.name: a node joins the cluster whose name matches its own configuration.
All nodes elect one master node, which manages the cluster state (green/yellow/red) and decides how shards are distributed. To avoid split-brain, discovery.zen.minimum_master_nodes should be set to N/2 + 1 (rounded down) for N master-eligible nodes; with three nodes that is 2.
(2) RESTful API
Elasticsearch exposes a RESTful API, so you can interact with it over HTTP.
Syntax:
```
curl -X<VERB> '<PROTOCOL>://<HOST>:<PORT>/<PATH>?<QUERY_STRING>' -d '<BODY>'
```
- <BODY>: the request body, in JSON
- <VERB>: GET (retrieve), POST (modify), PUT (create), DELETE (delete)
- <PATH>: /index_name/type/Document_ID/
- Special paths: /_cat, /_search, /_cluster
- Creating a document: -XPUT -d '{"key1": "value1", "key2": value, ...}'
- /_search: search all indices and types
- /INDEX_NAME/_search: search a single index
- /INDEX1,INDEX2/_search: search several specified indices
- /s*/_search: search all indices whose names start with s
- /INDEX_NAME/TYPE_NAME/_search: search a single type within a single index
Usage examples:
```
curl -XGET 'http://192.168.136.230:9200/_cluster/health?pretty=true'
curl -XGET 'http://192.168.136.230:9200/_cluster/stats?pretty=true'
curl -XGET 'http://192.168.136.230:9200/_cat/nodes?pretty'
curl -XGET 'http://192.168.136.230:9200/_cat/health?pretty'
```
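Beyond `?q=` URI searches, the request body can carry the full query DSL. A minimal sketch (the index and field names are placeholders for whatever you have indexed):
```
# Match documents whose "name" field contains "elasticsearch", pretty-printed.
curl -XGET 'http://192.168.136.230:9200/books/_search?pretty' -d '{
  "query": { "match": { "name": "elasticsearch" } }
}'
```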
(3) Experiment 1: configuring and managing an Elasticsearch cluster
Environment:
Three nodes: node0.hellopeiyang.com, node1.hellopeiyang.com, node3.hellopeiyang.com
Step 1: preparation
```
ntpdate 172.18.0.1    # synchronize time
vim /etc/hosts        # cluster nodes must resolve each other's hostnames; this lab uses /etc/hosts
192.168.136.230 node0 node0.hellopeiyang.com
192.168.136.130 node1 node1.hellopeiyang.com
192.168.136.132 node3 node3.hellopeiyang.com
```
Step 2: install and start the Elasticsearch service
```
yum install java-1.8.0-openjdk
rpm -ivh elasticsearch-5.5.3.rpm
mkdir /data/els/{logs,data} -pv
chown -R elasticsearch:elasticsearch /data/els/
vim /etc/elasticsearch/elasticsearch.yml
    cluster.name: myels                # cluster name; identical on every node
    node.name: node0                   # node name; unique per node
    path.data: /data/els/data          # data directory
    path.logs: /data/els/logs          # log directory
    network.host: 192.168.136.230      # listen address
    http.port: 9200                    # listen port
    # identical on every node: the hostnames of all cluster nodes
    discovery.zen.ping.unicast.hosts: ["node0", "node1", "node3"]
    # minimum number of master-eligible nodes needed to elect a master
    discovery.zen.minimum_master_nodes: 2
vim /etc/elasticsearch/jvm.options
    # Elasticsearch is memory-hungry; the JVM heap is usually enlarged
    -Xms1g    # initial JVM heap size
    -Xmx1g    # maximum JVM heap size
systemctl start elasticsearch.service
```
Step 3: verify that the node and the cluster are running
```
# check that the node is up
curl -XGET 'http://192.168.136.230:9200/'
# check that the cluster is up
curl -XGET 'http://192.168.136.230:9200/_cat/nodes?pretty'
```
Step 4: add, delete, and query documents in the Elasticsearch cluster
```
# create documents 1, 2 and 3 in index "books", type "IT"
curl -XPUT 'http://192.168.136.230:9200/books/IT/1?pretty' -d '{
  "name": "Elasticsearch in Action",
  "date": "Dec 3, 2015",
  "author": "Radu Gheorghe and Matthew Lee Hinman" }'
curl -XPUT 'http://192.168.136.230:9200/books/IT/2?pretty' -d '{
  "name": "Redis Essentials",
  "date": "Sep 8, 2015",
  "author": "Maxwell Dayvson Da Silva and Hugo Lopes Tavares" }'
# note the escaped apostrophe: an unescaped one would terminate the single-quoted string
curl -XPUT 'http://192.168.136.230:9200/books/IT/3?pretty' -d '{
  "name": "Puppet 4.10 Beginner'\''s Guide",
  "date": "May 31, 2017",
  "author": "John Arundel" }'
# delete document 3 from index "books", type "IT"
curl -XDELETE 'http://192.168.136.230:9200/books/IT/3?pretty'
# query index "books", type "IT" for documents containing the keyword "elasticsearch"
curl -XGET 'http://192.168.136.230:9200/books/IT/_search?q=elasticsearch&pretty'
```
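A quick way to confirm a write succeeded is to read the document back by ID (same index and type as above):
```
curl -XGET 'http://192.168.136.230:9200/books/IT/1?pretty'
# "found": true plus the _source body indicates the document was indexed
```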
Step 5: install elasticsearch-head
elasticsearch-head is an Elasticsearch plugin, hosted on GitHub, for managing the cluster from a browser.
```
yum install git npm -y
git clone https://github.com/mobz/elasticsearch-head.git
cd elasticsearch-head/
npm install
# adjust the elasticsearch configuration on each node; without the
# following two lines head cannot connect to the node
vim /etc/elasticsearch/elasticsearch.yml
    http.cors.enabled: true
    http.cors.allow-origin: "*"
systemctl restart elasticsearch.service
npm run start &
```
Enter the address of the node to connect to, and head shows the basic state of the cluster that node belongs to; in the per-node view, boxes with a bold border are primary shards and boxes with a thin border are replica shards.
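head's development server listens on 9100/tcp by default, so point a browser at http://&lt;host&gt;:9100/, or verify from the shell that it is up:
```
# fetch the first few lines of head's start page to confirm it is serving
curl -s http://localhost:9100/ | head -n 5
```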
III. Using Logstash
(1) Installing Logstash
Install a Java runtime:
```
yum install java-1.8.0-openjdk -y
```
Download and install the Logstash package:
```
rpm -ivh logstash-5.5.3.rpm
```
Logstash installation layout:
Configuration directory: /etc/logstash/conf.d/
Executable directory: /usr/share/logstash/bin
(二)配置文件格式
input { // 設(shè)置數(shù)據(jù)來源迅脐,必須設(shè)置
...
}
filter{ // 設(shè)置數(shù)據(jù)的過濾操作芍殖,經(jīng)常設(shè)置
...
}
output { // 設(shè)置數(shù)據(jù)的輸出位置,必須設(shè)置
...
}
(3) Experiment 2: Logstash basics
Experiment 2-1: read data from standard input, process it, and write it to standard output
```
vim /etc/logstash/conf.d/test.conf
input {
    stdin {}
}
output {
    stdout {
        codec => rubydebug
    }
}
/usr/share/logstash/bin/logstash -t -f /etc/logstash/conf.d/test.conf   # check the configuration syntax
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/test.conf      # run the pipeline
```
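Once the pipeline starts, anything typed on stdin comes back as a structured event; the rubydebug codec prints roughly the following (the timestamp and host are of course environment-specific):
```
hello logstash
{
    "@timestamp" => 2017-12-14T08:42:56.000Z,
      "@version" => "1",
          "host" => "node0",
       "message" => "hello logstash"
}
```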
Experiment 2-2: read httpd's access log, split each log line into fields with the grok filter, and write the result to standard output
```
# install and configure the httpd service
yum install httpd
echo "hello index file" > /var/www/html/index.html
echo "hello test file" > /var/www/html/test.html
systemctl start httpd
# edit the Logstash configuration
vim /etc/logstash/conf.d/test.conf
input {
    file {
        path => ["/var/log/httpd/access_log"]
        start_position => "beginning"
    }
}
filter {
    grok {
        match => { "message" => "%{HTTPD_COMBINEDLOG}" }
        remove_field => "message"
    }
}
output {
    stdout {
        codec => rubydebug
    }
}
/usr/share/logstash/bin/logstash -t -f /etc/logstash/conf.d/test.conf
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/test.conf
```
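As a smoke test, generate a little traffic so access_log has lines to parse; each event printed by rubydebug should then carry the fields grok extracted (clientip, verb, request, response, bytes, agent, and so on):
```
curl -s http://localhost/index.html > /dev/null
curl -s http://localhost/test.html > /dev/null
```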
Experiment 2-3: normalize the timestamp with the date filter
```
vim /etc/logstash/conf.d/test.conf
input {
    file {
        path => ["/var/log/httpd/access_log"]
        start_position => "beginning"
    }
}
filter {
    grok {
        match => { "message" => "%{HTTPD_COMBINEDLOG}" }
        remove_field => "message"
    }
    date {
        match => ["timestamp","dd/MMM/YYYY:H:m:s Z"]
        remove_field => "timestamp"
    }
}
output {
    stdout {
        codec => rubydebug
    }
}
/usr/share/logstash/bin/logstash -t -f /etc/logstash/conf.d/test.conf
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/test.conf
```
Experiment 2-4: rename a key with the mutate filter, changing "agent" to "user_agent"
```
vim /etc/logstash/conf.d/test.conf
input {
    file {
        path => ["/var/log/httpd/access_log"]
        start_position => "beginning"
    }
}
filter {
    grok {
        match => { "message" => "%{HTTPD_COMBINEDLOG}" }
        remove_field => "message"
    }
    date {
        match => ["timestamp","dd/MMM/YYYY:H:m:s Z"]
        remove_field => "timestamp"
    }
    mutate {
        rename => { "agent" => "user_agent" }
    }
}
output {
    stdout {
        codec => rubydebug
    }
}
/usr/share/logstash/bin/logstash -t -f /etc/logstash/conf.d/test.conf
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/test.conf
```
Experiment 2-5: look up the latitude and longitude of each client IP with the geoip filter
```
vim /etc/logstash/conf.d/test.conf
input {
    file {
        path => ["/var/log/httpd/access_log"]
        start_position => "beginning"
    }
}
filter {
    grok {
        match => { "message" => "%{HTTPD_COMBINEDLOG}" }
        remove_field => "message"
    }
    date {
        match => ["timestamp","dd/MMM/YYYY:H:m:s Z"]
        remove_field => "timestamp"
    }
    mutate {
        rename => { "agent" => "user_agent" }
    }
    geoip {
        source => "clientip"
        target => "geoip"
        # GeoLite2-City.mmdb is downloaded from the MaxMind website;
        # it maps IP addresses to geographic information
        database => "/etc/logstash/maxmind/GeoLite2-City.mmdb"
    }
}
output {
    stdout {
        codec => rubydebug
    }
}
/usr/share/logstash/bin/logstash -t -f /etc/logstash/conf.d/test.conf
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/test.conf
```
Experiment 2-6: collect data from a Redis database
```
yum install redis
vim /etc/redis.conf
bind 0.0.0.0
systemctl start redis
# data_type "list" reads from a Redis list, so populate the key
# with RPUSH/LPUSH rather than SET
redis-cli RPUSH mylog 15.15.15.15
vim /etc/logstash/conf.d/test.conf
input {
    redis {
        host => "192.168.136.230"
        port => "6379"
        key => "mylog"
        data_type => "list"
    }
}
filter {
    grok {
        match => { "message" => "%{HTTPD_COMBINEDLOG}" }
        remove_field => "message"
    }
    date {
        match => ["timestamp","dd/MMM/YYYY:H:m:s Z"]
        remove_field => "timestamp"
    }
    mutate {
        rename => { "agent" => "user_agent" }
    }
    geoip {
        source => "clientip"
        target => "geoip"
        database => "/etc/logstash/maxmind/GeoLite2-City.mmdb"
    }
}
output {
    stdout {
        codec => rubydebug
    }
}
/usr/share/logstash/bin/logstash -t -f /etc/logstash/conf.d/test.conf
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/test.conf
```
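A bare IP like the one above will only earn the event a _grokparsefailure tag, because the filter expects a full combined-log line. To watch the whole filter chain succeed, push a complete log line instead (a made-up example entry):
```
redis-cli RPUSH mylog '15.15.15.15 - - [14/Dec/2017:16:42:56 +0800] "GET / HTTP/1.1" 200 18 "-" "curl/7.29.0"'
```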
Experiment 2-7: write Logstash's output to a Redis database
```
vim /etc/logstash/conf.d/test.conf
input {
    file {
        path => ["/var/log/httpd/access_log"]
        start_position => "beginning"
    }
}
filter {
    grok {
        match => { "message" => "%{HTTPD_COMBINEDLOG}" }
        remove_field => "message"
    }
    date {
        match => ["timestamp","dd/MMM/YYYY:H:m:s Z"]
        remove_field => "timestamp"
    }
    mutate {
        rename => { "agent" => "user_agent" }
    }
    geoip {
        source => "clientip"
        target => "geoip"
        database => "/etc/logstash/maxmind/GeoLite2-City.mmdb"
    }
}
output {
    redis {
        data_type => "channel"
        key => "logstash-%{+yyyy.MM.dd}"
    }
}
/usr/share/logstash/bin/logstash -t -f /etc/logstash/conf.d/test.conf
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/test.conf
```
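With data_type => "channel" the events are published over Redis pub/sub rather than stored under a key, so verify delivery with a pattern subscription (the key name carries the current date):
```
redis-cli PSUBSCRIBE 'logstash-*'
# append a line to access_log in another terminal and watch the JSON arrive
```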
Experiment 2-8: send Logstash's output to the Elasticsearch cluster from experiment 1
```
vim /etc/logstash/conf.d/test.conf
input {
    file {
        path => ["/var/log/httpd/access_log"]
        start_position => "beginning"
    }
}
filter {
    grok {
        match => { "message" => "%{HTTPD_COMBINEDLOG}" }
        remove_field => "message"
    }
    date {
        match => ["timestamp","dd/MMM/YYYY:H:m:s Z"]
        remove_field => "timestamp"
    }
    mutate {
        rename => { "agent" => "user_agent" }
    }
    geoip {
        source => "clientip"
        target => "geoip"
        database => "/etc/logstash/maxmind/GeoLite2-City.mmdb"
    }
}
output {
    elasticsearch {
        hosts => ["http://192.168.136.230/", "http://192.168.136.130/"]
        document_type => "httpd-accesslog"
        index => "logstash-%{+yyyy.MM.dd}"
    }
}
/usr/share/logstash/bin/logstash -t -f /etc/logstash/conf.d/test.conf
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/test.conf
```
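To confirm from the shell that the cluster is receiving events, list the indices and run a quick search (the index name carries the current date):
```
curl -XGET 'http://192.168.136.230:9200/_cat/indices?v'
curl -XGET 'http://192.168.136.230:9200/logstash-*/_search?q=response:200&pretty'
```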
The received data can now also be seen in elasticsearch-head.
IV. Using Filebeat
(1) The Beats platform
- The Beats platform is a collection of single-purpose data shippers. Installed as lightweight agents, they send data from hundreds or thousands of machines to Logstash or Elasticsearch.
- Filebeat: a lightweight log shipper for forwarding and centralizing logs and files
(2) Filebeat file layout
- /etc/filebeat/filebeat.yml: configuration file
- /etc/filebeat/filebeat.full.yml: configuration template
- /lib/systemd/system/filebeat.service: unit file
(3) Experiment 3: using Filebeat
Experiment 3-1: Filebeat collects data and ships it to Logstash, which transforms it and forwards it to Elasticsearch
Environment: the finished setup of experiment 2-8 — three Elasticsearch nodes and one Logstash host — plus one new Filebeat host.
Step 1: configure the Filebeat host
```
rpm -ivh filebeat-5.5.3-x86_64.rpm
vim /etc/filebeat/filebeat.yml
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/httpd/access_log*      # logs to monitor
output.logstash:
  hosts: ["192.168.136.230:5044"]     # IP and port of the Logstash server
systemctl start filebeat.service
```
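Before starting the service it can be worth checking the configuration; if memory serves, Filebeat 5.x supports a -configtest flag for this (binary path as installed by the RPM):
```
# parse the configuration and exit; a non-zero status indicates an error
/usr/share/filebeat/bin/filebeat -configtest -c /etc/filebeat/filebeat.yml
```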
Step 2: configure the Logstash host
```
vim /etc/logstash/conf.d/test.conf
input {
    beats {
        port => 5044
    }
}
filter {
    grok {
        match => { "message" => "%{HTTPD_COMBINEDLOG}" }
        remove_field => "message"
    }
    date {
        match => ["timestamp","dd/MMM/YYYY:H:m:s Z"]
        remove_field => "timestamp"
    }
    mutate {
        rename => { "agent" => "user_agent" }
    }
    geoip {
        source => "clientip"
        target => "geoip"
        database => "/etc/logstash/maxmind/GeoLite2-City.mmdb"
    }
}
output {
    elasticsearch {
        hosts => ["http://192.168.136.230/","http://192.168.136.130/"]
        document_type => "httpd-accesslog"
        index => "logstash-%{+yyyy.MM.dd}"
    }
}
/usr/share/logstash/bin/logstash -t -f /etc/logstash/conf.d/test.conf
systemctl start logstash.service
```
Step 3: test
```
echo '120.120.120.120 - - [14/Dec/2017:16:42:56 +0800] "GET / HTTP/1.1" 200 18 "-" "curl/7.29.0"' >> /var/log/httpd/access_log
```
Experiment 3-2: Filebeat ships data to Redis, Redis feeds Logstash, and Logstash transforms and forwards it to Elasticsearch
Environment: the finished setup of experiment 3-1 — three Elasticsearch nodes, one Logstash host, one Filebeat host — plus one new Redis host.
Step 1: adjust the Filebeat host configuration
```
vim /etc/filebeat/filebeat.yml
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/httpd/access_log*   # logs to monitor
output.redis:
  enabled: true
  hosts: ["192.168.136.240"]       # Redis server address
  port: 6379
  key: httpd-accesslog             # must match the key configured on the Logstash host
  db: 0
  datatype: list
systemctl restart filebeat.service
```
Step 2: configure the Redis host
```
yum install redis
vim /etc/redis.conf
bind 0.0.0.0
systemctl start redis.service
```
Step 3: configure the Logstash host
```
vim /etc/logstash/conf.d/test.conf
input {
    redis {
        host => '192.168.136.240'
        port => '6379'
        key => 'httpd-accesslog'   # must match the key configured on the Filebeat host
        data_type => 'list'
    }
}
filter {
    grok {
        match => { "message" => "%{HTTPD_COMBINEDLOG}" }
        remove_field => "message"
    }
    date {
        match => ["timestamp","dd/MMM/YYYY:H:m:s Z"]
        remove_field => "timestamp"
    }
    mutate {
        rename => { "agent" => "user_agent" }
    }
    geoip {
        source => "clientip"
        target => "geoip"
        database => "/etc/logstash/maxmind/GeoLite2-City.mmdb"
    }
}
output {
    elasticsearch {
        hosts => ["http://192.168.136.230/","http://192.168.136.130/"]
        document_type => "httpd-accesslog"
        index => "logstash-%{+yyyy.MM.dd}"
    }
}
/usr/share/logstash/bin/logstash -t -f /etc/logstash/conf.d/test.conf
systemctl restart logstash.service
```
Step 4: test
```
echo '135.136.137.138 - - [14/Dec/2017:16:42:56 +0800] "GET / HTTP/1.1" 200 18 "-" "curl/7.29.0"' >> /var/log/httpd/access_log
```
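The event now transits Redis on its way to Elasticsearch; if Logstash is stopped you can watch entries queue up in the list, and drain again once it restarts (a hypothetical spot check):
```
redis-cli -h 192.168.136.240 LLEN httpd-accesslog
```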
V. Using Kibana
- Kibana visualizes the data stored in Elasticsearch.
(1) Kibana file layout
- /etc/kibana/kibana.yml: configuration file
- /etc/systemd/system/kibana.service: unit file
(2) Experiment 4: using Kibana
Experiment 4: visualize the data in Elasticsearch with Kibana
Environment: the finished setup of experiment 3-2 — three Elasticsearch nodes, one Logstash host, one Filebeat host, one Redis host — plus one new Kibana host.
Step 1: configure Kibana
```
rpm -ivh kibana-5.5.3-x86_64.rpm
vim /etc/kibana/kibana.yml
server.port: 5601                   # listen port
server.host: "0.0.0.0"              # listen address
server.basePath: ""
server.name: "node3.hellopeiyang.com"
elasticsearch.url: "http://192.168.136.230:9200"   # address and port of an Elasticsearch node
systemctl start kibana.service
```
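Kibana takes a few seconds to come up; confirm it is listening before opening the browser:
```
ss -tnl | grep 5601
```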
Step 2: open port 5601 of the Kibana host in a web browser to reach the initial setup page.
Fill in the index name pattern and click Create to enter the management console. The main functions are listed on the left; in the current "Discover" view, the box at the top runs searches and the results appear below.
The "Visualize" function on the left builds charts from the data, such as a pie chart.
"Visualize" can also build a map of the geographic distribution of visits.
The "Dashboard" function arranges several visualizations side by side on one monitoring screen.
VI. Comprehensive Elastic Stack Examples
(1) Example 1
Goal: collect, process, store, and visualize Tomcat log data with Filebeat, Logstash, Elasticsearch, and Kibana
Environment: three Elasticsearch nodes, one Logstash host, one Filebeat host, one Redis host, and one Kibana host
Step 1: configure the Filebeat host
```
vim /etc/filebeat/filebeat.yml
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/tomcat/*access_log*   # path of the monitored Tomcat logs
  document_type: tomcat-accesslog
output.redis:
  enabled: true
  hosts: ["192.168.136.131"]
  port: 6379
  key: tomcat-accesslog              # Redis key under which events are stored
  db: 0
  datatype: list
systemctl start filebeat.service
```
Step 2: configure the Redis server
```
vim /etc/redis.conf
bind 0.0.0.0
systemctl start redis.service
```
Step 3: configure the Logstash server
```
vim /etc/logstash/conf.d/tomcat.conf
input {
    redis {
        host => '192.168.136.131'
        port => '6379'
        key => 'tomcat-accesslog'   # must match the key Filebeat writes to
        data_type => 'list'
    }
}
filter {
    grok {
        match => { "message" => "%{HTTPD_COMMONLOG}" }
        remove_field => "message"
    }
    date {
        match => ["timestamp","dd/MMM/YYYY:H:m:s Z"]
        remove_field => "timestamp"
    }
}
output {
    elasticsearch {
        hosts => ["http://192.168.136.230/","http://192.168.136.130/"]
        document_type => "tomcat-accesslog"
        index => "logstash-%{+yyyy.MM.dd}"
    }
}
systemctl start logstash.service
```
Step 4: configure the Elasticsearch cluster
```
mkdir /data/els/{data,logs} -pv
chown -R elasticsearch:elasticsearch /data/els
vim /etc/elasticsearch/elasticsearch.yml
cluster.name: myels
node.name: node0
path.data: /data/els/data
path.logs: /data/els/logs
network.host: 192.168.136.230
http.port: 9200
discovery.zen.ping.unicast.hosts: ["node0", "node1", "node3"]
discovery.zen.minimum_master_nodes: 2
http.cors.enabled: true
http.cors.allow-origin: "*"
systemctl start elasticsearch
```
Step 5: start elasticsearch-head
```
npm run start &
```
The web management page now shows the index the cluster created for the incoming data.
Step 6: configure Kibana
```
vim /etc/kibana/kibana.yml
server.port: 5601
server.host: "0.0.0.0"
server.basePath: ""
server.name: "node3.hellopeiyang.com"
elasticsearch.url: "http://192.168.136.230:9200"
systemctl start kibana.service
```
The Kibana console now shows the parsed Tomcat log statistics.
(2) Example 2
Goal: collect, process, store, and visualize Nginx log data with Filebeat, Logstash, Elasticsearch, and Kibana
Environment: three Elasticsearch nodes, one Logstash host, one Filebeat host, one Redis host, and one Kibana host
Step 1: configure the Filebeat host
```
vim /etc/filebeat/filebeat.yml
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/nginx/access.log*   # path of the monitored Nginx logs
  document_type: nginx-accesslog
output.redis:
  enabled: true
  hosts: ["192.168.136.131"]
  port: 6379
  key: nginx-accesslog             # Redis key under which events are stored
  db: 0
  datatype: list
systemctl start filebeat.service
```
Step 2: configure the Redis server
```
vim /etc/redis.conf
bind 0.0.0.0
systemctl start redis.service
```
Step 3: configure the Logstash server
When no stock grok pattern matches the whole log line, you can extend the pattern yourself. For example, appending \"%{DATA:realclient}\" captures one extra quoted field: the name before the colon is the data format (the pattern), and the name after the colon is the field name you assign. A sample log line this matches is shown after the configuration below.
```
vim /etc/logstash/conf.d/nginx.conf
input {
    redis {
        host => '192.168.136.131'
        port => '6379'
        key => 'nginx-accesslog'   # must match the key Filebeat writes to
        data_type => 'list'
    }
}
filter {
    grok {
        match => { "message" => "%{HTTPD_COMBINEDLOG} \"%{DATA:realclient}\"" }
        remove_field => "message"
    }
    date {
        match => ["timestamp","dd/MMM/YYYY:H:m:s Z"]
        remove_field => "timestamp"
    }
}
output {
    elasticsearch {
        hosts => ["http://192.168.136.230/","http://192.168.136.130/"]
        document_type => "nginx-accesslog"
        index => "logstash-%{+yyyy.MM.dd}"
    }
}
systemctl start logstash.service
```
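To see the custom capture in action, append a line in which a standard combined-log entry is followed by one extra quoted field (the trailing value here is made up, standing in for something like the real client IP forwarded by a proxy); grok puts that last field into "realclient":
```
echo '10.0.0.8 - - [14/Dec/2017:16:42:56 +0800] "GET / HTTP/1.1" 200 18 "-" "curl/7.29.0" "120.120.120.120"' >> /var/log/nginx/access.log
```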
Step 4: configure the Elasticsearch cluster, exactly as in step 4 of example 1.
Step 5: start elasticsearch-head
```
npm run start &
```
In the documents of the index, besides the fields split out by the stock pattern, the custom field is now visible as well.
Step 6: configure Kibana, exactly as in step 6 of example 1.
The Kibana console now shows the parsed Nginx log statistics; note that the custom field is visible there too.