Introduction
ELK is an acronym for three open-source projects: Elasticsearch, Logstash, and Kibana.
- Elasticsearch is a search and analytics engine.
- Logstash is a server-side data processing pipeline that ingests data from multiple sources simultaneously, transforms it, and then sends it to a "stash" such as Elasticsearch.
- Kibana lets users visualize the data in Elasticsearch with charts and graphs.
Clone a Virtual Machine
- Clone a dedicated VM for the ELK environment from the snapshot of the CentOS VM that already has the JDK installed
- After cloning, power it on and log in
- Change the hostname (a one-line alternative is sketched after the command below)
vim /etc/hostname
zmzhou-132-elk
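Alternatively, the same change can be made with a single command (a sketch using systemd's hostnamectl, which ships with CentOS 7):
hostnamectl set-hostname zmzhou-132-elk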
- Configure a static IP address
vim /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
#UUID=d77bb448-a7db-4b0f-9812-b306e44c5d3b
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.163.132
GATEWAY=192.168.163.2
NETMASK=255.255.255.0
DNS1=8.8.8.8
DNS2=114.114.114.114
- Reboot
reboot
- Check the IP address, network connectivity, and Java environment (a few verification commands are sketched after the profile snippet below)
If the Java environment is not configured yet, see the download link for the free-for-commercial-use JDK 1.8 release.
Download it and configure the environment variables:
vim /etc/profile
# Append the following at the end; after saving and exiting (ZZ in vim), run source /etc/profile
export JAVA_HOME=/usr/java/jdk1.8.0_202
export JRE_HOME=$JAVA_HOME/jre
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export ES_JAVA_HOME=/home/elastic/elasticsearch-7.12.1/jdk
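After running source /etc/profile, a few quick checks (a minimal sketch; the gateway address matches the ifcfg-ens33 file above):
ip addr show ens33       # the static IP 192.168.163.132 should be applied
ping -c 3 192.168.163.2  # the gateway should be reachable
java -version            # the JDK should be on the PATH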
Install Elasticsearch
- Check the system architecture
lscpu
- Download the matching version of Elasticsearch: https://www.elastic.co/cn/downloads/elasticsearch
- Create the user elastic
useradd elastic
Upload the installation package to the /home/elastic/ directory.
- Extract it, modify the configuration file, and start Elasticsearch
tar -zxvf elasticsearch-7.12.1-linux-x86_64.tar.gz
cd elasticsearch-7.12.1/
vim config/elasticsearch.yml
# Modify the following settings
cluster.name: zmzhou-132-elk
node.name: es-node-1
path.data: /home/elastic/elasticsearch-7.12.1/data
path.logs: /home/elastic/elasticsearch-7.12.1/logs
bootstrap.memory_lock: false
bootstrap.system_call_filter: false
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["127.0.0.1", "zmzhou-132-elk"]
cluster.initial_master_nodes: ["es-node-1"]
# Change the owner of the directory
chown -R elastic:elastic /home/elastic/
# Switch to the elastic user
su - elastic
cd elasticsearch-7.12.1/
# Start in the background
./bin/elasticsearch -d
Errors and Solutions
- Error 1
JAVA_HOME is deprecated, use ES_JAVA_HOME
vim /etc/profile
export ES_JAVA_HOME=/home/elastic/elasticsearch-7.12.1/jdk
source /etc/profile
- Error 2: ERROR: bootstrap checks failed
[2] bootstrap checks failed. You must address the points described in the following [2] lines before starting Elasticsearch.
bootstrap check failure [1] of [2]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
bootstrap check failure [2] of [2]: the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured
Fix [1]: as root, add the following to /etc/sysctl.conf:
echo "vm.max_map_count=262144" >> /etc/sysctl.conf
sysctl -p  # apply the change immediately
Fix [2]: add the following to elasticsearch.yml:
discovery.seed_hosts: ["127.0.0.1", "zmzhou-132-elk"]
cluster.initial_master_nodes: ["es-node-1"]
- Error 3: ERROR: bootstrap checks failed
max file descriptors [4096] for elasticsearch process likely too low, increase to at least [65536]
max number of threads [1024] for user [elastic] likely too low, increase to at least [2048]
Fix [1]: switch to the root user and edit limits.conf, adding the values suggested by the error message:
vim /etc/security/limits.conf
# Add the following
* soft nofile 65536
* hard nofile 131072
* soft nproc 2048
* hard nproc 4096
Fix [2]: edit 90-nproc.conf and adjust the nproc limit (a quick verification is sketched after this fix):
vim /etc/security/limits.d/90-nproc.conf
# Change to
* soft nproc 2048
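These limits only apply to new login sessions. A quick verification (a minimal sketch; the exact values reported can vary with the PAM/systemd configuration):
su - elastic
ulimit -n  # max open files, expected to report 65536
ulimit -u  # max user processes, expected to report 2048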
- Error 4: bootstrap checks failed
bootstrap checks failed
system call filters failed to install; check the logs and fix your configuration or disable system call filters at your own risk
Fix: add the following to elasticsearch.yml:
bootstrap.memory_lock: false
bootstrap.system_call_filter: false
- Check the log
tail -100f /home/elastic/elasticsearch-7.12.1/logs/zmzhou-132-elk.log
If the startup succeeded, the log shows that the Elasticsearch node has started. A quick way to confirm is sketched below.
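A minimal check that Elasticsearch is answering on port 9200 (run on the ELK host):
curl http://localhost:9200                           # basic node and version info
curl 'http://localhost:9200/_cluster/health?pretty'  # status should be green or yellow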
Install Logstash
- Download the matching version of Logstash: https://www.elastic.co/cn/downloads/logstash
- Extract it and modify the configuration files
tar -zxvf logstash-7.12.1-linux-x86_64.tar.gz
cd logstash-7.12.1/
cp config/logstash-sample.conf config/logstash.conf
vim startup.sh
# Add the following, then save and exit
#!/bin/bash
nohup ./bin/logstash -f config/logstash.conf &
chmod +x startup.sh
vim config/logstash.conf
Add the following configuration:
input {
  beats {
    port => 5044
  }
  tcp {
    mode => "server"
    host => "0.0.0.0"     # accept logs from any host
    type => "elk1"        # tag each input source with a type
    port => 4567
    codec => json_lines   # newline-delimited JSON
  }
}
output {
  if [type] == "elk1" {
    elasticsearch {
      action => "index"                  # index documents into Elasticsearch
      hosts => "192.168.163.132:9200"    # Elasticsearch address and port
      index => "elk1-%{+YYYY.MM.dd}"     # index name pattern
      codec => "json"
    }
  }
}
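Optionally, validate the pipeline file before starting it, using Logstash's built-in config check:
./bin/logstash -f config/logstash.conf --config.test_and_exit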
- Start Logstash
./startup.sh
# Check the log
tail -100f nohup.out
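A minimal end-to-end test sketch (the field name in the test message is arbitrary; the second command uses Elasticsearch's _cat API):
# send one JSON line to the Logstash TCP input, using bash's built-in /dev/tcp
echo '{"message":"hello from bash"}' > /dev/tcp/192.168.163.132/4567
# the daily index should appear shortly afterwards
curl 'http://192.168.163.132:9200/_cat/indices/elk1-*?v'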
Install Kibana
- Download the matching version of Kibana: https://www.elastic.co/cn/downloads/kibana
- Extract it, modify the configuration file, and start Kibana
tar -zxvf kibana-7.12.1-linux-x86_64.tar.gz
cd kibana-7.12.1-linux-x86_64/
vim config/kibana.yml
# Modify the following:
server.port: 5601
server.host: "0.0.0.0"
server.name: "zmzhou-132-elk"
elasticsearch.hosts: ["http://localhost:9200"]
kibana.index: ".kibana"
i18n.locale: "zh-CN"
# Start in the background
nohup ./bin/kibana &
- Once it has started successfully, open:
http://192.168.163.132:5601/
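If the page does not load, two quick checks on the server (a sketch; /api/status is Kibana's standard status endpoint):
ss -lntp | grep 5601                      # is Kibana listening?
curl -s http://localhost:5601/api/status  # does the status API respond?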
Spring Boot + Logback: ship logs to Logstash
- Add the logstash-logback-encoder dependency to pom.xml
<!-- https://mvnrepository.com/artifact/net.logstash.logback/logstash-logback-encoder -->
<dependency>
<groupId>net.logstash.logback</groupId>
<artifactId>logstash-logback-encoder</artifactId>
<version>6.6</version>
</dependency>
- Modify application.yml and add the following configuration:
logstash:
address: 192.168.163.132:4567
- Modify the logback-spring.xml configuration:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
<!-- Logstash address, read from application.yml -->
<springProperty scope="context" name="LOGSTASH_ADDRESS" source="logstash.address"/>
<springProperty scope="context" name="APPLICATION_NAME" source="spring.application.name"/>
<!-- File output location for the application logs -->
<property name="LOG_FILE" value="/opt/web-shell/logging"/>
<!-- Converter classes used for colored console logs -->
<conversionRule conversionWord="clr" converterClass="org.springframework.boot.logging.logback.ColorConverter"/>
<conversionRule conversionWord="wex"
converterClass="org.springframework.boot.logging.logback.WhitespaceThrowableProxyConverter"/>
<conversionRule conversionWord="wEx"
converterClass="org.springframework.boot.logging.logback.ExtendedWhitespaceThrowableProxyConverter"/>
<!-- Console log output pattern -->
<property name="CONSOLE_LOG_PATTERN"
value="%clr(%d{HH:mm:ss.SSS}){faint} %clr(${LOG_LEVEL_PATTERN:-%5p}) %clr(${PID:- }){magenta} %clr(---){faint} %clr([%15.15t]){faint} %clr(%-40.40logger{39}){cyan} %clr(:){faint} %m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}"/>
<!-- Console appender -->
<appender name="console" class="ch.qos.logback.core.ConsoleAppender">
<filter class="ch.qos.logback.classic.filter.ThresholdFilter">
<level>INFO</level>
</filter>
<!-- Log output encoding -->
<encoder>
<pattern>${CONSOLE_LOG_PATTERN}</pattern>
<charset>utf8</charset>
</encoder>
</appender>
<!-- File appender -->
<appender name="fileAppender" class="ch.qos.logback.core.rolling.RollingFileAppender">
<!-- Prudent mode: whether multiple JVMs may safely write to the same log file -->
<Prudent>false</Prudent>
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<FileNamePattern>
${LOG_FILE}.%d{yyyy-MM-dd}.log
</FileNamePattern>
<!-- Keep at most 10 days of log history -->
<maxHistory>10</maxHistory>
</rollingPolicy>
<layout class="ch.qos.logback.classic.PatternLayout">
<Pattern>
%d{yyyy-MM-dd HH:mm:ss} %-5level logger{39} -%msg%n
</Pattern>
</layout>
</appender>
<!-- Appender that ships logs to Logstash -->
<appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
<!-- Reachable Logstash log collection address and port -->
<destination>${LOGSTASH_ADDRESS}</destination>
<encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
<providers>
<timestamp>
<timeZone>Asia/Shanghai</timeZone>
</timestamp>
<pattern>
<pattern>
{
"app": "${APPLICATION_NAME}",
"level": "%level",
"thread": "%thread",
"logger": "%logger{50} %M %L ",
"message": "%msg"
}
</pattern>
</pattern>
<stackTrace>
<throwableConverter class="net.logstash.logback.stacktrace.ShortenedThrowableConverter">
<maxDepthPerThrowable>100</maxDepthPerThrowable>
<rootCauseFirst>true</rootCauseFirst>
<inlineHash>true</inlineHash>
</throwableConverter>
</stackTrace>
</providers>
</encoder>
</appender>
<root level="INFO">
<appender-ref ref="console"/>
<appender-ref ref="fileAppender"/>
<appender-ref ref="LOGSTASH"/>
</root>
</configuration>
Spring Boot + Log4j2: ship logs to Logstash asynchronously
- Modify log4j2.xml (a sketch for making the loggers fully asynchronous follows the file)
<?xml version="1.0" encoding="UTF-8"?>
<!-- Log levels in order of priority: OFF > FATAL > ERROR > WARN > INFO > DEBUG > TRACE > ALL -->
<!-- The status attribute on Configuration controls Log4j2's own internal logging; it is optional. Set it to trace to see detailed Log4j2 internal output. -->
<!-- monitorInterval: Log4j2 can detect changes to this file and reconfigure itself automatically; the value is the check interval in seconds -->
<configuration status="warn" name="web-shell" monitorInterval="300">
<properties>
<property name="LOG_HOME">/opt/web-shell/logs</property>
<property name="maxHistory">7</property>
<!-- Log output pattern -->
<property name="pattern">%d{yyyy-MM-dd HH:mm:ss z} [%thread] %-5level %class{36} [%M:%L] - %msg%xEx%n</property>
<property name="console_pattern">%d{HH:mm:ss.SSS} [%thread] %-5level %class{36} [%M:%L] - %msg%xEx%n</property>
<property name="logstash_pattern">{"app": "web-shell", "level": "%level", "message": "%thread %M %L - %msg%xEx%n"}</property>
</properties>
<!-- Define all appenders first -->
<appenders>
<!-- Console output configuration -->
<Console name="Console" target="SYSTEM_OUT">
<!-- The console only outputs messages at this level or above (onMatch); everything else is rejected (onMismatch) -->
<ThresholdFilter level="trace" onMatch="ACCEPT" onMismatch="DENY"/>
<PatternLayout pattern="${console_pattern}"/>
</Console>
<!-- This file captures everything and is cleared on every run (append=false), which is handy for ad-hoc testing -->
<!-- append=true appends messages to the file, false overwrites it; the default is true -->
<File name="log" fileName="${LOG_HOME}/log4j2.log" append="false">
<PatternLayout pattern="${console_pattern}"/>
<ThresholdFilter level="warn" onMatch="ACCEPT" onMismatch="DENY"/>
</File>
<!-- All error messages; whenever the file exceeds the configured size, it is rolled into a year-month folder and compressed as an archive -->
<RollingFile name="errorRollingFile" fileName="${LOG_HOME}/error.log"
filePattern="${LOG_HOME}/$${date:yyyy-MM}/%d{yyyy-MM-dd}-error-%i.log.gz">
<PatternLayout pattern="${pattern}"/>
<!-- Only messages at this level or above are accepted (onMatch); everything else is rejected (onMismatch) -->
<ThresholdFilter level="error" onMatch="ACCEPT" onMismatch="DENY"/>
<Policies>
<TimeBasedTriggeringPolicy/>
<SizeBasedTriggeringPolicy size="100 MB"/>
</Policies>
<DefaultRolloverStrategy max="${maxHistory}"/>
</RollingFile>
<!-- All warn messages; whenever the file exceeds the configured size, it is rolled into a year-month folder and compressed as an archive -->
<RollingFile name="warnRollingFile" fileName="${LOG_HOME}/warn.log"
filePattern="${LOG_HOME}/$${date:yyyy-MM}/%d{yyyy-MM-dd}-warn-%i.log.gz">
<PatternLayout pattern="${pattern}"/>
<!-- Only messages at this level or above are accepted (onMatch); everything else is rejected (onMismatch) -->
<ThresholdFilter level="warn" onMatch="ACCEPT" onMismatch="DENY"/>
<Policies>
<TimeBasedTriggeringPolicy/>
<SizeBasedTriggeringPolicy size="100 MB"/>
</Policies>
<DefaultRolloverStrategy max="${maxHistory}"/>
</RollingFile>
<!-- All info messages; whenever the file exceeds the configured size, it is rolled into a year-month folder and compressed as an archive -->
<RollingFile name="infoRollingFile" fileName="${LOG_HOME}/info.log"
filePattern="${LOG_HOME}/$${date:yyyy-MM}/%d{yyyy-MM-dd}-info-%i.log.gz">
<PatternLayout pattern="${pattern}"/>
<ThresholdFilter level="info" onMatch="ACCEPT" onMismatch="DENY"/>
<Policies>
<TimeBasedTriggeringPolicy/>
<SizeBasedTriggeringPolicy size="100 MB"/>
</Policies>
<DefaultRolloverStrategy max="${maxHistory}"/>
</RollingFile>
<!-- Socket connection to Logstash -->
<Socket name="logstash" host="192.168.163.132" port="4567" protocol="TCP">
<PatternLayout pattern="${logstash_pattern}" />
</Socket>
</appenders>
<!-- Then define the loggers; an appender only takes effect once it is referenced by a logger -->
<loggers>
<!-- Filter out noisy DEBUG output from Spring and Hibernate -->
<logger name="org.hibernate" level="INFO"/>
<logger name="org.springframework" level="INFO"/>
<!-- Send these logs to Logstash -->
<logger name="com.github.zmzhoustar" level="info" includeLocation="false" >
<appender-ref ref="logstash" />
</logger>
<root level="INFO">
<appender-ref ref="Console"/>
<appender-ref ref="log"/>
<appender-ref ref="infoRollingFile"/>
<appender-ref ref="warnRollingFile"/>
<appender-ref ref="errorRollingFile"/>
</root>
</loggers>
</configuration>
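The configuration above uses ordinary loggers; to make logging fully asynchronous, as the section title suggests, one option is Log4j2's all-async mode enabled through a JVM system property. A minimal sketch, assuming the com.lmax:disruptor dependency is on the classpath; the jar name is illustrative only:
# every logger becomes asynchronous via the AsyncLoggerContextSelector
java -DLog4jContextSelector=org.apache.logging.log4j.core.async.AsyncLoggerContextSelector \
     -jar web-shell.jar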
- Project repository: https://gitee.com/zmzhou-star/web-shell
To test, start the Spring Boot project and open http://192.168.163.132:5601/
- Create an index pattern; the index rule configured in Logstash earlier starts with elk1
- Filter by our application name (a command-line query sketch follows this list)
- At this point our ELK environment is up and running, though there is plenty more to unlock, such as Beats
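To confirm from the command line that application logs are reaching Elasticsearch, a query sketch (it assumes spring.application.name is web-shell; substitute your own value):
# URI search on the app field written by the logback/log4j2 encoders
curl 'http://192.168.163.132:9200/elk1-*/_search?q=app:web-shell&size=3&pretty'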
JVM Support Matrix https://www.elastic.co/cn/support/matrix#matrix_jvm
Firewall commands
# Start:
systemctl start firewalld
# Check status:
systemctl status firewalld
firewall-cmd --state
# Stop:
systemctl stop firewalld
# Disable (do not start at boot):
systemctl disable firewalld
# List all open ports
firewall-cmd --zone=public --list-ports
# Open a port
firewall-cmd --zone=public --add-port=80/tcp --permanent
firewall-cmd --zone=public --add-port=4567/tcp --permanent
firewall-cmd --zone=public --add-port=5601/tcp --permanent
# Remove a port
firewall-cmd --zone=public --remove-port=80/tcp --permanent
# Reload the firewall rules
firewall-cmd --reload