Logging Conventions
【Mandatory】Application code must not call the API of a logging implementation (Log4j, Logback) directly; it should depend on the API of a logging facade such as SLF4J instead. Using a facade-pattern logging framework eases maintenance and keeps the logging approach consistent across classes.
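A minimal sketch of the facade usage (the class name TradeService is illustrative):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class TradeService {
    // The code depends only on the SLF4J facade; Log4j2/Logback is bound at deployment time.
    private static final Logger logger = LoggerFactory.getLogger(TradeService.class);

    public void process(String id, String symbol) {
        logger.debug("Processing trade with id : {} and symbol : {}", id, symbol);
    }
}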
【Mandatory】Log files must be kept for at least 15 days, because some exceptions recur with a weekly frequency. The current day's log is saved as "{appName}.log" in the /{unified directory}/logs/{appName} directory; rolled-over log files carry a date in yyyy-MM-dd format in their names.
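A logback rolling-policy sketch of this convention (assuming /apps/logs as the unified directory, consistent with the filebeat paths later in this document, and an appName property defined elsewhere in the configuration):

<appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <!-- Current day's log: {appName}.log -->
    <file>/apps/logs/${appName}/${appName}.log</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
        <!-- Past log file names carry a yyyy-MM-dd date -->
        <fileNamePattern>/apps/logs/${appName}/${appName}.%d{yyyy-MM-dd}.log</fileNamePattern>
        <!-- Keep at least 15 days of history -->
        <maxHistory>15</maxHistory>
    </rollingPolicy>
    <encoder>
        <!-- Use the pattern from the Format Definition section below -->
        <pattern>%m%n</pattern>
    </encoder>
</appender>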
【Mandatory】In log statements, join string variables with placeholders rather than concatenation. Note: string concatenation compiles into StringBuilder append() calls, which carries a performance cost; a placeholder is a mere substitution performed only when the message is actually logged, which effectively improves performance.
Positive example: logger.debug("Processing trade with id : {} and symbol : {}", id, symbol);
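For contrast, a sketch of the concatenation anti-pattern this rule forbids:

// Anti-pattern: the message string is built (via StringBuilder.append) even when DEBUG is disabled.
logger.debug("Processing trade with id : " + id + " and symbol : " + symbol);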
【Mandatory】In production, System.out and System.err must not be used for output, and e.printStackTrace() must not be used to print exception stack traces.
【Mandatory】Exception logs should contain two kinds of information: the context at the point of failure and the exception stack trace. If the exception is not handled, rethrow it upward via the throws keyword.
Note: always pass the exception itself as the last argument e (java.lang.Throwable); logging only e.getMessage() is forbidden.
Positive example:
logger.error("inputParams: {}", 各類參數(shù)或者對象 toString(), e);
【Recommended】Log judiciously. Outputting debug logs in production is forbidden; output info logs selectively. If warn is used to record business behavior right after a release, pay close attention to the volume of log output so the server disk does not fill up, and remember to remove such observation logs promptly.
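In logback, the production side of this rule can be enforced with a sketch like the following (assuming the FILE appender from the earlier sketch; raising the root level to INFO suppresses all debug output):

<root level="INFO">
    <appender-ref ref="FILE"/>
</root>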
Log Format
A unified log format is not only reader-friendly but also makes downstream processing by log collection and other operations platforms easier. The format must therefore be agreed upon across the system.
A complete log entry is the concatenation of public information captured automatically by the system (defined by the log template) and the log message written by the developer, i.e. one log entry = public information + log body.
Format Definition
The following log format configuration applies to both log4j2 and logback, as the pattern setting in the corresponding XML:
|%d{yyyy-MM-dd HH:mm:ss.SSS}|%-5level|%X{traceId}|%X{spanId}|${appName}|%t|%C|%M|%L|%m%n
Field-by-field meaning:
|datetime|log level|trace ID|span ID|application name|thread name|class name|method name|line number|log body
Here traceId and spanId are distributed-tracing parameters; they must be generated by a tracing framework and can be used together with a tracing system. appName is defined by the application itself.
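A minimal logback encoder sketch wiring in this pattern (assuming appName is supplied as a logback property; %X{...} reads traceId and spanId from the MDC, so the tracing framework must place them there):

<!-- e.g. <property name="appName" value="iot.spaceFence"/> defined elsewhere -->
<encoder>
    <pattern>|%d{yyyy-MM-dd HH:mm:ss.SSS}|%-5level|%X{traceId}|%X{spanId}|${appName}|%t|%C|%M|%L|%m%n</pattern>
</encoder>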
Log Samples
INFO (DEBUG, WARN, etc.) log:
|2023-01-30 14:15:26.220|INFO |trace0|span1|iot.spaceFence|http-nio-8080-exec-1|com.iot.spaceFence.controller.IotAlarmFenceController|findOne|31|normal log: 0e178fbb5907676c38959eb7bf1c5f2b
ERROR log:
|2023-01-30 14:15:26.221|ERROR|trace1|span1|iot.spaceFence|http-nio-8080-exec-1|com.iot.spaceFence.controller.IotAlarmFenceController|findOne|35|exception log: 0e178fbb5907676c38959eb7bf1c5f2b
java.lang.RuntimeException: test exception
	at com.iot.spaceFence.controller.IotAlarmFenceController.findOne(IotAlarmFenceController.java:33) [classes/:?]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_351]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_351]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_351]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_351]
ELK Configuration
filebeat
Run one filebeat instance per server; it collects the logs and ships them to logstash:
Note that tags identifies the owning project and ultimately maps to different Elasticsearch indices.
#=========================== Agent name =============================
name: 100.127.6.231

#=========================== Filebeat inputs =============================
filebeat.inputs:
- type: log
  # Change to true to enable this input configuration.
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /apps/logs/iot/*/app.log
  # The regexp pattern that has to be matched. This pattern matches all lines starting with |
  multiline.pattern: ^\|
  # Defines if the pattern set under pattern should be negated or not. Default is false.
  multiline.negate: true
  # Match can be set to "after" or "before". It is used to define if lines should be appended to a pattern
  # that was (not) matched before or after, or as long as a pattern is not matched based on negate.
  # Note: "after" is the equivalent of "previous" and "before" is the equivalent of "next" in Logstash.
  multiline.match: after
  tags: ['iot']
- type: log
  enabled: true
  paths:
    - /apps/logs/lms/lms-*.log
  multiline.pattern: ^\|
  multiline.negate: true
  multiline.match: after
  tags: ['lms']

#============================= Filebeat modules ===============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml
  # Set to true to enable config reloading
  reload.enabled: false

#==================== Elasticsearch template setting ==========================
setup.template.settings:
  index.number_of_shards: 1

#============================== Dashboards =====================================
setup.kibana:

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["192.168.3.39:5044","192.168.3.214:5044"]

#================================ Processors =====================================
# Configure processors to enhance or manipulate events generated by the beat.
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
logstash
# Sample Logstash configuration for creating a simple
# Beats -> Logstash -> Elasticsearch pipeline.
input {
  beats {
    port => 5044
  }
}

filter {
  if "iot" in [tags] {
    mutate { add_field => { "[@metadata][target_index]" => "iot-%{+YYYY.MM.dd}" } }
  } else if "lms" in [tags] {
    mutate { add_field => { "[@metadata][target_index]" => "lms-%{+YYYY.MM.dd}" } }
  } else {
    mutate { add_field => { "[@metadata][target_index]" => "default-%{+YYYY.MM.dd}" } }
  }
  grok {
    match => { "message" => "\|%{DATA:datetime}\|%{NOTSPACE:level}\s?\|%{DATA:traceId}\|%{DATA:spanId}\|%{DATA:app}\|%{NOTSPACE:thread}\|%{NOTSPACE:class}\|%{NOTSPACE:method}\|%{INT:line}\|(?<content>(.|\n)*)" }
    # Remove some default fields that are of little use here
    remove_field => ["[agent][ephemeral_id]","[agent][id]","[agent][type]","[agent][version]","[host][mac]","[host][ip]","[host][id]","[host][architecture]","[host][containerized]","[host][os][codename]","[host][os][family]","[host][os][name]","[host][os][kernel]","[ecs][version]"]
  }
}

output {
  elasticsearch {
    hosts => ["http://192.168.0.184:9200","http://192.168.0.182:9200","http://192.168.0.59:9200"]
    index => "%{[@metadata][target_index]}"
    user => "elastic"
    password => "*********"
  }
}
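For reference, a sketch of the fields this grok filter extracts from the INFO sample above (illustrative; real events also carry Beats and host metadata):

{
  "datetime": "2023-01-30 14:15:26.220",
  "level": "INFO",
  "traceId": "trace0",
  "spanId": "span1",
  "app": "iot.spaceFence",
  "thread": "http-nio-8080-exec-1",
  "class": "com.iot.spaceFence.controller.IotAlarmFenceController",
  "method": "findOne",
  "line": "31",
  "content": "normal log: 0e178fbb5907676c38959eb7bf1c5f2b"
}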