Preface
The logging configuration described in this article is based on Flink 1.10.
Configuring log4j
Flink 1.10 uses Log4j as its default logging framework. The relevant configuration files are:
- `log4j-cli.properties`: used by the Flink command-line client (e.g. `flink run`)
- `log4j-yarn-session.properties`: used when the Flink command-line client starts a YARN session (`yarn-session.sh`)
- `log4j.properties`: JobManager / TaskManager logs (both standalone and YARN)
Default configuration
The default `log4j.properties` is as follows:
```
# This affects logging for both user code and Flink
log4j.rootLogger=INFO, file
# Uncomment this if you want to _only_ change Flink's logging
#log4j.logger.org.apache.flink=INFO
# The following lines keep the log level of common libraries/connectors on
# log level INFO. The root logger does not override this. You have to manually
# change the log levels here.
log4j.logger.akka=INFO
log4j.logger.org.apache.kafka=INFO
log4j.logger.org.apache.hadoop=INFO
log4j.logger.org.apache.zookeeper=INFO
# Log all infos in the given file
log4j.appender.file=org.apache.log4j.FileAppender
log4j.appender.file.file=${log.file}
log4j.appender.file.append=false
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n
# Suppress the irrelevant (wrong) warnings from the Netty channel handler
log4j.logger.org.apache.flink.shaded.akka.org.jboss.netty.channel.DefaultChannelPipeline=ERROR, file
```
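To make the `ConversionPattern` above concrete, here is a small Python sketch (illustration only, not Flink code) that builds a roughly equivalent formatter. Python's `logging` has no counterpart to log4j's `%x` NDC, so it is omitted, and the logger name and message are made-up examples:

```python
import logging

# Rough Python equivalent of the log4j pattern used above:
#   %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n
# %x (the NDC) has no direct counterpart in Python logging and is omitted.
fmt = logging.Formatter(
    "%(asctime)s,%(msecs)03d %(levelname)-5s %(name)-60s - %(message)s",
    datefmt="%Y-%m-%d %H:%M:%S",
)

# A sample record, standing in for a line a TaskManager might emit.
record = logging.LogRecord(
    name="org.apache.flink.runtime.taskexecutor.TaskExecutor",
    level=logging.INFO,
    pathname="TaskExecutor.java",
    lineno=0,
    msg="TaskExecutor started.",
    args=(),
    exc_info=None,
)
print(fmt.format(record))
```

The `%-5p` / `%-60c` specifiers left-pad the level and logger name to fixed widths, which keeps columns aligned when scanning a log file by eye.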
Rolling configuration
The default configuration writes JobManager and TaskManager logs to separate files, and each file grows without bound. In production it is recommended to roll the log files by size. The configuration is as follows:
```
# This affects logging for both user code and Flink
log4j.rootLogger=INFO, R
# Uncomment this if you want to _only_ change Flink's logging
#log4j.logger.org.apache.flink=INFO
# The following lines keep the log level of common libraries/connectors on
# log level INFO. The root logger does not override this. You have to manually
# change the log levels here.
log4j.logger.akka=INFO
log4j.logger.org.apache.kafka=INFO
log4j.logger.org.apache.hadoop=INFO
log4j.logger.org.apache.zookeeper=INFO
log4j.appender.R=org.apache.log4j.RollingFileAppender
log4j.appender.R.File=${log.file}
log4j.appender.R.MaxFileSize=256MB
log4j.appender.R.Append=true
log4j.appender.R.MaxBackupIndex=10
log4j.appender.R.layout=org.apache.log4j.PatternLayout
log4j.appender.R.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS} %t %-5p %-60c %x - %m%n
# Suppress the irrelevant (wrong) warnings from the Netty channel handler
log4j.logger.org.apache.flink.shaded.akka.org.jboss.netty.channel.DefaultChannelPipeline=ERROR, R
```
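A quick back-of-the-envelope check of what this bounds disk usage to: `RollingFileAppender` keeps one active file plus up to `MaxBackupIndex` rotated backups, each capped at `MaxFileSize` (values below copied from the config above):

```python
# Worst-case disk usage of the RollingFileAppender configured above:
# one active log file plus MaxBackupIndex rotated backups,
# each capped at MaxFileSize.
max_file_size_mb = 256   # log4j.appender.R.MaxFileSize=256MB
max_backup_index = 10    # log4j.appender.R.MaxBackupIndex=10

worst_case_mb = max_file_size_mb * (max_backup_index + 1)
print(f"Up to {worst_case_mb} MB (~{worst_case_mb / 1024:.2f} GB) per process")
# → Up to 2816 MB (~2.75 GB) per process
```

Note this cap applies per JobManager / TaskManager process, so a large cluster still needs the sum budgeted across all containers.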
Kafka configuration
Sometimes you need to ship logs to Kafka for monitoring and alerting, or collect them centrally into an ELK stack for analysis. In that case you can use KafkaLog4jAppender to send logs to Kafka. The configuration is as follows:
```
# This affects logging for both user code and Flink
log4j.rootLogger=INFO, kafka
# Uncomment this if you want to _only_ change Flink's logging
#log4j.logger.org.apache.flink=INFO
# The following lines keep the log level of common libraries/connectors on
# log level INFO. The root logger does not override this. You have to manually
# change the log levels here.
# !!! These loggers must also list the kafka appender, otherwise logging gets stuck in the Kafka send !!!
log4j.logger.akka=INFO, kafka
log4j.logger.org.apache.kafka=INFO, kafka
log4j.logger.org.apache.hadoop=INFO, kafka
log4j.logger.org.apache.zookeeper=INFO, kafka
# log send to kafka
log4j.appender.kafka=org.apache.kafka.log4jappender.KafkaLog4jAppender
log4j.appender.kafka.brokerList=localhost:9092
log4j.appender.kafka.topic=flink_logs
log4j.appender.kafka.compressionType=none
log4j.appender.kafka.requiredNumAcks=0
log4j.appender.kafka.syncSend=false
log4j.appender.kafka.layout=org.apache.log4j.PatternLayout
log4j.appender.kafka.layout.ConversionPattern=[frex] [%d{yyyy-MM-dd HH:mm:ss,SSS}] [%p] %c{1}:%L %x - %m%n
log4j.appender.kafka.level=INFO
# Suppress the irrelevant (wrong) warnings from the Netty channel handler
log4j.logger.org.apache.flink.shaded.akka.org.jboss.netty.channel.DefaultChannelPipeline=ERROR, kafka
```
You also need to place the kafka-log4j-appender jar under ${FLINK_HOME}/lib.
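To see what a record on the `flink_logs` topic ends up looking like, the following sketch re-implements the `ConversionPattern` above in plain Python (illustration only, not log4j code; the logger name, line number, and message are hypothetical, and `%x` renders as empty because no NDC is set):

```python
from datetime import datetime

def render(logger: str, level: str, line_no: int, message: str, ts: datetime) -> str:
    """Mimic the pattern:
    [frex] [%d{yyyy-MM-dd HH:mm:ss,SSS}] [%p] %c{1}:%L %x - %m
    %c{1} keeps only the last dot-separated segment of the logger name;
    %x is rendered as empty here (no NDC set), leaving a double space.
    """
    short_name = logger.rsplit(".", 1)[-1]  # %c{1}
    stamp = ts.strftime("%Y-%m-%d %H:%M:%S,") + f"{ts.microsecond // 1000:03d}"
    return f"[frex] [{stamp}] [{level}] {short_name}:{line_no}  - {message}"

msg = render(
    "org.apache.flink.runtime.jobmaster.JobMaster",  # hypothetical logger name
    "INFO", 123, "Starting job.",
    datetime(2020, 3, 1, 12, 0, 0, 123000),
)
print(msg)
# → [frex] [2020-03-01 12:00:00,123] [INFO] JobMaster:123  - Starting job.
```

Since `syncSend=false` and `requiredNumAcks=0` are set above, the appender fires and forgets; that keeps logging off the hot path but means log lines can be lost if the broker is unreachable.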
Configuring logback
To be continued.