Symptoms
A job using a 10-second tumbling window ran smoothly for a while and then began restarting frequently. The following line appears in the TaskManager log:
2019-03-17 16:05:28,854 INFO org.apache.flink.yarn.YarnTaskExecutorRunner - RECEIVED SIGNAL 15: SIGTERM. Shutting down as requested.
Locating the Cause
- First, YarnTaskExecutorRunner received SIGTERM. Since the job is deployed on Yarn, we can be fairly sure that Yarn killed the process at the OS level for some reason.
- From the code, this log line leads to Flink's SignalHandler; the snippet and the call-hierarchy figure below show how the Handler is constructed. The main functions of both YarnSessionClusterEntrypoint and YarnTaskExecutorRunner register it, and it prints this log line whenever the OS delivers a "TERM", "HUP" or "INT" signal. A minimal registration sketch follows the Handler code.
private static class Handler implements sun.misc.SignalHandler {

    private final Logger LOG;
    private final sun.misc.SignalHandler prevHandler;

    Handler(String name, Logger LOG) {
        this.LOG = LOG;
        prevHandler = Signal.handle(new Signal(name), this);
    }

    /**
     * Handle an incoming signal.
     *
     * @param signal The incoming signal
     */
    @Override
    public void handle(Signal signal) {
        LOG.info("RECEIVED SIGNAL {}: SIG{}. Shutting down as requested.",
            signal.getNumber(), signal.getName());
        prevHandler.handle(signal);
    }
}
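For context, a minimal sketch of how an entrypoint registers this handler; it assumes the static register method of org.apache.flink.runtime.util.SignalHandler, and the surrounding class name is made up:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class EntrypointSketch {

    private static final Logger LOG = LoggerFactory.getLogger(EntrypointSketch.class);

    public static void main(String[] args) {
        // Installs Handler instances for the "TERM", "HUP" and "INT" signals,
        // so that an external kill shows up in the TaskManager log as above.
        org.apache.flink.runtime.util.SignalHandler.register(LOG);

        // ... start the actual TaskExecutor / cluster entrypoint here ...
    }
}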
(Figure: call hierarchy of the Handler constructor)
- Next, check the Yarn NodeManager log on the machine hosting this TaskManager. Grepping for this container, the last relevant line looks like the following: the TaskManager exceeded its physical memory limit. Yet according to the GC logs, even full GCs were rare.
2019-03-19 16:48:10,647 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 10265 for container-id container_1541225469893_4985_01_000005: 3.4 GB of 4 GB physical memory used; 5.4 GB of 8.4 GB virtual memory used
- Because the taskmanager.memory.off-heap=true option is enabled, Flink itself uses some off-heap memory, and RocksDB additionally allocates memory directly through malloc (see the probe sketch below: malloc'ed memory is invisible to the JVM's own buffer accounting).
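As a quick cross-check, the JVM can report how much direct (off-heap) buffer memory it has handed out; anything RocksDB allocates via malloc will not show up here. A minimal sketch using standard JDK MXBeans (the class name is made up):

import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;

public class DirectMemoryProbe {
    public static void main(String[] args) {
        // Lists the JVM's buffer pools ("direct" and "mapped") with their current usage.
        // RocksDB's native allocations bypass these pools entirely.
        for (BufferPoolMXBean pool : ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            System.out.printf("%s: count=%d, used=%d bytes, capacity=%d bytes%n",
                    pool.getName(), pool.getCount(), pool.getMemoryUsed(), pool.getTotalCapacity());
        }
    }
}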
Off-heap Memory Investigation (a big detour; skip to the end if you just want the direct cause)
- With the off-heap option enabled, Flink's MemoryManager and NetworkBufferPool create DirectByteBuffers via ByteBuffer.allocateDirect to use off-heap memory. A heap dump, however, showed only a very limited number of DirectByteBuffers. Since the default taskmanager.memory.segment-size of 32 KB was in use, they accounted for just a few hundred megabytes of off-heap memory (see the quick arithmetic after the table), while I had reserved more than 2 GB of off-heap memory, so this was clearly not the cause. At this point the investigation hit a dead end.
Class Name                | Objects | Shallow Heap | Retained Heap
------------------------------------------------------------------
java.nio.DirectByteBuffer |   9,470 |      606,080 |    >= 606,096
------------------------------------------------------------------
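The back-of-the-envelope check behind the statement above, assuming every DirectByteBuffer in the dump is a full 32 KB memory segment (an overestimate, since some buffers are smaller):

public class SegmentMath {
    public static void main(String[] args) {
        long buffers = 9_470L;             // DirectByteBuffer count from the heap dump
        long segmentSize = 32 * 1024L;     // default taskmanager.memory.segment-size
        long bytes = buffers * segmentSize;
        // ~296 MB, nowhere near the 2+ GB of reserved off-heap memory
        System.out.println(bytes / (1024 * 1024) + " MB");
    }
}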
- Another suspect was RocksDB, but it is hard to tell from the outside how much memory it actually occupies (a probe sketch follows this item).
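If you can run (a reproduction of) the workload against a RocksDB instance you control, RocksDB can report its own memory usage through properties; inside a Flink job the DB handle is internal to the state backend, so treat this as a standalone sketch (path and class name are made up):

import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

public class RocksDbMemoryProbe {
    public static void main(String[] args) throws RocksDBException {
        RocksDB.loadLibrary();
        try (Options options = new Options().setCreateIfMissing(true);
             RocksDB db = RocksDB.open(options, "/tmp/rocksdb-probe")) {
            // Asks RocksDB how much memory it thinks it is using in the main areas.
            for (String prop : new String[] {
                    "rocksdb.cur-size-all-mem-tables",      // memtables
                    "rocksdb.block-cache-usage",            // block cache
                    "rocksdb.estimate-table-readers-mem"})  // SST indexes/filters held in memory
            {
                System.out.println(prop + " = " + db.getProperty(prop));
            }
        }
    }
}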
- With the help of a more experienced colleague, jemalloc was installed on the performance environment to replace the default malloc (see the jemalloc installation documentation for how to set it up).
- The following parameters were then added to flink-conf.yaml to inject them into the containers' environment variables and make jemalloc take effect. This makes jemalloc dump a memory profile periodically for analysis. The profiles showed that the biggest malloc consumer was RocksDB's rocksdb::UncompressBlockContentsForCompressionType, which eventually grew to 47% of the 2.15 GB of tracked memory, at which point the TaskManager was killed by Yarn.
containerized.master.env.LD_PRELOAD: "/opt/jemalloc/lib/libjemalloc.so.2"
containerized.master.env.MALLOC_CONF: "prof:true,lg_prof_interval:25,lg_prof_sample:17"
- Use /opt/jemalloc/bin/jeprof --show_bytes /opt/java/bin/java jeprof.xxx to analyze a dump file, and the top command inside jeprof to list the methods that call malloc the most.
- Comparing an early dump with a later one:
Using local file /opt/java/bin/java.
Using local file jeprof.18091.9812.i9812.heap.
Welcome to jeprof! For help, type 'help'.
(jeprof) top
Total: 1107642087 B
 884271340  79.8%  79.8%  884271340  79.8% os::malloc@921040
 150994944  13.6%  93.5%  150994944  13.6% rocksdb::Arena::AllocateNewBlock
  51761789   4.7%  98.1%   51761789   4.7% rocksdb::UncompressBlockContentsForCompressionType
   5242880   0.5%  98.6%    5242880   0.5% init
   5184828   0.5%  99.1%    5184828   0.5% updatewindow
   4204536   0.4%  99.5%    4204536   0.4% readCEN
   1643018   0.1%  99.6%    1643018   0.1% std::basic_string::_Rep::_S_create
   1346886   0.1%  99.7%    1346886   0.1% inflateInit2_
    917840   0.1%  99.8%    1181009   0.1% rocksdb::LRUCacheShard::Insert
    393336   0.0%  99.8%   52155125   4.7% rocksdb::BlockBasedTable::GetTableProperties
(jeprof) top
Total: 2259309361 B
1062208712  47.0%  47.0% 1062208712  47.0% rocksdb::UncompressBlockContentsForCompressionType
 884120659  39.1%  86.1%  884120659  39.1% os::malloc@921040
 285348930  12.6%  98.8%  285348930  12.6% rocksdb::Arena::AllocateNewBlock
   5451379   0.2%  99.0%    5451379   0.2% std::basic_string::_Rep::_S_create
   5242880   0.2%  99.3%    5242880   0.2% init
   5036690   0.2%  99.5%    5036690   0.2% updatewindow
   4204536   0.2%  99.7%    4204536   0.2% readCEN
   2621559   0.1%  99.8%    2621559   0.1% rocksdb::WritableFileWriter::Append
   1346886   0.1%  99.8%    1346886   0.1% inflateInit2_
    524472   0.0%  99.9%     788155   0.0% rocksdb::LRUCacheShard::Insert
(Full call graph exported as jeprof.pdf)
- Searching for this method turned up a GitHub issue about it. It is not actually a memory leak: under the default configuration, RocksDB keeps an index in memory for every flushed SST file, and these indexes consume more and more memory as the number of files grows. Capping the memory the indexes may use (a hedged configuration sketch follows this bullet) instead forces frequent trips to the SST files for metadata during lookups, adding a lot of I/O (I'm not deeply familiar with this part, so some details may be off). That still leaves the question: why did the RocksDB files keep growing in the first place?
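For reference, a hedged sketch of the cap mentioned above, using Flink's RocksDB OptionsFactory (API names as of the Flink 1.7/1.8 era; the class name and cache size are illustrative, and as noted this trades memory for extra I/O). It would be installed with something like rocksDbStateBackend.setOptions(new CappedIndexOptionsFactory()).

import org.apache.flink.contrib.streaming.state.OptionsFactory;
import org.rocksdb.BlockBasedTableConfig;
import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.DBOptions;

// Sketch only: lets index/filter blocks live in the block cache instead of being
// pinned in memory for every SST file, so their footprint is bounded by the cache size.
public class CappedIndexOptionsFactory implements OptionsFactory {

    @Override
    public DBOptions createDBOptions(DBOptions currentOptions) {
        return currentOptions;
    }

    @Override
    public ColumnFamilyOptions createColumnOptions(ColumnFamilyOptions currentOptions) {
        BlockBasedTableConfig tableConfig = new BlockBasedTableConfig()
                .setBlockCacheSize(256 * 1024 * 1024L)    // 256 MB block cache (illustrative value)
                .setCacheIndexAndFilterBlocks(true);      // indexes/filters now compete for that cache
        return currentOptions.setTableFormatConfig(tableConfig);
    }
}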
Final Root Cause (after the detour)
- That the RocksDB files keep growing can be seen from the checkpoint sizes: after turning off incremental checkpoints, every checkpoint was larger than the previous one. But the user code is just a 10-second tumbling window, which should never behave like this. (A small sketch of switching off incremental checkpoints follows this item.)
- I then added a few lines of logging to Flink's window operator, prefixed with ClarkTest, as shown below:
@Override
public void onProcessingTime(InternalTimer<K, W> timer) throws Exception {
    triggerContext.key = timer.getKey();
    triggerContext.window = timer.getNamespace();

    MergingWindowSet<W> mergingWindows;
    if (windowAssigner instanceof MergingWindowAssigner) {
        mergingWindows = getMergingWindowSet();
        W stateWindow = mergingWindows.getStateWindow(triggerContext.window);
        if (stateWindow == null) {
            // Timer firing for non-existent window, this can only happen if a
            // trigger did not clean up timers. We have already cleared the merging
            // window and therefore the Trigger state, however, so nothing to do.
            return;
        } else {
            windowState.setCurrentNamespace(stateWindow);
        }
    } else {
        windowState.setCurrentNamespace(triggerContext.window);
        mergingWindows = null;
    }

    TriggerResult triggerResult = triggerContext.onProcessingTime(timer.getTimestamp());
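    // Note: 'random' is a java.util.Random field added to the operator for this
    // debugging session; it samples roughly 1 in 1000 timer firings so the added
    // ClarkTest logging does not flood the TaskManager log.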
    int randomInt = random.nextInt(1000);
    if (triggerResult.isFire()) {
        ACC contents = windowState.get();
        if (randomInt == 1) {
            LOG.info("ClarkTest: Window state namespace: " + triggerContext.window + " and key " + triggerContext.key);
            LOG.info("ClarkTest: Window state value is going to fire is null ? " + (windowState.get() == null));
        }
        if (contents != null) {
            emitWindowContents(triggerContext.window, contents);
        }
    }

    if (triggerResult.isPurge()) {
        if (randomInt == 1) {
            LOG.info("ClarkTest: Window state get purged. ");
        }
        windowState.clear();
    }

    if (!windowAssigner.isEventTime() && isCleanupTime(triggerContext.window, timer.getTimestamp())) {
        windowState.setCurrentNamespace(triggerContext.window);
        if (randomInt == 1) {
            LOG.info("ClarkTest: Window State namespace before cleaning: " + triggerContext.window + " and key " + triggerContext.key);
            LOG.info("ClarkTest: Window state value before clear is null ? " + (windowState.get() == null));
        }
        clearAllState(triggerContext.window, windowState, mergingWindows);
        if (randomInt == 1) {
            LOG.info("ClarkTest: Window state value after clear is null ? " + (windowState.get() == null));
        }
    }

    if (mergingWindows != null) {
        // need to make sure to update the merging state in state
        mergingWindows.persist();
    }
}
- The logs showed that the window state was never null right before emitWindowContents, but by cleanup time it had already become null, so something was going wrong between those two points. Printing the key revealed that the key differed between the two log lines, so the final conclusion was that the user's process function was modifying the value of the keyBy key, which made it impossible to clean up the window state.
- The takeaway: the key used in keyBy must be immutable, otherwise state may never get cleaned up. More broadly, in distributed systems, making heavy use of immutable values is one of the best ways to avoid this class of risk. The sketch below illustrates the anti-pattern.
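To make that concrete, here is a hypothetical sketch (the actual user code is not shown in this post): if the KeySelector hands back a mutable object, any later mutation means state written under the old key value can no longer be addressed, so cleanup timers fire against a key that no longer owns the state.

import org.apache.flink.api.java.functions.KeySelector;

// All names here are made up for illustration.
public class MutableKeyExample {

    public static class Event {
        public String deviceId;   // the field used for keying
        public long payload;
    }

    // Risky: the key is the mutable Event object itself. Besides needing stable
    // hashCode/equals, any mutation of 'deviceId' downstream changes the key under
    // which window state and timers were registered.
    public static class EventAsKey implements KeySelector<Event, Event> {
        @Override
        public Event getKey(Event value) {
            return value;
        }
    }

    // Safer: extract an immutable value (a String) as the key and never touch the
    // source field afterwards.
    public static class DeviceIdAsKey implements KeySelector<Event, String> {
        @Override
        public String getKey(Event value) {
            return value.deviceId;
        }
    }
}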