【Preface】Our department recently had to migrate the network segment of a production HBase cluster for data-security reasons. After the cluster was restarted, the following two errors kept it from recovering normally: (1) the HMaster node failed automatically; (2) some Regions went offline or got stuck in RIT. Because this is a live production cluster serving seventy to eighty business lines, we had to bring it back within the allotted maintenance window without losing any user data.
【HMaster Node Fails Automatically】
Symptom: the HA HMaster nodes started, but the Active HBase Master failed automatically after roughly 3~5 minutes. Since the cluster runs in HA mode, the Standby HBase Master was automatically promoted to Active; after about the same amount of time, that node failed as well.
The Master log shows the following error (Failed to become active master java.io.IOException: Timedout 300000ms waiting for namespace table to be assigned):
2018-12-23 01:12:56,617 FATAL [master2:16000.activeMasterManager] (org.apache.hadoop.hbase.master.HMaster$1.run:1650) - Failed to become active master
java.io.IOException: Timedout 300000ms waiting for namespace table to be assigned
at org.apache.hadoop.hbase.master.TableNamespaceManager.start(TableNamespaceManager.java:104)
at org.apache.hadoop.hbase.master.HMaster.initNamespace(HMaster.java:985)
at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:779)
at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:182)
at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1646)
at java.lang.Thread.run(Thread.java:745)
2018-12-23 01:12:56,620 FATAL [master2:16000.activeMasterManager] (org.apache.hadoop.hbase.master.HMaster.abort:2095) - Master server abort: loaded coprocessors are: []
2018-12-23 01:12:56,620 FATAL [master2:16000.activeMasterManager] (org.apache.hadoop.hbase.master.HMaster.abort:2098) - Unhandled exception. Starting shutdown.
java.io.IOException: Timedout 300000ms waiting for namespace table to be assigned
at org.apache.hadoop.hbase.master.TableNamespaceManager.start(TableNamespaceManager.java:104)
at org.apache.hadoop.hbase.master.HMaster.initNamespace(HMaster.java:985)
at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:779)
at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:182)
at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1646)
at java.lang.Thread.run(Thread.java:745)
Reading further down the log, the underlying error is:
2018-12-23 01:12:56,655 ERROR [MASTER_SERVER_OPERATIONS-master2:16000-3] (org.apache.hadoop.hbase.executor.EventHandler.handleException:226) - Caught throwable while processing event M_SERVER_SHUTDOWN
java.io.IOException: failed log splitting for hbase2.yun,16020,1543323308010, will retry
at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.resubmit(ServerShutdownHandler.java:356)
at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:220)
at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: error or interrupted while splitting logs in [hdfs://hxcs/apps/hbase/data/WALs/hbase2.yun,16020,1543323308010-splitting] Task = installed = 510 done = 79 error = 0
at org.apache.hadoop.hbase.master.SplitLogManager.splitLogDistributed(SplitLogManager.java:290)
at org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:391)
at org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:364)
at org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:286)
at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:213)
... 4 more
2018-12-23 01:12:56,656 WARN [MASTER_SERVER_OPERATIONS-master2:16000-0] (org.apache.hadoop.hbase.master.SplitLogManager.waitForSplittingCompletion:368) - Stopped while waiting for log splits to be completed
Fix: the HBase cluster was stopped by the maintenance team while a split operation was in progress (we confirmed the cluster was stopped cleanly), so on restart it could not recover the earlier WAL split files (pinpointing the true root cause will require a post-mortem test). Since the HMaster was aborting precisely because it could not recover the WAL split files, we first moved all WAL files out of /apps/hbase/data/WALs/ so the cluster could start normally. After restarting the HBase cluster, the HMaster came back up and the automatic failures stopped.
Similar case: https://community.hortonworks.com/questions/33140/hbase-master-fails-to-start.html
Note: removing the WAL files loses data that will have to be restored later, so be absolutely sure to back them up first.
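The backup-then-remove step can be sketched with plain HDFS shell commands. This is a sketch under two assumptions: the source path /apps/hbase/data/WALs matches the log output above, and the backup location /tmp/apps/hbase/data/WALs is the directory later replayed with WALPlayer.

```shell
# Back up the WAL directory first -- this copy is what WALPlayer
# will replay later to recover the lost edits.
hdfs dfs -mkdir -p /tmp/apps/hbase/data
hdfs dfs -cp /apps/hbase/data/WALs /tmp/apps/hbase/data/WALs

# Only after verifying the copy, clear the originals so the HMaster
# no longer tries (and fails) to split them on startup.
hdfs dfs -rm -r -skipTrash '/apps/hbase/data/WALs/*'
```

-skipTrash frees the space immediately; drop it if you prefer the extra safety net of the HDFS trash on top of the /tmp copy.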
【Some Regions Go Offline or Get Stuck in RIT】
Symptom: once the HMaster was healthy again, the cluster began recovering Region data. The Web UI showed a large number of Offline Regions and RIT (Region-In-Transition) entries.
The RegionServer logs repeatedly show errors like the following:
2018-12-23 02:53:59,912 INFO [CachedHtables-pool22-t1] (org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.resubmit:1169) - #92, table=STOCKDIARY_INDEX_2, attempt=118/350 failed=1ops, last exception: org.apache.hadoop.hbase.exceptions.RegionOpeningExce
ption: org.apache.hadoop.hbase.exceptions.RegionOpeningException: Region STOCKDIARY_INDEX_2,\x02\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00,1527595214520.356bf32cca660d985087a0041b70129d. is opening on datanode158,16020,1545498423482
at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2895)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:947)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:1991)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32213)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
at java.lang.Thread.run(Thread.java:745)
on datanode158,16020,1543323316468, tracking started null, retrying after=20152ms, replay=1ops
2018-12-23 02:54:02,420 INFO [hbase2.yun,16020,1545498428315-recovery-writer--pool6-t11] (org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.waitUntilDone:1566) - #92, waiting for 1 actions to finish
2018-12-23 02:54:02,925 INFO [CachedHtables-pool7-t3] (org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.resubmit:1169) - #40, table=STOCKDIARY_INDEX, attempt=126/350 failed=1ops, last exception: org.apache.hadoop.hbase.NotServingRegionException: org.
apache.hadoop.hbase.NotServingRegionException: Region STOCKDIARY_INDEX,\x09\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00,1527062480471.31c6c692a383e8f9190338b8ed58da24. is not online on hbase1.yun,16020,1545498434507
at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2898)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:947)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:1991)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32213)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
at java.lang.Thread.run(Thread.java:745)
on hbase1.yun,16020,1543323426969, tracking started null, retrying after=20015ms, replay=1ops
Fix: HDFS reported no missing blocks, yet every repair attempt with hbase hbck and related commands failed. Searching the HDP community site turned up a similar case (in an environment almost identical to ours) caused by a deadlock among the region-opening threads, which can be resolved by raising the thread count. After changing the configuration below and restarting the HBase cluster, everything returned to normal.
Similar case: https://community.hortonworks.com/questions/8757/phoenix-local-indexes.html
Configuration change:
<property>
<name>hbase.regionserver.executor.openregion.threads</name>
<value>100</value>
</property>
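After restarting with the larger open-region thread pool, you can watch the recovery drain from the command line instead of the Web UI. Both invocations below are standard HBase tooling; nothing here is specific to this cluster:

```shell
# Per-server region counts and any dead servers; rerun until the
# numbers stabilize and Offline Regions disappear from the UI.
echo "status 'simple'" | hbase shell

# hbck prints a consistency report; "0 inconsistencies detected"
# means no regions remain offline or stuck in transition.
hbase hbck
```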
【Recovering the Data from the WAL Files】
Because this HBase cluster stores live user data, the department has zero tolerance for data loss. The WAL files were moved aside while fixing the HMaster failures, so once the cluster was running normally again that data had to be restored.
Command: hbase org.apache.hadoop.hbase.mapreduce.WALPlayer [options] <wal inputdir> <tables> [<tableMappings>]
What it does: replays the WAL files into HFiles and bulkloads those HFiles into the corresponding tables.
// Replay all tables
hbase org.apache.hadoop.hbase.mapreduce.WALPlayer /tmp/apps/hbase/data/WALs/
// Replay a single table
hbase org.apache.hadoop.hbase.mapreduce.WALPlayer /tmp/apps/hbase/data/WALs/ table1
// Replay multiple tables
hbase org.apache.hadoop.hbase.mapreduce.WALPlayer /tmp/apps/hbase/data/WALs/ table1,table2,table3
On HBase clusters that use the Phoenix plugin, the commands above fail with:
java.lang.ClassNotFoundException: org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec
You need to add the Phoenix jar to the command, for example:
hbase org.apache.hadoop.hbase.mapreduce.WALPlayer -libjars=/usr/data/phoenix-4.9.0-HBase-1.1-server.jar /tmp/apps/hbase/data/WALs/ table1
When testing the WALPlayer command it is wise to back up the table data first, in case the command fails in some unforeseen way:
// Backup within the same cluster
hbase org.apache.hadoop.hbase.mapreduce.CopyTable -libjars=/usr/data/hbase-client/lib/phoenix-4.9.0-HBase-1.1-server.jar --new.name=newTableName oldTableName
// Cross-cluster backup
hbase org.apache.hadoop.hbase.mapreduce.CopyTable --peer.adr=server1,server2,server3:2181:/hbase -libjars=/usr/data/hbase-client/lib/phoenix-4.9.0-HBase-1.1-server.jar --new.name=newTableName oldTableName
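After the replay finishes, a simple sanity check is to compare row counts between the restored table and its CopyTable backup. RowCounter is a stock HBase MapReduce job; the table names below are the illustrative ones used earlier, not real table names from this cluster:

```shell
# Count rows in the restored table and in the pre-replay backup;
# the restored count should be >= the backup count, since the
# replay adds back the edits that were lost with the WALs.
hbase org.apache.hadoop.hbase.mapreduce.RowCounter table1
hbase org.apache.hadoop.hbase.mapreduce.RowCounter newTableName
```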