I. Run the DataNode startup script
[root@hadoop001 module]# hadoop-daemon.sh start datanode
starting datanode, logging to /opt/module/hadoop-2.7.3/logs/hadoop-root-datanode-hadoop001.out
The output above tells you which file records the DataNode startup process; check that file to see whether the DataNode came up successfully.
II. Check the Java processes
The previous step prints no success or failure message to the screen, so always verify with the jps command that the DataNode is actually running:
[root@hadoop001 logs]# jps
2672 JournalNode
4438 Jps
2234 QuorumPeerMain
There is no DataNode process. The same happens on all three DataNodes of this HA cluster.
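Rather than logging in to each node, a quick way to see which nodes are missing the process is to run jps on every node in one loop. A minimal sketch, assuming passwordless ssh between the nodes; the hostname list in the example is a placeholder for this cluster's three DataNodes:

```shell
# Report, for each host, whether a DataNode JVM shows up in jps.
check_datanodes() {
    local host
    for host in "$@"; do
        printf '%s: %s\n' "$host" \
            "$(ssh "$host" jps | grep DataNode || echo 'DataNode NOT running')"
    done
}

# Example: check_datanodes hadoop001 hadoop002 hadoop003
```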
III. Troubleshooting
1. Check the DataNode log
A. Inspect the log contents
[root@hadoop001 logs]# tail -n 50 hadoop-hadoop-datanode-hadoop001.log
at org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:395)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:573)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1362)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1327)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:317)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:223)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:802)
at java.lang.Thread.run(Thread.java:748)
2022-04-17 10:04:45,581 INFO org.apache.hadoop.hdfs.server.common.Storage: Using 1 threads to upgrade data directories (dfs.datanode.parallel.volumes.load.threads.num=1, dataDirs=1)
2022-04-17 10:04:45,582 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /opt/module/hadoop-2.7.3/data/hdfs/data/in_use.lock acquired by nodename 3798@hadoop001
2022-04-17 10:04:45,582 WARN org.apache.hadoop.hdfs.server.common.Storage: Failed to add storage directory [DISK]file:/opt/module/hadoop-2.7.3/data/hdfs/data/
java.io.IOException: Incompatible clusterIDs in /opt/module/hadoop-2.7.3/data/hdfs/data: namenode clusterID = CID-bdefd1cb-a53b-4300-bda2-39aaeae2abf6; datanode clusterID = CID-17aec08d-b97f-405a-88f6-78b6af38f96c
at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:775)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadStorageDirectory(DataStorage.java:300)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadDataStorage(DataStorage.java:416)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:395)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:573)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1362)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1327)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:317)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:223)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:802)
at java.lang.Thread.run(Thread.java:748)
2022-04-17 10:04:45,583 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool <registering> (Datanode Uuid unassigned) service to hadoop002/192.168.5.102:9000. Exiting.
java.io.IOException: All specified directories are failed to load.
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:574)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1362)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1327)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:317)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:223)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:802)
at java.lang.Thread.run(Thread.java:748)
2022-04-17 10:04:45,583 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool <registering> (Datanode Uuid unassigned) service to hadoop002/192.168.5.102:9000
2022-04-17 10:04:45,584 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool <registering> (Datanode Uuid unassigned) service to hadoop001/192.168.5.101:9000. Exiting.
java.io.IOException: All specified directories are failed to load.
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:574)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1362)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1327)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:317)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:223)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:802)
at java.lang.Thread.run(Thread.java:748)
2022-04-17 10:04:45,584 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool <registering> (Datanode Uuid unassigned) service to hadoop001/192.168.5.101:9000
2022-04-17 10:04:45,686 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool <registering> (Datanode Uuid unassigned)
2022-04-17 10:04:47,687 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
2022-04-17 10:04:47,689 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 0
2022-04-17 10:04:47,690 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at hadoop001/192.168.5.101
************************************************************/
[root@hadoop001 logs]#
B. Key error message
- The log contains a warning that the NameNode's and the DataNode's clusterIDs are incompatible:
2022-04-17 10:04:45,582 WARN org.apache.hadoop.hdfs.server.common.Storage: Failed to add storage directory [DISK]file:/opt/module/hadoop-2.7.3/data/hdfs/data/
java.io.IOException: Incompatible clusterIDs in /opt/module/hadoop-2.7.3/data/hdfs/data: namenode clusterID = CID-bdefd1cb-a53b-4300-bda2-39aaeae2abf6; datanode clusterID = CID-17aec08d-b97f-405a-88f6-78b6af38f96c
where:
namenode clusterID = CID-bdefd1cb-a53b-4300-bda2-39aaeae2abf6
datanode clusterID = CID-17aec08d-b97f-405a-88f6-78b6af38f96c
- Based on this error, the likely cause is that the NameNode was reformatted, which generated a new clusterID, while the DataNode's clusterID was never updated to match.
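The diagnosis can be confirmed by reading the clusterID out of both VERSION files and comparing them. A minimal sketch; the NameNode metadata path below (.../data/hdfs/name) is an assumption based on this cluster's layout, so substitute the real dfs.namenode.name.dir and dfs.datanode.data.dir values from hdfs-site.xml:

```shell
# Extract the clusterID from a Hadoop storage VERSION file (a Java
# properties file containing a line like "clusterID=CID-...").
get_cluster_id() {
    grep '^clusterID=' "$1" | cut -d= -f2
}

# Both paths are assumptions matching this article's layout; take the
# actual ones from dfs.namenode.name.dir / dfs.datanode.data.dir.
nn_id=$(get_cluster_id /opt/module/hadoop-2.7.3/data/hdfs/name/current/VERSION 2>/dev/null)
dn_id=$(get_cluster_id /opt/module/hadoop-2.7.3/data/hdfs/data/current/VERSION 2>/dev/null)

if [ "$nn_id" != "$dn_id" ]; then
    echo "clusterID mismatch: namenode=$nn_id datanode=$dn_id"
fi
```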
IV. Solution
1. Inspect the DataNode's VERSION file
[root@hadoop001 current]# pwd
/opt/module/hadoop-2.7.3/data/hdfs/data/current
[root@hadoop001 current]#
[root@hadoop001 current]# ls -ltr
total 4
drwx------ 4 root root 54 Feb 3 20:35 BP-1601307948-192.168.5.101-1643618166217
-rw-r--r-- 1 root root 229 Mar 26 17:41 VERSION
drwx------ 4 root root 54 Mar 26 17:41 BP-779737500-192.168.5.101-1648285297406
[root@hadoop001 current]# cat VERSION
#Sat Mar 26 17:41:57 CST 2022
storageID=DS-a42e45e8-e949-4cd9-aef9-f4500f96ab69
clusterID=CID-17aec08d-b97f-405a-88f6-78b6af38f96c
cTime=0
datanodeUuid=0f7f895f-84a2-49b5-b9b7-8018c2180dbb
storageType=DATA_NODE
layoutVersion=-56
The clusterID in this VERSION file matches the one reported in the error log above.
2. Edit the VERSION file under the data directory
- Replace the clusterID value in the data directory's VERSION file with the NameNode's clusterID:
[root@hadoop001 current]# cat VERSION
#Sun Apr 17 10:56:29 CST 2022
storageID=DS-a42e45e8-e949-4cd9-aef9-f4500f96ab69
clusterID=CID-bdefd1cb-a53b-4300-bda2-39aaeae2abf6
cTime=0
datanodeUuid=0f7f895f-84a2-49b5-b9b7-8018c2180dbb
storageType=DATA_NODE
layoutVersion=-56
[root@hadoop001 current]#
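The same edit can be scripted instead of done by hand in an editor. A sketch assuming this article's data directory layout, with the clusterID taken from the NameNode's value in the error log:

```shell
# Overwrite the stale clusterID in the DataNode's VERSION file in place.
# NN_CID is the namenode clusterID reported in the log; DN_VERSION is
# this article's dfs.datanode.data.dir -- adjust both for your cluster.
NN_CID=CID-bdefd1cb-a53b-4300-bda2-39aaeae2abf6
DN_VERSION=/opt/module/hadoop-2.7.3/data/hdfs/data/current/VERSION

if [ -f "$DN_VERSION" ]; then
    cp "$DN_VERSION" "$DN_VERSION.bak"   # keep a backup in case of typos
    sed -i "s/^clusterID=.*/clusterID=$NN_CID/" "$DN_VERSION"
fi
```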
3. Run the startup script again
[root@hadoop001 current]# hadoop-daemon.sh start datanode
starting datanode, logging to /opt/module/hadoop-2.7.3/logs/hadoop-hadoop-datanode-hadoop001.out
4. Check the Java processes again
[root@hadoop001 current]# jps
2672 JournalNode
4758 Jps
4679 DataNode
2234 QuorumPeerMain
[root@hadoop001 current]#
The DataNode has started normally.
5. Apply the same fix to the other two DataNodes
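Since the remaining DataNodes need the identical change, the edit plus restart can be pushed out over ssh. A sketch assuming passwordless ssh and this article's paths; the hostnames in the example invocation are placeholders, since the article only names hadoop001 and hadoop002:

```shell
# Rewrite the stale clusterID on each remote DataNode, then restart it.
# The clusterID is the namenode's value from the error log; the VERSION
# path matches this article's dfs.datanode.data.dir.
sync_cluster_id() {
    local nn_cid=CID-bdefd1cb-a53b-4300-bda2-39aaeae2abf6
    local version=/opt/module/hadoop-2.7.3/data/hdfs/data/current/VERSION
    local host
    for host in "$@"; do
        ssh "$host" "sed -i 's/^clusterID=.*/clusterID=$nn_cid/' $version && hadoop-daemon.sh start datanode"
    done
}

# Example: sync_cluster_id hadoop002 hadoop003
```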
6. Check the DataNode log again
The log now records a normal DataNode startup sequence:
2022-04-17 10:56:27,533 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG: host = hadoop001/192.168.5.101
STARTUP_MSG: args = []
STARTUP_MSG: version = 2.7.3
STARTUP_MSG: classpath = /opt/module/hadoop-2.7.3/etc/hadoop:/opt/module/hadoop-2.7.3/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/opt/module/hadoop-2.7.3/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/opt/module/hadoop-2.7.3/share/hadoop/common/lib/stax-api-1.0-2.jar:/opt/module/hadoop-2.7.3/share/hadoop/common/lib/activation-1.1.jar:/opt/module/hadoop-2.7.3/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/opt/module/hadoop-2.7.3/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/opt/module/hadoop-2.7.3/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/opt/module/hadoop-2.7.3/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/opt/module/hadoop-2.7.3/share/hadoop/common/lib/jersey-server-1.9.jar:/opt/module/hadoop-2.7.3/share/hadoop/common/lib/asm-3.2.jar:/opt/module/hadoop-2.7.3/share/hadoop/common/lib/log4j-1.2.17.jar:/opt/module/hadoop-2.7.3/share/hadoop/common/lib/jets3t-0.9.0.jar:/opt/module/hadoop-2.7.3/share/hadoop/common/lib/httpclient-4.2.5.jar:/opt/module/hadoop-2.7.3/share/hadoop/common/lib/httpcore-4.2.5.jar:/opt/module/hadoop-2.7.3/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/opt/module/hadoop-2.7.3/share/hadoop/common/lib/commons-lang-2.6.jar:/opt/module/hadoop-2.7.3/share/hadoop/common/lib/commons-configuration-1.6.jar:/opt/module/hadoop-2.7.3/share/hadoop/common/lib/commons-digester-1.8.jar:/opt/module/hadoop-2.7.3/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/opt/module/hadoop-2.7.3/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/opt/module/hadoop-2.7.3/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/opt/module/hadoop-2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar:/opt/module/hadoop-2.7.3/share/hadoop/common/lib/avro-1.7.4.jar:/opt/module/hadoop-2.7.3/share/hadoop/common/lib/paranamer-2.3.jar:/opt/module/hadoop-2.7.3/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/opt/module/hadoop-2.7.3/share/hadoop/common/lib/commons-compress-1.4.1.jar:/opt/module/hadoop-2.7.3/share/hadoop/common/lib/xz-1.0.jar:/opt/module/hadoop-2.7
.3/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/opt/module/hadoop-2.7.3/share/hadoop/common/lib/gson-2.2.4.jar:/opt/module/hadoop-2.7.3/share/hadoop/common/lib/hadoop-auth-2.7.3.jar:/opt/module/hadoop-2.7.3/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/opt/module/hadoop-2.7.3/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/opt/module/hadoop-2.7.3/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/opt/module/hadoop-2.7.3/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/opt/module/hadoop-2.7.3/share/hadoop/common/lib/zookeeper-3.4.6.jar:/opt/module/hadoop-2.7.3/share/hadoop/common/lib/netty-3.6.2.Final.jar:/opt/module/hadoop-2.7.3/share/hadoop/common/lib/curator-framework-2.7.1.jar:/opt/module/hadoop-2.7.3/share/hadoop/common/lib/curator-client-2.7.1.jar:/opt/module/hadoop-2.7.3/share/hadoop/common/lib/jsch-0.1.42.jar:/opt/module/hadoop-2.7.3/share/hadoop/common/lib/curator-recipes-2.7.1.jar:/opt/module/hadoop-2.7.3/share/hadoop/common/lib/htrace-core-3.1.0-incubating.jar:/opt/module/hadoop-2.7.3/share/hadoop/common/lib/junit-4.11.jar:/opt/module/hadoop-2.7.3/share/hadoop/common/lib/hamcrest-core-1.3.jar:/opt/module/hadoop-2.7.3/share/hadoop/common/lib/mockito-all-1.8.5.jar:/opt/module/hadoop-2.7.3/share/hadoop/common/lib/hadoop-annotations-2.7.3.jar:/opt/module/hadoop-2.7.3/share/hadoop/common/lib/guava-11.0.2.jar:/opt/module/hadoop-2.7.3/share/hadoop/common/lib/jsr305-3.0.0.jar:/opt/module/hadoop-2.7.3/share/hadoop/common/lib/commons-cli-1.2.jar:/opt/module/hadoop-2.7.3/share/hadoop/common/lib/commons-math3-3.1.1.jar:/opt/module/hadoop-2.7.3/share/hadoop/common/lib/xmlenc-0.52.jar:/opt/module/hadoop-2.7.3/share/hadoop/common/lib/commons-httpclient-3.1.jar:/opt/module/hadoop-2.7.3/share/hadoop/common/lib/commons-logging-1.1.3.jar:/opt/module/hadoop-2.7.3/share/hadoop/common/lib/commons-codec-1.4.jar:/opt/module/hadoop-2.7.3/share/hadoop/common/lib/commons-io-2.4.jar:/opt/module/hadoop-2.7.3/share/hadoop/common/lib/commons-net-3.1.jar:/opt/mo
dule/hadoop-2.7.3/share/hadoop/common/lib/commons-collections-3.2.2.jar:/opt/module/hadoop-2.7.3/share/hadoop/common/lib/servlet-api-2.5.jar:/opt/module/hadoop-2.7.3/share/hadoop/common/lib/jetty-6.1.26.jar:/opt/module/hadoop-2.7.3/share/hadoop/common/lib/jetty-util-6.1.26.jar:/opt/module/hadoop-2.7.3/share/hadoop/common/lib/jsp-api-2.1.jar:/opt/module/hadoop-2.7.3/share/hadoop/common/lib/jersey-core-1.9.jar:/opt/module/hadoop-2.7.3/share/hadoop/common/lib/jersey-json-1.9.jar:/opt/module/hadoop-2.7.3/share/hadoop/common/lib/jettison-1.1.jar:/opt/module/hadoop-2.7.3/share/hadoop/common/hadoop-common-2.7.3.jar:/opt/module/hadoop-2.7.3/share/hadoop/common/hadoop-common-2.7.3-tests.jar:/opt/module/hadoop-2.7.3/share/hadoop/common/hadoop-nfs-2.7.3.jar:/opt/module/hadoop-2.7.3/share/hadoop/hdfs:/opt/module/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/opt/module/hadoop-2.7.3/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/opt/module/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/opt/module/hadoop-2.7.3/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/opt/module/hadoop-2.7.3/share/hadoop/hdfs/lib/guava-11.0.2.jar:/opt/module/hadoop-2.7.3/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/opt/module/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/opt/module/hadoop-2.7.3/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/opt/module/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-io-2.4.jar:/opt/module/hadoop-2.7.3/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/opt/module/hadoop-2.7.3/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/opt/module/hadoop-2.7.3/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/opt/module/hadoop-2.7.3/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/opt/module/hadoop-2.7.3/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/opt/module/hadoop-2.7.3/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/opt/module/hadoop-2.7.3/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/opt/module/hadoop-2.7.3/share/hadoop/hdfs/lib/asm-3.2.jar:/opt/module/hadoop-2.7.3/share/hadoop/hdfs/lib/commons
-lang-2.6.jar:/opt/module/hadoop-2.7.3/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/opt/module/hadoop-2.7.3/share/hadoop/hdfs/lib/htrace-core-3.1.0-incubating.jar:/opt/module/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/opt/module/hadoop-2.7.3/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar:/opt/module/hadoop-2.7.3/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/opt/module/hadoop-2.7.3/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/opt/module/hadoop-2.7.3/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/opt/module/hadoop-2.7.3/share/hadoop/hdfs/hadoop-hdfs-2.7.3.jar:/opt/module/hadoop-2.7.3/share/hadoop/hdfs/hadoop-hdfs-2.7.3-tests.jar:/opt/module/hadoop-2.7.3/share/hadoop/hdfs/hadoop-hdfs-nfs-2.7.3.jar:/opt/module/hadoop-2.7.3/share/hadoop/yarn/lib/zookeeper-3.4.6-tests.jar:/opt/module/hadoop-2.7.3/share/hadoop/yarn/lib/commons-lang-2.6.jar:/opt/module/hadoop-2.7.3/share/hadoop/yarn/lib/guava-11.0.2.jar:/opt/module/hadoop-2.7.3/share/hadoop/yarn/lib/jsr305-3.0.0.jar:/opt/module/hadoop-2.7.3/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/opt/module/hadoop-2.7.3/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/opt/module/hadoop-2.7.3/share/hadoop/yarn/lib/commons-cli-1.2.jar:/opt/module/hadoop-2.7.3/share/hadoop/yarn/lib/log4j-1.2.17.jar:/opt/module/hadoop-2.7.3/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/opt/module/hadoop-2.7.3/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/opt/module/hadoop-2.7.3/share/hadoop/yarn/lib/activation-1.1.jar:/opt/module/hadoop-2.7.3/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/opt/module/hadoop-2.7.3/share/hadoop/yarn/lib/xz-1.0.jar:/opt/module/hadoop-2.7.3/share/hadoop/yarn/lib/servlet-api-2.5.jar:/opt/module/hadoop-2.7.3/share/hadoop/yarn/lib/commons-codec-1.4.jar:/opt/module/hadoop-2.7.3/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/opt/module/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-core-1.9.jar:/opt/module/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-client-1.9.jar:/opt/module/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-core-asl
-1.9.13.jar:/opt/module/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/opt/module/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/opt/module/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/opt/module/hadoop-2.7.3/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/opt/module/hadoop-2.7.3/share/hadoop/yarn/lib/guice-3.0.jar:/opt/module/hadoop-2.7.3/share/hadoop/yarn/lib/javax.inject-1.jar:/opt/module/hadoop-2.7.3/share/hadoop/yarn/lib/aopalliance-1.0.jar:/opt/module/hadoop-2.7.3/share/hadoop/yarn/lib/commons-io-2.4.jar:/opt/module/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-server-1.9.jar:/opt/module/hadoop-2.7.3/share/hadoop/yarn/lib/asm-3.2.jar:/opt/module/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-json-1.9.jar:/opt/module/hadoop-2.7.3/share/hadoop/yarn/lib/jettison-1.1.jar:/opt/module/hadoop-2.7.3/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/opt/module/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/opt/module/hadoop-2.7.3/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/opt/module/hadoop-2.7.3/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/opt/module/hadoop-2.7.3/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/opt/module/hadoop-2.7.3/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/opt/module/hadoop-2.7.3/share/hadoop/yarn/lib/jetty-6.1.26.jar:/opt/module/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-api-2.7.3.jar:/opt/module/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-common-2.7.3.jar:/opt/module/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-common-2.7.3.jar:/opt/module/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.7.3.jar:/opt/module/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.7.3.jar:/opt/module/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.7.3.jar:/opt/module/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.7.3.jar:/opt/module/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-tests-2.7.3.jar:/opt/module/hadoop-2.7.3/share/hadoop/yarn/hadoo
p-yarn-client-2.7.3.jar:/opt/module/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.7.3.jar:/opt/module/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.7.3.jar:/opt/module/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.7.3.jar:/opt/module/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-registry-2.7.3.jar:/opt/module/hadoop-2.7.3/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/opt/module/hadoop-2.7.3/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/opt/module/hadoop-2.7.3/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/opt/module/hadoop-2.7.3/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/opt/module/hadoop-2.7.3/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/opt/module/hadoop-2.7.3/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/opt/module/hadoop-2.7.3/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/opt/module/hadoop-2.7.3/share/hadoop/mapreduce/lib/xz-1.0.jar:/opt/module/hadoop-2.7.3/share/hadoop/mapreduce/lib/hadoop-annotations-2.7.3.jar:/opt/module/hadoop-2.7.3/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/opt/module/hadoop-2.7.3/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/opt/module/hadoop-2.7.3/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/opt/module/hadoop-2.7.3/share/hadoop/mapreduce/lib/asm-3.2.jar:/opt/module/hadoop-2.7.3/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/opt/module/hadoop-2.7.3/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/opt/module/hadoop-2.7.3/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/opt/module/hadoop-2.7.3/share/hadoop/mapreduce/lib/guice-3.0.jar:/opt/module/hadoop-2.7.3/share/hadoop/mapreduce/lib/javax.inject-1.jar:/opt/module/hadoop-2.7.3/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/opt/module/hadoop-2.7.3/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/opt/module/hadoop-2.7.3/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/opt/module/hadoop-2.7.3/share/hadoop/mapreduce/lib/junit-4.11.jar:/opt/module/hadoop-2
.7.3/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/opt/module/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.7.3.jar:/opt/module/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.7.3.jar:/opt/module/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.7.3.jar:/opt/module/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.7.3.jar:/opt/module/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.7.3.jar:/opt/module/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.3.jar:/opt/module/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.7.3.jar:/opt/module/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar:/opt/module/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.3-tests.jar:/opt/module/hadoop-2.7.3/contrib/capacity-scheduler/*.jar:/opt/module/hadoop-2.7.3/contrib/capacity-scheduler/*.jar:/opt/module/hadoop-2.7.3/contrib/capacity-scheduler/*.jar
STARTUP_MSG: build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r baa91f7c6bc9cb92be5982de4719c1c8af91ccff; compiled by 'root' on 2016-08-18T01:41Z
STARTUP_MSG: java = 1.8.0_231
************************************************************/
2022-04-17 10:56:27,545 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: registered UNIX signal handlers for [TERM, HUP, INT]
2022-04-17 10:56:28,132 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2022-04-17 10:56:28,215 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2022-04-17 10:56:28,215 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
2022-04-17 10:56:28,220 INFO org.apache.hadoop.hdfs.server.datanode.BlockScanner: Initialized block scanner with targetBytesPerSec 1048576
2022-04-17 10:56:28,221 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Configured hostname is hadoop001
2022-04-17 10:56:28,233 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting DataNode with maxLockedMemory = 0
2022-04-17 10:56:28,256 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened streaming server at /0.0.0.0:50010
2022-04-17 10:56:28,257 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is 1048576 bytes/s
2022-04-17 10:56:28,257 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Number threads for balancing is 5
2022-04-17 10:56:28,341 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2022-04-17 10:56:28,349 INFO org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2022-04-17 10:56:28,354 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.datanode is not defined
2022-04-17 10:56:28,359 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2022-04-17 10:56:28,361 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
2022-04-17 10:56:28,361 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2022-04-17 10:56:28,361 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2022-04-17 10:56:28,370 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 39542
2022-04-17 10:56:28,370 INFO org.mortbay.log: jetty-6.1.26
2022-04-17 10:56:28,500 INFO org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39542
2022-04-17 10:56:28,575 INFO org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer: Listening HTTP traffic on /0.0.0.0:50075
2022-04-17 10:56:28,788 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: dnUserName = root
2022-04-17 10:56:28,788 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: supergroup = supergroup
2022-04-17 10:56:28,929 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
2022-04-17 10:56:28,941 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 50020
2022-04-17 10:56:28,961 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened IPC server at /0.0.0.0:50020
2022-04-17 10:56:28,969 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Refresh request received for nameservices: ns
2022-04-17 10:56:29,337 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting BPOfferServices for nameservices: ns
2022-04-17 10:56:29,345 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool <registering> (Datanode Uuid unassigned) service to hadoop001/192.168.5.101:9000 starting to offer service
2022-04-17 10:56:29,348 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool <registering> (Datanode Uuid unassigned) service to hadoop002/192.168.5.102:9000 starting to offer service
2022-04-17 10:56:29,351 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2022-04-17 10:56:29,353 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
2022-04-17 10:56:29,503 INFO org.apache.hadoop.hdfs.server.common.Storage: Using 1 threads to upgrade data directories (dfs.datanode.parallel.volumes.load.threads.num=1, dataDirs=1)
2022-04-17 10:56:29,507 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /opt/module/hadoop-2.7.3/data/hdfs/data/in_use.lock acquired by nodename 4679@hadoop001
2022-04-17 10:56:29,590 INFO org.apache.hadoop.hdfs.server.common.Storage: Analyzing storage directories for bpid BP-1679095799-192.168.5.101-1650160128452
2022-04-17 10:56:29,590 INFO org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled for /opt/module/hadoop-2.7.3/data/hdfs/data/current/BP-1679095799-192.168.5.101-1650160128452
2022-04-17 10:56:29,590 INFO org.apache.hadoop.hdfs.server.common.Storage: Block pool storage directory /opt/module/hadoop-2.7.3/data/hdfs/data/current/BP-1679095799-192.168.5.101-1650160128452 is not formatted for BP-1679095799-192.168.5.101-1650160128452. Formatting ...
2022-04-17 10:56:29,590 INFO org.apache.hadoop.hdfs.server.common.Storage: Formatting block pool BP-1679095799-192.168.5.101-1650160128452 directory /opt/module/hadoop-2.7.3/data/hdfs/data/current/BP-1679095799-192.168.5.101-1650160128452/current
2022-04-17 10:56:29,594 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Setting up storage: nsid=1729410556;bpid=BP-1679095799-192.168.5.101-1650160128452;lv=-56;nsInfo=lv=-63;cid=CID-bdefd1cb-a53b-4300-bda2-39aaeae2abf6;nsid=1729410556;c=0;bpid=BP-1679095799-192.168.5.101-1650160128452;dnuuid=0f7f895f-84a2-49b5-b9b7-8018c2180dbb
2022-04-17 10:56:29,639 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added new volume: DS-a42e45e8-e949-4cd9-aef9-f4500f96ab69
2022-04-17 10:56:29,639 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added volume - /opt/module/hadoop-2.7.3/data/hdfs/data/current, StorageType: DISK
2022-04-17 10:56:29,644 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Registered FSDatasetState MBean
2022-04-17 10:56:29,644 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding block pool BP-1679095799-192.168.5.101-1650160128452
2022-04-17 10:56:29,645 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning block pool BP-1679095799-192.168.5.101-1650160128452 on volume /opt/module/hadoop-2.7.3/data/hdfs/data/current...
2022-04-17 10:56:29,665 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time taken to scan block pool BP-1679095799-192.168.5.101-1650160128452 on /opt/module/hadoop-2.7.3/data/hdfs/data/current: 20ms
2022-04-17 10:56:29,665 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Total time to scan all replicas for block pool BP-1679095799-192.168.5.101-1650160128452: 20ms
2022-04-17 10:56:29,665 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding replicas to map for block pool BP-1679095799-192.168.5.101-1650160128452 on volume /opt/module/hadoop-2.7.3/data/hdfs/data/current...
2022-04-17 10:56:29,666 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to add replicas to map for block pool BP-1679095799-192.168.5.101-1650160128452 on volume /opt/module/hadoop-2.7.3/data/hdfs/data/current: 0ms
2022-04-17 10:56:29,666 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Total time to add all replicas to map: 1ms
2022-04-17 10:56:29,671 INFO org.apache.hadoop.hdfs.server.datanode.VolumeScanner: Now scanning bpid BP-1679095799-192.168.5.101-1650160128452 on volume /opt/module/hadoop-2.7.3/data/hdfs/data
2022-04-17 10:56:29,674 INFO org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: Periodic Directory Tree Verification scan starting at 1650184890674 with interval 21600000
2022-04-17 10:56:29,677 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool BP-1679095799-192.168.5.101-1650160128452 (Datanode Uuid null) service to hadoop001/192.168.5.101:9000 beginning handshake with NN
2022-04-17 10:56:29,678 INFO org.apache.hadoop.hdfs.server.datanode.VolumeScanner: VolumeScanner(/opt/module/hadoop-2.7.3/data/hdfs/data, DS-a42e45e8-e949-4cd9-aef9-f4500f96ab69): finished scanning block pool BP-1679095799-192.168.5.101-1650160128452
2022-04-17 10:56:29,679 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool BP-1679095799-192.168.5.101-1650160128452 (Datanode Uuid null) service to hadoop002/192.168.5.102:9000 beginning handshake with NN
2022-04-17 10:56:29,720 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool Block pool BP-1679095799-192.168.5.101-1650160128452 (Datanode Uuid null) service to hadoop001/192.168.5.101:9000 successfully registered with NN
2022-04-17 10:56:29,720 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: For namenode hadoop001/192.168.5.101:9000 using DELETEREPORT_INTERVAL of 300000 msec BLOCKREPORT_INTERVAL of 21600000msec CACHEREPORT_INTERVAL of 10000msec Initial delay: 0msec; heartBeatInterval=3000
2022-04-17 10:56:29,729 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool Block pool BP-1679095799-192.168.5.101-1650160128452 (Datanode Uuid 0f7f895f-84a2-49b5-b9b7-8018c2180dbb) service to hadoop002/192.168.5.102:9000 successfully registered with NN
2022-04-17 10:56:29,729 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: For namenode hadoop002/192.168.5.102:9000 using DELETEREPORT_INTERVAL of 300000 msec BLOCKREPORT_INTERVAL of 21600000msec CACHEREPORT_INTERVAL of 10000msec Initial delay: 0msec; heartBeatInterval=3000
2022-04-17 10:56:29,731 INFO org.apache.hadoop.hdfs.server.datanode.VolumeScanner: VolumeScanner(/opt/module/hadoop-2.7.3/data/hdfs/data, DS-a42e45e8-e949-4cd9-aef9-f4500f96ab69): no suitable block pools found to scan. Waiting 1814399939 ms.
2022-04-17 10:56:29,795 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Namenode Block pool BP-1679095799-192.168.5.101-1650160128452 (Datanode Uuid 0f7f895f-84a2-49b5-b9b7-8018c2180dbb) service to hadoop001/192.168.5.101:9000 trying to claim ACTIVE state with txid=59
2022-04-17 10:56:29,795 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Acknowledging ACTIVE Namenode Block pool BP-1679095799-192.168.5.101-1650160128452 (Datanode Uuid 0f7f895f-84a2-49b5-b9b7-8018c2180dbb) service to hadoop001/192.168.5.101:9000
2022-04-17 10:56:29,851 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Successfully sent block report 0x673bbe1240a, containing 1 storage report(s), of which we sent 1. The reports had 0 total blocks and used 1 RPC(s). This took 0 msec to generate and 46 msecs for RPC and NN processing. Got back no commands.
2022-04-17 10:56:29,852 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Successfully sent block report 0x673bb6616a8, containing 1 storage report(s), of which we sent 1. The reports had 0 total blocks and used 1 RPC(s). This took 3 msec to generate and 54 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5.
2022-04-17 10:56:29,852 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Got finalize command for block pool BP-1679095799-192.168.5.101-1650160128452
[root@hadoop001 logs]#