Troubleshooting a Failed DataNode Restart

I. Run the DataNode startup script

[root@hadoop001 module]# hadoop-daemon.sh start datanode
starting datanode, logging to /opt/module/hadoop-2.7.3/logs/hadoop-root-datanode-hadoop001.out

The output above tells you which file records the DataNode startup process; check that file to see whether the DataNode started successfully.

II. Check the Java processes

The previous step prints no success or failure message to the screen, so always run jps to verify that the DataNode is actually running:

[root@hadoop001 logs]# jps
2672 JournalNode
4438 Jps
2234 QuorumPeerMain

The DataNode process is missing. All three DataNodes in this HA cluster show the same symptom.

III. Find the root cause

1. Inspect the DataNode log
A. Examine the log contents
[root@hadoop001 logs]# tail -n 50 hadoop-hadoop-datanode-hadoop001.log
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:395)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:573)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1362)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1327)
    at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:317)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:223)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:802)
    at java.lang.Thread.run(Thread.java:748)
2022-04-17 10:04:45,581 INFO org.apache.hadoop.hdfs.server.common.Storage: Using 1 threads to upgrade data directories (dfs.datanode.parallel.volumes.load.threads.num=1, dataDirs=1)
2022-04-17 10:04:45,582 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /opt/module/hadoop-2.7.3/data/hdfs/data/in_use.lock acquired by nodename 3798@hadoop001
2022-04-17 10:04:45,582 WARN org.apache.hadoop.hdfs.server.common.Storage: Failed to add storage directory [DISK]file:/opt/module/hadoop-2.7.3/data/hdfs/data/
java.io.IOException: Incompatible clusterIDs in /opt/module/hadoop-2.7.3/data/hdfs/data: namenode clusterID = CID-bdefd1cb-a53b-4300-bda2-39aaeae2abf6; datanode clusterID = CID-17aec08d-b97f-405a-88f6-78b6af38f96c
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:775)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadStorageDirectory(DataStorage.java:300)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadDataStorage(DataStorage.java:416)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:395)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:573)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1362)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1327)
    at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:317)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:223)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:802)
    at java.lang.Thread.run(Thread.java:748)
2022-04-17 10:04:45,583 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool <registering> (Datanode Uuid unassigned) service to hadoop002/192.168.5.102:9000. Exiting. 
java.io.IOException: All specified directories are failed to load.
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:574)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1362)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1327)
    at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:317)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:223)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:802)
    at java.lang.Thread.run(Thread.java:748)
2022-04-17 10:04:45,583 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool <registering> (Datanode Uuid unassigned) service to hadoop002/192.168.5.102:9000
2022-04-17 10:04:45,584 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool <registering> (Datanode Uuid unassigned) service to hadoop001/192.168.5.101:9000. Exiting. 
java.io.IOException: All specified directories are failed to load.
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:574)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1362)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1327)
    at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:317)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:223)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:802)
    at java.lang.Thread.run(Thread.java:748)
2022-04-17 10:04:45,584 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool <registering> (Datanode Uuid unassigned) service to hadoop001/192.168.5.101:9000
2022-04-17 10:04:45,686 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool <registering> (Datanode Uuid unassigned)
2022-04-17 10:04:47,687 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
2022-04-17 10:04:47,689 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 0
2022-04-17 10:04:47,690 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at hadoop001/192.168.5.101
************************************************************/
[root@hadoop001 logs]# 
B. Key error message
  • The log contains a warning that the NameNode and DataNode clusterIDs are incompatible:
2022-04-17 10:04:45,582 WARN org.apache.hadoop.hdfs.server.common.Storage: Failed to add storage directory [DISK]file:/opt/module/hadoop-2.7.3/data/hdfs/data/
java.io.IOException: Incompatible clusterIDs in /opt/module/hadoop-2.7.3/data/hdfs/data: namenode clusterID = CID-bdefd1cb-a53b-4300-bda2-39aaeae2abf6; datanode clusterID = CID-17aec08d-b97f-405a-88f6-78b6af38f96c

Specifically:
namenode clusterID = CID-bdefd1cb-a53b-4300-bda2-39aaeae2abf6
datanode clusterID = CID-17aec08d-b97f-405a-88f6-78b6af38f96c

  • Based on this error, the likely cause is that the NameNode was reformatted, which generated a new clusterID, while the DataNode's clusterID was never updated to match.
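To confirm the diagnosis, the two clusterIDs can be compared directly. A minimal sketch: it runs against throwaway copies containing the IDs from the log above, because on a real cluster the NameNode VERSION path depends on dfs.namenode.name.dir (the exact path here is an assumption; point the variables at your actual files instead).

```shell
#!/bin/sh
# Compare NameNode and DataNode clusterIDs.
# For safe illustration we create throwaway copies of the two VERSION files
# with the IDs reported in the log; on a real node, set nn_VERSION/dn_VERSION
# to the actual files under dfs.namenode.name.dir and dfs.datanode.data.dir.
workdir=$(mktemp -d)
printf 'clusterID=CID-bdefd1cb-a53b-4300-bda2-39aaeae2abf6\n' > "$workdir/nn_VERSION"
printf 'clusterID=CID-17aec08d-b97f-405a-88f6-78b6af38f96c\n' > "$workdir/dn_VERSION"

# Extract the value after "clusterID=" from each file.
nn_cid=$(grep '^clusterID=' "$workdir/nn_VERSION" | cut -d= -f2)
dn_cid=$(grep '^clusterID=' "$workdir/dn_VERSION" | cut -d= -f2)

if [ "$nn_cid" = "$dn_cid" ]; then
    echo "clusterIDs match"
else
    echo "clusterID mismatch: namenode=$nn_cid datanode=$dn_cid"
fi
```

With the IDs above, the script reports a mismatch, matching the IOException in the log.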

IV. The fix

1. Inspect the DataNode's VERSION file
[root@hadoop001 current]# pwd
/opt/module/hadoop-2.7.3/data/hdfs/data/current
[root@hadoop001 current]# 
[root@hadoop001 current]# ls -ltr
total 4
drwx------ 4 root root  54 Feb  3 20:35 BP-1601307948-192.168.5.101-1643618166217
-rw-r--r-- 1 root root 229 Mar 26 17:41 VERSION
drwx------ 4 root root  54 Mar 26 17:41 BP-779737500-192.168.5.101-1648285297406
[root@hadoop001 current]# cat VERSION 
#Sat Mar 26 17:41:57 CST 2022
storageID=DS-a42e45e8-e949-4cd9-aef9-f4500f96ab69
clusterID=CID-17aec08d-b97f-405a-88f6-78b6af38f96c
cTime=0
datanodeUuid=0f7f895f-84a2-49b5-b9b7-8018c2180dbb
storageType=DATA_NODE
layoutVersion=-56

As shown, the DataNode clusterID in the VERSION file is the same one reported in the log error above.

2. Edit the VERSION file under the data directory
  • Replace the clusterID value in the data directory's VERSION file with the NameNode's clusterID:
[root@hadoop001 current]# cat VERSION
#Sun Apr 17 10:56:29 CST 2022
storageID=DS-a42e45e8-e949-4cd9-aef9-f4500f96ab69
clusterID=CID-bdefd1cb-a53b-4300-bda2-39aaeae2abf6
cTime=0
datanodeUuid=0f7f895f-84a2-49b5-b9b7-8018c2180dbb
storageType=DATA_NODE
layoutVersion=-56
[root@hadoop001 current]# 
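The edit above can also be scripted with sed. A hedged sketch, demonstrated on a throwaway copy of the file; on the real node the target is /opt/module/hadoop-2.7.3/data/hdfs/data/current/VERSION, and taking a backup first is prudent.

```shell
#!/bin/sh
# Rewrite the clusterID line in a DataNode VERSION file.
# Demonstrated on a temporary copy; set $version to the real path
# (e.g. .../data/hdfs/data/current/VERSION) when applying on an actual node.
nn_cid='CID-bdefd1cb-a53b-4300-bda2-39aaeae2abf6'
version=$(mktemp)
printf 'clusterID=CID-17aec08d-b97f-405a-88f6-78b6af38f96c\n' > "$version"

cp "$version" "$version.bak"                            # keep a backup first
sed -i "s/^clusterID=.*/clusterID=$nn_cid/" "$version"  # swap in the NN's ID
grep '^clusterID=' "$version"   # now shows the NameNode's clusterID
```

Note that `sed -i` edits in place as used here on GNU sed (Linux); BSD sed needs `sed -i ''`.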
3. Run the startup script again
[root@hadoop001 current]# hadoop-daemon.sh start datanode
starting datanode, logging to /opt/module/hadoop-2.7.3/logs/hadoop-hadoop-datanode-hadoop001.out
4. Check the Java processes
[root@hadoop001 current]# jps
2672 JournalNode
4758 Jps
4679 DataNode
2234 QuorumPeerMain
[root@hadoop001 current]# 

The DataNode is now running normally.

5. Apply the same fix to the other two DataNodes
6. Check the DataNode log again

The log now records a clean DataNode startup:

2022-04-17 10:56:27,533 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = hadoop001/192.168.5.101
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 2.7.3
STARTUP_MSG:   classpath = /opt/module/hadoop-2.7.3/etc/hadoop:... [several hundred jar entries omitted for brevity]
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r baa91f7c6bc9cb92be5982de4719c1c8af91ccff; compiled by 'root' on 2016-08-18T01:41Z
STARTUP_MSG:   java = 1.8.0_231
************************************************************/
2022-04-17 10:56:27,545 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: registered UNIX signal handlers for [TERM, HUP, INT]
2022-04-17 10:56:28,132 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2022-04-17 10:56:28,215 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2022-04-17 10:56:28,215 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
2022-04-17 10:56:28,220 INFO org.apache.hadoop.hdfs.server.datanode.BlockScanner: Initialized block scanner with targetBytesPerSec 1048576
2022-04-17 10:56:28,221 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Configured hostname is hadoop001
2022-04-17 10:56:28,233 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting DataNode with maxLockedMemory = 0
2022-04-17 10:56:28,256 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened streaming server at /0.0.0.0:50010
2022-04-17 10:56:28,257 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is 1048576 bytes/s
2022-04-17 10:56:28,257 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Number threads for balancing is 5
2022-04-17 10:56:28,341 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2022-04-17 10:56:28,349 INFO org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2022-04-17 10:56:28,354 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.datanode is not defined
2022-04-17 10:56:28,359 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2022-04-17 10:56:28,361 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
2022-04-17 10:56:28,361 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2022-04-17 10:56:28,361 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2022-04-17 10:56:28,370 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 39542
2022-04-17 10:56:28,370 INFO org.mortbay.log: jetty-6.1.26
2022-04-17 10:56:28,500 INFO org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39542
2022-04-17 10:56:28,575 INFO org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer: Listening HTTP traffic on /0.0.0.0:50075
2022-04-17 10:56:28,788 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: dnUserName = root
2022-04-17 10:56:28,788 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: supergroup = supergroup
2022-04-17 10:56:28,929 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
2022-04-17 10:56:28,941 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 50020
2022-04-17 10:56:28,961 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened IPC server at /0.0.0.0:50020
2022-04-17 10:56:28,969 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Refresh request received for nameservices: ns
2022-04-17 10:56:29,337 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting BPOfferServices for nameservices: ns
2022-04-17 10:56:29,345 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool <registering> (Datanode Uuid unassigned) service to hadoop001/192.168.5.101:9000 starting to offer service
2022-04-17 10:56:29,348 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool <registering> (Datanode Uuid unassigned) service to hadoop002/192.168.5.102:9000 starting to offer service
2022-04-17 10:56:29,351 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2022-04-17 10:56:29,353 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
2022-04-17 10:56:29,503 INFO org.apache.hadoop.hdfs.server.common.Storage: Using 1 threads to upgrade data directories (dfs.datanode.parallel.volumes.load.threads.num=1, dataDirs=1)
2022-04-17 10:56:29,507 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /opt/module/hadoop-2.7.3/data/hdfs/data/in_use.lock acquired by nodename 4679@hadoop001
2022-04-17 10:56:29,590 INFO org.apache.hadoop.hdfs.server.common.Storage: Analyzing storage directories for bpid BP-1679095799-192.168.5.101-1650160128452
2022-04-17 10:56:29,590 INFO org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled for /opt/module/hadoop-2.7.3/data/hdfs/data/current/BP-1679095799-192.168.5.101-1650160128452
2022-04-17 10:56:29,590 INFO org.apache.hadoop.hdfs.server.common.Storage: Block pool storage directory /opt/module/hadoop-2.7.3/data/hdfs/data/current/BP-1679095799-192.168.5.101-1650160128452 is not formatted for BP-1679095799-192.168.5.101-1650160128452. Formatting ...
2022-04-17 10:56:29,590 INFO org.apache.hadoop.hdfs.server.common.Storage: Formatting block pool BP-1679095799-192.168.5.101-1650160128452 directory /opt/module/hadoop-2.7.3/data/hdfs/data/current/BP-1679095799-192.168.5.101-1650160128452/current
2022-04-17 10:56:29,594 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Setting up storage: nsid=1729410556;bpid=BP-1679095799-192.168.5.101-1650160128452;lv=-56;nsInfo=lv=-63;cid=CID-bdefd1cb-a53b-4300-bda2-39aaeae2abf6;nsid=1729410556;c=0;bpid=BP-1679095799-192.168.5.101-1650160128452;dnuuid=0f7f895f-84a2-49b5-b9b7-8018c2180dbb
2022-04-17 10:56:29,639 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added new volume: DS-a42e45e8-e949-4cd9-aef9-f4500f96ab69
2022-04-17 10:56:29,639 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added volume - /opt/module/hadoop-2.7.3/data/hdfs/data/current, StorageType: DISK
2022-04-17 10:56:29,644 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Registered FSDatasetState MBean
2022-04-17 10:56:29,644 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding block pool BP-1679095799-192.168.5.101-1650160128452
2022-04-17 10:56:29,645 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning block pool BP-1679095799-192.168.5.101-1650160128452 on volume /opt/module/hadoop-2.7.3/data/hdfs/data/current...
2022-04-17 10:56:29,665 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time taken to scan block pool BP-1679095799-192.168.5.101-1650160128452 on /opt/module/hadoop-2.7.3/data/hdfs/data/current: 20ms
2022-04-17 10:56:29,665 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Total time to scan all replicas for block pool BP-1679095799-192.168.5.101-1650160128452: 20ms
2022-04-17 10:56:29,665 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding replicas to map for block pool BP-1679095799-192.168.5.101-1650160128452 on volume /opt/module/hadoop-2.7.3/data/hdfs/data/current...
2022-04-17 10:56:29,666 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to add replicas to map for block pool BP-1679095799-192.168.5.101-1650160128452 on volume /opt/module/hadoop-2.7.3/data/hdfs/data/current: 0ms
2022-04-17 10:56:29,666 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Total time to add all replicas to map: 1ms
2022-04-17 10:56:29,671 INFO org.apache.hadoop.hdfs.server.datanode.VolumeScanner: Now scanning bpid BP-1679095799-192.168.5.101-1650160128452 on volume /opt/module/hadoop-2.7.3/data/hdfs/data
2022-04-17 10:56:29,674 INFO org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: Periodic Directory Tree Verification scan starting at 1650184890674 with interval 21600000
2022-04-17 10:56:29,677 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool BP-1679095799-192.168.5.101-1650160128452 (Datanode Uuid null) service to hadoop001/192.168.5.101:9000 beginning handshake with NN
2022-04-17 10:56:29,678 INFO org.apache.hadoop.hdfs.server.datanode.VolumeScanner: VolumeScanner(/opt/module/hadoop-2.7.3/data/hdfs/data, DS-a42e45e8-e949-4cd9-aef9-f4500f96ab69): finished scanning block pool BP-1679095799-192.168.5.101-1650160128452
2022-04-17 10:56:29,679 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool BP-1679095799-192.168.5.101-1650160128452 (Datanode Uuid null) service to hadoop002/192.168.5.102:9000 beginning handshake with NN
2022-04-17 10:56:29,720 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool Block pool BP-1679095799-192.168.5.101-1650160128452 (Datanode Uuid null) service to hadoop001/192.168.5.101:9000 successfully registered with NN
2022-04-17 10:56:29,720 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: For namenode hadoop001/192.168.5.101:9000 using DELETEREPORT_INTERVAL of 300000 msec  BLOCKREPORT_INTERVAL of 21600000msec CACHEREPORT_INTERVAL of 10000msec Initial delay: 0msec; heartBeatInterval=3000
2022-04-17 10:56:29,729 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool Block pool BP-1679095799-192.168.5.101-1650160128452 (Datanode Uuid 0f7f895f-84a2-49b5-b9b7-8018c2180dbb) service to hadoop002/192.168.5.102:9000 successfully registered with NN
2022-04-17 10:56:29,729 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: For namenode hadoop002/192.168.5.102:9000 using DELETEREPORT_INTERVAL of 300000 msec  BLOCKREPORT_INTERVAL of 21600000msec CACHEREPORT_INTERVAL of 10000msec Initial delay: 0msec; heartBeatInterval=3000
2022-04-17 10:56:29,731 INFO org.apache.hadoop.hdfs.server.datanode.VolumeScanner: VolumeScanner(/opt/module/hadoop-2.7.3/data/hdfs/data, DS-a42e45e8-e949-4cd9-aef9-f4500f96ab69): no suitable block pools found to scan.  Waiting 1814399939 ms.
2022-04-17 10:56:29,795 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Namenode Block pool BP-1679095799-192.168.5.101-1650160128452 (Datanode Uuid 0f7f895f-84a2-49b5-b9b7-8018c2180dbb) service to hadoop001/192.168.5.101:9000 trying to claim ACTIVE state with txid=59
2022-04-17 10:56:29,795 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Acknowledging ACTIVE Namenode Block pool BP-1679095799-192.168.5.101-1650160128452 (Datanode Uuid 0f7f895f-84a2-49b5-b9b7-8018c2180dbb) service to hadoop001/192.168.5.101:9000
2022-04-17 10:56:29,851 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Successfully sent block report 0x673bbe1240a,  containing 1 storage report(s), of which we sent 1. The reports had 0 total blocks and used 1 RPC(s). This took 0 msec to generate and 46 msecs for RPC and NN processing. Got back no commands.
2022-04-17 10:56:29,852 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Successfully sent block report 0x673bb6616a8,  containing 1 storage report(s), of which we sent 1. The reports had 0 total blocks and used 1 RPC(s). This took 3 msec to generate and 54 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5.
2022-04-17 10:56:29,852 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Got finalize command for block pool BP-1679095799-192.168.5.101-1650160128452
[root@hadoop001 logs]# 