Reposted from: https://www.cnblogs.com/Fordestiny/p/9493433.html
Problem
Uploading a file to Hadoop failed with the following error:
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /home/input/qn_log.txt._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
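For context, this error surfaces on an ordinary HDFS upload; a minimal reproduction, assuming the local file name and the target path taken from the message above:

hdfs dfs -put qn_log.txt /home/input/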
Solution
1凫乖、查看問(wèn)題節(jié)點(diǎn)的進(jìn)程情況:
The DataNode process is not running.
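(The original screenshot is not reproduced here; the standard way to check is jps, which ships with the JDK. The output below is illustrative, not copied from the post:)

jps
# 2722 NameNode
# 3075 SecondaryNameNode
# 3324 Jps
# note: no DataNode line, confirming the process is down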
2汛骂、查看Hadoop datanode.log信息
2018-08-17 05:48:58,076 WARN org.apache.hadoop.hdfs.server.common.Storage: Failed to add storage directory [DISK]file:/usr/local/hadoop2.7/dfs/data/
java.io.IOException: Incompatible clusterIDs in /usr/local/hadoop2.7/dfs/data: namenode clusterID = CID-e1a65f22-f0f6-4423-8c2b-03edd2f30766; datanode clusterID = CID-647259e5-0250-4676-8327-a09f8ccd38a7
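(Where to find this log: by default, Hadoop 2.x writes it under $HADOOP_HOME/logs with the naming pattern hadoop-<user>-datanode-<hostname>.log; the exact path may differ on your installation:)

tail -n 50 $HADOOP_HOME/logs/hadoop-$(whoami)-datanode-$(hostname).log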
The error says that the namenode clusterID and the datanode clusterID do not match!
They are, respectively:
namenode clusterID = CID-e1a65f22-f0f6-4423-8c2b-03edd2f30766
datanode clusterID = CID-647259e5-0250-4676-8327-a09f8ccd38a7
Thinking back, the cause of the problem: after restarting the Docker container, I re-formatted HDFS, which assigned the namenode a new clusterID while the datanode kept the old one, hence the mismatch.
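(For reference, hdfs namenode -format mints a fresh clusterID each time unless one is supplied explicitly; Hadoop 2.x accepts a -clusterid argument for exactly this situation. A sketch, reusing the old ID from the log above:)

hdfs namenode -format -clusterid CID-647259e5-0250-4676-8327-a09f8ccd38a7
# passing the existing clusterID would have avoided the mismatch when re-formatting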
3. Fix:
Approach: change the DataNode's clusterID to match the NameNode's.
(1) Edit the dfs/data/current/VERSION file, setting the clusterID value to the namenode's clusterID.
Go to the directory hadoop-2.7.7/data/tmp/dfs/name/current,
then copy the clusterID from the VERSION file there into the datanode's VERSION file (dfs/data/current/VERSION), overwriting the datanode's clusterID. A scripted version of this edit is sketched below.
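(The same edit as a pair of shell commands, using the name and data directories mentioned in this post; adjust both paths to your own dfs.namenode.name.dir and dfs.datanode.data.dir:)

NN_VERSION=hadoop-2.7.7/data/tmp/dfs/name/current/VERSION
DN_VERSION=/usr/local/hadoop2.7/dfs/data/current/VERSION
CID=$(grep '^clusterID=' "$NN_VERSION" | cut -d= -f2)      # read the namenode clusterID
sed -i "s/^clusterID=.*/clusterID=${CID}/" "$DN_VERSION"   # overwrite the datanode clusterID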
(2) Restart the cluster. Note: do NOT format the namenode again. Commands:
$HADOOP_HOME/sbin/start-dfs.sh
$HADOOP_HOME/sbin/start-yarn.sh
Check the node processes again:
The DataNode process is up and running!
Retry the upload: it works now.
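(To double-check, re-run the upload from the top of this post and list the target directory, assuming the same paths as in the original error:)

hdfs dfs -put qn_log.txt /home/input/
hdfs dfs -ls /home/input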