1. ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /var/lib/hadoop-0.20/cache/hdfs/dfs/data: namenode namespaceID = 240012870; datanode namespaceID = 1462711424
Solution: http://blog.csdn.net/wh62592855/article/details/5752199 (the usual fix is to make the namespaceID in the datanode's dfs.data.dir/current/VERSION file match the namenode's, or to clear the datanode's data directory so it re-registers).
2. org.apache.hadoop.security.AccessControlException: Permission denied: user=xxj
Add the following property to hdfs-site.xml to disable permission checking:
<property>
  <name>dfs.permissions</name>
  <value>false</value>
</property>
3. Invalid Hadoop Runtime specified; please click 'Configure Hadoop install directory' or fill in library location input field
Solution: in Eclipse, open Window -> Preferences -> Map/Reduce and select the Hadoop root directory.
4. Eclipse error: failure to login
Add the following JARs to the Eclipse Hadoop plugin's lib/ directory:
lib/hadoop-core.jar,
lib/commons-cli-1.2.jar,
lib/commons-configuration-1.6.jar,
lib/commons-httpclient-3.0.1.jar,
lib/commons-lang-2.4.jar,
lib/jackson-core-asl-1.0.1.jar,
lib/jackson-mapper-asl-1.0.1.jar
Then modify META-INF/MANIFEST.MF so that Bundle-ClassPath lists them:
Bundle-ClassPath: classes/,lib/hadoop-core.jar,lib/commons-cli-1.2.jar,lib/commons-configuration-1.6.jar,lib/commons-httpclient-3.0.1.jar,lib/commons-lang-2.4.jar,lib/jackson-core-asl-1.0.1.jar,lib/jackson-mapper-asl-1.0.1.jar
5. Hadoop 1.0.0
When Hadoop starts, the TaskTracker fails to come up:
ERROR org.apache.hadoop.mapred.TaskTracker: Can not start task tracker because java.io.IOException: Failed to set permissions of path: \tmp\hadoop-admin\mapred\local\ttprivate to 0700
at org.apache.hadoop.fs.FileUtil.checkReturnValue(FileUtil.java:682)
at org.apache.hadoop.fs.FileUtil.setPermission(FileUtil.java:655)
at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:509)
at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:344)
at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:189)
at org.apache.hadoop.mapred.TaskTracker.initialize(TaskTracker.java:719)
at org.apache.hadoop.mapred.TaskTracker.<init>(TaskTracker.java:1436)
at org.apache.hadoop.mapred.TaskTracker.main(TaskTracker.java:3694)
Running a job from Eclipse gives: Failed to set permissions of path: \tmp\hadoop-admin\mapred\staging\Administrator-1506477061\.staging to 0700
On Windows, the Hadoop TaskTracker cannot start up normally; this affects 0.20.204, 0.20.205, and 1.0.0.
Solutions found online vary widely; some suggest dropping back to a release earlier than 0.20.204, and so on.
I solved it by modifying the code of the checkReturnValue method in the FileUtil class, recompiling, and replacing the original hadoop-core-1.0.0.jar.
Download link for the patched hadoop-core-1.0.0.jar: http://download.csdn.net/detail/java2000_wl/4326323
Bug report: https://issues.apache.org/jira/browse/HADOOP-7682
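For reference, a minimal sketch of the kind of change described above, against org.apache.hadoop.fs.FileUtil in hadoop-core-1.0.0 (the original method throws an IOException when the permission call returns false; the widely circulated workaround for HADOOP-7682 on Windows is to log a warning and continue; the exact surrounding code may differ in your source tree):

// In org.apache.hadoop.fs.FileUtil (hadoop-core-1.0.0): relax checkReturnValue so that a
// failed permission change on Windows is logged instead of aborting TaskTracker startup.
private static void checkReturnValue(boolean rv, File p, FsPermission permission)
    throws IOException {
  if (!rv) {
    // Original behaviour: throw new IOException("Failed to set permissions of path: "
    //     + p + " to " + String.format("%04o", permission.toShort()));
    // Workaround (HADOOP-7682): warn and continue, assuming FileUtil's existing LOG field.
    LOG.warn("Failed to set permissions of path: " + p
        + " to " + String.format("%04o", permission.toShort()));
  }
}

After rebuilding, replace hadoop-core-1.0.0.jar on every node (and in the Eclipse plugin, if used) with the patched jar.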
6. Bad connection to FS. command aborted. exception: Call to dp01-154954/192.168.13.134:9000 failed on connection exception: java.net.ConnectException: Connection refused: no further information
ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory D:\tmp\hadoop-SYSTEM\dfs\name is in an inconsistent
state: storage directory does not exist or is not accessible.
Solution: reformat the NameNode with bin/hadoop namenode -format (be careful not to mistype it).
7. org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /tmp/hadoop-SYSTEM/mapred/system. Name node is in safe mode.
The ratio of reported blocks 0.9412 has not reached the threshold 0.9990. Safe mode will be turned off automatically.
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:1992)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:1972)
at org.apache.hadoop.hdfs.server.namenode.NameNode.delete(NameNode.java:792)
at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1083)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
Solution: bin/hadoop dfsadmin -safemode leave (leave safe mode)
The safemode options are:
enter - enter safe mode
leave - force the NameNode to leave safe mode
get - report whether safe mode is on or off
wait - wait until safe mode ends
8. INFO org.apache.hadoop.hbase.util.FSUtils: Waiting for dfs to exit safe mode...
Solution: bin/hadoop dfsadmin -safemode leave (leave safe mode)
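If you prefer to check or clear safe mode from code rather than the shell (for example before starting HBase), a minimal sketch assuming the Hadoop 1.x client API (DistributedFileSystem.setSafeMode) looks like this; the class name is just an example:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.FSConstants;

public class SafeModeCheck {
    public static void main(String[] args) throws Exception {
        // Uses fs.default.name from the configuration on the classpath (core-site.xml).
        FileSystem fs = FileSystem.get(new Configuration());
        if (fs instanceof DistributedFileSystem) {
            DistributedFileSystem dfs = (DistributedFileSystem) fs;
            // Equivalent to "bin/hadoop dfsadmin -safemode get".
            boolean inSafeMode = dfs.setSafeMode(FSConstants.SafeModeAction.SAFEMODE_GET);
            System.out.println("NameNode in safe mode: " + inSafeMode);
            if (inSafeMode) {
                // Equivalent to "bin/hadoop dfsadmin -safemode leave".
                dfs.setSafeMode(FSConstants.SafeModeAction.SAFEMODE_LEAVE);
            }
        }
    }
}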
9. On Windows 7, ssh will not start. Error: ssh: connect to host localhost port 22: Connection refused
Solution: enter the Windows login user name.
10. Unable to load native-hadoop library for your platform... using builtin-java classes where applicable...
Cause:
Formatting Hadoop multiple times leaves the version information inconsistent; making it consistent again resolves the problem.
Solution:
1. Stop all services: stop-all.sh
2. Format the namenode: hadoop namenode -format
3. Restart all services: start-all.sh
4. Normal operation can now resume.
11. After entering the command bin/hadoop fs -put ~/input /in, the following error is reported:
There are 0 datanode(s) running and no node(s) are excluded in this operation.
This problem bothered me for a long time, and I searched all over before finally solving it. In my case the cause was: after formatting DFS the first time, I started and used Hadoop, then later ran the format command again (hdfs namenode -format). At that point the namenode's clusterID is regenerated while the datanode's clusterID stays unchanged.
Solution: open the datanode and namenode directories configured in hdfs-site.xml, open the VERSION file inside each one's current folder, and you will see that the clusterID values in the two VERSION files differ. Change the clusterID in the datanode's VERSION file to match the namenode's, then restart DFS (run start-dfs.sh); running jps afterwards shows the datanode is running normally again.
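To double-check the mismatch before editing anything, a small sketch like the following reads clusterID from both VERSION files and compares them (the two paths are placeholders for whatever dfs.namenode.name.dir and dfs.datanode.data.dir point to on your machine; VERSION is an ordinary Java properties file):

import java.io.FileReader;
import java.util.Properties;

public class ClusterIdCheck {
    // Reads the clusterID property from <storageDir>/current/VERSION.
    static String clusterId(String storageDir) throws Exception {
        Properties props = new Properties();
        try (FileReader reader = new FileReader(storageDir + "/current/VERSION")) {
            props.load(reader);
        }
        return props.getProperty("clusterID");
    }

    public static void main(String[] args) throws Exception {
        // Placeholder paths: substitute the directories configured in hdfs-site.xml.
        String nn = clusterId("/data/hadoop/dfs/name");
        String dn = clusterId("/data/hadoop/dfs/data");
        System.out.println("namenode clusterID = " + nn);
        System.out.println("datanode clusterID = " + dn);
        System.out.println(nn != null && nn.equals(dn)
                ? "clusterIDs match"
                : "clusterIDs differ: edit the datanode VERSION file to match the namenode");
    }
}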