Problem 1
Problem: org.apache.hadoop.hdfs.server.common.Storage: java.io.IOException: Incompatible clusterIDs in /data/program/hadoop/hdfs/data: namenode clusterID = CID-715d917d-2477-41a8-97fe-6b22ae9bad6e; datanode clusterID = CID-11a94f7e-0ba2-4e00-8057-23de4244f219
2017-02-26 00:26:46,150 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool (Datanode Uuid unassigned) service to /192.168.1.131:9000. Exiting.
java.io.IOException: All specified directories are failed to load.
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.
2017-02-26 00:26:46,152 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool (Datanode Uuid unassigned) service to /192.168.1.131:9000
2017-02-26 00:26:46,255 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool (Datanode Uuid unassigned)
2017-02-26 00:26:48,258 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
2017-02-26 00:26:48,261 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 0
Solution: every namenode format generates a new clusterID, but the datanode's data directory still holds the ID from the previous format. Formatting the namenode wipes the namenode's metadata without clearing the datanode's data, so the datanode fails to start. Either clear all directories under data before each format, or fix the mismatch with one of the following:
Method 1: stop the cluster and delete everything under the problem node's data directory, i.e. the directory configured as dfs.data.dir in hdfs-site.xml, then reformat the namenode.
Method 2: stop the cluster, then edit the datanode's /dfs/data/current/VERSION file so that its clusterID matches the namenode's.
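A minimal sketch of method 2, shown here on throwaway files so it can be run safely; on a real cluster the VERSION files live under the configured name and data directories (e.g. .../name/current/VERSION and .../data/current/VERSION), and the cluster must be stopped first:

```shell
# Stand-ins for the real VERSION files (paths on a real cluster differ).
NN_VERSION=$(mktemp)   # namenode's current/VERSION
DN_VERSION=$(mktemp)   # datanode's current/VERSION
echo 'clusterID=CID-715d917d-2477-41a8-97fe-6b22ae9bad6e' > "$NN_VERSION"
echo 'clusterID=CID-11a94f7e-0ba2-4e00-8057-23de4244f219' > "$DN_VERSION"

# Copy the namenode's clusterID into the datanode's VERSION file.
CID=$(grep '^clusterID=' "$NN_VERSION" | cut -d= -f2)
sed -i "s/^clusterID=.*/clusterID=${CID}/" "$DN_VERSION"
cat "$DN_VERSION"   # now carries the namenode's clusterID
```

After syncing the clusterID, restart the cluster; the datanode keeps its existing blocks, which is why this method is preferable to wiping the data directory when the data matters.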
Problem 2
Problem: passwordless SSH login was configured with the correct steps, yet logging in still prompts for a password.
Solution: start by checking the login log with cat /var/log/secure and analyzing the cause. The log shows the following:
Authentication refused:bad ownership or modes for directory /root/.ssh
This is a permissions problem. For security, sshd checks the ownership and permission bits of the user's directories and files; if they are wrong, passwordless login does not take effect. The .ssh directory should normally be 755 or 700; id_rsa.pub and authorized_keys are normally 644, and the id_rsa private key must be 600.
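The fix can be sketched as the following commands, demonstrated on a scratch directory so the snippet is self-contained; on the real machine apply them to /root/.ssh (or ~/.ssh for the login user):

```shell
# Scratch .ssh directory standing in for the real one.
SSH_DIR=$(mktemp -d)/.ssh
mkdir -p "$SSH_DIR"
touch "$SSH_DIR/id_rsa" "$SSH_DIR/id_rsa.pub" "$SSH_DIR/authorized_keys"

chmod 700 "$SSH_DIR"                                   # .ssh: 700 (755 also accepted)
chmod 600 "$SSH_DIR/id_rsa"                            # private key must be 600
chmod 644 "$SSH_DIR/id_rsa.pub" "$SSH_DIR/authorized_keys"
stat -c '%a %n' "$SSH_DIR" "$SSH_DIR"/*                # verify the modes
```

Also make sure the home directory itself is not group- or world-writable, since sshd's StrictModes check walks the whole path.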
Problem 3
Problem: 2017-02-26 00:37:12,419 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool BP-1117540795-127.0.0.1-1488040411210 (Datanode Uuid null) service to 192.168.1.131/192.168.1.131:9000 Datanode denied communication with namenode because hostname cannot be resolved (ip=192.168.1.133, hostname=192.168.1.133): DatanodeRegistration(0.0.0.0:50010, datanodeUuid=6bc06fed-eec5-482b-9e2b-e74483edb50f, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-56;cid=CID-24c1246c-c1eb-4152-a856-f114c169c884;nsid=1820489334;c=0)
at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatanode(DatanodeManager.
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(NameNodeRpcServer.
at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.registerDatanode(DatanodeProtocolServerSideTranslatorPB.
at org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.
at org.apache.hadoop.ipc.RPC$Server.call(RPC.
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.
at org.apache.hadoop.ipc.Server$Handler.run(Server.
Solution: this happens because the host name mappings were never set up; once the hosts are configured properly, the datanode can register with the namenode.
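Concretely, every node needs an /etc/hosts entry (or DNS record) for every other node. The entries can be sketched as below; the hostnames are illustrative examples, and the snippet writes to a scratch file so it is safe to run, whereas on the real machines the lines go into /etc/hosts:

```shell
# Illustrative host mappings for the nodes in the logs above.
HOSTS_FILE=$(mktemp)   # stands in for /etc/hosts
cat >> "$HOSTS_FILE" <<'EOF'
192.168.1.131   hadoop131
192.168.1.133   hadoop133
EOF
cat "$HOSTS_FILE"
```

Alternatively, dfs.namenode.datanode.registration.ip-hostname-check can be set to false in hdfs-site.xml so the namenode accepts datanodes whose IPs do not resolve, but fixing the host mappings is the cleaner solution.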
Problem 4
Problem: FATAL org.apache.hadoop.yarn.server.nodemanager.NodeManager: Error starting NodeManager
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Recieved SHUTDOWN signal from Resourcemanager ,Registration of NodeManager failed, Message from ResourceManager: NodeManager fromhadoop133 doesn't satisfy minimum allocations, Sending SHUTDOWN signal to the NodeManager.
at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.serviceStart(NodeStatusUpdaterImpl.
at org.apache.hadoop.service.AbstractService.start(AbstractService.
at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceStart(NodeManager.
at org.apache.hadoop.service.AbstractService.start(AbstractService.
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.
Caused by: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Recieved SHUTDOWN signal from Resourcemanager ,Registration of NodeManager failed, Message from ResourceManager: NodeManager fromhadoop133 doesn't satisfy minimum allocations, Sending SHUTDOWN signal to the NodeManager.
at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.registerWithRM(NodeStatusUpdaterImpl.
at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.serviceStart(NodeStatusUpdaterImpl.
... 6 more
Solution: the memory parameter yarn.nodemanager.resource.memory-mb in yarn-site.xml is set too low; it apparently cannot go below 1 GB (2 GB works well).
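The fix can be sketched as a yarn-site.xml fragment; 2048 MB here is an example value, not a tuned recommendation:

```xml
<!-- yarn-site.xml: give the NodeManager at least 1 GB of schedulable memory. -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>2048</value>
</property>
```

The "doesn't satisfy minimum allocations" message refers to the ResourceManager rejecting NodeManagers whose advertised resource is below yarn.scheduler.minimum-allocation-mb (1024 MB by default), which is why values under 1 GB fail.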
Problem 5
Problem: hadoop No FileSystem for scheme: hdfs
Solution: this is most likely caused by a mismatch between the client and server Hadoop versions, or by missing jars on the client classpath; make sure the imported dependencies are complete.
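If the jars are in fact present, a common remaining cause is a fat/shaded jar whose build merged away the META-INF/services FileSystem registrations. In that case, pinning the implementation classes explicitly usually helps; a sketch for core-site.xml (the same keys can be set on the client's Configuration object):

```xml
<!-- core-site.xml: name the FileSystem implementations explicitly.
     hadoop-hdfs must still be on the classpath for the class to load. -->
<property>
  <name>fs.hdfs.impl</name>
  <value>org.apache.hadoop.hdfs.DistributedFileSystem</value>
</property>
<property>
  <name>fs.file.impl</name>
  <value>org.apache.hadoop.fs.LocalFileSystem</value>
</property>
```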
Problem 6
Problem: Hadoop Permission denied: user=GavinCee, access=WRITE, inode="/test":root:supergroup:drwxr-xr-x
Solution: on the server, edit the Hadoop configuration file conf/hdfs-site.xml, find the dfs.permissions property, and set its value to false:
<property>
  <name>dfs.permissions</name>
  <value>false</value>
  <description>If "true", enable permission checking in HDFS. If "false", permission checking is turned off, but all other behavior is unchanged. Switching from one parameter value to the other does not change the mode, owner or group of files or directories.</description>
</property>
Restart the Hadoop processes for the change to take effect.
P.S. I set it this way for convenience in personal development; the careful approach is to create a dedicated user and grant it the proper permissions.
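For ad-hoc testing there is another client-side workaround that avoids disabling permission checking: with simple authentication (no Kerberos), Hadoop clients honor the HADOOP_USER_NAME environment variable, so the client can act as the user that owns the target directory:

```shell
# Make the Hadoop client identify as "root", the owner of /test in the
# error above. Only effective with simple (non-Kerberos) authentication.
export HADOOP_USER_NAME=root
```

Any hadoop/hdfs command run in this shell afterwards is attributed to that user; again, for anything beyond personal development, create a real user and grant permissions instead.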
Problem 7
Problem: copying a local file to HDFS, or creating a new file on HDFS, fails with: Exception in thread "main" org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create directory /test. Name node is in safe mode.
Solution: HDFS enters safe mode when it starts up; until safe mode ends, the contents of the file system can be neither modified nor deleted. Safe mode exists so that, at startup, the system can check the validity of the data blocks on each DataNode and, according to policy, replicate or delete blocks as necessary. Safe mode can also be entered by command at runtime. In practice, modifying or deleting files right after startup triggers this safe-mode error, and usually you only need to wait a short while.
You can wait for HDFS to exit safe mode on its own, or leave it manually with the command hadoop dfsadmin -safemode leave (hdfs dfsadmin -safemode leave on newer releases). Once it reports "Safe mode is OFF", the operation can be retried.