Before the main text
I'm really annoyed today! I wanted to learn something, but the environment kept getting in my way. This wretched Hadoop setup: I was only trying to run single-node mode, and nothing worked. After painstakingly hunting down every missing piece, it still failed in the end. For now I honestly don't want to read the failure logs; it's nearly bedtime, so that's tomorrow's problem. On another note, I have the Chinese translation of Hadoop: The Definitive Guide, 3rd edition (translated at East China Normal University), and while browsing a bookstore after dinner today I spotted a pirated copy of the 4th edition and couldn't resist buying it. If anyone wants a particular page, I can photograph it for you; the whole book is obviously out of the question, go read the ebook yourselves!
Main text
The biggest problem I hit today: I could not find where Java was installed. What a trap! While configuring $HADOOP_HOME/etc/hadoop/hadoop-env.sh (that wretched file), JAVA_HOME was nowhere to be found. I thought I was done for. I searched high and low and finally tracked it down! The steps below come from another blog; thanks to its author!
I also used a second cloud server to personally verify that fairly up-to-date Hadoop tutorial by the foreign author I introduced earlier. See the link below:
【Hadoop學起來】分布式Hadoop的搭建(Ubuntu 17.04)
Also, the download resources in that post have been updated; check the comments there!
Step 1: whereis java
[root@Hadoop Master java]# whereis java
java: /usr/bin/java /etc/java /usr/lib/java /usr/share/java /usr/share/man/man1/java.1.gz
Step 2: ls -lrt /usr/bin/java
[root@Hadoop Master java]# ls -lrt /usr/bin/java
lrwxrwxrwx. 1 root root 22 Nov 2 23:38 /usr/bin/java -> /etc/alternatives/java
Step 3: ls -lrt /etc/alternatives/java
[root@Hadoop Master java]# ls -lrt /etc/alternatives/java
lrwxrwxrwx. 1 root root 46 Nov 2 23:38 /etc/alternatives/java -> /usr/lib/jvm/java
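The three lookups above can be collapsed into a single command: `readlink -f` follows the whole symlink chain at once (this assumes GNU coreutils, which CentOS and Ubuntu both ship):

```shell
# Follow the full symlink chain behind the java launcher in one step:
# /usr/bin/java -> /etc/alternatives/java -> the real JDK location.
if command -v java >/dev/null; then
    readlink -f "$(command -v java)"
    # JAVA_HOME is that path with the trailing /bin/java stripped off:
    dirname "$(dirname "$(readlink -f "$(command -v java)")")"
else
    echo "java is not on PATH"
fi
```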
Step 4: Set the environment variables
Edit /etc/profile with vi and append the following at the end:
export JAVA_HOME=/usr/lib/jvm/java
(Add this line to $HADOOP_HOME/etc/hadoop/hadoop-env.sh as well, and remember to remove the original JAVA_HOME line there.)
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
(If I remember correctly, these lines also belong in ~/.bashrc, which likewise needs to be re-sourced to take effect.)
Step 5: Apply the changes: source /etc/profile
With the steps above, updating the Java environment variables is done.
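After sourcing, it's worth a quick sanity check that the variables actually took effect (the JVM path is the one found on this machine; yours may differ):

```shell
# Reload the profile in the current shell and confirm the variables stuck.
source /etc/profile
echo "JAVA_HOME is: ${JAVA_HOME:-<not set>}"
# If JAVA_HOME points at a real JDK, this prints the version banner:
if [ -x "${JAVA_HOME:-}/bin/java" ]; then
    "$JAVA_HOME/bin/java" -version
else
    echo "JAVA_HOME does not point at a JDK yet"
fi
```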
As the screenshot above shows, when I run the jps command, every component Hadoop needs is listed. In theory that should have settled the matter, but for some reason there are still bugs!!!
root@VM-161-78-ubuntu:/home/ubuntu/hadoop/hadoop# bin/yarn jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.2.jar pi 16 1000
And here is the problem it produced:
Picked up JAVA_TOOL_OPTIONS: -Dfile.encoding=UTF-8
Picked up _JAVA_OPTIONS: -Xmx512M
Number of Maps = 16
Samples per Map = 1000
18/01/10 00:41:47 WARN hdfs.DataStreamer: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/root/QuasiMonteCarlo_1515516105834_522663899/in/part0 could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1728)
at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:265)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2515)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:828)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:507)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:847)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:790)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2486)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1483)
at org.apache.hadoop.ipc.Client.call(Client.java:1429)
at org.apache.hadoop.ipc.Client.call(Client.java:1339)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:440)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:409)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:163)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:346)
at com.sun.proxy.$Proxy11.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.DataStreamer.locateFollowingBlock(DataStreamer.java:1809)
at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1609)
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:704)
That is my run log above. I really don't feel like digging into what went wrong right now. I'll keep reading my book tomorrow and leave these maddening problems for later; my understanding of Hadoop just isn't deep enough yet to spot the issue!
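For what it's worth, the key phrase in the trace is "There are 0 datanode(s) running": the DataNode process either died or never registered with the NameNode, so HDFS has nowhere to put the block. A rough checklist for next time; the log and data paths below are the common defaults and may differ on your install:

```shell
# 1. Is a DataNode process even alive? jps should list one.
jps | grep -i datanode || echo "no DataNode running"

# 2. If it isn't, its log usually names the cause outright
#    (a clusterID mismatch after re-formatting the NameNode is a
#    common culprit on single-node setups).
tail -n 50 "$HADOOP_HOME"/logs/hadoop-*-datanode-*.log 2>/dev/null \
    || echo "no DataNode log found under \$HADOOP_HOME/logs"

# 3. Last-resort fix on a throwaway single-node box: wipe the DataNode
#    storage, reformat, restart. This DESTROYS all HDFS data, so it is
#    only acceptable on a test machine.
# rm -rf /tmp/hadoop-*/dfs/data
# bin/hdfs namenode -format
# sbin/start-dfs.sh
```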
Now look at the same operation on my other cloud server. That one is overseas, so downloading the image from the official site back then was unbelievably fast; but all sorts of weirdness followed, and lately I've even found that its NameNode keeps shutting itself down. YARN, the little beast, behaves exactly the same way!!!
After the main text
That's it! Someone is nagging me to go to sleep. I'll be looking at Hadoop over the next few days; my graduation project is in this area, after all, so it's going to be a long campaign. Consider this the start of a series. Flag planted. Off I go!