Common Hadoop Problems
Exception 1: Connection refused
Jan 15, 2015 4:50:10 PM org.apache.hadoop.ipc.Client$Connection handleConnectionFailure
INFO: Retrying connect to server: /9.123.140.85:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
Jan 15, 2015 4:50:12 PM org.apache.hadoop.ipc.Client$Connection handleConnectionFailure
INFO: Retrying connect to server: /9.123.140.85:9000. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
... (retries 2 through 8 omitted; each entry is identical except for the timestamp and retry count) ...
Jan 15, 2015 4:50:28 PM org.apache.hadoop.ipc.Client$Connection handleConnectionFailure
INFO: Retrying connect to server: /9.123.140.85:9000. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
Solution:
1. ping Linux-hadoop-38 succeeds, but telnet Linux-hadoop-38 9000 fails, which indicates the firewall is blocking the port.
2. On the Linux-hadoop-38 host, stop the firewall with /etc/init.d/iptables stop, which prints:
iptables: Flushing firewall rules: [ OK ]
iptables: Setting chains to policy ACCEPT: filter [ OK ]
iptables: Unloading modules: [ OK ]
3. Restart.
4. If the error still occurs after the firewall is stopped, the cause is most likely a configuration problem.
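Step 1 above can also be done from the client side in plain Java, without telnet; a minimal sketch using java.net.Socket (the host and port below are the ones from the log, adjust as needed):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortCheck {
    // Returns true if a TCP connection to host:port succeeds within timeoutMs.
    // A fast "connection refused" usually means a firewall or a service that
    // is not listening; a timeout usually means the host is unreachable.
    public static boolean isReachable(String host, int port, int timeoutMs) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // The NameNode address from the retry log above.
        System.out.println(isReachable("9.123.140.85", 9000, 3000)
                ? "port open" : "connection refused or timed out");
    }
}
```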
Many online examples set up Hadoop with localhost, but in my case Hadoop runs in pseudo-distributed mode on an Ubuntu VM inside VirtualBox, and the MapReduce program is run from my Windows 7 host. In that setup, pay attention to the following:
1. core-site.xml and mapred-site.xml must use the machine name or IP address, not localhost:
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://hostname or IP:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop/tmp</value>
  </property>
</configuration>
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>hdfs://hostname or IP:9001</value>
  </property>
  <property>
    <name>mapred.local.dir</name>
    <value>/usr/local/hadoop/mapred/local</value>
    <description>Local path for mapred's own intermediate data</description>
  </property>
  <property>
    <name>mapred.system.dir</name>
    <value>/usr/local/hadoop/mapred/system</value>
    <description>Path for mapred system-level data, which can be shared</description>
  </property>
</configuration>
2. The hosts file on the virtual machine also needs an entry:
9.123.140.85 hostname
Exception 2: Server IPC version 9 cannot communicate with client version 4
This is caused by a version mismatch: Hadoop 2.6.0 on Ubuntu versus Hadoop 1.2.1 used by Eclipse on Windows 7.
In fact, starting with Hadoop 2.x there is no hadoop-core.jar any more; you need the hadoop-common and hadoop-mapreduce jars instead.
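If the client project uses Maven, the fix is to depend on artifacts that match the cluster version; a sketch of the relevant dependencies (version 2.6.0 assumed, to match the cluster above):

```xml
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <version>2.6.0</version>
</dependency>
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-mapreduce-client-core</artifactId>
  <version>2.6.0</version>
</dependency>
```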
Exception 3: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
I kept hitting this problem when remotely invoking MapReduce programs on Hadoop 2.6.0 (Ubuntu 14.04) from Windows 7 x64, and it took a long time to resolve.
First, following the existing write-up at http://blog.csdn.net/congcong68/article/details/42043093, download hadoop-common-2.2.0-bin-master.zip from https://codeload.github.com/srccodes/hadoop-common-2.2.0-bin/zip/master, unpack it, and copy everything under hadoop-common-2.2.0-bin-master/bin into the bin directory of your Windows Hadoop 2 installation. The key files are winutils.exe and hadoop.dll; hadoop.dll must also be copied to C:\Windows\System32.
The problem still persisted, and it turned out to be caused by running a 32-bit JDK and Eclipse on Windows 7 x64! Switching both the JDK and Eclipse to x64 fixed it.
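For Hadoop to find winutils.exe and hadoop.dll, it also needs to know the Hadoop home directory. If editing the system PATH or HADOOP_HOME is inconvenient, the hadoop.home.dir system property can be set programmatically before any Hadoop class is loaded; a minimal sketch (the install path below is a hypothetical example):

```java
public class HadoopHome {
    // Point Hadoop's native-code lookup at a local directory whose bin
    // subdirectory contains winutils.exe and hadoop.dll. This must run
    // before the first Hadoop class is loaded.
    public static void configure(String hadoopHome) {
        System.setProperty("hadoop.home.dir", hadoopHome);
    }

    public static void main(String[] args) {
        configure("C:\\hadoop-2.6.0"); // hypothetical install location
        System.out.println(System.getProperty("hadoop.home.dir"));
    }
}
```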
Exception 4: MapReduce progress information is not printed to the console when running in Eclipse
Create a new file named log4j.properties in the project's src directory with the following contents:
log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d %p [%c] - %m%n