java.net.ConnectException when running a MapReduce job on Hadoop

about云开发

Title: MapReduce error java.net.ConnectException: Connection refused


Author: Wyy_Ck    Time: 2016-10-31 15:13
Title: MapReduce error java.net.ConnectException: Connection refused
I've been fiddling with this for ages. The system is CentOS 7, and I just wanted to run a quick test, as follows (the input file has already been uploaded to /data/input):
hadoop jar /opt/hadoop/hadoop-2.7.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.0.jar  wordcount /data/input /opt/test/result
It fails with the error below. I'm really at a loss, please take a look:


16/10/30 10:47:11 INFO mapreduce.Job: Job job_1477704898495_0008 failed with state FAILED due to: Application application_1477704898495_0008 failed 2 times due to Error launching appattempt_1477704898495_0008_000002. Got exception: java.net.ConnectException: Call From master/10.162.30.129 to localhost:36109 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732)
    at org.apache.hadoop.ipc.Client.call(Client.java:1480)
    at org.apache.hadoop.ipc.Client.call(Client.java:1407)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
    at com.sun.proxy.$Proxy83.startContainers(Unknown Source)
    at org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagementProtocolPBClientImpl.startContainers(ContainerManagementProtocolPBClientImpl.java:96)
    at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.launch(AMLauncher.java:119)
    at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.run(AMLauncher.java:254)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:609)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:707)
    at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:370)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1529)
    at org.apache.hadoop.ipc.Client.call(Client.java:1446)



Thanks, everyone!
 


Author: langke93    Time: 2016-10-31 15:23
There are a lot of possible causes:
1. Make sure the cluster is actually running, and check for zombie processes.
2. Is the firewall turned off?
3. Is permission checking (dfs.permissions) in hdfs-site.xml set to false?


4. How is the hosts file configured?

If you're not sure about any of the four items above, it's best to post the output for all of them (a few quick checks are sketched below).
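
For reference, a minimal sketch of how each of the four items above could be checked from a shell (the Hadoop install path is an assumption based on a typical CentOS 7 / Hadoop 2.7 setup, not something taken from this thread):

[Bash shell]
# 1. Daemons up? Expect NameNode/SecondaryNameNode/ResourceManager on the master
#    and DataNode/NodeManager on each slave; missing or stray entries point to dead or zombie processes.
jps
# 2. Firewall state on CentOS 7.
systemctl status firewalld.service
# 3. Permission checking in hdfs-site.xml (install path is an assumption).
grep -A2 dfs.permissions /opt/hadoop/hadoop-2.7.0/etc/hadoop/hdfs-site.xml
# 4. Hostname-to-IP mapping on every node.
cat /etc/hosts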


Author: Wyy_Ck    Time: 2016-10-31 15:28

langke93 posted at 2016-10-31 15:23:
There are a lot of possible causes:
1. Make sure the cluster is actually running, and check for zombie processes.
2. Is the firewall turned off?


Thanks for the reply. I'll post all the relevant configuration shortly; it just feels like there are so many possible problems that I don't know where to start...
 


Author: Wyy_Ck    Time: 2016-10-31 17:31
1. master:

[Bash shell]
[hadoop@master hadoop]$ jps
91651 NameNode
91891 SecondaryNameNode
111023 Jps
92078 ResourceManager



slave:

[Bash shell]

[root@slave hdfs]# jps
28647 DataNode
28792 NodeManager
44376 Jps



2. The firewall on the CentOS system has already been shut off with systemctl stop firewalld.service:
   

[Shell]

[hadoop@master wordcount]$ systemctl status firewalld.service
firewalld.service
   Loaded: masked (/dev/null)
   Active: inactive (dead) since Wed 2016-10-12 17:27:10 CST; 2 weeks 4 days ago
 Main PID: 816 (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/firewalld.service
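
(The status above is from the master only. Since the refused connection targets a port on a worker node, the same check is worth running on the slave as well; a minimal sketch, assuming the slave is also CentOS 7 with firewalld:)

[Bash shell]
# Run this on the slave node as well
systemctl status firewalld.service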


3. hdfs-site.xml on the master and slave nodes:
   

[XML]

<configuration>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:///data/hadoop/storage/hdfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:///data/hadoop/storage/hdfs/data</value>
    </property>
    <property>
        <name>dfs.datanode.http-address</name>
        <value>10.162.30.162:50075</value>
    </property>
    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>
    <property>
        <name>dfs.secondary.http.address</name>
        <value>master:50090</value>
    </property>
    <property>
        <name>dfs.http.address</name>
        <value>master:50070</value>
    </property>
    <property>
        <name>dfs.datanode.ipc.address</name>
        <value>10.162.30.162:50020</value>
    </property>
    <property>
        <name>dfs.datanode.address</name>
        <value>10.162.30.162:50010</value>
    </property>
</configuration>
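
(A hedged aside, not raised in the thread: dfs.datanode.address and the other DataNode addresses above are pinned to the single IP 10.162.30.162, which only works as long as there is exactly one DataNode at that address. A quick way to confirm which DataNodes actually registered with the NameNode:)

[Bash shell]
# Summarize the DataNodes as seen by the NameNode
hdfs dfsadmin -report | grep -E "Name:|Hostname:"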



4. The hosts file:
   

[Shell]

#127.0.0.1 localhost
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.162.30.129 master
10.162.30.162 slave
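
(A hedged observation, not confirmed anywhere in this thread: the error says the master is calling localhost:36109, which looks like a NodeManager that registered itself under the name localhost. That typically happens when a worker node resolves its own hostname to 127.0.0.1. Assuming the file above is the master's, the same resolution is worth checking on the slave:)

[Bash shell]
# Run on the slave; "slave" / 10.162.30.162 are taken from the hosts file above
hostname
getent hosts "$(hostname)"        # should resolve to 10.162.30.162, not 127.0.0.1
grep -n "$(hostname)" /etc/hosts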

 


Author: Wyy_Ck    Time: 2016-10-31 17:39

Wyy_Ck posted at 2016-10-31 15:28:
Thanks for the reply. I'll post all the relevant configuration shortly; it just feels like there are so many possible problems that I don't know where to start...


I've posted the configuration in my reply below; with this setup it still fails with the same error.


Author: nextuser    Time: 2016-10-31 21:09

Wyy_Ck posted at 2016-10-31 17:39:
I've posted the configuration in my reply below; with this setup it still fails with the same error.


Call From master/10.162.30.129 to localhost:36109
Where is that port configured?


Author: Wyy_Ck    Time: 2016-10-31 21:23

nextuser posted at 2016-10-31 21:09:
Call From master/10.162.30.129 to localhost:36109
Where is that port configured?


I never configured that port, and after every hadoop namenode -format it comes up different. I haven't been able to find where it comes from either.
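
(A hedged note for context, not something stated in the thread: the address in the error is the RPC address of a NodeManager, which the ResourceManager uses to launch the application master. In stock Hadoop this is yarn.nodemanager.address, and it defaults to ${yarn.nodemanager.hostname}:0, i.e. a random ephemeral port chosen each time the NodeManager starts, which would explain why the number keeps changing; the more suspicious part is the hostname localhost rather than the port itself. One way to see what each NodeManager actually registered as:)

[Bash shell]
# List the NodeManagers known to the ResourceManager; a Node-Id of
# localhost:<port> would match the address in the error message.
yarn node -list -all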


Author: nextuser    Time: 2016-11-1 08:30

Wyy_Ck posted at 2016-10-31 21:23:
I never configured that port, and after every hadoop namenode -format it comes up different. I haven't been able to find where ...


The key to the problem has been found. Try to resolve it if you can.
If that doesn't work, post the relevant configuration, especially the HDFS access ports.


Author: Wyy_Ck    Time: 2016-11-1 09:43

nextuser posted at 2016-11-1 08:30:
The key to the problem has been found. Try to resolve it if you can.
If that doesn't work, post the relevant configuration, especially the HDFS access ports.


All of the configuration:
core-site.xml:
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/data/hadoop/storage/tmp</value>
    </property>
</configuration>

hdfs-site.xml was already posted above.

mapred-site.xml:
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>master:10020</value>
    </property>
    <property>
        <name>mapred.job.tracker</name>
        <value>master:9001</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>master:19888</value>
    </property>
</configuration>


yarn-site.xml:

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>master</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>master:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>master:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>master:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>master:8033</value>
    </property>
</configuration>
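
(Not something this thread verifies, but if a fixed NodeManager address were ever wanted instead of the random one discussed above, it can be pinned per worker node in yarn-site.xml; the hostname and port below are illustrative assumptions only:)

[XML]
<!-- Hypothetical addition to the slave's yarn-site.xml: register with an explicit
     hostname and a fixed RPC port instead of the default ${yarn.nodemanager.hostname}:0 -->
<property>
    <name>yarn.nodemanager.hostname</name>
    <value>slave</value>
</property>
<property>
    <name>yarn.nodemanager.address</name>
    <value>${yarn.nodemanager.hostname}:45454</value>
</property>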



The port in the error message has changed yet again. Does this need to be configured somewhere, or is a random port picked otherwise?

16/11/01 09:29:53 INFO mapreduce.Job: Job job_1477905891965_0005 failed with state FAILED due to: Application application_1477905891965_0005 failed 2 times due to Error launching appattempt_1477905891965_0005_000002. Got exception: java.net.ConnectException: Call From master/10.162.30.129 to localhost:47222 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
 




Author: Wyy_Ck    Time: 2016-11-1 10:44

nextuser posted at 2016-11-1 08:30:
The key to the problem has been found. Try to resolve it if you can.
If that doesn't work, post the relevant configuration, especially the HDFS access ports.


Call From master/10.162.30.129 to localhost:50868 failed on connection exception
The port in the error is different every time I run a MapReduce job?


Author: Wyy_Ck    Time: 2016-11-1 11:29
Hold on, I'll post all of the xml configuration.


Author: nextuser    Time: 2016-11-1 15:57

Wyy_Ck posted at 2016-11-1 10:44:
Call From master/10.162.30.129 to localhost:50868 failed on connection exception
The port in the error is different every time I run a MapR ...


The replication factor is not configured:
<property>
    <name>dfs.replication</name>
    <value>3</value>
</property>

Which document did you follow, or did you configure this yourself?
Also, did you configure only two nodes??
Three would be better.
For the configuration you can refer to this:
hadoop(2.x)以hadoop2.2为例完全分布式最新高可靠安装文档 (fully distributed, highly reliable installation guide for Hadoop 2.x, using Hadoop 2.2 as the example)
http://www.aboutyun.com/forum.php?mod=viewthread&tid=7684
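
(For reference, a quick hedged way to see which replication factor is actually in effect; dfs.replication defaults to 3 when it is not set in hdfs-site.xml:)

[Bash shell]
hdfs getconf -confKey dfs.replication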



 


Author: nextuser    Time: 2016-11-1 16:06

Wyy_Ck posted at 2016-11-1 10:44:
Call From master/10.162.30.129 to localhost:50868 failed on connection exception
The port in the error is different every time I run a MapR ...


Also, what does this have to do with formatting the NameNode?
Did the format actually succeed?
Post the output so we can see.
And post a screenshot of the error you're getting.


Author: Wyy_Ck    Time: 2016-11-2 16:25

nextuser posted at 2016-11-1 15:57:
The replication factor,
               dfs.replication


I added another worker node, so there are now two slaves. Without changing anything else, it just worked! Output below; is it OK like this?

[Bash shell]
[hadoop@master ~]$ hadoop jar /opt/hadoop/hadoop-2.7.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.0.jar  wordcount /data/input /data/output

16/11/02 16:17:21 INFO client.RMProxy: Connecting to ResourceManager at master/10.162.30.129:8032

16/11/02 16:17:21 INFO input.FileInputFormat: Total input paths to process : 1

16/11/02 16:17:21 INFO mapreduce.JobSubmitter: number of splits:1

16/11/02 16:17:22 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1478068499295_0002

16/11/02 16:17:22 INFO impl.YarnClientImpl: Submitted application application_1478068499295_0002

16/11/02 16:17:22 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1478068499295_0002/

16/11/02 16:17:22 INFO mapreduce.Job: Running job: job_1478068499295_0002

16/11/02 16:17:27 INFO mapreduce.Job: Job job_1478068499295_0002 running in uber mode : false

16/11/02 16:17:27 INFO mapreduce.Job:  map 0% reduce 0%

16/11/02 16:17:33 INFO mapreduce.Job:  map 100% reduce 0%

16/11/02 16:17:39 INFO mapreduce.Job:  map 100% reduce 100%

16/11/02 16:17:39 INFO mapreduce.Job: Job job_1478068499295_0002 completed successfully

16/11/02 16:17:39 INFO mapreduce.Job: Counters: 49

        File System Counters

                FILE: Number of bytes read=57

                FILE: Number of bytes written=229629

                FILE: Number of read operations=0

                FILE: Number of large read operations=0

                FILE: Number of write operations=0

                HDFS: Number of bytes read=135

                HDFS: Number of bytes written=31

                HDFS: Number of read operations=6

                HDFS: Number of large read operations=0

                HDFS: Number of write operations=2

        Job Counters 

                Launched map tasks=1

                Launched reduce tasks=1

                Data-local map tasks=1

                Total time spent by all maps in occupied slots (ms)=4077

                Total time spent by all reduces in occupied slots (ms)=2946

                Total time spent by all map tasks (ms)=4077

                Total time spent by all reduce tasks (ms)=2946

                Total vcore-seconds taken by all map tasks=4077

                Total vcore-seconds taken by all reduce tasks=2946

                Total megabyte-seconds taken by all map tasks=4174848

                Total megabyte-seconds taken by all reduce tasks=3016704

        Map-Reduce Framework

                Map input records=1

                Map output records=7

                Map output bytes=59

                Map output materialized bytes=57

                Input split bytes=104

                Combine input records=7

                Combine output records=5

                Reduce input groups=5

                Reduce shuffle bytes=57

                Reduce input records=5

                Reduce output records=5

                Spilled Records=10

                Shuffled Maps =1

                Failed Shuffles=0

                Merged Map outputs=1

                GC time elapsed (ms)=131

                CPU time spent (ms)=1580

                Physical memory (bytes) snapshot=435048448

                Virtual memory (bytes) snapshot=4218990592

                Total committed heap usage (bytes)=311951360

        Shuffle Errors

                BAD_ID=0

                CONNECTION=0

                IO_ERROR=0

                WRONG_LENGTH=0

                WRONG_MAP=0

                WRONG_REDUCE=0

        File Input Format Counters 

                Bytes Read=31

        File Output Format Counters 

                Bytes Written=31
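
(A simple way to double-check the result, using the output path from the command above; with a single reducer the output file is normally part-r-00000:)

[Bash shell]
hdfs dfs -ls /data/output
hdfs dfs -cat /data/output/part-r-00000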

 


Author: nextuser    Time: 2016-11-2 16:53

Wyy_Ck posted at 2016-11-2 16:25:
I added another worker node, so there are now two slaves. Without changing anything else, it just worked! Output below; is it OK like this?

[ ...


Yes, that's exactly it.


Author: Wyy_Ck    Time: 2016-11-2 18:12

nextuser posted at 2016-11-2 16:53:
Yes, that's exactly it.


Now it's stuck:

[Bash shell]
[hadoop@master ~]$ hadoop fs -put /opt/test/wordcount/wordcount /data/input
[hadoop@master ~]$ hadoop jar /opt/hadoop/hadoop-2.7.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.0.jar  wordcount /data/input /data/output

16/11/02 18:04:05 INFO client.RMProxy: Connecting to ResourceManager at master/10.162.30.129:8032

16/11/02 18:04:06 INFO input.FileInputFormat: Total input paths to process : 1

16/11/02 18:04:06 INFO mapreduce.JobSubmitter: number of splits:1

16/11/02 18:04:06 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1478080130552_0008

16/11/02 18:04:07 INFO impl.YarnClientImpl: Submitted application application_1478080130552_0008

16/11/02 18:04:07 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1478080130552_0008/

16/11/02 18:04:07 INFO mapreduce.Job: Running job: job_1478080130552_0008

16/11/02 18:04:24 INFO mapreduce.Job: Job job_1478080130552_0008 running in uber mode : false

16/11/02 18:04:24 INFO mapreduce.Job:  map 0% reduce 0%
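
(The thread ends here without a resolution for the hang. A few hedged follow-up checks, with the application ID taken from the log above; none of this comes from the thread itself:)

[Bash shell]
# Is the application still ACCEPTED/RUNNING, and what do its diagnostics say?
yarn application -status application_1478080130552_0008
# Are the NodeManagers healthy, with free memory/vcores to start containers?
yarn node -list
# Container/AM logs (needs yarn.log-aggregation-enable; available after the app finishes or is killed)
yarn logs -applicationId application_1478080130552_0008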

 
