INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:10020. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)

When running a user-written MapReduce program on Hadoop, the job fails with the following message:
INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:10020. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
The complete output is as follows:

[walker001@walker001 test_0318]$ hadoop jar walker_0318-0.0.1-SNAPSHOT.jar walker_0318.WordCount /input /outf
20/03/18 11:18:43 INFO Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
20/03/18 11:18:45 INFO client.RMProxy: Connecting to ResourceManager at walker001/192.168.220.129:8032
20/03/18 11:19:11 INFO input.FileInputFormat: Total input files to process : 1
20/03/18 11:19:21 INFO mapreduce.JobSubmitter: number of splits:1
20/03/18 11:19:21 INFO Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
20/03/18 11:19:22 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1584497824323_0001
20/03/18 11:19:34 INFO impl.YarnClientImpl: Submitted application application_1584497824323_0001
20/03/18 11:19:34 INFO mapreduce.Job: The url to track the job: http://walker001:8088/proxy/application_1584497824323_0001/
20/03/18 11:19:34 INFO mapreduce.Job: Running job: job_1584497824323_0001
20/03/18 11:20:56 INFO mapreduce.Job: Job job_1584497824323_0001 running in uber mode : false
20/03/18 11:20:57 INFO mapreduce.Job:  map 0% reduce 0%
20/03/18 11:21:17 INFO mapreduce.Job:  map 100% reduce 0%
20/03/18 11:21:37 INFO mapreduce.Job:  map 100% reduce 100%
20/03/18 11:21:51 INFO mapreduce.Job: Job job_1584497824323_0001 completed successfully
20/03/18 11:21:59 INFO mapreduce.Job: Counters: 49
        File System Counters
                FILE: Number of bytes read=1836
                FILE: Number of bytes written=280595
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=1469
                HDFS: Number of bytes written=1306
                HDFS: Number of read operations=6
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=2
        Job Counters 
                Launched map tasks=1
                Launched reduce tasks=1
                Data-local map tasks=1
                Total time spent by all maps in occupied slots (ms)=14422
                Total time spent by all reduces in occupied slots (ms)=18015
                Total time spent by all map tasks (ms)=14422
                Total time spent by all reduce tasks (ms)=18015
                Total vcore-milliseconds taken by all map tasks=14422
                Total vcore-milliseconds taken by all reduce tasks=18015
                Total megabyte-milliseconds taken by all map tasks=14768128
                Total megabyte-milliseconds taken by all reduce tasks=18447360
        Map-Reduce Framework
                Map input records=31
                Map output records=179
                Map output bytes=2055
                Map output materialized bytes=1836
                Input split bytes=103
                Combine input records=179
                Combine output records=131
                Reduce input groups=131
                Reduce shuffle bytes=1836
                Reduce input records=131
                Reduce output records=131
                Spilled Records=262
                Shuffled Maps =1
                Failed Shuffles=0
                Merged Map outputs=1
                GC time elapsed (ms)=227
                CPU time spent (ms)=1690
                Physical memory (bytes) snapshot=275431424
                Virtual memory (bytes) snapshot=3772268544
                Total committed heap usage (bytes)=138723328
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters 
                Bytes Read=1366
        File Output Format Counters 
                Bytes Written=1306
20/03/18 11:22:01 INFO mapred.ClientServiceDelegate: Application state is completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
20/03/18 11:22:03 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:10020. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
20/03/18 11:22:04 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:10020. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
20/03/18 11:22:05 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:10020. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
20/03/18 11:22:06 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:10020. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
20/03/18 11:22:07 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:10020. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
20/03/18 11:22:08 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:10020. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
20/03/18 11:22:09 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:10020. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
20/03/18 11:22:10 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:10020. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
20/03/18 11:22:11 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:10020. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
20/03/18 11:22:12 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:10020. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
20/03/18 11:22:12 WARN ipc.Client: Failed to connect to server: 0.0.0.0/0.0.0.0:10020: retries get failed due to exceeded maximum allowed retries number: 10
java.net.ConnectException: Connection refused
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
        at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
        at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:682)
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:778)
        at org.apache.hadoop.ipc.Client$Connection.access$3500(Client.java:410)
        at org.apache.hadoop.ipc.Client.getConnection(Client.java:1544)
        at org.apache.hadoop.ipc.Client.call(Client.java:1375)
        at org.apache.hadoop.ipc.Client.call(Client.java:1339)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
        at com.sun.proxy.$Proxy15.getJobReport(Unknown Source)
        at org.apache.hadoop.mapreduce.v2.api.impl.pb.client.MRClientProtocolPBClientImpl.getJobReport(MRClientProtocolPBClientImpl.java:133)
        at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.mapred.ClientServiceDelegate.invoke(ClientServiceDelegate.java:325)
        at org.apache.hadoop.mapred.ClientServiceDelegate.getJobStatus(ClientServiceDelegate.java:429)
        at org.apache.hadoop.mapred.YARNRunner.getJobStatus(YARNRunner.java:617)
        at org.apache.hadoop.mapreduce.Job$1.run(Job.java:323)
        at org.apache.hadoop.mapreduce.Job$1.run(Job.java:320)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
        at org.apache.hadoop.mapreduce.Job.updateStatus(Job.java:320)
        at org.apache.hadoop.mapreduce.Job.isSuccessful(Job.java:616)
        at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1374)
        at walker_0318.WordCount.main(WordCount.java:60)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:234)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
20/03/18 11:22:12 INFO mapred.ClientServiceDelegate: Application state is completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
20/03/18 11:22:13 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:10020. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
20/03/18 11:22:14 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:10020. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
20/03/18 11:22:15 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:10020. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
20/03/18 11:22:16 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:10020. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
20/03/18 11:22:17 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:10020. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
20/03/18 11:22:18 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:10020. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
20/03/18 11:22:19 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:10020. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
20/03/18 11:22:20 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:10020. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
20/03/18 11:22:21 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:10020. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
20/03/18 11:22:22 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:10020. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
20/03/18 11:22:22 WARN ipc.Client: Failed to connect to server: 0.0.0.0/0.0.0.0:10020: retries get failed due to exceeded maximum allowed retries number: 10
java.net.ConnectException: Connection refused
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
        at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
        at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:682)
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:778)
        at org.apache.hadoop.ipc.Client$Connection.access$3500(Client.java:410)
        at org.apache.hadoop.ipc.Client.getConnection(Client.java:1544)
        at org.apache.hadoop.ipc.Client.call(Client.java:1375)
        at org.apache.hadoop.ipc.Client.call(Client.java:1339)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
        at com.sun.proxy.$Proxy15.getJobReport(Unknown Source)
        at org.apache.hadoop.mapreduce.v2.api.impl.pb.client.MRClientProtocolPBClientImpl.getJobReport(MRClientProtocolPBClientImpl.java:133)
        at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.mapred.ClientServiceDelegate.invoke(ClientServiceDelegate.java:325)
        at org.apache.hadoop.mapred.ClientServiceDelegate.getJobStatus(ClientServiceDelegate.java:429)
        at org.apache.hadoop.mapred.YARNRunner.getJobStatus(YARNRunner.java:617)
        at org.apache.hadoop.mapreduce.Job$1.run(Job.java:323)
        at org.apache.hadoop.mapreduce.Job$1.run(Job.java:320)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
        at org.apache.hadoop.mapreduce.Job.updateStatus(Job.java:320)
        at org.apache.hadoop.mapreduce.Job.isSuccessful(Job.java:616)
        at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1374)
        at walker_0318.WordCount.main(WordCount.java:60)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:234)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
20/03/18 11:22:22 INFO mapred.ClientServiceDelegate: Application state is completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
20/03/18 11:22:23 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:10020. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
20/03/18 11:22:24 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:10020. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
20/03/18 11:22:25 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:10020. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
20/03/18 11:22:26 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:10020. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
20/03/18 11:22:27 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:10020. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
20/03/18 11:22:28 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:10020. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
20/03/18 11:22:29 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:10020. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
20/03/18 11:22:30 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:10020. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
20/03/18 11:22:31 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:10020. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
20/03/18 11:22:32 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:10020. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
20/03/18 11:22:32 WARN ipc.Client: Failed to connect to server: 0.0.0.0/0.0.0.0:10020: retries get failed due to exceeded maximum allowed retries number: 10
java.net.ConnectException: Connection refused
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
        at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
        at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:682)
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:778)
        at org.apache.hadoop.ipc.Client$Connection.access$3500(Client.java:410)
        at org.apache.hadoop.ipc.Client.getConnection(Client.java:1544)
        at org.apache.hadoop.ipc.Client.call(Client.java:1375)
        at org.apache.hadoop.ipc.Client.call(Client.java:1339)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
        at com.sun.proxy.$Proxy15.getJobReport(Unknown Source)
        at org.apache.hadoop.mapreduce.v2.api.impl.pb.client.MRClientProtocolPBClientImpl.getJobReport(MRClientProtocolPBClientImpl.java:133)
        at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.mapred.ClientServiceDelegate.invoke(ClientServiceDelegate.java:325)
        at org.apache.hadoop.mapred.ClientServiceDelegate.getJobStatus(ClientServiceDelegate.java:429)
        at org.apache.hadoop.mapred.YARNRunner.getJobStatus(YARNRunner.java:617)
        at org.apache.hadoop.mapreduce.Job$1.run(Job.java:323)
        at org.apache.hadoop.mapreduce.Job$1.run(Job.java:320)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
        at org.apache.hadoop.mapreduce.Job.updateStatus(Job.java:320)
        at org.apache.hadoop.mapreduce.Job.isSuccessful(Job.java:616)
        at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1374)
        at walker_0318.WordCount.main(WordCount.java:60)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:234)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
Exception in thread "main" java.io.IOException: java.net.ConnectException: Call From walker001/192.168.220.129 to 0.0.0.0:10020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
        at org.apache.hadoop.mapred.ClientServiceDelegate.invoke(ClientServiceDelegate.java:344)
        at org.apache.hadoop.mapred.ClientServiceDelegate.getJobStatus(ClientServiceDelegate.java:429)
        at org.apache.hadoop.mapred.YARNRunner.getJobStatus(YARNRunner.java:617)
        at org.apache.hadoop.mapreduce.Job$1.run(Job.java:323)
        at org.apache.hadoop.mapreduce.Job$1.run(Job.java:320)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
        at org.apache.hadoop.mapreduce.Job.updateStatus(Job.java:320)
        at org.apache.hadoop.mapreduce.Job.isSuccessful(Job.java:616)
        at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1374)
        at walker_0318.WordCount.main(WordCount.java:60)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:234)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
Caused by: java.net.ConnectException: Call From walker001/192.168.220.129 to 0.0.0.0:10020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:801)
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732)
        at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1487)
        at org.apache.hadoop.ipc.Client.call(Client.java:1429)
        at org.apache.hadoop.ipc.Client.call(Client.java:1339)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
        at com.sun.proxy.$Proxy15.getJobReport(Unknown Source)
        at org.apache.hadoop.mapreduce.v2.api.impl.pb.client.MRClientProtocolPBClientImpl.getJobReport(MRClientProtocolPBClientImpl.java:133)
        at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.mapred.ClientServiceDelegate.invoke(ClientServiceDelegate.java:325)
        ... 17 more
Caused by: java.net.ConnectException: 拒绝连接
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
        at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
        at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:682)
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:778)
        at org.apache.hadoop.ipc.Client$Connection.access$3500(Client.java:410)
        at org.apache.hadoop.ipc.Client.getConnection(Client.java:1544)
        at org.apache.hadoop.ipc.Client.call(Client.java:1375)
        ... 26 more

This is caused by the JobHistoryServer not running: after the job completes, the client is redirected to the job history server (port 10020 by default) to fetch the final job report, and the connection is refused because nothing is listening on that port. Note that the job itself finished successfully; only the status lookup afterwards fails.
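The target address comes from mapreduce.jobhistory.address, which defaults to 0.0.0.0:10020 when it is not set; that is why the client retries against 0.0.0.0. Besides starting the daemon, on a multi-node cluster it is usually worth pinning the history server to a concrete host in mapred-site.xml. A minimal sketch, using the standard property names and the walker001 host from the log above:

<!-- mapred-site.xml: tell MapReduce clients where the JobHistoryServer runs (sketch) -->
<property>
    <name>mapreduce.jobhistory.address</name>
    <value>walker001:10020</value>   <!-- RPC address used by job clients -->
</property>
<property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>walker001:19888</value>   <!-- history server web UI -->
</property>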

Checking the running daemons with jps:

2321 DataNode
2850 NodeManager
2551 SecondaryNameNode
5417 Jps
2190 NameNode
2718 ResourceManager

There is no JobHistoryServer process, so start it with the following command:

mr-jobhistory-daemon.sh start historyserver
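(mr-jobhistory-daemon.sh is the Hadoop 2.x script; on Hadoop 3.x the equivalent, non-deprecated form is mapred --daemon start historyserver.) To confirm the daemon is actually listening on the RPC port, a quick check such as the one below can be used (assuming ss is available; netstat -tlnp works as well):

ss -lnt | grep 10020   # the history server RPC port should now be in LISTEN state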

Checking with jps again:

2321 DataNode
2850 NodeManager
2551 SecondaryNameNode
5466 JobHistoryServer
5499 Jps
2190 NameNode
2718 ResourceManager

The JobHistoryServer has started successfully.
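As an extra sanity check (19888 is the default JobHistoryServer web UI port; the hostname is taken from this cluster), the history web UI should now respond:

curl -s -o /dev/null -w "%{http_code}\n" http://walker001:19888/jobhistory   # expect 200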

Now run the MapReduce job again:

[walker001@walker001 test_0318]$ hadoop jar walker_0318-0.0.1-SNAPSHOT.jar walker_0318.WordCount /input /outfff
20/03/18 12:01:14 INFO Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
20/03/18 12:01:16 INFO client.RMProxy: Connecting to ResourceManager at walker001/192.168.220.129:8032
20/03/18 12:01:25 INFO input.FileInputFormat: Total input files to process : 1
20/03/18 12:01:29 INFO mapreduce.JobSubmitter: number of splits:1
20/03/18 12:01:29 INFO Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
20/03/18 12:01:31 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1584497824323_0003
20/03/18 12:01:34 INFO impl.YarnClientImpl: Submitted application application_1584497824323_0003
20/03/18 12:01:35 INFO mapreduce.Job: The url to track the job: http://walker001:8088/proxy/application_1584497824323_0003/
20/03/18 12:01:35 INFO mapreduce.Job: Running job: job_1584497824323_0003
20/03/18 12:02:14 INFO mapreduce.Job: Job job_1584497824323_0003 running in uber mode : false
20/03/18 12:02:14 INFO mapreduce.Job:  map 0% reduce 0%
20/03/18 12:02:39 INFO mapreduce.Job:  map 100% reduce 0%
20/03/18 12:02:53 INFO mapreduce.Job:  map 100% reduce 100%
20/03/18 12:03:01 INFO mapreduce.Job: Job job_1584497824323_0003 completed successfully
20/03/18 12:03:04 INFO mapreduce.Job: Counters: 49
        File System Counters
                FILE: Number of bytes read=1836
                FILE: Number of bytes written=280599
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=1469
                HDFS: Number of bytes written=1306
                HDFS: Number of read operations=6
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=2
        Job Counters 
                Launched map tasks=1
                Launched reduce tasks=1
                Data-local map tasks=1
                Total time spent by all maps in occupied slots (ms)=18774
                Total time spent by all reduces in occupied slots (ms)=8052
                Total time spent by all map tasks (ms)=18774
                Total time spent by all reduce tasks (ms)=8052
                Total vcore-milliseconds taken by all map tasks=18774
                Total vcore-milliseconds taken by all reduce tasks=8052
                Total megabyte-milliseconds taken by all map tasks=19224576
                Total megabyte-milliseconds taken by all reduce tasks=8245248
        Map-Reduce Framework
                Map input records=31
                Map output records=179
                Map output bytes=2055
                Map output materialized bytes=1836
                Input split bytes=103
                Combine input records=179
                Combine output records=131
                Reduce input groups=131
                Reduce shuffle bytes=1836
                Reduce input records=131
                Reduce output records=131
                Spilled Records=262
                Shuffled Maps =1
                Failed Shuffles=0
                Merged Map outputs=1
                GC time elapsed (ms)=249
                CPU time spent (ms)=1500
                Physical memory (bytes) snapshot=288489472
                Virtual memory (bytes) snapshot=3772272640
                Total committed heap usage (bytes)=141586432
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters 
                Bytes Read=1366
        File Output Format Counters 
                Bytes Written=1306

The job now runs to completion normally, and the connection retries to 0.0.0.0:10020 no longer appear.
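Finally, the word-count results can be read back from HDFS (the output path /outfff comes from the command above; the file name part-r-00000 assumes the default single reducer):

hadoop fs -cat /outfff/part-r-00000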
