Hadoop 2.7.7: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 0 time(s)

[root@hadoop-master ~]# hadoop jar /zwf/hadoop-2.7.7/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.7.jar pi 5 5
Number of Maps  = 5
Samples per Map = 5
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Starting Job
22/09/02 21:48:30 INFO client.RMProxy: Connecting to ResourceManager at hadoop-master/172.18.0.11:8032
22/09/02 21:48:32 INFO ipc.Client: Retrying connect to server: hadoop-master/172.18.0.11:8032. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
22/09/02 21:48:33 INFO ipc.Client: Retrying connect to server: hadoop-master/172.18.0.11:8032. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
22/09/02 21:48:34 INFO ipc.Client: Retrying connect to server: hadoop-master/172.18.0.11:8032. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
22/09/02 21:48:35 INFO ipc.Client: Retrying connect to server: hadoop-master/172.18.0.11:8032. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
22/09/02 21:48:36 INFO ipc.Client: Retrying connect to server: hadoop-master/172.18.0.11:8032. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
22/09/02 21:48:37 INFO ipc.Client: Retrying connect to server: hadoop-master/172.18.0.11:8032. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
22/09/02 21:48:38 INFO ipc.Client: Retrying connect to server: hadoop-master/172.18.0.11:8032. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
22/09/02 21:48:39 INFO ipc.Client: Retrying connect to server: hadoop-master/172.18.0.11:8032. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
22/09/02 21:48:40 INFO ipc.Client: Retrying connect to server: hadoop-master/172.18.0.11:8032. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
22/09/02 21:48:41 INFO ipc.Client: Retrying connect to server: hadoop-master/172.18.0.11:8032. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)

-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
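A quick way to narrow this down before touching any configuration is to check whether the ResourceManager is actually up, whether anything is bound to 8032, and whether the firewall is filtering the port. A minimal diagnostic sketch (assuming jps from the JDK is on the PATH and net-tools/telnet are installed in the image; ss -tlnp works as a substitute for netstat):

jps                              # ResourceManager should appear in the process list
netstat -tlnp | grep 8032        # is anything listening on 8032, and on which address?
firewall-cmd --state             # is firewalld still running and filtering traffic?
telnet hadoop-master 8032        # repeat from a worker node to test reachability between containers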

[root@hadoop-master ~]# 
[root@hadoop-master ~]# 
[root@hadoop-master ~]# hadoop jar /zwf/hadoop-2.7.7/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.7.jar pi 5 5
Number of Maps  = 5
Samples per Map = 5
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Starting Job
22/09/03 02:14:15 INFO client.RMProxy: Connecting to ResourceManager at hadoop-master/172.18.0.11:8032
22/09/03 02:14:20 INFO input.FileInputFormat: Total input paths to process : 5
22/09/03 02:14:20 INFO mapreduce.JobSubmitter: number of splits:5
22/09/03 02:14:26 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1662142450262_0001
22/09/03 02:14:46 INFO impl.YarnClientImpl: Application submission is not finished, submitted application application_1662142450262_0001 is still in NEW
22/09/03 02:14:49 INFO impl.YarnClientImpl: Application submission is not finished, submitted application application_1662142450262_0001 is still in NEW
22/09/03 02:14:51 INFO impl.YarnClientImpl: Application submission is not finished, submitted application application_1662142450262_0001 is still in NEW
22/09/03 02:14:53 INFO impl.YarnClientImpl: Application submission is not finished, submitted application application_1662142450262_0001 is still in NEW
22/09/03 02:14:55 INFO impl.YarnClientImpl: Application submission is not finished, submitted application application_1662142450262_0001 is still in NEW
22/09/03 02:14:57 INFO impl.YarnClientImpl: Application submission is not finished, submitted application application_1662142450262_0001 is still in NEW
22/09/03 02:14:59 INFO impl.YarnClientImpl: Application submission is not finished, submitted application application_1662142450262_0001 is still in NEW

-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
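The second symptom, a submission that sits in the NEW state forever, means the ResourceManager accepted the RPC but cannot move the application forward; in a firewalled cluster one common cause is that no NodeManager ever registered over the resource-tracker port (8031). Two hedged checks, assuming the standard YARN CLI and the default log directory under the Hadoop install:

yarn node -list -all                                             # hadoop-node-02 and hadoop-node-03 should show up as RUNNING
tail -n 50 /zwf/hadoop-2.7.7/logs/yarn-*-resourcemanager-*.log   # look for registration or bind errors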

[root@hadoop-master ~]# systemctl stop firewalld
[root@hadoop-master ~]# 
[root@hadoop-master ~]# 
[root@hadoop-master ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
   Active: inactive (dead) since Sat 2022-09-03 02:52:10 CST; 28s ago
     Docs: man:firewalld(1)
  Process: 85 ExecStart=/usr/sbin/firewalld --nofork --nopid $FIREWALLD_ARGS (code=exited, status=0/SUCCESS)
 Main PID: 85 (code=exited, status=0/SUCCESS)

Sep 03 02:50:26 hadoop-master systemd[1]: Starting firewalld - dynamic firewall daemon...
Sep 03 02:50:40 hadoop-master systemd[1]: Started firewalld - dynamic firewall daemon.
Sep 03 02:50:43 hadoop-master firewalld[85]: WARNING: AllowZoneDrifting is enabled. This is considered an insecure configuration option. It will be removed in a future release. Please consider disabling it now.
Sep 03 02:51:55 hadoop-master systemd[1]: Stopping firewalld - dynamic firewall daemon...
Sep 03 02:52:10 hadoop-master systemd[1]: Stopped firewalld - dynamic firewall daemon.
[root@hadoop-master ~]# 
[root@hadoop-master ~]# 
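Note that systemctl stop only turns firewalld off until the next boot; if the container or host restarts, the firewall comes back and the 8031/8032 connections start failing again. To keep it off permanently (run on every node):

systemctl stop firewalld
systemctl disable firewalld       # do not start it again on boot
systemctl is-enabled firewalld    # should now print "disabled"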
[root@hadoop-master ~]# yarn application -list
22/09/03 02:53:09 INFO client.RMProxy: Connecting to ResourceManager at hadoop-master/172.18.0.11:8032
22/09/03 02:53:11 INFO ipc.Client: Retrying connect to server: hadoop-master/172.18.0.11:8032. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
22/09/03 02:53:12 INFO ipc.Client: Retrying connect to server: hadoop-master/172.18.0.11:8032. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
22/09/03 02:53:13 INFO ipc.Client: Retrying connect to server: hadoop-master/172.18.0.11:8032. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
22/09/03 02:53:14 INFO ipc.Client: Retrying connect to server: hadoop-master/172.18.0.11:8032. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
22/09/03 02:53:15 INFO ipc.Client: Retrying connect to server: hadoop-master/172.18.0.11:8032. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
22/09/03 02:53:16 INFO ipc.Client: Retrying connect to server: hadoop-master/172.18.0.11:8032. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
22/09/03 02:53:17 INFO ipc.Client: Retrying connect to server: hadoop-master/172.18.0.11:8032. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)


[root@hadoop-master ~]# cat /etc/hosts
#127.0.0.1	localhost
::1	localhost ip6-localhost ip6-loopback
fe00::0	ip6-localnet
ff00::0	ip6-mcastprefix
ff02::1	ip6-allnodes
ff02::2	ip6-allrouters
172.18.0.11	hadoop-master
172.18.0.12	hadoop-node-02
172.18.0.13	hadoop-node-03
[root@hadoop-master ~]# 
[root@hadoop-master ~]# 
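Since 127.0.0.1 localhost is commented out here, it is also worth confirming that the master hostname resolves to the cluster-facing address rather than a loopback address, otherwise the ResourceManager can end up bound where the workers cannot reach it. A quick check (getent ships with glibc and should be present in the container):

getent hosts hadoop-master        # expect 172.18.0.11
hostname -i                       # on the master this should also print 172.18.0.11, not 127.0.0.1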



[root@hadoop-master ~]# yarn application -list
22/09/03 02:54:30 INFO client.RMProxy: Connecting to ResourceManager at hadoop-master/172.18.0.11:8032
Total number of applications (application-types: [] and states: [SUBMITTED, ACCEPTED, RUNNING]):0
                Application-Id	    Application-Name	    Application-Type	      User	     Queue	             State	       Final-State	       Progress	                       Tracking-URL
[root@hadoop-master ~]# 
[root@hadoop-master ~]# yarn application -list -appStates ALL
22/09/03 02:55:55 INFO client.RMProxy: Connecting to ResourceManager at hadoop-master/172.18.0.11:8032
Total number of applications (application-types: [] and states: [NEW, NEW_SAVING, SUBMITTED, ACCEPTED, RUNNING, FINISHED, FAILED, KILLED]):0
                Application-Id	    Application-Name	    Application-Type	      User	     Queue	             State	       Final-State	       Progress	                       Tracking-URL
[root@hadoop-master ~]# 
[root@hadoop-master ~]# 
[root@hadoop-master ~]# 
[root@hadoop-master ~]# hadoop jar /zwf/hadoop-2.7.7/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.7.jar pi 5 5
Number of Maps  = 5
Samples per Map = 5
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Starting Job
22/09/03 02:56:33 INFO client.RMProxy: Connecting to ResourceManager at hadoop-master/172.18.0.11:8032
22/09/03 02:56:33 INFO input.FileInputFormat: Total input paths to process : 5
22/09/03 02:56:33 INFO mapreduce.JobSubmitter: number of splits:5
22/09/03 02:56:34 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1662144846001_0001
22/09/03 02:56:34 INFO impl.YarnClientImpl: Submitted application application_1662144846001_0001
22/09/03 02:56:34 INFO mapreduce.Job: The url to track the job: http://hadoop-master:8088/proxy/application_1662144846001_0001/
22/09/03 02:56:34 INFO mapreduce.Job: Running job: job_1662144846001_0001
22/09/03 02:56:42 INFO mapreduce.Job: Job job_1662144846001_0001 running in uber mode : false
22/09/03 02:56:42 INFO mapreduce.Job:  map 0% reduce 0%
22/09/03 02:56:52 INFO mapreduce.Job:  map 80% reduce 0%
22/09/03 02:56:54 INFO mapreduce.Job:  map 100% reduce 0%
22/09/03 02:57:33 INFO mapreduce.Job:  map 100% reduce 100%
22/09/03 02:57:34 INFO mapreduce.Job: Job job_1662144846001_0001 completed successfully
22/09/03 02:57:34 INFO mapreduce.Job: Counters: 49
	File System Counters
		FILE: Number of bytes read=116
		FILE: Number of bytes written=737475
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
		HDFS: Number of bytes read=1340
		HDFS: Number of bytes written=215
		HDFS: Number of read operations=23
		HDFS: Number of large read operations=0
		HDFS: Number of write operations=3
	Job Counters 
		Launched map tasks=5
		Launched reduce tasks=1
		Data-local map tasks=5
		Total time spent by all maps in occupied slots (ms)=43028
		Total time spent by all reduces in occupied slots (ms)=29208
		Total time spent by all map tasks (ms)=43028
		Total time spent by all reduce tasks (ms)=29208
		Total vcore-milliseconds taken by all map tasks=43028
		Total vcore-milliseconds taken by all reduce tasks=29208
		Total megabyte-milliseconds taken by all map tasks=44060672
		Total megabyte-milliseconds taken by all reduce tasks=29908992
	Map-Reduce Framework
		Map input records=5
		Map output records=10
		Map output bytes=90
		Map output materialized bytes=140
		Input split bytes=750
		Combine input records=0
		Combine output records=0
		Reduce input groups=2
		Reduce shuffle bytes=140
		Reduce input records=10
		Reduce output records=0
		Spilled Records=20
		Shuffled Maps =5
		Failed Shuffles=0
		Merged Map outputs=5
		GC time elapsed (ms)=989
		CPU time spent (ms)=3660
		Physical memory (bytes) snapshot=1414168576
		Virtual memory (bytes) snapshot=11690545152
		Total committed heap usage (bytes)=1121976320
	Shuffle Errors
		BAD_ID=0
		CONNECTION=0
		IO_ERROR=0
		WRONG_LENGTH=0
		WRONG_MAP=0
		WRONG_REDUCE=0
	File Input Format Counters 
		Bytes Read=590
	File Output Format Counters 
		Bytes Written=97
Job Finished in 61.233 seconds
Estimated value of Pi is 3.68000000000000000000
[root@hadoop-master ~]# 
[root@hadoop-master ~]# 
[root@hadoop-master ~]# 
[root@hadoop-master ~]# yarn application -list
22/09/03 02:57:56 INFO client.RMProxy: Connecting to ResourceManager at hadoop-master/172.18.0.11:8032
Total number of applications (application-types: [] and states: [SUBMITTED, ACCEPTED, RUNNING]):0
                Application-Id	    Application-Name	    Application-Type	      User	     Queue	             State	       Final-State	       Progress	                       Tracking-URL
[root@hadoop-master ~]# 



[root@hadoop-master ~]# cat /etc/hosts
#127.0.0.1	localhost
::1	localhost ip6-localhost ip6-loopback
fe00::0	ip6-localnet
ff00::0	ip6-mcastprefix
ff02::1	ip6-allnodes
ff02::2	ip6-allrouters
172.18.0.11	hadoop-master
172.18.0.12	hadoop-node-02
172.18.0.13	hadoop-node-03
[root@hadoop-master ~]# 
[root@hadoop-master ~]# 
[root@hadoop-master ~]# cat /zwf/hadoop-2.7.7/etc/hadoop/yarn-site.xml 
<?xml version="1.0"?>
<!-- Licensed under the Apache License, Version 2.0 (the "License"); you 
	may not use this file except in compliance with the License. You may obtain 
	a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless 
	required by applicable law or agreed to in writing, software distributed 
	under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES 
	OR CONDITIONS OF ANY KIND, either express or implied. See the License for 
	the specific language governing permissions and limitations under the License. 
	See accompanying LICENSE file. -->
<configuration>
	<!-- Site specific YARN configuration properties -->
	<property>
		<name>yarn.resourcemanager.hostname</name>
		<value>hadoop-master</value>
	</property>
	
	<property>
		<name>yarn.nodemanager.aux-services</name>
		<value>mapreduce_shuffle</value>
	</property>

	<property>
		<description>The address of the scheduler interface.</description>
		<name>yarn.resourcemanager.scheduler.address</name>
		<value>hadoop-master:8030</value>
	</property>

	<property>
		<name>yarn.resourcemanager.resource-tracker.address</name>
		<value>hadoop-master:8031</value>
	</property>

	<property>
		<description>The address of the applications manager interface in the RM. </description>
		<name>yarn.resourcemanager.address</name>
		<value>hadoop-master:8032</value>
	</property>

	<property>
		<description>The address of the RM admin interface.</description>
		<name>yarn.resourcemanager.admin.address</name>
		<value>hadoop-master:8033</value>
	</property>

	<property>
		<description>The http address of the RM web application.</description>
		<name>yarn.resourcemanager.webapp.address</name>
		<value>hadoop-master:8088</value>
	</property>

	<property>
		<description>The https adddress of the RM web application.</description>
		<name>yarn.resourcemanager.webapp.https.address</name>
		<value>hadoop-master:8090</value>
	</property>

</configuration>
[root@hadoop-master ~]# 
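The addresses configured above (8030, 8031, 8032, 8033, 8088, 8090) are just the YARN defaults written out explicitly and pinned to the hadoop-master hostname, so the configuration itself is unremarkable. A quick way to confirm the ResourceManager is actually bound on those ports (a sketch; ss comes from the iproute package and may need to be installed in a minimal image):

ss -tlnp | grep -E ':(8030|8031|8032|8033|8088)'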

---------------------------- Solution -------------------------------------------------------

Turn off the firewall (firewalld).

Comment out the 127.0.0.1 localhost entry in /etc/hosts (note the leading #), so the file looks like this:

#127.0.0.1	localhost
::1	localhost ip6-localhost ip6-loopback
fe00::0	ip6-localnet
ff00::0	ip6-mcastprefix
ff02::1	ip6-allnodes
ff02::2	ip6-allrouters
172.18.0.11	hadoop-master
172.18.0.12	hadoop-node-02
172.18.0.13	hadoop-node-03

-----------------------------------------------------------------------------------

In fact this has little to do with yarn-site.xml. A lot of people online claim the problem is a mistake in that configuration, but the ports set there are the defaults anyway and were all in place.
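Putting the fix together (a sketch, assuming the same three-node layout and the stock sbin scripts under /zwf/hadoop-2.7.7):

systemctl stop firewalld && systemctl disable firewalld    # on every node
grep '^#127.0.0.1' /etc/hosts                              # on every node: the localhost line must stay commented out
/zwf/hadoop-2.7.7/sbin/stop-yarn.sh                        # on the master: restart YARN so the RM rebinds
/zwf/hadoop-2.7.7/sbin/start-yarn.sh                       # and the NodeManagers re-register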