java.lang.InterruptedException
at java.lang.Object.wait(Native Method)
at java.lang.Thread.join(Thread.java:1249)
at java.lang.Thread.join(Thread.java:1323)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeResponder(DFSOutputStream.java:716)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.endBlock(DFSOutputStream.java:476)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:652)
20/07/10 20:38:53 INFO db.DBInputFormat: Using read commited transaction isolation
20/07/10 20:38:53 INFO mapreduce.JobSubmitter: number of splits:1
20/07/10 20:38:53 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1594384655721_0001
20/07/10 20:38:57 INFO impl.YarnClientImpl: Submitted application application_1594384655721_0001
20/07/10 20:38:57 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1594384655721_0001/
20/07/10 20:38:57 INFO mapreduce.Job: Running job: job_1594384655721_0001
1. First I searched online and found advice to raise YARN's memory limits by adding the following to yarn-site.xml:
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>20480</value>
</property>
<property>
<name>yarn.scheduler.minimum-allocation-mb</name>
<value>2048</value>
</property>
<property>
<name>yarn.nodemanager.vmem-pmem-ratio</name>
<value>2.1</value>
</property>
I added these settings, but it made no difference.
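As a quick sanity check on what the three properties above actually mean, here is a sketch of the standard YARN arithmetic (the numbers are taken straight from the snippet; the interpretation is the usual semantics of these properties, not something measured on this cluster):

```python
# Values copied from the yarn-site.xml snippet above.
total_mb = 20480      # yarn.nodemanager.resource.memory-mb
min_alloc_mb = 2048   # yarn.scheduler.minimum-allocation-mb
vmem_ratio = 2.1      # yarn.nodemanager.vmem-pmem-ratio

# At most this many minimum-sized containers fit on one NodeManager.
max_containers = total_mb // min_alloc_mb
print(max_containers)  # 10

# A 2048 MB container may use up to about 4300.8 MB of virtual memory
# before the NodeManager kills it.
vmem_limit_mb = min_alloc_mb * vmem_ratio
print(round(vmem_limit_mb, 1))
```

So the snippet gives the node plenty of container headroom, which is why raising these numbers alone did not unstick the job.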
2. Next I gave the virtual machine 2 GB more RAM directly. That did not help either; the job stayed stuck at "Running job".
3. In etc/hadoop/capacity-scheduler.xml I changed the value 0.1 to 0.5:
<property>
<name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
<value>0.5</value>
<description>
Maximum percent of resources in the cluster which can be used to run
application masters i.e. controls number of concurrent running
applications.
</description>
</property>
I made this change too; still no effect.
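This setting is worth a closer look, because it is a classic cause of a job hanging at "Running job": the ApplicationMaster itself needs a container, and maximum-am-resource-percent caps how much of the cluster all AMs together may use. The cluster size and AM request size below are the stock Hadoop defaults, used purely as illustrative assumptions, not values read from this cluster:

```python
# Assumed Hadoop defaults (check your own cluster's configuration):
cluster_memory_mb = 8192   # yarn.nodemanager.resource.memory-mb default
am_request_mb = 1536       # yarn.app.mapreduce.am.resource.mb default

def am_can_start(am_percent):
    # The ApplicationMaster launches only if its container request
    # fits inside the AM share of cluster memory.
    return cluster_memory_mb * am_percent >= am_request_mb

print(am_can_start(0.1))   # False: budget ~819 MB, AM wants 1536 MB
print(am_can_start(0.5))   # True: budget 4096 MB
```

Under those assumed defaults, 0.1 really would starve the AM and 0.5 would fix it, so the change is reasonable even though it did not help here.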
4. Finally I saw a suggestion to stop running the job on YARN altogether:
https://blog.csdn.net/weixin_44177758/article/details/89893518
Edit mapred-site.xml:
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
and replace the YARN framework setting above with:
<property>
<name>mapreduce.job.tracker</name>
<value>hdfs://192.168.72.10:8001</value>
<final>true</final>
</property>
After rerunning, the job succeeded immediately.
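For what it's worth, what this change most likely does is take the job off YARN entirely: once mapreduce.framework.name is no longer set to yarn, Hadoop falls back to its default value local and runs the job in-process with the LocalJobRunner (mapreduce.job.tracker is a legacy MRv1 property). If that reading is right, an equivalent and more explicit form of the same workaround would be:

```xml
<property>
  <name>mapreduce.framework.name</name>
  <!-- "local" runs the job in a single JVM, bypassing YARN scheduling -->
  <value>local</value>
</property>
```

Note this sidesteps the YARN scheduling problem rather than fixing it, so it is only suitable for small single-machine jobs like this Sqoop import.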