Error: Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster
While configuring YARN ResourceManager high availability, I ran into a pitfall:
Error: Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster
I tried the various fixes suggested online, but running the example job always failed with this error:
cd $HADOOP_HOME/share/hadoop/mapreduce; hadoop jar hadoop-mapreduce-examples-3.2.0.jar pi 50 100000;ls;
Something had to be misconfigured, so I wrote a small Java class to first verify whether the class was actually on the classpath.
[hadoop@hadoop-namenode1 test]$ more FindClass.java
public class FindClass {
    public static void main(String[] args) {
        ClassLoader classLoader = FindClass.class.getClassLoader();
        try {
            Class<?> aClass = classLoader.loadClass("org.apache.hadoop.mapreduce.v2.app.MRAppMaster");
            System.out.println("Found class: aClass.getName() = " + aClass.getName());
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
        }
    }
}
javac FindClass.java
java FindClass
The run behaved as shown in the screenshot above (not reproduced here). For comparison, here are the classpaths reported by Hadoop and YARN:
[hadoop@hadoop-namenode1 test]$ hadoop classpath
WARNING: HADOOP_PREFIX has been replaced by HADOOP_HOME. Using value of HADOOP_PREFIX.
/var/server/hadoop/etc/hadoop:/var/server/hadoop/share/hadoop/common/lib/*:/var/server/hadoop/share/hadoop/common/*:/var/server/hadoop/share/hadoop/hdfs:/var/server/hadoop/share/hadoop/hdfs/lib/*:/var/server/hadoop/share/hadoop/hdfs/*:/var/server/hadoop/share/hadoop/mapreduce/lib/*:/var/server/hadoop/share/hadoop/mapreduce/*:/var/server/hadoop/share/hadoop/yarn:/var/server/hadoop/share/hadoop/yarn/lib/*:/var/server/hadoop/share/hadoop/yarn/*
[hadoop@hadoop-namenode1 test]$ yarn classpath
WARNING: HADOOP_PREFIX has been replaced by HADOOP_HOME. Using value of HADOOP_PREFIX.
WARNING: YARN_LOG_DIR has been replaced by HADOOP_LOG_DIR. Using value of YARN_LOG_DIR.
/var/server/hadoop/etc/hadoop:/var/server/hadoop/share/hadoop/common/lib/*:/var/server/hadoop/share/hadoop/common/*:/var/server/hadoop/share/hadoop/hdfs:/var/server/hadoop/share/hadoop/hdfs/lib/*:/var/server/hadoop/share/hadoop/hdfs/*:/var/server/hadoop/share/hadoop/mapreduce/lib/*:/var/server/hadoop/share/hadoop/mapreduce/*:/var/server/hadoop/share/hadoop/yarn:/var/server/hadoop/share/hadoop/yarn/lib/*:/var/server/hadoop/share/hadoop/yarn/*
[hadoop@hadoop-namenode1 test]$
export CLASSPATH=$(hadoop classpath):.
java FindClass
This confirms that org.apache.hadoop.mapreduce.v2.app.MRAppMaster really is on the classpath.
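Beyond checking that a class loads, it can also help to see which classpath entry actually supplies it. A minimal sketch of that idea (the class name `WhichJar` and the `getResource` approach are my own illustration, not part of the original post):

```java
// Prints the location (jar or directory) a class would be loaded from.
public class WhichJar {
    public static String locate(String className) {
        // Convert a binary class name to a resource path, e.g.
        // "java.util.ArrayList" -> "java/util/ArrayList.class"
        String resource = className.replace('.', '/') + ".class";
        java.net.URL url = ClassLoader.getSystemClassLoader().getResource(resource);
        return url == null ? null : url.toString();
    }

    public static void main(String[] args) {
        String name = args.length > 0
                ? args[0]
                : "org.apache.hadoop.mapreduce.v2.app.MRAppMaster";
        System.out.println(name + " -> " + locate(name));
    }
}
```

With CLASSPATH exported as above, the printed URL should point into one of the mapreduce jars under share/hadoop/mapreduce; a null result means the class is not visible at all.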
So I took a careful look at yarn-site.xml
and found the yarn.application.classpath property defined twice: the later occurrence was empty and overrode the earlier, valid one. The correct configuration is as follows:
[hadoop@hadoop-namenode1 hadoop]$ more yarn-site.xml
<?xml version="1.0"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<configuration>
<property>
<name>yarn.application.classpath</name>
<value>/var/server/hadoop/etc/hadoop:/var/server/hadoop/share/hadoop/common/lib/*:/var/server/hadoop/share/hadoop/common/*:/var/server/hadoop/share/hadoop/hdfs:/var/server/hadoop/share/hadoop/hdfs/lib/*:/var/server/hadoop/share/hadoop/hdfs/*:/var/server/hadoop/share/hadoop/mapreduce/lib/*:/var/server/hadoop/share/hadoop/mapreduce/*:/var/server/hadoop/share/hadoop/yarn:/var/server/hadoop/share/hadoop/yarn/lib/*:/var/server/hadoop/share/hadoop/yarn/*</value>
</property>
<!-- Site specific YARN configuration properties -->
<!-- Whether to enable log aggregation. When an application finishes, the logs of each of its containers are collected and moved to a file system such as HDFS, which makes them much easier to inspect in one place. -->
<!-- The destination is controlled by "yarn.nodemanager.remote-app-log-dir" and "yarn.nodemanager.remote-app-log-dir-suffix". -->
<!-- Users can then access the logs through the Application Timeline Server. -->
<property>
<name>yarn.log-aggregation-enable</name>
<value>true</value>
</property>
<!-- Enable ResourceManager HA (default: false) -->
<property>
<name>yarn.resourcemanager.ha.enabled</name>
<value>true</value>
</property>
<!-- Cluster ID; ensures this RM does not become active for another cluster -->
<property>
<name>yarn.resourcemanager.cluster-id</name>
<value>mycluster</value>
</property>
<!-- Logical IDs for the ResourceManagers -->
<property>
<name>yarn.resourcemanager.ha.rm-ids</name>
<value>rm1,rm2,rm3</value>
</property>
<!-- ResourceManager on the first machine -->
<property>
<name>yarn.resourcemanager.hostname.rm1</name>
<value>hadoop-namenode1</value>
</property>
<!-- ResourceManager on the second machine -->
<property>
<name>yarn.resourcemanager.hostname.rm2</name>
<value>hadoop-namenode2</value>
</property>
<!-- ResourceManager on the third machine -->
<property>
<name>yarn.resourcemanager.hostname.rm3</name>
<value>hadoop-namenode3</value>
</property>
<!-- RPC addresses of the ResourceManager on the first machine -->
<property>
<name>yarn.resourcemanager.address.rm1</name>
<value>hadoop-namenode1:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address.rm1</name>
<value>hadoop-namenode1:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address.rm1</name>
<value>hadoop-namenode1:8031</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address.rm1</name>
<value>hadoop-namenode1:8033</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address.rm1</name>
<value>hadoop-namenode1:8088</value>
</property>
<!-- RPC addresses of the ResourceManager on the second machine -->
<property>
<name>yarn.resourcemanager.address.rm2</name>
<value>hadoop-namenode2:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address.rm2</name>
<value>hadoop-namenode2:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address.rm2</name>
<value>hadoop-namenode2:8031</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address.rm2</name>
<value>hadoop-namenode2:8033</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address.rm2</name>
<value>hadoop-namenode2:8088</value>
</property>
<!-- RPC addresses of the ResourceManager on the third machine -->
<property>
<name>yarn.resourcemanager.address.rm3</name>
<value>hadoop-namenode3:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address.rm3</name>
<value>hadoop-namenode3:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address.rm3</name>
<value>hadoop-namenode3:8031</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address.rm3</name>
<value>hadoop-namenode3:8033</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address.rm3</name>
<value>hadoop-namenode3:8088</value>
</property>
<!-- Enable ResourceManager state recovery after a restart -->
<property>
<name>yarn.resourcemanager.recovery.enabled</name>
<value>true</value>
</property>
<!-- Set this to rm1 on node1, rm2 on node2, and so on. Note: it is common to copy the finished config file to every machine, but this value MUST be changed on each of the other ResourceManager machines; machines that do not run a ResourceManager should omit it. -->
<property>
<name>yarn.resourcemanager.ha.id</name>
<value>rm1</value>
<description>If we want to launch more than one RM in single node, we need this configuration</description>
</property>
<!-- Class used to persist ResourceManager state; here the ZooKeeper-based store -->
<property>
<name>yarn.resourcemanager.store.class</name>
<value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
</property>
<property>
<name>hadoop.zk.address</name>
<value>10.6.1.51:2181,10.6.1.52:2181,10.6.1.53:2181</value>
<description>For multiple zk services, separate them with comma</description>
</property>
<!-- Enable automatic failover between ResourceManagers -->
<property>
<name>yarn.resourcemanager.ha.automatic-failover.enabled</name>
<value>true</value>
<description>Enable automatic failover; By default, it is enabled only when HA is enabled.</description>
</property>
<property>
<name>yarn.client.failover-proxy-provider</name>
<value>org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider</value>
</property>
<!-- Number of CPU vcores this NodeManager can allocate to containers (default: 8) -->
<property>
<name>yarn.nodemanager.resource.cpu-vcores</name>
<value>4</value>
</property>
<!-- Memory available to containers on each node, in MB -->
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>12288</value>
</property>
<!-- Minimum memory a single container may request (default: 1024 MB) -->
<property>
<name>yarn.scheduler.minimum-allocation-mb</name>
<value>1024</value>
</property>
<!-- Maximum memory a single container may request (default: 8192 MB) -->
<property>
<name>yarn.scheduler.maximum-allocation-mb</name>
<value>12288</value>
</property>
<!-- How long to retain aggregated logs before deleting them -->
<property>
<name>yarn.log-aggregation.retain-seconds</name>
<value>2592000</value><!-- 30 days -->
</property>
<!-- How long (in seconds) to retain container logs on the node; only applies when log aggregation is disabled -->
<property>
<name>yarn.nodemanager.log.retain-seconds</name>
<value>604800</value><!-- 7 days -->
</property>
<!-- Compression type used for aggregated logs -->
<property>
<name>yarn.nodemanager.log-aggregation.compression-type</name>
<value>gz</value>
</property>
<!-- Local directories where the NodeManager stores intermediate data -->
<property>
<name>yarn.nodemanager.local-dirs</name>
<value>/var/server/yarn/local</value>
</property>
<!-- Maximum number of completed applications the ResourceManager retains -->
<property>
<name>yarn.resourcemanager.max-completed-applications</name>
<value>1000</value>
</property>
<!-- Comma-separated list of auxiliary services; names may contain only a-zA-Z0-9_ and must not start with a digit -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<!-- Interval (ms) between retries when reconnecting to a lost ResourceManager -->
<property>
<name>yarn.resourcemanager.connect.retry-interval.ms</name>
<value>2000</value>
</property>
</configuration>
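The root cause here, a later duplicate <property> silently overriding an earlier one, is easy to reintroduce by accident. As a guard, here is a minimal sketch that scans a Hadoop-style *-site.xml for duplicate property names (the class name `DuplicatePropertyCheck` and the DOM-based approach are my own, not part of the original post):

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

// Scans Hadoop-style configuration XML for <name> elements that appear
// more than once. When a property is defined twice in the same file,
// the later occurrence wins, so an empty duplicate silently wipes out
// an earlier valid value -- exactly the bug described above.
public class DuplicatePropertyCheck {
    public static List<String> duplicates(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        NodeList names = doc.getElementsByTagName("name");
        Set<String> seen = new HashSet<>();
        List<String> dups = new ArrayList<>();
        for (int i = 0; i < names.getLength(); i++) {
            String name = names.item(i).getTextContent().trim();
            if (!seen.add(name)) {
                dups.add(name);
            }
        }
        return dups;
    }

    public static void main(String[] args) throws Exception {
        String xml = "<configuration>"
                + "<property><name>yarn.application.classpath</name><value>/a/*</value></property>"
                + "<property><name>yarn.application.classpath</name><value></value></property>"
                + "</configuration>";
        System.out.println("duplicates = " + duplicates(xml));
    }
}
```

Reading the file contents with Files.readString and passing them to duplicates() would make this usable directly against yarn-site.xml.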
echo "One-command startup of the Hadoop cluster, replacing the default start-all.sh"
su hadoop
hdfs zkfc -formatZK -force
cat $HADOOP_PREFIX/etc/hadoop/datanode-hosts-exclude | xargs -i -t ssh hadoop@{} "rm -rf /var/server/hadoop/tmp/hadoop-hadoop-zkfc.pid;hdfs --daemon start zkfc"
cat $HADOOP_PREFIX/etc/hadoop/datanode-hosts-exclude | xargs -i -t ssh hadoop@{} "hdfs --daemon start journalnode"
cat $HADOOP_PREFIX/etc/hadoop/datanode-hosts-exclude | xargs -i -t ssh hadoop@{} "hdfs --daemon start namenode"
cat $HADOOP_PREFIX/etc/hadoop/slaves | xargs -i -t ssh hadoop@{} "hdfs --daemon start datanode"
pwd;
# run in two stages, under two different users
su yarn;
$HADOOP_PREFIX/sbin/start-yarn.sh
cat $HADOOP_PREFIX/etc/hadoop/slaves | xargs -i -t ssh yarn@{} "yarn --daemon start nodemanager"
pwd;
echo "Check whether ResourceManager HA is working."
yarn rmadmin -getServiceState rm1;
yarn rmadmin -getServiceState rm2;
yarn rmadmin -getServiceState rm3;
echo "Disable the firewall on all nodes"
cat $HADOOP_PREFIX/etc/hadoop/datanode-hosts-exclude | xargs -i -t ssh root@{} "systemctl disable firewalld;systemctl stop firewalld"
cat $HADOOP_PREFIX/etc/hadoop/slaves | xargs -i -t ssh root@{} "systemctl disable firewalld;systemctl stop firewalld"
su hadoop
echo "Run the official MapReduce example to verify the cluster is working"
#cd $HADOOP_HOME/share/hadoop/mapreduce; hadoop jar hadoop-mapreduce-examples-3.0.0-beta1.jar pi 50 100000;ls;
cd $HADOOP_HOME/share/hadoop/mapreduce; hadoop jar hadoop-mapreduce-examples-3.2.0.jar pi 50 100000;ls;
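For context on what the test job computes: the pi example estimates π by sampling points in the unit square and counting how many fall inside the quarter circle. Purely as an illustration of that idea (the real Hadoop example distributes the sampling across map tasks; this plain-Java sketch is mine and involves no Hadoop):

```java
import java.util.Random;

// Plain Monte Carlo estimate of pi: sample random points in the unit
// square and count the fraction that lands inside the quarter circle.
public class PiEstimate {
    public static double estimate(long samples, long seed) {
        Random rnd = new Random(seed);
        long inside = 0;
        for (long i = 0; i < samples; i++) {
            double x = rnd.nextDouble();
            double y = rnd.nextDouble();
            if (x * x + y * y <= 1.0) {
                inside++;
            }
        }
        // Area of quarter circle / area of square = pi/4
        return 4.0 * inside / samples;
    }

    public static void main(String[] args) {
        System.out.println("pi ~= " + estimate(1_000_000, 42L));
    }
}
```

More samples give a tighter estimate, which is why the example's second argument (samples per map) dominates the result's accuracy.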