Resource allocation (roles per host)
hadoop003: resourcemanager, nodemanager, QuorumPeerMain, journalnode
hadoop004: resourcemanager, nodemanager, QuorumPeerMain, journalnode
hadoop005: nodemanager, QuorumPeerMain, journalnode (no resourcemanager)
1, Configuration
1, vi ./etc/hadoop/yarn-site.xml
Here cluster1 is the logical cluster name; any name will do, but it must be used consistently throughout.
<property>
<name>yarn.resourcemanager.ha.enabled</name>
<value>true</value>
</property>
<property>
<name>yarn.resourcemanager.cluster-id</name>
<value>cluster1</value>
</property>
<property>
<name>yarn.resourcemanager.ha.rm-ids</name>
<value>rm1,rm2</value>
</property>
<property>
<name>yarn.resourcemanager.hostname.rm1</name>
<value>hadoop003</value>
</property>
<property>
<name>yarn.resourcemanager.hostname.rm2</name>
<value>hadoop004</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address.rm1</name>
<value>hadoop003:8088</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address.rm2</name>
<value>hadoop004:8088</value>
</property>
<property>
<name>yarn.resourcemanager.zk-address</name>
<value>hadoop003:2181,hadoop004:2181,hadoop005:2181</value>
</property>
<property>
<name>yarn.resourcemanager.ha.automatic-failover.embedded</name>
<value>true</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
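The <property> entries above are fragments, not a complete file: in yarn-site.xml they all sit inside a single <configuration> root element, roughly like this:

```xml
<?xml version="1.0"?>
<configuration>
  <!-- all of the <property> blocks listed above go here, for example: -->
  <property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
  </property>
  <!-- ... remaining properties ... -->
</configuration>
```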
2,vi ./etc/hadoop/mapred-site.xml
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>hadoop003:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>hadoop003:19888</value>
</property>
2, Copy the edited files to the other hosts in the cluster
scp -r ./etc/hadoop/mapred-site.xml ./etc/hadoop/yarn-site.xml hadoop004:/usr/local/hadoop-2.7.1/etc/hadoop/
scp -r ./etc/hadoop/mapred-site.xml ./etc/hadoop/yarn-site.xml hadoop005:/usr/local/hadoop-2.7.1/etc/hadoop/
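The two scp lines above can also be written as one loop over the target hosts. A dry-run sketch (it only prints the commands; remove the echo to actually copy):

```shell
# Dry run: print the scp command for each target host; drop "echo" to copy.
CONF_DIR=/usr/local/hadoop-2.7.1/etc/hadoop
for host in hadoop004 hadoop005; do
  echo scp "$CONF_DIR/mapred-site.xml" "$CONF_DIR/yarn-site.xml" "$host:$CONF_DIR/"
done
```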
3, Startup
1, Start ZooKeeper (on each ZK node)
zkServer.sh start
2, Start the JournalNodes (the first command starts them on all configured hosts; the second starts one on the local host only)
/usr/local/hadoop-2.7.1/sbin/hadoop-daemons.sh start journalnode
/usr/local/hadoop-2.7.1/sbin/hadoop-daemon.sh start journalnode
3, start-yarn.sh starts one of the two resourcemanagers directly (the local one, plus the nodemanagers); start the other resourcemanager manually on the second node:
./sbin/start-yarn.sh
./sbin/yarn-daemon.sh start resourcemanager
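Once both daemons are up, each ResourceManager's HA state can be queried with yarn rmadmin, using the rm1/rm2 ids configured in yarn-site.xml. Shown here as a dry run that only prints the commands (drop the echo to actually query):

```shell
# Dry run: print the state-query command for each RM id from
# yarn.resourcemanager.ha.rm-ids; drop "echo" to actually run them.
for rm in rm1 rm2; do
  echo yarn rmadmin -getServiceState "$rm"
done
```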
4, Testing
1, Open the web UI on port 8088 to check the state (active/standby) of both resourcemanagers
2, yarn jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar wordcount /README.txt /out/00
3, Kill one of the resourcemanagers, check the states again, and verify that the job from step 2 still runs
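For step 3, one way to find and kill the active ResourceManager (a sketch: it assumes you run it on the node hosting the active RM and that jps from the JDK is on PATH; the destructive lines are left commented out):

```shell
# Locate the ResourceManager pid with jps (guarded in case jps is absent),
# then kill it and watch the standby take over.
RM_PID=""
if command -v jps >/dev/null 2>&1; then
  RM_PID=$(jps | awk '/ResourceManager/ {print $1}')
fi
echo "ResourceManager pid: ${RM_PID:-<not found>}"
# kill -9 "$RM_PID"                    # actually kill the active RM
# yarn rmadmin -getServiceState rm2    # the standby should now report "active"
```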