This continues the HA build of the big-data pseudo-distributed platform; if anything here seems to skip a step, see the earlier platform-setup posts.
NodeManager and DataNode are both worker (slave) roles and run 1:1 on the same hosts, which is why YARN reuses the etc/hadoop/slaves file.
-
Edit mapred-site.xml
Copy the template:
cp mapred-site.xml.template mapred-site.xml
Then edit mapred-site.xml:
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
-
Edit yarn-site.xml:
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>cluster1</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm1</name>
    <value>node3</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm2</name>
    <value>node4</value>
  </property>
  <property>
    <name>yarn.resourcemanager.zk-address</name>
    <value>node2:2181,node3:2181,node4:2181</value>
  </property>
</configuration>
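A mistyped closing tag in these files (for example `<property>` where `</property>` was meant) makes YARN fail at startup with an XML parse error. Before distributing the file, a quick local sanity check can catch this; the helper name below is made up for illustration:

```shell
# Sketch: verify that <property> open/close tag counts match in a Hadoop
# *-site.xml before shipping it to the other nodes. A mismatch usually
# means a closing tag was mistyped as an opening one.
check_site_xml() {
  local f=$1
  local open close
  open=$(grep -c '<property>' "$f")
  close=$(grep -c '</property>' "$f")
  if [ "$open" -eq "$close" ]; then
    echo "$f: OK ($open property blocks)"
  else
    echo "$f: MISMATCH (open=$open, close=$close)" >&2
    return 1
  fi
}
# usage: check_site_xml yarn-site.xml
```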
-
Distribute to node2, node3 and node4 (run the scp once per node):
scp yarn-site.xml mapred-site.xml root@node4:/opt/home/hadoop-2.6.5/etc/hadoop/
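The command above only copies to node4; the same two files also need to reach node2 and node3. A loop covers all three nodes, kept here as a dry run so the commands can be inspected first (remove the leading `echo` to actually copy; this assumes root SSH access and the same Hadoop path on every node):

```shell
# Dry-run sketch: distribute the two edited configs to all three nodes.
# Remove the leading `echo` to perform the real scp.
for i in 2 3 4; do
  echo scp yarn-site.xml mapred-site.xml "root@node$i:/opt/home/hadoop-2.6.5/etc/hadoop/"
done
```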
-
Start YARN
start-yarn.sh # run on node1; the script only tries to start a ResourceManager on the local node, but the RMs are configured as node3/node4, so this starts the NodeManagers only
yarn-daemon.sh start resourcemanager # run on node3 and node4, after start-dfs.sh
-
Update the one-click start/stop scripts
Start script:
#!/bin/bash
echo "start all zookeeper.."
for i in {2..4}; do
  ssh node$i "/opt/home/zookeeper-3.4.6/bin/zkServer.sh start"
done
start-all.sh
for i in {3..4}; do  # start the RMs on node3/node4
  ssh node$i "/opt/home/hadoop-2.6.5/sbin/yarn-daemon.sh start resourcemanager"
done
Stop script:
#!/bin/bash
echo "stop all zookeeper.."
for i in {2..4}; do
  ssh node$i "/opt/home/zookeeper-3.4.6/bin/zkServer.sh stop"
done
stop-all.sh
for i in {3..4}; do  # stop the RMs on node3/node4
  ssh node$i "/opt/home/hadoop-2.6.5/sbin/yarn-daemon.sh stop resourcemanager"
done
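Since the two scripts differ only in the verb (start vs stop) and in start-all.sh vs stop-all.sh, one script can drive both directions instead of maintaining two near-identical copies. A hedged sketch (`cluster_ctl` is a made-up name; it is kept as a dry run that prints the commands, so drop the `echo`s to execute them):

```shell
#!/bin/bash
# Sketch: one script for both directions: `cluster_ctl start` / `cluster_ctl stop`.
# Dry run: commands are printed, not executed; remove the `echo`s to run them.
cluster_ctl() {
  local action=$1                       # "start" or "stop"
  local hadoop_all="${action}-all.sh"   # start-all.sh or stop-all.sh
  echo "$action all zookeeper.."
  for i in 2 3 4; do
    echo ssh node$i "/opt/home/zookeeper-3.4.6/bin/zkServer.sh $action"
  done
  echo "$hadoop_all"
  for i in 3 4; do                      # ResourceManagers on node3/node4
    echo ssh node$i "/opt/home/hadoop-2.6.5/sbin/yarn-daemon.sh $action resourcemanager"
  done
}

cluster_ctl "${1:-start}"
```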