Blog: http://blog.csdn.net/u012185296
Signature: The greatest distance in the world is not the ends of the earth; it is standing right in front of you while you cannot feel that I am here.
Tech stack: Flume + Kafka + Storm + Redis/HBase + Hadoop + Hive + Mahout + Spark ... cloud computing
Reprint notice: Reprinting is allowed, but you must credit the original source and author with a hyperlink and keep this copyright notice. Thank you!
QQ group: 214293307 (looking forward to learning and improving together)
# Fully distributed installation of hadoop-2.2.0, with high-availability verification
# jdk1.7.0_60
The basics (passwordless SSH login, time synchronization, and so on) were covered earlier and are not repeated here; if anything is unclear, see the hadoop-1.x setup article above, which explains them in detail.
# Cluster layout
| IP address | Hostname | NameNode | JournalNode | DataNode |
| --- | --- | --- | --- | --- |
| 192.168.1.229 | rs229 | yes | yes | yes |
| 192.168.1.227 | rs227 | yes | yes | yes |
| 192.168.1.226 | rs226 | no | yes | yes |
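All three machines must resolve each other's hostnames, since the configuration below refers to nodes by name. A minimal sketch of the `/etc/hosts` entries implied by the table above, plus a reachability check (the file layout and the loop are illustrative, not from the original post):

```shell
# /etc/hosts entries, identical on rs229, rs227 and rs226:
#   192.168.1.229   rs229
#   192.168.1.227   rs227
#   192.168.1.226   rs226

# Quick check that every peer resolves and answers (run on each host).
for h in rs229 rs227 rs226; do
    ping -c 1 "$h" > /dev/null && echo "$h reachable" || echo "$h UNREACHABLE"
done
```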
# Edit the following 7 configuration files
~/hadoop-2.2.0/etc/hadoop/hadoop-env.sh
~/hadoop-2.2.0/etc/hadoop/core-site.xml
~/hadoop-2.2.0/etc/hadoop/hdfs-site.xml
~/hadoop-2.2.0/etc/hadoop/mapred-site.xml
~/hadoop-2.2.0/etc/hadoop/yarn-env.sh
~/hadoop-2.2.0/etc/hadoop/yarn-site.xml
~/hadoop-2.2.0/etc/hadoop/slaves
# 1 Edit hadoop-env.sh (set the JDK path)
[root@master hadoop]# pwd
/usr/local/adsit/yting/apache/hadoop/hadoop-2.2.0/etc/hadoop
[root@master hadoop]# vi hadoop-env.sh
# The java implementation to use.
export JAVA_HOME=/usr/local/adsit/yting/jdk/jdk1.7.0_60
#export JAVA_HOME=${JAVA_HOME}
# 2 Edit core-site.xml (note the fs.defaultFS setting)
With HA enabled, fs.defaultFS points at the nameservice ID (hdfs://mycluster) rather than at any single NameNode's hostname, and the same value is used on every node; clients resolve it to whichever NameNode is active.
[root@master hadoop]# vi core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://mycluster</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/adsit/yting/apache/hadoop/hadoop-2.2.0/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
</configuration>
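Once core-site.xml is in place, a quick sanity check that the file parses and the nameservice is picked up is to ask Hadoop for the effective value (this assumes hadoop-2.2.0/bin is on PATH, which the original post does not state):

```shell
# Print the effective default filesystem from the loaded configuration.
hdfs getconf -confKey fs.defaultFS
# Should print: hdfs://mycluster
```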
# 3 Edit hdfs-site.xml
[root@master hadoop]# vi hdfs-site.xml
<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.mycluster</name>
    <value>rs229,rs227</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.rs229</name>
    <value>rs229:9000</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.rs227</name>
    <value>rs227:9000</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.rs229</name>
    <value>rs229:50070</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.rs227</name>
    <value>rs227:50070</value>
  </property>
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://rs229:8485;rs227:8485;rs226:8485/mycluster</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.mycluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/root/.ssh/id_rsa</value>
  </property>
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/usr/local/adsit/yting/apache/hadoop/hadoop-2.2.0/tmp/journal</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
</configuration>
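The sshfence method configured above works only if the node initiating failover can SSH to the other NameNode as root without a password, using the key listed in dfs.ha.fencing.ssh.private-key-files. A quick check before relying on it (a sketch; run on rs229 against rs227 and vice versa):

```shell
# BatchMode makes ssh fail instead of prompting if the key is not set up.
ssh -o BatchMode=yes -i /root/.ssh/id_rsa root@rs227 hostname
# If this prints "rs227", fencing via sshfence can work from this node.
```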
# 4 Edit mapred-site.xml
[root@master hadoop]# cp mapred-site.xml.template mapred-site.xml
[root@master hadoop]# vi mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
# 5 Edit yarn-env.sh
[root@master hadoop]# vi yarn-env.sh
# some Java parameters
export JAVA_HOME=/usr/local/adsit/yting/jdk/jdk1.7.0_60
# 6 Edit yarn-site.xml (note the ResourceManager is still a single point of failure here)
[root@master hadoop]# vi yarn-site.xml
<configuration>
  <!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>rs229:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>rs229:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>rs229:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>rs229:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>rs229:8088</value>
  </property>
</configuration>
# 7 Edit the slaves file
[root@master hadoop]# vi slaves
rs229
rs227
rs226
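After editing all seven files on one node, the same configuration has to reach rs227 and rs226. A sketch of distributing them with scp (this assumes the identical hadoop-2.2.0 install path on every machine, which matches the transcripts in this post):

```shell
# Push the edited configuration files to the other two nodes.
HADOOP_CONF=/usr/local/adsit/yting/apache/hadoop/hadoop-2.2.0/etc/hadoop
for h in rs227 rs226; do
    scp "$HADOOP_CONF"/hadoop-env.sh "$HADOOP_CONF"/core-site.xml \
        "$HADOOP_CONF"/hdfs-site.xml "$HADOOP_CONF"/mapred-site.xml \
        "$HADOOP_CONF"/yarn-env.sh "$HADOOP_CONF"/yarn-site.xml \
        "$HADOOP_CONF"/slaves root@"$h":"$HADOOP_CONF"/
done
```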
# Fully distributed: starting Hadoop (the startup order matters; do not change it)
# Start the journalnode on rs229, rs227 and rs226
# Start the JournalNode on rs229
[root@rs229 sbin]# pwd
/usr/local/adsit/yting/apache/hadoop/hadoop-2.2.0/sbin
[root@rs229 sbin]# ./hadoop-daemon.sh start journalnode
[root@rs229 sbin]# tail -100f /usr/local/adsit/yting/apache/hadoop/hadoop-2.2.0/logs/hadoop-root-journalnode-rs229.log   (check the log for errors; a clean log means you are halfway there)
# Start the JournalNode on rs227
[root@rs227 sbin]# pwd
/usr/local/adsit/yting/apache/hadoop/hadoop-2.2.0/sbin
[root@rs227 sbin]# ./hadoop-daemon.sh start journalnode
[root@rs227 sbin]# tail -100f /usr/local/adsit/yting/apache/hadoop/hadoop-2.2.0/logs/hadoop-root-journalnode-rs227.log   (check the log for errors; a clean log means you are halfway there)
# Start the JournalNode on rs226
[root@rs226 sbin]# pwd
/usr/local/adsit/yting/apache/hadoop/hadoop-2.2.0/sbin
[root@rs226 sbin]# ./hadoop-daemon.sh start journalnode
[root@rs226 sbin]# tail -100f /usr/local/adsit/yting/apache/hadoop/hadoop-2.2.0/logs/hadoop-root-journalnode-rs226.log   (check the log for errors; a clean log means you are halfway there)
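Besides tailing each log, you can confirm in one pass that a JournalNode process is up on all three machines (a sketch; assumes passwordless root SSH between the nodes, which this setup already requires):

```shell
# Check for a running JournalNode JVM on every node via jps.
for h in rs229 rs227 rs226; do
    echo "== $h =="
    ssh root@"$h" jps | grep JournalNode || echo "JournalNode NOT running on $h"
done
```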
# Format and start the namenodes on rs229 and rs227
# Format and start the namenode on rs229
[root@rs229 sbin]# ../bin/hdfs namenode -format
[root@rs229 sbin]# ./hadoop-daemon.sh start namenode
# During this step I hit an error:
Problem binding to [rs229:9000] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException. For the fix, see the Hadoop-2.X error analysis and solutions in 妳那伊抹微笑's troubleshooting notes below, "problems encountered while setting up the hadoop environment" (for reference only).
# Bootstrap and start the namenode on rs227
[root@rs227 sbin]# ../bin/hdfs namenode -bootstrapStandby
[root@rs227 sbin]# ./hadoop-daemon.sh start namenode
# Same potential issue as above
# Open a browser and visit port 50070 on rs229 and rs227
If both pages load, the namenodes started successfully; at this point both are in standby state.
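The same information is available without the web UI: hdfs haadmin can report each NameNode's HA state by its configured ID (a sketch; at this point in the walkthrough both should report standby):

```shell
# Query the HA state of each namenode by the IDs from dfs.ha.namenodes.mycluster.
hdfs haadmin -getServiceState rs229
hdfs haadmin -getServiceState rs227
```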
# Switch the namenode on rs229 to active
[root@rs229 sbin]# ../bin/hdfs haadmin -transitionToActive rs229
Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /usr/local/adsit/yting/apache/hadoop/hadoop-2.2.0/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
14/06/05 11:17:06 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
# Open the browser again and visit port 50070 on rs229 and rs227
rs229 is now in active state, while rs227 remains standby.
# Start the datanodes (shown here on rs226; per the cluster table, run it on each DataNode)
[root@rs226 sbin]# ./hadoop-daemon.sh start datanode   (check the log; no errors means you have won)
[root@rs226 sbin]# jps
3053 DataNode
2714 JournalNode
3260 Jps
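Beyond jps on each machine, the active namenode can confirm which datanodes actually registered (a sketch; each registered node appears in the report with a "Name:" line):

```shell
# List the datanodes that have registered with the namenode.
hdfs dfsadmin -report | grep 'Name:'
```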
# Try switching the namenode state manually
[root@rs227 sbin]# ../bin/hdfs haadmin -failover -forceactive rs229 rs227
Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /usr/local/adsit/yting/apache/hadoop/hadoop-2.2.0/lib/native/libhadoop.so which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
14/06/05 11:46:02 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Failover from rs229 to rs227 successful
# Open the browser again and visit port 50070 on rs229 and rs227
rs229 has switched from active to standby, and rs227 from standby to active; the manual failover worked.
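As a final end-to-end check, a client can write and read through the hdfs://mycluster nameservice; the failover proxy provider routes the request to whichever namenode is currently active. A sketch (the file name is illustrative):

```shell
# Write a small file through the HA nameservice and read it back.
echo "hello ha" > /tmp/ha-test.txt
hdfs dfs -mkdir -p /tmp
hdfs dfs -put -f /tmp/ha-test.txt /tmp/ha-test.txt
hdfs dfs -cat /tmp/ha-test.txt
```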
# Starting yarn
See the next article, "Zookeeper-3.4.6 + Hadoop-2.2.0 + Hbase-0.96.2 + jdk1.7.0_60"; I will not repeat it here. The main point of this post was to verify high availability (HA) and to serve as a stepping stone for the next one, where HBase gets integrated ...