Hadoop 2.x Distributed Deployment
1. Three machines; their IPs, hostnames, and hardware are listed in the table below
hostname | hadoop-senior | hadoop-senior02 | hadoop-senior03 |
IP | 192.168.217.131 | 192.168.217.132 | 192.168.217.133 |
Memory | 1.5 GB | 1 GB | 1 GB |
CPU | 1 CPU | 1 CPU | 1 CPU |
2. Add the IP-to-hostname mappings to /etc/hosts on every node (a sketch for distributing and checking the file follows the list)
192.168.217.131 hadoop-senior.ibeifeng.com hadoop-senior
192.168.217.132 hadoop-senior02.ibeifeng.com hadoop-senior02
192.168.217.133 hadoop-senior03.ibeifeng.com hadoop-senior03
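A minimal sketch for pushing the same /etc/hosts to the other two nodes and checking that the names resolve (assuming root access over SSH; passwords are still required at this point):
scp /etc/hosts hadoop-senior02.ibeifeng.com:/etc/hosts
scp /etc/hosts hadoop-senior03.ibeifeng.com:/etc/hosts
ping -c 1 hadoop-senior02.ibeifeng.com
ping -c 1 hadoop-senior03.ibeifeng.com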
3. The roles of each machine are listed in the table below
Service | hadoop-senior | hadoop-senior02 | hadoop-senior03 |
HDFS | NameNode, DataNode | DataNode | DataNode |
YARN | NodeManager | ResourceManager, NodeManager | NodeManager |
MapReduce | JobHistoryServer | | |
Set up passwordless SSH login among the three machines (a consolidated sketch follows the two commands below)
Regenerate the SSH key files on each node
Run ssh-keygen -t rsa to generate a key pair under ~/.ssh
Run cp ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys
Edit /etc/hosts on every node so that it contains the IP-to-hostname mappings of all nodes
Passwordless SSH login between every pair of nodes
ssh-copy-id -i hadoop-senior
scp /root/.ssh/authorized_keys hadoop-senior:/root/.ssh/
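A consolidated sketch of the full key exchange, run on each of the three nodes in turn (assuming the root user and the hostnames from /etc/hosts above):
ssh-keygen -t rsa
ssh-copy-id hadoop-senior
ssh-copy-id hadoop-senior02
ssh-copy-id hadoop-senior03
# verify from any node: this should log in without a password prompt
ssh hadoop-senior03 hostname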
Modify the configuration and distribute it to the other nodes
Configure HDFS
hadoop-env.sh (unchanged)
core-site.xml (unchanged)
hdfs-site.xml
The replication factor does not need to be set; the default is 3
Set the SecondaryNameNode to hadoop-senior03.ibeifeng.com (no longer needed once HA is configured)
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>hadoop-senior03.ibeifeng.com:50090</value>
</property>
slaves: list all three machines
hadoop-senior.ibeifeng.com
hadoop-senior02.ibeifeng.com
hadoop-senior03.ibeifeng.com
Configure YARN
yarn-env.sh (unchanged)
yarn-site.xml
Place the ResourceManager on hadoop-senior02.ibeifeng.com
<property>
<name>yarn.resourcemanager.hostname</name>
<value>hadoop-senior02.ibeifeng.com</value>
</property>
Configure MapReduce
mapred-env.sh (unchanged)
mapred-site.xml
Configure the JobHistoryServer (a start-up sketch follows the properties below)
<property>
<name>mapreduce.jobhistory.address</name>
<value>hadoop-senior.ibeifeng.com:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>hadoop-senior.ibeifeng.com:19888</value>
</property>
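The JobHistoryServer is not started by start-dfs.sh or start-yarn.sh. A minimal sketch for starting it on hadoop-senior.ibeifeng.com with the standard Hadoop 2.x daemon script:
sbin/mr-jobhistory-daemon.sh start historyserver
# the history web UI should then be reachable at http://hadoop-senior.ibeifeng.com:19888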
Distribute to the other machines (scp)
scp -r /opt/modules/hadoop-2.5.0/ hadoop-senior02.ibeifeng.com:/opt/modules/
scp -r /opt/modules/hadoop-2.5.0/ hadoop-senior03.ibeifeng.com:/opt/modules/
Run the format on the NameNode machine
$ bin/hdfs namenode -format
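After formatting, the daemons can be started. A minimal sketch, assuming start-dfs.sh is run on hadoop-senior (the NameNode) and start-yarn.sh on hadoop-senior02 (the ResourceManager, which is started on whichever machine runs the script):
# on hadoop-senior.ibeifeng.com
sbin/start-dfs.sh
# on hadoop-senior02.ibeifeng.com
sbin/start-yarn.sh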
Testing after the cluster is built
Basic tests
Start the services, check that they are available, and run a simple application; use jps to confirm the daemons are up
hdfs
Read and write operations
bin/hdfs dfs -mkdir -p /user/beifeng/tmp/conf
bin/hdfs dfs -put etc/hadoop/*-site.xml /user/beifeng/tmp/conf
bin/hdfs dfs -text /user/beifeng/tmp/conf/core-site.xml
yarn
run jar
mapreduce
bin/yarn jar share/hadoop/mapreduce/hadoop*example*.jar wordcount /user/beifeng/mapreduce/wordcount/input /user/beifeng/mapreduce/wordcount/output
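The input directory must exist and contain data before the job runs, and the output directory must not exist. A minimal sketch (the local file /opt/datas/input.txt is only an example name):
bin/hdfs dfs -mkdir -p /user/beifeng/mapreduce/wordcount/input
bin/hdfs dfs -put /opt/datas/input.txt /user/beifeng/mapreduce/wordcount/input   # input.txt is a hypothetical sample file
bin/hdfs dfs -text /user/beifeng/mapreduce/wordcount/output/part*               # inspect the word counts after the job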
Benchmark tests
Measure the performance of the cluster (see the TestDFSIO sketch below)
hdfs
Write data
Read data
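Hadoop ships a TestDFSIO benchmark in the MapReduce jobclient tests jar; a minimal sketch, assuming the Hadoop 2.5.0 jar name below (the jar file name and the flag spelling can differ slightly between versions; check the usage output of TestDFSIO first):
# write benchmark: 10 files of 128 MB each
bin/yarn jar share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.5.0-tests.jar TestDFSIO -write -nrFiles 10 -size 128MB
# read benchmark over the same files
bin/yarn jar share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.5.0-tests.jar TestDFSIO -read -nrFiles 10 -size 128MB
# clean up the benchmark data
bin/yarn jar share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.5.0-tests.jar TestDFSIO -clean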
Monitoring the cluster
Cloudera
Cloudera Manager
Deploy and install clusters
Monitor clusters
Synchronize configuration across the cluster
Alerting
The Distributed Coordination Service Framework Zookeeper
Zookeeper is an open-source, distributed Apache project that provides coordination services for distributed applications.
1. Standalone installation of Zookeeper
Download, grant execute permission, and extract
chmod u+x zookeeper-3.4.5.tar.gz
tar -zxvf zookeeper-3.4.5.tar.gz -C /opt/modules/
Configuration (a scripted sketch of these steps follows below)
Copy the sample configuration file: cp conf/zoo_sample.cfg conf/zoo.cfg
Set the data directory: dataDir=/opt/modules/zookeeper-3.4.5/data
Create the data directory: mkdir /opt/modules/zookeeper-3.4.5/data
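A minimal sketch that scripts the three steps above, assuming the installation directory used in this section:
cd /opt/modules/zookeeper-3.4.5
cp conf/zoo_sample.cfg conf/zoo.cfg
mkdir -p data
# point dataDir at the directory just created
sed -i 's|^dataDir=.*|dataDir=/opt/modules/zookeeper-3.4.5/data|' conf/zoo.cfg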
Start
bin/zkServer.sh start
Check the status:
bin/zkServer.sh status
Client shell:
bin/zkCli.sh
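A few commands that can be run inside the zkCli shell to confirm the server is answering (the znode /test is just an example):
ls /
create /test "hello"
get /test
delete /test
quit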
2. Distributed installation of Zookeeper
Starting from the standalone setup, modify the configuration file conf/zoo.cfg
vi conf/zoo.cfg
Add the following:
server.1=192.168.217.131:2888:3888
server.2=192.168.217.132:2888:3888
server.3=192.168.217.133:2888:3888
Create a myid file in the data directory
touch data/myid
vi data/myid
Write 1 as its only content (matching server.1)
Synchronize the Zookeeper installation directory to the other two machines
scp -r /opt/modules/zookeeper-3.4.5/ hadoop-senior02.ibeifeng.com:/opt/modules/
scp -r /opt/modules/zookeeper-3.4.5/ hadoop-senior03.ibeifeng.com:/opt/modules/
Change myid on hadoop-senior02 and hadoop-senior03 to 2 and 3 respectively (see the sketch below)
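A minimal sketch, assuming passwordless SSH and the same dataDir on all three nodes:
ssh hadoop-senior02.ibeifeng.com 'echo 2 > /opt/modules/zookeeper-3.4.5/data/myid'
ssh hadoop-senior03.ibeifeng.com 'echo 3 > /opt/modules/zookeeper-3.4.5/data/myid'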
Start:
Run zkServer.sh start on each of the three nodes
Verify:
Run zkServer.sh status on each of the three nodes; one node should report itself as the leader and the other two as followers
HDFS HA (High Availability) Architecture Deployment and Testing
Before Hadoop 2.0, the NameNode was a single point of failure (SPOF) in an HDFS cluster. In a cluster with only one NameNode, a failure of the NameNode machine made the whole cluster unusable until the NameNode was restarted. HDFS HA solves this by configuring two NameNodes in an Active/Standby arrangement, giving the cluster a hot standby for the NameNode. When a failure occurs, such as a machine crash, or when a machine has to be taken down for upgrade or maintenance, the NameNode role can quickly be switched to the other machine.
QJM (Quorum Journal Manager) HA configuration
Cluster plan
hadoop-senior.beifeng.com | hadoop-senior02.beifeng.com | hadoop-senior03.beifeng.com |
NameNode | NameNode | |
ZKFC | ZKFC | |
JournalNode | JournalNode | JournalNode |
DataNode | DataNode | DataNode |
Configure hdfs-site.xml:
<property>
<name>dfs.nameservices</name>
<value>ns1</value>
</property>
<property>
<name>dfs.ha.namenodes.ns1</name>
<value>nn1,nn2</value>
</property>
<property>
<name>dfs.namenode.rpc-address.ns1.nn1</name>
<value>hadoop-senior.beifeng.com:8020</value>
</property>
<property>
<name>dfs.namenode.rpc-address.ns1.nn2</name>
<value>hadoop-senior02.beifeng.com:8020</value>
</property>
<property>
<name>dfs.namenode.http-address.ns1.nn1</name>
<value>hadoop-senior.beifeng.com:50070</value>
</property>
<property>
<name>dfs.namenode.http-address.ns1.nn2</name>
<value>hadoop-senior02.beifeng.com:50070</value>
</property>
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://hadoop-senior.beifeng.com:8485;hadoop-senior02.beifeng.com:8485;hadoop-senior03.beifeng.com:8485/ns1</value>
</property>
<property>
<name>dfs.client.failover.proxy.provider.ns1</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
<name>dfs.ha.fencing.methods</name>
<value>sshfence</value>
</property>
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/home/beifeng/.ssh/id_rsa</value>
</property>
The SecondaryNameNode setting in hdfs-site.xml can now be removed
Configure core-site.xml:
<property>
<name>fs.defaultFS</name>
<value>hdfs://ns1</value>
</property>
<property>
<name>dfs.journalnode.edits.dir</name>
<value>/opt/app/hadoop-2.5.0/data/dfs/jn</value>
</property>
Automatic failover using Zookeeper
Configure hdfs-site.xml:
<property>
<name>dfs.ha.automatic-failover.enabled</name>
<value>true</value>
</property>
Configure core-site.xml:
<property>
<name>ha.zookeeper.quorum</name>
<value>hadoop-senior.beifeng.com:2181,hadoop-senior02.beifeng.com:2181,hadoop-senior03.beifeng.com:2181</value>
</property>
Synchronize to hadoop-senior02.beifeng.com and hadoop-senior03.beifeng.com
Startup
1. Stop all HDFS services: sbin/stop-dfs.sh
Shutdown order: namenode, datanode, journalnode
2. Start the Zookeeper cluster: bin/zkServer.sh start (must be run on all three nodes)
3. Initialize the HA state in Zookeeper
bin/hdfs zkfc -formatZK
4. Start the HDFS services: sbin/start-dfs.sh
Startup order: namenode, datanode, journalnode, DFSZKFailoverController (zkfc)
5. Start the DFSZKFailoverController on each NameNode node; whichever machine starts it first has its NameNode become the Active NameNode (see the sketch below)
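A minimal sketch for starting the zkfc by hand on the two NameNode hosts, using the standard Hadoop 2.x per-daemon script:
# run first on hadoop-senior.beifeng.com, then on hadoop-senior02.beifeng.com
sbin/hadoop-daemon.sh start zkfc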
Verification
Kill the Active NameNode process with kill -9 <pid> and confirm that the standby takes over (see the sketch below)
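A sketch of the failover check, assuming the nn1/nn2 IDs defined in hdfs-site.xml above (the pid is whatever jps reports for NameNode on the Active host):
jps | grep NameNode                      # note the pid of the Active NameNode
kill -9 <pid>                            # <pid> is the value reported by jps
bin/hdfs haadmin -getServiceState nn1
bin/hdfs haadmin -getServiceState nn2    # the surviving NameNode should now report "active"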