Setting Up a Distributed Hadoop Cluster
This distributed Hadoop setup uses ZooKeeper for coordination, so my ZooKeeper cluster, time synchronization, passwordless SSH login, host mappings, and hostnames are all configured already; the previous posts cover those steps in detail. This post walks through the distributed Hadoop setup itself. Your Hadoop version may differ, but the method is essentially the same. Since we are building a cluster, all three machines need all four steps below; you can configure everything on one machine first and then copy it to the other two (see the scp example at the end of Step 3).
Step 1: Download the Hadoop package
I'm using Hadoop 2.7.2; here is the release page, and any version you want is available the same way:
https://hadoop.apache.org/release/2.7.2.html
Just change the version number in the link to grab whichever release you need.
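If you prefer the command line, the tarball can be fetched directly from the Apache archive (the URL below assumes the standard mirror layout; swap in your version number):
wget https://archive.apache.org/dist/hadoop/common/hadoop-2.7.2/hadoop-2.7.2.tar.gz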
Step 2: Configure the Hadoop environment
1. After downloading the Hadoop tarball in Step 1, extract it. My tarball sits in /usr/local.
tar -zxvf hadoop-2.7.2.tar.gz   # extract the archive
2. After extracting, add the environment variables to /etc/profile.
① Check the full path of the extracted directory
cd /usr/local/hadoop-2.7.2
pwd   # print the full path
② Append the following to the end of /etc/profile
export HADOOP_HOME=/usr/local/hadoop-2.7.2
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
③ Apply the changes and check the Hadoop version
source /etc/profile   # apply the configuration
hadoop version   # check the version
If everything is configured correctly, the first line of the output should read "Hadoop 2.7.2"; if not, recheck /etc/profile for mistakes.
Step 3: Edit the configuration files
The files to edit all live under /usr/local/hadoop-2.7.2/etc/hadoop. You need to modify: hadoop-env.sh, yarn-env.sh, core-site.xml, hdfs-site.xml, mapred-site.xml, yarn-site.xml, and the slaves file.
- hadoop-env.sh
Set the JDK path (the JDK must already be installed on this machine):
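Something like the following (the exact path is a placeholder; use your own JDK directory):
export JAVA_HOME=/usr/local/jdk1.8.0_144   # hypothetical path, point it at your installed JDK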
- yarn-env.sh
Set the same JDK path here as well (again, the JDK already installed on this machine):
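Same placeholder as above:
export JAVA_HOME=/usr/local/jdk1.8.0_144   # hypothetical path, point it at your installed JDK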
- hdfs-site.xml
<configuration>
    <!-- Replication factor and block size (128 MB) -->
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.blocksize</name>
        <value>134217728</value>
    </property>
    <!-- ZooKeeper quorum (also set in core-site.xml) -->
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>master:2181,slave01:2181,slave02:2181</value>
    </property>
    <!-- Local storage for NameNode metadata and DataNode blocks -->
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/home/hadoop/tmp/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/home/hadoop/tmp/dfs/data</value>
    </property>
    <!-- Checkpointing on the standby NameNode -->
    <property>
        <name>dfs.namenode.checkpoint.period</name>
        <value>3600</value>
    </property>
    <property>
        <name>dfs.namenode.checkpoint.txns</name>
        <value>1000000</value>
    </property>
    <property>
        <name>dfs.namenode.checkpoint.check.period</name>
        <value>60</value>
    </property>
    <property>
        <name>dfs.namenode.num.checkpoints.retained</name>
        <value>2</value>
    </property>
    <!-- HA: one nameservice ("spark") with NameNodes on master and slave01 -->
    <property>
        <name>dfs.nameservices</name>
        <value>spark</value>
    </property>
    <property>
        <name>dfs.ha.namenodes.spark</name>
        <value>master,slave01</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.spark.master</name>
        <value>master:50070</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.spark.master</name>
        <value>master:9000</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.spark.slave01</name>
        <value>slave01:50070</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.spark.slave01</name>
        <value>slave01:9000</value>
    </property>
    <!-- JournalNode quorum that stores the shared edit log -->
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://192.168.102.101:8485;192.168.102.102:8485;192.168.102.103:8485/spark</value>
    </property>
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/home/hadoop/tmp/dfs/journal</value>
    </property>
    <!-- Automatic failover via ZKFC -->
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>
    <!-- Fence the old active NameNode over SSH during failover -->
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/root/.ssh/id_rsa</value>
    </property>
    <!-- Client-side proxy that locates the active NameNode -->
    <property>
        <name>dfs.client.failover.proxy.provider.spark</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
</configuration>
- core-site.xml
<configuration>
    <!-- Default filesystem: the HA nameservice defined in hdfs-site.xml -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://spark</value>
    </property>
    <!-- Base directory for Hadoop's working files -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/tmp</value>
    </property>
    <!-- ZooKeeper quorum used for automatic NameNode failover -->
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>master:2181,slave01:2181,slave02:2181</value>
    </property>
    <!-- Keep deleted files in the trash for 7 days (10080 minutes) -->
    <property>
        <name>fs.trash.interval</name>
        <value>10080</value>
    </property>
    <!-- Read/write buffer size (128 KB) -->
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
</configuration>
- mapred-site.xml
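If your etc/hadoop directory only contains mapred-site.xml.template (as stock Hadoop 2.7.x does), create the file first:
cp mapred-site.xml.template mapred-site.xml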
<configuration>
    <!-- Run MapReduce on YARN -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <!-- Job history server -->
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>master:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>master:19888</value>
    </property>
    <!-- Per-task resources -->
    <property>
        <name>mapreduce.map.memory.mb</name>
        <value>2048</value>
    </property>
    <property>
        <name>mapreduce.reduce.memory.mb</name>
        <value>2048</value>
    </property>
    <property>
        <name>mapreduce.map.cpu.vcores</name>
        <value>1</value>
    </property>
    <property>
        <name>mapreduce.reduce.cpu.vcores</name>
        <value>1</value>
    </property>
    <!-- Task timeout (ms) and shuffle parallelism -->
    <property>
        <name>mapreduce.task.timeout</name>
        <value>60000</value>
    </property>
    <property>
        <name>mapreduce.reduce.shuffle.parallelcopies</name>
        <value>100</value>
    </property>
</configuration>
- yarn-site.xml
<configuration>
    <!-- Site specific YARN configuration properties -->
    <!-- Shuffle service for MapReduce -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <!-- ResourceManager HA: rm1 on master, rm2 on slave01 -->
    <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2</value>
    </property>
    <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>qianqian</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm1</name>
        <value>master</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address.rm1</name>
        <value>master:8088</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm2</name>
        <value>slave01</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address.rm2</name>
        <value>slave01:8088</value>
    </property>
    <property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>master:2181,slave01:2181,slave02:2181</value>
    </property>
    <property>
        <name>yarn.resourcemanager.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
    <!-- Per-NodeManager resources; adjust to your machines' actual RAM and cores -->
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>51200</value>
    </property>
    <property>
        <name>yarn.scheduler.minimum-allocation-mb</name>
        <value>2048</value>
    </property>
    <property>
        <name>yarn.scheduler.maximum-allocation-mb</name>
        <value>4096</value>
    </property>
    <property>
        <name>yarn.nodemanager.resource.cpu-vcores</name>
        <value>24</value>
    </property>
    <property>
        <name>yarn.scheduler.minimum-allocation-vcores</name>
        <value>1</value>
    </property>
    <property>
        <name>yarn.scheduler.maximum-allocation-vcores</name>
        <value>2</value>
    </property>
</configuration>
- slaves
master
slave01
slave02
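Once all the files above are edited on the first machine, push the whole directory to the other two (I run as root, matching the SSH key configured in hdfs-site.xml), and remember that /etc/profile on slave01 and slave02 needs the same export lines from Step 2:
scp -r /usr/local/hadoop-2.7.2 root@slave01:/usr/local/
scp -r /usr/local/hadoop-2.7.2 root@slave02:/usr/local/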
Step 4: Start the cluster
I started my ZooKeeper cluster before configuring Hadoop; ZooKeeper does still need to be installed and running first.
- Start the JournalNode cluster: run the following command on every machine
hadoop-daemon.sh start journalnode
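A quick jps on each machine should now show a JournalNode process:
jps   # expect a JournalNode entry on every machine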
① Format the primary NameNode, on the first machine
hdfs namenode -format
Formatting is where things most often go wrong; before continuing, make sure the output contains a line like "has been successfully formatted".
② Start the primary NameNode, on the first machine
hadoop-daemon.sh start namenode
③ Bootstrap the standby NameNode, on the second machine
hdfs namenode -bootstrapStandby
④ Start the standby NameNode, on the second machine
hadoop-daemon.sh start namenode
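Note: since dfs.ha.automatic-failover.enabled is true, a ZKFC (DFSZKFailoverController) must also run beside each NameNode, or both NameNodes will stay in standby. This is the standard HA recipe: format the failover state in ZooKeeper once, then start the ZKFC on both NameNode machines:
hdfs zkfc -formatZK              # run once, on the first machine
hadoop-daemon.sh start zkfc      # run on both NameNode machines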
⑤ Start the DataNodes: run the following command on every node
hadoop-daemon.sh start datanode
⑥ Start the NodeManagers: run the following command on every node
yarn-daemon.sh start nodemanager
⑦ Start the primary ResourceManager, on the first machine
yarn-daemon.sh start resourcemanager
⑧ Start the standby ResourceManager, on the second machine
yarn-daemon.sh start resourcemanager
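For later restarts you don't have to start each daemon by hand: once ZooKeeper is up, the stock scripts on the first machine do most of the work (in my experience the standby ResourceManager on the second machine still needs to be started manually):
start-dfs.sh    # NameNodes, DataNodes, JournalNodes, ZKFCs
start-yarn.sh   # ResourceManager plus all NodeManagers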
Verify that everything is running
Run jps on each machine and compare against the roles assigned in the config files (the exact list depends on which daemons you started):
Master node master: NameNode, DataNode, JournalNode, DFSZKFailoverController, ResourceManager, NodeManager, QuorumPeerMain
Slave node slave01: NameNode, DataNode, JournalNode, DFSZKFailoverController, ResourceManager, NodeManager, QuorumPeerMain
Slave node slave02: DataNode, JournalNode, NodeManager, QuorumPeerMain
You can also check the cluster through the web UIs at IP:port.
Master node master (192.168.102.101): NameNode web UI at http://192.168.102.101:50070
Slave node slave01 (192.168.102.102): NameNode web UI at http://192.168.102.102:50070
One of the two should report itself as active and the other as standby; the ResourceManager web UI listens on port 8088 of the same two machines.