I am using four servers here: master01, master02, slaver01, slaver02.
I. Install SSH and change the root password
See: https://blog.csdn.net/weixin_36104843/article/details/80208372
https://blog.csdn.net/weixin_36104843/article/details/80208345
II. Edit the hosts file
See: https://blog.csdn.net/weixin_36104843/article/details/80210865
III. Set up passwordless SSH login for root
See: https://blog.csdn.net/weixin_36104843/article/details/80210898
IV. Install the Oracle JDK and configure the environment variables
See: https://blog.csdn.net/weixin_36104843/article/details/80210490
V. Install ZooKeeper
See: https://blog.csdn.net/weixin_36104843/article/details/80211404
------------------------------------------------------------------------------------------------
VI. Installing Hadoop
1. cd ~ (go to the home directory)
2. Download the offline installation package:
wget http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-2.9.0/hadoop-2.9.0.tar.gz
(It's the ~350 MB archive, so the download takes a while; be patient.)
3. Extract the archive:
tar -zxvf hadoop-2.9.0.tar.gz
4. Move it to /usr/local and rename it to hadoop:
mv hadoop-2.9.0 /usr/local/hadoop
5. Configure the Hadoop environment variables:
nano /etc/profile
Add the following lines:
export HADOOP_HOME=/usr/local/hadoop
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
export HADOOP_INSTALL=$HADOOP_HOME
Save and exit, then run source /etc/profile.
Run the command
hadoop version
to check whether the Hadoop environment variables were set correctly.
6. Set JAVA_HOME in hadoop-env.sh, mapred-env.sh, and yarn-env.sh
In each file, change the export JAVA_HOME line to:
export JAVA_HOME=/usr/lib/jvm/java-8-oracle
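The path above assumes the Oracle JDK was installed under /usr/lib/jvm/java-8-oracle (the usual location for the Ubuntu Oracle Java 8 package); if yours is somewhere else, list the installed JDK directories and adjust accordingly:
ls /usr/lib/jvm/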
7. Edit the configuration files (they live in $HADOOP_HOME/etc/hadoop, i.e. /usr/local/hadoop/etc/hadoop)
------------------------------File: core-site.xml
<configuration>
<!-- Here bi is the logical name for the pair of NameNodes -->
<property>
<name>fs.defaultFS</name>
<value>hdfs://bi/</value>
</property>
<!-- Hadoop temporary directory -->
<property>
<name>hadoop.tmp.dir</name>
<value>/usr/local/hadoop/data</value>
</property>
<!-- ZooKeeper quorum addresses -->
<property>
<name>ha.zookeeper.quorum</name>
<value>master01:2181,master02:2181,slaver01:2181,slaver02:2181</value>
</property>
</configuration>
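hadoop.tmp.dir above points at /usr/local/hadoop/data. Hadoop will normally create it on first use, but pre-creating it (an optional step, not in the original write-up) rules out permission surprises:
mkdir -p /usr/local/hadoop/data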
-----------------------------------File: hdfs-site.xml
<configuration>
<!-- The HDFS nameservice is bi; it must match the value used in core-site.xml -->
<property>
<name>dfs.nameservices</name>
<value>bi</value>
</property>
<!-- bi has two NameNodes, nn1 and nn2; these are also logical names you may pick yourself, but the properties below must change accordingly -->
<property>
<name>dfs.ha.namenodes.bi</name>
<value>nn1,nn2</value>
</property>
<!-- RPC address of nn1 -->
<property>
<name>dfs.namenode.rpc-address.bi.nn1</name>
<value>master01:9000</value>
</property>
<!-- HTTP address of nn1 -->
<property>
<name>dfs.namenode.http-address.bi.nn1</name>
<value>master01:50070</value>
</property>
<!-- RPC address of nn2 -->
<property>
<name>dfs.namenode.rpc-address.bi.nn2</name>
<value>master02:9000</value>
</property>
<!-- HTTP address of nn2 -->
<property>
<name>dfs.namenode.http-address.bi.nn2</name>
<value>master02:50070</value>
</property>
<!-- Where the NameNode edits metadata is stored on the JournalNodes -->
<property>
<name>dfs.journalnode.edits.dir</name>
<value>/usr/local/hadoop/qjournal</value>
</property>
<!-- Enable automatic NameNode failover -->
<property>
<name>dfs.ha.automatic-failover.enabled</name>
<value>true</value>
</property>
<!-- How clients locate the active NameNode during failover -->
<property>
<name>dfs.client.failover.proxy.provider.bi</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<!-- Fencing methods; separate multiple methods with newlines, one method per line -->
<property>
<name>dfs.ha.fencing.methods</name>
<value>
sshfence
shell(/bin/true)
</value>
</property>
<!-- sshfence requires passwordless SSH -->
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/root/.ssh/id_rsa</value>
</property>
<!-- Timeout for the sshfence mechanism -->
<property>
<name>dfs.ha.fencing.ssh.connect-timeout</name>
<value>30000</value>
</property>
</configuration>
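Likewise, the JournalNode edits directory configured above can be pre-created (optional; the JournalNode will otherwise create it itself):
mkdir -p /usr/local/hadoop/qjournal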
-------------------------------------------------File: yarn-site.xml
<configuration>
<!-- Site specific YARN configuration properties -->
<!-- Enable ResourceManager HA -->
<property>
<name>yarn.resourcemanager.ha.enabled</name>
<value>true</value>
</property>
<!-- Cluster id of the RMs -->
<property>
<name>yarn.resourcemanager.cluster-id</name>
<value>yrc</value>
</property>
<!-- Logical ids of the RMs -->
<property>
<name>yarn.resourcemanager.ha.rm-ids</name>
<value>rm1,rm2</value>
</property>
<!-- Hostnames of the two RMs -->
<property>
<name>yarn.resourcemanager.hostname.rm1</name>
<value>master01</value>
</property>
<property>
<name>yarn.resourcemanager.hostname.rm2</name>
<value>master02</value>
</property>
<!-- ZooKeeper cluster addresses -->
<property>
<name>yarn.resourcemanager.zk-address</name>
<value>master01:2181,master02:2181,slaver01:2181,slaver02:2181</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>
----------------------File: mapred-site.xml
<configuration>
<!-- Run MapReduce on YARN -->
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<!-- HTTP address of the JobHistory server -->
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>master01:19888</value>
</property>
<!-- Enable uber mode (an optimization for small jobs) -->
<property>
<name>mapreduce.job.ubertask.enable</name>
<value>true</value>
</property>
<!-- Maximum number of maps for a job to run in uber mode -->
<property>
<name>mapreduce.job.ubertask.maxmaps</name>
<value>9</value>
</property>
<!-- Maximum number of reduces for a job to run in uber mode -->
<property>
<name>mapreduce.job.ubertask.maxreduces</name>
<value>1</value>
</property>
</configuration>
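Note that mapreduce.jobhistory.webapp.address only tells clients where to find the JobHistory server; start-dfs.sh and start-yarn.sh do not start it. Once the cluster is running (section VIII), start it on master01 with:
mr-jobhistory-daemon.sh start historyserver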
8. Edit the slaves file
Remove localhost and add the hostnames of the slave nodes:
slaver01
slaver02
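The slaves file sits next to the other configuration files, so with the layout used here:
nano /usr/local/hadoop/etc/hadoop/slaves
With slaver01 and slaver02 listed, start-dfs.sh will start a DataNode (and start-yarn.sh a NodeManager) on each of them.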
VII. Copy the hadoop directory to /usr/local on each of the other machines (run the commands below from /usr/local on master01) and set the Hadoop environment variables on them as well
scp -r hadoop root@master02:/usr/local/
scp -r hadoop root@slaver02:/usr/local/
scp -r hadoop root@slaver01:/usr/local/
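The post does not show how the environment variables get onto the other machines; assuming all four were installed identically, one sketch is simply to copy /etc/profile from master01 to the rest:
for host in master02 slaver01 slaver02; do
  scp /etc/profile root@$host:/etc/profile
done
Then run source /etc/profile on each node (or just log in again) so the variables take effect.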
VIII. Startup order
1. Start the ZooKeeper cluster first:
zkServer.sh start
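Remember that zkServer.sh start must be run on every node listed in ha.zookeeper.quorum; you can then check each node's role (leader or follower) with:
zkServer.sh status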
2. Start the JournalNodes on slaver01 and slaver02:
hadoop-daemon.sh start journalnode
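A quick sanity check is to run jps on slaver01 and slaver02 and confirm that a JournalNode process is listed:
jps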
3. Format HDFS (run the format on master01 or master02):
hdfs namenode -format
Note: after formatting, to keep the data of the two NameNodes consistent, you need to copy the NameNode data directory from master01 to the corresponding directory on master02.
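With hadoop.tmp.dir set to /usr/local/hadoop/data, the NameNode metadata lands under that directory, so one sketch (assuming nothing else has written to it yet) is:
scp -r /usr/local/hadoop/data root@master02:/usr/local/hadoop/
Alternatively, start the NameNode on master01 first (hadoop-daemon.sh start namenode) and then run hdfs namenode -bootstrapStandby on master02, which copies the metadata over for you.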
4. Format ZKFC (run on master01 or master02):
hdfs zkfc -formatZK
5. Start HDFS (on master01 or master02):
start-dfs.sh
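After start-dfs.sh finishes you can check which NameNode became active (standard Hadoop 2.x commands, not shown in the original post):
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
One should report active and the other standby; the NameNode web UIs are at http://master01:50070 and http://master02:50070, matching the addresses configured in hdfs-site.xml.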
6. Start YARN (on master01 or master02):
start-yarn.sh
Note: after starting YARN on master01, there is no ResourceManager process on master02; it has to be started manually with the following command:
yarn-daemon.sh start resourcemanager
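Likewise, once both ResourceManagers are up, their HA state can be checked with:
yarn rmadmin -getServiceState rm1
yarn rmadmin -getServiceState rm2
The active ResourceManager's web UI listens on port 8088 by default (e.g. http://master01:8088).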
References: https://www.cnblogs.com/taichu/p/5264185.html
https://blog.csdn.net/hliq5399/article/details/78193113
https://blog.csdn.net/ypersistence/article/details/77678650