This deployment uses three Alibaba Cloud hosts to set up the cluster:
1. Versions
| Component | Version | Notes and download link |
|---|---|---|
| CentOS | 7.2 64-bit | Use lsb_release -a to check the OS version and file /bin/ls to check whether the OS is 32- or 64-bit |
| Hadoop | hadoop-2.6.0-cdh5.15.1.tar | Built by downloading the source code and compiling it yourself |
| JDK | java version "1.8.0_45" | https://www.oracle.com/technetwork/java/javase/downloads/index.html |
| Zookeeper | zookeeper-3.4.6.tar.gz | Coordination service used for hot failover (HA) and for the state that YARN stores in ZooKeeper; https://zookeeper.apache.org/doc/r3.4.6/releasenotes.html |
2. Host planning
| IP | Host | Installed software | Processes |
|---|---|---|---|
| 172.16.128.58 | hadoop001 | Hadoop, Zookeeper | NameNode, DFSZKFailoverController, JournalNode, DataNode, ResourceManager, JobHistoryServer, NodeManager, QuorumPeerMain |
| 172.16.128.56 | hadoop002 | Hadoop, Zookeeper | NameNode, DFSZKFailoverController, JournalNode, DataNode, ResourceManager, NodeManager, QuorumPeerMain |
| 172.16.128.57 | hadoop003 | Hadoop, Zookeeper | JournalNode, DataNode, NodeManager, QuorumPeerMain |
3. Environment preparation
Since I am using Alibaba Cloud hosts, all I need to do is bind IP addresses to hostnames and enable the three servers to communicate with each other.
- Bind IP addresses to hostnames
[root@hadoop001 ~]# vi /etc/hosts
172.16.128.58 hadoop001
172.16.128.56 hadoop002
172.16.128.57 hadoop003
Verify: ping hadoop002
- Set up passwordless SSH between the three machines
[root@hadoop001 ~]# yum install -y lrzsz
# Note: add a hadoop user here (no password is set); the cluster deployment is done under the hadoop user
[root@hadoop001 ~]# useradd hadoop
[root@hadoop001 ~]# su - hadoop
# Configure SSH
[hadoop@hadoop001 ~]$ ssh-keygen
# Append this machine's public key to authorized_keys
[hadoop@hadoop001 ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
# Copy the public keys from the other two servers into the current directory and append them to authorized_keys as well
[hadoop@hadoop001 .ssh]$ cat id_rsa.pub2 >> authorized_keys
# Copy authorized_keys to the other two servers, replacing any existing authorized_keys there
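For example, one way to push the file (a sketch; it assumes password login is still possible for the hadoop user on hadoop002/003, and that the keys copied from those hosts were saved locally under names such as id_rsa.pub2):
[hadoop@hadoop001 .ssh]$ scp authorized_keys hadoop@hadoop002:~/.ssh/authorized_keys
[hadoop@hadoop001 .ssh]$ scp authorized_keys hadoop@hadoop003:~/.ssh/authorized_keys
# Permissions matter: sshd ignores the key file if .ssh or authorized_keys is too open (run on every node)
[hadoop@hadoop001 .ssh]$ chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys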
Verification (run the following 3 commands on each machine; if you only have to type "yes" and are never asked for a password, the three machines can communicate with each other):
ssh hadoop@hadoop001 date
ssh hadoop@hadoop002 date
ssh hadoop@hadoop003 date
- Install the JDK
This is not described in detail here; look it up online if you are not familiar with it.
- Install Zookeeper
1. Extract the archive
[hadoop@hadoop001 software]$ tar -zxvf zookeeper-3.4.6.tar.gz -C /home/hadoop/app/
# Create a symlink (in the directory the archive was extracted into)
[hadoop@hadoop001 app]$ ln -s zookeeper-3.4.6/ zookeeper
2. Modify the configuration
# Go to the conf directory under zookeeper
[hadoop@hadoop001 app]$ cd /home/hadoop/app/zookeeper-3.4.6/conf
[hadoop@hadoop001 conf]$ cp zoo_sample.cfg zoo.cfg
# Edit the configuration file
[hadoop@hadoop001 conf]$ vi zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
# Note: you need to create this data directory yourself
dataDir=/home/hadoop/app/zookeeper-3.4.6/data
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
# Cluster membership; hadoop001 etc. can be replaced with IP addresses here
server.1=hadoop001:2888:3888
server.2=hadoop002:2888:3888
server.3=hadoop003:2888:3888
3. Create myid
[hadoop@hadoop001 zookeeper-3.4.6]$ mkdir data
[hadoop@hadoop001 zookeeper-3.4.6]$ touch data/myid
# Write this node's server id into myid
[hadoop@hadoop001 zookeeper-3.4.6]$ echo 1 > data/myid
4. Configure environment variables
Skipped…
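A minimal sketch of what this step usually looks like (an assumption on my part, mirroring the Hadoop variables configured later; the profile file, e.g. ~/.bash_profile, may differ on your system):
[hadoop@hadoop001 ~]$ vi ~/.bash_profile
export ZOOKEEPER_HOME=/home/hadoop/app/zookeeper
export PATH=$ZOOKEEPER_HOME/bin:$PATH
[hadoop@hadoop001 ~]$ source ~/.bash_profile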
Configure hadoop002 and hadoop003 in the same way, writing 2 and 3 into their myid files respectively.
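Once all three nodes are configured, the ensemble can be started and checked; a quick sketch using the standard ZooKeeper scripts, assuming zookeeper/bin is on the PATH as in the sketch above (otherwise use the full path /home/hadoop/app/zookeeper/bin/zkServer.sh):
[hadoop@hadoop001 ~]$ zkServer.sh start    # run on hadoop001, hadoop002 and hadoop003
[hadoop@hadoop001 ~]$ zkServer.sh status   # one node should report "Mode: leader", the other two "Mode: follower"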
- Install Hadoop
# Extract
[hadoop@hadoop001 software]$ tar -xvf hadoop-2.6.0-cdh5.15.1.tar -C /home/hadoop/app/
# Create a symlink (in the directory the archive was extracted into)
[hadoop@hadoop001 app]$ ln -s hadoop-2.6.0-cdh5.15.1/ hadoop
# Configure environment variables
export HADOOP_HOME=/home/hadoop/app/hadoop/
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
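After editing the profile, reload it and confirm Hadoop is on the PATH (assuming the variables were added to ~/.bash_profile):
[hadoop@hadoop001 ~]$ source ~/.bash_profile
[hadoop@hadoop001 ~]$ hadoop version   # should print Hadoop 2.6.0-cdh5.15.1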
- Modify hadoop-env.sh
[hadoop@hadoop001 software]$ cd ~/app/hadoop/etc/hadoop
[hadoop@hadoop001 hadoop]$ vim hadoop-env.sh
Replace export JAVA_HOME=${JAVA_HOME} with the path to your JDK:
export JAVA_HOME=/home/hadoop/app/java
- core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<!-- YARN needs fs.defaultFS to specify the NameNode URI -->
<property>
<name>fs.defaultFS</name>
<value>hdfs://ruozeclusterg7</value>
</property>
<!--============================== Trash mechanism ======================================= -->
<property>
<!-- How often a checkpoint is created: the checkpointer running on the NameNode creates checkpoints from the Current folder; default: 0, meaning the value of fs.trash.interval is used -->
<name>fs.trash.checkpoint.interval</name>
<value>0</value>
</property>
<property>
<!-- After how many minutes checkpoint directories under .Trash are deleted; the server-side setting takes precedence over the client-side one; default: 0, meaning never delete -->
<name>fs.trash.interval</name>
<value>1440</value>
</property>
<!-- Hadoop temporary directory. hadoop.tmp.dir is a base setting that the Hadoop filesystem depends on and from which many other paths are derived; if the NameNode and DataNode storage locations are not configured in hdfs-site.xml, they default to this path -->
<property>
<name>hadoop.tmp.dir</name>
<value>/home/hadoop/tmp/hadoop</value>
</property>
<!-- ZooKeeper quorum addresses -->
<property>
<name>ha.zookeeper.quorum</name>
<value>hadoop001:2181,hadoop002:2181,hadoop003:2181</value>
</property>
<!-- ZooKeeper session timeout, in milliseconds -->
<property>
<name>ha.zookeeper.session-timeout.ms</name>
<value>2000</value>
</property>
<property>
<name>hadoop.proxyuser.hadoop.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.hadoop.groups</name>
<value>*</value>
</property>
<property>
<name>io.compression.codecs</name>
<value>org.apache.hadoop.io.compress.GzipCodec,
org.apache.hadoop.io.compress.DefaultCodec,
org.apache.hadoop.io.compress.BZip2Codec,
org.apache.hadoop.io.compress.SnappyCodec
</value>
</property>
</configuration>
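One small step worth doing right away (a sketch; the path matches the hadoop.tmp.dir value above): create the temporary directory by hand on every node so that it exists and is owned by the hadoop user.
[hadoop@hadoop001 ~]$ mkdir -p /home/hadoop/tmp/hadoop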
- Modify hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
