The following walks through configuring a three-VM Hadoop cluster.
I. Preliminary setup
1. Configure the hostname and a static IP on each server (all three nodes)
2. Edit /etc/hosts on each host and add the IP-to-hostname mappings (all three nodes)
3. Set up passwordless SSH login from the management node to the worker nodes (see the sketch after this list)
4. Install JDK 1.8 (all three nodes)
5. Disable the firewall (all three nodes)
6. Disable SELinux (all three nodes):
vi /etc/selinux/config
change SELINUX=enforcing to SELINUX=disabled
Reboot the system.
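A minimal sketch of steps 3, 5 and 6, assuming CentOS 6 (matching the centos6.9 tarball below) and the hostnames node01/node02/node03; run on node01, the management node:

ssh-keygen -t rsa                 # step 3: accept the defaults, empty passphrase
ssh-copy-id node01                # distribute the public key to every node,
ssh-copy-id node02                # including node01 itself
ssh-copy-id node03
service iptables stop             # step 5: CentOS 6 firewall off now...
chkconfig iptables off            # ...and keep it off across reboots
setenforce 0                      # step 6: SELinux permissive immediately; the config edit above makes it permanent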
II. Hadoop installation procedure
1. Upload the package and extract it
tar -zxvf hadoop-2.6.0-cdh5.14.0-with-centos6.9.tar.gz -C ../servers/
2. Check which compression codecs and native libraries Hadoop supports
a) ./hadoop checknative   (run from the bin directory of the Hadoop installation)
b) Install the OpenSSL development package, then re-check
yum -y install openssl-devel
./hadoop checknative
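Before openssl-devel is installed, checknative typically reports openssl: false; after installation it should report true. The output looks roughly like this (library paths and revisions vary by system):

Native library checking:
hadoop:  true /export/servers/hadoop-2.6.0-cdh5.14.0/lib/native/libhadoop.so
zlib:    true /lib64/libz.so.1
snappy:  true /export/servers/hadoop-2.6.0-cdh5.14.0/lib/native/libsnappy.so.1
lz4:     true revision:99
bzip2:   true /lib64/libbz2.so.1
openssl: true /usr/lib64/libcrypto.so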
3. Edit the configuration files (under /export/servers/hadoop-2.6.0-cdh5.14.0/etc/hadoop/)
a) core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://node01:8020</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/tempDatas</value>
</property>
<!-- I/O buffer size; in production, tune to the server's capacity -->
<property>
<name>io.file.buffer.size</name>
<value>4096</value>
</property>
<!-- Enable the HDFS trash so deleted data can be recovered; interval in minutes (10080 = 7 days) -->
<property>
<name>fs.trash.interval</name>
<value>10080</value>
</property>
</configuration>
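With trash enabled, hdfs dfs -rm moves a file into the deleting user's trash directory instead of removing it outright. A quick illustration once the cluster is up (the /input path and the root user are hypothetical):

hdfs dfs -rm /input/test.txt                      # moved to trash, not deleted
hdfs dfs -ls /user/root/.Trash/Current/input      # recoverable from here for up to 7 days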
b) hdfs-site.xml
<configuration>
<!-- Path where the NameNode stores its metadata. In production, decide the disk mount directories first; separate multiple directories with commas -->
<!-- Commissioning/decommissioning of cluster nodes (uncomment to enable):
<property>
<name>dfs.hosts</name>
<value>/export/servers/hadoop-2.6.0-cdh5.14.0/etc/hadoop/accept_host</value>
</property>
<property>
<name>dfs.hosts.exclude</name>
<value>/export/servers/hadoop-2.6.0-cdh5.14.0/etc/hadoop/deny_host</value>
</property>
-->
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>node01:50090</value>
</property>
<property>
<name>dfs.namenode.http-address</name>
<value>node01:50070</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:///export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/namenodeDatas</value>
</property>
<!-- DataNode data directories. In production, decide the disk mount directories first; separate multiple directories with commas -->
<property>
<name>dfs.datanode.data.dir</name>
<value>file:///export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/datanodeDatas</value>
</property>
<property>
<name>dfs.namenode.edits.dir</name>
<value>file:///export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/dfs/nn/edits</value>
</property>
<property>
<name>dfs.namenode.checkpoint.dir</name>
<value>file:///export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/dfs/snn/name</value>
</property>
<property>
<name>dfs.namenode.checkpoint.edits.dir</name>
<value>file:///export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/dfs/nn/snn/edits</value>
</property>
<!-- Replication factor: 2 copies of each block -->
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<!-- Disable HDFS permission checking; convenient on a test cluster -->
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
<!-- Block size: 134217728 bytes = 128 MB -->
<property>
<name>dfs.blocksize</name>
<value>134217728</value>
</property>
</configuration>
c) hadoop-env.sh
export JAVA_HOME=/export/servers/jdk1.8.0_141
d) mapred-site.xml
i. cp mapred-site.xml.template mapred-site.xml
ii. Edit:
<configuration>
<property>
<!-- Run MapReduce on YARN -->
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<!-- Uber mode: run sufficiently small jobs inside the ApplicationMaster JVM -->
<name>mapreduce.job.ubertask.enable</name>
<value>true</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>node01:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>node01:19888</value>
</property>
</configuration>
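The two jobhistory addresses above only matter if the history server is actually running; start-all.sh does not start it. It can be started separately on node01 from the sbin directory:

./mr-jobhistory-daemon.sh start historyserver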
e) yarn-site.xml
<configuration>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>node01</value>
</property>
<property>
<!-- Auxiliary service on the NodeManager; must be mapreduce_shuffle for MapReduce jobs to run -->
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>
f) slaves (one worker hostname per line; node01 doubles as master and worker here)
node01
node02
node03
4. Create the data directories (on node01; the distribution step below copies them along)
mkdir -p /export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/tempDatas
mkdir -p /export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/namenodeDatas
mkdir -p /export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/datanodeDatas
mkdir -p /export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/dfs/nn/edits
mkdir -p /export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/dfs/snn/name
mkdir -p /export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/dfs/nn/snn/edits
5. Distribute the installation to the other nodes (run from the parent directory so $PWD resolves correctly)
cd /export/servers
scp -r hadoop-2.6.0-cdh5.14.0/ node02:$PWD
scp -r hadoop-2.6.0-cdh5.14.0/ node03:$PWD
6. Configure the Hadoop environment variables (all three nodes)
a) Create the file /etc/profile.d/hadoop.sh with the following content:
export HADOOP_HOME=/export/servers/hadoop-2.6.0-cdh5.14.0
export PATH=$PATH:$HADOOP_HOME/bin
b) source /etc/profile
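Before the very first start, the NameNode has to be formatted once, on node01 only (re-running this on a working cluster destroys all HDFS metadata):

hdfs namenode -format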
7. Start the cluster
Run the scripts from the sbin directory of the extracted Hadoop installation:
Start: ./start-all.sh    Stop: ./stop-all.sh
Check the web UIs in a browser:
http://<master-IP>:50070   (HDFS NameNode)
http://<master-IP>:8088    (YARN ResourceManager)
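To confirm every daemon came up, run jps on each node. With the configuration above, roughly the following is expected (node01 is listed in slaves, so it also runs the worker daemons):

jps
# node01: NameNode, SecondaryNameNode, DataNode, ResourceManager, NodeManager
# node02 / node03: DataNode, NodeManager

As an end-to-end smoke test, one of the bundled example jobs can be submitted (the exact jar name under share/hadoop/mapreduce may differ by release):

yarn jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0-cdh5.14.0.jar pi 2 5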