I. Environment
1. Machines: one physical machine (MASTER) and one virtual machine (SLAVE)
2. Cluster nodes: two — MASTER (master) and SLAVE (slave)
MASTER 10.12.2.182
SLAVE 10.12.2.90
3. Set the hostnames
1) vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=MASTER
2) vi /etc/hostname
MASTER
3) vi /etc/hosts
10.12.2.182 MASTER
10.12.2.90 SLAVE
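The hosts entries above can be applied idempotently and checked in one short script; this is a sketch that assumes the same two names and addresses:

```shell
# Append the two cluster entries to /etc/hosts if they are missing,
# then verify that the name MASTER resolves.
grep -q 'MASTER' /etc/hosts || cat >> /etc/hosts <<'EOF'
10.12.2.182 MASTER
10.12.2.90 SLAVE
EOF
getent hosts MASTER
```

Run the same check for SLAVE, and on both machines, so each node can resolve the other by name.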
II. Preparation
1. Install the JDK
Download the JDK package and extract it to /usr/java/jdk1.8.0_111.
Configure the environment variables in ~/.bashrc:
export JAVA_HOME=/usr/java/jdk1.8.0_111
export JRE_HOME=/usr/java/jdk1.8.0_111/jre
export CLASSPATH=/usr/java/jdk1.8.0_111/lib
export PATH=$JAVA_HOME/bin:$PATH
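The four export lines can be appended and reloaded in one step; a sketch, assuming the JDK really is unpacked at /usr/java/jdk1.8.0_111 (adjust the path to your install):

```shell
# Append the JDK variables to ~/.bashrc only once, then reload the file.
grep -q 'jdk1.8.0_111' ~/.bashrc 2>/dev/null || cat >> ~/.bashrc <<'EOF'
export JAVA_HOME=/usr/java/jdk1.8.0_111
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=$JAVA_HOME/lib
export PATH=$JAVA_HOME/bin:$PATH
EOF
. ~/.bashrc
echo "$JAVA_HOME"
```

After reloading, `java -version` should report JDK 1.8.0_111.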
2. Change the root user's home directory:
[root@MASTER hadoop-2.6.0]# cat /etc/passwd
Change the home directory from /home/rli
root:x:0:0:root:/home/rli:/bin/bash
to
root:x:0:0:root:/root:/bin/bash
1) Log out, then log back in as root.
2) If the login shell misbehaves, copy all of the /home/rli/.bash* files to /root/ before logging in.
3. Passwordless SSH setup
First, on MASTER:
Enter the .ssh directory: [root@MASTER ]$ cd ~/.ssh/
Generate a key pair: ssh-keygen -t rsa, pressing Enter at each prompt.
This produces two files: id_rsa and id_rsa.pub.
Create the authorized_keys file: [root@MASTER .ssh]$ cat id_rsa.pub >> authorized_keys
On the other machine, SLAVE, generate a key pair with the same steps.
Copy SLAVE's id_rsa.pub file to MASTER: [root@SLAVE .ssh]$ scp id_rsa.pub root@10.12.2.182:~/.ssh/id_rsa.pub_sl
Then, on MASTER, merge it into authorized_keys: [root@MASTER .ssh]$ cat id_rsa.pub_sl >> authorized_keys
Copy authorized_keys to SLAVE (note the target is the SLAVE address): [root@MASTER .ssh]$ scp authorized_keys root@10.12.2.90:~/.ssh/
Now, on both machines, set the .ssh/ directory permissions to 700 and the authorized_keys permissions to 600 (or 644):
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
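The per-machine part of the steps above (key generation, local authorized_keys, permissions) can be scripted; a sketch to run on each node, with the cross-machine scp and merge still done manually as described:

```shell
# Generate an RSA key pair non-interactively (if absent), authorize it
# locally, and set the permissions sshd requires for key-based login.
mkdir -p ~/.ssh && chmod 700 ~/.ssh
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```

Afterwards, `ssh root@10.12.2.90 hostname` from MASTER should print SLAVE without prompting for a password.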
III. Install Hadoop
Start from the downloaded hadoop-2.6.0.tar.gz archive.
1. Extract it: tar -xzvf hadoop-2.6.0.tar.gz
2. Move it to the target directory: [root@MASTER ]$ mv hadoop-2.6.0 /root/cluster/opt/
3. Enter the hadoop directory: [root@MASTER ~]# cd /root/cluster/opt/hadoop-2.6.0/
[root@MASTER hadoop-2.6.0]# pwd
/root/cluster/opt/hadoop-2.6.0
4. Before configuring, create the following directories on the local filesystem:
/root/cluster/opt/hadoop-2.6.0/tmp
/root/cluster/opt/hadoop-2.6.0/dfs/data
/root/cluster/opt/hadoop-2.6.0/dfs/name
Seven configuration files are involved, all under hadoop-2.6.0/etc/hadoop; they can be edited with gedit or any text editor:
/root/cluster/opt/hadoop-2.6.0/etc/hadoop/hadoop-env.sh
/root/cluster/opt/hadoop-2.6.0/etc/hadoop/yarn-env.sh
/root/cluster/opt/hadoop-2.6.0/etc/hadoop/slaves
/root/cluster/opt/hadoop-2.6.0/etc/hadoop/core-site.xml
/root/cluster/opt/hadoop-2.6.0/etc/hadoop/hdfs-site.xml
/root/cluster/opt/hadoop-2.6.0/etc/hadoop/mapred-site.xml
/root/cluster/opt/hadoop-2.6.0/etc/hadoop/yarn-site.xml
[root@MASTER hadoop-2.6.0]# ls
bin etc lib LICENSE.txt NOTICE.txt sbin tmp
dfs include libexec logs README.txt share
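The three working directories from step 4 can be created in one command (paths exactly as listed above):

```shell
# Create the tmp and dfs working directories under the Hadoop install root.
H=/root/cluster/opt/hadoop-2.6.0
mkdir -p "$H/tmp" "$H/dfs/name" "$H/dfs/data"
ls "$H/dfs"
```

Run the same command on SLAVE as well (or rely on the scp of the whole tree in step 5 below).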
Enter the Hadoop configuration directory:
[root@MASTER hadoop-2.6.0]$ cd etc/hadoop/
[root@MASTER hadoop]$ ls
capacity-scheduler.xml hadoop-env.sh httpfs-env.sh kms-env.sh mapred-env.sh ssl-client.xml.example
configuration.xsl hadoop-metrics2.properties httpfs-log4j.properties kms-log4j.properties mapred-queues.xml.template ssl-server.xml.example
container-executor.cfg hadoop-metrics.properties httpfs-signature.secret kms-site.xml mapred-site.xml yarn-env.cmd
core-site.xml hadoop-policy.xml httpfs-site.xml log4j.properties mapred-site.xml.template yarn-env.sh
hadoop-env.cmd hdfs-site.xml kms-acls.xml mapred-env.cmd slaves yarn-site.xml
4.1. Configure hadoop-env.sh --> set JAVA_HOME
# The java implementation to use.
export JAVA_HOME=$JAVA_HOME (left as-is, since JAVA_HOME is already set in the environment; if daemons started over SSH fail with "JAVA_HOME is not set", hard-code the JDK path here instead)
4.2. Configure yarn-env.sh --> set JAVA_HOME
# some Java parameters
export JAVA_HOME=$JAVA_HOME
4.3. Configure the slaves file --> add the slave node
10.12.2.90
(With only 10.12.2.90 listed, the DataNode and NodeManager run only on SLAVE; add 10.12.2.182 as well if MASTER should also store data.)
4.4. Configure core-site.xml --> add the core Hadoop settings (HDFS on port 9000; temporary directory file:/root/cluster/opt/hadoop-2.6.0/tmp)
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://10.12.2.182:9000</value>
</property>
<property>
<name>io.file.buffer.size</name>
<value>131072</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>file:/root/cluster/opt/hadoop-2.6.0/tmp</value>
<description>A base for other temporary directories.</description>
</property>
<property>
<name>hadoop.proxyuser.spark.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.spark.groups</name>
<value>*</value>
</property>
</configuration>
4.5. Configure hdfs-site.xml --> add the HDFS settings (SecondaryNameNode address, NameNode and DataNode directories). Note that dfs.replication is set to 3 below; with a single DataNode, a value of 1 avoids under-replicated-block warnings.
<configuration>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>10.12.2.182:9001</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/root/cluster/opt/hadoop-2.6.0/dfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/root/cluster/opt/hadoop-2.6.0/dfs/data</value>
</property>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
</configuration>
4.6. Configure mapred-site.xml --> add the MapReduce settings (use the YARN framework; JobHistory server address and web address). Stock Hadoop 2.6.0 ships only mapred-site.xml.template, so copy it to mapred-site.xml first if that file does not exist yet.
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>10.12.2.182:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>10.12.2.182:19888</value>
</property>
</configuration>
4.7. Configure yarn-site.xml --> enable YARN
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>10.12.2.182:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>10.12.2.182:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>10.12.2.182:8035</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>10.12.2.182:8033</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>10.12.2.182:8088</value>
</property>
</configuration>
5. Copy the configured hadoop directory to the slave machine:
[root@MASTER opt]$ scp -r hadoop-2.6.0/ root@10.12.2.90:/root/cluster/opt/
IV. Verification
1. Format the NameNode:
[root@MASTER opt]$ cd hadoop-2.6.0/
[root@MASTER hadoop-2.6.0]$ ls
bin dfs etc include input lib libexec LICENSE.txt logs NOTICE.txt README.txt sbin share tmp
[root@MASTER hadoop-2.6.0]$ ./bin/hdfs namenode -format
2. Start HDFS:
[root@MASTER hadoop-2.6.0]$ ./sbin/start-dfs.sh
3. Stop HDFS:
[root@MASTER hadoop-2.6.0]$ ./sbin/stop-dfs.sh
4. Start YARN:
[root@MASTER hadoop-2.6.0]$ ./sbin/start-yarn.sh
[root@MASTER hadoop-2.6.0]$ jps
5. Stop YARN:
[root@MASTER hadoop-2.6.0]$ ./sbin/stop-yarn.sh
6. Check the cluster status:
[root@MASTER hadoop-2.6.0]$ ./bin/hdfs dfsadmin -report
7. HDFS web UI: http://10.12.2.182:50070/
8. ResourceManager web UI: http://10.12.2.182:8088/
9. Start the whole Hadoop cluster:
[root@MASTER hadoop-2.6.0]$ ./sbin/start-all.sh
[root@MASTER hadoop-2.6.0]# jps
11253 Jps
9977 NameNode
10410 ResourceManager
10253 SecondaryNameNode
[root@SLAVE hadoop-2.6.0]# jps
2162 NodeManager
2091 DataNode
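Once the daemons above are all running, a quick smoke test confirms the NameNode and DataNode are actually talking to each other; these commands only work on the live cluster (run on MASTER):

```shell
# Create a test directory in HDFS, list the root, and print the first
# lines of the dfsadmin report (live capacity and DataNode count).
cd /root/cluster/opt/hadoop-2.6.0
./bin/hdfs dfs -mkdir -p /smoketest
./bin/hdfs dfs -ls /
./bin/hdfs dfsadmin -report | head -n 5
```

The report should show one live DataNode (SLAVE) and non-zero configured capacity; if it shows zero DataNodes, check the slaves file and the SSH setup from section II.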