Hadoop 2.7 Fully Distributed Installation

Installation Preparation

Operating system: CentOS 7
Three machines (hadoop-0 is the master):

hadoop-0:192.168.116.130
hadoop-1:192.168.116.131
hadoop-2:192.168.116.132

Software packages:

Hadoop download:
http://apache.fayea.com/hadoop/common/hadoop-2.7.3/hadoop-2.7.3.tar.gz
JDK download:
http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html

Configure the System Environment

Create a hadoop user on each of hadoop-0, hadoop-1, and hadoop-2 (as root):
$ useradd hadoop
$ passwd hadoop
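Later steps run sudo as the hadoop user; on CentOS 7 you can grant that by adding the user to the wheel group (run as root; adjust to your own sudo policy):

usermod -aG wheel hadoop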
Set the hostname

On host 192.168.116.130:
echo "hadoop-0" > /etc/hostname
On host 192.168.116.131:
echo "hadoop-1" > /etc/hostname
On host 192.168.116.132:
echo "hadoop-2" > /etc/hostname
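On CentOS 7, hostnamectl does the same and applies the change immediately without a reboot (run the matching command on each host):

hostnamectl set-hostname hadoop-0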

On each of hadoop-0, hadoop-1, and hadoop-2, add all three hosts to the hosts file:

echo "192.168.116.130 hadoop-0" >> /etc/hosts
echo "192.168.116.131 hadoop-1" >> /etc/hosts
echo "192.168.116.132 hadoop-2" >> /etc/hosts

Ping test between the hosts

ping hadoop-0
ping hadoop-1
ping hadoop-2

Set up passwordless SSH login (logged in as the hadoop user)
Generate a key pair on each of hadoop-0, hadoop-1, and hadoop-2:
$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ chmod 0600 ~/.ssh/authorized_keys

Collect the ~/.ssh/id_rsa.pub files from all three machines into a single authorized_keys file, replace ~/.ssh/authorized_keys on all three machines with that file, and give it 0600 permissions.
Copy commands (run the scp on the source machine and the mv on the destination; note the mv overwrites the destination's authorized_keys, so only copy an already-aggregated file this way):

$ scp ~/.ssh/id_rsa.pub hadoop@hadoop-1:/tmp/authorized_keys
$ mv /tmp/authorized_keys ~/.ssh/
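A minimal end-to-end sketch, run from hadoop-0 (assumes password SSH still works and the hostnames resolve as configured above):

# on hadoop-0: pull the other two public keys and append them locally
$ scp hadoop@hadoop-1:~/.ssh/id_rsa.pub /tmp/id_rsa.pub.hadoop-1
$ scp hadoop@hadoop-2:~/.ssh/id_rsa.pub /tmp/id_rsa.pub.hadoop-2
$ cat /tmp/id_rsa.pub.hadoop-1 /tmp/id_rsa.pub.hadoop-2 >> ~/.ssh/authorized_keys
# push the combined file back out and fix its permissions
$ scp ~/.ssh/authorized_keys hadoop@hadoop-1:~/.ssh/authorized_keys
$ scp ~/.ssh/authorized_keys hadoop@hadoop-2:~/.ssh/authorized_keys
$ ssh hadoop@hadoop-1 'chmod 0600 ~/.ssh/authorized_keys'
$ ssh hadoop@hadoop-2 'chmod 0600 ~/.ssh/authorized_keys'
# verify: these should log in without prompting for a password
$ ssh hadoop-1 hostname
$ ssh hadoop-2 hostname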

Install the JDK

Extract the package (this example installs under /opt, matching JAVA_HOME below):
tar -zxvf jdk-8u101-linux-x64.tar.gz -C /opt/
Set the environment variables:
a. Edit the profile file

vi /etc/profile

b. Append the following lines at the end of the file; adjust JAVA_HOME to match your install path
JAVA_HOME=/opt/jdk1.8.0_101
CLASSPATH=.:$JAVA_HOME/lib
PATH=$JAVA_HOME/bin:$PATH
export JAVA_HOME CLASSPATH PATH
c. Apply the changes

source /etc/profile

Verify:
$ java -version
java version "1.8.0_101"
Java(TM) SE Runtime Environment (build 1.8.0_101-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.101-b13, mixed mode)

Install Hadoop

Extract hadoop-2.7.3.tar.gz (assumed here to sit under /tmp) and move it to the install directory (/opt/hadoop in this example):
$ tar -zxvf hadoop-2.7.3.tar.gz
$ mv /tmp/hadoop-2.7.3 /opt/hadoop
Grant ownership to the hadoop user:
$ sudo chown -R hadoop:hadoop /opt/hadoop/
$ sudo chmod -R 775 /opt/hadoop/
Edit the configuration files
All of the following files are under /opt/hadoop/etc/hadoop/. The changes can be made on one machine and the modified tree then copied to the other two.
Edit hadoop-env.sh and set JAVA_HOME explicitly:
# The java implementation to use.
# export JAVA_HOME=${JAVA_HOME}
export JAVA_HOME=/opt/jdk1.8.0_101
Edit core-site.xml:
<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/opt/hadoop/tmp</value>
        <description>Abase for other temporary directories.</description>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop-0:9000</value>
    </property>
</configuration>
Edit hdfs-site.xml:
<configuration>

        <property>
                <name>dfs.namenode.secondary.http-address</name>
                <value>hadoop-0:50090</value>
        </property>
        <property>
                <name>dfs.replication</name>
                <value>2</value>
        </property>
        <property>
                <name>dfs.namenode.name.dir</name>
                <value>file:/opt/hadoop/tmp/dfs/name</value>
        </property>
        <property>
                <name>dfs.datanode.data.dir</name>
                <value>file:/opt/hadoop/tmp/dfs/data</value>
        </property>

</configuration>
Edit yarn-site.xml:
<configuration>

<!-- Site specific YARN configuration properties -->

        <property>
                <name>yarn.resourcemanager.hostname</name>
                <value>hadoop-0</value>
        </property>
        <property>
                <name>yarn.nodemanager.aux-services</name>
                <value>mapreduce_shuffle</value>
        </property>

</configuration>
Edit mapred-site.xml:
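Note: Hadoop 2.7.3 ships only a template for this file, so create it first:

$ cp /opt/hadoop/etc/hadoop/mapred-site.xml.template /opt/hadoop/etc/hadoop/mapred-site.xml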

<configuration>

        <property>
                <name>mapreduce.framework.name</name>
                <value>yarn</value>
        </property>
        <property>
                <name>mapreduce.jobhistory.address</name>
                <value>hadoop-0:10020</value>
        </property>
        <property>
                <name>mapreduce.jobhistory.webapp.address</name>
                <value>hadoop-0:19888</value>
        </property>

</configuration>
Edit the slaves file (one worker hostname per line; hadoop-0 is listed as well, so the master also runs a DataNode and NodeManager):
hadoop-0
hadoop-1
hadoop-2
Copy to the other two machines
# copy (-r is required because /opt/hadoop is a directory)
$ scp -r /opt/hadoop root@hadoop-1:/opt/
$ scp -r /opt/hadoop root@hadoop-2:/opt/
# grant ownership (run on hadoop-1 and hadoop-2)
$ sudo chown -R hadoop:hadoop /opt/hadoop/
$ sudo chmod -R 775 /opt/hadoop/
Configure the environment variables on all three machines (append to /etc/profile as with the JDK, then run source /etc/profile):
export HADOOP_HOME=/opt/hadoop
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
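To confirm the PATH change took effect, run:

$ hadoop version

The first line of output should report Hadoop 2.7.3.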
Format the NameNode (on hadoop-0 only)
hdfs namenode -format
Start the cluster (on hadoop-0 only)
start-all.sh
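Note: start-all.sh is deprecated in Hadoop 2.x; the equivalent explicit form, which the script itself recommends, is:

start-dfs.sh
start-yarn.sh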
Check the processes

$ jps

hadoop-0

115363 Jps
91430 DataNode
92216 NodeManager
91898 ResourceManager
91660 SecondaryNameNode
91263 NameNode

hadoop-1

18760 DataNode
18904 NodeManager
32443 Jps

hadoop-2

16913 Jps
3090 DataNode
3234 NodeManager

Congratulations, the installation is complete.
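As a quick smoke test, submit the pi-estimator example that ships with Hadoop 2.7.3 (the jar path assumes the /opt/hadoop install directory used above):

$ hadoop jar /opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar pi 2 5

If the job finishes and prints an estimate of Pi, both HDFS and YARN are working end to end.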

If the web pages on ports 50070 and 8088 do not load, check the firewall:

systemctl stop firewalld.service
systemctl disable firewalld.service
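Disabling firewalld is the blunt fix. Alternatively, open only the required ports (a sketch; add any other ports your setup uses):

firewall-cmd --permanent --add-port=50070/tcp
firewall-cmd --permanent --add-port=8088/tcp
firewall-cmd --permanent --add-port=9000/tcp
firewall-cmd --reload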

Web UIs:
http://hadoop-0:50070/
http://hadoop-0:8088/
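You can also confirm from the command line that all three DataNodes have registered:

$ hdfs dfsadmin -report

The report should show three live datanodes.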

Miscellaneous:
Fix for the "Host key verification failed" error:
http://www.51testing.com/html/38/225738-234384.html
