Fully distributed installation and verification of Hadoop 2.7.2 on CentOS 7 VMs (x86): a 3-node Hadoop cluster (replication factor 2)

1. VM installation (omitted)

CentOS 7 operating system; the IP addresses are 192.168.1.150 (namenode), 192.168.1.151 (datanode1) and 192.168.1.152 (datanode2)

2. Software downloads

hadoop-2.7.2.tar.gz

jdk-8u77-linux-x64.tar.gz


useradd hadoop  # add a hadoop user; defaults are used for its group, home directory and shell
    passwd hadoop   # set its password
    While learning it is convenient to give the hadoop user sudo rights; a simple way:
        1. Run visudo
        2. After the line "root    ALL=(ALL)       ALL" add
            hadoop    ALL=(ALL)      ALL
    On namenode, datanode1 and datanode2, switch to the hadoop user with:
    su - hadoop
    (Note: the walkthrough below is actually carried out as root, as the prompts show.)


Step 1: Java and Hadoop environment variables (all three nodes)

First host

[root@localhost ~]# mkdir java
[root@localhost ~]# mkdir hadoop

[root@localhost ~]# cp jdk-8u77-linux-x64.tar.gz java/
[root@localhost ~]# cp hadoop-2.7.2.tar.gz hadoop

[root@localhost ~]# cd java/
[root@localhost java]# tar -zxvf jdk-8u77-linux-x64.tar.gz
[root@localhost java]# vi /etc/profile

#JAVA
export JAVA_HOME=/root/java/jdk1.8.0_77
export JRE_HOME=$JAVA_HOME/jre
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

wq!

[root@localhost java]# source /etc/profile

[root@localhost java]# java -version
java version "1.8.0_77"
Java(TM) SE Runtime Environment (build 1.8.0_77-b03)
Java HotSpot(TM) 64-Bit Server VM (build 25.77-b03, mixed mode)

[root@localhost ~]# cd hadoop/
[root@localhost hadoop]# tar -zxvf hadoop-2.7.2.tar.gz

[root@localhost hadoop]# vi /etc/profile

#HADOOP
export HADOOP_HOME=/root/hadoop/hadoop-2.7.2
export PATH=.:$HADOOP_HOME/bin:$JAVA_HOME/bin:$PATH

wq!

[root@localhost java]# source /etc/profile

Second host

[root@localhost ~]# mkdir java
[root@localhost ~]# mkdir hadoop

[root@localhost ~]# cp jdk-8u77-linux-x64.tar.gz java/
[root@localhost ~]# cp hadoop-2.7.2.tar.gz hadoop

[root@localhost ~]# cd java/
[root@localhost java]# tar -zxvf jdk-8u77-linux-x64.tar.gz
[root@localhost java]# vi /etc/profile

#JAVA
export JAVA_HOME=/root/java/jdk1.8.0_77
export JRE_HOME=$JAVA_HOME/jre
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

wq!

[root@localhost java]# source /etc/profile

[root@localhost java]# java -version
java version "1.8.0_77"
Java(TM) SE Runtime Environment (build 1.8.0_77-b03)
Java HotSpot(TM) 64-Bit Server VM (build 25.77-b03, mixed mode)

[root@localhost ~]# cd hadoop/
[root@localhost hadoop]# tar -zxvf hadoop-2.7.2.tar.gz

[root@localhost hadoop]# vi /etc/profile

#HADOOP
export HADOOP_HOME=/root/hadoop/hadoop-2.7.2
export PATH=.:$HADOOP_HOME/bin:$JAVA_HOME/bin:$PATH

wq!

[root@localhost java]# source /etc/profile


Third host

[root@localhost ~]# mkdir java
[root@localhost ~]# mkdir hadoop

[root@localhost ~]# cp jdk-8u77-linux-x64.tar.gz java/
[root@localhost ~]# cp hadoop-2.7.2.tar.gz hadoop

[root@localhost ~]# cd java/
[root@localhost java]# tar -zxvf jdk-8u77-linux-x64.tar.gz
[root@localhost java]# vi /etc/profile

#JAVA
export JAVA_HOME=/root/java/jdk1.8.0_77
export JRE_HOME=$JAVA_HOME/jre
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

wq!

[root@localhost java]# source /etc/profile

[root@localhost java]# java -version
java version "1.8.0_77"
Java(TM) SE Runtime Environment (build 1.8.0_77-b03)
Java HotSpot(TM) 64-Bit Server VM (build 25.77-b03, mixed mode)

[root@localhost ~]# cd hadoop/
[root@localhost hadoop]# tar -zxvf hadoop-2.7.2.tar.gz

[root@localhost hadoop]# vi /etc/profile

#HADOOP
export HADOOP_HOME=/root/hadoop/hadoop-2.7.2
export PATH=.:$HADOOP_HOME/bin:$JAVA_HOME/bin:$PATH

wq!

[root@localhost java]# source /etc/profile
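
The three hosts above go through exactly the same steps, so the setup can also be scripted. A minimal sketch, run as root on each node and assuming both tarballs are already in /root (it additionally puts $HADOOP_HOME/sbin on the PATH, which the profile edits above do not):

# run as root on every node; assumes the two tarballs are in /root
mkdir -p /root/java /root/hadoop
tar -zxf /root/jdk-8u77-linux-x64.tar.gz -C /root/java
tar -zxf /root/hadoop-2.7.2.tar.gz -C /root/hadoop

cat >> /etc/profile <<'EOF'
#JAVA
export JAVA_HOME=/root/java/jdk1.8.0_77
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
#HADOOP
export HADOOP_HOME=/root/hadoop/hadoop-2.7.2
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
EOF

source /etc/profile
java -version      # should report 1.8.0_77
hadoop version     # should report 2.7.2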

Step 2: Change the hostname (all three nodes)

[root@localhost ~]# vi /etc/hostname
namenode

wq!

Log in again

[root@namenode ~]#

[root@localhost ~]# vi /etc/hostname
datanode1

wq!

Log in again

[root@datanode1 ~]#

[root@localhost ~]# vi /etc/hostname
datanode2

wq!

Log in again

[root@datanode2 ~]#
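
On CentOS 7 the hostname can also be changed without editing /etc/hostname and logging in again; hostnamectl applies it immediately:

hostnamectl set-hostname namenode     # on 192.168.1.150
hostnamectl set-hostname datanode1    # on 192.168.1.151
hostnamectl set-hostname datanode2    # on 192.168.1.152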

Step 3: Edit the hosts file (all three nodes)

[root@namenode ~]# vi /etc/hosts

192.168.1.150 namenode
192.168.1.151 datanode1
192.168.1.152 datanode2

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1       localhost localhost.localdomain localhost6 localhost6.localdomain6

wq!

[root@datanode1 ~]# vi /etc/hosts

192.168.1.150 namenode
192.168.1.151 datanode1
192.168.1.152 datanode2

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1       localhost localhost.localdomain localhost6 localhost6.localdomain6

wq!

[root@datanode2 ~]# vi /etc/hosts

192.168.1.150 namenode
192.168.1.151 datanode1
192.168.1.152 datanode2

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1       localhost localhost.localdomain localhost6 localhost6.localdomain6

wq!
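
A quick sanity check that every node resolves all three names (run on each host):

for h in namenode datanode1 datanode2; do
    ping -c 1 $h
done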

Step 4: Passwordless SSH login (all three nodes)

[root@namenode ~]# ssh-keygen -t rsa   # just press Enter at every prompt

[root@datanode1 ~]# ssh-keygen -t rsa

[root@datanode2 ~]# ssh-keygen -t rsa

[root@namenode ~]# cd /root/.ssh/

[root@namenode .ssh]# cat id_rsa.pub >> authorized_keys

[root@namenode .ssh]# ssh root@192.168.1.151 cat ~/.ssh/id_rsa.pub >> authorized_keys
The authenticity of host '192.168.1.151 (192.168.1.151)' can't be established.
ECDSA key fingerprint is 0a:16:4e:24:58:e0:37:e8:a5:01:91:01:a6:2f:f3:cb.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.1.151' (ECDSA) to the list of known hosts.
root@192.168.1.151's password:
[root@namenode .ssh]# ssh root@192.168.1.152 cat ~/.ssh/id_rsa.pub >> authorized_keys
The authenticity of host '192.168.1.152 (192.168.1.152)' can't be established.
ECDSA key fingerprint is 0a:16:4e:24:58:e0:37:e8:a5:01:91:01:a6:2f:f3:cb.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.1.152' (ECDSA) to the list of known hosts.
root@192.168.1.152's password:
[root@namenode .ssh]# ll
total 16
-rw-r--r--. 1 root root 1187 Dec 21 01:37 authorized_keys
-rw-------. 1 root root 1679 Dec 21 01:32 id_rsa
-rw-r--r--. 1 root root  395 Dec 21 01:32 id_rsa.pub
-rw-r--r--. 1 root root  350 Dec 21 01:37 known_hosts

[root@namenode .ssh]# scp authorized_keys  root@192.168.1.151:/root/.ssh/

[root@namenode .ssh]# scp known_hosts   root@192.168.1.151:/root/.ssh/

[root@namenode .ssh]# scp authorized_keys  root@192.168.1.152:/root/.ssh/

[root@namenode .ssh]# scp known_hosts   root@192.168.1.152:/root/.ssh/

[root@namenode .ssh]# ssh 192.168.1.151
Last login: Wed Dec 21 01:53:51 2016 from namenode
[root@datanode1 ~]# exit
logout
Connection to 192.168.1.151 closed.
[root@namenode .ssh]# ssh 192.168.1.152
Last login: Wed Dec 21 01:53:56 2016 from namenode
[root@datanode2 ~]# exit
logout
Connection to 192.168.1.152 closed.
[root@namenode .ssh]#
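
An equivalent and somewhat less error-prone way to distribute the key is ssh-copy-id, which appends the local public key to the remote authorized_keys and fixes its permissions. Run it on namenode for all three hosts (including itself), then verify the passwordless login by hostname:

ssh-copy-id root@namenode
ssh-copy-id root@datanode1
ssh-copy-id root@datanode2
ssh datanode1 hostname    # should print "datanode1" without asking for a password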

Step 5: Configure Hadoop

a. Under /root/hadoop, create the directories that will hold data: tmp, hdfs, hdfs/data, hdfs/name

[root@namenode hadoop]# mkdir tmp
[root@namenode hadoop]# mkdir hdfs
[root@namenode hadoop]# cd hdfs/
[root@namenode hdfs]# mkdir data
[root@namenode hdfs]# mkdir name
[root@namenode hdfs]# cd
[root@namenode ~]# cd hadoop/
[root@namenode hadoop]# ll
total 207084
drwxr-xr-x. 9 10011 10011      4096 Jan 26  2016 hadoop-2.7.2
-rw-r--r--. 1 root  root  212046774 Dec 21 00:57 hadoop-2.7.2.tar.gz
drwxr-xr-x. 4 root  root         28 Dec 21 02:05 hdfs
drwxr-xr-x. 2 root  root          6 Dec 21 02:05 tmp
[root@namenode hadoop]#

b. Configure core-site.xml

[root@namenode hadoop]# cd /root/hadoop/hadoop-2.7.2/etc/hadoop/

[root@namenode hadoop]# vi core-site.xml

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://192.168.1.150:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/root/hadoop/tmp</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131702</value>
    </property>
</configuration>

wq!
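
Since $HADOOP_HOME/bin is already on the PATH, the effective value can be double-checked with hdfs getconf (a read-only query, safe to run before the cluster is started):

hdfs getconf -confKey fs.defaultFS      # should print hdfs://192.168.1.150:9000
hdfs getconf -confKey hadoop.tmp.dir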

c. Configure hdfs-site.xml (note that dfs.namenode.name.dir and dfs.datanode.data.dir below point at /root/hadoop/dfs/name and /root/hadoop/dfs/data rather than the hdfs/ directories created in step a; Hadoop creates the dfs directories itself, which is why the format log in step 8 reports /root/hadoop/dfs/name)

[root@namenode hadoop]# cd /root/hadoop/hadoop-2.7.2/etc/hadoop/

[root@namenode hadoop]# vi hdfs-site.xml


<configuration>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/root/hadoop/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/root/hadoop/dfs/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>192.168.1.150:9001</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
</configuration>

wq!

d. Configure mapred-site.xml

[root@namenode hadoop]# cd /root/hadoop/hadoop-2.7.2/etc/hadoop/

[root@namenode hadoop]# mv mapred-site.xml.template mapred-site.xml
[root@namenode hadoop]# vi mapred-site.xml

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>192.168.1.150:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>192.168.1.150:19888</value>
    </property>
</configuration>

wq!
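
Note that the jobhistory addresses above only matter once the JobHistory server is running, and start-all.sh does not start it. In Hadoop 2.7 it can be launched separately after the cluster is up (step 8):

sbin/mr-jobhistory-daemon.sh start historyserver
# web UI is then served at http://192.168.1.150:19888 (mapreduce.jobhistory.webapp.address)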

e. Configure yarn-site.xml

[root@namenode hadoop]# cd /root/hadoop/hadoop-2.7.2/etc/hadoop/

[root@namenode hadoop]# vi yarn-site.xml
<configuration>

<!-- Site specific YARN configuration properties -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>192.168.1.150:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>192.168.1.150:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>192.168.1.150:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>192.168.1.150:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>192.168.1.150:8088</value>
    </property>
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>768</value>
    </property>

</configuration>

wq!
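
One thing to watch: yarn.nodemanager.resource.memory-mb is set to 768, while the default yarn.scheduler.minimum-allocation-mb is 1024, so a container request can exceed what a NodeManager offers and jobs may sit in the ACCEPTED state. If that happens, one option (a suggested value, not part of the original configuration) is to lower the minimum allocation in yarn-site.xml:

    <property>
        <!-- suggested addition: keep the minimum container size below the 768 MB NodeManager limit -->
        <name>yarn.scheduler.minimum-allocation-mb</name>
        <value>256</value>
    </property>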

f. Set JAVA_HOME in hadoop-env.sh and yarn-env.sh

[root@namenode hadoop]# vi hadoop-env.sh
export JAVA_HOME=/root/java/jdk1.8.0_77

wq!

[root@namenode hadoop]# vi yarn-env.sh
export JAVA_HOME=/root/java/jdk1.8.0_77

wq!

Step 6: Edit the slaves file under /root/hadoop/hadoop-2.7.2/etc/hadoop: delete the default localhost entry and add the two slave nodes

[root@namenode hadoop]# vi slaves

192.168.1.151
192.168.1.152
wq!

Step 7: Copy the configured Hadoop tree to the same location on the other nodes with scp.

[root@namenode ~]# scp -r hadoop root@192.168.1.151:/root/

[root@namenode ~]# scp -r hadoop root@192.168.1.152:/root/
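
scp -r copies the whole /root/hadoop tree, including the roughly 200 MB tarball. For later configuration changes it is enough to re-sync only the etc/hadoop directory, roughly:

scp -r /root/hadoop/hadoop-2.7.2/etc/hadoop root@192.168.1.151:/root/hadoop/hadoop-2.7.2/etc/
scp -r /root/hadoop/hadoop-2.7.2/etc/hadoop root@192.168.1.152:/root/hadoop/hadoop-2.7.2/etc/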

Step 8: Start Hadoop

a. Initialize: format the NameNode with bin/hdfs namenode -format

[root@namenode hadoop-2.7.2]# bin/hdfs namenode -format

16/12/21 02:36:31 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1917967227-192.168.1.150-1482258991239
16/12/21 02:36:31 INFO common.Storage: Storage directory /root/hadoop/dfs/name has been successfully formatted.
16/12/21 02:36:32 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0

b. Start everything with sbin/start-all.sh, or start HDFS and YARN separately with sbin/start-dfs.sh and sbin/start-yarn.sh
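
A minimal start sequence on the NameNode, with the paths used above:

cd /root/hadoop/hadoop-2.7.2
sbin/start-dfs.sh     # NameNode + SecondaryNameNode here, DataNodes on the slaves
sbin/start-yarn.sh    # ResourceManager here, NodeManagers on the slaves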


c. To stop everything, run sbin/stop-all.sh


d. Run jps on each node to see the running daemons

After startup:

[root@namenode sbin]# jps
12487 SecondaryNameNode
13223 Jps
12234 NameNode
12750 ResourceManager
[root@namenode sbin]#


[root@datanode1 ~]# jps
12544 Jps
12018 DataNode
12199 NodeManager


[root@datanode2 ~]# jps
12566 Jps
12216 NodeManager
12041 DataNode
[root@datanode2 ~]#
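
Beyond jps, the cluster can be verified from the NameNode with dfsadmin and through the web UIs (50070 is the Hadoop 2.x NameNode web UI default; 8088 was set in yarn-site.xml above):

hdfs dfsadmin -report      # should list two live datanodes
hdfs dfs -mkdir /test      # simple smoke test
hdfs dfs -ls /
# in a browser:
#   http://192.168.1.150:50070   HDFS NameNode web UI
#   http://192.168.1.150:8088    YARN ResourceManager web UI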



