Hadoop 2.6.0 Fully Distributed Cluster Installation and Configuration

This tutorial was written against Hadoop 2.6.0 and should apply to any 2.x release; feel free to leave a comment if you run into problems. If you follow along, keep your software versions as close to the ones below as possible to avoid unnecessary trouble:

    CentOS 6.5, 64-bit
    Virtual machine: VirtualBox 4.3.3
    JDK: java-1.7.0-openjdk-devel.x86_64 (no manual download needed; install it with the command given below)
    Hadoop 2.6.0
    Both virtual machines use bridged networking

Preparation

  • Install two CentOS machines; mine are 6.5, 64-bit.
  • The two machines are enilu1 (master) and enilu2 (slave).

Base environment

Change the hostnames:

  • enilu1
    [root@enilu1 ~]# vi /etc/sysconfig/network
    NETWORKING=yes
    HOSTNAME=enilu1
  • enilu2

    [root@enilu2 ~]# vi /etc/sysconfig/network
    NETWORKING=yes
    HOSTNAME=enilu2
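
  • The edit to /etc/sysconfig/network only takes effect after a reboot; to apply the new hostname in the current session as well, you can also run the hostname command (shown for enilu1; do the same on enilu2):

    [root@enilu1 ~]# hostname enilu1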
    

Configure /etc/hosts

  • enilu1
    [root@enilu1 ~]# vi /etc/hosts
    192.168.1.137 enilu1
    192.168.1.138 enilu2
  • enilu2

    [root@enilu2 ~]# vi /etc/hosts
    192.168.1.137 enilu1
    192.168.1.138 enilu2
    
  • Test: on both machines, run ping enilu1 and ping enilu2; if both succeed, you are done.

JDK

  • Install the JDK; ideally, all machines should use the same version.
  • Install it with the command below. CentOS only ships a JRE by default, so the JDK must be installed explicitly:

    yum install -y java-1.7.0-openjdk.x86_64 java-1.7.0-openjdk-devel.x86_64
    
  • After installation, check that it succeeded:

    [root@enilu1 hadoop]# java -version
    java version "1.7.0_91"
    OpenJDK Runtime Environment (rhel-2.6.2.2.el6_7-x86_64 u91-b00)
    OpenJDK 64-Bit Server VM (build 24.91-b01, mixed mode)
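
  • The OpenJDK packages install under /usr/lib/jvm and should maintain a /usr/lib/jvm/java symlink, which is the path that hadoop-env.sh and yarn-env.sh point JAVA_HOME at later in this guide. A quick sanity check (the exact target varies with the minor version):

    ls -l /usr/lib/jvm/java
    readlink -f $(which java)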
    

Network configuration

  • Configure a static IP on each machine; the following entries need to be set:

    ONBOOT=yes
    BOOTPROTO=static
    IPADDR=192.168.1.137
    NETMASK=255.255.255.0
    GATEWAY=192.168.1.1     
    
  • enilu1

    [root@enilu1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0 
    DEVICE=eth0
    HWADDR=08:00:27:A7:E1:E2
    TYPE=Ethernet
    UUID=f67a8c22-4525-4c09-a28b-5b9ec9e088a4
    ONBOOT=yes
    NM_CONTROLLED=yes
    BOOTPROTO=static
    IPADDR=192.168.1.137
    NETMASK=255.255.255.0
    GATEWAY=192.168.1.1
    
  • enilu2

    [root@enilu2 ~]#  cat /etc/sysconfig/network-scripts/ifcfg-eth0 
    DEVICE=eth1
    HWADDR=08:00:27:c3:f5:01
    TYPE=Ethernet
    UUID=f67a8c22-4525-4c09-a28b-5b9ec9e088a4
    ONBOOT=yes
    NM_CONTROLLED=yes
    BOOTPROTO=static
    IPADDR=192.168.1.138
    NETMASK=255.255.255.0
    GATEWAY=192.168.1.1
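
  • After editing the ifcfg files, restart the network service on both machines so the static addresses take effect:

    [root@enilu1 ~]# service network restart
    [root@enilu2 ~]# service network restart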
    

Passwordless SSH login

  • Hadoop's start scripts are run on the master, so the master must be able to log in to the slave over SSH without a password (here we set it up in both directions, which does no harm).
  • Run the following on both machines:

    ssh-keygen -t rsa
    
  • By default, two files are generated in the ~/.ssh directory:

    id_rsa      : private key
    id_rsa.pub  : public key
    
  • Run the following command on both machines:

     cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
    
  • On each machine, copy its own ~/.ssh/id_rsa.pub to the other machine's /home/ directory:

     [root@enilu1 .ssh]# scp ~/.ssh/id_rsa.pub root@enilu2:/home/
     [root@enilu2 .ssh]# scp ~/.ssh/id_rsa.pub root@enilu1:/home/
    
  • On each machine, append the id_rsa.pub copied over from the other machine to its authorized_keys file:

    [root@enilu1 .ssh]# cat /home/id_rsa.pub >>~/.ssh/authorized_keys 
    [root@enilu2 .ssh]# cat /home/id_rsa.pub >>~/.ssh/authorized_keys
    
  • Also make sure enilu1 can log in to itself without a password; on enilu1 run (this is the same append as in the earlier step, so it is harmless if already done):

    cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
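
  • If SSH still prompts for a password afterwards, the usual culprit is file permissions: sshd ignores keys when ~/.ssh or authorized_keys is group- or world-writable. It is safe to tighten them on both machines:

    chmod 700 ~/.ssh
    chmod 600 ~/.ssh/authorized_keys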
    
  • Test whether the configuration works:

    • Test from enilu1
      ## test login to enilu2
      [root@enilu1 .ssh]# ssh enilu2
      Last login: Sat Dec 26 22:44:27 2015 from 192.168.1.118
      [root@enilu2 ~]# exit
      logout
      Connection to enilu2 closed.
      ## test login to the local machine
      [root@enilu1 hadoop]# ssh enilu1
      Last login: Sun Dec 27 05:16:06 2015 from enilu1
      [root@enilu1 ~]#

    • Test login from enilu2 to enilu1 (strictly speaking, passwordless login from enilu2 to enilu1 should not be required):

      [root@enilu2 .ssh]# ssh enilu1
      Last login: Sat Dec 26 22:44:35 2015 from enilu2
      [root@enilu1 ~]# exit
      logout
      Connection to enilu1 closed.
      [root@enilu2 .ssh]# 
      

Hadoop installation and configuration

Download and install
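
  • Download the Hadoop 2.6.0 tarball and unpack it to /opt on enilu1 (the URL below is just one option; any Apache mirror or the Apache archive works):

    cd /opt
    wget https://archive.apache.org/dist/hadoop/common/hadoop-2.6.0/hadoop-2.6.0.tar.gz
    tar -zxf hadoop-2.6.0.tar.gz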

Configuration

  • Everything is configured on enilu1; once done, simply copy the /opt/hadoop-2.6.0 directory to the same location on enilu2.
  • The configuration files live in /opt/hadoop-2.6.0/etc/hadoop; the following seven files need to be edited:

    hadoop-env.sh
    yarn-env.sh
    slaves
    core-site.xml
    hdfs-site.xml
    mapred-site.xml
    yarn-site.xml
    
  • hadoop-env.sh: set JAVA_HOME

    # The java implementation to use.
    export JAVA_HOME=/usr/lib/jvm/java
    
  • yarn-env.sh: set JAVA_HOME

    # some Java parameters      
    export JAVA_HOME=/usr/lib/jvm/java
    
  • slaves file: change it to the slave hostname(s), one per line; here it contains only:

    enilu2

  • core-site.xml, configured as follows:

    <configuration>
     <property>
      <name>fs.defaultFS</name>
      <value>hdfs://enilu1:9000</value>
     </property>
    
     <property>
      <name>io.file.buffer.size</name>
      <value>131072</value>
     </property>
     <property>
      <name>hadoop.tmp.dir</name>
      <value>file:/opt/hadoop-2.6.0/tmp</value>
      <description>A base for other temporary directories.</description>
     </property>
     <property>
      <name>hadoop.proxyuser.spark.hosts</name>
      <value>*</value>
     </property>
    <property>
      <name>hadoop.proxyuser.spark.groups</name>
      <value>*</value>
     </property>
    
    </configuration>
    
  • hdfs-site.xml, configured as follows (note that with only one DataNode, a dfs.replication of 3 cannot actually be met; 1 would match this two-node setup):

    <configuration>
    <property>
      <name>dfs.namenode.secondary.http-address</name>
      <value>enilu1:9001</value>
     </property>
    
      <property>
       <name>dfs.namenode.name.dir</name>
       <value>file:/opt/hadoop-2.6.0/dfs/name</value>
     </property>
    
     <property>
      <name>dfs.datanode.data.dir</name>
      <value>file:/opt/hadoop-2.6.0/dfs/data</value>
      </property>
    
     <property>
      <name>dfs.replication</name>
      <value>3</value>
     </property>
    
     <property>
      <name>dfs.webhdfs.enabled</name>
      <value>true</value>
     </property>
    </configuration>
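
  • The tmp, dfs/name, and dfs/data directories referenced in core-site.xml and hdfs-site.xml do not exist in a fresh unpack; Hadoop will usually create them itself on format/startup, but creating them up front avoids surprises:

    mkdir -p /opt/hadoop-2.6.0/tmp /opt/hadoop-2.6.0/dfs/name /opt/hadoop-2.6.0/dfs/data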
    

  • mapred-site.xml, configured as follows:

    <configuration>
     <property>
       <name>mapreduce.framework.name</name>
       <value>yarn</value>
     </property>
     <property>
      <name>mapreduce.jobhistory.address</name>
      <value>enilu1:10020</value>
     </property>
     <property>
      <name>mapreduce.jobhistory.webapp.address</name>
      <value>enilu1:19888</value>
     </property>
    </configuration>
  • yarn-site.xml, configured as follows:

    <configuration>
    
    <!-- Site specific YARN configuration properties -->
     <property>
       <name>yarn.nodemanager.aux-services</name>
       <value>mapreduce_shuffle</value>
      </property>
      <property>
       <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
       <value>org.apache.hadoop.mapred.ShuffleHandler</value>
      </property>
      <property>
       <name>yarn.resourcemanager.address</name>
       <value>enilu1:8032</value>
      </property>
      <property>
       <name>yarn.resourcemanager.scheduler.address</name>
       <value>enilu1:8030</value>
      </property>
      <property>
       <name>yarn.resourcemanager.resource-tracker.address</name>
       <value>enilu1:8035</value>
      </property>
      <property>
       <name>yarn.resourcemanager.admin.address</name>
       <value>enilu1:8033</value>
      </property>
      <property>
       <name>yarn.resourcemanager.webapp.address</name>
       <value>enilu1:8088</value>
      </property>
    </configuration>
    
  • Copy the configured Hadoop to the other machine, enilu2, under the same path /opt/hadoop-2.6.0; one simple way is scp (rsync works just as well):
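
    scp -r /opt/hadoop-2.6.0 root@enilu2:/opt/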

Start Hadoop

  • Format the file system; on enilu1, go to the /opt/hadoop-2.6.0/ directory and run:

    ./bin/hdfs namenode -format
    
  • Start HDFS:

    ./sbin/start-dfs.sh 
    
  • Start YARN:

    ./sbin/start-yarn.sh 
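
  • Note that start-dfs.sh and start-yarn.sh do not start the MapReduce JobHistory server configured in mapred-site.xml (ports 10020/19888); if you need it, start it separately on enilu1:

    ./sbin/mr-jobhistory-daemon.sh start historyserver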
    
  • Once everything is up, run the jps command on enilu1 and enilu2 to check the Hadoop processes:

    [root@enilu1 hadoop]# jps
    5527 ResourceManager
    30007 Jps
    5377 SecondaryNameNode
    5201 NameNode
    [root@enilu1 hadoop]# 
    
    
    [root@enilu2 hadoop]# jps
    3826 NodeManager
    27220 Jps
    3728 DataNode
    [root@enilu2 hadoop]# 
    

Testing

Command-line test:

  • Create a directory: ./bin/hdfs dfs -mkdir hellohadoop

    [root@enilu2 hadoop-2.6.0]# ./bin/hdfs dfs -mkdir hellohadoop
    15/12/27 04:53:39 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    
  • List files: ./bin/hdfs dfs -ls

    [root@enilu2 hadoop-2.6.0]# ./bin/hdfs dfs -ls
    15/12/27 04:53:48 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    Found 1 items
    drwxr-xr-x   - root supergroup          0 2015-12-27 04:53 hellohadoop
    [root@enilu2 hadoop-2.6.0]# 
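
  • Upload a file into the new directory and read it back (any small local file will do; /etc/profile is only an example):

    ./bin/hdfs dfs -put /etc/profile hellohadoop/
    ./bin/hdfs dfs -cat hellohadoop/profile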
    

Web UI test

  • If the command-line tests above ran without problems, the Hadoop cluster installation and configuration is complete.
  • Hadoop also provides a web interface for viewing and managing the cluster; open http://192.168.1.137:50070/, as shown below:

    (screenshot: NameNode web UI on port 50070)
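
  • The YARN ResourceManager also has a web UI, on the port set in yarn-site.xml: http://192.168.1.137:8088/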

Using Hadoop for file operations from a Java program

  • If you want to test operating on Hadoop from Java, you can download this Java project here (download link).
  • After downloading, import it into Eclipse and export the source code as a jar, or download the prebuilt jar directly here (download link).
  • Copy the exported jar (mine is named hadoop-test-1.2.1.jar) to the /opt/hadoop-2.6.0/ directory on enilu1.
  • Then go to /opt/hadoop-2.6.0 and run:

    • Test the copy operation: ./bin/hadoop jar hadoop-test-1.2.1.jar copy. Concretely, it uploads the local /etc/profile file to /profile in HDFS (see the project code for details).

      [root@enilu1 hadoop-2.6.0]# ./bin/hadoop jar hadoop-test-1.2.1.jar copy
      accept command:copy
      15/12/27 04:23:13 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
      [root@enilu1 hadoop-2.6.0]# 
      
    • Test the list command: ./bin/hadoop jar hadoop-test-1.2.1.jar list

      [root@enilu1 hadoop-2.6.0]# ./bin/hadoop jar hadoop-test-1.2.1.jar list
      accept command:list
      15/12/27 05:09:03 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
      group:supergroup owner:rootpath:hdfs://enilu1:9000/NOTICE.txt
      group:supergroup owner:rootpath:hdfs://enilu1:9000/user
      [root@enilu1 hadoop-2.6.0]# 
      

Stop Hadoop

    ## stop yarn
    [root@enilu1 hadoop-2.6.0]# ./sbin/stop-yarn.sh 
    stopping yarn daemons
    stopping resourcemanager
    enilu2: stopping nodemanager
    enilu2: nodemanager did not stop gracefully after 5 seconds: killing with kill -9
    no proxyserver to stop

    ## stop hdfs
    [root@enilu1 hadoop-2.6.0]# ./sbin/stop-dfs.sh 
    15/12/27 05:32:30 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    Stopping namenodes on [enilu1]
    enilu1: stopping namenode
    enilu2: stopping datanode
    Stopping secondary namenodes [enilu1]
    enilu1: stopping secondarynamenode
    15/12/27 05:32:53 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    ## jps shows that no Hadoop processes remain
    [root@enilu1 hadoop-2.6.0]# jps
    30743 Jps
    [root@enilu1 hadoop-2.6.0]# 

You are welcome to read this article on my personal site.
Or follow my WeChat subscription account for a short note every day, improving a little every day:

WeChat official account: enilu123
