Setting Up a Fully Distributed HBase + Hadoop Environment

1. Environment Preparation

Linux: CentOS 6.4 (Final). Download: http://www.centos.org/

JDK: 1.7.0_45. Download: http://www.oracle.com/technetwork/java/javase/downloads/index.html

Hadoop: hadoop-2.6.0. Download: http://apache.claz.org/hadoop/common/stable2/
HBase: hbase-0.98.8-hadoop2. Download: http://mirrors.gigenet.com/apache/hbase/stable/

2. Configuration Steps

2.1 Install the JDK; don't forget to edit /etc/profile (e.g. vim /etc/profile) to set the environment variables. A sketch follows.
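
For reference, a minimal sketch of the /etc/profile additions, assuming the JDK is installed under /usr/lib/jdk/jdk1.7.0_45 (the same path used in hbase-env.sh below):

<code=bash>

# Append to /etc/profile, then run: source /etc/profile
export JAVA_HOME=/usr/lib/jdk/jdk1.7.0_45
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

</code>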

2.2 Passwordless SSH Login

Note: JDK environment configuration and passwordless SSH login rarely cause problems and are well documented online, so they are not covered in detail here; the sketch below shows the usual approach.
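
A minimal sketch, assuming the same user account exists on all four nodes:

<code=bash>

# On lingcloud30: generate a key pair (no passphrase) and push the public key
# to every node, including lingcloud30 itself.
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
for host in lingcloud30 lingcloud29 lingcloud31 lingcloud32; do
    ssh-copy-id $host
done

# Verify: this should log in without a password prompt.
ssh lingcloud29

</code>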

2.3 Hadoop Environment Configuration

Servers in this distributed setup:

ip30: lingcloud30 (this server has relatively low specs, so it is used as the NameNode and the SecondaryNameNode; it is also the HMaster in the HBase configuration below)

ip29: lingcloud29 (this server and the two below act as DataNodes, and as HRegionServers in the HBase configuration below)

ip31: lingcloud31
ip32: lingcloud32
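
All of the configuration below refers to these machines by hostname, so every node needs consistent name resolution. A sketch of the /etc/hosts entries; the IP prefix is a placeholder, since only the last octets (29-32) are given above:

<code=bash>

# Append to /etc/hosts on every node (substitute your actual subnet)
192.168.1.29  lingcloud29
192.168.1.30  lingcloud30
192.168.1.31  lingcloud31
192.168.1.32  lingcloud32

</code>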

2.3.1 {hadoop}/etc/hadoop/core-site.xml Configuration

Below is my configuration on lingcloud30; some properties carry comments to aid understanding.

<code=xml>

<configuration>
        <!-- this value must match the "hbase.rootdir" property in {hbase}/conf/hbase-site.xml -->
        <property>
                <name>fs.defaultFS</name>
                <value>hdfs://lingcloud30:9000</value>
        </property>
        <!-- if the HMaster aborts a few seconds after starting, remove the hadooptmp directory and restart HBase -->
        <property>
                <name>hadoop.tmp.dir</name>
                <value>/usr/qhl/hadoopWorkspace/hadooptmp</value>
        </property>
        <property>
                <name>io.file.buffer.size</name>
                <value>131072</value>
        </property>
        <!-- options added for httpfs -->
        <property>
                <name>hadoop.proxyuser.root.hosts</name>
                <value>lingcloud30</value>
        </property>
        <property>
                <name>hadoop.proxyuser.root.groups</name>
                <value>*</value>
        </property>
</configuration>



</code>

2.3.2 {hadoop}/etc/hadoop/hdfs-site.xml Configuration

<code=xml>

<configuration>
        <property>
                <name>dfs.datanode.handler.count</name>
                <value>5</value>
                <description>The number of server threads for the datanode.</description>
        </property>
        <property>
                <name>dfs.namenode.handler.count</name>
                <value>5</value>
                <description>The number of server threads for the namenode.</description>
        </property>
        <property>
                <name>dfs.replication</name>
                <value>3</value>
        </property>
        <property>
                <name>dfs.namenode.name.dir</name>
                <value>file:/usr/qhl/hadoopWorkspace/hdfs/name</value>
                <final>true</final>
        </property>
        <property>
                <name>dfs.permissions</name>
                <value>false</value>
        </property>
        <property>
                <name>dfs.federation.nameservice.id</name>
                <value>ns1</value>
        </property>
        <property>
                <name>dfs.namenode.backup.address.ns1</name>
                <value>lingcloud30:50100</value>
        </property>
        <property>
                <name>dfs.namenode.backup.http-address.ns1</name>
                <value>lingcloud30:50105</value>
        </property>
        <property>
                <name>dfs.federation.nameservices</name>
                <value>ns1</value>
        </property>
        <property>
                <name>dfs.namenode.rpc-address.ns1</name>
                <value>lingcloud30:9000</value>
        </property>
        <property>
                <name>dfs.namenode.rpc-address.ns2</name>
                <value>lingcloud30:9000</value>
        </property>
        <property>
                <!-- web UI port: browse http://lingcloud30:23001 to see NameNode status -->
                <name>dfs.namenode.http-address.ns1</name>
                <value>lingcloud30:23001</value>
        </property>
        <property>
                <name>dfs.namenode.http-address.ns2</name>
                <value>lingcloud30:13001</value>
        </property>
        <property>
                <name>dfs.datanode.data.dir</name>
                <value>file:/usr/qhl/hadoopWorkspace/hdfs/data</value>
                <final>true</final>
        </property>
        <property>
                <name>dfs.namenode.secondary.http-address.ns1</name>
                <value>lingcloud30:23002</value>
        </property>
        <property>
                <name>dfs.namenode.secondary.http-address.ns2</name>
                <value>lingcloud30:23002</value>
        </property>
        <property>
                <name>dfs.datanode.max.xcievers</name>
                <value>8192</value>
        </property>
</configuration>
 


</code>
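
The local directories referenced above must exist before the NameNode is formatted; a sketch, assuming the same filesystem layout on every node:

<code=bash>

# On lingcloud30 (NameNode):
mkdir -p /usr/qhl/hadoopWorkspace/hdfs/name
# On lingcloud29/31/32 (DataNodes):
mkdir -p /usr/qhl/hadoopWorkspace/hdfs/data
# hadoop.tmp.dir, on every node:
mkdir -p /usr/qhl/hadoopWorkspace/hadooptmp

</code>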

2.3.3 {hadoop}/etc/hadoop/yarn-site.xml Configuration

<code=xml>

<configuration>
        <property>
                <name>yarn.resourcemanager.address</name>
                <value>lingcloud30:18040</value>
        </property>
        <property>
                <name>yarn.resourcemanager.scheduler.address</name>
                <value>lingcloud30:18030</value>
        </property>
        <property>
                <!-- web UI port: browse http://lingcloud30:18088 to see ResourceManager status -->
                <name>yarn.resourcemanager.webapp.address</name>
                <value>lingcloud30:18088</value>
        </property>
        <property>
                <name>yarn.resourcemanager.resource-tracker.address</name>
                <value>lingcloud30:18025</value>
        </property>
        <property>
                <name>yarn.resourcemanager.admin.address</name>
                <value>lingcloud30:18141</value>
        </property>
        <property>
                <!-- must be mapreduce_shuffle (underscore) on Hadoop 2.2 and later -->
                <name>yarn.nodemanager.aux-services</name>
                <value>mapreduce_shuffle</value>
        </property>
</configuration>
 


</code>

2.3.4 {hadoop}/etc/hadoop/mapred-env.sh Configuration

Add:

<code=xml>

export HADOOP_MAPRED_PID_DIR=/usr/qhl/hadoopWorkspace/haddopMapredPidDir  # where the PID files are stored; /tmp by default


</code>
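
The PID directory should exist on every node; a one-liner, using the path configured above:

<code=bash>

mkdir -p /usr/qhl/hadoopWorkspace/haddopMapredPidDir

</code>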

2.3.5 {hadoop}/etc/hadoop/slaves Configuration

<code=xml>

lingcloud32
lingcloud31
lingcloud29


</code>

2.3.6 Summary

Copy the above configuration to the other nodes; the same settings apply everywhere. A sketch of distributing the files and starting the cluster follows.
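
A sketch of distributing the configuration and bringing the cluster up for the first time; {hadoop} stands for the Hadoop install directory, as elsewhere in this post:

<code=bash>

# Push the configuration directory to the other nodes.
for host in lingcloud29 lingcloud31 lingcloud32; do
    scp -r {hadoop}/etc/hadoop/ $host:{hadoop}/etc/
done

# First start only: format the NameNode on lingcloud30.
{hadoop}/bin/hdfs namenode -format

# Start HDFS and YARN from lingcloud30.
{hadoop}/sbin/start-dfs.sh
{hadoop}/sbin/start-yarn.sh

# Verify with jps: lingcloud30 should show NameNode, SecondaryNameNode and
# ResourceManager; lingcloud29/31/32 should show DataNode and NodeManager.
jps

</code>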

3. HBase Configuration

3.1 hbase-env.sh Environment Configuration

<code=xml>

export JAVA_HOME=/usr/lib/jdk/jdk1.7.0_45/   # your JDK install directory
export HBASE_PID_DIR=/usr/qhl/hbaseWorkspace/pids   # the directory where PID files are stored; /tmp by default
export HBASE_MANAGES_ZK=true   # use the ZooKeeper bundled with HBase


</code>

3.2 hbase-site.xml Configuration

<code=xml>

<configuration>
        <property>
                <name>hbase.rootdir</name>
                <value>hdfs://lingcloud30:9000/hbase</value>
                <!--
                this must match fs.defaultFS in {hadoop}/etc/hadoop/core-site.xml:
                <property>
                        <name>fs.defaultFS</name>
                        <value>hdfs://lingcloud30:9000</value>
                </property>
                -->
        </property>
        <property>
                <name>hbase.zookeeper.property.dataDir</name>
                <value>/usr/qhl/zookeeper</value>
        </property>
        <property>
                <name>hbase.cluster.distributed</name>
                <value>true</value>
                <description>Directs HBase to run in distributed mode, with one JVM instance per daemon.</description>
        </property>
        <property>
                <name>hbase.tmp.dir</name>
                <value>/usr/qhl/hbaseWorkspace/hbasetmp</value>
                <!-- if the HMaster aborts a few seconds after starting, remove this tmp directory and restart HBase -->
        </property>
        <property>
                <name>hbase.zookeeper.quorum</name>
                <value>lingcloud29,lingcloud31,lingcloud32</value>
        </property>
        <property>
                <name>hbase.master</name>
                <value>lingcloud30:60000</value>
        </property>
        <property>
                <name>hbase.master.port</name>
                <value>60000</value>
                <description>The port the master should bind to.</description>
        </property>
        <property>
                <name>hbase.master.maxclockskew</name>
                <value>200000</value>
                <description>Maximum allowed time difference (ms) between a regionserver and the master.</description>
        </property>
</configuration>
 


</code>
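
As with Hadoop, the local directories referenced in hbase-env.sh and hbase-site.xml should exist on the relevant nodes; a sketch:

<code=bash>

# On every HBase node:
mkdir -p /usr/qhl/hbaseWorkspace/pids /usr/qhl/hbaseWorkspace/hbasetmp
# On the ZooKeeper quorum nodes (lingcloud29/31/32):
mkdir -p /usr/qhl/zookeeper

</code>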

3.3 {hbase}/conf/regionservers Configuration

<code=xml>

lingcloud29
lingcloud31
lingcloud32


</code>
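
With HDFS already running, start HBase from lingcloud30 and run a quick smoke test; {hbase} stands for the HBase install directory:

<code=bash>

# Starts the HMaster locally, the bundled ZooKeeper quorum, and the
# HRegionServers listed in conf/regionservers.
{hbase}/bin/start-hbase.sh

# Smoke test from the HBase shell:
{hbase}/bin/hbase shell
# hbase(main)> status
# hbase(main)> create 't1', 'cf1'
# hbase(main)> list

</code>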

4. Browser Screenshots After Successful Configuration

In hdfs-site.xml under {hadoop}/etc/hadoop I configured port 23001 via the dfs.namenode.http-address.ns1 property, so the NameNode web UI is at http://lingcloud30:23001.

Visiting port 18088 shows the YARN ResourceManager web UI:

[Screenshot: ResourceManager web UI at http://lingcloud30:18088]

The HBase master web UI defaults to port 60010.
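
A quick way to confirm all three web UIs are reachable, from any machine that can resolve the hostnames:

<code=bash>

curl -s -o /dev/null -w "%{http_code}\n" http://lingcloud30:23001   # NameNode
curl -s -o /dev/null -w "%{http_code}\n" http://lingcloud30:18088   # ResourceManager
curl -s -o /dev/null -w "%{http_code}\n" http://lingcloud30:60010   # HBase Master

</code>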


5. Summary and References

I must admit that I learned much of this online: I consulted many blog posts and ran into many errors along the way, but since I did not save the logs, I won't paste the errors here. Because I drew on so many posts, I also cannot list every one of them. This is my first blog post; if anything is wrong, please point it out so we can learn together.

 

References:

1. Hadoop API docs (quite useful, in my opinion): http://hadoop.apache.org/docs/current/api/overview-summary.html#overview_description

2. Official HBase configuration guide: http://hbase.apache.org/book/configuration.html

3. Official Hadoop configuration guide: http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SingleCluster.html

4. Reference blog post: http://www.cnblogs.com/scotoma/archive/2012/09/18/2689902.html

5. Reference blog post: http://www.cnblogs.com/ringwang/p/3623149.html

6. Reference blog post: http://www.linuxidc.com/Linux/2012-12/76947p6.htm

7. Reference blog post: http://dongxicheng.org/mapreduce-nextgen/apache-hadoop-2-0-alpha/

8. Reference blog post: http://www.cnblogs.com/xia520pi/archive/2012/05/16/2503949.html

9. HBase official documentation, Chinese translation: http://abloz.com/hbase/book.html#hbase_default_configurations

 
