Installing HBase: Standalone and Pseudo-Distributed Modes

Installation environment:
CentOS 7.2, 64-bit
jdk1.8.0_91

I. Pre-installation preparation

  1. Turn off the firewall and SELinux before installing.
  2. If the JDK is missing or the version is too old, install an appropriate JDK first.
  3. I installed under the hadoop user rather than root, so create the hadoop user first.
  4. Edit /etc/sysconfig/network and /etc/hosts to set the hostname (at first I thought this step could be skipped, but it later caused many problems, so just do it properly).
    Note: if you do not change the hostname, it defaults to localhost, which maps to 127.0.0.1; keep this in mind.
  5. Set up passwordless SSH login for the hadoop user:
[hadoop@localhost ~]$ ssh-keygen -t rsa
[hadoop@localhost ~]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub h153
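As a quick sanity check of the preparation steps above, a small script can report the SELinux, firewalld, and SSH-key state. This is my own throwaway helper, not part of any installer; the checks degrade gracefully where the tools are absent:

```shell
# Preflight check for the preparation steps above (CentOS 7 assumed).
preflight() {
  # SELinux should report Disabled (or at least Permissive)
  if command -v getenforce >/dev/null 2>&1; then
    echo "SELinux: $(getenforce)"
  else
    echo "SELinux: getenforce not found"
  fi
  # firewalld should be inactive
  if command -v systemctl >/dev/null 2>&1; then
    state=$(systemctl is-active firewalld 2>/dev/null || true)
    echo "firewalld: ${state:-unknown}"
  else
    echo "firewalld: systemctl not found"
  fi
  # the RSA key pair from ssh-keygen should exist
  if [ -f "$HOME/.ssh/id_rsa" ] && [ -f "$HOME/.ssh/id_rsa.pub" ]; then
    echo "ssh key pair: present"
  else
    echo "ssh key pair: missing (run ssh-keygen -t rsa)"
  fi
}
preflight
```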

Note: all of the steps above are covered in detail at the beginning of my posts on installing hadoop-2.6.0-cdh5.5.2 and deploying Ambari 2.2.2 (CentOS 7.2), so I will not repeat them here.
 

II. HBase standalone mode (very simple; there are really only two main steps)

1. Extract the HBase archive:
[hadoop@h153 ~]$ tar -zxvf hbase-1.0.0-cdh5.5.2.tar.gz
2. Edit hbase-env.sh:
[hadoop@h153 ~]$ vi hbase-1.0.0-cdh5.5.2/conf/hbase-env.sh
Add:
export JAVA_HOME=/usr/jdk1.8.0_91
3. Configure hbase-site.xml and create a directory for the data:
[hadoop@h153 ~]$ vi hbase-1.0.0-cdh5.5.2/conf/hbase-site.xml
Add:
    <property>
        <name>hbase.rootdir</name>
        <value>file:///home/hadoop/hbase-1.0.0-cdh5.5.2/data</value>
    </property>
    
    <property>
        <name>hbase.zookeeper.property.dataDir</name>
        <value>/home/hadoop/hbase-1.0.0-cdh5.5.2/zookeeper</value>
    </property>
4. Start HBase:
[hadoop@h153 hbase-1.0.0-cdh5.5.2]$ bin/start-hbase.sh
5. Verify HBase:
[hadoop@h153 hbase-1.0.0-cdh5.5.2]$ jps
19658 HMaster
19978 Jps
6. Verify with the shell:
[hadoop@h153 hbase-1.0.0-cdh5.5.2]$ bin/hbase shell
hbase(main):001:0> list
TABLE                                                                                                                                                                                                                                        
0 row(s) in 0.4270 seconds

=> []
hbase(main):002:0> create 'scores','grade','course'
0 row(s) in 0.4920 seconds

=> Hbase::Table - scores
hbase(main):003:0> put 'scores','zhangsan01','course:math','99'
0 row(s) in 0.1680 seconds

hbase(main):004:0> scan 'scores'
ROW                                                          COLUMN+CELL                                                                                                                                                                     
 zhangsan01                                                  column=course:math, timestamp=1502138745681, value=99                                                                                                                           
1 row(s) in 0.1130 seconds
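The interactive session above can also be scripted: the HBase shell reads commands from stdin, so a small wrapper lets you run table operations from scripts. This is my own helper; the install path is the one assumed throughout this post:

```shell
# Wrapper: feed one HBase shell command per argument on stdin.
# HBASE_DIR is an assumption matching the install path used in this post.
HBASE_DIR=${HBASE_DIR:-/home/hadoop/hbase-1.0.0-cdh5.5.2}
run_hbase() {
  printf '%s\n' "$@" | "$HBASE_DIR/bin/hbase" shell
}
# Example (requires a running HBase):
# run_hbase "create 'scores','grade','course'" \
#           "put 'scores','zhangsan01','course:math','99'" \
#           "scan 'scores'"
```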
7. Configure the runtime environment:
[hadoop@h153 ~]$ vi .bash_profile
export CLASSPATH=.:/home/hadoop/hbase-1.0.0-cdh5.5.2/lib/*
Note: this must be written as * rather than *.jar; the JVM expands a classpath wildcard entry to the jar files in that directory itself.

[hadoop@h153 ~]$ source .bash_profile
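To sanity-check that the wildcard entry actually matches something, you can count the jars in the lib directory. A throwaway helper of mine:

```shell
# Count the .jar files a classpath wildcard entry like "$dir/*" would pick up.
count_jars() {
  local dir="$1"
  # ls exits nonzero when nothing matches; wc still sees empty input
  ls "$dir"/*.jar 2>/dev/null | wc -l
}
# count_jars /home/hadoop/hbase-1.0.0-cdh5.5.2/lib   # expect a large number
```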
8. Extra: single-node ZooKeeper installation:
[hadoop@h153 zookeeper-3.4.5-cdh5.5.2]$ vi conf/zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
clientPort=2181
dataDir=/home/hadoop/zookeeper-3.4.5-cdh5.5.2/data
dataLogDir=/home/hadoop/zookeeper-3.4.5-cdh5.5.2/log

[hadoop@h153 zookeeper-3.4.5-cdh5.5.2]$ mkdir -pv data log

[hadoop@h153 zookeeper-3.4.5-cdh5.5.2]$ ./bin/zkServer.sh start
JMX enabled by default
Using config: /home/hadoop/zookeeper-3.4.5-cdh5.5.2/bin/../conf/zoo.cfg
Starting zookeeper ... ./zookeeper-3.4.5-cdh5.5.2/bin/zkServer.sh: line 120: [: /home/hadoop/zookeeper-3.4.5-cdh5.5.2/data: binary operator expected
STARTED

[hadoop@h153 zookeeper-3.4.5-cdh5.5.2]$ ./bin/zkServer.sh status
JMX enabled by default
Using config: /home/hadoop/zookeeper-3.4.5-cdh5.5.2/bin/../conf/zoo.cfg
Mode: standalone
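A standalone ZooKeeper can also be probed with the ruok four-letter command over a raw TCP connection; a healthy server answers imok. This is my own sketch, and it relies on bash's /dev/tcp feature (bash only, not sh):

```shell
# Send ZooKeeper's "ruok" four-letter command and print the reply (or a fallback).
zk_ruok() {
  local host="$1" port="$2" reply
  # Subshell so a failed connect doesn't kill the current shell.
  reply="$( (exec 3<>"/dev/tcp/$host/$port" && printf 'ruok' >&3 && head -c 4 <&3) 2>/dev/null )" || true
  echo "${reply:-no response}"
}
# zk_ruok h153 2181   # a healthy server replies: imok
```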

Thoughts:

  Later I wanted HBase in standalone mode to use my own ZooKeeper instead of the bundled one. After setting export HBASE_MANAGES_ZK=false, I found that standalone HBase still opens port 2181 automatically, while fully distributed HBase does not. Moreover, whether or not standalone HBase is configured to use the bundled ZooKeeper, port 2181 is opened after HBase starts and no HQuorumPeer process appears; I still have not worked out why. The result was that my own ZooKeeper could not start because 2181 was already taken, failing with java.net.BindException: Address already in use.

  In the end I had no choice but to change my own ZooKeeper's port to 2182 in zoo.cfg before it would start, but will HBase actually use it? My scenario was standalone-mode HBase plus Kafka on one machine, and Kafka requires a ZooKeeper, so I installed my own because I did not know how to point Kafka at HBase's bundled ZooKeeper. (Later I realized that since HBase already runs its bundled ZooKeeper, there is no need to install another one: just configure Kafka with zookeeper.connect=h153:2181.)

  To my surprise, no matter how standalone HBase was configured, after starting HBase my own zookeeper-3.4.5-cdh5.5.2 failed to start, yet running [hadoop@h153 ~]$ ./zookeeper-3.4.5-cdh5.5.2/bin/zkServer.sh status still reported Mode: standalone, which puzzled me. (In hindsight, zkServer.sh status presumably just queries whatever is listening on the clientPort from zoo.cfg, so it was most likely talking to HBase's embedded ZooKeeper on 2181 rather than to my installation at all.)
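The java.net.BindException is easy to diagnose by checking whether anything already owns 2181 before starting your own ZooKeeper, for example with netstat/ss, or with this bash-only probe (my own helper, using /dev/tcp):

```shell
# Return success if something is already listening on localhost:$1 (bash only).
port_in_use() {
  (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}
if port_in_use 2181; then
  echo "port 2181 is taken (probably HBase's embedded ZooKeeper)"
else
  echo "port 2181 is free"
fi
```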
 

III. Pseudo-distributed installation

1. Install Hadoop:

(1) Extract the Hadoop archive:

[hadoop@h153 ~]$ tar -zxvf hadoop-2.6.0-cdh5.5.2.tar.gz

(2) Edit core-site.xml:

[hadoop@h153 ~]$ vi hadoop-2.6.0-cdh5.5.2/etc/hadoop/core-site.xml
Add:
    <property>  
        <name>fs.defaultFS</name>  
        <value>hdfs://h153:9000</value>
    </property>
    
    <property>  
        <name>io.file.buffer.size</name>  
        <value>131072</value>  
        <description>Size of read/write buffer used in SequenceFiles.</description>
    </property>

(3) Edit hdfs-site.xml:

[hadoop@h153 ~]$ vi hadoop-2.6.0-cdh5.5.2/etc/hadoop/hdfs-site.xml
Add:
    <property>
        <name>dfs.namenode.secondary.http-address</name>  
        <value>h153:50090</value>  
        <description>The secondary namenode http server address and port.</description>
    </property>
    
    <property>
        <name>dfs.namenode.name.dir</name>  
        <value>file:///home/hadoop/hadoop-2.6.0-cdh5.5.2/dfs/name</value>  
        <description>Path on the local filesystem where the NameNode stores the namespace and transaction logs persistently.</description>
    </property>
    
    <property>
        <name>dfs.datanode.data.dir</name>  
        <value>file:///home/hadoop/hadoop-2.6.0-cdh5.5.2/dfs/data</value>  
        <description>Comma separated list of paths on the local filesystem of a DataNode where it should store its blocks.</description>
    </property>
    
    <property>
        <name>dfs.namenode.checkpoint.dir</name>  
        <value>file:///home/hadoop/hadoop-2.6.0-cdh5.5.2/dfs/namesecondary</value>  
        <description>Determines where on the local filesystem the DFS secondary name node should store the temporary images to merge. If this is a comma-delimited list of directories then the image is replicated in all of the directories for redundancy.</description>
    </property>
    
    <property>
        <name>dfs.replication</name>  
        <value>1</value>  
    </property>

(4) Edit mapred-site.xml:

[hadoop@h153 ~]$ cp hadoop-2.6.0-cdh5.5.2/etc/hadoop/mapred-site.xml.template hadoop-2.6.0-cdh5.5.2/etc/hadoop/mapred-site.xml

[hadoop@h153 ~]$ vi hadoop-2.6.0-cdh5.5.2/etc/hadoop/mapred-site.xml
Add:
    <property>
        <name>mapreduce.framework.name</name>  
        <value>yarn</value>  
        <description>The runtime framework for executing MapReduce jobs. Can be one of local, classic or yarn.</description>
    </property>
    
    <property>
        <name>mapreduce.jobhistory.address</name>  
        <value>h153:10020</value>  
        <description>MapReduce JobHistoryServer IPC host:port</description>  
    </property>
    
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>  
        <value>h153:19888</value>  
        <description>MapReduce JobHistoryServer Web UI host:port</description>  
    </property>

(5) Edit yarn-site.xml:

[hadoop@h153 ~]$ vi hadoop-2.6.0-cdh5.5.2/etc/hadoop/yarn-site.xml
    <property>
        <name>yarn.resourcemanager.hostname</name>  
        <value>h153</value>  
        <description>The hostname of the RM.</description>
    </property>
  
    <property>
        <name>yarn.nodemanager.aux-services</name>  
        <value>mapreduce_shuffle</value>  
        <description>Shuffle service that needs to be set for Map Reduce applications.</description>
    </property>

(6) Format the filesystem:

[hadoop@h153 hadoop-2.6.0-cdh5.5.2]$ bin/hdfs namenode -format

(7) Start Hadoop:

[hadoop@h153 hadoop-2.6.0-cdh5.5.2]$ sbin/start-all.sh

(8) Check the processes with jps:

[hadoop@h153 hadoop-2.6.0-cdh5.5.2]$ jps
4416 SecondaryNameNode
4148 NameNode
4260 DataNode
4822 NodeManager
4954 Jps
4556 ResourceManager
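Rather than eyeballing the jps output, the expected daemon list can be checked mechanically. A helper of my own; it reads jps output on stdin:

```shell
# Report any Hadoop daemon from the expected set that is missing from jps output.
check_daemons() {
  local out missing=""
  out="$(cat)"
  for d in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
    echo "$out" | grep -qw "$d" || missing="$missing $d"
  done
  if [ -z "$missing" ]; then
    echo "all expected daemons running"
  else
    echo "missing:$missing"
  fi
}
# jps | check_daemons
```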
2. Install ZooKeeper (running mode: pseudo-distributed cluster):

Note: you could also use HBase's bundled ZooKeeper, and a single ZooKeeper process seems to be enough. Here I start three ZooKeeper processes to simulate a ZooKeeper cluster.

(1) Extract the ZooKeeper archive:

[hadoop@h153 ~]$ tar -zxvf zookeeper-3.4.5-cdh5.5.2.tar.gz

(2) Edit zoo1.cfg, zoo2.cfg, and zoo3.cfg:

[hadoop@h153 ~]$ vi zookeeper-3.4.5-cdh5.5.2/conf/zoo1.cfg
Add:
tickTime=2000
initLimit=10
syncLimit=5
clientPort=2181
dataDir=/home/hadoop/zookeeper-3.4.5-cdh5.5.2/data1
dataLogDir=/home/hadoop/zookeeper-3.4.5-cdh5.5.2/logs
server.1=h153:2887:3887
server.2=h153:2888:3888
server.3=h153:2889:3889

[hadoop@h153 ~]$ vi zookeeper-3.4.5-cdh5.5.2/conf/zoo2.cfg
Add:
tickTime=2000
initLimit=10
syncLimit=5
clientPort=2182
dataDir=/home/hadoop/zookeeper-3.4.5-cdh5.5.2/data2
dataLogDir=/home/hadoop/zookeeper-3.4.5-cdh5.5.2/logs
server.1=h153:2887:3887
server.2=h153:2888:3888
server.3=h153:2889:3889

[hadoop@h153 ~]$ vi zookeeper-3.4.5-cdh5.5.2/conf/zoo3.cfg
Add:
tickTime=2000
initLimit=10
syncLimit=5
clientPort=2183
dataDir=/home/hadoop/zookeeper-3.4.5-cdh5.5.2/data3
dataLogDir=/home/hadoop/zookeeper-3.4.5-cdh5.5.2/logs
server.1=h153:2887:3887
server.2=h153:2888:3888
server.3=h153:2889:3889
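Since the three configs differ only in clientPort and dataDir, they can be generated in one loop instead of edited by hand. A convenience function of my own; the install path in the commented call is the one used in this post:

```shell
# Write zoo1.cfg..zoo3.cfg under $1/conf, varying only clientPort and dataDir.
gen_zoo_cfgs() {
  local zk_home="$1" i
  mkdir -p "$zk_home/conf"
  for i in 1 2 3; do
    cat > "$zk_home/conf/zoo$i.cfg" <<EOF
tickTime=2000
initLimit=10
syncLimit=5
clientPort=$((2180 + i))
dataDir=$zk_home/data$i
dataLogDir=$zk_home/logs
server.1=h153:2887:3887
server.2=h153:2888:3888
server.3=h153:2889:3889
EOF
  done
}
# gen_zoo_cfgs /home/hadoop/zookeeper-3.4.5-cdh5.5.2
```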

(3) Create the directories:

[hadoop@h153 ~]$ cd zookeeper-3.4.5-cdh5.5.2/
[hadoop@h153 zookeeper-3.4.5-cdh5.5.2]$ mkdir -pv data1 data2 data3 logs

(4) Create a myid file under the dataDir specified in each zoo*.cfg:

[hadoop@h153 zookeeper-3.4.5-cdh5.5.2]$ vi data1/myid
1
[hadoop@h153 zookeeper-3.4.5-cdh5.5.2]$ vi data2/myid
2
[hadoop@h153 zookeeper-3.4.5-cdh5.5.2]$ vi data3/myid
3
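Steps (3) and (4) can likewise be collapsed into one loop; each dataN/myid must hold the N from the matching server.N line:

```shell
# Create data1..data3 plus logs under $1, each dataN holding a myid of N.
make_myids() {
  local zk_home="$1" i
  for i in 1 2 3; do
    mkdir -p "$zk_home/data$i"
    echo "$i" > "$zk_home/data$i/myid"
  done
  mkdir -p "$zk_home/logs"
}
# make_myids /home/hadoop/zookeeper-3.4.5-cdh5.5.2
```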

(5) Start ZooKeeper:

[hadoop@h153 zookeeper-3.4.5-cdh5.5.2]$ bin/zkServer.sh start zoo1.cfg
[hadoop@h153 zookeeper-3.4.5-cdh5.5.2]$ bin/zkServer.sh start zoo2.cfg
[hadoop@h153 zookeeper-3.4.5-cdh5.5.2]$ bin/zkServer.sh start zoo3.cfg

(6) Verify the startup:

[hadoop@h153 zookeeper-3.4.5-cdh5.5.2]$ bin/zkServer.sh status zoo1.cfg
JMX enabled by default
Using config: /home/hadoop/zookeeper-3.4.5-cdh5.5.2/bin/../conf/zoo1.cfg
Mode: follower
[hadoop@h153 zookeeper-3.4.5-cdh5.5.2]$ bin/zkServer.sh status zoo2.cfg
JMX enabled by default
Using config: /home/hadoop/zookeeper-3.4.5-cdh5.5.2/bin/../conf/zoo2.cfg
Mode: leader
[hadoop@h153 zookeeper-3.4.5-cdh5.5.2]$ bin/zkServer.sh status zoo3.cfg
JMX enabled by default
Using config: /home/hadoop/zookeeper-3.4.5-cdh5.5.2/bin/../conf/zoo3.cfg
Mode: follower

[hadoop@h153 zookeeper-3.4.5-cdh5.5.2]$ jps
4416 SecondaryNameNode
12352 QuorumPeerMain
4148 NameNode
4260 DataNode
12421 QuorumPeerMain
4822 NodeManager
12635 Jps
4556 ResourceManager
5165 QuorumPeerMain
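A quick assertion that the ensemble really has three members (my own one-liner; it reads jps output on stdin):

```shell
# Count QuorumPeerMain processes in jps output; a 3-node ensemble should show 3.
count_quorum_peers() {
  grep -c 'QuorumPeerMain' || true   # grep -c exits 1 on zero matches
}
# n=$(jps | count_quorum_peers); [ "$n" -eq 3 ] && echo "ensemble complete"
```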

Note: if startup fails, check the generated zookeeper.out file for the cause.
 

3. Install HBase:

(1) Extract the HBase archive:

[hadoop@h153 ~]$ tar -zxvf hbase-1.0.0-cdh5.5.2.tar.gz

(2) Edit hbase-env.sh:

[hadoop@h153 ~]$ vi hbase-1.0.0-cdh5.5.2/conf/hbase-env.sh
Add:
export JAVA_HOME=/usr/jdk1.8.0_91
export HBASE_MANAGES_ZK=false  # do not use the bundled ZooKeeper

(3) Configure hbase-site.xml:

[hadoop@h153 ~]$ vi hbase-1.0.0-cdh5.5.2/conf/hbase-site.xml
Add:
    <property>
        <name>hbase.rootdir</name>
        <value>hdfs://h153:9000/hbase</value>
    </property>
    
    <property>  
        <name>hbase.cluster.distributed</name>  
        <value>true</value>  
    </property>
    
    <property>  
        <name>dfs.replication</name>  
        <value>1</value>  
    </property>
    
    <property>  
        <name>hbase.zookeeper.quorum</name>  
        <value>h153</value>  
    </property>
    
    <property>  
        <name>zookeeper.session.timeout</name>  
        <value>60000</value>
    </property>
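One pitfall here: hbase.rootdir must point at exactly the same scheme://host:port as fs.defaultFS in core-site.xml, or HBase will not come up cleanly. A trivial string check (my own sketch) makes the invariant explicit:

```shell
# Check that an hbase.rootdir value lives under the fs.defaultFS URI.
same_namenode() {
  local fs="$1" rootdir="$2"
  case "$rootdir" in
    "$fs"/*) echo "ok: $rootdir is under $fs" ;;
    *)       echo "MISMATCH: $rootdir is not under $fs" ;;
  esac
}
same_namenode hdfs://h153:9000 hdfs://h153:9000/hbase
```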

(4) Start HBase:

[hadoop@h153 hbase-1.0.0-cdh5.5.2]$ bin/start-hbase.sh

(5) Verify HBase:

[hadoop@h153 hbase-1.0.0-cdh5.5.2]$ jps
4416 SecondaryNameNode
12352 QuorumPeerMain
12946 Jps
4148 NameNode
4260 DataNode
12421 QuorumPeerMain
4822 NodeManager
12855 HRegionServer
12762 HMaster
4556 ResourceManager
5165 QuorumPeerMain

(6) Verify with the shell:

[hadoop@h153 hbase-1.0.0-cdh5.5.2]$ bin/hbase shell
hbase(main):001:0> list
TABLE                                                                                                                                                                                                                                        
0 row(s) in 0.4270 seconds

(7) Configure the runtime environment:

[hadoop@h153 ~]$ vi .bash_profile
HADOOP_HOME=/home/hadoop/hadoop-2.6.0-cdh5.5.2
HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
PATH=$HADOOP_HOME/bin:$PATH
export HADOOP_HOME HADOOP_CONF_DIR PATH

export CLASSPATH=.:/home/hadoop/hadoop-2.6.0-cdh5.5.2/etc/hadoop:/home/hadoop/hadoop-2.6.0-cdh5.5.2/share/hadoop/common/lib/*:/home/hadoop/hadoop-2.6.0-cdh5.5.2/share/hadoop/common/*:/home/hadoop/hadoop-2.6.0-cdh5.5.2/share/hadoop/hdfs:/home/hadoop/hadoop-2.6.0-cdh5.5.2/share/hadoop/hdfs/lib/*:/home/hadoop/hadoop-2.6.0-cdh5.5.2/share/hadoop/hdfs/*:/home/hadoop/hadoop-2.6.0-cdh5.5.2/share/hadoop/yarn/lib/*:/home/hadoop/hadoop-2.6.0-cdh5.5.2/share/hadoop/yarn/*:/home/hadoop/hadoop-2.6.0-cdh5.5.2/share/hadoop/mapreduce/lib/*:/home/hadoop/hadoop-2.6.0-cdh5.5.2/share/hadoop/mapreduce/*:/home/hadoop/hadoop-2.6.0-cdh5.5.2/contrib/capacity-scheduler/*.jar:/home/hadoop/hbase-1.0.0-cdh5.5.2/lib/*

[hadoop@h153 ~]$ source .bash_profile