HBase
1. Upload HBase 1.2.4
Upload the hbase-1.2.4 tarball to /home/hadoop/software on master.
2. Extract
[hadoop@master ~]$ cd /home/hadoop/software
[hadoop@master software]$ tar xzvf hbase-1.2.4-bin.tar.gz
Configure the environment variables:
[hadoop@master software]$ cd
[hadoop@master ~]$ vi .bashrc
Add the following four lines:
export HADOOP_HOME=/home/hadoop/software/hadoop-2.6.4
export ZOOKEEPER_HOME=/home/hadoop/software/zookeeper-3.4.6
export HBASE_HOME=/home/hadoop/software/hbase-1.2.4
export PATH=$HBASE_HOME/bin:$ZOOKEEPER_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
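After sourcing .bashrc, you can sanity-check that every bin directory actually landed on PATH. A minimal sketch, reusing the same paths as above:

```shell
# Sanity check (sketch): set the same variables as in .bashrc above and
# confirm each bin directory is on PATH.
export HADOOP_HOME=/home/hadoop/software/hadoop-2.6.4
export ZOOKEEPER_HOME=/home/hadoop/software/zookeeper-3.4.6
export HBASE_HOME=/home/hadoop/software/hbase-1.2.4
export PATH=$HBASE_HOME/bin:$ZOOKEEPER_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
for d in "$HBASE_HOME/bin" "$ZOOKEEPER_HOME/bin" "$HADOOP_HOME/bin" "$HADOOP_HOME/sbin"; do
  case ":$PATH:" in
    *":$d:"*) echo "on PATH: $d" ;;
    *)        echo "MISSING: $d" ;;
  esac
done
```

Once this passes (and HBase is installed in step 2), `hbase version` should resolve from any directory.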
3. Configure the HBase cluster by editing three files (the ZooKeeper cluster must already be installed).
Note: HBase ultimately stores its data in HDFS, so copy Hadoop's hdfs-site.xml and core-site.xml into hbase/conf:
[hadoop@master ~]$ cd /home/hadoop/software/hadoop-2.6.4/etc/hadoop
[hadoop@master hadoop]$ cp hdfs-site.xml /home/hadoop/software/hbase-1.2.4/conf/
[hadoop@master hadoop]$ cp core-site.xml /home/hadoop/software/hbase-1.2.4/conf/
3.1 On master, edit hbase-env.sh in the hbase-1.2.4/conf directory:
[hadoop@master ~]$ cd /home/hadoop/software/hbase-1.2.4/conf/
[hadoop@master conf]$ vi hbase-env.sh
1. Set export JAVA_HOME=/usr/java/jdk1.7.0_67
2. Set export HBASE_MANAGES_ZK=false   // tell HBase to use the external ZooKeeper
3. Set export HBASE_HEAPSIZE=8G
Save and exit.
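The three vi edits above can also be scripted with sed. A sketch, demonstrated on a throwaway sample file so it is safe to try; point CONF at the real $HBASE_HOME/conf/hbase-env.sh to apply the edits for real:

```shell
# Demo on a sample copy; the sed patterns match the commented-out defaults
# that ship in hbase-env.sh.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
# export JAVA_HOME=/usr/java/jdk1.6.0/
# export HBASE_MANAGES_ZK=true
EOF
sed -i 's|^#* *export JAVA_HOME=.*|export JAVA_HOME=/usr/java/jdk1.7.0_67|' "$CONF"
sed -i 's|^#* *export HBASE_MANAGES_ZK=.*|export HBASE_MANAGES_ZK=false|' "$CONF"
echo 'export HBASE_HEAPSIZE=8G' >> "$CONF"   # sample has no heap line, so append
grep '^export' "$CONF"
```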
3.2 Edit hbase-site.xml:
[hadoop@master conf]$ vi hbase-site.xml
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://192.168.6.250:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>master:2181,slave1:2181,slave2:2181</value>
  </property>
  <property>
    <name>hbase.master.port</name>
    <value>16000</value>
  </property>
  <property>
    <name>hbase.master.info.port</name>
    <value>16010</value>
  </property>
</configuration>
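hbase.rootdir must live on the HDFS named by fs.defaultFS in the core-site.xml copied earlier, and a mismatch is a common cause of a failed start. A quick consistency check, sketched on sample files in a temp dir; point DIR at the real conf directory to check your own:

```shell
# Compare the NameNode URI in core-site.xml against the prefix of
# hbase.rootdir in hbase-site.xml (sample files mirror this guide's values).
DIR=$(mktemp -d)
cat > "$DIR/core-site.xml" <<'EOF'
<configuration><property>
  <name>fs.defaultFS</name><value>hdfs://192.168.6.250:9000</value>
</property></configuration>
EOF
cat > "$DIR/hbase-site.xml" <<'EOF'
<configuration><property>
  <name>hbase.rootdir</name><value>hdfs://192.168.6.250:9000/hbase</value>
</property></configuration>
EOF
FS=$(sed -n 's|.*<value>\(hdfs://[^<]*\)</value>.*|\1|p' "$DIR/core-site.xml")
ROOT=$(sed -n 's|.*<value>\(hdfs://[^<]*\)</value>.*|\1|p' "$DIR/hbase-site.xml")
case "$ROOT" in
  "$FS"/*) echo "rootdir matches fs.defaultFS" ;;
  *)       echo "MISMATCH: $ROOT vs $FS" ;;
esac
```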
3.3 List the regionserver machines; the master is not configured separately: whichever machine you start HBase on becomes the master, and the regionservers file lists the hosts on which an HRegionServer will be started.
[hadoop@master conf]$ vi regionservers
192.168.6.250
192.168.6.251
192.168.6.252
3.4 In $HBASE_HOME/lib, replace the bundled Hadoop and ZooKeeper jars with the versions matching your cluster.
(1) [hadoop@master ~]$ rm -rf $HBASE_HOME/lib/hadoop*.jar
(Optionally, you can skip copying the hadoop test/sources jars.)
find /home/hadoop/software/hadoop-2.6.4/share/ -name "hadoop*jar" | xargs -i cp {} $HBASE_HOME/lib
### Background: http://blog.csdn.net/luojiafei/article/details/7213489
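After the delete-and-copy above, every hadoop jar left in $HBASE_HOME/lib should carry the 2.6.4 version string. A sanity-check sketch, demonstrated on a temp dir with sample file names; point LIB at the real $HBASE_HOME/lib to check your own:

```shell
# Count hadoop jars whose names lack the expected version (should be 0).
LIB=$(mktemp -d)
touch "$LIB/hadoop-common-2.6.4.jar" "$LIB/hadoop-hdfs-2.6.4.jar"
STALE=$(ls "$LIB" | grep '^hadoop' | grep -vc '2\.6\.4')
echo "stale hadoop jars: $STALE"
```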
(2) HBase 1.2.0 depends on two classes from the amazonaws packages, so the following two jars must be uploaded to $HBASE_HOME/lib, or the errors below will appear.
Upload these two jars to /home/hadoop/software/hbase-1.2.4/lib:
aws-java-sdk-core-1.10.77.jar
aws-java-sdk-s3-1.11.34.jar
Without them, startup fails with ClassNotFoundException:
Caused by: java.lang.ClassNotFoundException: com.amazonaws.auth.AWSCredentialsProvider
Caused by: java.lang.ClassNotFoundException: com.amazonaws.services.s3.AmazonS3
4. Copy the configured HBase to the slaves:
[hadoop@master software]$ scp -r hbase-1.2.4 slave1:/home/hadoop/software/
[hadoop@master software]$ scp -r hbase-1.2.4 slave2:/home/hadoop/software/
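With more slaves, the two scp commands generalize to a loop. A dry-run sketch using this guide's host names; it only prints the commands, so replace the echo/printf with a real scp invocation to copy:

```shell
# Build and print one scp command per slave host (dry run, nothing is copied).
OUT=$(for h in slave1 slave2; do
  echo "scp -r /home/hadoop/software/hbase-1.2.4 $h:/home/hadoop/software/"
done)
printf '%s\n' "$OUT"
```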
5. Start HBase
Start HBase by running, on master (the main node):
start-hbase.sh
Verify with jps: master should now show an HMaster process (plus an HRegionServer, since master is also listed in regionservers), and slave1/slave2 should each show an HRegionServer process.
[hadoop@master ~]$ jps
[hadoop@slave1 ~]$ jps
[hadoop@slave2 ~]$ jps