Installing HBase 1.4.13 on CentOS 7
1. Prerequisites
Before installing HBase, make sure Hadoop and ZooKeeper are already installed. See "Installing Hadoop 2.8.5 on CentOS 7" and "Setting up a ZooKeeper pseudo-cluster on CentOS 7".
2. Installing HBase 1.4.13
Step 1. Copy the HBase tarball downloaded on Windows to the CentOS machine, switch to the hadoop user, and extract it:
su hadoop
tar -zxvf hbase-1.4.13-bin.tar.gz
After extracting, rename the directory (the rest of this guide assumes /home/hadoop/hbase) and remove the tarball:
mv hbase-1.4.13 hbase
rm hbase-1.4.13-bin.tar.gz
Step 2. Switch to root and edit /etc/profile:
su root
vim /etc/profile
Add the following lines:
export HBASE_HOME=/home/hadoop/hbase
export PATH=$PATH:$HBASE_HOME/bin
Apply the changes and switch back to the hadoop user:
source /etc/profile
su hadoop
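If you script this step, it is easy to end up with duplicate export lines after repeated runs. A minimal sketch of an idempotent helper (the helper name and the demonstration are my own; the HBASE_HOME path is the one used in this guide):

```shell
# Append a line to a file only if it is not already present, so the
# setup can be re-run without duplicating entries in /etc/profile.
add_env_line() {
  local file="$1" line="$2"
  grep -qxF "$line" "$file" 2>/dev/null || echo "$line" >> "$file"
}

# Usage against the real profile (run as root):
#   add_env_line /etc/profile 'export HBASE_HOME=/home/hadoop/hbase'
#   add_env_line /etc/profile 'export PATH=$PATH:$HBASE_HOME/bin'
#   source /etc/profile
```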
Step 3. Edit the configuration files
Change to the /home/hadoop/hbase/conf directory.
In hbase-env.sh, set:
export JAVA_HOME=/usr/local/jdk1.8.0_151   # JDK installation directory
export HBASE_OPTS="-XX:+UseConcMarkSweepGC"
export HBASE_MANAGES_ZK=false   # false because we use an externally installed ZooKeeper
Next, edit hbase-site.xml (note that XML does not allow # comments; annotations must use <!-- --> syntax):
<configuration>
  <property>
    <!-- Maximum clock skew tolerated between nodes, in milliseconds -->
    <name>hbase.master.maxclockskew</name>
    <value>180000</value>
  </property>
  <property>
    <!-- Shared directory on HDFS where HBase persists its data -->
    <name>hbase.rootdir</name>
    <value>hdfs://hadoop0:9000/hbase</value>
  </property>
  <property>
    <!-- Whether to run in distributed mode; false means standalone -->
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <!-- ZooKeeper quorum address -->
    <name>hbase.zookeeper.quorum</name>
    <value>hadoop0</value>
  </property>
  <property>
    <!-- Where ZooKeeper snapshots are kept; create this directory manually -->
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/hadoop/hbase/tmp/zookeeper</value>
  </property>
</configuration>
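Remember to create the dataDir directory by hand, since HBase will not create it for you. As a convenience, here is a small sketch of a helper (my own, not part of HBase) that checks a property in hbase-site.xml has the value you expect, assuming the <value> line directly follows its <name> line as in the file above:

```shell
# check_prop <xml-file> <property-name> <expected-value>
# Returns success if the property's <value> line (directly after its
# <name> line) contains the expected value.
check_prop() {
  grep -A1 "<name>$2</name>" "$1" | grep -q "<value>$3</value>"
}

# Create the ZooKeeper snapshot directory, then sanity-check the config:
#   mkdir -p /home/hadoop/hbase/tmp/zookeeper
#   check_prop /home/hadoop/hbase/conf/hbase-site.xml hbase.cluster.distributed true
```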
Copy Hadoop's hdfs-site.xml and core-site.xml into hbase/conf:
cp /home/hadoop/apps/hadoop-2.8.5/etc/hadoop/hdfs-site.xml /home/hadoop/hbase/conf
cp /home/hadoop/apps/hadoop-2.8.5/etc/hadoop/core-site.xml /home/hadoop/hbase/conf
Start HBase (HDFS and ZooKeeper must already be running):
bin/start-hbase.sh
Verify the installation:
Processes: jps should show both HMaster and HRegionServer.
Enter the HBase shell: bin/hbase shell
Use the list command to show existing tables.
Leave the shell with quit.
Web UI: http://192.168.140.132:16010
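The jps check above can be wrapped in a small function so a script can fail fast when a daemon is missing (the function name and structure are my own sketch):

```shell
# check_daemons "<jps output>"
# Succeeds only if both HMaster and HRegionServer appear in the output.
check_daemons() {
  local missing=0
  local proc
  for proc in HMaster HRegionServer; do
    echo "$1" | grep -q "$proc" || { echo "$proc is not running"; missing=1; }
  done
  return $missing
}

# Usage: check_daemons "$(jps)" && echo "HBase daemons are up"
```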
3. Possible problems
Problem 1: After starting HBase there is no HMaster process, or entering ./hbase shell fails with a znode==null error.
Fix A: Stop all processes, delete and recreate Hadoop's logs and tmp directories, then reformat the NameNode with hdfs namenode -format. Also delete and recreate HBase's logs directory, then restart all services.
Fix B: If Fix A does not help, go to ZooKeeper's bin directory and start the client:
./zkCli.sh
ls /
rmr /hbase
Then restart the services.
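The two fixes can be collected into one script, roughly as below. This is a sketch under the paths assumed throughout this guide (including a tmp directory under the Hadoop install, which depends on your hadoop.tmp.dir setting), and it is DESTRUCTIVE: reformatting the NameNode wipes HDFS data and rmr /hbase deletes HBase's ZooKeeper state, so only run it on a disposable test setup.

```shell
# DESTRUCTIVE reset of HBase/HDFS state for a broken test install.
# Paths are the ones used in this guide; adjust them for your layout.
reset_hbase_state() {
  stop-hbase.sh
  stop-dfs.sh

  # Fix A: recreate Hadoop's logs/tmp and reformat the NameNode.
  rm -rf /home/hadoop/apps/hadoop-2.8.5/logs /home/hadoop/apps/hadoop-2.8.5/tmp
  mkdir -p /home/hadoop/apps/hadoop-2.8.5/logs /home/hadoop/apps/hadoop-2.8.5/tmp
  hdfs namenode -format
  rm -rf /home/hadoop/hbase/logs && mkdir -p /home/hadoop/hbase/logs

  # Fix B: drop HBase's znode from ZooKeeper (rmr is the zkCli command
  # in this ZooKeeper generation; newer releases use deleteall).
  echo "rmr /hbase" | zkCli.sh

  start-dfs.sh
  start-hbase.sh
}
```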
Problem 2: Accessing an HBase table through the Java API fails with the following error:
Exception in thread "main" org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions:
Wed Nov 11 13:45:27 CST 2020, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=64029: can not resolve hadoop0,16201,1605072781509 row 'test0,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=hadoop0,16201,1605072781509, seqNum=0
at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(RpcRetryingCallerWithReadReplicas.java:329)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:242)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:58)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:219)
at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:275)
at org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:436)
at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:310)
at org.apache.hadoop.hbase.MetaTableAccessor.fullScan(MetaTableAccessor.java:639)
at org.apache.hadoop.hbase.MetaTableAccessor.tableExists(MetaTableAccessor.java:366)
at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:409)
at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:419)
at com.lcg.test.HbaseAPI.isTableExist(HbaseAPI.java:29)
at com.lcg.test.HbaseAPI.main(HbaseAPI.java:139)
Caused by: java.net.SocketTimeoutException: callTimeout=60000, callDuration=64029: can not resolve hadoop0,16201,1605072781509 row 'test0,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=hadoop0,16201,1605072781509, seqNum=0
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:178)
at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:80)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.UnknownHostException: can not resolve hadoop0,16201,1605072781509
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.createAddr(AbstractRpcClient.java:429)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.createBlockingRpcChannel(AbstractRpcClient.java:507)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getClient(ConnectionManager.java:1694)
at org.apache.hadoop.hbase.client.ScannerCallable.prepare(ScannerCallable.java:168)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.prepare(ScannerCallableWithReplicas.java:400)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:140)
... 4 more
Process finished with exit code 1
To fix this, add the line 192.168.140.132 hadoop0 to the hosts file in C:\Windows\System32\drivers\etc on the Windows machine, matching the entry already added to /etc/hosts on CentOS 7.
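To confirm the mapping is in place before retrying the Java client, you can grep the hosts file directly. A sketch of a helper (my own; on Windows, point it at C:\Windows\System32\drivers\etc\hosts from a shell such as Git Bash):

```shell
# hosts_has_entry <hosts-file> <ip> <hostname>
# Succeeds if the file has a line mapping the IP to the hostname.
hosts_has_entry() {
  grep -Eq "^[[:space:]]*$2[[:space:]]+.*\b$3\b" "$1"
}

# Usage:
#   hosts_has_entry /etc/hosts 192.168.140.132 hadoop0 || echo "add the mapping"
```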