Five nodes:
node1  backup-master
node2  regionserver
node3  regionserver
node4  regionserver
node5  master
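For these hostnames to resolve on every machine, each node's /etc/hosts needs the full mapping. A minimal sketch -- the IP addresses below are placeholders, substitute your own:

```shell
# Append the cluster hostname mapping (sample IPs -- replace with yours).
cat >> /etc/hosts <<'EOF'
192.168.1.101 node1
192.168.1.102 node2
192.168.1.103 node3
192.168.1.104 node4
192.168.1.105 node5
EOF
```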
Prerequisites:
1. Install the JDK -- JDK 1.7 is recommended; JDK 1.8 also works, but the official docs suggest staying on a pre-1.8 version for this release.
2. An HA Hadoop stack already running: Hadoop, HDFS, and ZooKeeper -- we do not use HBase's bundled ZooKeeper.
1. Download HBase 0.98 and copy the tarball to the servers.
Copy it to every node and extract it there.
2. Set up passwordless ssh login.
If you cannot ssh to localhost without a passphrase, execute the following commands:
$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa    # generates the key pair: id_dsa is the private key, id_dsa.pub the public key
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
3. scp id_dsa.pub root@ruijie02:/opt/    # copy the NameNode's public key to each of the other nodes
4. In /opt, run cat id_dsa.pub >> ~/.ssh/authorized_keys to append the NameNode's public key to authorized_keys. Passwordless ssh is now set up!
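The key distribution in steps 3-4 has to be repeated once per node. A small dry-run helper (hypothetical, not part of the original steps) prints the exact commands for each target host, so you can inspect them before running -- or pipe the output to `sh` to execute:

```shell
# Print the scp/ssh commands that would push our public key to each host.
# Review the output first, then pipe it to `sh` to actually run it.
distribute_key() {
  for host in "$@"; do
    echo "scp ~/.ssh/id_dsa.pub root@$host:/opt/"
    echo "ssh root@$host 'cat /opt/id_dsa.pub >> ~/.ssh/authorized_keys'"
  done
}
distribute_key node2 node3 node4 node5
```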
3. Configure environment variables (HBASE_HOME).
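A sketch of the variables, assuming HBase was extracted to /opt/hbase-0.98 (the path is an assumption -- adjust it to your install directory); append these lines to /etc/profile or ~/.bashrc and re-source the file:

```shell
# HBase environment variables (/opt/hbase-0.98 is an assumed install path).
export HBASE_HOME=/opt/hbase-0.98
export PATH=$PATH:$HBASE_HOME/bin
```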
4. Configure conf/backup-masters:
node1
5. Configure conf/regionservers:
node1
node2
node3
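Steps 4 and 5 are just two plain-text files under $HBASE_HOME/conf, one hostname per line. A sketch that writes them, following the lists in steps 4-5 (CONF_DIR stands in for $HBASE_HOME/conf so the snippet can be dry-run anywhere):

```shell
# Write backup-masters and regionservers, one hostname per line.
# CONF_DIR defaults to the current directory for a dry run;
# set CONF_DIR=$HBASE_HOME/conf for the real files.
CONF_DIR=${CONF_DIR:-.}
printf '%s\n' node1 > "$CONF_DIR/backup-masters"
printf '%s\n' node1 node2 node3 > "$CONF_DIR/regionservers"
```

Remember that the same conf/ directory contents should be present on every node.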
6. Configure hbase-env.sh:
export JAVA_HOME=/usr/java/jdk1.8.0_60
export HBASE_MANAGES_ZK=false   # false means: do not use HBase's bundled ZooKeeper
7. Configure hbase-site.xml:
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://mycluster/hbase</value>
</property>
<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>node1,node2,node3</value>
</property>
<property>
  <name>hbase.zookeeper.property.dataDir</name>
  <value>/opt/zookeeper</value>
  <description>Property from ZooKeeper config zoo.cfg.
  The directory where the snapshot is stored.
  </description>
</property>
mycluster in the first property is my HDFS nameservice; since HDFS HA is in use, no specific NameNode host and port are given.
8. Copy Hadoop's hdfs-site.xml into hbase/conf.
The official docs give three ways to do this; here I use the second one:
1. Of note, if you have made HDFS client configuration changes on your Hadoop cluster, such as
configuration directives for HDFS clients, as opposed to server-side configurations, you must use
one of the following methods to enable HBase to see and use these configuration changes:
a. Add a pointer to your HADOOP_CONF_DIR to the HBASE_CLASSPATH environment variable in hbase-env.sh.
b. Add a copy of hdfs-site.xml (or hadoop-site.xml) or, better, symlinks, under ${HBASE_HOME}/conf, or
c. if only a small set of HDFS client configurations, add them to hbase-site.xml.
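Method b from the quote above boils down to one symlink; the Hadoop configuration path below is an assumption -- use wherever your hdfs-site.xml actually lives:

```shell
# Link Hadoop's client-side hdfs-site.xml into HBase's conf directory so
# HBase can resolve the HA nameservice "mycluster" (path is an assumption).
ln -s /opt/hadoop/etc/hadoop/hdfs-site.xml $HBASE_HOME/conf/hdfs-site.xml
```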
Done!
Startup:
1. Start ZooKeeper first: zkServer.sh start
2. Then start HDFS: start-dfs.sh
3. Start the ResourceManager and NodeManagers.
4. Start HBase: start-hbase.sh
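The start order above matters (ZooKeeper → HDFS → YARN → HBase). A hypothetical dry-run helper that prints which command runs where, using the node roles from the table at the top (the NameNode and ResourceManager hosts are not specified in these notes, so they appear as role names only):

```shell
# Print the planned start sequence (nothing is executed remotely).
start_plan() {
  for zk in node1 node2 node3; do
    echo "$zk: zkServer.sh start"          # ZooKeeper quorum first
  done
  echo "namenode: start-dfs.sh"            # run on your active NameNode
  echo "resourcemanager: start-yarn.sh"    # ResourceManager + NodeManagers
  echo "node5: start-hbase.sh"             # master; also starts backup-master and regionservers over ssh
}
start_plan
```

After start-hbase.sh finishes, `jps` on each node should show HMaster on node5 (and on node1 as the backup) and HRegionServer on the regionserver nodes.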