Prerequisite: a working Hadoop distributed cluster is already set up. This post only covers installing HBase on top of that Hadoop environment, and then operating the Linux HBase from Eclipse on a local Windows machine.
1. Software preparation
hbase-1.2.6-bin.tar.gz (the binary distribution; the -src tarball would need to be built first)
zookeeper-3.4.6.tar.gz
2. Install ZooKeeper (my Hadoop cluster has just one master and one slave)
1. First, extract ZooKeeper on the master's Linux box:
tar -zxf zookeeper-3.4.6.tar.gz
2. In the conf directory under the zookeeper directory, rename zoo_sample.cfg to zoo.cfg
3. Add the following to it:
dataDir=/home/hadoop/zookeeper/data
dataLogDir=/home/hadoop/zookeeper/data
server.1=master:2888:3888
server.2=slave:2888:3888
ps: make sure the first two directories already exist; master and slave in the last two lines are my two hostnames.
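For reference, a complete minimal zoo.cfg under these assumptions looks roughly like this (the tickTime/initLimit/syncLimit values are the stock defaults from zoo_sample.cfg, not something this setup changes):

```
tickTime=2000
initLimit=10
syncLimit=5
clientPort=2181
dataDir=/home/hadoop/zookeeper/data
dataLogDir=/home/hadoop/zookeeper/data
server.1=master:2888:3888
server.2=slave:2888:3888
```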
4. Then create a myid file in the data directory configured above, containing 1 on master and 2 on slave (just the digit 1 or 2).
5. Then send the ZooKeeper directory configured on master over to slave; remember to change myid on slave to 2:
scp -r zookeeper slave:/home/hadoop
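Step 4 in shell form; this sketch uses a relative demo path so it can run anywhere, whereas on the real nodes the directory is /home/hadoop/zookeeper/data:

```shell
# Create the ZooKeeper data directory and write this node's id into myid.
DATA_DIR=./zookeeper/data            # /home/hadoop/zookeeper/data on the real machines
mkdir -p "$DATA_DIR"
echo 1 > "$DATA_DIR/myid"            # must match server.1 in zoo.cfg; write 2 on slave
cat "$DATA_DIR/myid"
```

The id in myid has to agree with that host's server.N entry in zoo.cfg, otherwise the quorum will not form.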
6. If that all worked, ZooKeeper is done.
3. Install HBase
1. First, extract HBase on master.
2. Edit hbase-env.sh under HBase's conf directory and add:
export JAVA_HOME=$JAVA_HOME
export HADOOP_HOME=$HADOOP_HOME
export HBASE_HOME=$HBASE_HOME
(this assumes all three environment variables are already set in your shell)
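One addition worth making in the same file: since ZooKeeper here is installed and managed separately, HBase should be told not to launch its own embedded ZooKeeper:

```shell
# in hbase-env.sh: we run an external ZooKeeper quorum,
# so stop HBase from starting and stopping its own
export HBASE_MANAGES_ZK=false
```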
3. Edit hbase-site.xml in the same directory:
<configuration>
    <property>
        <name>hbase.rootdir</name>
        <value>hdfs://master:9000/hbase</value>
    </property>
    <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
    </property>
    <property>
        <name>hbase.master</name>
        <value>master:60000</value>
    </property>
    <property>
        <name>dfs.support.append</name>
        <value>true</value>
    </property>
    <property>
        <name>hbase.zookeeper.quorum</name>
        <value>master,slave</value>
    </property>
    <property>
        <name>hbase.zookeeper.property.dataDir</name>
        <value>/home/hadoop/zookeeper/data</value>
    </property>
    <property>
        <name>hbase.zookeeper.property.clientPort</name>
        <value>2181</value>
    </property>
    <property>
        <name>hbase.master.info.port</name>
        <value>60010</value>
    </property>
</configuration>
(That's about it for the HBase configuration.)
4. Send the configured hbase directory to slave as well; nothing needs to change there:
scp -r hbase slave:/home/hadoop
5. Time to start everything:
start-dfs.sh
start-yarn.sh
mr-jobhistory-daemon.sh start historyserver (that completes the base Hadoop environment; run on master only)
zkServer.sh start (start ZooKeeper; run this on both master and slave)
start-hbase.sh (start HBase; run on master only)
If everything worked, running jps on master will show something like this:
3985 JobHistoryServer
7428 Jps
5349 HMaster
3286 NameNode
3495 SecondaryNameNode
3689 ResourceManager
4107 QuorumPeerMain
And jps on slave shows:
3234 NodeManager
3752 HRegionServer
3087 DataNode
3407 QuorumPeerMain
5327 Jps
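Beyond jps, a quick smoke test from the HBase shell on master confirms the master and region server are actually talking (commands only, not from the original write-up; output omitted):

```
$ hbase shell
hbase(main):001:0> status
hbase(main):002:0> list
```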
The server environment is up. Next, set up Windows access to the Linux HBase.
I created the project with Maven; first, the relevant part of pom.xml:
<dependencies>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-common</artifactId>
        <version>2.7.4</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>2.7.4</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-hdfs</artifactId>
        <version>2.7.4</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/org.apache.hbase/hbase-client -->
    <dependency>
        <groupId>org.apache.hbase</groupId>
        <artifactId>hbase-client</artifactId>
        <version>1.2.4</version>
    </dependency>
    <dependency>
        <groupId>jdk.tools</groupId>
        <artifactId>jdk.tools</artifactId>
        <version>1.8</version>
        <scope>system</scope>
        <systemPath>${JAVA_HOME}/lib/tools.jar</systemPath>
    </dependency>
    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>3.8.1</version>
        <scope>test</scope>
    </dependency>
    <!-- https://mvnrepository.com/artifact/log4j/log4j -->
    <dependency>
        <groupId>log4j</groupId>
        <artifactId>log4j</artifactId>
        <version>1.2.17</version>
    </dependency>
</dependencies>
<build>
    <plugins>
        <!-- set the compiler level to 1.8 -->
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-compiler-plugin</artifactId>
            <configuration>
                <source>1.8</source>
                <target>1.8</target>
                <encoding>UTF-8</encoding>
            </configuration>
        </plugin>
    </plugins>
</build>
PS: I installed HBase 1.2.6, but pulling the 1.2.6 hbase-client artifact in Maven gave me errors, so I switched to 1.2.4 instead; the version mismatch has not caused any problems at runtime for me.
Then copy hbase-site.xml from the Linux machine into src (so it ends up on the classpath), and run the following code:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class HBaseTest {

    // HBaseConfiguration.create() picks up the hbase-site.xml we put on the classpath
    private static Configuration conf = HBaseConfiguration.create();

    // create a table with a single column family
    public static void createTable(String tablename, String columnFamily) throws Exception {
        Connection connection = ConnectionFactory.createConnection(conf);
        Admin admin = connection.getAdmin();
        TableName tableNameObj = TableName.valueOf(tablename);
        if (admin.tableExists(tableNameObj)) {
            System.out.println("Table exists!");
        } else {
            HTableDescriptor tableDesc = new HTableDescriptor(tableNameObj);
            tableDesc.addFamily(new HColumnDescriptor(columnFamily));
            admin.createTable(tableDesc);
            System.out.println("create table success!");
        }
        // close in both branches (the original called System.exit(0) on the
        // exists path, which skipped these and leaked the connection)
        admin.close();
        connection.close();
    }

    public static void main(String[] args) throws Exception {
        createTable("student", "info");
    }
}
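With the table created, writing and reading a cell follows the same pattern. A sketch (the row key 1001, qualifier name, and class name here are invented for illustration, and this of course needs the live cluster to run):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class PutGetDemo {
    public static void main(String[] args) throws Exception {
        // reads hbase-site.xml from the classpath, same as before
        Configuration conf = HBaseConfiguration.create();
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("student"))) {
            // write one cell: row "1001", family "info", qualifier "name"
            Put put = new Put(Bytes.toBytes("1001"));
            put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("name"), Bytes.toBytes("zhangsan"));
            table.put(put);
            // read the same cell back
            Get get = new Get(Bytes.toBytes("1001"));
            Result result = table.get(get);
            System.out.println(Bytes.toString(
                    result.getValue(Bytes.toBytes("info"), Bytes.toBytes("name"))));
        } // try-with-resources closes the table and connection for us
    }
}
```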