HBase Programming API Primer Series: put (from the client side) (1)

A note up front: you may need to run the program more than once. If you hit a connection timeout, just retry; it will eventually go through.

First, the cluster configuration. The conf/regionservers file lists the three region servers:

```
[hadoop@HadoopSlave1 conf]$ cat regionservers
HadoopMaster
HadoopSlave1
HadoopSlave2
```

The relevant settings in hbase-site.xml:

```xml
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>HadoopMaster,HadoopSlave1,HadoopSlave2</value>
</property>
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://HadoopMaster:9000/hbase</value>
</property>
<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>
<property>
  <name>hbase.tmp.dir</name>
  <value>/home/hadoop/data/hbase-1.2.3/tmp</value>
</property>
<property>
  <name>zookeeper.znode.parent</name>
  <value>/hbase</value>
</property>
```

And in hbase-env.sh:

```sh
export JAVA_HOME=/home/hadoop/app/jdk1.7.0_79
export HBASE_MANAGES_ZK=false   # ZooKeeper is an external ensemble, not managed by HBase
```

Create the table in the HBase shell:

```
hbase(main):002:0> create 'test_table','f'
```

The syntax is create 'table_name', 'column_family': here test_table is the HBase table name and f is the column family. To keep the programming examples that follow simple, I create this minimal table by hand first; it could just as well be created or modified from code.
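For reference, here is a minimal sketch (not from the original post) of creating the same table programmatically with the HBase 1.x admin API. The class name CreateTestTable is my own; the quorum string mirrors the cluster above.

```java
package zhouls.bigdata.HbaseProject.Test1;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

// Hypothetical helper: creates 'test_table' with column family 'f' if it does not exist yet.
public class CreateTestTable {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum",
                 "HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181");
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Admin admin = connection.getAdmin()) {
            TableName name = TableName.valueOf("test_table");
            if (!admin.tableExists(name)) {
                HTableDescriptor desc = new HTableDescriptor(name);
                desc.addFamily(new HColumnDescriptor("f")); // one column family, as in the shell example
                admin.createTable(desc);
            }
        }
    }
}
```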

The first version of the client class, HBaseTest, writes a single cell, f:col = maizi, into row row_01:

```java
package zhouls.bigdata.HbaseProject.Test1;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseTest {
    public static void main(String[] args) throws Exception {
        HTable table = new HTable(getConfig(), TableName.valueOf("test_table")); // the table name is test_table
        Put put = new Put(Bytes.toBytes("row_01"));                              // the row key is row_01
        put.add(Bytes.toBytes("f"), Bytes.toBytes("col"), Bytes.toBytes("maizi")); // column family f, qualifier col, value maizi
        table.put(put);
        table.close();
    }

    public static Configuration getConfig(){
        Configuration configuration = new Configuration();
        //conf.set("hbase.rootdir","hdfs:HadoopMaster:9000/hbase");
        configuration.set("hbase.zookeeper.quorum", "HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181");
        return configuration;
    }
}
```
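The class above uses the HTable constructor and Put.add(...), which still run against HBase 1.2.3 (as the log below shows) but are marked deprecated in the 1.x client. As a point of comparison, here is a minimal sketch of the same single-cell put written against the Connection/Table API; the class name PutExample is mine, everything else mirrors the original.

```java
package zhouls.bigdata.HbaseProject.Test1;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

// Hypothetical alternative to HBaseTest: the same put via the non-deprecated 1.x API.
public class PutExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum",
                 "HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181");
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("test_table"))) {
            Put put = new Put(Bytes.toBytes("row_01"));                  // row key row_01
            put.addColumn(Bytes.toBytes("f"), Bytes.toBytes("col"),
                          Bytes.toBytes("maizi"));                       // f:col = maizi
            table.put(put);
        } // try-with-resources closes the Table and Connection even if the put throws
    }
}
```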

Running HBaseTest produces client log output like the following:

2016-12-10 11:05:45,077 INFO [org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper] - Process identifier=hconnection-0x5fc2fc59 connecting to ZooKeeper ensemble=HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181

2016-12-10 11:05:45,115 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT

2016-12-10 11:05:45,115 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:host.name=WIN-BQOBV63OBNM

2016-12-10 11:05:45,115 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:java.version=1.7.0_51

2016-12-10 11:05:45,115 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:java.vendor=Oracle Corporation

2016-12-10 11:05:45,116 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:java.home=C:\Program Files\Java\jdk1.7.0_51\jre

2016-12-10 11:05:45,116 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:java.class.path=D:\Code\MyEclipseJavaCode\HbaseProject\bin;D:\SoftWare\hbase-1.2.3\lib\activation-1.1.jar;D:\SoftWare\hbase-1.2.3\lib\aopalliance-1.0.jar;D:\SoftWare\hbase-1.2.3\lib\apacheds-i18n-2.0.0-M15.jar;D:\SoftWare\hbase-1.2.3\lib\apacheds-kerberos-codec-2.0.0-M15.jar;D:\SoftWare\hbase-1.2.3\lib\api-asn1-api-1.0.0-M20.jar;D:\SoftWare\hbase-1.2.3\lib\api-util-1.0.0-M20.jar;D:\SoftWare\hbase-1.2.3\lib\asm-3.1.jar;D:\SoftWare\hbase-1.2.3\lib\avro-1.7.4.jar;D:\SoftWare\hbase-1.2.3\lib\commons-beanutils-1.7.0.jar;D:\SoftWare\hbase-1.2.3\lib\commons-beanutils-core-1.8.0.jar;D:\SoftWare\hbase-1.2.3\lib\commons-cli-1.2.jar;D:\SoftWare\hbase-1.2.3\lib\commons-codec-1.9.jar;D:\SoftWare\hbase-1.2.3\lib\commons-collections-3.2.2.jar;D:\SoftWare\hbase-1.2.3\lib\commons-compress-1.4.1.jar;D:\SoftWare\hbase-1.2.3\lib\commons-configuration-1.6.jar;D:\SoftWare\hbase-1.2.3\lib\commons-daemon-1.0.13.jar;D:\SoftWare\hbase-1.2.3\lib\commons-digester-1.8.jar;D:\SoftWare\hbase-1.2.3\lib\commons-el-1.0.jar;D:\SoftWare\hbase-1.2.3\lib\commons-httpclient-3.1.jar;D:\SoftWare\hbase-1.2.3\lib\commons-io-2.4.jar;D:\SoftWare\hbase-1.2.3\lib\commons-lang-2.6.jar;D:\SoftWare\hbase-1.2.3\lib\commons-logging-1.2.jar;D:\SoftWare\hbase-1.2.3\lib\commons-math-2.2.jar;D:\SoftWare\hbase-1.2.3\lib\commons-math3-3.1.1.jar;D:\SoftWare\hbase-1.2.3\lib\commons-net-3.1.jar;D:\SoftWare\hbase-1.2.3\lib\disruptor-3.3.0.jar;D:\SoftWare\hbase-1.2.3\lib\findbugs-annotations-1.3.9-1.jar;D:\SoftWare\hbase-1.2.3\lib\guava-12.0.1.jar;D:\SoftWare\hbase-1.2.3\lib\guice-3.0.jar;D:\SoftWare\hbase-1.2.3\lib\guice-servlet-3.0.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-annotations-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-auth-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-client-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-common-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-hdfs-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-mapreduce-client-app-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-mapreduce-client-common-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-mapreduce-client-core-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-mapreduce-client-jobclient-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-mapreduce-client-shuffle-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-yarn-api-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-yarn-client-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-yarn-common-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-yarn-server-common-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-annotations-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-annotations-1.2.3-tests.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-client-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-common-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-common-1.2.3-tests.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-examples-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-external-blockcache-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-hadoop2-compat-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-hadoop-compat-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-it-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-it-1.2.3-tests.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-prefix-tree-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-procedure-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-protocol-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-resource-bundle-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-rest-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-server-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-server-1.2.3-tests.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-shell-1
.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-thrift-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\htrace-core-3.1.0-incubating.jar;D:\SoftWare\hbase-1.2.3\lib\httpclient-4.2.5.jar;D:\SoftWare\hbase-1.2.3\lib\httpcore-4.4.1.jar;D:\SoftWare\hbase-1.2.3\lib\jackson-core-asl-1.9.13.jar;D:\SoftWare\hbase-1.2.3\lib\jackson-jaxrs-1.9.13.jar;D:\SoftWare\hbase-1.2.3\lib\jackson-mapper-asl-1.9.13.jar;D:\SoftWare\hbase-1.2.3\lib\jackson-xc-1.9.13.jar;D:\SoftWare\hbase-1.2.3\lib\jamon-runtime-2.4.1.jar;D:\SoftWare\hbase-1.2.3\lib\jasper-compiler-5.5.23.jar;D:\SoftWare\hbase-1.2.3\lib\jasper-runtime-5.5.23.jar;D:\SoftWare\hbase-1.2.3\lib\javax.inject-1.jar;D:\SoftWare\hbase-1.2.3\lib\java-xmlbuilder-0.4.jar;D:\SoftWare\hbase-1.2.3\lib\jaxb-api-2.2.2.jar;D:\SoftWare\hbase-1.2.3\lib\jaxb-impl-2.2.3-1.jar;D:\SoftWare\hbase-1.2.3\lib\jcodings-1.0.8.jar;D:\SoftWare\hbase-1.2.3\lib\jersey-client-1.9.jar;D:\SoftWare\hbase-1.2.3\lib\jersey-core-1.9.jar;D:\SoftWare\hbase-1.2.3\lib\jersey-guice-1.9.jar;D:\SoftWare\hbase-1.2.3\lib\jersey-json-1.9.jar;D:\SoftWare\hbase-1.2.3\lib\jersey-server-1.9.jar;D:\SoftWare\hbase-1.2.3\lib\jets3t-0.9.0.jar;D:\SoftWare\hbase-1.2.3\lib\jettison-1.3.3.jar;D:\SoftWare\hbase-1.2.3\lib\jetty-6.1.26.jar;D:\SoftWare\hbase-1.2.3\lib\jetty-sslengine-6.1.26.jar;D:\SoftWare\hbase-1.2.3\lib\jetty-util-6.1.26.jar;D:\SoftWare\hbase-1.2.3\lib\joni-2.1.2.jar;D:\SoftWare\hbase-1.2.3\lib\jruby-complete-1.6.8.jar;D:\SoftWare\hbase-1.2.3\lib\jsch-0.1.42.jar;D:\SoftWare\hbase-1.2.3\lib\jsp-2.1-6.1.14.jar;D:\SoftWare\hbase-1.2.3\lib\jsp-api-2.1-6.1.14.jar;D:\SoftWare\hbase-1.2.3\lib\junit-4.12.jar;D:\SoftWare\hbase-1.2.3\lib\leveldbjni-all-1.8.jar;D:\SoftWare\hbase-1.2.3\lib\libthrift-0.9.3.jar;D:\SoftWare\hbase-1.2.3\lib\log4j-1.2.17.jar;D:\SoftWare\hbase-1.2.3\lib\metrics-core-2.2.0.jar;D:\SoftWare\hbase-1.2.3\lib\netty-all-4.0.23.Final.jar;D:\SoftWare\hbase-1.2.3\lib\paranamer-2.3.jar;D:\SoftWare\hbase-1.2.3\lib\protobuf-java-2.5.0.jar;D:\SoftWare\hbase-1.2.3\lib\servlet-api-2.5.jar;D:\SoftWare\hbase-1.2.3\lib\servlet-api-2.5-6.1.14.jar;D:\SoftWare\hbase-1.2.3\lib\slf4j-api-1.7.7.jar;D:\SoftWare\hbase-1.2.3\lib\slf4j-log4j12-1.7.5.jar;D:\SoftWare\hbase-1.2.3\lib\snappy-java-1.0.4.1.jar;D:\SoftWare\hbase-1.2.3\lib\spymemcached-2.11.6.jar;D:\SoftWare\hbase-1.2.3\lib\xmlenc-0.52.jar;D:\SoftWare\hbase-1.2.3\lib\xz-1.0.jar;D:\SoftWare\hbase-1.2.3\lib\zookeeper-3.4.6.jar

2016-12-10 11:05:45,118 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:java.library.path=C:\Program Files\Java\jdk1.7.0_51\bin;C:\Windows\Sun\Java\bin;C:\Windows\system32;C:\Windows;C:\ProgramData\Oracle\Java\javapath;C:\Python27\;C:\Python27\Scripts;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;D:\SoftWare\MATLAB R2013a\runtime\win64;D:\SoftWare\MATLAB R2013a\bin;C:\Program Files (x86)\IDM Computer Solutions\UltraCompare;C:\Program Files\Java\jdk1.7.0_51\bin;C:\Program Files\Java\jdk1.7.0_51\jre\bin;D:\SoftWare\apache-ant-1.9.0\bin;HADOOP_HOME\bin;D:\SoftWare\apache-maven-3.3.9\bin;D:\SoftWare\Scala\bin;D:\SoftWare\Scala\jre\bin;%MYSQL_HOME\bin;D:\SoftWare\MySQL Server\MySQL Server 5.0\bin;D:\SoftWare\apache-tomcat-7.0.69\bin;%C:\Windows\System32;%C:\Windows\SysWOW64;D:\SoftWare\SSH Secure Shell;.

2016-12-10 11:05:45,119 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:java.io.tmpdir=C:\Users\ADMINI~1\AppData\Local\Temp\

2016-12-10 11:05:45,120 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:java.compiler=

2016-12-10 11:05:45,120 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:os.name=Windows 7

2016-12-10 11:05:45,121 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:os.arch=amd64

2016-12-10 11:05:45,121 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:os.version=6.1

2016-12-10 11:05:45,131 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:user.name=Administrator

2016-12-10 11:05:45,131 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:user.home=C:\Users\Administrator

2016-12-10 11:05:45,132 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:user.dir=D:\Code\MyEclipseJavaCode\HbaseProject

2016-12-10 11:05:45,136 INFO [org.apache.zookeeper.ZooKeeper] - Initiating client connection, connectString=HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181 sessionTimeout=180000 watcher=hconnection-0x5fc2fc590x0, quorum=HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181, baseZNode=/hbase

2016-12-10 11:05:45,329 INFO [org.apache.zookeeper.ClientCnxn] - Opening socket connection to server HadoopMaster/192.168.80.10:2181. Will not attempt to authenticate using SASL (unknown error)

2016-12-10 11:05:45,365 INFO [org.apache.zookeeper.ClientCnxn] - Socket connection established to HadoopMaster/192.168.80.10:2181, initiating session

2016-12-10 11:05:45,421 INFO [org.apache.zookeeper.ClientCnxn] - Session establishment complete on server HadoopMaster/192.168.80.10:2181, sessionid = 0x1582587a9550008, negotiated timeout = 40000

2016-12-10 11:05:47,266 INFO [org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation] - Closing zookeeper sessionid=0x1582587a9550008

2016-12-10 11:05:47,275 INFO [org.apache.zookeeper.ZooKeeper] - Session: 0x1582587a9550008 closed

2016-12-10 11:05:47,275 INFO [org.apache.zookeeper.ClientCnxn] - EventThread shut down

Verify the write with a full scan of test_table in the HBase shell:

```
scan 'test_table'
```
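The shell scan can also be reproduced from Java. Below is a minimal sketch (the class name ScanExample is my own) that prints every cell currently stored in test_table:

```java
package zhouls.bigdata.HbaseProject.Test1;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

// Hypothetical helper: the programmatic equivalent of "scan 'test_table'".
public class ScanExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum",
                 "HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181");
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("test_table"));
             ResultScanner scanner = table.getScanner(new Scan())) {
            for (Result result : scanner) {
                for (Cell cell : result.rawCells()) {
                    // print: rowkey  family:qualifier = value
                    System.out.println(Bytes.toString(result.getRow()) + "  "
                            + Bytes.toString(CellUtil.cloneFamily(cell)) + ":"
                            + Bytes.toString(CellUtil.cloneQualifier(cell)) + " = "
                            + Bytes.toString(CellUtil.cloneValue(cell)));
                }
            }
        }
    }
}
```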

Next, run the class again with the same row key, row_01, but writing a different column, f:name = Andy:

```java
package zhouls.bigdata.HbaseProject.Test1;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseTest {
    public static void main(String[] args) throws Exception {
        HTable table = new HTable(getConfig(), TableName.valueOf("test_table")); // the table name is test_table
        Put put = new Put(Bytes.toBytes("row_01"));                              // the row key is row_01
        put.add(Bytes.toBytes("f"), Bytes.toBytes("name"), Bytes.toBytes("Andy")); // column family f, qualifier name, value Andy
        table.put(put);
        table.close();
    }

    public static Configuration getConfig(){
        Configuration configuration = new Configuration();
        //conf.set("hbase.rootdir","hdfs:HadoopMaster:9000/hbase");
        configuration.set("hbase.zookeeper.quorum", "HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181");
        return configuration;
    }
}
```

Scan again to confirm the new f:name column on row_01:

```
scan 'test_table'
```
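Besides scanning the whole table, a single row can be read back with a Get. Here is a minimal sketch (the class name GetExample is mine) that fetches f:name of row_01, which was just written:

```java
package zhouls.bigdata.HbaseProject.Test1;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

// Hypothetical helper: reads back the f:name cell of row_01.
public class GetExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum",
                 "HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181");
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("test_table"))) {
            Get get = new Get(Bytes.toBytes("row_01"));
            Result result = table.get(get);
            byte[] value = result.getValue(Bytes.toBytes("f"), Bytes.toBytes("name"));
            System.out.println("f:name of row_01 = " + Bytes.toString(value)); // expected: Andy
        }
    }
}
```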

Then write a second row, row_02, with f:name = Andy2:

```java
package zhouls.bigdata.HbaseProject.Test1;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseTest {
    public static void main(String[] args) throws Exception {
        HTable table = new HTable(getConfig(), TableName.valueOf("test_table")); // the table name is test_table
        Put put = new Put(Bytes.toBytes("row_02"));                              // the row key is row_02
        put.add(Bytes.toBytes("f"), Bytes.toBytes("name"), Bytes.toBytes("Andy2")); // column family f, qualifier name, value Andy2
        table.put(put);
        table.close();
    }

    public static Configuration getConfig(){
        Configuration configuration = new Configuration();
        //conf.set("hbase.rootdir","hdfs:HadoopMaster:9000/hbase");
        configuration.set("hbase.zookeeper.quorum", "HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181");
        return configuration;
    }
}
```

Scan again; the table should now contain both row_01 and row_02:

```
scan 'test_table'
```

Finally, write a third row, row_03, with f:name = Andy3:

```java
package zhouls.bigdata.HbaseProject.Test1;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseTest {
    public static void main(String[] args) throws Exception {
        HTable table = new HTable(getConfig(), TableName.valueOf("test_table")); // the table name is test_table
        Put put = new Put(Bytes.toBytes("row_03"));                              // the row key is row_03
        put.add(Bytes.toBytes("f"), Bytes.toBytes("name"), Bytes.toBytes("Andy3")); // column family f, qualifier name, value Andy3
        table.put(put);
        table.close();
    }

    public static Configuration getConfig(){
        Configuration configuration = new Configuration();
        //conf.set("hbase.rootdir","hdfs:HadoopMaster:9000/hbase");
        configuration.set("hbase.zookeeper.quorum", "HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181");
        return configuration;
    }
}
```

A final full scan should show all three rows:

```
scan 'test_table'
```
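One closing note: rather than editing the row key and value and rerunning the class once per row, all three rows could be written in a single batched call by passing a list of Puts to table.put. A minimal sketch under that assumption (the class name BatchPutExample is my own):

```java
package zhouls.bigdata.HbaseProject.Test1;

import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

// Hypothetical helper: writes row_01..row_03 in one batched table.put call.
public class BatchPutExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum",
                 "HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181");
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("test_table"))) {
            String[][] rows = { {"row_01", "Andy"}, {"row_02", "Andy2"}, {"row_03", "Andy3"} };
            List<Put> puts = new ArrayList<Put>();
            for (String[] row : rows) {
                Put put = new Put(Bytes.toBytes(row[0]));
                put.addColumn(Bytes.toBytes("f"), Bytes.toBytes("name"), Bytes.toBytes(row[1]));
                puts.add(put);
            }
            table.put(puts); // send all puts together instead of one program run per row
        }
    }
}
```

Batching also means a single connection setup rather than three, which helps with the connection timeouts mentioned at the top of the post.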
