spark-shell HBase operations: get, put, and return values

Notes on exceptions for each operator
1. When the filter list holds no elements, no data can be filtered through, so flow control is needed (see the sketch after this list):
   filterList.filterRow() returns true when the filter list holds no elements, and false when it does.
2. When working with a Result: guard against null-pointer errors with
   Result.advance(), which returns true while cells remain and false once they are exhausted.
3. When working with a Result:
   result.getValue(familyName, qualifier) returns null for a column that does not exist, so downstream calls such as Bytes.toString throw a NullPointerException.
   Solution: guard with result.containsColumn(familyName, qualifier).
4. A batch get (Table.get(List[Get])) returns Array[Result] and a scan returns a ResultScanner; both are iterable for data processing.
5. Exception handling for batch gets: handle data whose RowKey is null with care.
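Putting notes 1–3 together, a minimal Scala sketch of these guards. It assumes an already-opened Table handle (here named hTable, as in section 2 below) and the info family from the examples that follow:

import org.apache.hadoop.hbase.CellUtil
import org.apache.hadoop.hbase.client.{Get, Result, Table}
import org.apache.hadoop.hbase.filter.FilterList
import org.apache.hadoop.hbase.util.Bytes

def safeGet(hTable: Table, rowKey: String, filterList: FilterList): Unit = {
  val get = new Get(Bytes.toBytes(rowKey))
  // Note 1: attach the filter list only when it holds filters;
  // filterRow() returning true means the list is empty.
  if (!filterList.filterRow()) get.setFilter(filterList)
  val result: Result = hTable.get(get)
  // Note 3: containsColumn guards getValue against a missing column.
  val family = Bytes.toBytes("info")
  val qualifier = Bytes.toBytes("ID")
  if (result.containsColumn(family, qualifier))
    println(Bytes.toString(result.getValue(family, qualifier)))
  // Note 2: advance() returns true while cells remain, false once exhausted.
  while (result.advance()) {
    val cell = result.current()
    println(Bytes.toString(CellUtil.cloneQualifier(cell)) + " = " +
      Bytes.toString(CellUtil.cloneValue(cell)))
  }
}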
1. HBase shell: create the table and insert data
create 'DIGITOPER_CXYYGL:ZL_TEST','info'
put "DIGITOPER_CXYYGL:ZL_TEST","1508947425011","info:ID","110"
put "DIGITOPER_CXYYGL:ZL_TEST","1508947425011","info:NAME","JOE"
put "DIGITOPER_CXYYGL:ZL_TEST","1508947425011","info:AGE","10"

put "DIGITOPER_CXYYGL:ZL_TEST","1508947425012","info:ID","110"
put "DIGITOPER_CXYYGL:ZL_TEST","1508947425012","info:NAME","JOHE"
put "DIGITOPER_CXYYGL:ZL_TEST","1508947425012","info:AGE","20"
2. spark-shell operations, with return values shown
import org.apache.hadoop.hbase.{CellUtil, HBaseConfiguration, TableName}
import org.apache.hadoop.hbase.client.{Admin, BufferedMutator, Connection, ConnectionFactory, Delete, Get, Mutation, Put, Result, ResultScanner, Scan, Table}
import org.apache.hadoop.hbase.util.{Base64, Bytes}
import java.util
import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp
import org.apache.hadoop.hbase.filter.FilterList.Operator
import org.apache.hadoop.hbase.filter.{FilterList, SingleColumnValueFilter}

scala> val hConf = HBaseConfiguration.create()
hConf: org.apache.hadoop.conf.Configuration = Configuration: core-default.xml, core-site.xml, mapred-default.xml, mapred-site.xml, yarn-default.xml, yarn-site.xml, hdfs-default.xml, hdfs-site.xml, hdfs-io-settings.xml, hbase-default.xml, hbase-site.xml

scala> val hConn = ConnectionFactory.createConnection(hConf)
hConn: org.apache.hadoop.hbase.client.Connection = hconnection-0x44485db

scala> val hTable = hConn.getTable(TableName.valueOf("DIGITOPER_CXYYGL:ZL_TEST"))
hTable: org.apache.hadoop.hbase.client.Table = DIGITOPER_CXYYGL:ZL_TEST;hconnection-0x3eff6846

scala> val get = new Get(Bytes.toBytes("1508947425011"))
get: org.apache.hadoop.hbase.client.Get = {"cacheBlocks":true,"totalColumns":0,"row":"1508947425011","families":{},"maxVersions":1,"timeRange":[0,9223372036854775807]}

scala> val filterList = new FilterList(Operator.MUST_PASS_ALL)
filterList: org.apache.hadoop.hbase.filter.FilterList = FilterList AND (0/0): []

scala> if(!filterList.filterRow()) get.setFilter(filterList)
res0: Any = {"filter":"FilterList AND (0/0): []","cacheBlocks":true,"totalColumns":0,"row":"1508947425011","families":{},"maxVersions":1,"timeRange":[0,9223372036854775807]}

scala> val result=hTable.get(get)
result: org.apache.hadoop.hbase.client.Result = keyvalues={1508947425011/info:AGE/1607578572840/Put/vlen=2/seqid=0, 1508947425011/info:ID/1607578572698/Put/vlen=3/seqid=0, 1508947425011/info:NAME/1607578572786/Put/vlen=3/seqid=0}

scala> Bytes.toString(result.getValue(Bytes.toBytes("info"), Bytes.toBytes("ID")))
res3: String = 110
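A possible continuation of the session above (not part of the original transcript): a batch get returns Array[Result] (note 4), so each Result is guarded before it is read, and the table and connection are released at the end:

import java.util.Arrays

val gets = Arrays.asList(
  new Get(Bytes.toBytes("1508947425011")),
  new Get(Bytes.toBytes("1508947425012")))
val results: Array[Result] = hTable.get(gets)
results.foreach { r =>
  // Guard against empty Results and missing columns (notes 3 and 5).
  if (!r.isEmpty && r.containsColumn(Bytes.toBytes("info"), Bytes.toBytes("NAME")))
    println(Bytes.toString(r.getRow) + " -> " +
      Bytes.toString(r.getValue(Bytes.toBytes("info"), Bytes.toBytes("NAME"))))
}
hTable.close()
hConn.close()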