Hive: enabling row-level UPDATE and DELETE operations

1. Locate the relevant hive-site.xml file and add the following configuration:

<property>
    <!-- Required for ACID: enables the lock manager and concurrency support -->
    <name>hive.support.concurrency</name>
    <value>true</value>
</property>
<property>
    <!-- ACID tables must be bucketed; enforce bucketing on insert -->
    <name>hive.enforce.bucketing</name>
    <value>true</value>
</property>
<property>
    <!-- Allow fully dynamic partitioning (no static partition key required) -->
    <name>hive.exec.dynamic.partition.mode</name>
    <value>nonstrict</value>
</property>
<property>
    <!-- Use the metastore-database-backed transaction manager -->
    <name>hive.txn.manager</name>
    <value>org.apache.hadoop.hive.ql.lockmgr.DbTxnManager</value>
</property>
<property>
    <!-- Run the compaction initiator thread on this metastore instance -->
    <name>hive.compactor.initiator.on</name>
    <value>true</value>
</property>
<property>
    <!-- Number of worker threads that perform compactions -->
    <name>hive.compactor.worker.threads</name>
    <value>1</value>
</property>
Restart the Hive metastore service for the changes to take effect, then connect to Hive with beeline to run the statements. Note that row-level UPDATE and DELETE also require the target table itself to be ACID-compliant, as sketched below.
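
The original post does not show how test_cdc was created. Hive only allows row-level UPDATE/DELETE on tables that are bucketed, stored as ORC, and marked transactional. A minimal sketch of a DDL that satisfies those requirements (the columns match the query output below; the bucket count of 8 is an assumption, chosen to match the 8 mappers visible in the job log):

CREATE TABLE test_cdc (
    id   INT,
    name STRING
)
CLUSTERED BY (id) INTO 8 BUCKETS           -- ACID tables must be bucketed; 8 is a guess
STORED AS ORC                              -- ACID requires the ORC file format
TBLPROPERTIES ('transactional' = 'true');  -- marks the table as transactional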

2. Update the table data. It succeeds:

0: jdbc:hive2://demo3.leap.com:2181,demo1.lea> select * from test_cdc;
INFO  : Compiling command(queryId=hive_20180607104040_34af3962-1273-4a35-9725-c0e653d2b31d): select * from test_cdc
INFO  : Semantic Analysis Completed
INFO  : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:test_cdc.id, type:int, comment:null), FieldSchema(name:test_cdc.name, type:string, comment:null)], properties:null)
INFO  : Completed compiling command(queryId=hive_20180607104040_34af3962-1273-4a35-9725-c0e653d2b31d); Time taken: 1.781 seconds
INFO  : Executing command(queryId=hive_20180607104040_34af3962-1273-4a35-9725-c0e653d2b31d): select * from test_cdc
INFO  : Completed executing command(queryId=hive_20180607104040_34af3962-1273-4a35-9725-c0e653d2b31d); Time taken: 0.006 seconds
INFO  : OK
+--------------+----------------+--+
| test_cdc.id  | test_cdc.name  |
+--------------+----------------+--+
| 1            | aaa            |
+--------------+----------------+--+
1 row selected (2.334 seconds)
0: jdbc:hive2://demo3.leap.com:2181,demo1.lea> update test_cdc set name = 'ccc' where id =1; 
INFO  : Compiling command(queryId=hive_20180607104040_2eee4dba-43ec-4e49-b559-1548ef9dbb05): update test_cdc set name = 'ccc' where id =1
INFO  : Semantic Analysis Completed
INFO  : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:row__id, type:struct<transactionid:bigint,bucketid:int,rowid:bigint>, comment:null), FieldSchema(name:id, type:int, comment:null), FieldSchema(name:_c2, type:string, comment:null)], properties:null)
INFO  : Completed compiling command(queryId=hive_20180607104040_2eee4dba-43ec-4e49-b559-1548ef9dbb05); Time taken: 0.703 seconds
INFO  : Executing command(queryId=hive_20180607104040_2eee4dba-43ec-4e49-b559-1548ef9dbb05): update test_cdc set name = 'ccc' where id =1
INFO  : Query ID = hive_20180607104040_2eee4dba-43ec-4e49-b559-1548ef9dbb05
INFO  : Total jobs = 1
INFO  : Launching Job 1 out of 1
INFO  : Starting task [Stage-1:MAPRED] in serial mode
INFO  : Number of reduce tasks determined at compile time: 8
INFO  : In order to change the average load for a reducer (in bytes):
INFO  :   set hive.exec.reducers.bytes.per.reducer=<number>
INFO  : In order to limit the maximum number of reducers:
INFO  :   set hive.exec.reducers.max=<number>
INFO  : In order to set a constant number of reducers:
INFO  :   set mapreduce.job.reduces=<number>
INFO  : number of splits:8
INFO  : Submitting tokens for job: job_1528335916181_0001
INFO  : The url to track the job: http://demo1.leap.com:8088/proxy/application_1528335916181_0001/
INFO  : Starting Job = job_1528335916181_0001, Tracking URL = http://demo1.leap.com:8088/proxy/application_1528335916181_0001/
INFO  : Kill Command = /usr/bin/hadoop job  -kill job_1528335916181_0001
INFO  : Hadoop job information for Stage-1: number of mappers: 8; number of reducers: 8
INFO  : 2018-06-07 10:40:51,464 Stage-1 map = 0%,  reduce = 0%
INFO  : 2018-06-07 10:40:58,846 Stage-1 map = 25%,  reduce = 0%, Cumulative CPU 4.2 sec
INFO  : 2018-06-07 10:40:59,889 Stage-1 map = 50%,  reduce = 0%, Cumulative CPU 8.44 sec
INFO  : 2018-06-07 10:41:00,932 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 24.08 sec
INFO  : 2018-06-07 10:41:08,860 Stage-1 map = 100%,  reduce = 50%, Cumulative CPU 34.19 sec
INFO  : 2018-06-07 10:41:09,903 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 44.77 sec
INFO  : MapReduce Total cumulative CPU time: 44 seconds 770 msec
INFO  : Ended Job = job_1528335916181_0001
INFO  : Starting task [Stage-0:MOVE] in serial mode
INFO  : Loading data to table default.test_cdc from hdfs://demo1.leap.com:8020/apps/hive/warehouse/test_cdc/.hive-staging_hive_2018-06-07_10-40-42_440_5275939439062405091-1/-ext-10000
INFO  : Starting task [Stage-2:STATS] in serial mode
INFO  : Table default.test_cdc stats: [numFiles=9, numRows=1, totalSize=1234, rawDataSize=91]
INFO  : MapReduce Jobs Launched: 
INFO  : Stage-Stage-1: Map: 8  Reduce: 8   Cumulative CPU: 44.77 sec   HDFS Read: 70685 HDFS Write: 979 SUCCESS
INFO  : Total MapReduce CPU Time Spent: 44 seconds 770 msec
INFO  : Completed executing command(queryId=hive_20180607104040_2eee4dba-43ec-4e49-b559-1548ef9dbb05); Time taken: 30.418 seconds
INFO  : OK
No rows affected (31.262 seconds)
0: jdbc:hive2://demo3.leap.com:2181,demo1.lea> select * from test_cdc;
INFO  : Compiling command(queryId=hive_20180607104141_8986f857-f02e-4c32-854b-d224b3414f04): select * from test_cdc
INFO  : Semantic Analysis Completed
INFO  : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:test_cdc.id, type:int, comment:null), FieldSchema(name:test_cdc.name, type:string, comment:null)], properties:null)
INFO  : Completed compiling command(queryId=hive_20180607104141_8986f857-f02e-4c32-854b-d224b3414f04); Time taken: 0.22 seconds
INFO  : Executing command(queryId=hive_20180607104141_8986f857-f02e-4c32-854b-d224b3414f04): select * from test_cdc
INFO  : Completed executing command(queryId=hive_20180607104141_8986f857-f02e-4c32-854b-d224b3414f04); Time taken: 0.001 seconds
INFO  : OK
+--------------+----------------+--+
| test_cdc.id  | test_cdc.name  |
+--------------+----------------+--+
| 1            | ccc            |
+--------------+----------------+--+
1 row selected (0.415 seconds)
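
The same configuration also enables row-level DELETE, which the transcript above does not demonstrate. An illustrative session (not from the original post):

-- Illustrative only; DELETE follows the same delta-file mechanism as UPDATE
DELETE FROM test_cdc WHERE id = 1;

-- Each UPDATE/DELETE writes delta files that the compactor configured above
-- merges into the base ORC data in the background; progress can be checked with:
SHOW COMPACTIONS;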

