Hive table is locked and cannot be dropped

If a Hive script is killed mid-query for reasons beyond your control, you may find that you can no longer drop the tables that the query touched.
You can check the table's lock state with the command show locks tablename;

hive> show locks days_album_click_play;
OK
pb_log@days_album_click_play    EXCLUSIVE

As you can see, the table holds an EXCLUSIVE (exclusive write) lock, which is why it cannot be dropped.
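
Depending on the lock manager in use (with hive.support.concurrency=true it is typically the ZooKeeper-based ZooKeeperHiveLockManager), you can also try releasing the lock directly from the Hive CLI first. Treat this as an optional attempt rather than a guaranteed fix: the statement may be rejected, or the lock may simply be re-acquired by the still-running session.

hive> unlock table days_album_click_play;

If the lock comes back, or unlock table is not supported by your configuration, fall back to the steps below.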

Solution:
1. Run jps to list the Java processes

[root@10.10.121.58 common]# jps
48828 RunJar
22349 RunJar
37087 -- process information unavailable
33123 Jps

You will notice two RunJar processes. Process 37087 is actually a DataNode: since I am running Cloudera and HDFS is started as the hdfs user, the root user cannot see its real process name. The two RunJar processes are the ones we need to look at more closely (a quick check is sketched below).
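
A quick way to confirm this (a sketch; the exact user name and whether jps is on that user's PATH depend on your installation) is to rerun jps as the hdfs user, which can read its own JVM perf data, or to grep the full command lines of the two RunJar PIDs:

[root@10.10.121.58 common]# sudo -u hdfs jps
[root@10.10.121.58 common]# ps -ef | egrep '48828|22349' | grep -v grep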

2. Run ps -aux | grep 48828 (the dash triggers the procps warning in the output below; BSD-style ps aux works too)

[root@10.10.121.58 common]# ps -aux |grep 48828
Warning: bad syntax, perhaps a bogus '-'? See /usr/share/doc/procps-3.2.8/FAQ
root     33190  0.0  0.0 103248   860 pts/0    S+   07:51   0:00 grep 48828
root     48828 99.6  0.2 926760 286268 ?       Sl   Apr02 967:42 /opt/soft/jdk1.7/bin/java -Xmx256m -Dhadoop.log.dir=/opt/soft/BI/cloudera/cm/cm5.3.1/cloudera/parcels/CDH-5.3.1-1.cdh5.3.1.p0.5/lib/hadoop/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/opt/soft/BI/cloudera/cm/cm5.3.1/cloudera/parcels/CDH-5.3.1-1.cdh5.3.1.p0.5/lib/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,console -Djava.library.path=/opt/soft/BI/cloudera/cm/cm5.3.1/cloudera/parcels/CDH-5.3.1-1.cdh5.3.1.p0.5/lib/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.util.RunJar /opt/soft/BI/cloudera/cm/cm5.3.1/cloudera/parcels/CDH-5.3.1-1.cdh5.3.1.p0.5/bin/../lib/hive/lib/hive-cli-0.13.1-cdh5.3.1.jar org.apache.hadoop.hive.cli.CliDriver -e use pb_log;?insert overwrite table days_album_click_play?select '2015-04-01', t1.partner_code, t1.album_id, t1.src_id,?  if(sum(t1.click_users) is null, 0, sum(t1.click_users)) as click_users,?  if(sum(t1.click_times) is null, 0, sum(t1.click_times)) as click_times,?  if(sum(t1.play_users) is null, 0, sum(t1.play_users)) as play_users,?  if(sum(t1.play_times) is null, 0, sum(t1.play_times)) as play_times,?  if(sum(t1.total_length) is null, 0, sum(t1.total_length)) as total_length?  from (?   select partner_code,album_id,src_id,0 as click_users, 0 as click_times, count(distinct dev_mac) as play_users, count(dev_mac) as play_times, sum (play_length) as total_length?   from gitv_log_original where logdate='2015-04-01'  and action = 5 and  play_type in (1,2) and block_type=0 group by partner_code,album_id,src_id?   union all?   select partner_code,album_id,src_id,count(distinct dev_mac) as click_users, count(dev_mac) as click_times, 0 as play_users, 0 as play_times, 0 as total_length?   from gitv_log_original where logdate='2015-04-01' and action = 3 group by partner_code,album_id,src_id          ?   )t1 group by t1.partner_code, t1.album_id, t1.src_id;

You can see that this process is running an HQL query and is in state Sl, i.e. interruptible sleep (the trailing l means the JVM is multi-threaded).
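
The embedded -e query and the STAT column are what identify this RunJar as the orphaned Hive CLI session. If the long ps line is unwieldy, the same command line can be read straight from procfs (a sketch, assuming Linux; the NUL separators are replaced with spaces for readability):

[root@10.10.121.58 common]# tr '\0' ' ' < /proc/48828/cmdline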

3. Run kill -9 48828

[root@10.10.121.58 common]# kill -9 48828

This kills the process. The second RunJar is handled in exactly the same way, so I won't repeat it here (a batch variant is sketched below).
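
If several orphaned Hive CLI sessions are holding locks, the matching PIDs can be killed in one pass. This is only a sketch: the pattern matches only the CLI driver, but other Hive daemons (HiveServer2, the metastore) also appear as RunJar, so verify the PID list before killing anything:

[root@10.10.121.58 common]# ps -ef | grep 'org.apache.hadoop.hive.cli.CliDriver' | grep -v grep | awk '{print $2}' | xargs -r kill -9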

4. Now the table can be dropped

hive> drop table days_album_click_play;
OK
Time taken: 0.162 seconds
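
To confirm the lock really is gone before rerunning the original job, issue show locks once more; with the holding session killed, the table should no longer report any lock entries:

hive> show locks days_album_click_play;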