Hive: basic operations and errors


[jifeng@jifeng02 ~]$ hive

Logging initialized using configuration in jar:file:/home/jifeng/hadoop/hive-0.12.0-bin/lib/hive-common-0.12.0.jar!/hive-log4j.properties
hive> create database test_hive;
OK  -- create a new database
Time taken: 2.249 seconds
hive> create table t1 (key string);
OK  -- create a table
Time taken: 0.434 seconds
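
Note that creating a database does not switch to it: no "use" statement was issued, so t1 above lands in the default database (the "Loading data to table default.t1" line further down confirms this). A minimal sketch of creating the table inside the new database instead:

hive> use test_hive;
hive> create table t1 (key string);
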
hive> show tables;
OK  -- list the tables
t1
Time taken: 0.124 seconds, Fetched: 1 row(s)
hive> load data local inpath '/home/jifeng/hive-cs.txt' into table t1;
Copying data from file:/home/jifeng/hive-cs.txt  -- load data from a local file
Copying file: file:/home/jifeng/hive-cs.txt
Loading data to table default.t1
[Warning] could not update stats.
OK
Time taken: 0.642 seconds
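
With the LOCAL keyword the file is copied from the local filesystem; without it, Hive moves the file from an HDFS path, and adding OVERWRITE replaces whatever is already in the table. An illustrative variant (the HDFS path here is hypothetical):

hive> load data inpath '/user/jifeng/hive-cs.txt' overwrite into table t1;
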
hive> select * from t1
    > ;
OK  -- query the table
jifeng
feng
sohudo
secx
Time taken: 0.19 seconds, Fetched: 4 row(s)
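
A bare select * is served straight from the table's files without launching a MapReduce job; a filtered query looks the same syntactically, for example picking out one of the rows loaded above:

hive> select * from t1 where key = 'jifeng';
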
hive> select count(*) from t1;  -- count the rows in the table
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
Starting Job = job_201408142306_0002, Tracking URL = http://jifeng01:50030/jobdetails.jsp?jobid=job_201408142306_0002
Kill Command = /home/jifeng/hadoop/hadoop-1.2.1/libexec/../bin/hadoop job  -kill job_201408142306_0002
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2015-08-17 15:30:52,429 Stage-1 map = 0%,  reduce = 0%
2015-08-17 15:30:56,477 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.86 sec
2015-08-17 15:30:57,488 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.86 sec
2015-08-17 15:30:58,500 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.86 sec
2015-08-17 15:30:59,508 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.86 sec
2015-08-17 15:31:00,514 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.86 sec
2015-08-17 15:31:01,524 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.86 sec
2015-08-17 15:31:02,530 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.86 sec
2015-08-17 15:31:03,540 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.86 sec
2015-08-17 15:31:04,549 Stage-1 map = 100%,  reduce = 33%, Cumulative CPU 0.86 sec
2015-08-17 15:31:05,560 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 2.3 sec
2015-08-17 15:31:06,570 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 2.3 sec
2015-08-17 15:31:07,582 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 2.3 sec
2015-08-17 15:31:08,591 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 2.3 sec
MapReduce Total cumulative CPU time: 2 seconds 300 msec
Ended Job = job_201408142306_0002
MapReduce Jobs Launched: 
Job 0: Map: 1  Reduce: 1   Cumulative CPU: 2.3 sec   HDFS Read: 233 HDFS Write: 2 SUCCESS
Total MapReduce CPU Time Spent: 2 seconds 300 msec
OK
4
Time taken: 25.351 seconds, Fetched: 1 row(s)
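
The reducer hints printed in the job output can be applied with set before rerunning the query, for example forcing a single reducer:

hive> set mapred.reduce.tasks=1;
hive> select count(*) from t1;
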
hive> drop table t1;  -- drop the table
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:javax.jdo.JDODataStoreException: Iteration request failed : SELECT `A0`.`COMMENT`,`A0`.`COLUMN_NAME`,`A0`.`TYPE_NAME`,`A0`.`INTEGER_IDX` AS NUCORDER0 FROM `COLUMNS_V2` `A0` WHERE `A0`.`CD_ID` = ? AND `A0`.`INTEGER_IDX` >= 0 ORDER BY NUCORDER0
        at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:451)
        at org.datanucleus.api.jdo.JDOPersistenceManager.jdoRetrieve(JDOPersistenceManager.java:610)
        at org.datanucleus.api.jdo.JDOPersistenceManager.retrieve(JDOPersistenceManager.java:622)
        at org.datanucleus.api.jdo.JDOPersistenceManager.retrieve(JDOPersistenceManager.java:631)
        at org.apache.hadoop.hive.metastore.ObjectStore.removeUnusedColumnDescriptor(ObjectStore.java:2293)
        at org.apache.hadoop.hive.metastore.ObjectStore.preDropStorageDescriptor(ObjectStore.java:2321)
        at org.apache.hadoop.hive.metastore.ObjectStore.dropTable(ObjectStore.java:742)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.hive.metastore.RetryingRawStore.invoke(RetryingRawStore.java:124)
        at com.sun.proxy.$Proxy10.dropTable(Unknown Source)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_table_core(HiveMetaStore.java:1192)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_table_with_environment_context(HiveMetaStore.java:1328)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:103)
        at com.sun.proxy.$Proxy11.drop_table_with_environment_context(Unknown Source)
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.dropTable(HiveMetaStoreClient.java:671)
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.dropTable(HiveMetaStoreClient.java:647)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:89)
        at com.sun.proxy.$Proxy12.dropTable(Unknown Source)
        at org.apache.hadoop.hive.ql.metadata.Hive.dropTable(Hive.java:869)
        at org.apache.hadoop.hive.ql.metadata.Hive.dropTable(Hive.java:836)
        at org.apache.hadoop.hive.ql.exec.DDLTask.dropTable(DDLTask.java:3329)
        at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:277)
        at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:151)
        at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:65)
        at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1414)
        at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1192)
        at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1020)
        at org.apache.hadoop.hive.ql.Driver.run(Driver.java:888)
        at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:259)
        at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:216)
        at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:413)
        at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:781)
        at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:675)
        at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:614)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
NestedThrowablesStackTrace:
com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'OPTION SQL_SELECT_LIMIT=DEFAULT' at line 1
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)

Try dropping a different table:

hive> show tables;                                                             
OK
t1
tianq
Time taken: 0.024 seconds, Fetched: 2 row(s)
hive> drop table tianq;
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:javax.jdo.JDODataStoreException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'OPTION SQL_SELECT_LIMIT=DEFAULT' at line 1
        at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:451)
        at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:275)
        at org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:826)
        at org.apache.hadoop.hive.metastore.ObjectStore.dropTable(ObjectStore.java:710)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.hive.metastore.RetryingRawStore.invoke(RetryingRawStore.java:124)
        at com.sun.proxy.$Proxy10.dropTable(Unknown Source)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_table_core(HiveMetaStore.java:1192)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_table_with_environment_context(HiveMetaStore.java:1328)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:103)
        at com.sun.proxy.$Proxy11.drop_table_with_environment_context(Unknown Source)
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.dropTable(HiveMetaStoreClient.java:671)
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.dropTable(HiveMetaStoreClient.java:647)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:89)
        at com.sun.proxy.$Proxy12.dropTable(Unknown Source)
        at org.apache.hadoop.hive.ql.metadata.Hive.dropTable(Hive.java:869)
        at org.apache.hadoop.hive.ql.metadata.Hive.dropTable(Hive.java:836)
        at org.apache.hadoop.hive.ql.exec.DDLTask.dropTable(DDLTask.java:3329)
        at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:277)
        at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:151)
        at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:65)
        at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1414)
        at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1192)
        at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1020)
        at org.apache.hadoop.hive.ql.Driver.run(Driver.java:888)
        at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:259)
        at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:216)
        at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:413)
        at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:781)
        at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:675)
        at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:614)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
NestedThrowablesStackTrace:
com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'OPTION SQL_SELECT_LIMIT=DEFAULT' at line 1
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)

hive> create table tianqi(year int,month int,day int,quen int,wentu int,wentu2 int,msl int)row format delimited fields terminated by ' ';
OK
Time taken: 0.502 seconds
hive> load data local inpath '/home/jifeng/hadoop/sample.txt' into table tianqi;         
Copying data from file:/home/jifeng/hadoop/sample.txt
Copying file: file:/home/jifeng/hadoop/sample.txt
Loading data to table default.tianqi
[Warning] could not update stats.
OK
Time taken: 0.429 seconds
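
Because the table was declared with fields terminated by ' ', each line of sample.txt is split on single spaces and every field is cast to int; fields that are missing or fail the cast come back as NULL. The resulting schema can be checked from the CLI:

hive> describe tianqi;
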
hive> select max(quen) from tianqi;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
Starting Job = job_201408142306_0005, Tracking URL = http://jifeng01:50030/jobdetails.jsp?jobid=job_201408142306_0005
Kill Command = /home/jifeng/hadoop/hadoop-1.2.1/libexec/../bin/hadoop job  -kill job_201408142306_0005
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2015-08-18 10:20:11,082 Stage-1 map = 0%,  reduce = 0%
2015-08-18 10:20:15,103 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.98 sec
2015-08-18 10:20:16,108 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.98 sec
2015-08-18 10:20:17,115 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.98 sec
2015-08-18 10:20:18,121 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.98 sec
2015-08-18 10:20:19,127 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.98 sec
2015-08-18 10:20:20,133 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.98 sec
2015-08-18 10:20:21,140 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.98 sec
2015-08-18 10:20:22,146 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.98 sec
2015-08-18 10:20:23,152 Stage-1 map = 100%,  reduce = 33%, Cumulative CPU 0.98 sec
2015-08-18 10:20:24,159 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 2.38 sec
2015-08-18 10:20:25,168 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 2.38 sec
2015-08-18 10:20:26,173 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 2.38 sec
2015-08-18 10:20:27,183 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 2.38 sec
MapReduce Total cumulative CPU time: 2 seconds 380 msec
Ended Job = job_201408142306_0005
MapReduce Jobs Launched: 
Job 0: Map: 1  Reduce: 1   Cumulative CPU: 2.38 sec   HDFS Read: 541596 HDFS Write: 3 SUCCESS
Total MapReduce CPU Time Spent: 2 seconds 380 msec
OK
23
Time taken: 21.565 seconds, Fetched: 1 row(s)
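
The same aggregation can be grouped by one of the declared columns, for instance an illustrative per-year maximum:

hive> select year, max(quen) from tianqi group by year;
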
hive> 

Dropping a table works fine when the metastore is backed by the embedded Derby database, which points the problem at the MySQL metastore: the "SET OPTION SQL_SELECT_LIMIT=DEFAULT" statement issued by older mysql-connector-java drivers was removed from newer MySQL server versions, so an outdated JDBC connector jar in Hive's lib directory is a likely cause of the errors above.

hive> CREATE TABLE lxw_t (user_id string,  class string,  score int   );  
OK
Time taken: 0.038 seconds

hive> drop table lxw_t;
OK
Time taken: 1.268 seconds
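
To confirm which metastore a given CLI session is actually pointed at, the JDBC connection URL can be printed from inside Hive:

hive> set javax.jdo.option.ConnectionURL;
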
hive> 

