mjiang@syvenus:~/program/eclipse/customer/exscript/2012-09-08$ hive -f ca1_1.sql
WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please use org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties files.
Logging initialized using configuration in jar:file:/home/mjiang/hadoop_work/hive-0.9.0/lib/hive-common-0.9.0.jar!/hive-log4j.properties
Hive history file=/tmp/mjiang/hive_job_log_mjiang_201209201444_1896555150.txt
FAILED: Parse Error: line 1:0 character '' not supported here
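The unsupported character at position 1:0 is almost always a UTF-8 byte-order mark (BOM) that an editor silently prepended to the script file. A minimal sketch for detecting and stripping it (`strip_bom` is a hypothetical helper, not part of Hive):

```python
import codecs

def strip_bom(path):
    """Remove a leading UTF-8 BOM from a file, if present.

    Returns True when a BOM was found and stripped, False otherwise."""
    with open(path, "rb") as f:
        data = f.read()
    if data.startswith(codecs.BOM_UTF8):
        with open(path, "wb") as f:
            f.write(data[len(codecs.BOM_UTF8):])
        return True
    return False
```

Running this once over the .sql file before handing it to `hive -f` avoids the parse error without touching the rest of the script.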
The same statement runs fine from the interactive hive CLI, though.
After a lot of experimenting it turned out to be an encoding problem: the script file started with an invisible character, and deleting that first character fixed it. Maddening.

mjiang@syvenus:~/program/eclipse/customer/sql$ hive -f ca1_6.sql
WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please use org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties files.
Logging initialized using configuration in jar:file:/home/mjiang/hadoop_work/hive-0.9.0/lib/hive-common-0.9.0.jar!/hive-log4j.properties
Hive history file=/tmp/mjiang/hive_job_log_mjiang_201209201513_1688812131.txt
Total MapReduce jobs = 3
Launching Job 1 out of 3
Number of reduce tasks not specified. Defaulting to jobconf value of: 10
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
java.net.UnknownHostException: syvenus: syvenus
at java.net.InetAddress.getLocalHost(InetAddress.java:1360)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:874)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:850)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:850)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:824)
at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:435)
at org.apache.hadoop.hive.ql.exec.MapRedTask.execute(MapRedTask.java:137)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:134)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:57)
at org.apache.hadoop.hive.ql.exec.TaskRunner.run(TaskRunner.java:48)
Job Submission failed with exception 'java.net.UnknownHostException(syvenus: syvenus)'
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MapRedTask
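The `java.net.UnknownHostException: syvenus` above means the machine's own hostname `syvenus` cannot be resolved to an IP address, so `InetAddress.getLocalHost()` fails before the job is even submitted. The usual fix is an entry such as `127.0.0.1  syvenus` in `/etc/hosts` (or a proper DNS record). A quick way to check resolution from Python (a sketch; `resolvable` is a hypothetical helper):

```python
import socket

def resolvable(host):
    """Return True if `host` can be resolved to an IP address."""
    try:
        socket.gethostbyname(host)
        return True
    except socket.gaierror:
        return False

# "localhost" should always resolve; "syvenus" will only resolve once
# it is mapped in /etc/hosts or DNS.
```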
> select * from(
> select user_name,1,latitude_id,count(distinct uid),latitude_type,marketing_type
> from (select * from t_latitude_marketing_daily_uid where stat_time<='20120830' )tmp
> where send_time='2012-08-30'
> group by user_name,latitude_id,latitude_type,marketing_type
> union all
> select user_name,2,latitude_id,count(distinct uid),latitude_type,marketing_type
> from (select * from t_latitude_marketing_daily_uid where stat_time<='20120830' )tmp
> where send_time>date_sub('2012-08-30',7)
> group by user_name,latitude_id,latitude_type,marketing_type
> union all
> select user_name,3,latitude_id,count(distinct uid),latitude_type,marketing_type
> from (select * from t_latitude_marketing_daily_uid where stat_time<='20120830' )tmp
> where send_time>date_sub('2012-08-30',15)
> group by user_name,latitude_id,latitude_type,marketing_type
> union all
> select user_name,4,latitude_id,count(distinct uid),latitude_type,marketing_type
> from (select * from t_latitude_marketing_daily_uid where stat_time<='20120830' )tmp
> where send_time>date_sub('2012-08-30',30)
> group by user_name,latitude_id,latitude_type,marketing_type
> union all
> select user_name,5,latitude_id,count(distinct uid),latitude_type,marketing_type
> from (select * from t_latitude_marketing_daily_uid where stat_time<='20120830' )tmp
> where send_time>date_sub('2012-08-30',90)
> group by user_name,latitude_id,latitude_type,marketing_type
> union all
> select user_name,6,latitude_id,count(distinct uid),latitude_type,marketing_type
> from (select * from t_latitude_marketing_daily_uid where stat_time<='20120830' )tmp
> where weekofyear(send_time)=weekofyear('2012-08-30')
> group by user_name,latitude_id,latitude_type,marketing_type
> union all
> select user_name,7,latitude_id,count(distinct uid),latitude_type,marketing_type
> from (select * from t_latitude_marketing_daily_uid where stat_time<='20120830' )tmp
> where month(send_time)=month('2012-08-30')
> group by user_name,latitude_id,latitude_type,marketing_type
> union all
> select user_name,8,latitude_id,count(distinct uid),latitude_type,marketing_type
> from (select * from t_latitude_marketing_daily_uid where stat_time<='20120830' )tmp
> where weekofyear(send_time)=weekofyear('2012-08-30')-1
> group by user_name,latitude_id,latitude_type,marketing_type
> union all
> select user_name,9,latitude_id,count(distinct uid),latitude_type,marketing_type
> from (select * from t_latitude_marketing_daily_uid where stat_time<='20120830' )tmp
> where month(send_time)=month('2012-08-30')-1
> group by user_name,latitude_id,latitude_type,marketing_type
> )tmp;
FAILED: Error in semantic analysis: Unable to fetch table t_latitude_marketing_daily_uid
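(The `Unable to fetch table` failure itself usually means the table does not exist in the current database, e.g. a missing `use <db>;` at the top of the script, or the metastore is unreachable.) For reference, the nine branches of the union count distinct uids over different windows relative to '2012-08-30': 1 = that day only, 2/3/4/5 = trailing 7/15/30/90 days via `date_sub`, 6/7 = same week/month, 8/9 = previous week/month. The trailing-window cutoffs can be sanity-checked outside Hive with a small sketch mirroring Hive's `date_sub`:

```python
from datetime import date, timedelta

def date_sub(d, n):
    """Equivalent of Hive's date_sub(date, n): d minus n days."""
    return d - timedelta(days=n)

base = date(2012, 8, 30)
# branch id -> trailing-window length in days, as used in the query above
windows = {2: 7, 3: 15, 4: 30, 5: 90}
cutoffs = {k: date_sub(base, n).isoformat() for k, n in windows.items()}
```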
[hadoop@master ~]$ hive --service hiveserver
Starting Hive Thrift Server
WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please use org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties files.
org.apache.thrift.transport.TTransportException: Could not create ServerSocket on address 0.0.0.0/0.0.0.0:10000.
at org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:93)
at org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:75)
at org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:68)
at org.apache.hadoop.hive.service.HiveServer.main(HiveServer.java:659)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
[hadoop@master ~]$ netstat -nl | grep 10000
tcp 0 0 :::10000 :::* LISTEN
If this prints a LISTEN line, a Hive server is already running on port 10000, which is why the Thrift server above could not create its ServerSocket.
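Equivalently, the port can be probed programmatically before starting a second server (a sketch; `port_in_use` is a hypothetical helper):

```python
import socket

def port_in_use(host, port):
    """Try a TCP connect; success means something is already listening."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(1)
    try:
        return s.connect_ex((host, port)) == 0
    finally:
        s.close()
```

`port_in_use("localhost", 10000)` returning True here would tell you to reuse the running server (or pick another port) instead of launching a new one.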
transport = TSocket.TSocket('localhost', 10000)
The service has to be started on the machine whose data you want to access: whichever host the Hive server was started on is the host you pass to TSocket.TSocket.
Total MapReduce jobs = 5
WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please use org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties files.
Execution log at: /tmp/hadoop/hadoop_20120831124343_c13598e1-3369-4c69-b79d-4604b5b69c80.log
2012-08-31 12:43:48 Starting to launch local task to process map join; maximum memory = 1048576000
2012-08-31 12:43:50 Processing rows: 200000 Hashtable size: 199999 Memory usage: 112007064 rate: 0.107
2012-08-31 12:43:51 Processing rows: 300000 Hashtable size: 299999 Memory usage: 148708648 rate: 0.142
2012-08-31 12:43:51 Processing rows: 400000 Hashtable size: 399999 Memory usage: 212782568 rate: 0.203
2012-08-31 12:43:52 Processing rows: 500000 Hashtable size: 499999 Memory usage: 257658824 rate: 0.246
2012-08-31 12:43:52 Processing rows: 600000 Hashtable size: 599999 Memory usage: 300206496 rate: 0.286
2012-08-31 12:43:53 Processing rows: 700000 Hashtable size: 699999 Memory usage: 344639904 rate: 0.329
2012-08-31 12:43:53 Processing rows: 800000 Hashtable size: 799999 Memory usage: 421689184 rate: 0.402
2012-08-31 12:43:54 Processing rows: 900000 Hashtable size: 899999 Memory usage: 472521872 rate: 0.451
2012-08-31 12:43:54 Processing rows: 1000000 Hashtable size: 999999 Memory usage: 543292608 rate: 0.518
2012-08-31 12:43:55 Processing rows: 1100000 Hashtable size: 1099999 Memory usage: 588318696 rate: 0.561
2012-08-31 12:43:56 Processing rows: 1200000 Hashtable size: 1199999 Memory usage: 630659480 rate: 0.601
2012-08-31 12:43:56 Processing rows: 1300000 Hashtable size: 1299999 Memory usage: 693285008 rate: 0.661
2012-08-31 12:43:57 Processing rows: 1400000 Hashtable size: 1399999 Memory usage: 739383472 rate: 0.705
2012-08-31 12:43:58 Processing rows: 1500000 Hashtable size: 1499999 Memory usage: 823282872 rate: 0.785
2012-08-31 12:43:59 Processing rows: 1600000 Hashtable size: 1599999 Memory usage: 842248752 rate: 0.803
2012-08-31 12:43:59 Processing rows: 1700000 Hashtable size: 1699999 Memory usage: 925879752 rate: 0.883
2012-08-31 12:44:00 Processing rows: 1741344 Hashtable size: 1741344 Memory usage: 887714296 rate: 0.847
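In this log the local task is loading the small side of a map join into an in-memory hashtable, and the `rate` column is simply current memory usage divided by the maximum reported at the start (1048576000 bytes here). If the rate crosses Hive's limit (governed by `hive.mapjoin.localtask.max.memory.usage`, 0.90 by default), the local task is killed; the 0.847 on the last line means this join only just fit, and a larger table would likely need the limit raised or the automatic map-join conversion disabled via `set hive.auto.convert.join=false;`. The arithmetic behind the column (a sketch):

```python
# "maximum memory" reported when the local task launched
MAX_MEMORY = 1048576000

def memory_rate(usage_bytes, max_bytes=MAX_MEMORY):
    """Reproduce the log's `rate` column: usage / max, rounded to 3 places."""
    return round(usage_bytes / max_bytes, 3)
```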