Hive CLI Command-Line Options


Hive Command Line Options


Usage:

  Usage: hive [-hiveconf x=y]* [<-i filename>]* [<-f filename>|<-e query-string>] [-S]

  -i <filename>             Initialization Sql from file (executed automatically and silently before any other commands)
  -e 'quoted query string'  Sql from command line
  -f <filename>             Sql from file
  -S                        Silent mode in interactive shell where only data is emitted
  -v                        Verbose mode (echo executed SQL to the console)
  -p <port>                 connect to Hive Server on port number
  -hiveconf x=y             Use this to set hive/hadoop configuration variables. 
  
   -e and -f cannot be specified together. In the absence of these options, interactive shell is started.  
   However, -i can be used with any other options.  Multiple instances of -i can be used to execute multiple init scripts.

   To see this usage help, run hive -h
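
   Since -i can be combined with the other options, several init scripts can be chained in front of a non-interactive run. A hypothetical sketch (the three .sql filenames are placeholders):

       $HIVE_HOME/bin/hive -i /home/my/common-init.sql -i /home/my/project-init.sql -f /home/my/report.sql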

  • Run a specified SQL statement from the command line
       $HIVE_HOME/bin/hive -e 'select a.col from tab1 a'
       
  • Run a specified SQL statement with the given Hive configuration variables
       $HIVE_HOME/bin/hive -e 'select a.col from tab1 a' -hiveconf hive.exec.scratchdir=/home/my/hive_scratch  -hiveconf mapred.reduce.tasks=32
       
  • Run a specified SQL statement in silent mode and dump the results to a file
       $HIVE_HOME/bin/hive -S -e 'select a.col from tab1 a' > a.txt
       
  • Run a SQL file in non-interactive mode
       $HIVE_HOME/bin/hive -f /home/my/hive-script.sql
       
  • Execute an initialization SQL file before entering interactive mode
       $HIVE_HOME/bin/hive -i /home/my/hive-init.sql
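
       An init script is just ordinary semicolon-terminated HiveQL, which makes it a convenient place for session setup. Hypothetical contents for the hive-init.sql above (the jar path is a placeholder):

       -- /home/my/hive-init.sql (hypothetical)
       set mapred.reduce.tasks=32;
       add JAR /home/my/udfs.jar;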
       

    Hive Interactive Shell Commands

         When the command $HIVE_HOME/bin/hive is run without the -e/-f options, Hive enters interactive mode. Commands are terminated with a semicolon (;), and comments are written by starting a line with (--).
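
         A statement may span several lines in the shell and only runs once the terminating semicolon is entered, for example (a sketch reusing the tab1 table from the examples above):

           hive> select a.col
               > from tab1 a;

         In script files run with -f or -i, full-line comments can document the statements:

           -- hypothetical script: fetch one column from tab1
           select a.col from tab1 a;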

Command                     Description
quit                        Use quit or exit to leave the interactive shell.
set <key>=<value>           Sets the value of a particular configuration variable. Note that if you misspell the variable name, the CLI will not show an error.
set                         Prints the list of configuration variables overridden by the user or Hive.
set -v                      Prints all Hadoop/Hive configuration variables.
add FILE <value> <value>*   Adds a file (or files) to the list of resources.
list FILE                   Lists all the resources already added.
list FILE <value>*          Checks whether the given resources have already been added.
! <cmd>                     Executes a shell command from the Hive shell.
dfs <dfs command>           Executes a dfs command from the Hive shell.
<query string>              Executes a Hive query and prints results to stdout.

Sample Usage:

  hive> set  mapred.reduce.tasks=32;
  hive> set;
  hive> select a.* from tab1 a;
  hive> !ls;
  hive> dfs -ls;

Logging

Hive uses log4j for logging. These logs are not emitted to the standard output by default but are instead captured to a log file specified by Hive's log4j properties file. By default Hive will use hive-log4j.default in the conf/ directory of the Hive installation, which writes out logs to /tmp/<userid>/hive.log and uses the WARN level.

It is often desirable to emit the logs to the standard output and/or change the logging level for debugging purposes. These can be done from the command line as follows:

 
 $HIVE_HOME/bin/hive -hiveconf hive.root.logger=INFO,console 

hive.root.logger specifies the logging level as well as the log destination. Specifying console as the target sends the logs to the standard error (instead of the log file).
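
Because console logging goes to standard error while query results go to standard output, the two streams can be captured separately. A sketch reusing the earlier tab1 query (the log file path is arbitrary):

 $HIVE_HOME/bin/hive -hiveconf hive.root.logger=DEBUG,console -e 'select a.col from tab1 a' > a.txt 2> /tmp/hive-debug.log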

Hive Resources

Hive can manage the addition of resources to a session where those resources need to be made available at query execution time. Any locally accessible file can be added to the session. Once a file is added to a session, a Hive query can refer to it by name (in map/reduce/transform clauses), and the file is available locally at execution time on the entire Hadoop cluster. Hive uses Hadoop's Distributed Cache to distribute the added files to all the machines in the cluster at query execution time.

Usage:

   ADD { FILE[S] | JAR[S] | ARCHIVE[S] } <filepath1> [<filepath2>]*
   LIST { FILE[S] | JAR[S] | ARCHIVE[S] } [<filepath1> <filepath2> ..]
   DELETE { FILE[S] | JAR[S] | ARCHIVE[S] } [<filepath1> <filepath2> ..]
 
  • FILE resources are just added to the distributed cache. Typically, this might be something like a transform script to be executed.
  • JAR resources are also added to the Java classpath. This is required in order to reference objects they contain, such as UDFs (see the sketch after this list).
  • ARCHIVE resources are automatically unarchived as part of distributing them.
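
For example, registering a UDF typically means adding its jar and then creating a temporary function that points at the implementing class. A sketch with a hypothetical jar path and class name:

  hive> add JAR /tmp/my-udfs.jar;
  hive> create temporary function my_lower as 'com.example.hive.udf.MyLower';
  hive> select my_lower(a.col) from tab1 a;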

Example:

  hive> add FILE /tmp/tt.py;
  hive> list FILES;
  /tmp/tt.py
  hive> from networks a  MAP a.networkid USING 'python tt.py' as nn where a.ds = '2009-01-04' limit  10;
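
Resources can also be removed from the session with DELETE, continuing the example above:

  hive> delete FILE /tmp/tt.py;
  hive> list FILES;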
 

It is not necessary to add files to the session if the files used in a transform script are already available on all machines in the Hadoop cluster under the same path name. For example:

  • ... MAP a.networkid USING 'wc -l' ...: here wc is an executable available on all machines
  • ... MAP a.networkid USING '/home/nfsserv1/hadoopscripts/tt.py' ...: here tt.py may be accessible via an NFS mount point that is configured identically on all the cluster nodes.
