Hive Basic Execution Statements

Hive in brief

Hive is a data-warehouse tool built on top of Hadoop. At present it supports only simple query and modification operations, similar to those of a traditional relational database. It translates SQL directly into MapReduce programs, so developers do not have to write MR jobs by hand, which raises development efficiency.

Example: in a Hive deployment that uses MySQL as the metastore, the Hive metadata (table definitions, column attributes, and so on) is stored in the MySQL database, while the table data itself is stored in HDFS, by default under /user/hive/warehouse/ (for example /user/hive/warehouse/hive.db for a database named hive).
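You can verify this split directly. A quick check, assuming the default warehouse path and a MySQL metastore database named hive (TBLS is one of the standard metastore tables):

mysql> use hive;
mysql> select TBL_NAME, TBL_TYPE from TBLS;   -- table definitions (the metadata) live here

$ hadoop fs -ls /user/hive/warehouse/   # table data directories live here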

DDL statements

(Figure: structure of the hive metastore database when MySQL is used as the metadata store)


Creating a table

hive> create table test (id int, name string);

Partitions exist because a SELECT in Hive normally scans the entire table, which wastes a lot of time; partitioning lets a query read only the relevant slice of the data.

hive> create table test2 (id int, name string) partitioned by (ds string);
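The partition column ds behaves like an ordinary column in queries, but on HDFS each partition gets its own subdirectory (for example /user/hive/warehouse/test2/ds=2014-08-26/ under the default warehouse path), so a filter on ds only touches the matching directories. Existing partitions can be listed directly:

hive> show partitions test2;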

Listing tables

hive> show tables;

A regular expression can be supplied for LIKE-style filtering:

hive> show tables '.*t';

Viewing a table's structure

hive> DESCRIBE test;   (or the short form: desc test;)
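Recent Hive versions also offer a more verbose form that prints the table's HDFS location, owner, creation time, and other properties:

hive> describe formatted test;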

Altering or dropping a table

hive> alter table test rename to test3;

hive> alter table test3 add columns (new_column string comment 'a new column');

hive> drop table test3;
DML statements

1. Loading data

hive> LOAD DATA LOCAL INPATH '/home/hadoop/test.txt' OVERWRITE INTO TABLE test;

LOCAL means the file is read from the local filesystem; if it is omitted, the path is looked up on HDFS. OVERWRITE replaces the table's existing data; if it is omitted, the new data is appended.
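For a partitioned table such as test2 above, the target partition must be named explicitly in the LOAD statement. A sketch of the common variants (the file paths are just examples):

-- read from HDFS (no LOCAL) and append (no OVERWRITE):
hive> LOAD DATA INPATH '/user/hadoop/test.txt' INTO TABLE test;

-- load a file into one partition of test2:
hive> LOAD DATA LOCAL INPATH '/home/hadoop/test2.txt' OVERWRITE INTO TABLE test2 PARTITION (ds='2014-08-26');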

2. Running queries

select * from test2 where test2.ds='2014-08-26';

Because ds is the partition column, this query reads only the ds=2014-08-26 partition instead of scanning the whole table.

3. Note that select count(*) from test2 is not like the same query in an ordinary relational database: Hive executes it as a MapReduce job.

hive> select count(*) from test2;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Starting Job = job_1411720827309_0004, Tracking URL = http://master:8031/proxy/application_1411720827309_0004/
Kill Command = /usr/local/cloud/hadoop/bin/hadoop job -kill job_1411720827309_0004
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
Stage-1 map = 0%, reduce = 0%
Stage-1 map = 100%, reduce = 0%, Cumulative CPU 0.93 sec
Stage-1 map = 100%, reduce = 100%, Cumulative CPU 2.3 sec
MapReduce Total cumulative CPU time: 2 seconds 300 msec
Ended Job = job_1411720827309_0004
MapReduce Jobs Launched:
Job 0: Map: 1 Reduce: 1 Cumulative CPU: 2.3 sec HDFS Read: 245 HDFS Write: 2 SUCCESS
Total MapReduce CPU Time Spent: 2 seconds 300 msec
OK
3
Time taken: 27.508 seconds, Fetched: 1 row(s)





Reposted from: https://my.oschina.net/u/1169079/blog/321109

Appendix: installing Hive with a MySQL metastore

1. Upload the tar package.
2. Extract it: tar -zxvf hive-1.2.1.tar.gz
3. Install a MySQL database (yum online installation is recommended).
4. Configure Hive:
(a) Set the HIVE_HOME environment variable: vi conf/hive-env.sh and set $HADOOP_HOME in it.
(b) Configure the metastore connection: vi hive-site.xml and add:

<configuration>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true</value>
<description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
<description>Driver class name for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>root</value>
<description>username to use against metastore database</description>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>hadoop</value>
<description>password to use against metastore database</description>
</property>
</configuration>

5. After Hive and MySQL are installed, copy the MySQL connector jar into $HIVE_HOME/lib. If you run into a permissions problem, grant privileges in MySQL (run this on the machine where MySQL is installed):

mysql -uroot -p
# *.* means all tables in all databases; % means connections from any IP or host
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'root' WITH GRANT OPTION;
FLUSH PRIVILEGES;

6. To fix the JLine version mismatch, copy jline-2.12.jar from Hive's lib directory over Hadoop's /home/hadoop/app/hadoop-2.6.4/share/hadoop/yarn/lib/jline-0.9.94.jar.

Start Hive: bin/hive

Ways to use Hive:

1. The interactive shell: bin/hive
2. The Hive JDBC service (analogous to connecting to MySQL over Java JDBC).
3. Run Hive as a server so it can serve remote clients:

bin/hiveserver2
nohup bin/hiveserver2 1>/var/log/hiveserver.log 2>/var/log/hiveserver.err &

Once it is up, connect from another node with beeline:

bin/beeline -u jdbc:hive2://mini1:10000 -n root

or

bin/beeline
beeline> !connect jdbc:hive2://mini1:10000

4. One-off commands with hive -e 'sql':

bin/hive -e 'select * from t_test'
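A script file can be run the same way with -f instead of -e (the file name here is just an example):

bin/hive -f /home/hadoop/test.sql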