Hive basic execution statements (with a MySQL metastore)

This post introduces Hive, a data-warehouse tool for Hadoop, and its basic operations: creating tables and partitions with DDL, and loading data, running queries, and altering or dropping tables with DML. In particular, note that Hive's `SELECT COUNT(*)` runs a MapReduce job.

Hive basics

Hive is a data-warehouse tool built on top of Hadoop. It currently supports only a subset of the SQL query and modification features of a traditional relational database, but it translates that SQL directly into MapReduce programs, so developers do not have to write MR jobs by hand, which improves productivity.

Example: in a Hive deployment that uses MySQL as the metastore, the Hive metadata (databases, tables, column definitions, and so on) is stored in a MySQL database, while the table data itself lives in HDFS, by default under /user/hive/warehouse/ (a database named hive sits in /user/hive/warehouse/hive.db).
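As a sketch, you can inspect that metadata directly in the MySQL client (the metastore database name `metastore` is an assumption here, and the exact schema varies by Hive version):

```sql
-- Run in the MySQL client, not in Hive.
-- DBS and TBLS are standard Hive metastore tables (names may differ by version).
USE metastore;
SELECT NAME, DB_LOCATION_URI FROM DBS;   -- databases and their HDFS locations
SELECT TBL_NAME, TBL_TYPE FROM TBLS;     -- tables and whether they are MANAGED or EXTERNAL
```

Treat this as read-only exploration; editing metastore tables by hand can corrupt Hive's view of the warehouse.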

DDL statements

(With MySQL as the metastore, the structure of the hive database is recorded there.)

Create a table

hive> create table test (id int, name string);

Hive introduces partitions because a plain select generally scans the whole table, which wastes a lot of time; with partitions, a query only has to read the relevant slices.

hive> create table test2 (id int, name string) partitioned by (ds string);
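Each partition value becomes its own subdirectory under the table's HDFS path, so a query that filters on the partition column reads only that directory. A minimal sketch (the partition value is illustrative):

```sql
-- Register a partition; its data will live under
-- /user/hive/warehouse/test2/ds=2014-08-26/ by default.
ALTER TABLE test2 ADD PARTITION (ds='2014-08-26');

-- List the partitions Hive currently knows about.
SHOW PARTITIONS test2;
```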

List tables

hive> show tables;

show tables also accepts a regular expression, giving LIKE-style filtering:

hive> show tables '.*t';

Inspect a table's schema

hive> DESCRIBE test;    -- or the short form: desc test;
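When you need more than the column list, DESCRIBE FORMATTED also reports the table's HDFS location, owner, and storage format:

```sql
-- Extended metadata: location, table type, SerDe, creation time, etc.
DESCRIBE FORMATTED test;
```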

Alter or drop a table

hive> alter table test rename to test3;

hive> alter table test add columns (new_column string comment 'a comment');

hive> drop table test;

DML statements

1. Load data

LOAD DATA LOCAL INPATH '/home/hadoop/test.txt' OVERWRITE INTO TABLE test;

LOCAL means the source file is on the local filesystem; without it, the path is resolved on HDFS. OVERWRITE replaces the table's existing data; without it, the new data is appended.
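For the partitioned table test2 above, a PARTITION clause names the target partition (the path and date value are illustrative):

```sql
-- Load a local file into a single partition of test2.
LOAD DATA LOCAL INPATH '/home/hadoop/test.txt'
OVERWRITE INTO TABLE test2 PARTITION (ds='2014-08-26');
```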

2. Run a query

select * from test2 where test2.ds='2014-08-26';

3. Note that select count(*) from test is unlike an ordinary row count in a relational database: Hive executes it as a MapReduce job.

hive> select count(*) from test2;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=
In order to set a constant number of reducers:
  set mapred.reduce.tasks=
Starting Job = job_1411720827309_0004, Tracking URL = http://master:8031/proxy/application_1411720827309_0004/
Kill Command = /usr/local/cloud/hadoop/bin/hadoop job -kill job_1411720827309_0004
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
Stage-1 map = 0%, reduce = 0%
Stage-1 map = 100%, reduce = 0%, Cumulative CPU 0.93 sec
Stage-1 map = 100%, reduce = 100%, Cumulative CPU 2.3 sec
MapReduce Total cumulative CPU time: 2 seconds 300 msec
Ended Job = job_1411720827309_0004
MapReduce Jobs Launched:
Job 0: Map: 1 Reduce: 1 Cumulative CPU: 2.3 sec HDFS Read: 245 HDFS Write: 2 SUCCESS
Total MapReduce CPU Time Spent: 2 seconds 300 msec
OK
3
Time taken: 27.508 seconds, Fetched: 1 row(s)
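The hints printed in the log can be applied as session settings before running the query; the reducer count below is illustrative (mapred.reduce.tasks is the older Hadoop property name, as shown in the log):

```sql
-- Force a fixed number of reducers for subsequent queries in this session.
set mapred.reduce.tasks=2;
select count(*) from test2;
```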
