[hadoop] Hive Syntax (Part 12)

Original post, 2016-06-01 09:05:48

I. Creating a database

create database hive;

II. Creating tables

  1. Create the employee table
    create table t_emp (
    id int,
    name string,
    age int,
    dept_name string
    )
    ROW FORMAT DELIMITED
       FIELDS TERMINATED BY ',';

  2. Create the employee data file emp.txt
    1,张三,30,销售部
    2,李四,31,市场部
    3,王五,32,研发部
    4,孙六,33,行政部
  3. Load the data
    LOAD DATA [LOCAL] INPATH 'filepath' [OVERWRITE] INTO TABLE tablename [PARTITION (partcol1=val1, partcol2=val2 ...)]
    LOAD DATA LOCAL INPATH '/root/emp.txt' INTO TABLE t_emp;
  4. Count the employees
    hive> select count(*) from t_emp;
    Query ID = root_20160531150116_fa9fcc80-eb98-4e84-ba50-646e4e56d9aa
    Total jobs = 1
    Launching Job 1 out of 1
    Number of reduce tasks determined at compile time: 1
    In order to change the average load for a reducer (in bytes):
      set hive.exec.reducers.bytes.per.reducer=<number>
    In order to limit the maximum number of reducers:
      set hive.exec.reducers.max=<number>
    In order to set a constant number of reducers:
      set mapreduce.job.reduces=<number>
    Starting Job = job_1464666884311_0002, Tracking URL = http://node1:8088/proxy/application_1464666884311_0002/
    Kill Command = /home/hadoop-2.5/bin/hadoop job  -kill job_1464666884311_0002
    Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
    2016-05-31 15:03:09,162 Stage-1 map = 0%,  reduce = 0%
    2016-05-31 15:04:09,510 Stage-1 map = 0%,  reduce = 0%
    2016-05-31 15:04:39,292 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 16.43 sec
    2016-05-31 15:05:39,300 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 16.43 sec
    2016-05-31 15:05:46,423 Stage-1 map = 100%,  reduce = 67%, Cumulative CPU 19.93 sec
    2016-05-31 15:05:49,925 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 23.47 sec
    MapReduce Total cumulative CPU time: 23 seconds 470 msec
    Ended Job = job_1464666884311_0002
    MapReduce Jobs Launched: 
    Stage-Stage-1: Map: 1  Reduce: 1   Cumulative CPU: 25.84 sec   HDFS Read: 6793 HDFS Write: 2 SUCCESS
    Total MapReduce CPU Time Spent: 25 seconds 840 msec
    OK
    4
    Time taken: 281.847 seconds, Fetched: 1 row(s)

III. Data types
Reference: https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Types
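As a quick illustration of several common Hive column types from that reference (a hypothetical sketch; the table and column names are invented for demonstration):

```sql
-- Hypothetical table exercising common Hive primitive and complex types
CREATE TABLE t_types_demo (
  id      INT,
  score   DOUBLE,
  salary  DECIMAL(10,2),
  name    STRING,
  hired   TIMESTAMP,
  active  BOOLEAN,
  tags    ARRAY<STRING>,
  props   MAP<STRING,STRING>
);
```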


IV. Loading data with LOAD

  1. LOAD DATA [LOCAL] INPATH 'filepath' [OVERWRITE] INTO TABLE tablename [PARTITION (partcol1=val1, partcol2=val2 ...)]
    LOCAL: load from the local (client) filesystem instead of HDFS
    filepath: a relative path such as project/data1, an absolute path such as /user/hive/project/data1, or a full URL such as hdfs://namenode:9000/user/hive/project/data1
    OVERWRITE: replace the existing data in the table (or partition) instead of appending to it
    PARTITION: the target partition; each partition is stored as a separate directory named after the partition column value
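Putting the options together, a hedged sketch (the partitioned table t_emp_part and the partition value are examples only, not from the post above):

```sql
-- Replace the 2016-05 partition of a hypothetical partitioned table
-- with a file read from the client's local filesystem (LOCAL)
LOAD DATA LOCAL INPATH '/root/emp.txt'
OVERWRITE INTO TABLE t_emp_part
PARTITION (month='2016-05');
```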
V. Inserting data with INSERT
Standard syntax:
INSERT OVERWRITE TABLE tablename1 [PARTITION (partcol1=val1, partcol2=val2 ...) [IF NOT EXISTS]] select_statement1 FROM from_statement;
INSERT INTO TABLE tablename1 [PARTITION (partcol1=val1, partcol2=val2 ...)] select_statement1 FROM from_statement;
 
Hive extension (multiple inserts):
FROM from_statement
INSERT OVERWRITE TABLE tablename1 [PARTITION (partcol1=val1, partcol2=val2 ...) [IF NOT EXISTS]] select_statement1
[INSERT OVERWRITE TABLE tablename2 [PARTITION ... [IF NOT EXISTS]] select_statement2]
[INSERT INTO TABLE tablename2 [PARTITION ...] select_statement2] ...;
FROM from_statement
INSERT INTO TABLE tablename1 [PARTITION (partcol1=val1, partcol2=val2 ...)] select_statement1
[INSERT INTO TABLE tablename2 [PARTITION ...] select_statement2]
[INSERT OVERWRITE TABLE tablename2 [PARTITION ... [IF NOT EXISTS]] select_statement2] ...;
 
Hive extension (dynamic partition inserts):
INSERT OVERWRITE TABLE tablename PARTITION (partcol1[=val1], partcol2[=val2] ...) select_statement FROM from_statement;
INSERT INTO TABLE tablename PARTITION (partcol1[=val1], partcol2[=val2] ...) select_statement FROM from_statement;
Examples:
  1. hive> insert into table t_emp values (5,'xiaoming',12,'hr'),(7,'xiaohong',13,'manager');
  2. Create a second table
    hive> create table t_emp_1 (
        > name varchar(50),
        > dname varchar(50));
    Insert data into t_emp_1:
    hive> insert into t_emp_1 select name, dept_name from t_emp group by name, dept_name;
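The multi-insert form shown above scans the source table once while feeding several targets. A sketch using t_emp_1 plus a hypothetical second table t_emp_2 (not created in this post):

```sql
-- One scan of t_emp feeding two targets; t_emp_2 is hypothetical
FROM t_emp
INSERT INTO TABLE t_emp_1 SELECT name, dept_name
INSERT OVERWRITE TABLE t_emp_2 SELECT name, dept_name WHERE age > 30;
```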
VI. Exporting with EXPORT
EXPORT TABLE tablename [PARTITION (part_column="value"[, ...])]
  TO 'export_target_path' [ FOR replication('eventid') ]
hive> export table t_emp to '/usr/file/emp.txt';


VII. Importing with IMPORT
IMPORT [[EXTERNAL] TABLE new_or_original_tablename [PARTITION (part_column="value"[, ...])]]
  FROM 'source_path'
  [LOCATION 'import_target_path']
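Restoring the export from section VI might look like the following (the new table name t_emp_copy is illustrative; IMPORT creates it from the exported metadata and data):

```sql
-- Re-create the exported table under a new name from the export path
IMPORT TABLE t_emp_copy FROM '/usr/file/emp.txt';
```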


Official Hive DML documentation:
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DML#LanguageManualDML-InsertingdataintoHiveTablesfromqueries


Copyright notice: this is an original post by the author; do not repost without permission.
