Hive Knowledge Summary

Hive

1. Introduction to Hive

What Is Hive

  1. Hive was developed and open-sourced by Facebook
  2. It is a data warehouse tool built on top of Hadoop
  3. It maps structured data files onto database tables
  4. It provides HQL (Hive SQL) query capabilities
  5. The underlying data is stored on HDFS
  6. In essence, Hive translates SQL into MapReduce jobs for execution
  7. Users unfamiliar with MapReduce can conveniently use HQL to process and analyze structured data on HDFS; Hive is suited to offline batch computation

A data warehouse is a subject-oriented, integrated, relatively stable collection of data that reflects historical change and is used to support management decision-making.

Hive relies on HDFS to store its data and translates HQL into MapReduce jobs for execution. In other words, Hive is a data warehouse tool built on Hadoop: in essence a MapReduce computation framework over HDFS, used to analyze and manage the data stored there.

Why Use Hive

Problems with using MapReduce directly:

  • The learning curve for developers is steep
  • Project timelines are short
  • Implementing complex query logic in MapReduce is very difficult

Why Hive helps:

  • Friendlier interface: the operational interface uses SQL-like syntax, enabling rapid development
  • Lower learning cost: developers avoid writing MapReduce code directly, which reduces the learning burden
  • Better scalability: the cluster can be scaled freely without restarting services, and user-defined functions are supported

Characteristics of Hive

Advantages:

  • Scalability: a Hive cluster can be scaled freely, normally without restarting services. Horizontal scaling: add nodes to spread the load. Vertical scaling: add threads, memory, and so on
  • Extensibility: Hive supports user-defined functions, so users can implement functions to meet their own needs
  • Good fault tolerance: a SQL statement can still finish executing even if some nodes fail

Disadvantages:

  • Hive does not support record-level insert, update, or delete; instead, users can generate new tables from queries or export query results to files
  • Query latency is high: starting MapReduce jobs takes a long time, so Hive cannot be used for interactive query systems
  • Hive does not support transactions

Hive vs. RDBMS

Summary: Hive has the outward appearance of a SQL database, but the use cases are completely different. Hive is only suitable for offline statistical analysis of massive data sets, that is, data warehousing.

2. Hive Architecture

Hive's internal architecture consists of four parts:

User interfaces:

shell/CLI, JDBC/ODBC, Web UI

CLI: the shell command line (Command Line Interface), which interacts with Hive by issuing Hive commands; commonly used for learning, debugging, and production

JDBC/ODBC: Hive's JDBC-based client; users (developers, operations staff) connect through it to the HiveServer service

Web UI: access Hive through a browser

Cross-language service:

Thrift Server makes it possible to operate Hive from many different languages

Thrift is a software framework developed by Facebook for scalable, cross-language service development. Hive integrates it so that different programming languages can call Hive's interfaces.

The underlying Driver:

driver, compiler, optimizer, executor

The Driver component takes an HQL query through lexical analysis, syntax analysis, compilation, and optimization, and produces a logical execution plan. The generated plan is stored in HDFS and subsequently executed by MapReduce.

The core of Hive is the driver engine, which consists of four parts:

  1. Interpreter: converts Hive SQL into an abstract syntax tree
  2. Compiler: compiles the syntax tree into a logical execution plan
  3. Optimizer: optimizes the logical execution plan
  4. Executor: calls the underlying execution framework to run the logical plan

Metadata storage

Metadata, plainly put, is the descriptive information about the data stored in Hive.

Hive metadata typically includes: table names, columns and partitions and their properties, table properties (managed vs. external), and the directory where each table's data lives.

By default the Metastore uses the embedded Derby database. The drawbacks are that it does not support multiple concurrent users and its storage directory is not fixed; the database follows the Hive process around, which makes it very inconvenient to manage.

Solution: store the metadata in a local or remote MySQL database.

Hive and MySQL interact through the Metastore service.

Execution flow

An HQL statement is submitted through the command line or a client and passes through the Compiler, which uses the metadata in the Metastore for type checking and syntax analysis, produces a logical plan (Logical Plan), optimizes it, and finally generates a MapReduce job.

3. Hive Data Organization

  1. Hive's storage structures include databases, tables, views, partitions, and table data. Databases, tables, partitions, and so on each correspond to a directory on HDFS; table data corresponds to files under the table's directory.

  2. All Hive data is stored on HDFS. There is no dedicated data storage format, because Hive is schema-on-read and supports many formats.

  3. You only need to tell Hive the column delimiter and row delimiter of the data when creating the table, and Hive can then parse the data.

    1. Hive's default column delimiter: the control character Ctrl+A
    2. Hive's default row delimiter: the newline character \n
  4. Hive includes the following data models:

    1. database: appears on HDFS as a folder under the ${hive.metastore.warehouse.dir} directory
    2. table: appears on HDFS as a folder under the owning database's directory
    3. external table: like table, except its data can live at any specified HDFS path
    4. partition: appears on HDFS as a subdirectory under the table directory
    5. bucket: appears on HDFS as multiple files under the table (or partition) directory, produced by hashing the values of some column
    6. view: similar to views in traditional databases; read-only, built on base tables
  5. Hive metadata is stored in an RDBMS; all other data is stored on HDFS. By default the metadata lives in the embedded Derby database, which allows only one session at a time and is suitable only for simple testing, not for production. To support multi-user sessions, a standalone metadata database is needed; MySQL is usually used, and Hive supports it well.

  6. Hive tables are divided into managed (internal) tables, external tables, partitioned tables, and bucketed tables.

Differences between managed and external tables

  • Dropping a managed (internal) table deletes both the table metadata and the data
  • Dropping an external table deletes only the metadata; the data is kept

Choosing between managed and external tables

In most cases the difference between managed and external tables is not significant. If all processing of the data happens inside Hive, a managed table is the more natural choice; if Hive and other tools need to work against the same data set, an external table is more appropriate.

A common pattern is to use an external table to access the raw data on HDFS, then transform the data with Hive and store the result in a managed table, as sketched below.
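A minimal sketch of that pattern, assuming raw CSV files already sit under an illustrative HDFS path /data/raw/students; the table names and path are made up for illustration.

create external table student_raw(id int, name string, age int)
row format delimited fields terminated by ','
location '/data/raw/students';          -- dropping this table later leaves the files in place

create table student_clean(id int, name string, age int);

insert overwrite table student_clean    -- transform in Hive, store in the managed table
select id, trim(name), age
from student_raw
where id is not null;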

Another scenario for external tables is when a single data set needs to be accessed through several different schemas.

The comparison above shows that Hive really only provides a new abstraction over data stored on HDFS rather than managing that data itself, so regardless of whether a table is managed or external, files can be added to or removed from the table's storage directory directly.

Differences between partitioned and bucketed tables:

Hive data can be partitioned on certain columns, which refines data management and can make some queries faster. Tables and partitions can be further divided into buckets; bucketing works on the same principle as HashPartitioner in MapReduce.

Both partitioning and bucketing refine data management, but partitions are added manually, and because Hive is schema-on-read it does not validate the data loaded into a partition. A bucketed table's data is split into multiple files by hashing on the bucketing columns, so its placement is much more reliable. A minimal bucketed-table workflow is sketched below.
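A sketch of loading a bucketed table, assuming a source table student_src with matching columns already exists; the table and column names are illustrative. On older Hive versions the hive.enforce.bucketing flag must be set so that inserts actually produce one file per bucket.

create table student_bucketed(id int, name string, age int)
clustered by (id) into 4 buckets
row format delimited fields terminated by ',';

set hive.enforce.bucketing = true;      -- needed before Hive 2.x so inserts honor the bucket definition

insert overwrite table student_bucketed
select id, name, age from student_src;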

4. Basic Usage of Hive

Getting started

Suppose we have a file student.txt that we want to store in Hive. Its contents are as follows:

95002,刘晨,女,19,IS
95017,王风娟,女,18,IS
95018,王一,女,19,IS
95013,冯伟,男,21,CS
95014,王小丽,女,19,CS
95019,邢小丽,女,19,IS
95020,赵钱,男,21,IS
95003,王敏,女,22,MA
95004,张立,男,19,IS
95012,孙花,女,20,CS
95010,孔小涛,男,19,CS
95005,刘刚,男,18,MA
95006,孙庆,男,23,CS
95007,易思玲,女,19,MA
95008,李娜,女,18,CS
95021,周二,男,17,MA
95022,郑明,男,20,MA
95001,李勇,男,20,CS
95011,包小柏,男,18,MA
95009,梦圆圆,女,18,MA
95015,王君,男,18,MA

Step 1: Create a database named myhive

hive> create database myhive;
OK
Time taken: 7.847 seconds
hive>

Step 2: Switch to the new database myhive

hive> use myhive;
OK
Time taken: 0.047 seconds
hive> 

Step 3: Check which database is currently in use

hive> select current_database();
OK
myhive
Time taken: 0.728 seconds, Fetched: 1 row(s)
hive>

Step 4: Create a student table in the myhive database

hive> create table student(id int, name string, sex string, age int, department string) row format delimited fields terminated by ",";
OK
Time taken: 0.718 seconds
hive> 

Step 5: Load data into the table

hive> load data local inpath "/home/hadoop/student.txt" into table student;
Loading data to table myhive.student
OK
Time taken: 1.854 seconds
hive>

Step 6: Query the data

hive> select * from student;
OK
95002    刘晨    女    19    IS
95017    王风娟    女    18    IS
95018    王一    女    19    IS
95013    冯伟    男    21    CS
95014    王小丽    女    19    CS
95019    邢小丽    女    19    IS
95020    赵钱    男    21    IS
95003    王敏    女    22    MA
95004    张立    男    19    IS
95012    孙花    女    20    CS
95010    孔小涛    男    19    CS
95005    刘刚    男    18    MA
95006    孙庆    男    23    CS
95007    易思玲    女    19    MA
95008    李娜    女    18    CS
95021    周二    男    17    MA
95022    郑明    男    20    MA
95001    李勇    男    20    CS
95011    包小柏    男    18    MA
95009    梦圆圆    女    18    MA
95015    王君    男    18    MA
Time taken: 2.455 seconds, Fetched: 21 row(s)
hive>

Step 7: Inspect the table structure

All three of the following commands show the table's details:

hive> desc student;
OK
id                      int                                     
name                    string                                  
sex                     string                                  
age                     int                                     
department              string                                  
Time taken: 0.102 seconds, Fetched: 5 row(s)
hive>
hive> desc extended student;
OK
id                      int                                     
name                    string                                  
sex                     string                                  
age                     int                                     
department              string                                  
      
Detailed Table Information    Table(tableName:student, dbName:myhive, owner:hadoop, createTime:1522750487, lastAccessTime:0, retention:0, sd:StorageDescriptor(cols:[FieldSchema(name:id, type:int, comment:null), FieldSchema(name:name, type:string, comment:null), FieldSchema(name:sex, type:string, comment:null), FieldSchema(name:age, type:int, comment:null), FieldSchema(name:department, type:string, comment:null)], location:hdfs://myha01/user/hive/warehouse/myhive.db/student, inputFormat:org.apache.hadoop.mapred.TextInputFormat, outputFormat:org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat, compressed:false, numBuckets:-1, serdeInfo:SerDeInfo(name:null, serializationLib:org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, parameters:{serialization.format=,, field.delim=,}), bucketCols:[], sortCols:[], parameters:{}, skewedInfo:SkewedInfo(skewedColNames:[], skewedColValues:[], skewedColValueLocationMaps:{}), storedAsSubDirectories:false), partitionKeys:[], parameters:{transient_lastDdlTime=1522750695, totalSize=523, numRows=0, rawDataSize=0, numFiles=1}, viewOriginalText:null, viewExpandedText:null, tableType:MANAGED_TABLE, rewriteEnabled:false)  
Time taken: 0.127 seconds, Fetched: 7 row(s)
hive>
hive> desc formatted student;    -- shows more detail than plain desc
OK
# col_name                data_type               comment   
  
id                      int       
name                    string    
sex                     string    
age                     int       
department              string    
  
# Detailed Table Information  
Database:               myhive   
Owner:                  hadoop   
CreateTime:             Tue Apr 03 18:14:47 CST 2018   
LastAccessTime:         UNKNOWN  
Retention:              0  
Location:               hdfs://myha01/user/hive/warehouse/myhive.db/student   
Table Type:             MANAGED_TABLE  
Table Parameters:  
    numFiles                1   
    numRows                 0   
    rawDataSize             0   
    totalSize               523   
    transient_lastDdlTime    1522750695  
  
# Storage Information  
SerDe Library:          org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe   
InputFormat:            org.apache.hadoop.mapred.TextInputFormat   
OutputFormat:           org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat   
Compressed:             No   
Num Buckets:            -1   
Bucket Columns:         []   
Sort Columns:           []   
Storage Desc Params:  
    field.delim             ,   
    serialization.format    ,   
Time taken: 0.13 seconds, Fetched: 34 row(s)
hive>

5. Hive SQL

HQL: Data Types and Storage Formats

Data types
Primitive data types

Hive supports most of the primitive data types found in relational databases.

| Type | Description | Example |
| --- | --- | --- |
| boolean | true/false | TRUE |
| tinyint | 1-byte signed integer, -128 to 127 | 1Y |
| smallint | 2-byte signed integer, -32768 to 32767 | 1S |
| int | 4-byte signed integer | 1 |
| bigint | 8-byte signed integer | 1L |
| float | 4-byte single-precision floating point | 1.0 |
| double | 8-byte double-precision floating point | 1.0 |
| decimal | arbitrary-precision signed decimal | 1.0 |
| string | variable-length string | "a", 'b' |
| varchar | variable-length string | "a", 'b' |
| char | fixed-length string | "a", 'b' |
| binary | byte array | (no literal form) |
| timestamp | timestamp with nanosecond precision | 122327493795 |
| date | date | '2018-04-07' |

As in other SQL dialects, these type names are reserved words. Note that all of these data types are implementations of Java interfaces, so their concrete behavior matches the corresponding Java types exactly: the string type is backed by Java's String, float by Java's float, and so on.

Complex types

| Type | Description | Example |
| --- | --- | --- |
| array | ordered collection of elements of the same type | array(1,2) |
| map | key-value pairs; keys must be primitive types, values may be of any type | map('a',1,'b',2) |
| struct | a collection of named fields whose types may differ | struct('1',1,1.0), named_struct('col1','1','col2',1,'col3',1.0) |
Storage formats

Hive creates a directory on HDFS for every database it creates; each table in that database is stored as a subdirectory, and the table's data as files under the table directory. The default database is the exception: it has no directory of its own, and its tables are stored directly under /user/hive/warehouse.

textfile

textfile is the default format. It is row-oriented and the data is not compressed, so both disk overhead and parsing overhead are high.

SequenceFile

SequenceFile is a binary file format provided by the Hadoop API; it is easy to use, splittable, and compressible.

SequenceFile supports three compression options: NONE, RECORD, and BLOCK. RECORD compression has a poor compression ratio, so BLOCK compression is usually chosen, as sketched below.
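A minimal sketch of writing a SequenceFile table with block compression; the table name is illustrative, and the properties are the classic MapReduce-era setting names.

set hive.exec.compress.output=true;          -- compress the output of the writing job
set mapred.output.compression.type=BLOCK;    -- NONE, RECORD, or BLOCK

create table student_seq(id int, name string)
stored as sequencefile;

insert overwrite table student_seq
select id, name from student;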

RCFile

A storage format that combines row-oriented and column-oriented organization.

ORCFile

Data is divided into blocks of rows; within each block the data is stored column by column, and each block carries an index. It is a newer format introduced by Hive, an upgraded version of RCFile, with greatly improved performance and support for compressed storage.

Parquet

Parquet is a columnar storage format with good compression; it also greatly reduces table-scan and deserialization time. Choosing a storage format at table-creation time is sketched below.
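A minimal sketch of picking a storage format with STORED AS; the table names are illustrative, and ORC/Parquet support assumes a reasonably recent Hive version.

create table student_orc(id int, name string) stored as orc;
create table student_parquet(id int, name string) stored as parquet;

insert overwrite table student_orc     select id, name from student;
insert overwrite table student_parquet select id, name from student;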

Data format

When data is stored in text files, rows and columns must be separated according to some convention, and those delimiters must be declared to Hive. By default Hive uses a few characters that rarely appear in ordinary text, so they generally never show up inside a record.

Hive's default row and column delimiters are listed below:

| Delimiter | Description |
| --- | --- |
| \n | For text files, each line is one record, so \n separates records |
| ^A (Ctrl+A) | Separates fields; can also be written as \001 |
| ^B (Ctrl+B) | Separates elements of an array or struct, or key-value pairs within a map; can also be written as \002 |
| ^C (Ctrl+C) | Separates the key from the value inside a map entry; can also be written as \003 |
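A minimal sketch of a table that relies on these defaults, alongside the equivalent explicit declaration; the table names are illustrative.

-- with no ROW FORMAT clause the defaults apply:
-- fields separated by \001 (Ctrl+A), records separated by \n
create table raw_events(id int, payload string);

-- the same thing declared explicitly
create table raw_events_explicit(id int, payload string)
row format delimited
fields terminated by '\001'
lines terminated by '\n';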

DDL Operations

Database operations
Creating a database

Syntax:

CREATE (DATABASE|SCHEMA) [IF NOT EXISTS] database_name
  [COMMENT database_comment]      -- description of the database
  [LOCATION hdfs_path]          -- storage location of the database on HDFS
  [WITH DBPROPERTIES (property_name=property_value, ...)];    -- database properties

Ways to create a database

  1. Create a plain database
    0: jdbc:hive2://hadoop3:10000> create database t1;
    No rows affected (0.308 seconds)
    0: jdbc:hive2://hadoop3:10000> show databases;
    +----------------+
    | database_name  |
    +----------------+
    | default        |
    | myhive         |
    | t1             |
    +----------------+
    3 rows selected (0.393 seconds)
    0: jdbc:hive2://hadoop3:10000>
    
  2. Create a database only if it does not already exist
    0: jdbc:hive2://hadoop3:10000> create database if not exists t1;
    No rows affected (0.176 seconds)
    0: jdbc:hive2://hadoop3:10000> 
    
  3. Create a database with a comment
    0: jdbc:hive2://hadoop3:10000> create database if not exists t2 comment 'learning hive';
    No rows affected (0.217 seconds)
    0: jdbc:hive2://hadoop3:10000> 
    
  4. Create a database with properties
    0: jdbc:hive2://hadoop3:10000> create database if not exists t3 with dbproperties('creator'='hadoop','date'='2018-04-05');
    No rows affected (0.255 seconds)
    0: jdbc:hive2://hadoop3:10000>
    
Viewing databases

Ways to inspect databases

  1. List all databases

    0: jdbc:hive2://hadoop3:10000> show databases;
    +----------------+
    | database_name  |
    +----------------+
    | default        |
    | myhive         |
    | t1             |
    | t2             |
    | t3             |
    +----------------+
    5 rows selected (0.164 seconds)
    0: jdbc:hive2://hadoop3:10000>
    
  2. Show the detailed properties of a database

    desc database [extended] dbname;
    
    0: jdbc:hive2://hadoop3:10000> desc database extended t3;
    +----------+----------+------------------------------------------+-------------+-------------+------------------------------------+
    | db_name  | comment  |                 location                 | owner_name  | owner_type  |             parameters             |
    +----------+----------+------------------------------------------+-------------+-------------+------------------------------------+
    | t3       |          | hdfs://myha01/user/hive/warehouse/t3.db  | hadoop      | USER        | {date=2018-04-05, creator=hadoop}  |
    +----------+----------+------------------------------------------+-------------+-------------+------------------------------------+
    1 row selected (0.11 seconds)
    0: jdbc:hive2://hadoop3:10000>
    
  3. Check which database is currently in use

    0: jdbc:hive2://hadoop3:10000> select current_database();
    +----------+
    |   _c0    |
    +----------+
    | default  |
    +----------+
    1 row selected (1.36 seconds)
    0: jdbc:hive2://hadoop3:10000>
    
  4. Show the full statement used to create a database

    0: jdbc:hive2://hadoop3:10000> show create database t3;
    +----------------------------------------------+
    |                createdb_stmt                 |
    +----------------------------------------------+
    | CREATE DATABASE `t3`                         |
    | LOCATION                                     |
    |   'hdfs://myha01/user/hive/warehouse/t3.db'  |
    | WITH DBPROPERTIES (                          |
    |   'creator'='hadoop',                        |
    |   'date'='2018-04-05')                       |
    +----------------------------------------------+
    6 rows selected (0.155 seconds)
    0: jdbc:hive2://hadoop3:10000>
    
Dropping a database

Drop operations:

drop database dbname;
drop database if exists dbname;

By default Hive does not allow dropping a database that still contains tables. There are two ways around this:

  1. Manually drop all tables in the database, then drop the database
  2. Use the cascade keyword:

     drop database if exists dbname cascade;

     The default behavior is restrict: drop database if exists myhive is equivalent to drop database if exists myhive restrict

Examples:

  1. Drop a database that contains no tables
    0: jdbc:hive2://hadoop3:10000> show tables in t1;
    +-----------+
    | tab_name  |
    +-----------+
    +-----------+
    No rows selected (0.147 seconds)
    0: jdbc:hive2://hadoop3:10000> drop database t1;
    No rows affected (0.178 seconds)
    0: jdbc:hive2://hadoop3:10000> show databases;
    +----------------+
    | database_name  |
    +----------------+
    | default        |
    | myhive         |
    | t2             |
    | t3             |
    +----------------+
    4 rows selected (0.124 seconds)
    0: jdbc:hive2://hadoop3:10000>
    
  2. Drop a database that contains tables
    0: jdbc:hive2://hadoop3:10000> drop database if exists t3 cascade;
    No rows affected (1.56 seconds)
    0: jdbc:hive2://hadoop3:10000>
    
Switching databases
use database_name
0: jdbc:hive2://hadoop3:10000> use t2;
No rows affected (0.109 seconds)
0: jdbc:hive2://hadoop3:10000> 
Table operations
Creating tables

Syntax:

CREATE [EXTERNAL] TABLE [IF NOT EXISTS] table_name
  [(col_name data_type [COMMENT col_comment], ...)]
  [COMMENT table_comment]
  [PARTITIONED BY (col_name data_type [COMMENT col_comment], ...)]
  [CLUSTERED BY (col_name, col_name, ...)
    [SORTED BY (col_name [ASC|DESC], ...)] INTO num_buckets BUCKETS]
  [ROW FORMAT row_format]
  [STORED AS file_format]
  [LOCATION hdfs_path]
CREATE TABLE creates a table with the given name. If a table with the same name already exists, an exception is thrown; the IF NOT EXISTS option can be used to ignore it.
• EXTERNAL lets you create an external table and, at creation time, point it at the actual data with a LOCATION clause
• LIKE copies the definition of an existing table without copying its data
• COMMENT adds a description to the table or its columns
• PARTITIONED BY declares partition columns
• ROW FORMAT
   DELIMITED [FIELDS TERMINATED BY char] [COLLECTION ITEMS TERMINATED BY char]
     [MAP KEYS TERMINATED BY char] [LINES TERMINATED BY char]
     | SERDE serde_name [WITH SERDEPROPERTIES
     (property_name=property_value, property_name=property_value, ...)]
   Users can define their own SerDe or use a built-in one. If ROW FORMAT or ROW FORMAT DELIMITED is not specified, the built-in SerDe is used. The columns are declared when the table is created; if a custom SerDe is declared as well, Hive uses that SerDe to determine the table's actual column data.
• STORED AS
   SEQUENCEFILE   -- binary sequence file
  | TEXTFILE      -- plain text file (the default)
  | RCFILE        -- combined row/column storage
  | INPUTFORMAT input_format_classname OUTPUTFORMAT output_format_classname   -- custom file format
  If the data is plain text, use STORED AS TEXTFILE. If it needs to be compressed, use STORED AS SEQUENCEFILE.
• LOCATION specifies the table's storage path on HDFS

Best practice:

If a data set already lives on HDFS and is going to be used by multiple users or clients, it is usually better to create an external table; otherwise, create a managed table.

If no location is specified, the table is stored under the default warehouse path according to the default rules.

  1. Create a default managed (internal) table

    0: jdbc:hive2://hadoop3:10000> create table student(id int, name string, sex string, age int,department string) row format delimited fields terminated by ",";
    No rows affected (0.222 seconds)
    0: jdbc:hive2://hadoop3:10000> desc student;
    +-------------+------------+----------+
    |  col_name   | data_type  | comment  |
    +-------------+------------+----------+
    | id          | int        |          |
    | name        | string     |          |
    | sex         | string     |          |
    | age         | int        |          |
    | department  | string     |          |
    +-------------+------------+----------+
    5 rows selected (0.168 seconds)
    0: jdbc:hive2://hadoop3:10000
    
  2. External table

    0: jdbc:hive2://hadoop3:10000> create external table student_ext
    (id int, name string, sex string, age int,department string) row format delimited fields terminated by "," location "/hive/student";
    No rows affected (0.248 seconds)
    0: jdbc:hive2://hadoop3:10000> 
    
  3. Partitioned table

    0: jdbc:hive2://hadoop3:10000> create external table student_ptn(id int, name string, sex string, age int,department string)
    . . . . . . . . . . . . . . .> partitioned by (city string)
    . . . . . . . . . . . . . . .> row format delimited fields terminated by ","
    . . . . . . . . . . . . . . .> location "/hive/student_ptn";
    No rows affected (0.24 seconds)
    0: jdbc:hive2://hadoop3:10000>
    
    Add partitions
    
    0: jdbc:hive2://hadoop3:10000> alter table student_ptn add partition(city="beijing");
    No rows affected (0.269 seconds)
    0: jdbc:hive2://hadoop3:10000> alter table student_ptn add partition(city="shenzhen");
    No rows affected (0.236 seconds)
    0: jdbc:hive2://hadoop3:10000> 
    
    If a table is partitioned, each partition definition corresponds to a subdirectory under the table's data storage directory.
    For a partitioned table, data files must be stored inside a partition; they cannot be placed directly under the table directory.
    
  4. Bucketed table

    0: jdbc:hive2://hadoop3:10000> create external table student_bck(id int, name string, sex string, age int,department string)
    . . . . . . . . . . . . . . .> clustered by (id) sorted by (id asc, name desc) into 4 buckets
    . . . . . . . . . . . . . . .> row format delimited fields terminated by ","
    . . . . . . . . . . . . . . .> location "/hive/student_bck";
    No rows affected (0.216 seconds)
    0: jdbc:hive2://hadoop3:10000>
    
  5. Create a table with CTAS

    Purpose: create and populate a table from the result of a SQL query.
    First load some data into the student table:
    0: jdbc:hive2://hadoop3:10000> load data local inpath "/home/hadoop/student.txt" into table student;
    No rows affected (0.715 seconds)
    0: jdbc:hive2://hadoop3:10000> select * from student;
    +-------------+---------------+--------------+--------------+---------------------+
    | student.id  | student.name  | student.sex  | student.age  | student.department  |
    +-------------+---------------+--------------+--------------+---------------------+
    | 95002       | 刘晨          | 女           | 19           | IS                  |
    | 95017       | 王风娟        | 女           | 18           | IS                  |
    | 95018       | 王一          | 女           | 19           | IS                  |
    | 95013       | 冯伟          | 男           | 21           | CS                  |
    | 95014       | 王小丽        | 女           | 19           | CS                  |
    | 95019       | 邢小丽        | 女           | 19           | IS                  |
    | 95020       | 赵钱          | 男           | 21           | IS                  |
    | 95003       | 王敏          | 女           | 22           | MA                  |
    | 95004       | 张立          | 男           | 19           | IS                  |
    | 95012       | 孙花          | 女           | 20           | CS                  |
    | 95010       | 孔小涛        | 男           | 19           | CS                  |
    | 95005       | 刘刚          | 男           | 18           | MA                  |
    | 95006       | 孙庆          | 男           | 23           | CS                  |
    | 95007       | 易思玲        | 女           | 19           | MA                  |
    | 95008       | 李娜          | 女           | 18           | CS                  |
    | 95021       | 周二          | 男           | 17           | MA                  |
    | 95022       | 郑明          | 男           | 20           | MA                  |
    | 95001       | 李勇          | 男           | 20           | CS                  |
    | 95011       | 包小柏        | 男           | 18           | MA                  |
    | 95009       | 梦圆圆        | 女           | 18           | MA                  |
    | 95015       | 王君          | 男           | 18           | MA                  |
    +-------------+---------------+--------------+--------------+---------------------+
    21 rows selected (0.342 seconds)
    0: jdbc:hive2://hadoop3:10000>
    
    
    Create a table from the query result with CTAS:
    
    0: jdbc:hive2://hadoop3:10000> create table student_ctas as select * from student where id < 95012;
    WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
    No rows affected (34.514 seconds)
    0: jdbc:hive2://hadoop3:10000> select * from student_ctas
    . . . . . . . . . . . . . . .> ;
    +------------------+--------------------+-------------------+-------------------+--------------------------+
    | student_ctas.id  | student_ctas.name  | student_ctas.sex  | student_ctas.age  | student_ctas.department  |
    +------------------+--------------------+-------------------+-------------------+--------------------------+
    | 95002            | 刘晨               | 女                | 19                | IS                       |
    | 95003            | 王敏               | 女                | 22                | MA                       |
    | 95004            | 张立               | 男                | 19                | IS                       |
    | 95010            | 孔小涛             | 男                | 19                | CS                       |
    | 95005            | 刘刚               | 男                | 18                | MA                       |
    | 95006            | 孙庆               | 男                | 23                | CS                       |
    | 95007            | 易思玲             | 女                | 19                | MA                       |
    | 95008            | 李娜               | 女                | 18                | CS                       |
    | 95001            | 李勇               | 男                | 20                | CS                       |
    | 95011            | 包小柏             | 男                | 18                | MA                       |
    | 95009            | 梦圆圆             | 女                | 18                | MA                       |
    +------------------+--------------------+-------------------+-------------------+--------------------------+
    11 rows selected (0.445 seconds)
    0: jdbc:hive2://hadoop3:10000>
    
  6. Copy a table definition

    0: jdbc:hive2://hadoop3:10000> create table student_copy like student;
    No rows affected (0.217 seconds)
    0: jdbc:hive2://hadoop3:10000> 
    
    Note:
    If the external keyword is not added before table, the copied table is always a managed (internal) table, no matter what the source table is.
    If the external keyword is added before table, the copied table is always an external table. Both variants are sketched below.
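A minimal sketch of both variants of LIKE, assuming the student table from above; the new table names and the HDFS path are illustrative.

-- managed copy: structure only, no data
create table student_like like student;

-- external copy of the same structure, pointing at an illustrative location
create external table student_like_ext like student
location '/hive/student_like_ext';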
    
Viewing tables
Listing tables

List the tables in the database currently in use

0: jdbc:hive2://hadoop3:10000> show tables;
+---------------+
|   tab_name    |
+---------------+
| student       |
| student_bck   |
| student_copy  |
| student_ctas  |
| student_ext   |
| student_ptn   |
+---------------+
6 rows selected (0.163 seconds)
0: jdbc:hive2://hadoop3:10000

List the tables in a database other than the current one

0: jdbc:hive2://hadoop3:10000> show tables in myhive;
+-----------+
| tab_name  |
+-----------+
| student   |
+-----------+
1 row selected (0.144 seconds)
0: jdbc:hive2://hadoop3:10000>

List the tables in a database whose names start with a given prefix

0: jdbc:hive2://hadoop3:10000> show tables like 'student_c*';
+---------------+
|   tab_name    |
+---------------+
| student_copy  |
| student_ctas  |
+---------------+
2 rows selected (0.13 seconds)
0: jdbc:hive2://hadoop3:10000
Viewing table details

Show the table's columns

0: jdbc:hive2://hadoop3:10000> desc student;
+-------------+------------+----------+
|  col_name   | data_type  | comment  |
+-------------+------------+----------+
| id          | int        |          |
| name        | string     |          |
| sex         | string     |          |
| age         | int        |          |
| department  | string     |          |
+-------------+------------+----------+
5 rows selected (0.149 seconds)
0: jdbc:hive2://hadoop3:10000>

Show detailed table information (not nicely formatted)

0: jdbc:hive2://hadoop3:10000> desc extended student;

Show detailed table information (nicely formatted)

0: jdbc:hive2://hadoop3:10000> desc formatted student;

Show partition information

0: jdbc:hive2://hadoop3:10000> show partitions student_ptn;

Show the full statement used to create the table

0: jdbc:hive2://hadoop3:10000> show create table student_ptn;

Modifying tables
Renaming a table
0: jdbc:hive2://hadoop3:10000> alter table student rename to new_student;

Modifying column definitions

Add a column

0: jdbc:hive2://hadoop3:10000> alter table new_student add columns (score int);

Change a column's definition

0: jdbc:hive2://hadoop3:10000> alter table new_student change name new_name string;

Drop a column

Not supported

Replace all columns

0: jdbc:hive2://hadoop3:10000> alter table new_student replace columns (id int, name string, address string);
Modifying partition information

Adding partitions

Static partitions

  • Add a single partition
    0: jdbc:hive2://hadoop3:10000> alter table student_ptn add partition(city="chongqing");
    
  • Add several partitions at once
    0: jdbc:hive2://hadoop3:10000> alter table student_ptn add partition(city="chongqing2") partition(city="chongqing3") partition(city="chongqing4");
    

Dynamic partitions

First load data into the student_ptn table:

0: jdbc:hive2://hadoop3:10000> load data local inpath "/home/hadoop/student.txt" into table student_ptn partition(city="beijing");


Now insert the contents of this table directly into another table, student_ptn_age, letting the partition value be assigned automatically (we do not specify which partition each row goes to; the system decides based on the partition column).

First create student_ptn_age, partitioned by age:

0: jdbc:hive2://hadoop3:10000> create table student_ptn_age(id int,name string,sex string,department string) partitioned by (age int);

Query the data from student_ptn and insert it into student_ptn_age (the dynamic-partition settings this relies on are sketched after the example):

0: jdbc:hive2://hadoop3:10000> insert overwrite table student_ptn_age partition(age)
. . . . . . . . . . . . . . .> select id,name,sex,department,age from student_ptn;
WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
No rows affected (27.905 seconds)
0: jdbc:hive2://hadoop3:10000>
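A dynamic-partition insert like the one above normally requires dynamic partitioning to be enabled and, because no static partition value is supplied at all, non-strict mode as well; a minimal sketch of the usual settings:

set hive.exec.dynamic.partition=true;            -- allow dynamic partition inserts
set hive.exec.dynamic.partition.mode=nonstrict;  -- allow every partition column to be dynamic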

Modifying partitions

Modifying a partition generally means changing the partition's data storage directory.

When adding a partition, the partition's storage directory can be specified directly:

0: jdbc:hive2://hadoop3:10000> alter table student_ptn add if not exists partition(city='beijing') 
. . . . . . . . . . . . . . .> location '/student_ptn_beijing' partition(city='cc') location '/student_cc';
No rows affected (0.306 seconds)
0: jdbc:hive2://hadoop3:10000>

Change the storage directory of an existing partition:

0: jdbc:hive2://hadoop3:10000> alter table student_ptn partition (city='beijing') set location '/student_ptn_beijing';

The old partition directory still exists afterwards, but data added to the partition from now on goes only into the new directory.

Dropping partitions

0: jdbc:hive2://hadoop3:10000> alter table student_ptn drop partition (city='beijing');

Dropping a table
0: jdbc:hive2://hadoop3:10000> drop table new_student;

Truncating a table
0: jdbc:hive2://hadoop3:10000> truncate table student_ptn;
Other auxiliary commands

6. Hive Built-in Functions

Mathematical functions

| Return Type | Name (Signature) | Description |
| --- | --- | --- |
| DOUBLE | round(DOUBLE a) | Returns the rounded BIGINT value of a |
| DOUBLE | round(DOUBLE a, INT d) | Returns a rounded to d decimal places |
| DOUBLE | bround(DOUBLE a) | Returns the rounded BIGINT value of a using HALF_EVEN (banker's) rounding (as of Hive 1.3.0, 2.0.0). Example: bround(2.5) = 2, bround(3.5) = 4 |
| DOUBLE | bround(DOUBLE a, INT d) | Returns a rounded to d decimal places using HALF_EVEN rounding (as of Hive 1.3.0, 2.0.0). Example: bround(8.25, 1) = 8.2, bround(8.35, 1) = 8.4 |
| BIGINT | floor(DOUBLE a) | Returns the maximum BIGINT value that is equal to or less than a, e.g. floor(6.10) = 6, floor(-3.4) = -4 |
| BIGINT | ceil(DOUBLE a), ceiling(DOUBLE a) | Returns the minimum BIGINT value that is equal to or greater than a |
| DOUBLE | rand(), rand(INT seed) | Returns a random number distributed uniformly from 0 to 1 (changes from row to row); specifying the seed makes the generated sequence deterministic |
| DOUBLE | exp(DOUBLE a), exp(DECIMAL a) | Returns e^a, where e is the base of the natural logarithm (DECIMAL version as of Hive 0.13.0) |
| DOUBLE | ln(DOUBLE a), ln(DECIMAL a) | Returns the natural logarithm of a |
| DOUBLE | log10(DOUBLE a), log10(DECIMAL a) | Returns the base-10 logarithm of a |
| DOUBLE | log2(DOUBLE a), log2(DECIMAL a) | Returns the base-2 logarithm of a |
| DOUBLE | log(DOUBLE base, DOUBLE a), log(DECIMAL base, DECIMAL a) | Returns the base-base logarithm of a |
| DOUBLE | pow(DOUBLE a, DOUBLE p), power(DOUBLE a, DOUBLE p) | Returns a^p |
| DOUBLE | sqrt(DOUBLE a), sqrt(DECIMAL a) | Returns the square root of a |
| STRING | bin(BIGINT a) | Returns the number in binary format as a STRING |
| STRING | hex(BIGINT a), hex(STRING a), hex(BINARY a) | For an INT or BINARY argument, returns the number as a hexadecimal STRING; for a STRING argument, converts each character to its hexadecimal representation |
| BINARY | unhex(STRING a) | Inverse of hex: interprets each pair of characters as a hexadecimal number and converts it to the byte representation |
| STRING | conv(BIGINT num, INT from_base, INT to_base), conv(STRING num, INT from_base, INT to_base) | Converts a number from one base to another |
| DOUBLE | abs(DOUBLE a) | Returns the absolute value of a |
| INT or DOUBLE | pmod(INT a, INT b), pmod(DOUBLE a, DOUBLE b) | Returns the positive value of a mod b |
| DOUBLE | sin(DOUBLE a), sin(DECIMAL a) | Returns the sine of a (a is in radians) |
| DOUBLE | asin(DOUBLE a), asin(DECIMAL a) | Returns the arcsine of a if -1 <= a <= 1, otherwise NULL |
| DOUBLE | cos(DOUBLE a), cos(DECIMAL a) | Returns the cosine of a (a is in radians) |
| DOUBLE | acos(DOUBLE a), acos(DECIMAL a) | Returns the arccosine of a if -1 <= a <= 1, otherwise NULL |
| DOUBLE | tan(DOUBLE a), tan(DECIMAL a) | Returns the tangent of a (a is in radians) |
| DOUBLE | atan(DOUBLE a), atan(DECIMAL a) | Returns the arctangent of a |
| DOUBLE | degrees(DOUBLE a), degrees(DECIMAL a) | Converts a from radians to degrees |
| DOUBLE | radians(DOUBLE a), radians(DECIMAL a) | Converts a from degrees to radians |
| INT or DOUBLE | positive(INT a), positive(DOUBLE a) | Returns a |
| INT or DOUBLE | negative(INT a), negative(DOUBLE a) | Returns -a |
| DOUBLE or INT | sign(DOUBLE a), sign(DECIMAL a) | Returns 1.0 if a is positive, -1.0 if a is negative, 0.0 otherwise; the DECIMAL version returns INT |
| DOUBLE | e() | Returns the value of e |
| DOUBLE | pi() | Returns the value of pi |
| BIGINT | factorial(INT a) | Returns the factorial of a (as of Hive 1.2.0); valid a is [0..20] |
| DOUBLE | cbrt(DOUBLE a) | Returns the cube root of a (as of Hive 1.2.0) |
| INT or BIGINT | shiftleft(TINYINT\|SMALLINT\|INT a, INT b), shiftleft(BIGINT a, INT b) | Bitwise left shift (as of Hive 1.2.0) |
| INT or BIGINT | shiftright(TINYINT\|SMALLINT\|INT a, INT b), shiftright(BIGINT a, INT b) | Bitwise signed right shift (as of Hive 1.2.0) |
| INT or BIGINT | shiftrightunsigned(TINYINT\|SMALLINT\|INT a, INT b), shiftrightunsigned(BIGINT a, INT b) | Bitwise unsigned right shift (as of Hive 1.2.0) |
| T | greatest(T v1, T v2, ...) | Returns the greatest value of the list of values (as of Hive 1.1.0); returns NULL when one or more arguments are NULL (as of Hive 2.0.0) |
| T | least(T v1, T v2, ...) | Returns the least value of the list of values (as of Hive 1.1.0); returns NULL when one or more arguments are NULL (as of Hive 2.0.0) |

Collection functions

| Return Type | Name (Signature) | Description |
| --- | --- | --- |
| int | size(Map<K.V>) | Returns the number of elements in the map |
| int | size(Array<T>) | Returns the number of elements in the array |
| array<K> | map_keys(Map<K.V>) | Returns an unordered array containing the keys of the input map |
| array<V> | map_values(Map<K.V>) | Returns an unordered array containing the values of the input map |
| boolean | array_contains(Array<T>, value) | Returns TRUE if the array contains value, FALSE otherwise |
| array | sort_array(Array<T>) | Sorts the input array in ascending natural order and returns it (as of version 0.9.0) |
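A few of these in action; the literal values are illustrative:

select size(array(1,2,3)),               -- 3
       array_contains(array(1,2,3), 2),  -- true
       sort_array(array(3,1,2)),         -- [1,2,3]
       map_keys(map('a',1,'b',2));       -- ["a","b"]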

Type conversion functions

| Return Type | Name (Signature) | Description |
| --- | --- | --- |
| binary | binary(string\|binary) | Casts the parameter into a binary |
| Expected "=" to follow "type" | cast(expr as <type>) | Converts the result of expression expr to <type>. For example, cast('1' as BIGINT) converts the string '1' to its integral representation. NULL is returned if the conversion does not succeed. cast(expr as boolean) returns true for a non-empty string. |
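A quick illustration of cast; the literals are illustrative:

select cast('1' as bigint),    -- 1
       cast('abc' as int);     -- NULL: the conversion fails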

Date functions

| Return Type | Name (Signature) | Description |
| --- | --- | --- |
| string | from_unixtime(bigint unixtime[, string format]) | Converts a number of seconds since the Unix epoch (1970-01-01 00:00:00 UTC) to a timestamp string in the current system time zone, e.g. from_unixtime(1250111000, "yyyy-MM-dd") |
| bigint | unix_timestamp() | Returns the current Unix timestamp in seconds |
| bigint | unix_timestamp(string date) | Converts a time string in yyyy-MM-dd HH:mm:ss format to a Unix timestamp in seconds, e.g. unix_timestamp('2009-03-20 11:30:01') = 1237573801; returns 0 on failure |
| bigint | unix_timestamp(string date, string pattern) | Converts a time string with the given SimpleDateFormat pattern to a Unix timestamp, e.g. unix_timestamp('2009-03-20', 'yyyy-MM-dd') = 1237532400; returns 0 on failure |
| string | to_date(string timestamp) | Returns the date part of a timestamp string, e.g. to_date("1970-01-01 00:00:00") = "1970-01-01" |
| int | year(string date) | Returns the year part of a date or timestamp string, e.g. year("1970-01-01") = 1970 |
| int | quarter(date/timestamp/string) | Returns the quarter of the year (1 to 4) for a date, timestamp, or string (as of Hive 1.3.0), e.g. quarter('2015-04-08') = 2 |
| int | month(string date) | Returns the month part of a date or timestamp string, e.g. month("1970-11-01") = 11 |
| int | day(string date), dayofmonth(date) | Returns the day-of-month part of a date or timestamp string, e.g. day("1970-11-01") = 1 |
| int | hour(string date) | Returns the hour of the timestamp, e.g. hour('2009-07-30 12:58:59') = 12 |
| int | minute(string date) | Returns the minute of the timestamp |
| int | second(string date) | Returns the second of the timestamp |
| int | weekofyear(string date) | Returns the week number of a timestamp string, e.g. weekofyear("1970-11-01") = 44 |
| int | datediff(string enddate, string startdate) | Returns the number of days from startdate to enddate, e.g. datediff('2009-03-01', '2009-02-27') = 2 |
| string | date_add(string startdate, int days) | Adds a number of days to startdate, e.g. date_add('2008-12-31', 1) = '2009-01-01' |
| string | date_sub(string startdate, int days) | Subtracts a number of days from startdate, e.g. date_sub('2008-12-31', 1) = '2008-12-30' |
| timestamp | from_utc_timestamp(timestamp, string timezone) | Assumes the given timestamp is UTC and converts it to the given time zone (as of Hive 0.8.0), e.g. from_utc_timestamp('1970-01-01 08:00:00', 'PST') = 1970-01-01 00:00:00 |
| timestamp | to_utc_timestamp(timestamp, string timezone) | Assumes the given timestamp is in the given time zone and converts it to UTC (as of Hive 0.8.0), e.g. to_utc_timestamp('1970-01-01 00:00:00', 'PST') = 1970-01-01 08:00:00 |
| date | current_date | Returns the current date at the start of query evaluation (as of Hive 1.2.0); all calls within the same query return the same value |
| timestamp | current_timestamp | Returns the current timestamp at the start of query evaluation (as of Hive 1.2.0); all calls within the same query return the same value |
| string | add_months(string start_date, int num_months) | Returns the date that is num_months after start_date (as of Hive 1.1.0); the time part of start_date is ignored |
| string | last_day(string date) | Returns the last day of the month the date belongs to (as of Hive 1.1.0); the time part is ignored |
| string | next_day(string start_date, string day_of_week) | Returns the first date later than start_date that falls on day_of_week (as of Hive 1.2.0), e.g. next_day('2015-01-14', 'TU') = 2015-01-20 |
| string | trunc(string date, string format) | Returns the date truncated to the unit given by format (MONTH/MON/MM, YEAR/YYYY/YY) (as of Hive 1.2.0), e.g. trunc('2015-03-17', 'MM') = 2015-03-01, trunc('2016-06-26', 'YY') = 2016-01-01 |
| double | months_between(date1, date2) | Returns the number of months between date1 and date2 (as of Hive 1.2.0); positive if date1 is later than date2, negative if earlier, e.g. months_between('1997-02-28 10:30:00', '1996-10-30') = 3.94959677 |
| string | date_format(date/timestamp/string ts, string fmt) | Formats a date/timestamp/string using a Java SimpleDateFormat pattern (as of Hive 1.2.0), e.g. date_format('2015-04-08', 'y') = '2015', date_format("2016-06-22", "MM-dd") = 06-22 |

Conditional functions

| Return Type | Name (Signature) | Description |
| --- | --- | --- |
| T | if(boolean testCondition, T valueTrue, T valueFalseOrNull) | Returns valueTrue when testCondition is true, otherwise valueFalseOrNull |
| T | nvl(T value, T default_value) | Returns default_value if value is null, otherwise value (as of Hive 0.11) |
| T | COALESCE(T v1, T v2, ...) | Returns the first v that is not NULL, or NULL if all are NULL, e.g. COALESCE(NULL, 44, 55) = 44 |
| T | CASE a WHEN b THEN c [WHEN d THEN e]* [ELSE f] END | When a = b returns c; when a = d returns e; otherwise returns f, e.g. CASE 4 WHEN 5 THEN 5 WHEN 4 THEN 4 ELSE 3 END returns 4 |
| T | CASE WHEN a THEN b [WHEN c THEN d]* [ELSE e] END | When a is true returns b; when c is true returns d; otherwise returns e, e.g. CASE WHEN 5>0 THEN 5 WHEN 4>0 THEN 4 ELSE 0 END returns 5 |
| boolean | isnull(a) | Returns true if a is NULL, false otherwise |
| boolean | isnotnull(a) | Returns true if a is not NULL, false otherwise |
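A short illustration of the conditional functions; the values are illustrative:

select if(1 > 0, 'yes', 'no'),                        -- yes
       nvl(null, 'default'),                          -- default
       coalesce(null, 44, 55),                        -- 44
       case 4 when 5 then 5 when 4 then 4 else 3 end; -- 4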

String functions

| Return Type | Name (Signature) | Description |
| --- | --- | --- |
| int | ascii(string str) | Returns the numeric value of the first character of str |
| string | base64(binary bin) | Converts the binary argument to a base-64 string (as of Hive 0.12.0) |
| string | concat(string\|binary A, string\|binary B, ...) | Returns the string (or bytes) resulting from concatenating the arguments in order |
| array<struct<string,double>> | context_ngrams(array<array<string>>, array<string>, int K, int pf) | Returns the top-k contextual N-grams from a set of tokenized sentences, given a string of "context"; see StatisticsAndDataMining for details |
| string | concat_ws(string SEP, string A, string B, ...) | Like concat(), but with a custom separator SEP |
| string | concat_ws(string SEP, array<string>) | Like concat_ws() above, but taking an array of strings (as of Hive 0.9.0) |
| string | decode(binary bin, string charset) | Decodes the binary into a string using the given character set ('US-ASCII', 'ISO-8859-1', 'UTF-8', 'UTF-16BE', 'UTF-16LE', 'UTF-16'); NULL if either argument is NULL (as of Hive 0.12.0) |
| binary | encode(string src, string charset) | Encodes the string into binary using the given character set; NULL if either argument is NULL (as of Hive 0.12.0) |
| int | find_in_set(string str, string strList) | Returns the position of the first occurrence of str in the comma-delimited strList, e.g. find_in_set('ab', 'abc,b,ab,c,def') = 3; returns 0 if str contains a comma, NULL if either argument is NULL |
| string | format_number(number x, int d) | Formats x like '#,###,###.##' rounded to d decimal places; if d is 0 the result has no decimal part (as of Hive 0.10.0) |
| string | get_json_object(string json_string, string path) | Extracts a JSON object from a JSON string based on the given path and returns its JSON text; NULL if the input JSON is invalid. The path may only contain [0-9a-z_] and keys may not start with digits, a restriction inherited from Hive column names |
| boolean | in_file(string str, string filename) | Returns true if str appears as an entire line in the file filename |
| int | instr(string str, string substr) | Returns the 1-based position of the first occurrence of substr in str; 0 if not found, NULL if either argument is NULL |
| int | length(string A) | Returns the length of the string |
| int | locate(string substr, string str[, int pos]) | Returns the position of the first occurrence of substr in str after position pos |
| string | lower(string A), lcase(string A) | Converts all characters of A to lower case, e.g. lower('fOoBaR') = 'foobar' |
| string | lpad(string str, int len, string pad) | Returns str left-padded with pad to length len; str is truncated if it is longer than len |
| string | ltrim(string A) | Trims spaces from the beginning (left side) of A |
| array<struct<string,double>> | ngrams(array<array<string>>, int N, int K, int pf) | Returns the top-k N-grams from a set of tokenized sentences, such as those returned by the sentences() UDAF |
| string | parse_url(string urlString, string partToExtract[, string keyToExtract]) | Returns the specified part of a URL (HOST, PATH, QUERY, REF, PROTOCOL, AUTHORITY, FILE, USERINFO), e.g. parse_url('http://facebook.com/path1/p.php?k1=v1&k2=v2#Ref1', 'HOST') = 'facebook.com'; the value of a particular QUERY key can be extracted by passing the key as the third argument |
| string | printf(String format, Obj... args) | Returns the input formatted according to a printf-style format string (as of Hive 0.9.0) |
| string | regexp_extract(string subject, string pattern, int index) | Returns the substring matched by group index of the regular expression pattern, e.g. regexp_extract('foothebar', 'foo(.*?)(bar)', 2) = 'bar'; note that predefined character classes need double escaping, e.g. '\\s' to match whitespace |
| string | regexp_replace(string INITIAL_STRING, string PATTERN, string REPLACEMENT) | Replaces all substrings of INITIAL_STRING that match the Java regular expression PATTERN with REPLACEMENT, e.g. regexp_replace("foobar", "oo\|ar", "") = 'fb' |
| string | repeat(string str, int n) | Repeats str n times |
| string | reverse(string A) | Returns the reversed string |
| string | rpad(string str, int len, string pad) | Returns str right-padded with pad to length len; str is truncated if it is longer than len |
| string | rtrim(string A) | Trims spaces from the end (right side) of A |
| array<array<string>> | sentences(string str, string lang, string locale) | Tokenizes natural-language text into an array of sentences, each an array of words, e.g. sentences('Hello there! How are you?') = (("Hello", "there"), ("How", "are", "you")); lang and locale are optional |
| string | space(int n) | Returns a string of n spaces |
| array | split(string str, string pat) | Splits str around pat (pat is a regular expression) and returns the parts as an array |
| map<string,string> | str_to_map(text[, delimiter1, delimiter2]) | Splits text into key-value pairs using two delimiters; delimiter1 separates pairs (default ','), delimiter2 separates key from value (default '=') |
| string | substr(string\|binary A, int start), substring(string\|binary A, int start) | Returns the substring (or byte slice) of A from position start to the end |
| string | substr(string\|binary A, int start, int len), substring(string\|binary A, int start, int len) | Returns the substring (or byte slice) of A of length len starting at position start |
| string | substring_index(string A, string delim, int count) | Returns the substring of A before count occurrences of delim (as of Hive 1.3.0); counts from the left if count is positive, from the right if negative, e.g. substring_index('www.apache.org', '.', 2) = 'www.apache' |
| string | translate(string\|char\|varchar input, string\|char\|varchar from, string\|char\|varchar to) | Translates input by replacing each character found in from with the corresponding character in to |
| string | trim(string A) | Trims spaces from both ends of A, e.g. trim(' foobar ') = 'foobar' |
| binary | unbase64(string str) | Converts a base-64 string to BINARY (as of Hive 0.12.0) |
| string | upper(string A), ucase(string A) | Converts all characters of A to upper case, e.g. upper('fOoBaR') = 'FOOBAR' |
| string | initcap(string A) | Returns the string with the first letter of each word in upper case and all other letters in lower case; words are delimited by whitespace (as of Hive 1.1.0) |
| int | levenshtein(string A, string B) | Returns the Levenshtein distance between two strings (as of Hive 1.2.0), e.g. levenshtein('kitten', 'sitting') = 3 |
| string | soundex(string A) | Returns the soundex code of the string (as of Hive 1.2.0), e.g. soundex('Miller') = M460 |

Aggregate functions

| Return Type | Name (Signature) | Description |
| --- | --- | --- |
| BIGINT | count(*), count(expr), count(DISTINCT expr[, expr...]) | count(*) returns the total number of retrieved rows, including rows with NULL values; count(expr) returns the number of rows for which expr is non-NULL; count(DISTINCT expr[, expr]) returns the number of rows for which the expression(s) are unique and non-NULL |
| DOUBLE | sum(col), sum(DISTINCT col) | Returns the sum of the elements in the group, or the sum of the distinct values of the column |
| DOUBLE | avg(col), avg(DISTINCT col) | Returns the average of the elements in the group, or the average of the distinct values of the column |
| DOUBLE | min(col) | Returns the minimum value of the column in the group |
| DOUBLE | max(col) | Returns the maximum value of the column in the group |
| DOUBLE | variance(col), var_pop(col) | Returns the population variance of a numeric column in the group |
| DOUBLE | var_samp(col) | Returns the unbiased sample variance of a numeric column in the group |
| DOUBLE | stddev_pop(col) | Returns the population standard deviation of a numeric column in the group |
| DOUBLE | stddev_samp(col) | Returns the unbiased sample standard deviation of a numeric column in the group |
| DOUBLE | covar_pop(col1, col2) | Returns the population covariance of a pair of numeric columns in the group |
| DOUBLE | covar_samp(col1, col2) | Returns the sample covariance of a pair of numeric columns in the group |
| DOUBLE | corr(col1, col2) | Returns the Pearson correlation coefficient of a pair of numeric columns in the group |
| DOUBLE | percentile(BIGINT col, p) | Returns the exact pth percentile of a column in the group (integer values only; p must be between 0 and 1); use percentile_approx for non-integral input |

Table-generating functions

| Return Type | Name (Signature) | Description |
| --- | --- | --- |
| N rows | explode(ARRAY<T> a) | Returns one row for each element of the array |
| N rows | explode(MAP) | Returns one row for each key-value pair of the input map, with two columns per row: one for the key and one for the value (as of Hive 0.8.0) |
| N rows | posexplode(ARRAY) | Like explode for arrays, but also returns each item's position in the original array as a (pos, value) pair (as of Hive 0.13.0) |
| N rows | stack(INT n, v_1, v_2, ..., v_k) | Breaks v_1, ..., v_k into n rows; each row has k/n columns; n must be a constant |
| tuple | json_tuple(jsonStr, k1, k2, ...) | Takes a JSON string and a set of names (keys) and returns a tuple of values; a more efficient version of get_json_object because it extracts multiple keys in one call |
| tuple | parse_url_tuple(url, p1, p2, ...) | Like parse_url(), but extracts multiple URL parts at once; valid part names are HOST, PATH, QUERY, REF, PROTOCOL, AUTHORITY, FILE, USERINFO, QUERY:<KEY> |
| N rows | inline(ARRAY<STRUCT[,STRUCT]>) | Explodes an array of structs into a table (as of Hive 0.10) |
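A minimal sketch of explode and lateral view; the array literals are illustrative:

-- each array element becomes a row of its own
select explode(array('beijing', 'shanghai', 'shenzhen'));

-- lateral view joins the generated rows back to the originating row
select s.id, c.city
from (select 1 as id, array('beijing', 'shanghai') as cities) s
lateral view explode(s.cities) c as city;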

7. Advanced Hive Operations

Complex data types

array

Given the following data:

1 huangbo guangzhou,xianggang,shenzhen a1:30,a2:20,a3:100 beijing,112233,13522334455,500
2 xuzheng xianggang b2:50,b3:40 tianjin,223344,13644556677,600
3 wangbaoqiang beijing,zhejinag c1:200 chongqinjg,334455,15622334455,20

Table creation statement:

use class;
create table cdt(
id int, 
name string, 
work_location array<string>, 
piaofang map<string,bigint>, 
address struct<location:string,zipcode:int,phone:string,value:int>) 
row format delimited 
fields terminated by "\t" 
collection items terminated by "," 
map keys terminated by ":" 
lines terminated by "\n";

Load the data:

0: jdbc:hive2://hadoop3:10000> load data local inpath "/home/hadoop/cdt.txt" into table cdt;

Query examples:

select * from cdt;

select name from cdt;

select work_location from cdt;

select work_location[0] from cdt;

select work_location[1] from cdt;

map

The table creation and load statements are the same as above.

Query examples:

select piaofang from cdt;

select piaofang["a1"] from cdt;

struct

The table creation and load statements are the same as above.

Query examples:

select address from cdt;

select address.location from cdt;

uniontype

Rarely used.

Views

Differences between Hive views and RDBMS views

Like relational databases, Hive provides views, but note that Hive views differ from RDBMS views in important ways:

  1. Only logical views are supported; there are no materialized views
  2. A view can only be queried; data cannot be inserted, updated, or deleted through it
  3. When a view is created only its metadata is saved; the subqueries the view is defined on are executed only when the view is queried
Creating a view:
create view view_cdt as select * from cdt;

Inspecting views:
show views;
desc view_cdt;   -- show details of a specific view

Querying a view:
select * from view_cdt;

Dropping a view:
drop view view_cdt;

Functions

Built-in functions

See section 6.

List the built-in functions:
show functions;

Show a function's details:
desc function substr;

Show a function's extended information:
desc function extended substr;

User-defined functions (UDF)

When the built-in functions that Hive provides cannot meet a business requirement, consider writing a user-defined function.

UDF (user-defined function): operates on a single row and produces a single row as output

UDAF (user-defined aggregation function): accepts multiple input rows and produces one output row

UDTF (user-defined table-generating function): accepts one input row and produces multiple output rows

A simple UDF example

Add the Hive jars the project needs, write a Java class that extends UDF, and overload the evaluate method.

ToLowerCase.java

import org.apache.hadoop.hive.ql.exec.UDF;

public class ToLowerCase extends UDF {

    // must be public; the evaluate method may be overloaded
    public String evaluate(String field) {
        String result = field.toLowerCase();
        return result;
    }

}

Package the class as a jar and upload it to the server.

Add the jar to Hive's classpath:

add JAR /home/hadoop/udf.jar;

Create a temporary function and associate it with the compiled class:

0: jdbc:hive2://hadoop3:10000> create temporary function tolowercase as 'com.study.hive.udf.ToLowerCase';

From this point the custom function can be used in HQL:

0: jdbc:hive2://hadoop3:10000> select tolowercase('HELLO');

Parsing JSON data

Given the following raw JSON data (rating.json):

{"movie":"1193","rate":"5","timeStamp":"978300760","uid":"1"}
{"movie":"661","rate":"3","timeStamp":"978302109","uid":"1"}
{"movie":"914","rate":"3","timeStamp":"978301968","uid":"1"}
{"movie":"3408","rate":"4","timeStamp":"978300275","uid":"1"}
{"movie":"2355","rate":"5","timeStamp":"978824291","uid":"1"}
{"movie":"1197","rate":"3","timeStamp":"978302268","uid":"1"}
{"movie":"1287","rate":"5","timeStamp":"978302039","uid":"1"}
{"movie":"2804","rate":"5","timeStamp":"978300719","uid":"1"}
{"movie":"594","rate":"4","timeStamp":"978302268","uid":"1"}

The data needs to be loaded into the Hive warehouse, and in the end each JSON field should be extracted into its own column.


get_json_object(string json_string, string path)

Return value: string

Description: parses the JSON string json_string and returns the content at the specified path. If the input JSON string is invalid, NULL is returned. The function extracts only one value per call.

0: jdbc:hive2://hadoop3:10000> select get_json_object('{"movie":"594","rate":"4","timeStamp":"978302268","uid":"1"}','$.movie');

Create a json table and load the data into it:

0: jdbc:hive2://hadoop3:10000> create table json(data string);
No rows affected (0.983 seconds)
0: jdbc:hive2://hadoop3:10000> load data local inpath '/home/hadoop/json.txt' into table json;
No rows affected (1.046 seconds)
0: jdbc:hive2://hadoop3:10000> 

0: jdbc:hive2://hadoop3:10000> select 
. . . . . . . . . . . . . . .> get_json_object(data,'$.movie') as movie 
. . . . . . . . . . . . . . .> from json;


json_tuple(jsonStr, k1, k2, …)

Takes a JSON string and a set of keys (k1, k2, ...) and returns a tuple of values. It is more efficient than get_json_object because multiple keys are extracted in a single call.

0: jdbc:hive2://hadoop3:10000> select 
. . . . . . . . . . . . . . .>   b.b_movie,
. . . . . . . . . . . . . . .>   b.b_rate,
. . . . . . . . . . . . . . .>   b.b_timeStamp,
. . . . . . . . . . . . . . .>   b.b_uid   
. . . . . . . . . . . . . . .> from json a 
. . . . . . . . . . . . . . .> lateral view json_tuple(a.data,'movie','rate','timeStamp','uid') b as b_movie,b_rate,b_timeStamp,b_uid;

Transform

Hive's transform keyword makes it possible to call your own scripts from SQL. It is a good fit when Hive lacks a feature and you do not want to write a UDF for it.

Example

JSON data: {"movie":"1193","rate":"5","timeStamp":"978300760","uid":"1"}

Requirement: convert the timeStamp value into a day-of-week number.

  1. First load the rating.json file into a raw Hive table, rate_json:

    create table rate_json(line string) row format delimited;
    load data local inpath '/home/hadoop/rating.json' into table rate_json;
    
  2. Create the rate table to hold the fields parsed out of the JSON:

    create table rate(movie int, rate int, unixtime int, userid int) row format delimited fields
    terminated by '\t';
    
  3. Parse the JSON and store the results in the rate table:

    insert into table rate select
    get_json_object(line,'$.movie') as movie,
    get_json_object(line,'$.rate') as rate,
    get_json_object(line,'$.timeStamp') as unixtime,
    get_json_object(line,'$.uid') as userid
    from rate_json;
    
  4. Use transform + Python to convert unixtime into a weekday; first write a Python script:

    #!/bin/python
    # weekday_mapper.py
    import sys
    import datetime

    for line in sys.stdin:
        line = line.strip()
        movie, rate, unixtime, userid = line.split('\t')
        weekday = datetime.datetime.fromtimestamp(float(unixtime)).isoweekday()
        print '\t'.join([movie, rate, str(weekday), userid])
    
  5. Save the file, add it to Hive's list of auxiliary resources, and run the transform (the target table lastjsontable is created in the next step):

    hive>add file /home/hadoop/weekday_mapper.py;
    hive> insert into table lastjsontable select transform(movie,rate,unixtime,userid)
    using 'python weekday_mapper.py' as(movie,rate,weekday,userid) from rate;
    
  6. Create lastjsontable, the table that stores the data produced by the Python script (it must exist before the insert in step 5 runs):

    create table lastjsontable(movie int, rate int, weekday int, userid int) row format delimited
    fields terminated by '\t';
    
  7. Finally, check that the data looks correct:

    select distinct(weekday) from lastjsontable;
    

Handling special delimiters

Background: how Hive reads data:

1. First, a concrete implementation of InputFormat (default: org.apache.hadoop.mapred.TextInputFormat) reads the file data and returns records one at a time (a record can be a physical line, or a "line" in your own logic)

2. Then a concrete implementation of SerDe (default: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe) splits each returned record into fields

By default Hive only supports single-byte field delimiters. If the delimiter in the data file is multi-character, for example:

01||huangbo

02||xuzheng

03||wangbaoqiang

Parsing with RegexSerDe regular expressions

Create the table:

create table t_bi_reg(id string,name string)
row format serde 'org.apache.hadoop.hive.serde2.RegexSerDe'
with serdeproperties('input.regex'='(.*)\\|\\|(.*)','output.format.string'='%1$s %2$s')
stored as textfile;

Load the data and query it:

0: jdbc:hive2://hadoop3:10000> load data local inpath '/home/hadoop/data.txt' into table t_bi_reg;
No rows affected (0.747 seconds)
0: jdbc:hive2://hadoop3:10000> select a.* from t_bi_reg a;


Handling special delimiters with a custom InputFormat