View a Hive table's metadata
desc formatted emp;
List a Hive table's partitions
show partitions emp;
Add a partition to a Hive table
alter table emp add if not exists partition(dt='2017-01-01');
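The complementary statement removes a partition; on a managed table this also deletes the partition's data:
alter table emp drop if exists partition(dt='2017-01-01');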
Create a table partitioned by the dt field
create table emp(name string, salary bigint) partitioned by (dt string) row format delimited fields terminated by ',' lines terminated by '\n' stored as textfile;
Load data into a specific partition
load data local inpath '/tmp/a.txt' into table emp partition(dt='2017-01-01');
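Dropping the local keyword reads the path from HDFS instead, and the file is moved (not copied) into the partition directory; for example, assuming /tmp/a.txt already sits in HDFS:
load data inpath '/tmp/a.txt' into table emp partition(dt='2017-01-01');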
View the partition's directory on HDFS
hadoop fs -ls /user/hive/warehouse/emp/dt=2017-01-01
A Hive partition is only a directory name: the partition field is not a column in the table's data files, so the files carry no corresponding column; when you query, Hive fills the partition column in automatically. Multi-level partitions can be built too, as sketched below.
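A minimal multi-level sketch (emp_log, dt, and hour are hypothetical names); each extra partition key nests one more directory level, e.g. /user/hive/warehouse/emp_log/dt=2017-01-01/hour=08:
create table emp_log(name string, salary bigint)
partitioned by (dt string, hour string)
row format delimited fields terminated by ',' stored as textfile;
alter table emp_log add if not exists partition(dt='2017-01-01', hour='08');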
Dynamic partitions (partitions, including their directory structure, are created automatically when data is written)
View the current dynamic-partition mode:
set hive.exec.dynamic.partition.mode;
Switch to nonstrict so all partition values can come from the query itself, then insert:
set hive.exec.dynamic.partition.mode=nonstrict;
insert into table emp_2 partition(dt) select name, salary, dt from emp;
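emp_2 is not defined above; for the insert to run it needs the same data columns with dt as the partition key. A minimal sketch (the storage clauses are assumptions, mirroring emp):
create table emp_2(name string, salary bigint)
partitioned by (dt string)
row format delimited fields terminated by ',' stored as textfile;
With nonstrict mode set, the single insert...select then creates a dt=... directory for every distinct dt value in emp.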
Functions
Built-in functions
User-defined functions
udf: one row in, one row out, like map
udaf: many rows in, one row out (aggregation), like reduce
View a function's documentation:
desc function xxxx;
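Two built-ins make the split concrete: upper is a UDF (one output row per input row), sum is a UDAF (one output row per group). Against the emp table above:
select upper(name), salary from emp;
select dt, sum(salary) from emp group by dt;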
1. Upload the jar to HDFS:
hadoop fs -put hive-contrib.jar /user/hive/lib/
2. Add the jar from HDFS to the Hive session:
add jar hdfs:///user/hive/lib/hive-contrib.jar;
3. Create a temporary function that adds a sequence number to each row:
create temporary function row_sequence as 'org.apache.hadoop.hive.contrib.udf.UDFRowSequence';
Create a permanent function:
create function my_test.row_sequence as 'org.apache.hadoop.hive.contrib.udf.UDFRowSequence' using jar 'hdfs:///user/hive/lib/hive-contrib.jar';
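Unlike the temporary version, this registers the function in the metastore, so it survives across sessions; the my_test. prefix places it under that database.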
4. Query the table with the new function:
select row_sequence(),* from hive_attack_event;
udf.py converts its input to uppercase and writes it back out
select transform(fname,lname) using 'python2.6 udf.py' as (fname,l_name) from u;
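udf.py itself is not shown above; a minimal sketch, assuming Hive's transform protocol (tab-separated fields on stdin, one row per line; tab-separated output on stdout). The script also has to be registered first with add file udf.py;
#!/usr/bin/env python
# udf.py: uppercase every tab-separated field of every input row
import sys
for line in sys.stdin:
    fields = line.rstrip('\n').split('\t')
    print('\t'.join(f.upper() for f in fields))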