Hive: Common Operations

Prerequisites:

Hadoop 2.7.3 installed (on Linux); reference: Installing Hadoop on Ubuntu

Hive 2.3.6 installed (on Linux); reference: Installing Hive

 

Prepare the source data:

1. Open a terminal and create a file named emp.csv

$ nano emp.csv

Enter the following content, then save and exit.

7369,SMITH,CLERK,7902,1980/12/17,800,,20
7499,ALLEN,SALESMAN,7698,1981/2/20,1600,300,30
7521,WARD,SALESMAN,7698,1981/2/22,1250,500,30
7566,JONES,MANAGER,7839,1981/4/2,2975,,20
7654,MARTIN,SALESMAN,7698,1981/9/28,1250,1400,30
7698,BLAKE,MANAGER,7839,1981/5/1,2850,,30
7782,CLARK,MANAGER,7839,1981/6/9,2450,,10
7788,SCOTT,ANALYST,7566,1987/4/19,3000,,20
7839,KING,PRESIDENT,,1981/11/17,5000,,10
7844,TURNER,SALESMAN,7698,1981/9/8,1500,0,30
7876,ADAMS,CLERK,7788,1987/5/23,1100,,20
7900,JAMES,CLERK,7698,1981/12/3,950,,30
7902,FORD,ANALYST,7566,1981/12/3,3000,,20
7934,MILLER,CLERK,7782,1982/1/23,1300,,10

2. Create a file named dept.csv

$ nano dept.csv

Enter the following content, then save and exit.

10,ACCOUNTING,NEW YORK
20,RESEARCH,DALLAS
30,SALES,CHICAGO
40,OPERATIONS,BOSTON

 

Lab procedure:

(1) Upload the two csv files above to a directory in HDFS, such as /001/hive (replace 001 with your student ID).

Enter the following commands in the Linux terminal:

$ hdfs dfs -mkdir -p /001/hive
$ hdfs dfs -put dept.csv /001/hive
$ hdfs dfs -put emp.csv /001/hive
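
To confirm the upload, you can list the directory (an optional check):

$ hdfs dfs -ls /001/hive

Both csv files should appear in the listing.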

(2) Create the employee table (emp + student ID, e.g. emp001). Note: run the statements below at the Hive command line.

First, start Hadoop:

$ start-all.sh
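
Optionally, check that the Hadoop daemons came up with jps (a quick sanity check; the exact process list depends on your setup):

$ jps

Processes such as NameNode, DataNode and ResourceManager should be listed.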

Then enter the Hive command line:

$ hive

Create the employee table (emp + student ID, e.g. emp001):

create table emp001(empno int,ename string,job string,mgr int,hiredate string,sal int,comm int,deptno int) row format delimited fields terminated by ',';
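
To confirm the table structure (optional):

desc emp001;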

(3) Create the department table (dept + student ID, e.g. dept001)

create table dept001(deptno int,dname string,loc string) row format delimited fields terminated by ',';
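
You can also list the tables in the current database to confirm both were created:

show tables;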

(4) Load the data

load data inpath '/001/hive/emp.csv' into table emp001;  
load data inpath '/001/hive/dept.csv' into table dept001;

Note: the csv file paths here are HDFS paths, not local paths.
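
Because load data inpath reads from HDFS, the files are moved (not copied) into the table's warehouse directory, so after loading they no longer sit under /001/hive. You can verify this from the Hive command line, assuming the default warehouse location /user/hive/warehouse:

dfs -ls /001/hive;
dfs -ls /user/hive/warehouse/emp001;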

 

(5) Query the table data

Query the data in emp001:

select * from emp001;

 

Query the data in dept001:

select * from dept001;

(6) Create a table partitioned by the employee's department number, named emp_part + student ID, e.g. emp_part001

create table emp_part001(empno int,ename string,job string,mgr int,hiredate string,sal int,comm int) partitioned by (deptno int) row format delimited fields terminated by ',';

Insert data into the partitioned table, specifying the target partition for each statement (the data comes from a subquery):

insert into table emp_part001 partition(deptno=10) select empno,ename,job,mgr,hiredate,sal,comm from emp001 where deptno=10;
insert into table emp_part001 partition(deptno=20) select empno,ename,job,mgr,hiredate,sal,comm from emp001 where deptno=20;
insert into table emp_part001 partition(deptno=30) select empno,ename,job,mgr,hiredate,sal,comm from emp001 where deptno=30;
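
As an alternative to one insert statement per partition, dynamic partitioning can distribute all rows in a single statement. A minimal sketch, assuming your installation allows dynamic partitioning in nonstrict mode (the partition column must come last in the select list):

set hive.exec.dynamic.partition=true;
set hive.exec.dynamic.partition.mode=nonstrict;
insert into table emp_part001 partition(deptno) select empno,ename,job,mgr,hiredate,sal,comm,deptno from emp001;

Note that insert into appends data, so run either the three static inserts above or the dynamic variant, not both.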

Show the partitions of emp_part001:

show partitions emp_part001;

View the HDFS directories where the partitioned table is stored:

dfs -ls /user/hive/warehouse/emp_part001;

 

View the data of a single partition, for example the deptno=10 partition:

dfs -cat /user/hive/warehouse/emp_part001/deptno=10/000000_0;

View the other two partitions:

dfs -cat /user/hive/warehouse/emp_part001/deptno=20/000000_0;
dfs -cat /user/hive/warehouse/emp_part001/deptno=30/000000_0;
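
The same partitioned data can also be read through HiveQL; filtering on the partition column makes Hive scan only the matching partition directory, for example:

select * from emp_part001 where deptno=10;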

(7) Create a bucketed table named emp_bucket + student ID, e.g. emp_bucket001, bucketed by the employee's job

Enable bucketing:

set hive.enforce.bucketing = true;

Create the bucketed table:

create table emp_bucket001(empno int,ename string,job string,mgr int,hiredate string,sal int,comm int,deptno int) clustered by (job) into 4 buckets row format delimited fields terminated by ',';

Insert data through a subquery:

insert into emp_bucket001 select * from emp001;

List the bucket files in HDFS:

dfs -ls /user/hive/warehouse/emp_bucket001;
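
Bucketing also supports efficient table sampling. As an illustration (a sketch based on the 4-bucket layout created above), TABLESAMPLE can read a single bucket:

select * from emp_bucket001 tablesample(bucket 1 out of 4 on job) t;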

(8) Query employee information: employee number, name, salary

select empno,ename,sal from emp001;

(9) Multi-table query

select dept001.dname,emp001.ename from emp001,dept001 where emp001.deptno=dept001.deptno;
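
The same result can be written with explicit JOIN syntax, which many find easier to read:

select d.dname,e.ename from emp001 e join dept001 d on e.deptno=d.deptno;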

(10) Build a report: give employees a raise based on their job title, showing both the old and the new salary.

Raise rules: PRESIDENT gets 1000 more, MANAGER gets 800 more, and everyone else gets 400 more.

select empno,ename,job,sal,
case job when 'PRESIDENT' then sal+1000
         when 'MANAGER' then sal+800
         else sal+400
end as new_sal
from emp001;

 

Done!
