I. Importing Data
1. load
1.1 Load from the local filesystem:
load data local inpath '/localpath' into table table1;
1.2 Load from HDFS (no local keyword):
load data inpath '/hdfs_path' into table table1;
2. insert into ... select
insert into table table1 select id, name from table2;
3. create ... as select
create table table1 as select * from table2 where 1=1;
4. import
import can only load data that was previously exported with the export command. The target table may be left uncreated; if it already exists, it must be empty.
import table student2
from '/user/hive/warehouse/export/student';
5. location
create external table if not exists student5(id int, name string)
row format delimited fields terminated by '\t'
location '/student';
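The location clause only points the table at an HDFS directory; the data files are typically uploaded to that directory separately. A minimal sketch, assuming a local file /opt/module/datas/student.txt and the target directory /student (both illustrative):
dfs -mkdir -p /student;
dfs -put /opt/module/datas/student.txt /student;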
6. sqoop
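A minimal sketch of importing from MySQL into the Hive table table1 with Sqoop (the JDBC URL, credentials, and source table name are illustrative):
sqoop import \
--connect jdbc:mysql://hadoop102:3306/source_db \
--username root \
--password 123456 \
--table source_table \
--num-mappers 1 \
--hive-import \
--hive-table table1 \
--fields-terminated-by '\t'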
II. Exporting Data
1. insert
Export to the local filesystem:
insert overwrite local directory '/opt/module/hive/data/export/student1'
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
select * from student;
Export to HDFS (no local keyword):
insert overwrite directory '/opt/module/hive/data/export/student1'
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
select * from student;
2. dfs export
dfs -get /user/hive/warehouse/student/student.txt
/opt/module/data/export/student3.txt;
3. export
export and import are mainly used to migrate Hive tables between two Hadoop clusters.
export table default.student to '/user/hive/warehouse/export/student';
4. hive shell (hive -e/-f)
Run a SQL statement with hive -e and redirect the output to a file:
bin/hive -e 'select * from default.student;' > /opt/module/hive/data/export/student4.txt;
Run a SQL script file with hive -f and redirect the output to a file:
bin/hive -f hivesql.sql > /opt/module/hive/data/export/student4.txt;
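hivesql.sql simply contains the query (or queries) to run, for example (the statement is illustrative):
-- hivesql.sql
select * from default.student;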