Overview
Apache Sqoop(TM) is a tool designed for efficiently transferring bulk data between Apache Hadoop and structured data stores such as relational databases. Under the hood it generates MapReduce jobs to import and export data between relational databases and HDFS, HBase, and Hive.
sqoop-import
The import tool imports a single table from an RDBMS into HDFS. Each row of the table is represented as a separate record in HDFS. Records can be stored as text files (one record per line) or in binary representation as Avro or SequenceFiles.
Import/export reference: http://sqoop.apache.org/docs/1.4.7/SqoopUserGuide.html
RDBMS -> HDFS
Full-table import
sqoop import "-Dorg.apache.sqoop.splitter.allow_text_splitter=true" \
--driver com.mysql.jdbc.Driver \
--connect jdbc:mysql://zly:3306/mysql?characterEncoding=UTF-8 \
--username root \
--password 123456 \
--table t_user \
--num-mappers 4 \
--fields-terminated-by '\t' \
--target-dir /mysql/test/t_user \
--delete-target-dir
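A quick way to verify the result (a sketch; with 4 mappers the output is split across part-m-00000 .. part-m-00003):
hdfs dfs -ls /mysql/test/t_user
hdfs dfs -cat /mysql/test/t_user/part-m-00000 | head -n 5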
Column import
sqoop import "-Dorg.apache.sqoop.splitter.allow_text_splitter=true" \
--driver com.mysql.jdbc.Driver \
--connect jdbc:mysql://zly:3306/mysql?characterEncoding=UTF-8 \
--username root \
--password 123456 \
--table t_user \
--columns "id,name,age" \
--where "id > 2 or name like '%z%'" \
--target-dir /mysql/test/t_user1 \
--delete-target-dir \
--num-mappers 4 \
--fields-terminated-by '\t'
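Conceptually, --columns and --where are folded into the generated statement; each mapper runs roughly the following (an illustration, not the literal SQL Sqoop issues):
SELECT id, name, age FROM t_user WHERE (id > 2 or name like '%z%') AND (<split condition on id>)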
Free-form query import
sqoop import "-Dorg.apache.sqoop.splitter.allow_text_splitter=true" \
--driver com.mysql.jdbc.Driver \
--connect jdbc:mysql://zly:3306/mysql \
--username root \
--password 123456 \
--num-mappers 3 \
--fields-terminated-by '\t' \
--query 'select id, name, sex, age, birthDay from t_user where $CONDITIONS LIMIT 100' \
--split-by id \
--target-dir /mysql/test/t_user2 \
--delete-target-dir
To import the results of a query in parallel, each map task executes a copy of the query, with the results partitioned by bounding conditions inferred by Sqoop. The query must contain the token $CONDITIONS, which each Sqoop process replaces with a unique condition expression on the splitting column; you must therefore also select a splitting column with --split-by.
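For illustration, with --split-by id and three mappers, each mapper executes the query with the token replaced by its own range (the boundary values below are hypothetical; Sqoop derives them from MIN(id) and MAX(id)):
select id, name, sex, age, birthDay from t_user where ( id >= 1 ) AND ( id < 34 ) LIMIT 100
select id, name, sex, age, birthDay from t_user where ( id >= 34 ) AND ( id < 67 ) LIMIT 100
select id, name, sex, age, birthDay from t_user where ( id >= 67 ) AND ( id <= 100 ) LIMIT 100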
RDBMS -> Hive
Full import
sqoop import "-Dorg.apache.sqoop.splitter.allow_text_splitter=true" \
--connect jdbc:mysql://zly:3306/mysql \
--username root \
--password 123456 \
--table t_user \
--num-mappers 3 \
--hive-import \
--fields-terminated-by "\t" \
--hive-overwrite \
--hive-table baizhi.t_user
Note: copy the Hive jars into Sqoop's lib directory before running:
cp /usr/soft/apache-hive-1.2.2-bin/lib/hive-common-1.2.2.jar /usr/soft/sqoop-1.4.7/lib/
cp /usr/soft/apache-hive-1.2.2-bin/lib/hive-exec-1.2.2.jar /usr/soft/sqoop-1.4.7/lib/
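Once the import finishes, the data should be queryable from Hive (a quick sketch):
hive -e 'select * from baizhi.t_user limit 5;'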
Importing into a partition
sqoop import "-Dorg.apache.sqoop.splitter.allow_text_splitter=true" \
--connect jdbc:mysql://zly:3306/mysql \
--username root \
--password 123456 \
--table t_user \
--num-mappers 3 \
--hive-import \
--fields-terminated-by "\t" \
--hive-overwrite \
--hive-table baizhi.t_user2 \
--hive-partition-key city \
--hive-partition-value 'bj'
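The rows land in the static partition city='bj'; a quick check (sketch):
hive -e 'show partitions baizhi.t_user2;'
hive -e "select * from baizhi.t_user2 where city='bj' limit 3;"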
RDBMS -> HBase
sqoop import "-Dorg.apache.sqoop.splitter.allow_text_splitter=true" \
--connect jdbc:mysql://zly:3306/mysql \
--username root \
--password 123456 \
--table t_user \
--num-mappers 3 \
--hbase-table baizhi:t_user \
--column-family cf1 \
--hbase-create-table \
--hbase-row-key id \
--hbase-bulkload
Start the HBase service and create the baizhi namespace first; the t_user table is created automatically by Sqoop (--hbase-create-table).
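Creating the namespace from the HBase shell (a minimal sketch):
hbase shell
create_namespace 'baizhi'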
Note: if the existing data is inconsistent, wipe HBase's data first:
hbase clean --cleanAll
hdfs dfs -rm -r -f /hbase
start-hbase.sh
hbase shell
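After the import, a quick scan from the HBase shell confirms the rows (sketch; row keys come from the id column):
scan 'baizhi:t_user', {LIMIT => 3}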
sqoop-export
The export tool exports a set of files from HDFS back to an RDBMS. The target table must already exist in the database. The input files are read and parsed into a set of records according to the user-specified delimiters.
HDFS -> MySQL
sqoop export \
--connect jdbc:mysql://zly:3306/mysql \
--username root \
--password 123456 \
--table t_user3 \
--update-key id \
--update-mode allowinsert \
--export-dir /demo/src \
--input-fields-terminated-by '\t'
--update-key id: use the id column to check for existing rows.
--update-mode allowinsert: insert the row if the id does not exist, update it if it does. The other legal value is updateonly, which only updates rows that already exist and never inserts new ones.
Upload the data below into the /demo/src directory in HDFS before running the export, and mind the format: fields are tab-separated. If the export fails to parse, prepare the data yourself with tab delimiters.
0 zhangsan true 20 2020-01-11
1 lisi false 25 2020-01-10
3 wangwu true 36 2020-01-17
4 zhaoliu false 50 1990-02-08
5 win7 true 20 1991-02-08
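A sketch of staging these records into HDFS, assuming they are saved locally as a tab-separated file (the path /root/t_user3.txt is hypothetical):
hdfs dfs -mkdir -p /demo/src
hdfs dfs -put /root/t_user3.txt /demo/src/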
Create the t_user3 table in MySQL:
create table t_user3(
id int primary key auto_increment,
name VARCHAR(32),
sex boolean,
age int,
birthDay date
) CHARACTER SET=utf8;
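After the export runs, the rows should be visible in MySQL; a quick check (sketch; the JDBC URL above points at the mysql database):
mysql -uroot -p123456 -e 'SELECT * FROM mysql.t_user3;'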
HBase -> MySQL
① Prepare the data: t_employee
7369,SMITH,CLERK,7902,1980-12-17 00:00:00,800,\N,20
7499,ALLEN,SALESMAN,7698,1981-02-20 00:00:00,1600,300,30
7521,WARD,SALESMAN,7698,1981-02-22 00:00:00,1250,500,30
7566,JONES,MANAGER,7839,1981-04-02 00:00:00,2975,\N,20
7654,MARTIN,SALESMAN,7698,1981-09-28 00:00:00,1250,1400,30
7698,BLAKE,MANAGER,7839,1981-05-01 00:00:00,2850,\N,30
7782,CLARK,MANAGER,7839,1981-06-09 00:00:00,2450,\N,10
7788,SCOTT,ANALYST,7566,1987-04-19 00:00:00,1500,\N,20
7839,KING,PRESIDENT,\N,1981-11-17 00:00:00,5000,\N,10
7844,TURNER,SALESMAN,7698,1981-09-08 00:00:00,1500,0,30
7876,ADAMS,CLERK,7788,1987-05-23 00:00:00,1100,\N,20
7900,JAMES,CLERK,7698,1981-12-03 00:00:00,950,\N,30
7902,FORD,ANALYST,7566,1981-12-03 00:00:00,3000,\N,20
7934,MILLER,CLERK,7782,1982-01-23 00:00:00,1300,\N,10
② init.sql
create database if not exists baizhi;
use baizhi;
drop table if exists t_employee;
CREATE TABLE t_employee(
empno INT,
ename STRING,
job STRING,
mgr INT,
hiredate TIMESTAMP,
sal DECIMAL(7,2),
comm DECIMAL(7,2),
deptno INT)
row format delimited
fields terminated by ','
collection items terminated by '|'
map keys terminated by '>'
lines terminated by '\n'
stored as textfile;
load data local inpath '/root/t_employee' overwrite into table t_employee;
drop table if exists t_employee_hbase;
create external table t_employee_hbase(
empno INT,
ename STRING,
job STRING,
mgr INT,
hiredate TIMESTAMP,
sal DECIMAL(7,2),
comm DECIMAL(7,2),
deptno INT)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES("hbase.columns.mapping" =
":key,cf1:name,cf1:job,cf1:mgr,cf1:hiredate,cf1:sal,cf1:comm,cf1:deptno")
TBLPROPERTIES("hbase.table.name" = "baizhi:t_employee");
insert overwrite table t_employee_hbase select empno,ename,job,mgr,hiredate,sal,comm,deptno from t_employee;
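With the data from step ① saved at /root/t_employee (the path referenced by the load data statement above), the script can be run with:
hive -f init.sql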
③ Export the HBase data to HDFS: bb.sql
use baizhi;
INSERT OVERWRITE DIRECTORY '/demo/src/employee'
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE
select empno,ename,job,mgr,hiredate,sal,comm,deptno from t_employee_hbase;
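Run it and inspect the exported files (sketch; the exact output file name may vary):
hive -f bb.sql
hdfs dfs -cat /demo/src/employee/000000_0 | head -n 3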
④ Export the HDFS data to the RDBMS
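The export tool requires the target table to already exist; its DDL is not shown here, so the following is a minimal sketch matching the Hive schema (column types are assumptions), created in the database referenced by the JDBC URL:
use mysql;
create table t_employee(
empno int primary key,
ename VARCHAR(32),
job VARCHAR(32),
mgr int,
hiredate datetime,
sal decimal(7,2),
comm decimal(7,2),
deptno int
) CHARACTER SET=utf8;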
sqoop export \
--connect jdbc:mysql://zly:3306/mysql \
--username root \
--password 123456 \
--table t_employee \
--update-key empno \
--update-mode allowinsert \
--export-dir /demo/src/employee \
--input-fields-terminated-by ',' \
--input-null-string '\\N' \
--input-null-non-string '\\N';
⑤ The data has been exported successfully.
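A quick check from MySQL (sketch):
mysql -uroot -p123456 -e 'SELECT empno, ename, sal FROM mysql.t_employee LIMIT 5;'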