Sqoop: Introduction, Installation, and Importing/Exporting Data Between the Hadoop Platform and Relational Databases

Apache Sqoop
Official site: http://sqoop.apache.org/
Download: https://mirrors.tuna.tsinghua.edu.cn/apache/sqoop/1.4.7/sqoop-1.4.7.bin__hadoop-2.6.0.tar.gz
Import/export reference: http://sqoop.apache.org/docs/1.4.7/SqoopUserGuide.html

Overview

Apache Sqoop(TM) is a tool designed to efficiently transfer bulk data between Apache Hadoop and structured datastores such as relational databases. Through its embedded MapReduce jobs it imports and exports data between relational databases and HDFS, HBase, Hive, and so on.


Installation

1. Visit the Sqoop website at http://sqoop.apache.org/ and pick a version to download. This walkthrough uses 1.4.7, downloaded from https://mirrors.tuna.tsinghua.edu.cn/apache/sqoop/1.4.7/sqoop-1.4.7.bin__hadoop-2.6.0.tar.gz. Once the tarball is downloaded, extract Sqoop:

[root@Centos ~]# tar -zxf sqoop-1.4.7.bin__hadoop-2.6.0.tar.gz -C /usr/
[root@Centos ~]# cd /usr/
[root@Centos usr]# mv sqoop-1.4.7.bin__hadoop-2.6.0 sqoop-1.4.7
[root@Centos ~]# cd /usr/sqoop-1.4.7/

2. Configure the SQOOP_HOME environment variable

[root@CentOS sqoop-1.4.7]# vi ~/.bashrc
SQOOP_HOME=/usr/sqoop-1.4.7
HADOOP_HOME=/usr/hadoop-2.9.2
HIVE_HOME=/usr/apache-hive-1.2.2-bin
JAVA_HOME=/usr/java/latest
PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HIVE_HOME/bin:$SQOOP_HOME/bin
CLASSPATH=.
export JAVA_HOME
export PATH
export HADOOP_HOME
export CLASSPATH
export HIVE_HOME
export SQOOP_HOME
[root@CentOS sqoop-1.4.7]# source ~/.bashrc

3. Rename and edit the conf/sqoop-env-template.sh configuration file

[root@CentOS sqoop-1.4.7]# mv conf/sqoop-env-template.sh conf/sqoop-env.sh
[root@CentOS sqoop-1.4.7]# vi conf/sqoop-env.sh
#Set path to where bin/hadoop is available
export HADOOP_COMMON_HOME=/usr/hadoop-2.9.2
#Set path to where hadoop-*-core.jar is available
export HADOOP_MAPRED_HOME=/usr/hadoop-2.9.2
#set the path to where bin/hbase is available
#export HBASE_HOME=
#Set the path to where bin/hive is available
export HIVE_HOME=/usr/apache-hive-1.2.2-bin
#Set the path for where zookeeper config dir is
export ZOOCFGDIR=/usr/zookeeper-3.4.6/conf

4. Copy the MySQL driver JAR into Sqoop's lib directory

[root@CentOS ~]# cp /usr/apache-hive-1.2.2-bin/lib/mysql-connector-java-5.1.48.jar /usr/sqoop-1.4.7/lib/

5. Verify that Sqoop is installed correctly

[root@CentOS sqoop-1.4.7]# sqoop version
Warning: /usr/sqoop-1.4.7/../hbase does not exist! HBase imports will fail.
Please set $HBASE_HOME to the root of your HBase installation.
Warning: /usr/sqoop-1.4.7/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /usr/sqoop-1.4.7/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
Warning: /usr/sqoop-1.4.7/../zookeeper does not exist! Accumulo imports will fail.
Please set $ZOOKEEPER_HOME to the root of your Zookeeper installation.
19/12/22 08:40:12 INFO sqoop.Sqoop: Running Sqoop version: 1.4.7
Sqoop 1.4.7
git commit id 2328971411f57f0cb683dfb79d19d4d19d185dd8
Compiled by maugli on Thu Dec 21 15:59:58 STD 2017
[root@CentOS sqoop-1.4.7]# sqoop list-tables --connect jdbc:mysql://192.168.52.1:3306/mysql --username root --password root
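
The warnings about $HBASE_HOME, $HCAT_HOME, $ACCUMULO_HOME and $ZOOKEEPER_HOME are harmless as long as those components are not used. As one more sanity check, sqoop help lists the available tools (import, export, list-tables, and so on):

[root@CentOS sqoop-1.4.7]# sqoop help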

Data Import and Export

Import/export reference: http://sqoop.apache.org/docs/1.4.7/SqoopUserGuide.html

Importing from an RDBMS into HDFS

sqoop-import

The import tool imports an individual table from an RDBMS into HDFS. Each row of the table is represented as a separate record in HDFS. Records can be stored as text files (one record per line), or in binary representation as Avro or SequenceFiles.

Full-table import into HDFS

sqoop import \
--driver com.mysql.jdbc.Driver \
--connect jdbc:mysql://Centos:3306/mysql?characterEncoding=UTF-8 \
--username root \
--password root \
--table t_user \
--num-mappers 1 \
--fields-terminated-by '\t' \
--target-dir /RDBMS/mysql/bloguser \
--delete-target-dir

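To inspect the import result in HDFS, list and print the generated files (with --num-mappers 1 the output is normally a single part-m-00000 file; that file name is an assumption here):

[root@CentOS ~]# hdfs dfs -ls /RDBMS/mysql/bloguser
[root@CentOS ~]# hdfs dfs -cat /RDBMS/mysql/bloguser/part-m-00000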

Importing selected columns into HDFS

sqoop import \
-Dorg.apache.sqoop.splitter.allow_text_splitter=true \
--driver com.mysql.jdbc.Driver \
--connect jdbc:mysql://Centos:3306/mysql?characterEncoding=UTF-8 \
--username root \
--password root \
--table blog \
--columns "id,title,author" \
--where "title like '%一%'" \
--target-dir /RDBMS/mysql/blog \
--delete-target-dir \
--num-mappers 2 \
--fields-terminated-by '\t'


Importing the result of a free-form query

sqoop import \
--driver com.mysql.jdbc.Driver \
--connect jdbc:mysql://Centos:3306/mysql?characterEncoding=UTF-8 \
--username root \
--password root \
--num-mappers 3 \
--fields-terminated-by '\t' \
--query 'select id,title,author from blog where $CONDITIONS LIMIT 100' \
--split-by id \
--target-dir /RDBMS/mysql/blog01 \
--delete-target-dir
  • If you want to import the results of a query in parallel, each map task will need to execute a copy of the query, with results partitioned by bounding conditions inferred by Sqoop. Your query must include the token $CONDITIONS, which each Sqoop process replaces with a unique condition expression, and you must also select a splitting column with --split-by (see the sketch below).
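
For intuition, here is a hypothetical illustration of that substitution. Assuming Sqoop's boundary query found MIN(id)=1 and MAX(id)=91 (values invented for this sketch) and --num-mappers 3, each map task runs the query with its own range in place of $CONDITIONS:

-- mapper 1
select id,title,author from blog where ( id >= 1 AND id < 31 ) LIMIT 100
-- mapper 2
select id,title,author from blog where ( id >= 31 AND id < 61 ) LIMIT 100
-- mapper 3
select id,title,author from blog where ( id >= 61 AND id <= 91 ) LIMIT 100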

Importing from an RDBMS into Hive

Full import

sqoop import \
--connect jdbc:mysql://Centos:3306/test \
--username root \
--password root \
--table t_user \
--num-mappers 3 \
--hive-import \
--fields-terminated-by "\t" \
--hive-overwrite \
--hive-table baizhi.t_user

If the import fails with a ClassNotFoundException for Hive classes (e.g. org.apache.hadoop.hive.conf.HiveConf), copy the Hive jars into Sqoop's lib directory and rerun:

[root@Centos ~]# cp /usr/apache-hive-1.2.2-bin/lib/hive-common-1.2.2.jar /usr/sqoop-1.4.7/lib/
[root@Centos ~]# cp /usr/apache-hive-1.2.2-bin/lib/hive-exec-1.2.2.jar /usr/sqoop-1.4.7/lib/
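
To spot-check the result in Hive (a quick query, assuming the import completed):

[root@Centos ~]# hive -e "select * from baizhi.t_user limit 3"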


Importing into a Hive partition

sqoop import \
--connect jdbc:mysql://Centos:3306/test \
--username root \
--password root \
--table t_user \
--num-mappers 3 \
--hive-import \
--fields-terminated-by "\t" \
--hive-overwrite \
--hive-table baizhi.t_user \
--hive-partition-key city \
--hive-partition-value 'bj'
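
Every imported row lands in the static partition city='bj'. A quick way to confirm the partition was created (assuming the import completed):

[root@Centos ~]# hive -e "show partitions baizhi.t_user"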


RDBMS -> HBase

sqoop import \
--connect jdbc:mysql://Centos:3306/test \
--username root \
--password root \
--table t_user \
--num-mappers 3 \
--hbase-table baizhi:t_user \
--column-family cf1 \
--hbase-create-table \
--hbase-row-key id \
--hbase-bulkload
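
Note that --hbase-create-table creates the target table but not the baizhi namespace; if the namespace does not exist yet, create it before running the import, and scan a few rows afterwards to verify (HBase shell):

hbase(main):001:0> create_namespace 'baizhi'
hbase(main):002:0> scan 'baizhi:t_user', {LIMIT => 2}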


Exporting from HDFS to an RDBMS

sqoop-export

  • The export tool exports a set of files from HDFS back to an RDBMS. The target table must already exist in the database. The input files are read and parsed into a set of records according to the user-specified delimiters.

HDFS -> MySQL

Data in HDFS to be exported:

0 zhangsan true 20 2020-01-11
1 lisi false 25 2020-01-10
3 wangwu true 36 2020-01-17
4 zhaoliu false 50 1990-02-08
5 win7 true 20 1991-02-08

Export command (first create the target table in MySQL, then run sqoop export):

create table t_user(
id int primary key auto_increment,
name VARCHAR(32),
sex boolean,
age int,
birthDay date
) CHARACTER SET=utf8;
sqoop export \
--connect jdbc:mysql://Centos:3306/test \
--username root \
--password root \
--table t_user \
--update-key id \
--update-mode allowinsert \
--export-dir /demo/src \
--input-fields-terminated-by '\t'
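
To verify the exported rows landed in MySQL (same credentials as above):

[root@Centos ~]# mysql -uroot -proot -e "select * from test.t_user"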


  • The update mode can be updateonly or allowinsert. updateonly (the default) only updates rows that already exist in the target table; allowinsert additionally inserts rows whose --update-key value is not present yet, i.e. an upsert.

HBase -> RDBMS (via Hive and HDFS)

Sqoop cannot export directly from HBase, so the route here is HBase -> Hive -> HDFS -> RDBMS.

① Prepare the test data: save the records below as /root/t_employee, then create a Hive table t_employee and load them with the DDL that follows.

7369,SMITH,CLERK,7902,1980-12-17 00:00:00,800,\N,20
7499,ALLEN,SALESMAN,7698,1981-02-20 00:00:00,1600,300,30
7521,WARD,SALESMAN,7698,1981-02-22 00:00:00,1250,500,30
7566,JONES,MANAGER,7839,1981-04-02 00:00:00,2975,\N,20
7654,MARTIN,SALESMAN,7698,1981-09-28 00:00:00,1250,1400,30
7698,BLAKE,MANAGER,7839,1981-05-01 00:00:00,2850,\N,30
7782,CLARK,MANAGER,7839,1981-06-09 00:00:00,2450,\N,10
7788,SCOTT,ANALYST,7566,1987-04-19 00:00:00,1500,\N,20
7839,KING,PRESIDENT,\N,1981-11-17 00:00:00,5000,\N,10
7844,TURNER,SALESMAN,7698,1981-09-08 00:00:00,1500,0,30
7876,ADAMS,CLERK,7788,1987-05-23 00:00:00,1100,\N,20
7900,JAMES,CLERK,7698,1981-12-03 00:00:00,950,\N,30
7902,FORD,ANALYST,7566,1981-12-03 00:00:00,3000,\N,20
7934,MILLER,CLERK,7782,1982-01-23 00:00:00,1300,\N,10

create database if not exists baizhi;
use baizhi;
drop table if exists t_employee;
CREATE TABLE t_employee(
	empno INT,
	ename STRING,
	job STRING,
	mgr INT,
	hiredate TIMESTAMP,
	sal DECIMAL(7,2),
	comm DECIMAL(7,2),
	deptno INT)
row format delimited
fields terminated by ','
collection items terminated by '|'
map keys terminated by '>'
lines terminated by '\n'
stored as textfile;
load data local inpath '/root/t_employee' overwrite into table t_employee;

② Load the data into HBase through a Hive external table backed by HBase, then export it back out to a plain HDFS directory:
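
With HBaseStorageHandler, an external Hive table assumes the underlying HBase table already exists; if it does not, create it first (namespace and table name taken from the DDL below):

hbase(main):001:0> create_namespace 'baizhi'
hbase(main):002:0> create 'baizhi:t_employee', 'cf1'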

drop table if exists t_employee_hbase;
create external table t_employee_hbase(
	empno INT,
	ename STRING,
	job STRING,
	mgr INT,
	hiredate TIMESTAMP,
	sal DECIMAL(7,2),
	comm DECIMAL(7,2),
	deptno INT)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES("hbase.columns.mapping" =
":key,cf1:name,cf1:job,cf1:mgr,cf1:hiredate,cf1:sal,cf1:comm,cf1:deptno")
TBLPROPERTIES("hbase.table.name" = "baizhi:t_employee");
insert overwrite table t_employee_hbase
select empno,ename,job,mgr,hiredate,sal,comm,deptno from t_employee;

INSERT OVERWRITE DIRECTORY '/demo/src/employee'
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE
select empno,ename,job,mgr,hiredate,sal,comm,deptno from t_employee_hbase;
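
Before the final export, it helps to confirm what Hive wrote to the staging directory (000000_0 is Hive's usual output file name and is assumed here):

[root@CentOS ~]# hdfs dfs -cat /demo/src/employee/000000_0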

③ Export the data from HDFS to the RDBMS
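
sqoop-export requires the target table to already exist in MySQL. The original walkthrough does not show it, so the schema below is an assumption that mirrors the Hive table (DATETIME is used for hiredate to avoid MySQL's implicit TIMESTAMP defaults):

create table t_employee(
  empno int primary key,
  ename varchar(32),
  job varchar(32),
  mgr int,
  hiredate datetime,
  sal decimal(7,2),
  comm decimal(7,2),
  deptno int
) CHARACTER SET=utf8;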

sqoop export \
--connect jdbc:mysql://Centos:3306/test \
--username root \
--password root \
--table t_employee \
--update-key empno \
--update-mode allowinsert \
--export-dir /demo/src/employee \
--input-fields-terminated-by ',' \
--input-null-string '\\N' \
--input-null-non-string '\\N';