The Sqoop Auxiliary Tool

Overview

Apache Sqoop(TM) is a tool designed for efficiently transferring bulk data between Apache Hadoop and structured data stores such as relational databases. It uses embedded MapReduce jobs to import and export data between relational databases and HDFS, HBase, Hive, and so on.

Installation

1. Visit the Sqoop website at http://sqoop.apache.org/ and download the appropriate version; this walkthrough uses 1.4.7. After downloading the tarball, extract Sqoop:
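For example, the release tarball can be fetched directly from the Apache archive (the archive URL is an assumption about where the 1.4.7 build is still hosted):

[root@CentOS ~]# wget https://archive.apache.org/dist/sqoop/1.4.7/sqoop-1.4.7.bin__hadoop-2.6.0.tar.gz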

[root@CentOS ~]# tar -zxf sqoop-1.4.7.bin__hadoop-2.6.0.tar.gz -C /usr/
[root@CentOS ~]# cd /usr/
[root@CentOS usr]# mv sqoop-1.4.7.bin__hadoop-2.6.0 sqoop-1.4.7
[root@CentOS ~]# cd /usr/sqoop-1.4.7/

2. Configure the SQOOP_HOME environment variable

[root@CentOS sqoop-1.4.7]# vi ~/.bashrc 
M2_HOME=/usr/apache-maven-3.6.3
SQOOP_HOME=/usr/sqoop-1.4.7
HIVE_HOME=/usr/apache-hive-1.2.2-bin
JAVA_HOME=/usr/java/latest
HADOOP_HOME=/usr/hadoop-2.9.2/
HBASE_HOME=/usr/hbase-1.2.4/
ZOOKEEPER_HOME=/usr/zookeeper-3.4.6
PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HBASE_HOME/bin:$M2_HOME/bin:$HIVE_HOME/bin:$SQOOP_HOME/bin:$ZOOKEEPER_HOME/bin
CLASSPATH=.
export JAVA_HOME
export PATH
export CLASSPATH
export HADOOP_HOME
export HBASE_HOME
HBASE_CLASSPATH=$(/usr/hbase-1.2.4/bin/hbase classpath)
HADOOP_CLASSPATH=/root/mysql-connector-java-5.1.49.jar
export HADOOP_CLASSPATH
export M2_HOME
export HIVE_HOME
export SQOOP_HOME
export ZOOKEEPER_HOME
[root@CentOS sqoop-1.4.7]# source ~/.bashrc

3. Rename the sqoop-env-template.sh configuration file under conf and edit it

[root@CentOS7 sqoop-1.4.7]#  mv conf/sqoop-env-template.sh conf/sqoop-env.sh
[root@CentOS7 sqoop-1.4.7]# vi conf/sqoop-env.sh

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# included in all the hadoop scripts with source command
# should not be executable directly
# also should not be passed any arguments, since we need original $*

# Set Hadoop-specific environment variables here.

#Set path to where bin/hadoop is available
export HADOOP_COMMON_HOME=/usr/hadoop-2.9.2

#Set path to where hadoop-*-core.jar is available
export HADOOP_MAPRED_HOME=/usr/hadoop-2.9.2

#set the path to where bin/hbase is available
export HBASE_HOME=/usr/hbase-1.2.4

#Set the path to where bin/hive is available
export HIVE_HOME=/usr/apache-hive-1.2.2-bin

#Set the path for where zookeper config dir is
export ZOOCFGDIR=/usr/zookeeper-3.4.6/conf

4. Copy the MySQL JDBC driver jar into Sqoop's lib directory
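For example, assuming the connector jar referenced in the environment setup above is still under /root:

[root@CentOS7 sqoop-1.4.7]# cp /root/mysql-connector-java-5.1.49.jar /usr/sqoop-1.4.7/lib/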

5. Verify that Sqoop was installed successfully

[root@CentOS7 sqoop-1.4.7]# sqoop help
...
20/10/14 02:48:42 INFO sqoop.Sqoop: Running Sqoop version: 1.4.7
usage: sqoop COMMAND [ARGS]

Available commands:
  codegen            Generate code to interact with database records
  create-hive-table  Import a table definition into Hive
  eval               Evaluate a SQL statement and display the results
  export             Export an HDFS directory to a database table
  help               List available commands
  import             Import a table from a database to HDFS
  import-all-tables  Import tables from a database to HDFS
  import-mainframe   Import datasets from a mainframe server to HDFS
  job                Work with saved jobs
  list-databases     List available databases on a server
  list-tables        List available tables in a database
  merge              Merge results of incremental imports
  metastore          Run a standalone Sqoop metastore
  version            Display version information

See 'sqoop help COMMAND' for information on a specific command.

User guide for reference: http://sqoop.apache.org/docs/1.4.7/SqoopUserGuide.html

Basic Usage

With Sqoop, you can import data from a relational database system into HDFS. The input to the import process is a database table. Sqoop reads the table row by row into HDFS. The output of the import process is a set of files, and because the import runs in parallel, the output is spread across multiple files. These files may be delimited text files (for example, with fields separated by commas or tabs) or binary Avro files or SequenceFiles containing serialized record data.

After the imported records have been processed (for example, with MapReduce or Hive), you may have a result dataset that you want to export back to the relational database. Sqoop's export process reads a set of delimited text files from HDFS in parallel, parses them into records, and inserts them as new rows into the target database table for consumption by external applications or users.

Sqoop is a collection of related tools. To use Sqoop, you specify the tool you want to run and the arguments that control it. Sqoop also includes commands that let you inspect the database you are working with. For example, you can list the available database schemas (with the sqoop-list-databases tool) and the tables within a database (with the sqoop-list-tables tool). Sqoop can also run raw SQL (the sqoop-eval tool; a sketch follows the list below).

  • list-databases
[root@CentOS7 sqoop-1.4.7]# sqoop list-databases \
                                    --username root \
                                    --password root \
                                    --connect jdbc:mysql://CentOS:3306
  • list-tables
[root@CentOS7 sqoop-1.4.7]# sqoop list-tables \
                                    --driver com.mysql.jdbc.Driver \
                                    --username root \
                                    --password root \
                                    --connect jdbc:mysql://CentOS:3306/hive
  • version
[root@CentOS7 sqoop-1.4.7]# sqoop version
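
The sqoop-eval tool mentioned above can be exercised in the same way; a minimal sketch that runs an ad-hoc query against the test database used in the later examples (the database and table names are assumptions) is:

  • eval
[root@CentOS7 sqoop-1.4.7]# sqoop eval \
                                    --driver com.mysql.jdbc.Driver \
                                    --username root \
                                    --password root \
                                    --connect jdbc:mysql://CentOS:3306/test \
                                    --query 'select * from t_user limit 3'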

sqoop-import

The import tool imports an individual table from an RDBMS into HDFS. Each row of the table is represented as a separate record in HDFS. Records can be stored as text files (one record per line) or in binary form as Avro files or SequenceFiles.

  • Connection arguments

Argument                              Description
--connect <jdbc-uri>                  Specify JDBC connect string
--password <password>                 Set authentication password
--username <username>                 Set authentication username
--relaxed-isolation                   Set connection transaction isolation to read uncommitted for the mappers

  • Import control arguments

Argument                              Description
--append                              Append data to an existing dataset in HDFS
--as-avrodatafile                     Imports data to Avro Data Files
--as-sequencefile                     Imports data to SequenceFiles
--as-textfile                         Imports data as plain text (default)
--as-parquetfile                      Imports data to Parquet Files
--columns <col,col,col…>              Columns to import from table
--delete-target-dir                   Delete the import target directory if it exists
-m,--num-mappers <n>                  Use n map tasks to import in parallel
-e,--query <statement>                Import the results of statement
--split-by <column-name>              Column of the table used to split work units; cannot be used with --autoreset-to-one-mapper
--table <table-name>                  Table to read
--target-dir <dir>                    HDFS destination dir
--where <where clause>                WHERE clause to use during import
--null-string <null-string>           The string to be written for a null value for string columns
--null-non-string <null-string>       The string to be written for a null value for non-string columns

  • Output line formatting arguments

Argument                              Description
--fields-terminated-by <char>         Sets the field separator character

1. Full-table import

sqoop import \
            --driver com.mysql.jdbc.Driver \
            --connect jdbc:mysql://CentOS:3306/test?characterEncoding=UTF-8 \
            --username root \
            --password root \
            --table t_user \
            --num-mappers 4 \
            --fields-terminated-by '\t' \
            --target-dir /mysql/test/t_user \
            --delete-target-dir 
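
The result can be spot-checked from HDFS. The part-file name below (part-m-00000) is just the usual name of the first map task's output, so the exact file names are an assumption about this particular run:

[root@CentOS7 sqoop-1.4.7]# hdfs dfs -ls /mysql/test/t_user
[root@CentOS7 sqoop-1.4.7]# hdfs dfs -cat /mysql/test/t_user/part-m-00000 | head -n 5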

2. Importing selected columns

sqoop import \
            --driver com.mysql.jdbc.Driver \
            --connect jdbc:mysql://CentOS:3306/test?characterEncoding=UTF-8 \
            --username root \
            --password root \
            --table t_user \
            --columns id,name,sex \
            --where "id >= 2 or name like '%z%'" \
            --target-dir /mysql/test/t_user \
            --delete-target-dir \
            --num-mappers 4 \
            --fields-terminated-by '\t'

3. Free-form query import

If you want to import the results of a query in parallel, each map task needs to execute a copy of the query, with the results partitioned by bounding conditions inferred by Sqoop. Your query must contain the token $CONDITIONS, which each Sqoop process replaces with a unique condition expression.

sqoop import \
            --driver com.mysql.jdbc.Driver \
            --connect jdbc:mysql://CentOS:3306/test \
            --username root \
            --password root \
            --num-mappers 3 \
            --fields-terminated-by '\t' \
            --query 'select id,name,sex  from t_user where $CONDITIONS ' \
            --split-by id \
            --target-dir /mysql/test/t_user \
            --delete-target-dir
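
Note that the query above is wrapped in single quotes. If you wrap the query in double quotes instead, the token must be escaped as \$CONDITIONS so that the shell does not expand it as a variable, e.g. --query "select id,name,sex from t_user where \$CONDITIONS".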

4. Incremental import

sqoop import \
            --driver com.mysql.jdbc.Driver \
            --connect jdbc:mysql://CentOS:3306/test \
            --username root \
            --password root \
            --num-mappers 3 \
            --fields-terminated-by '\t' \
            --query 'select id, name,sex  from t_user where $CONDITIONS ' \
            --split-by id \
            --target-dir /mysql/test/t_user \
            --append \
            --incremental append \
            --check-column id \
            --last-value 2
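
Instead of tracking --last-value by hand, the incremental import can also be saved as a Sqoop job, so that the metastore remembers the last imported value between runs. A minimal sketch (the job name t_user_incr is an assumption) is:

[root@CentOS7 sqoop-1.4.7]# sqoop job --create t_user_incr -- import \
            --driver com.mysql.jdbc.Driver \
            --connect jdbc:mysql://CentOS:3306/test \
            --username root \
            --password root \
            --table t_user \
            --target-dir /mysql/test/t_user \
            --append \
            --incremental append \
            --check-column id \
            --last-value 0
[root@CentOS7 sqoop-1.4.7]# sqoop job --exec t_user_incr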

5. Importing data into Hive

sqoop import \
            --connect jdbc:mysql://CentOS:3306/test \
            --username root \
            --password root \
            --table t_user \
            --num-mappers 3 \
            --hive-import \
            --fields-terminated-by "\t" \
            --split-by id \
            --hive-overwrite \
            --hive-table baizhi.t_user \
            --create-hive-table
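
If the Hive import fails because Sqoop cannot find the Hive classes on its classpath (a missing HiveConf class is the typical symptom), copying the Hive jars into Sqoop's lib directory works around it: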
[root@CentOS7 sqoop-1.4.7]# cp /usr/apache-hive-1.2.2-bin/lib/hive-common-1.2.2.jar /usr/sqoop-1.4.7/lib/
[root@CentOS7 sqoop-1.4.7]# cp /usr/apache-hive-1.2.2-bin/lib/hive-exec-1.2.2.jar /usr/sqoop-1.4.7/lib/

6. Importing data into a specific Hive partition

sqoop import \
        --connect jdbc:mysql://CentOS:3306/test \
        --username root \
        --password root \
        --table t_user \
        --num-mappers 3 \
        --hive-import \
        --fields-terminated-by "\t" \
        --hive-overwrite \
        --hive-table baizhi.t_user2 \
        --create-hive-table \
        --hive-partition-key city \
        --hive-partition-value 'bj'
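
The partition created above can be checked from the Hive CLI (the partition value is the one supplied with --hive-partition-value):

hive> show partitions baizhi.t_user2;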

7. Importing data into HBase

sqoop import \
        --connect jdbc:mysql://CentOS:3306/test \
        --username root \
        --password root \
        --table t_user \
        --num-mappers 3 \
        --hbase-table baizhi:t_employee \
        --column-family cf1 \
        --hbase-create-table \
        --hbase-row-key id \
        --hbase-bulkload 
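
--hbase-create-table creates the target HBase table but not its namespace, so the baizhi namespace is assumed to exist beforehand. It can be created, and the imported rows inspected, from the HBase shell:

hbase(main):001:0> create_namespace 'baizhi'
hbase(main):002:0> scan 'baizhi:t_employee', {LIMIT => 3}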

sqoop-export

The export tool exports a set of files from HDFS back to an RDBMS. The target table must already exist in the database. The input files are read and parsed into a set of records according to the user-specified delimiters.

1. HDFS -> MySQL

0,zhangsan,true,20,2020-01-11
1,lisi,false,25,2020-01-10
3,wangwu,true,36,2020-01-17
4,zhaoliu,false,50,1990-02-08
5,win7,true,20,1991-02-08
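
The rows above are the contents of the HDFS input file. Assuming they are saved locally as /root/t_user.csv (the local file name is an assumption), they can be uploaded to the export directory first. The target MySQL table must already exist; it is created with the DDL that follows.

[root@CentOS7 sqoop-1.4.7]# hdfs dfs -mkdir -p /demo/src
[root@CentOS7 sqoop-1.4.7]# hdfs dfs -put /root/t_user.csv /demo/src/
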
create table t_employee(
 id int primary key auto_increment,
 name VARCHAR(32),
 sex boolean,
 age int,
 birthDay date
) CHARACTER SET=utf8;
sqoop export \
--connect jdbc:mysql://CentOS:3306/test \
--username root \
--password root \
--table t_employee  \
--update-key id  \
--update-mode allowinsert \
--export-dir hdfs:///demo/src \
--input-fields-terminated-by ','
Argument                              Description
--export-dir <dir>                    HDFS directory containing the data to export
--input-fields-terminated-by <char>   Input field separator

The --update-mode option can be set to updateonly or allowinsert; updateonly only updates rows that already exist in the target table, while allowinsert also inserts rows that do not.

2. HBase -> MySQL

HBase -> Hive followed by Hive -> RDBMS is effectively the same as HDFS => RDBMS: the HBase data is first written out to HDFS through a Hive mapping table, and the resulting files are then exported to the RDBMS.

① Prepare the test data

Test data for the t_employee table

7369,SMITH,CLERK,7902,1980-12-17 00:00:00,800,\N,20
7499,ALLEN,SALESMAN,7698,1981-02-20 00:00:00,1600,300,30
7521,WARD,SALESMAN,7698,1981-02-22 00:00:00,1250,500,30
7566,JONES,MANAGER,7839,1981-04-02 00:00:00,2975,\N,20
7654,MARTIN,SALESMAN,7698,1981-09-28 00:00:00,1250,1400,30
7698,BLAKE,MANAGER,7839,1981-05-01 00:00:00,2850,\N,30
7782,CLARK,MANAGER,7839,1981-06-09 00:00:00,2450,\N,10
7788,SCOTT,ANALYST,7566,1987-04-19 00:00:00,1500,\N,20
7839,KING,PRESIDENT,\N,1981-11-17 00:00:00,5000,\N,10
7844,TURNER,SALESMAN,7698,1981-09-08 00:00:00,1500,0,30
7876,ADAMS,CLERK,7788,1987-05-23 00:00:00,1100,\N,20
7900,JAMES,CLERK,7698,1981-12-03 00:00:00,950,\N,30
7902,FORD,ANALYST,7566,1981-12-03 00:00:00,3000,\N,20
7934,MILLER,CLERK,7782,1982-01-23 00:00:00,1300,\N,10
create database if not exists baizhi;
use baizhi;
drop table if exists t_employee;
CREATE TABLE t_employee(
    empno INT,
    ename STRING,
    job STRING,
    mgr INT,
    hiredate TIMESTAMP,
    sal DECIMAL(7,2),
    comm DECIMAL(7,2),
    deptno INT)
row format delimited
fields terminated by ','
collection items terminated by '|'
map keys terminated by '>'
lines terminated by '\n'
stored as textfile;
load data local inpath '/root/t_employee' overwrite into table t_employee;

drop table if exists t_employee_hbase;
create external table t_employee_hbase(
    empno INT,
    ename STRING,
    job STRING,
    mgr INT,
    hiredate TIMESTAMP,
    sal DECIMAL(7,2),
    comm DECIMAL(7,2),
    deptno INT)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES("hbase.columns.mapping" = ":key,cf1:name,cf1:job,cf1:mgr,cf1:hiredate,cf1:sal,cf1:comm,cf1:deptno") 
TBLPROPERTIES("hbase.table.name" = "baizhi:t_employee");

insert overwrite  table t_employee_hbase  select empno,ename,job,mgr,hiredate,sal,comm,deptno from t_employee;

② First export the HBase data to HDFS

INSERT OVERWRITE DIRECTORY '/demo/src/employee'
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE
select empno,ename,job,mgr,hiredate,sal,comm,deptno from t_employee_hbase;

③ Export the HDFS data to the RDBMS

drop table if exists t_employee;
CREATE TABLE t_employee(
    empno INT,
    ename VARCHAR(32),
    job VARCHAR(32),
    mgr INT,
    hiredate TIMESTAMP,
    sal DECIMAL(7,2),
    comm DECIMAL(7,2),
    deptno INT)CHARACTER SET=utf8;
sqoop export \
--connect jdbc:mysql://CentOS:3306/test \
--username root \
--password root \
--table t_employee  \
--update-key empno \
--update-mode allowinsert \
--export-dir /demo/src/employee \
--input-fields-terminated-by ',' \
--input-null-string '\\N' \
--input-null-non-string '\\N';
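
The export can then be verified with a plain query from the MySQL client:

mysql> select empno, ename, sal, deptno from t_employee limit 5;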