[Data Sync - Sqoop Series] A Detailed Sqoop Beginner's Tutorial


1 Sqoop Overview

SQL to Hadoop

An open-source tool.

Used to move data between Hadoop (Hive) and traditional relational databases.

Import: load data from relational databases such as MySQL, Oracle, and DB2 into Hadoop.

Export: move data from Hadoop out to MySQL, Oracle, and similar databases.

2 How Sqoop Works

Both import and export run as MapReduce jobs under the hood; in other words, using Sqoop requires YARN to be up and running.
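Because a Sqoop job is map-only, Sqoop has to decide how to divide a table among mappers. Roughly, it queries the MIN and MAX of the split column and hands each mapper a sub-range. The sketch below only illustrates that split arithmetic; the exact boundary query Sqoop generates is an assumption here, not taken from the Sqoop source:

```shell
# Illustrative only: how a numeric id range 1..100 might be divided
# among 4 mappers, mirroring Sqoop's min/max range-splitting idea.
min=1; max=100; mappers=4
step=$(( (max - min + 1) / mappers ))
i=0
while [ $i -lt $mappers ]; do
  lo=$(( min + i * step ))
  hi=$(( lo + step - 1 ))
  [ $i -eq $(( mappers - 1 )) ] && hi=$max   # last mapper takes any remainder
  echo "mapper $i: WHERE id >= $lo AND id <= $hi"
  i=$(( i + 1 ))
done
```

Each mapper then runs its own bounded SELECT in parallel, which is why a sensible --split-by column matters for skew.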

3 Installing Sqoop

Prerequisites: a working JDK and a working Hadoop environment.

##1. Extract the tarball and configure environment variables
[root@hadoop software]# tar -zxvf sqoop-1.4.6.bin__hadoop-2.0.4-alpha.tar.gz -C /opt/apps/ && cd /opt/apps
[root@hadoop apps]# mv sqoop-1.4.6.bin__hadoop-2.0.4-alpha/ sqoop-1.4.6

[root@hadoop sqoop-1.4.6]# vi /etc/profile
## custom environment variables
export JAVA_HOME=/opt/apps/jdk1.8.0_45
export HADOOP_HOME=/opt/apps/hadoop-2.8.1
export HIVE_HOME=/opt/apps/hive-1.2.1
export HBASE_HOME=/opt/apps/hbase-1.2.1
export COLLECT_HOME=/opt/apps/collect-app
export FRP_HOME=/opt/apps/frp
export SCRIPT_HOME=/opt/apps/scripts
export SQOOP_HOME=/opt/apps/sqoop-1.4.6

export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HIVE_HOME/bin:$HBASE_HOME/bin
export PATH=$PATH:$COLLECT_HOME:$FRP_HOME:$SCRIPT_HOME:$SQOOP_HOME/bin
export CLASSPATH=.:$JAVA_HOME/lib
export FLUME_HOME=/opt/apps/flume-1.9.0
export PATH=$PATH:/opt/apps/flume-1.9.0/bin

[root@hadoop sqoop-1.4.6]# source /etc/profile

##2. Configure sqoop-env.sh
[root@hadoop conf]# mv sqoop-env-template.sh sqoop-env.sh
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# included in all the hadoop scripts with source command
# should not be executable directly
# also should not be passed any arguments, since we need original $*

# Set Hadoop-specific environment variables here.

#Set path to where bin/hadoop is available
export HADOOP_COMMON_HOME=/opt/apps/hadoop-2.8.1

#Set path to where hadoop-*-core.jar is available
export HADOOP_MAPRED_HOME=/opt/apps/hadoop-2.8.1

#set the path to where bin/hbase is available
#export HBASE_HOME=

#Set the path to where bin/hive is available
export HIVE_HOME=/opt/apps/hive-1.2.1

#Set the path for where zookeper config dir is
#export ZOOCFGDIR=

##3. Copy the MySQL JDBC driver into Sqoop's lib directory
[root@hadoop sqoop-1.4.6]# cp /opt/apps/hive-1.2.1/lib/mysql-connector-java-5.1.47-bin.jar ./lib/

##4. Verify the installation
[root@hadoop sqoop-1.4.6]# sqoop version
21/05/24 14:27:43 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6
Sqoop 1.4.6
git commit id c0c5a81723759fa575844a0a1eae8f510fa32c25
Compiled by root on Mon Apr 27 14:38:36 CST 2015

4 Basic Sqoop Usage Examples

4.1 Full import

sqoop import \
--connect jdbc:mysql://hadoop:3306/zxy \
--username root \
--password root \
--table student \
--target-dir /user/source/student \
--delete-target-dir \
--num-mappers 1 \
--fields-terminated-by "\t"

4.2 Query-based import

sqoop import \
--connect jdbc:mysql://hadoop:3306/zxy \
--username root \
--password root \
--target-dir /user/source/student \
--delete-target-dir \
--num-mappers 1 \
--fields-terminated-by "\t" \
--query 'select id,name,sex from student where sex = 0 and $CONDITIONS'

Note: --query cannot be combined with --table, and the statement must contain the $CONDITIONS placeholder, which Sqoop replaces with each split's predicate. If you use double quotes instead of single quotes, the placeholder has to be escaped so the shell does not expand it:

--query "select id,name,sex from student where sex = 0 and \$CONDITIONS"
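Why both quote styles work comes down to how the shell treats the dollar sign: single quotes pass $CONDITIONS through literally, while inside double quotes it must be escaped as \$CONDITIONS or the shell would expand it (usually to an empty string) before Sqoop ever sees it. A quick check in any POSIX shell:

```shell
# Both lines print the same literal text, so Sqoop receives $CONDITIONS intact.
echo 'where sex = 0 and $CONDITIONS'
echo "where sex = 0 and \$CONDITIONS"
```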

4.3 Built-in incremental imports

##1. append mode
sqoop import \
--connect jdbc:mysql://hadoop:3306/zxy \
--username root \
--password root \
--table student \
--target-dir /user/hive/warehouse/student \
--fields-terminated-by '\001' \
--split-by id \
--num-mappers 1 \
--check-column id \
--incremental append \
--last-value 1

##2. lastmodified mode
sqoop import \
--connect jdbc:mysql://hadoop:3306/zxy \
--username root \
--password root \
--table student \
--append \
--target-dir /user/hive/warehouse/student \
--fields-terminated-by '\001' \
--split-by id \
--num-mappers 1 \
--check-column create \
--incremental lastmodified \
--last-value '2021-07-26 16:13:25'

4.4 Importing selected columns

sqoop import \
--connect jdbc:mysql://hadoop:3306/zxy \
--username root \
--password root \
--table student \
--columns id,sex \
--target-dir /user/source/student \
--delete-target-dir \
--num-mappers 1 \
--fields-terminated-by "\t" 

4.5 Importing selected columns with a filter

sqoop import \
--connect jdbc:mysql://hadoop:3306/zxy \
--username root \
--password root \
--table student \
--columns id,sex \
--where "sex=0" \
--target-dir /user/source/student \
--delete-target-dir \
--num-mappers 1 \
--fields-terminated-by "\t"

4.6 MySQL to Hive

sqoop import \
--connect jdbc:mysql://hadoop:3306/zxy \
--username root \
--password root \
--table student \
--num-mappers 1 \
--hive-import \
--fields-terminated-by "\t" \
--hive-overwrite \
--hive-table student_hive

4.7 MySQL to HBase

  • Create the target table in HBase
create 'zxy_hbase','info'
  • Verify it exists
list
  • Run the import
sqoop import \
--connect jdbc:mysql://hadoop:3306/zxy \
--username root \
--password root \
--table student  \
--hbase-table zxy_hbase \
--column-family info  \
--hbase-create-table \
--hbase-row-key id

4.8 Export

sqoop export \
--connect jdbc:mysql://hadoop:3306/zxy \
--username root \
--password root \
--table student  \
--num-mappers 1 \
--export-dir /user/hive/warehouse/zxy_hive \
--input-fields-terminated-by "\t"

II. Common Sqoop Questions

1 How are NULL values handled during import and export?

In its underlying files Hive represents NULL as the literal string \N, while MySQL stores a real NULL. You therefore need to tell Sqoop how to translate NULLs in both directions: use --null-string and --null-non-string on import, and --input-null-string and --input-null-non-string on export.
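A sketch of how that translation is typically specified (the flags are standard Sqoop options; the connection details and table names just echo the examples above):

```shell
# Import: turn MySQL NULLs into Hive's \N token.
sqoop import \
--connect jdbc:mysql://hadoop:3306/zxy \
--username root \
--password root \
--table student \
--hive-import \
--hive-table student_hive \
--null-string '\\N' \
--null-non-string '\\N'

# Export: tell Sqoop that \N in the HDFS files means NULL.
sqoop export \
--connect jdbc:mysql://hadoop:3306/zxy \
--username root \
--password root \
--table student \
--export-dir /user/hive/warehouse/student_hive \
--input-null-string '\\N' \
--input-null-non-string '\\N'
```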

2 How do you guarantee data consistency during import and export?

If Sqoop launches 4 map tasks and 2 of them fail, the target is left with partial data. The fix is to go through an intermediate (staging) table, so the operation either fully succeeds or fully fails; the staging table must be empty before the run starts. For exports, Sqoop supports this directly with --staging-table and --clear-staging-table.
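With Sqoop's built-in staging support, an export through an intermediate table looks roughly like this (the flags are real Sqoop export options; the staging table name is an assumption for illustration):

```shell
# Rows land in student_staging first and are moved to student in a single
# transaction only if every map task succeeds, keeping the target consistent.
sqoop export \
--connect jdbc:mysql://hadoop:3306/zxy \
--username root \
--password root \
--table student \
--staging-table student_staging \
--clear-staging-table \
--export-dir /user/hive/warehouse/student_hive \
--input-fields-terminated-by "\t"
```

The staging table must have the same schema as the target table and, per the note above, be empty (or cleared with --clear-staging-table) before the export starts.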

3 What does Sqoop run under the hood?

Sqoop jobs consist of map tasks only; there is no reduce phase.

4 How long does a Sqoop export usually take?

Anywhere from roughly 5 minutes to 2 hours, depending on the data volume.

5 Common problems during import?

Importing directly into a top-level HDFS directory can fail. Point the target at a second-level (deeper) directory instead.
