Sqoop incremental import from MySQL into Hive



Incremental import

Incremental import is a technique that imports only the rows newly added to a table.

It requires the --incremental, --check-column, and --last-value options on the import command.

The following syntax is used for the incremental options of the Sqoop import command:

--incremental <mode>
--check-column <column name>
--last-value <last check column value>

Here <mode> is either append (for a column whose values only grow, such as an auto-increment id) or lastmodified (for a timestamp column that is updated whenever a row changes).

Suppose a new row has been added to the emp table:

1206, satish p, grp des, 20000, GR

The following command performs an incremental import on the emp table.

bin/sqoop import \
--connect jdbc:mysql://itcast01:3306/userdb \
--username root \
--password root123 \
--table emp --m 1 \
--target-dir /emp_append \
--incremental append \
--check-column id \
--last-value 1203
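After the job finishes, the new rows land under the target directory. A minimal verification and follow-up sketch, assuming the same connection details as above and that 1206 is now the highest id already imported:

# Inspect the appended part files under the target directory
hdfs dfs -ls /emp_append
hdfs dfs -cat /emp_append/part-m-*

# The next run only needs --last-value raised to the highest id already imported
bin/sqoop import \
--connect jdbc:mysql://itcast01:3306/userdb \
--username root \
--password root123 \
--table emp --m 1 \
--target-dir /emp_append \
--incremental append \
--check-column id \
--last-value 1206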


Incremental import into Hive by time (lastmodified mode):

sqoop import --connect jdbc:mysql://192.168.72.1:3306/tianzhicloud_security --username root --password root --target-dir hdfs://centoshostnameKL1:9000/queryresult/sys_user1 --table sys_user --hive-table sys_user --hive-import --m 1 --incremental lastmodified --check-column createtime --last-value '2011-11-30 16:59:43.1'

Incremental import into HDFS by key column (append mode):

sqoop import --connect jdbc:mysql://192.168.72.1:3306/tianzhicloud_security --username root --password root --target-dir hdfs://centoshostnameKL1:9000/queryresult/sys_user --table sys_user --m 1 --incremental append --check-column userId --last-value '20'
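Note that rerunning a lastmodified import against a target directory that already exists makes Sqoop ask for either --append or --merge-key, so it knows how to combine the old and new files. A sketch of the merge variant (without --hive-import), assuming userId is the primary key of sys_user:

sqoop import --connect jdbc:mysql://192.168.72.1:3306/tianzhicloud_security --username root --password root --target-dir hdfs://centoshostnameKL1:9000/queryresult/sys_user1 --table sys_user --m 1 --incremental lastmodified --check-column createtime --last-value '2011-11-30 16:59:43.1' --merge-key userId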



Sqoop incremental import from MySQL into Hive using a saved Sqoop job
sqoop job --create incretest -- import --connect  jdbc:mysql://10.8.2.19:3306/db  --table table1 --username op_root --password root -m 1  --hive-import --incremental lastmodified --check-column dtTime  --last-value '2015-11-30 16:59:43.1';

Notes:
1. There is a space between -- and import in the sqoop job syntax.
2. dtTime must match the column name in the MySQL table exactly; it is case-sensitive here.
3. Hive keeps the fractional seconds, so 2015-11-30 16:59:43.0 and 2015-11-30 16:59:43 are different values:

select * from table1 where dtTime  = '2015-11-30 16:59:43.0';
select * from table1 where dtTime  = '2015-11-30 16:59:43';
The two queries return different results.

4. The last-value that the job updates automatically is the time at which the command was executed, not the latest dtTime value in the imported data.
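Because of that, it is worth checking what the saved job will actually use on its next run. The standard sqoop job subcommands can inspect and rerun it (incretest is the job name created above):

sqoop job --list
sqoop job --show incretest   # prints the stored options, including incremental.last.value
sqoop job --exec incretest   # reruns the import and updates the stored last-value afterwards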



[wangshumin@centoshostnameKL3 conf]$ sqoop import --connect jdbc:mysql://192.168.72.1:3306/tianzhicloud_security --username root --password root --target-dir hdfs://centoshostnameKL1:9000/queryresult/sys_user --table sys_user --m 1 --incremental append --check-column userId  --last-value '20' ;
Warning: /home/wangshumin/sqoop/sqoop-1.4.6/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /home/wangshumin/sqoop/sqoop-1.4.6/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
Warning: /home/wangshumin/sqoop/sqoop-1.4.6/../zookeeper does not exist! Accumulo imports will fail.
Please set $ZOOKEEPER_HOME to the root of your Zookeeper installation.
18/05/03 02:05:26 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6
18/05/03 02:05:26 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
18/05/03 02:05:26 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
18/05/03 02:05:26 INFO tool.CodeGenTool: Beginning code generation
18/05/03 02:05:26 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `sys_user` AS t LIMIT 1
18/05/03 02:05:27 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `sys_user` AS t LIMIT 1
18/05/03 02:05:27 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /home/wangshumin/app/hadoop-2.4.1
Note: /tmp/sqoop-wangshumin/compile/7a5a118635159f5a4514b0d2d9dec83e/sys_user.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
18/05/03 02:05:28 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-wangshumin/compile/7a5a118635159f5a4514b0d2d9dec83e/sys_user.jar
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/wangshumin/app/hadoop-2.4.1/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/wangshumin/hbase/hbase-1.2.1/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
18/05/03 02:05:29 INFO tool.ImportTool: Maximal id query for free form incremental import: SELECT MAX(`userId`) FROM `sys_user`
18/05/03 02:05:29 INFO tool.ImportTool: Incremental import based on column `userId`
18/05/03 02:05:29 INFO tool.ImportTool: Lower bound value: 20
18/05/03 02:05:29 INFO tool.ImportTool: Upper bound value: 1040000000260000
18/05/03 02:05:29 WARN manager.MySQLManager: It looks like you are importing from mysql.
18/05/03 02:05:29 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
18/05/03 02:05:29 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
18/05/03 02:05:29 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
18/05/03 02:05:29 INFO mapreduce.ImportJobBase: Beginning import of sys_user
18/05/03 02:05:29 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
18/05/03 02:05:29 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
18/05/03 02:05:29 INFO client.RMProxy: Connecting to ResourceManager at centoshostnameKL1/192.168.72.101:8032
18/05/03 02:05:33 INFO db.DBInputFormat: Using read commited transaction isolation
18/05/03 02:05:33 INFO mapreduce.JobSubmitter: number of splits:1
18/05/03 02:05:33 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1525264668092_0026
18/05/03 02:05:34 INFO impl.YarnClientImpl: Submitted application application_1525264668092_0026
18/05/03 02:05:34 INFO mapreduce.Job: The url to track the job: http://centoshostnameKL1:8088/proxy/application_1525264668092_0026/
18/05/03 02:05:34 INFO mapreduce.Job: Running job: job_1525264668092_0026
18/05/03 02:05:42 INFO mapreduce.Job: Job job_1525264668092_0026 running in uber mode : false
18/05/03 02:05:42 INFO mapreduce.Job:  map 0% reduce 0%
18/05/03 02:06:36 INFO mapreduce.Job:  map 100% reduce 0%
18/05/03 02:06:36 INFO mapreduce.Job: Job job_1525264668092_0026 completed successfully
18/05/03 02:06:36 INFO mapreduce.Job: Counters: 30
File System Counters
FILE: Number of bytes read=0
FILE: Number of bytes written=113235
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=87
HDFS: Number of bytes written=272477918
HDFS: Number of read operations=4
HDFS: Number of large read operations=0
HDFS: Number of write operations=2
Job Counters 
Launched map tasks=1
Other local map tasks=1
Total time spent by all maps in occupied slots (ms)=52319
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=52319
Total vcore-seconds taken by all map tasks=52319
Total megabyte-seconds taken by all map tasks=53574656
Map-Reduce Framework
Map input records=1017799
Map output records=1017799
Input split bytes=87
Spilled Records=0
Failed Shuffles=0
Merged Map outputs=0
GC time elapsed (ms)=1888
CPU time spent (ms)=31710
Physical memory (bytes) snapshot=82542592
Virtual memory (bytes) snapshot=328224768
Total committed heap usage (bytes)=26972160
File Input Format Counters 
Bytes Read=0
File Output Format Counters 
Bytes Written=272477918
18/05/03 02:06:36 INFO mapreduce.ImportJobBase: Transferred 259.8552 MB in 67.0828 seconds (3.8736 MB/sec)
18/05/03 02:06:36 INFO mapreduce.ImportJobBase: Retrieved 1017799 records.
18/05/03 02:06:36 INFO util.AppendUtils: Appending to directory sys_user
18/05/03 02:06:36 INFO util.AppendUtils: Using found partition 1
18/05/03 02:06:36 INFO tool.ImportTool: Incremental import complete! To run another incremental import of all data following this import, supply the following arguments:
18/05/03 02:06:36 INFO tool.ImportTool:  --incremental append
18/05/03 02:06:36 INFO tool.ImportTool:   --check-column userId
18/05/03 02:06:36 INFO tool.ImportTool:   --last-value 1040000000260000
18/05/03 02:06:36 INFO tool.ImportTool: (Consider saving this with 'sqoop job --create')
[wangshumin@centoshostnameKL3 conf]$ 
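The end of the log reports the new upper bound (1040000000260000) and suggests saving the arguments with 'sqoop job --create'. A sketch of doing that, so the next execution picks up the stored last-value automatically (the job name incr_sys_user is only an example):

sqoop job --create incr_sys_user -- import --connect jdbc:mysql://192.168.72.1:3306/tianzhicloud_security --username root --password root --target-dir hdfs://centoshostnameKL1:9000/queryresult/sys_user --table sys_user --m 1 --incremental append --check-column userId --last-value 1040000000260000

sqoop job --exec incr_sys_user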
