Sqoop Study Notes (Usage)

[size=large][b]1. Introduction[/b][/size]
Sqoop is a tool for moving data between Hadoop and relational databases. [color=red][b]It can import data from a relational database (e.g. MySQL, Oracle, Postgres) into HDFS, and it can also export data from HDFS back into a relational database.[/b][/color]

[size=large][b]2. Features[/b][/size]
One of Sqoop's main strengths is that it [color=blue][b]uses Hadoop MapReduce to import data from a relational database into HDFS.[/b][/color]

[size=large][b]3. Sqoop Commands[/b][/size]
Sqoop provides roughly 13 commands, plus several groups of general arguments that all of these commands accept. The general arguments are divided into Common arguments, Incremental import arguments, Output line formatting arguments, Input parsing arguments, Hive arguments, HBase arguments, and Generic Hadoop command-line arguments. The first group is described below:

[color=red][b]1. Common arguments[/b][/color]
General-purpose arguments, mainly options that describe the relational database connection.

[size=large][b]4. Sqoop Command Examples[/b][/size]

Add the following to Hadoop's core-site.xml

[color=red][b](xxx stands for the current user name)[/b][/color]
<!-- allow user xxx to act as a proxy (impersonate other users) from any host and for any group -->
<property>
  <name>hadoop.proxyuser.xxx.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.xxx.groups</name>
  <value>*</value>
</property>


Then take HDFS out of safe mode: [b]hdfs dfsadmin -safemode leave[/b]

Start the Sqoop shell with [b]sqoop.sh client[/b].

First, point the client at the Sqoop server:
sqoop:000> set server -h master -p 12000 -w sqoop


[color=red][b]Create the HDFS link[/b][/color]
sqoop:000> create link --connector hdfs-connector

Creating link for connector with name hdfs-connector
Please fill following values to create new link object
Name: HDFS # the name of the link to create (required)

HDFS cluster

URI: hdfs://master:9000/ # the fs.defaultFS value noted earlier (required)
Conf directory: /usr/hadoop/hadoop-2.6.4/etc/hadoop # directory holding the Hadoop configuration files (required)
Additional configs::
There are currently 0 values in the map:
entry# (optional)
New link was successfully created with validation status OK and name HDFS
sqoop:000>


[color=red][b]Create the MySQL link[/b][/color]
sqoop:000> create link --connector generic-jdbc-connector

Creating link for connector with name generic-jdbc-connector
Please fill following values to create new link object
Name: MYSQL # the name of the link to create (required)

Database connection

Driver class: com.mysql.jdbc.Driver # (required)
Connection String: jdbc:mysql://master:3306/test # (required) must be a connection you have privileges on
Username: root # (required)
Password: ****** # (required)
Fetch Size: # (optional)
Connection Properties: # (optional)
There are currently 0 values in the map:
entry# # (optional)

SQL Dialect

Identifier enclose: # (required; enter a single space here)
New link was successfully created with validation status OK and name MYSQL
sqoop:000>


[size=medium][color=blue][b]Note: Identifier enclose must be a single space[/b][/color][/size]

[color=red][b]Create the job objects[/b][/color]
[color=red][b]HDFS --> MYSQL[/b][/color]

sqoop:000> create job --from HDFS --to MYSQL

Creating job for links with from name HDFS and to name MYSQL
Please fill following values to create new job object
Name: FisrtJob # the name of the job to create (required)

Input configuration

Input directory: /toMysql # the HDFS directory the data comes from (required)
Override null value: # (optional)
Null value: # (optional)

Incremental import

Incremental type:
0 : NONE
1 : NEW_FILES
Choose: 0 # (optional)
Last imported date: # (optional)

Database target

Schema name: test # the database to import into (required)
Table name: people # the table in that database to import into (required)
Column names: # which columns of the table to import into (optional)
There are currently 0 values in the list:
element# # (optional)
Staging table: # (optional)
Clear stage table: # (optional)

Throttling resources

Extractors: # (optional)
Loaders: # (optional)

Classpath configuration

Extra mapper jars: # (optional)
There are currently 0 values in the list:
element# # (optional)
New job was successfully created with validation status OK and name FisrtJob
sqoop:000>


[color=red][b]MYSQL --> HDFS[/b][/color]
sqoop:000> create job --from MYSQL --to HDFS
Creating job for links with from name MYSQL and to name HDFS
Please fill following values to create new job object
Name: SecondJob # the name of the job to create (required)

Database source

Schema name: test # the database the data comes from (required)
Table name: people # the table the data comes from (optional)
SQL statement: # SQL query (optional)
Column names: # column names (optional)
There are currently 0 values in the list:
element# # (optional)
Partition column: # (optional)
Partition column nullable: # (optional)
Boundary query: # (optional)

Incremental read

Check column: # (optional)
Last value: # (optional)

Target configuration

Override null value: # (optional)
Null value: # (optional)
File format:
0 : TEXT_FILE
1 : SEQUENCE_FILE
2 : PARQUET_FILE
Choose: 0 # (required)
Compression codec:
0 : NONE
1 : DEFAULT
2 : DEFLATE
3 : GZIP
4 : BZIP2
5 : LZO
6 : LZ4
7 : SNAPPY
8 : CUSTOM
Choose: 0 # (required)
Custom codec: # (optional)
Output directory: /OutputMysql # (required) the HDFS directory to write the output to
Append mode: true # (optional)

Throttling resources

Extractors: # (optional)
Loaders: # (optional)

Classpath configuration

Extra mapper jars: # (optional)
There are currently 0 values in the list:
element# # (optional)
New job was successfully created with validation status OK and name SecondJob
sqoop:000>



[b]Start the job[/b]

sqoop:000> start job --name FisrtJob

Submission details
Job Name: FisrtJob
Server URL: http://master:12000/sqoop/
Created by: root
Creation date: 2016-11-16 21:27:16 CST
Lastly updated by: root
External ID: job_1479259884185_0002
http://master:8088/proxy/application_1479259884185_0002/
2016-11-16 21:27:16 CST: BOOTING - Progress is not available
sqoop:000>


Some notes on exporting from HDFS to MySQL

[b]It turns out that if the target table was created with a primary key, the rows loaded from HDFS arrive in order; without a primary key they are unordered.[/b]

[color=red][b]When exporting from MySQL to HDFS, if the table has no primary key you must specify which column to partition on (the Partition column prompt above).[/b][/color]

[size=large][b]5. How Sqoop Works (using import as the example)[/b][/size]
When Sqoop runs an import, a split-by column must be specified. Sqoop splits the data according to the values of that column and assigns each split to a different map task; each map task then reads its rows from the database one by one and writes them to HDFS. The splitting strategy depends on the column type. For the simple case of an int column, Sqoop queries the minimum and maximum values of the split-by column and divides that range into num-mappers pieces. For example, if select min(split-by), max(split-by) from table returns 1 and 1000 and num-mappers is 2, the range is divided into (1, 500) and (501, 1000), and two SQL queries are generated, one per map task: select XXX from table where split-by >= 1 and split-by <= 500, and select XXX from table where split-by >= 501 and split-by <= 1000. Each map task then imports the rows returned by its own query.
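As a rough illustration of the integer case, the sketch below (plain Java with made-up table, column and boundary values, not Sqoop's actual code) computes the per-mapper ranges and the corresponding WHERE clauses in the same way as the example above; the real DataDrivenDBInputFormat also handles text, date and floating-point split columns, so treat this only as a simplified model.

// Simplified sketch of how an integer split-by column is divided among map tasks.
// The table/column names and boundary values are placeholders for illustration.
public class SplitBySketch {
    public static void main(String[] args) {
        long min = 1;         // e.g. result of SELECT MIN(split_by) FROM table
        long max = 1000;      // e.g. result of SELECT MAX(split_by) FROM table
        int numMappers = 2;   // value of --num-mappers

        // Width of each split; the last range is capped at max below.
        long step = (long) Math.ceil((max - min + 1) / (double) numMappers);

        for (int i = 0; i < numMappers; i++) {
            long lo = min + i * step;
            long hi = Math.min(lo + step - 1, max);
            // Each map task runs one query restricted to its own range:
            System.out.printf(
                "select XXX from table where split_by >= %d and split_by <= %d%n", lo, hi);
        }
    }
}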

[size=large][b]6. How Sqoop fills in the parameters the MapReduce job needs[/b][/size]
1) InputFormatClass
com.cloudera.sqoop.mapreduce.db.DataDrivenDBInputFormat

2) OutputFormatClass
1)TextFile
com.cloudera.sqoop.mapreduce.RawKeyTextOutputFormat
2)SequenceFile
org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat
3)AvroDataFile
com.cloudera.sqoop.mapreduce.AvroOutputFormat

3)Mapper
1)TextFile
com.cloudera.sqoop.mapreduce.TextImportMapper
2)SequenceFile
com.cloudera.sqoop.mapreduce.SequenceFileImportMapper

3)AvroDataFile
com.cloudera.sqoop.mapreduce.AvroImportMapper

4) taskNumbers
1) mapred.map.tasks (corresponds to the num-mappers argument)
2) job.setNumReduceTasks(0);

We will walk through this using the command line: import --connect jdbc:mysql://localhost/test --username root --password 123456 --query "select sqoop_1.id as foo_id, sqoop_2.id as bar_id from sqoop_1, sqoop_2 WHERE $CONDITIONS" --target-dir /user/sqoop/test --split-by sqoop_1.id --hadoop-home=/home/hdfs/hadoop-0.20.2-CDH3B3 --num-mappers 2


1) Configure the input
DataDrivenImportJob.configureInputFormat(Job job, String tableName,String tableClassName, String splitByCol)

a)DBConfiguration.configureDB(Configuration conf, String driverClass, String dbUrl, String userName, String passwd, Integer fetchSize)
1).mapreduce.jdbc.driver.class com.mysql.jdbc.Driver
2).mapreduce.jdbc.url jdbc:mysql://localhost/test
3).mapreduce.jdbc.username root
4).mapreduce.jdbc.password 123456
5).mapreduce.jdbc.fetchsize -2147483648

b)DataDrivenDBInputFormat.setInput(Job job,Class<? extends DBWritable> inputClass, String inputQuery, String inputBoundingQuery)
1)job.setInputFormatClass(DBInputFormat.class);
2)mapred.jdbc.input.bounding.query SELECT MIN(sqoop_1.id), MAX(sqoop_1.id) FROM (select sqoop_1.id as foo_id, sqoop_2.id as bar_id from sqoop_1 ,sqoop_2 WHERE (1 = 1) ) AS t1
3)job.setInputFormatClass(com.cloudera.sqoop.mapreduce.db.DataDrivenDBInputFormat.class);
4)mapreduce.jdbc.input.orderby sqoop_1.id
c)mapreduce.jdbc.input.class QueryResult
d)sqoop.inline.lob.length.max 16777216

2) Configure the output
ImportJobBase.configureOutputFormat(Job job, String tableName,String tableClassName)
a)job.setOutputFormatClass(getOutputFormatClass());
b)FileOutputFormat.setOutputCompressorClass(job, codecClass);
c)SequenceFileOutputFormat.setOutputCompressionType(job,CompressionType.BLOCK);
d)FileOutputFormat.setOutputPath(job, outputPath);

3) Configure the map
DataDrivenImportJob.configureMapper(Job job, String tableName,String tableClassName)
a)job.setOutputKeyClass(Text.class);
b)job.setOutputValueClass(NullWritable.class);
c)job.setMapperClass(com.cloudera.sqoop.mapreduce.TextImportMapper.class);

4) Configure the number of tasks
JobBase.configureNumTasks(Job job)
mapred.map.tasks 4
job.setNumReduceTasks(0);
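
To make section 6 concrete, here is a minimal, self-contained sketch of the same job wiring, written against the stock Hadoop classes in org.apache.hadoop.mapreduce.lib.db instead of Sqoop's internal com.cloudera.sqoop classes. The Record and LineMapper classes are hand-written stand-ins for the generated QueryResult and TextImportMapper, and the connection details, query and paths simply reuse the example command line above; this is an illustrative sketch, not how Sqoop itself is implemented.

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.db.DBConfiguration;
import org.apache.hadoop.mapreduce.lib.db.DBWritable;
import org.apache.hadoop.mapreduce.lib.db.DataDrivenDBInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class ImportJobSketch {

    // Stand-in for the generated QueryResult class: holds the two selected columns.
    public static class Record implements DBWritable, Writable {
        private long fooId;
        private long barId;

        public void readFields(ResultSet rs) throws SQLException {
            fooId = rs.getLong("foo_id");
            barId = rs.getLong("bar_id");
        }
        public void write(PreparedStatement ps) throws SQLException {
            // only needed for export, not for import
        }
        public void readFields(DataInput in) throws IOException {
            fooId = in.readLong();
            barId = in.readLong();
        }
        public void write(DataOutput out) throws IOException {
            out.writeLong(fooId);
            out.writeLong(barId);
        }
        @Override
        public String toString() {
            return fooId + "," + barId;
        }
    }

    // Stand-in for TextImportMapper: emit the row text as the key, NullWritable as the value.
    public static class LineMapper extends Mapper<LongWritable, Record, Text, NullWritable> {
        private final Text line = new Text();
        @Override
        protected void map(LongWritable key, Record val, Context ctx)
                throws IOException, InterruptedException {
            line.set(val.toString());
            ctx.write(line, NullWritable.get());
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "sqoop-style import sketch");
        job.setJarByClass(ImportJobSketch.class);

        // 1) Input: JDBC connection, the user query (with the $CONDITIONS placeholder),
        //    and the bounding query used to compute the splits.
        DBConfiguration.configureDB(job.getConfiguration(),
                "com.mysql.jdbc.Driver", "jdbc:mysql://localhost/test", "root", "123456");
        DataDrivenDBInputFormat.setInput(job, Record.class,
                "select sqoop_1.id as foo_id, sqoop_2.id as bar_id "
                        + "from sqoop_1, sqoop_2 WHERE $CONDITIONS",
                "SELECT MIN(sqoop_1.id), MAX(sqoop_1.id) FROM sqoop_1");
        // Make sure the data-driven input format is the one in effect (mirrors step b)3) above).
        job.setInputFormatClass(DataDrivenDBInputFormat.class);

        // 2) Output: plain text files under the target directory.
        job.setOutputFormatClass(TextOutputFormat.class);
        FileOutputFormat.setOutputPath(job, new Path("/user/sqoop/test"));

        // 3) Map: one output line per row, no reduce-side value.
        job.setMapperClass(LineMapper.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(NullWritable.class);

        // 4) Task numbers: the equivalent of --num-mappers (historically mapred.map.tasks),
        //    and a map-only job.
        job.getConfiguration().setInt("mapreduce.job.maps", 2);
        job.setNumReduceTasks(0);

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}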

[size=large][b]7. Overall flow[/b][/size]
1. Read the structure of the table to be imported, generate the runtime record class (QueryResult by default), package it into a jar, and submit it to Hadoop.

2. Set up the job, which essentially means filling in the parameters described in section 6 above.

3. Hadoop then runs the MapReduce job that carries out the import:

1) First the data is split (DataSplit):
DataDrivenDBInputFormat.getSplits(JobContext job)

2) Once the ranges have been computed, they are written out so they can be read back later:
DataDrivenDBInputFormat.write(DataOutput output)  (this is where the lowerBoundQuery and upperBoundQuery are recorded)

3) Read back the ranges written in step 2):
DataDrivenDBInputFormat.readFields(DataInput input)

4) Then create a RecordReader that reads data from the database:
DataDrivenDBInputFormat.createRecordReader(InputSplit split,TaskAttemptContext context)

5) Create the map task:
TextImportMapper.setup(Context context)

6) The RecordReader reads rows from the relational database one at a time, sets the map's key and value, and hands them to the map:
DBRecordReader.nextKeyValue()

7) Run the map:
TextImportMapper.map(LongWritable key, SqoopRecord val, Context context)
The key that is finally emitted is the row data, produced by QueryResult, and the value is NullWritable.get().

[size=large][b]8. Summary[/b][/size]
Working through this gives a good picture of how a MapReduce job runs, but Sqoop's splitting strategy still seems problematic. Splitting by an ID range like this can produce very uneven splits. For example, with min(split-id)=1 and max(split-id)=3000 handed to three map tasks, the ranges are (1-1000), (1001-2000), (2001-3000). If all the rows in 1001-2000 have been deleted, that map task has nothing to do while the other maps are worked to death, and the whole job is dragged down. And this example is tiny; imagine billions of rows spread across hundreds of map tasks: with that many maps, an unbalanced workload really hurts progress. Is there a better way to split, for example by sampling? Clearly, writing a good MapReduce job is not that simple.