Is it possible to read MongoDB data, process it with Hadoop, and output the results to an RDBMS (MySQL)?

Summary:

Is it possible to:

Import data into Hadoop with the «MongoDB Connector for Hadoop».

Process it with Hadoop MapReduce.

Export it with Sqoop in a single transaction.

I am building a web application with MongoDB. While MongoDB works well for most of the workload, in some parts I need stronger transactional guarantees, for which I use a MySQL database.

My problem is that I want to read a big MongoDB collection for data analysis, but the size of the collection means that the analytic job would take too long to process. Unfortunately, MongoDB's built-in map-reduce framework would not work well for this job, so I would prefer to carry out the analysis with Apache Hadoop.

I understand that it is possible to read data from MongoDB into Hadoop by using the «MongoDB Connector for Hadoop», which reads data from MongoDB, processes it with MapReduce in Hadoop, and finally outputs the results back into a MongoDB database.

The problem is that I want the output of the MapReduce to go into a MySQL database, rather than MongoDB, because the results must be merged with other MySQL tables.

For this purpose, I know that Sqoop can export the results of a Hadoop MapReduce job into MySQL.

Ultimately, I want to read MongoDB data, process it with Hadoop, and finally output the results into a MySQL database.

Is this possible? Which tools are available to do this?

Solution

TL;DR: Set an output format that writes to an RDBMS in your Hadoop job:

job.setOutputFormatClass( DBOutputFormat.class );

Several things to note:

Exporting data from MongoDB to Hadoop using Sqoop is not possible. This is because Sqoop uses JDBC, which provides a call-level API for SQL-based databases, and MongoDB is not an SQL-based database. You can look at the «MongoDB Connector for Hadoop» to do this job. The connector is available on GitHub. (Edit: as you point out in your update.)

Sqoop exports are not made in a single transaction by default. Instead, according to the Sqoop docs:

Since Sqoop breaks down export process into multiple transactions, it is possible that a failed export job may result in partial data being committed to the database. This can further lead to subsequent jobs failing due to insert collisions in some cases, or lead to duplicated data in others. You can overcome this problem by specifying a staging table via the --staging-table option which acts as an auxiliary table that is used to stage exported data. The staged data is finally moved to the destination table in a single transaction.
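If you did route the results through Sqoop after all, a staged export might look roughly like the following (the connection string, credentials, table names and export directory are placeholders):

sqoop export \
  --connect jdbc:mysql://localhost:3306/analytics \
  --username user --password password \
  --table word_counts \
  --staging-table word_counts_staging \
  --clear-staging-table \
  --export-dir /user/hadoop/job-output

Here --staging-table is what provides the single final transaction described in the docs above.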

The «MongoDB Connector for Hadoop» does not seem to force the workflow you describe. According to the docs:

This connectivity takes the form of allowing both reading MongoDB data into Hadoop (for use in MapReduce jobs as well as other components of the Hadoop ecosystem), as well as writing the results of Hadoop jobs out to MongoDB.
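For the input side, the connector reads the source collection named by its mongo.input.uri setting; a minimal sketch (host, database and collection names are placeholders) could look like this:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

import com.mongodb.hadoop.MongoInputFormat;

class MongoInputConfig {
    /* Point the job at the source collection (placeholder URI) and use the
     * connector's InputFormat so mappers receive MongoDB documents. */
    static Job newJob() throws IOException {
        Configuration conf = new Configuration();
        conf.set( "mongo.input.uri", "mongodb://localhost:27017/mydb.mycollection" );
        Job job = Job.getInstance( conf, "mongo-to-mysql" );
        job.setInputFormatClass( MongoInputFormat.class );
        return job;
    }
}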

Indeed, as far as I understand from the «MongoDB Connector for Hadoop» examples, it would be possible to specify an org.apache.hadoop.mapreduce.lib.db.DBOutputFormat as the output format of your Hadoop MapReduce job to write the output to a MySQL database. Following the example from the connector repository:

job.setMapperClass( TokenizerMapper.class );
job.setCombinerClass( IntSumReducer.class );
job.setReducerClass( IntSumReducer.class );
job.setOutputKeyClass( Text.class );
job.setOutputValueClass( IntWritable.class );
job.setInputFormatClass( MongoInputFormat.class );
/* Instead of:
 * job.setOutputFormatClass( MongoOutputFormat.class );
 * we use an OutputFormatClass that writes the job results
 * to a MySQL database. Beware that the following OutputFormat
 * will only write the *key* to the database, but the principle
 * remains the same for all output formatters.
 */
job.setOutputFormatClass( DBOutputFormat.class );
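The snippet above only selects the output format class. As a rough sketch of the remaining wiring (not taken from the connector examples; the driver, URL, credentials, table and column names below are placeholders), the JDBC connection is configured with DBConfiguration, the target table with DBOutputFormat.setOutput(), and the job's output key must implement DBWritable so that DBOutputFormat knows how to turn it into an INSERT:

import java.io.IOException;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.db.DBConfiguration;
import org.apache.hadoop.mapreduce.lib.db.DBOutputFormat;
import org.apache.hadoop.mapreduce.lib.db.DBWritable;

/* Hypothetical output key: one row of a word_counts(word, count) MySQL table.
 * DBOutputFormat turns each emitted key into one INSERT by calling write(). */
class WordCountRecord implements DBWritable {
    private String word;
    private int count;

    public WordCountRecord() { }

    public WordCountRecord( String word, int count ) {
        this.word = word;
        this.count = count;
    }

    public void write( PreparedStatement stmt ) throws SQLException {
        stmt.setString( 1, word );
        stmt.setInt( 2, count );
    }

    public void readFields( ResultSet rs ) throws SQLException {
        word = rs.getString( 1 );
        count = rs.getInt( 2 );
    }
}

class MySqlOutputConfig {
    /* Wire up the MySQL side of the job; all names here are placeholders. */
    static void configure( Job job ) throws IOException {
        DBConfiguration.configureDB( job.getConfiguration(),
                "com.mysql.jdbc.Driver",
                "jdbc:mysql://localhost:3306/analytics",
                "user", "password" );
        DBOutputFormat.setOutput( job, "word_counts", "word", "count" );
        job.setOutputKeyClass( WordCountRecord.class );
        job.setOutputValueClass( NullWritable.class );
        job.setOutputFormatClass( DBOutputFormat.class );
    }
}

With this setup the reducer emits WordCountRecord keys (and, for example, NullWritable values) instead of the Text/IntWritable pair above, which is exactly what the comment about only the key being written refers to.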
