DataX: A Big Data Synchronization Tool

DataX Overview

GitHub: https://github.com/alibaba/DataX

DataX is an offline data synchronization tool/platform widely used within Alibaba Group. It provides efficient data synchronization between heterogeneous data sources, including MySQL, SQL Server, Oracle, PostgreSQL, HDFS, Hive, HBase, OTS, ODPS, and more.

As a data synchronization framework, DataX abstracts synchronization between different data sources into Reader plugins, which read data from a source, and Writer plugins, which write data to a target. In principle the framework can therefore synchronize data between any types of data sources. The plugin system also forms an ecosystem: each newly added data source immediately becomes interoperable with all existing ones.

Framework Design

DataX itself is an offline data synchronization framework built on a Framework + plugin architecture. Reading from and writing to data sources are abstracted as Reader/Writer plugins and incorporated into the overall synchronization framework.

Reader: the data collection module. It reads data from the source and sends it to the Framework.

Writer: the data writing module. It continuously fetches data from the Framework and writes it to the destination.

Framework: connects the Reader and the Writer, acting as the data transfer channel between them, and handles core concerns such as buffering, flow control, concurrency, and data conversion.
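
To make this concrete, here is a minimal sketch of a job file (it uses the built-in streamreader and streamwriter plugins, the same ones used by the self-check job later in this article): one Reader, one Writer, and the framework-level settings are declared together in a single JSON document, and the Framework handles everything between them.

{
    "job": {
        "content": [
            {
                "reader": {
                    "name": "streamreader",
                    "parameter": {
                        "column": [
                            {
                                "type": "string",
                                "value": "hello DataX"
                            }
                        ],
                        "sliceRecordCount": 1
                    }
                },
                "writer": {
                    "name": "streamwriter",
                    "parameter": {
                        "encoding": "UTF-8",
                        "print": true
                    }
                }
            }
        ],
        "setting": {
            "speed": {
                "channel": 1
            }
        }
    }
}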

Plugin Ecosystem

Each supported data source provides a Reader (read) plugin and/or a Writer (write) plugin; see the per-plugin documentation in the DataX repository for details.

RDBMS (relational databases): MySQL, Oracle, OceanBase, SQLServer, PostgreSQL, DRDS, DM (Dameng), generic RDBMS (supports all relational databases)

Alibaba Cloud data warehouse storage: ODPS, ADS, OSS, OCS

NoSQL data storage: OTS, HBase 0.94, HBase 1.1, MongoDB, Hive

Unstructured data storage: TxtFile, FTP, HDFS, Elasticsearch

Core Architecture

Core modules:

A single data synchronization run in DataX is called a Job. When DataX receives a Job, it starts one process to carry out the entire synchronization. The DataX Job module is the central management node of a single job; it handles data cleanup, sub-task splitting (converting a single job into multiple sub-Tasks), TaskGroup management, and related functions.

After the DataX Job starts, it splits the Job into multiple small Tasks (sub-tasks) according to a source-specific splitting strategy, so they can run concurrently. A Task is the smallest unit of a DataX job; each Task synchronizes a portion of the data.

After the Job has been split into Tasks, it calls the Scheduler module, which regroups the Tasks into TaskGroups based on the configured concurrency. Each TaskGroup runs all of its assigned Tasks with a certain degree of concurrency; by default a single TaskGroup runs 5 Tasks concurrently.
Each Task is started by its TaskGroup. Once started, a Task runs a fixed Reader -> Channel -> Writer thread pipeline to perform the synchronization.

Once the job is running, the Job module monitors and waits for all TaskGroups to finish. When every TaskGroup has completed, the Job exits successfully; otherwise it exits abnormally with a non-zero exit code.

DataX scheduling flow:

For example, suppose a user submits a DataX job configured with 20 concurrent channels, whose goal is to synchronize a MySQL database split into 100 sharded tables into ODPS. DataX's scheduling decision works as follows:

The DataX Job splits the work into 100 Tasks according to the sharding.

With 20 configured channels and a default of 5 channels per TaskGroup, DataX calculates that 20 / 5 = 4 TaskGroups are needed.

The 4 TaskGroups divide the 100 Tasks evenly, so each TaskGroup runs 25 Tasks with a concurrency of 5.

Installing DataX

There are two ways to install DataX: download the prebuilt tar.gz package, or compile it yourself from source. Here we use the tar.gz package.

Method 1

Download the DataX toolkit directly: http://datax-opensource.oss-cn-hangzhou.aliyuncs.com/datax.tar.gz

After downloading, extract it to a local directory, enter the bin directory, and you can run synchronization jobs:

$ cd  {YOUR_DATAX_HOME}/bin

$ python datax.py {YOUR_JOB.json}

Self-check script:

python {YOUR_DATAX_HOME}/bin/datax.py {YOUR_DATAX_HOME}/job/job.json

Method 2

Download the DataX source code and compile it yourself.

(1) Download the DataX source code:

$ git clone git@github.com:alibaba/DataX.git

(2) Package it with Maven:

$ cd  {DataX_source_code_home}
$ mvn -U clean package assembly:assembly -Dmaven.test.skip=true

When the build succeeds, the log shows:

[INFO] BUILD SUCCESS
[INFO] -----------------------------------------------------------------
[INFO] Total time: 08:12 min
[INFO] Finished at: 2015-12-13T16:26:48+08:00
[INFO] Final Memory: 133M/960M
[INFO] -----------------------------------------------------------------

After a successful build, the DataX package is located at {DataX_source_code_home}/target/datax/datax/ with the following structure:

$ cd  {DataX_source_code_home}
$ ls ./target/datax/datax/
bin		conf		job		lib		log		log_perf	plugin		script	

System Requirements

Linux

JDK (1.8 or later; 1.8 recommended)

Python (either 2 or 3 works)

Apache Maven 3.x (only needed to compile DataX from source)

Download and Install

wget http://datax-opensource.oss-cn-hangzhou.aliyuncs.com/datax.tar.gz

tar -zxvf datax.tar.gz

cd datax

Official Demo Job

Run the official demo by executing the self-check script:

cd datax

python bin/datax.py job/job.json 

It fails with the following error:

DataX (DATAX-OPENSOURCE-3.0), From Alibaba !
Copyright (C) 2010-2017, Alibaba Group. All Rights Reserved.


2022-05-12 17:02:28.374 [main] WARN  ConfigParser - 插件[streamreader,streamwriter]加载失败,1s后重试... Exception:Code:[Common-00], Describe:[您提供的配置文件存在错误信息,请检查您的作业配置 .] - 配置信息错误,您提供的配置文件[/usr/local/program/datax/plugin/reader/._hbase094xreader/plugin.json]不存在. 请检查您的配置文件. 
2022-05-12 17:02:29.382 [main] ERROR Engine -DataX智能分析,该任务最可能的错误原因是:
com.alibaba.datax.common.exception.DataXException: Code:[Common-00], Describe:[您提供的配置文件存在错误信息,请检查您的作业配置 .] - 配置信息错误,您提供的配置文件[/usr/local/program/datax/plugin/reader/._hbase094xreader/plugin.json]不存在. 请检查您的配置文件.
        at com.alibaba.datax.common.exception.DataXException.asDataXException(DataXException.java:26)
        at com.alibaba.datax.common.util.Configuration.from(Configuration.java:95)
        at com.alibaba.datax.core.util.ConfigParser.parseOnePluginConfig(ConfigParser.java:153)
        at com.alibaba.datax.core.util.ConfigParser.parsePluginConfig(ConfigParser.java:125)
        at com.alibaba.datax.core.util.ConfigParser.parse(ConfigParser.java:63)
        at com.alibaba.datax.core.Engine.entry(Engine.java:137)
        at com.alibaba.datax.core.Engine.main(Engine.java:204)

Delete all the ._xxxx hidden files (stray macOS metadata files packaged into the tarball) under datax/plugin/reader/ and datax/plugin/writer/:

rm -rf plugin/reader/._*er

rm -rf plugin/writer/._*er

Run it again: python bin/datax.py job/job.json

DataX (DATAX-OPENSOURCE-3.0), From Alibaba !
Copyright (C) 2010-2017, Alibaba Group. All Rights Reserved.


2022-05-12 17:08:40.916 [main] INFO  VMInfo - VMInfo# operatingSystem class => sun.management.OperatingSystemImpl
2022-05-12 17:08:40.928 [main] INFO  Engine - the machine info  => 

        osInfo: Oracle Corporation 1.8 25.311-b11
        jvmInfo:        Linux amd64 3.10.0-1160.59.1.el7.x86_64
        cpu num:        2

        totalPhysicalMemory:    -0.00G
        freePhysicalMemory:     -0.00G
        maxFileDescriptorCount: -1
        currentOpenFileDescriptorCount: -1

        GC Names        [PS MarkSweep, PS Scavenge]

        MEMORY_NAME                    | allocation_size                | init_size                      
        PS Eden Space                  | 256.00MB                       | 256.00MB                       
        Code Cache                     | 240.00MB                       | 2.44MB                         
        Compressed Class Space         | 1,024.00MB                     | 0.00MB                         
        PS Survivor Space              | 42.50MB                        | 42.50MB                        
        PS Old Gen                     | 683.00MB                       | 683.00MB                       
        Metaspace                      | -0.00MB                        | 0.00MB                         


2022-05-12 17:08:40.960 [main] INFO  Engine - 
{
        "content":[
                {
                        "reader":{
                                "name":"streamreader",
                                "parameter":{
                                        "column":[
                                                {
                                                        "type":"string",
                                                        "value":"DataX"
                                                },
                                                {
                                                        "type":"long",
                                                        "value":xxxx0604
                                                },
                                                {
                                                        "type":"date",
                                                        "value":"xxxx-06-04 00:00:00"
                                                },
                                                {
                                                        "type":"bool",
                                                        "value":true
                                                },
                                                {
                                                        "type":"bytes",
                                                        "value":"test"
                                                }
                                        ],
                                        "sliceRecordCount":100000
                                }
                        },
                        "writer":{
                                "name":"streamwriter",
                                "parameter":{
                                        "encoding":"UTF-8",
                                        "print":false
                                }
                        }
                }
        ],
        "setting":{
                "errorLimit":{
                        "percentage":0.02,
                        "record":0
                },
                "speed":{
                        "byte":10485760
                }
        }
}

2022-05-12 17:08:40.997 [main] WARN  Engine - prioriy set to 0, because NumberFormatException, the value is: null
2022-05-12 17:08:41.000 [main] INFO  PerfTrace - PerfTrace traceId=job_-1, isEnable=false, priority=0
2022-05-12 17:08:41.001 [main] INFO  JobContainer - DataX jobContainer starts job.
2022-05-12 17:08:41.003 [main] INFO  JobContainer - Set jobId = 0
2022-05-12 17:08:41.035 [job-0] INFO  JobContainer - jobContainer starts to do prepare ...
2022-05-12 17:08:41.036 [job-0] INFO  JobContainer - DataX Reader.Job [streamreader] do prepare work .
2022-05-12 17:08:41.037 [job-0] INFO  JobContainer - DataX Writer.Job [streamwriter] do prepare work .
2022-05-12 17:08:41.038 [job-0] INFO  JobContainer - jobContainer starts to do split ...
2022-05-12 17:08:41.040 [job-0] INFO  JobContainer - Job set Max-Byte-Speed to 10485760 bytes.
2022-05-12 17:08:41.042 [job-0] INFO  JobContainer - DataX Reader.Job [streamreader] splits to [1] tasks.
2022-05-12 17:08:41.043 [job-0] INFO  JobContainer - DataX Writer.Job [streamwriter] splits to [1] tasks.
2022-05-12 17:08:41.080 [job-0] INFO  JobContainer - jobContainer starts to do schedule ...
2022-05-12 17:08:41.087 [job-0] INFO  JobContainer - Scheduler starts [1] taskGroups.
2022-05-12 17:08:41.093 [job-0] INFO  JobContainer - Running by standalone Mode.
2022-05-12 17:08:41.115 [taskGroup-0] INFO  TaskGroupContainer - taskGroupId=[0] start [1] channels for [1] tasks.
2022-05-12 17:08:41.122 [taskGroup-0] INFO  Channel - Channel set byte_speed_limit to -1, No bps activated.
2022-05-12 17:08:41.127 [taskGroup-0] INFO  Channel - Channel set record_speed_limit to -1, No tps activated.
2022-05-12 17:08:41.147 [taskGroup-0] INFO  TaskGroupContainer - taskGroup[0] taskId[0] attemptCount[1] is started
2022-05-12 17:08:41.350 [taskGroup-0] INFO  TaskGroupContainer - taskGroup[0] taskId[0] is successed, used[206]ms
2022-05-12 17:08:41.351 [taskGroup-0] INFO  TaskGroupContainer - taskGroup[0] completed it's tasks.
2022-05-12 17:08:51.134 [job-0] INFO  StandAloneJobContainerCommunicator - Total 100000 records, 2600000 bytes | Speed 253.91KB/s, 10000 records/s | Error 0 records, 0 bytes |  All Task WaitWriterTime 0.084s |  All Task WaitReaderTime 0.096s | Percentage 100.00%
2022-05-12 17:08:51.134 [job-0] INFO  AbstractScheduler - Scheduler accomplished all tasks.
2022-05-12 17:08:51.135 [job-0] INFO  JobContainer - DataX Writer.Job [streamwriter] do post work.
2022-05-12 17:08:51.136 [job-0] INFO  JobContainer - DataX Reader.Job [streamreader] do post work.
2022-05-12 17:08:51.136 [job-0] INFO  JobContainer - DataX jobId [0] completed successfully.
2022-05-12 17:08:51.138 [job-0] INFO  HookInvoker - No hook invoked, because base dir not exists or is a file: /usr/local/program/datax/hook
2022-05-12 17:08:51.139 [job-0] INFO  JobContainer - 
         [total cpu info] => 
                averageCpu                     | maxDeltaCpu                    | minDeltaCpu                    
                -1.00%                         | -1.00%                         | -1.00%
                        

         [total gc info] => 
                 NAME                 | totalGCCount       | maxDeltaGCCount    | minDeltaGCCount    | totalGCTime        | maxDeltaGCTime     | minDeltaGCTime     
                 PS MarkSweep         | 0                  | 0                  | 0                  | 0.000s             | 0.000s             | 0.000s             
                 PS Scavenge          | 0                  | 0                  | 0                  | 0.000s             | 0.000s             | 0.000s             

2022-05-12 17:08:51.139 [job-0] INFO  JobContainer - PerfTrace not enable!
2022-05-12 17:08:51.140 [job-0] INFO  StandAloneJobContainerCommunicator - Total 100000 records, 2600000 bytes | Speed 253.91KB/s, 10000 records/s | Error 0 records, 0 bytes |  All Task WaitWriterTime 0.084s |  All Task WaitReaderTime 0.096s | Percentage 100.00%
2022-05-12 17:08:51.141 [job-0] INFO  JobContainer - 
任务启动时刻                    : 2022-05-12 17:08:41
任务结束时刻                    : 2022-05-12 17:08:51
任务总计耗时                    :                 10s
任务平均流量                    :          253.91KB/s
记录写入速度                    :          10000rec/s
读出记录总数                    :              100000
读写失败总数                    :                   0

Reading Data from a Stream and Printing to the Console

View the Configuration Template

You can view a configuration template with: python datax.py -r {YOUR_READER} -w {YOUR_WRITER}

cd datax/bin 

python datax.py -r streamreader -w streamwriter
DataX (DATAX-OPENSOURCE-3.0), From Alibaba !
Copyright (C) 2010-2017, Alibaba Group. All Rights Reserved.


Please refer to the streamreader document:
     https://github.com/alibaba/DataX/blob/master/streamreader/doc/streamreader.md 

Please refer to the streamwriter document:
     https://github.com/alibaba/DataX/blob/master/streamwriter/doc/streamwriter.md 
 
Please save the following configuration as a json file and  use
     python {DATAX_HOME}/bin/datax.py {JSON_FILE_NAME}.json 
to run the job.

{
    "job": {
        "content": [
            {
                "reader": {
                    "name": "streamreader", 
                    "parameter": {
                        "column": [], 
                        "sliceRecordCount": ""
                    }
                }, 
                "writer": {
                    "name": "streamwriter", 
                    "parameter": {
                        "encoding": "", 
                        "print": true
                    }
                }
            }
        ], 
        "setting": {
            "speed": {
                "channel": ""
            }
        }
    }
}

Create the Job Configuration File

Based on the template, create the job configuration file: vim job/stream2stream.json

{
  "job": {
    "content": [
      {
        "reader": {
          "name": "streamreader",
          "parameter": {
            "sliceRecordCount": 10,
            "column": [
              {
                "type": "long",
                "value": "10"
              },
              {
                "type": "string",
                "value": "hello,你好,世界-DataX"
              }
            ]
          }
        },
        "writer": {
          "name": "streamwriter",
          "parameter": {
            "encoding": "UTF-8",
            "print": true
          }
        }
      }
    ],
    "setting": {
      "speed": {
        "channel": 5
       }
    }
  }
}

Start the DataX Job

cd datax

python bin/datax.py job/stream2stream.json
......................
2022-05-12 17:15:51.107 [job-0] INFO  StandAloneJobContainerCommunicator - Total 50 records, 950 bytes | Speed 95B/s, 5 records/s | Error 0 records, 0 bytes |  All Task WaitWriterTime 0.000s |  All Task WaitReaderTime 0.000s | Percentage 100.00%
2022-05-12 17:15:51.108 [job-0] INFO  AbstractScheduler - Scheduler accomplished all tasks.
2022-05-12 17:15:51.109 [job-0] INFO  JobContainer - DataX Writer.Job [streamwriter] do post work.
2022-05-12 17:15:51.111 [job-0] INFO  JobContainer - DataX Reader.Job [streamreader] do post work.
2022-05-12 17:15:51.111 [job-0] INFO  JobContainer - DataX jobId [0] completed successfully.
2022-05-12 17:15:51.113 [job-0] INFO  HookInvoker - No hook invoked, because base dir not exists or is a file: /usr/local/program/datax/hook
2022-05-12 17:15:51.114 [job-0] INFO  JobContainer - 
         [total cpu info] => 
                averageCpu                     | maxDeltaCpu                    | minDeltaCpu                    
                -1.00%                         | -1.00%                         | -1.00%
                        

         [total gc info] => 
                 NAME                 | totalGCCount       | maxDeltaGCCount    | minDeltaGCCount    | totalGCTime        | maxDeltaGCTime     | minDeltaGCTime     
                 PS MarkSweep         | 0                  | 0                  | 0                  | 0.000s             | 0.000s             | 0.000s             
                 PS Scavenge          | 0                  | 0                  | 0                  | 0.000s             | 0.000s             | 0.000s             

2022-05-12 17:15:51.115 [job-0] INFO  JobContainer - PerfTrace not enable!
2022-05-12 17:15:51.115 [job-0] INFO  StandAloneJobContainerCommunicator - Total 50 records, 950 bytes | Speed 95B/s, 5 records/s | Error 0 records, 0 bytes |  All Task WaitWriterTime 0.000s |  All Task WaitReaderTime 0.000s | Percentage 100.00%
2022-05-12 17:15:51.116 [job-0] INFO  JobContainer - 
任务启动时刻                    : 2022-05-12 17:15:40
任务结束时刻                    : 2022-05-12 17:15:51
任务总计耗时                    :                 10s
任务平均流量                    :               95B/s
记录写入速度                    :              5rec/s
读出记录总数                    :                  50
读写失败总数                    :                   0

Extracting Data from MySQL to HDFS

Get the Configuration Template

python bin/datax.py -r mysqlreader -w hdfswriter

DataX (DATAX-OPENSOURCE-3.0), From Alibaba !
Copyright (C) 2010-2017, Alibaba Group. All Rights Reserved.


Please refer to the mysqlreader document:
     https://github.com/alibaba/DataX/blob/master/mysqlreader/doc/mysqlreader.md 

Please refer to the hdfswriter document:
     https://github.com/alibaba/DataX/blob/master/hdfswriter/doc/hdfswriter.md 
 
Please save the following configuration as a json file and  use
     python {DATAX_HOME}/bin/datax.py {JSON_FILE_NAME}.json 
to run the job.

{
    "job": {
        "content": [
            {
                "reader": {
                    "name": "mysqlreader", 
                    "parameter": {
                        "column": [], 
                        "connection": [
                            {
                                "jdbcUrl": [], 
                                "table": []
                            }
                        ], 
                        "password": "", 
                        "username": "", 
                        "where": ""
                    }
                }, 
                "writer": {
                    "name": "hdfswriter", 
                    "parameter": {
                        "column": [], 
                        "compress": "", 
                        "defaultFS": "", 
                        "fieldDelimiter": "", 
                        "fileName": "", 
                        "fileType": "", 
                        "path": "", 
                        "writeMode": ""
                    }
                }
            }
        ], 
        "setting": {
            "speed": {
                "channel": ""
            }
        }
    }
}

Create the job configuration file job/mysql2hdfs.json:

{
  "job": {
    "content": [
      {
        "reader": {
          "name": "mysqlreader",
          "parameter": {
            "column": [
              "id",
              "name",
              "age"
            ],
            "connection": [
              {
                "jdbcUrl": [
                  "jdbc:mysql://127.0.0.1:3306/demo"
                ],
                "table": [
                  "user"
                ]
              }
            ],
            "password": "123456",
            "username": "root",
            "where": ""
          }
        },
        "writer": {
          "name": "hdfswriter",
          "parameter": {
            "column": [
              {
                "name": "id",
                "type": "INT"
              },
              {
                "name": "name",
                "type": "STRING"
              },
              {
                "name": "age",
                "type": "SMALLINT"
              }
            ],
            "compress": "GZIP",
            "defaultFS": "hdfs://administrator:9000",
            "fieldDelimiter": "\t",
            "fileName": "mysql2hdfs.text",
            "fileType": "text",
            "path": "/datax",
            "writeMode": "append"
          }
        }
      }
    ],
    "setting": {
      "speed": {
        "channel": "10"
      }
    }
  }
}
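
The reader side of this job assumes that the demo database in MySQL already contains a user table with id, name, and age columns. If you need to create it first, a minimal sketch looks like the following (the sample rows are made-up values, purely for illustration):

create table if not exists demo.user (
    id   int primary key,
    name varchar(50),
    age  smallint
);

insert into demo.user (id, name, age) values
    (1, 'tom', 18),
    (2, 'jack', 20),
    (3, 'lucy', 22),
    (4, 'mike', 25);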

Start the DataX Job

Note: when DataX imports data, the target directory must already exist, so create the export directory before running the DataX job:

hadoop fs -mkdir -p /datax

Start DataX:

cd datax

python bin/datax.py job/mysql2hdfs.json
................
2022-05-13 15:07:22.405 [job-0] INFO  HookInvoker - No hook invoked, because base dir not exists or is a file: /usr/local/program/datax/hook
2022-05-13 15:07:22.507 [job-0] INFO  JobContainer - 
         [total cpu info] => 
                averageCpu                     | maxDeltaCpu                    | minDeltaCpu                    
                -1.00%                         | -1.00%                         | -1.00%
                        

         [total gc info] => 
                 NAME                 | totalGCCount       | maxDeltaGCCount    | minDeltaGCCount    | totalGCTime        | maxDeltaGCTime     | minDeltaGCTime     
                 PS MarkSweep         | 1                  | 1                  | 1                  | 0.051s             | 0.051s             | 0.051s             
                 PS Scavenge          | 1                  | 1                  | 1                  | 0.029s             | 0.029s             | 0.029s             

2022-05-13 15:07:22.508 [job-0] INFO  JobContainer - PerfTrace not enable!
2022-05-13 15:07:22.508 [job-0] INFO  StandAloneJobContainerCommunicator - Total 4 records, 24 bytes | Speed 2B/s, 0 records/s | Error 0 records, 0 bytes |  All Task WaitWriterTime 0.000s |  All Task WaitReaderTime 0.000s | Percentage 100.00%
2022-05-13 15:07:22.510 [job-0] INFO  JobContainer - 
任务启动时刻                    : 2022-05-13 15:07:09
任务结束时刻                    : 2022-05-13 15:07:22
任务总计耗时                    :                 12s
任务平均流量                    :                2B/s
记录写入速度                    :              0rec/s
读出记录总数                    :                   4
读写失败总数                    :                   0
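
After the job completes, you can check the result in HDFS, for example as follows (hdfswriter typically appends a random suffix to the configured fileName, and since the output is gzip-compressed, hadoop fs -text is used to view it as plain text):

hadoop fs -ls /datax

hadoop fs -text /datax/mysql2hdfs.text*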

Hadoop High Availability (HA) Configuration

                "writer": {
                    "name": "hdfswriter",
                    "parameter": {
                        "column": [
   
                        ],
                        "hadoopConfig": {
                            "dfs.nameservices": "mycluster",
                            "dfs.namenode.rpc-address.mycluster.nn2": "node01:9000",
                            "dfs.namenode.rpc-address.mycluster.nn1": "node02:9000",
                            "dfs.client.failover.proxy.provider.mycluster": "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider",
                            "dfs.ha.namenodes.mycluster": "nn1,nn2"
                        },
                        "compress": "gzip",
                        "defaultFS": "hdfs://mycluster",
                        "fieldDelimiter": "\t",
                        "fileName": "mysql2hdfs",
                        "fileType": "text",
                        "path": "${targetdir}",
                        "writeMode": "append"
                    }
                }

Starting a Job with Parameters

In the configuration file, the export directory references a variable; when the job is executed, the variable's value is passed in dynamically on the command line:

bin/datax.py job/mysql2hdfs.json -p "-Dtargetdir=/datax/2022-05-13"

DataX Import Script

Create a mysql_to_hdfs.sh script:

#!/bin/bash
# Expect exactly one argument: the date used as the target subdirectory
if [ "$#" -ne 1 ]; then
  echo "Usage: $0 <date>"
  exit 1
fi

targetdir="/data/$1"

# DataX installation path
datax_path="/usr/local/program/datax"

# Make sure the HDFS target directory exists and is empty
hadoop fs -rm -r "${targetdir}"
hadoop fs -mkdir -p "${targetdir}"

# Run the DataX job, passing the target directory as a job parameter
python ${datax_path}/bin/datax.py ${datax_path}/job/mysql2hdfs.json -p "-Dtargetdir=${targetdir}"

Grant execute permission:

chmod +x mysql_to_hdfs.sh

Run it:

./mysql_to_hdfs.sh 2022-05-13

Extracting Data from Hive to MySQL

Prepare Hive Data

Create a Hive external table:

create external table if not exists tb_user(id int ,name string,age int) row format delimited fields terminated by ',' lines terminated by '\n';

Create a data file tb_user.txt (e.g. with vim) and add the following rows:

1,hive1,11
2,hive2,22
3,hive3,33
4,hive4,44
5,hive5,55

Upload it to the Hive external table directory /hive/warehouse/tb_user:

hadoop fs -put tb_user.txt /hive/warehouse/tb_user

Query the tb_user table:

hive (default)> select * from tb_user;
OK
tb_user.id      tb_user.name    tb_user.age
1       hive1   11
2       hive2   22
3       hive3   33
4       hive4   44
5       hive5   55

View the Configuration Template

cd datax/bin 

python datax.py -r hdfsreader -w mysqlwriter
{
    "job": {
        "content": [
            {
                "reader": {
                    "name": "hdfsreader", 
                    "parameter": {
                        "column": [], 
                        "defaultFS": "", 
                        "encoding": "UTF-8", 
                        "fieldDelimiter": ",", 
                        "fileType": "orc", 
                        "path": ""
                    }
                }, 
                "writer": {
                    "name": "mysqlwriter", 
                    "parameter": {
                        "column": [], 
                        "connection": [
                            {
                                "jdbcUrl": "", 
                                "table": []
                            }
                        ], 
                        "password": "", 
                        "preSql": [], 
                        "session": [], 
                        "username": "", 
                        "writeMode": ""
                    }
                }
            }
        ], 
        "setting": {
            "speed": {
                "channel": ""
            }
        }
    }
}

Create the job configuration file job/hive2mysql.json:

{
  "job": {
    "content": [
      {
        "reader": {
          "name": "hdfsreader",
          "parameter": {
            "column": [
              {
                "index": 0,
                "type": "long"
              },
              {
                "index": 1,
                "type": "string"
              },
              {
                "index": 2,
                "type": "long"
              }
            ],
            "defaultFS": "hdfs://112.74.96.150:9000",
            "encoding": "UTF-8",
            "fieldDelimiter": ",",
            "fileType": "text",
            "path": "/hive/warehouse/tb_user/*"
          }
        },
        "writer": {
          "name": "mysqlwriter",
          "parameter": {
            "column": [
              "id",
              "name",
              "age"
            ],
            "connection": [
              {
                "jdbcUrl": "jdbc:mysql://127.0.0.1:3306/demo",
                "table": [
                  "user"
                ]
              }
            ],
            "password": "123456",
            "preSql": [
              "delete from user"
            ],
            "session": [
              "select count(*) from user"
            ],
            "username": "root",
            "writeMode": "insert"
          }
        }
      }
    ],
    "setting": {
      "speed": {
        "channel": "2"
      }
    }
  }
}

Start the DataX Job

python bin/datax.py job/hive2mysql.json 
DataX (DATAX-OPENSOURCE-3.0), From Alibaba !
Copyright (C) 2010-2017, Alibaba Group. All Rights Reserved.


2022-05-14 14:44:57.570 [main] INFO  VMInfo - VMInfo# operatingSystem class => sun.management.OperatingSystemImpl
2022-05-14 14:44:57.579 [main] INFO  Engine - the machine info  => 

        osInfo: Oracle Corporation 1.8 25.311-b11
        jvmInfo:        Linux amd64 3.10.0-1160.59.1.el7.x86_64
        cpu num:        2

        totalPhysicalMemory:    -0.00G
        freePhysicalMemory:     -0.00G
        maxFileDescriptorCount: -1
        currentOpenFileDescriptorCount: -1

        GC Names        [PS MarkSweep, PS Scavenge]

        MEMORY_NAME                    | allocation_size                | init_size                      
        PS Eden Space                  | 256.00MB                       | 256.00MB                       
        Code Cache                     | 240.00MB                       | 2.44MB                         
        Compressed Class Space         | 1,024.00MB                     | 0.00MB                         
        PS Survivor Space              | 42.50MB                        | 42.50MB                        
        PS Old Gen                     | 683.00MB                       | 683.00MB                       
        Metaspace                      | -0.00MB                        | 0.00MB                         

................

2022-05-14 14:45:01.151 [0-0-0-writer] INFO  DBUtil - execute sql:[select count(*) from user]
2022-05-14 14:45:01.178 [0-0-0-reader] INFO  Reader$Task - end read source files...
2022-05-14 14:45:01.512 [taskGroup-0] INFO  TaskGroupContainer - taskGroup[0] taskId[0] is successed, used[519]ms
2022-05-14 14:45:01.513 [taskGroup-0] INFO  TaskGroupContainer - taskGroup[0] completed it's tasks.
2022-05-14 14:45:11.007 [job-0] INFO  StandAloneJobContainerCommunicator - Total 5 records, 40 bytes | Speed 4B/s, 0 records/s | Error 0 records, 0 bytes |  All Task WaitWriterTime 0.000s |  All Task WaitReaderTime 0.019s | Percentage 100.00%
2022-05-14 14:45:11.007 [job-0] INFO  AbstractScheduler - Scheduler accomplished all tasks.
2022-05-14 14:45:11.008 [job-0] INFO  JobContainer - DataX Writer.Job [mysqlwriter] do post work.
2022-05-14 14:45:11.008 [job-0] INFO  JobContainer - DataX Reader.Job [hdfsreader] do post work.
2022-05-14 14:45:11.008 [job-0] INFO  JobContainer - DataX jobId [0] completed successfully.
2022-05-14 14:45:11.010 [job-0] INFO  HookInvoker - No hook invoked, because base dir not exists or is a file: /usr/local/program/datax/hook
2022-05-14 14:45:11.011 [job-0] INFO  JobContainer - 
         [total cpu info] => 
                averageCpu                     | maxDeltaCpu                    | minDeltaCpu                    
                -1.00%                         | -1.00%                         | -1.00%
                        

         [total gc info] => 
                 NAME                 | totalGCCount       | maxDeltaGCCount    | minDeltaGCCount    | totalGCTime        | maxDeltaGCTime     | minDeltaGCTime     
                 PS MarkSweep         | 1                  | 1                  | 1                  | 0.059s             | 0.059s             | 0.059s             
                 PS Scavenge          | 1                  | 1                  | 1                  | 0.026s             | 0.026s             | 0.026s             

2022-05-14 14:45:11.011 [job-0] INFO  JobContainer - PerfTrace not enable!
2022-05-14 14:45:11.012 [job-0] INFO  StandAloneJobContainerCommunicator - Total 5 records, 40 bytes | Speed 4B/s, 0 records/s | Error 0 records, 0 bytes |  All Task WaitWriterTime 0.000s |  All Task WaitReaderTime 0.019s | Percentage 100.00%
2022-05-14 14:45:11.019 [job-0] INFO  JobContainer - 
任务启动时刻                    : 2022-05-14 14:44:57
任务结束时刻                    : 2022-05-14 14:45:11
任务总计耗时                    :                 13s
任务平均流量                    :                4B/s
记录写入速度                    :              0rec/s
读出记录总数                    :                   5
读写失败总数                    :                   0
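
After the job completes, you can verify the result by querying the target table in MySQL. Since preSql cleared the table first, the five rows read from /hive/warehouse/tb_user (hive1 through hive5) should now be the only rows present:

select id, name, age from demo.user;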

DataX Web

DataX Web is a distributed data synchronization tool built on top of DataX. It provides a simple, easy-to-use web interface that lowers the learning curve of DataX, shortens task configuration time, and helps avoid configuration mistakes.

For a DataX Web usage guide, see: https://blog.csdn.net/qq_38628046/article/details/124769355
