The new project uses OGG to extract change data and land it in HDFS, organized into year/month/day + hour directories. Since OGG keeps both the before image and the after image of each change, the records are stored as JSON text, for example:
{"table":"TEST.TT_SALES_RECORDS","op_type":"U","op_ts":"2020-05-19 02:05:03.000701","current_ts":"2020-05-19T10:05:10.427000","before":{"ID":178733,"PS_ORDER_NO":"PV2003110002","PS_ORDER_STATUS":35121004,"PS_ORDER_TYPE":null,"PS_ORDER_DATE":"2020-03-11 17:52:20","SALES_CONSULTANT":null,"BUYER_NAME":"猪哈哈"},"after":{"ID":178733,"PS_ORDER_NO":"PV2003110002","PS_ORDER_STATUS":35121004,"PS_ORDER_TYPE":null,"PS_ORDER_DATE":"2020-03-12 17:52:20","SALES_CONSULTANT":null,"BUYER_NAME":"猪坚强"}}
Problem 1: the data has no primary key, so a row_id has to be configured
TABLE TEST.TT_SALES_RECORDS, TOKENS (DBROWID = @GETENV('RECORD','ROWID'));
Problem 2: there is no newline between JSON records
gg.handler.name.format=json
gg.handler.name.format.jsonDelimiter=CDATA[\n]
Problem 3: inserts and updates carry all columns, but deletes carry only the ID
The NOCOMPRESSDELETES parameter makes the extract write the values of all columns for deletes
The final text format:
{"table":"TEST.TT_SALES_RECORDS","op_type":"U","op_ts":"2020-05-19 02:05:03.000701","current_ts":"2020-05-19T10:05:10.427000","tokens":{"DBROWID":"AABIl2AAiAABwGZAAB"},"before":{"ID":178733,"PS_ORDER_NO":"PV2003110002","PS_ORDER_STATUS":35121004,"PS_ORDER_TYPE":null,"PS_ORDER_DATE":"2020-03-11 17:52:20","SALES_CONSULTANT":null,"BUYER_NAME":"猪哈哈"},"after":{"ID":178733,"PS_ORDER_NO":"PV2003110002","PS_ORDER_STATUS":35121004,"PS_ORDER_TYPE":null,"PS_ORDER_DATE":"2020-03-12 17:52:20","SALES_CONSULTANT":null,"BUYER_NAME":"猪坚强"}}
Then use Sqoop to extract the full table, together with its ROWID, into HDFS:
sqoop import \
--connect jdbc:oracle:thin:@//IP:1521/DB \
--username u \
--password p \
--query "SELECT rowidtochar(t.ROWID) as ROW_ID,t.* FROM SBPOPT.TT_TEST t where \$CONDITIONS " \
--delete-target-dir \
--target-dir /user/asmp/hive/ogg/tt_test \
--split-by bill_no \
--as-parquetfile \
-m 16
A real example as actually used:
sqoop import "-Dorg.apache.sqoop.splitter.allow_text_splitter=true" --connect jdbc:oracle:thin:@//127.0.0.1:1521/ASMP2 --username test --password test --delete-target-dir --target-dir /user/asmp/hive/asmp/tt_repair_part --query "select rowidtochar(t.rowid) as row_id,t.* from sbpopt.tt_repair_part t where \$CONDITIONS" --as-parquetfile --split-by "to_char(part_date,'yyyy')" -m 8
Notes:
(1) Inserts, updates and deletes must all carry the full set of columns
(2) Add the row_id
(3) Fields are in JSON format
(4) Records are separated by newlines
(5) Directories are created per date, with one file per hour
(6) The data format in HDFS and in Kafka is identical
(7) Each table has its own Kafka topic
(8) Check whether the Kafka environment requires Kerberos authentication
Five processes were configured in total:
production HDFS, production Kafka, test HDFS, test Kafka1 and test Kafka2
After that, Spark can read the data, parse the change records, and merge them into the full data.
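A minimal PySpark sketch of that merge, keeping only the latest change per ROWID and overlaying it on the full Sqoop snapshot; the paths, the "latest op_ts wins" rule and the assumption that the after-image columns match the snapshot columns are all illustrative:
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("ogg-merge").getOrCreate()

# Full snapshot from Sqoop (contains ROW_ID) and one hour of OGG change records.
full = spark.read.parquet("/user/asmp/hive/ogg/tt_test")
raw = spark.read.json("/user/asmp/hive/ogg/tt_test_incr/2020051910")

# Keep only the newest operation per ROWID.
w = Window.partitionBy("row_id").orderBy(F.col("op_ts").desc())
latest = (
    raw.select(F.col("tokens.DBROWID").alias("row_id"),
               "op_type", "op_ts", "after.*")
       .withColumn("rn", F.row_number().over(w))
       .filter("rn = 1")
       .drop("rn")
)
changed_ids = latest.select("row_id")

# Remove every row that was touched, then append the surviving after-images
# (deletes are dropped). Column types may need explicit casts to line up
# with the Parquet snapshot; that step is omitted here.
survivors = (latest.filter("op_type != 'D'")
                   .withColumnRenamed("row_id", "ROW_ID")
                   .select(*full.columns))
merged = (full.join(changed_ids, full["ROW_ID"] == changed_ids["row_id"], "left_anti")
              .unionByName(survivors))

merged.write.mode("overwrite").parquet("/user/asmp/hive/ogg/tt_test_merged")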
Key technical points:
(1) How to process different databases and tables with one unified method, driven by parameter configuration (see the sketch below)
(2) How to keep the JSON field order consistent with the order in which Hive reads the columns
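One way to cover both points, again as a hedged PySpark sketch: each table is described by a small configuration entry, and columns are reordered to match the existing Hive table before writing, so the field order never depends on the JSON key order. The table names, paths and config structure here are made up for illustration:
from pyspark.sql import SparkSession

spark = SparkSession.builder.enableHiveSupport().appName("ogg-apply").getOrCreate()

# Hypothetical per-table configuration; in practice this could live in a
# properties file or a control table instead of being hard-coded.
TABLES = {
    "TEST.TT_SALES_RECORDS": {
        "incr_path": "/user/asmp/hive/ogg/tt_sales_records/2020051910",
        "hive_table": "ods.tt_sales_records",
    },
}

for src_table, cfg in TABLES.items():
    after = spark.read.json(cfg["incr_path"]).select("after.*")
    # Reorder columns to match the Hive table definition, not the JSON key order;
    # this assumes the Hive table has exactly the after-image columns.
    hive_cols = spark.table(cfg["hive_table"]).columns
    after.select(*hive_cols).write.insertInto(cfg["hive_table"], overwrite=False)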