Alibaba DataX's HdfsWriter does not natively support writing Parquet files, but our business required Parquet output with Snappy compression.
DataX project on GitHub: https://github.com/alibaba/DataX
Since I could not find any tutorial covering Parquet writing, I made a small modification to HdfsWriter myself. I share it here as a rough starting point for others.
The main change is to add the following method to the HdfsHelper class:
// Additional imports needed in HdfsHelper (the rest are already present in the class):
import java.util.Properties;
import org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat;
import org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe;
import org.apache.hadoop.io.ObjectWritable;
import org.apache.hadoop.mapred.RecordWriter;

public void parFileStartWrite(RecordReceiver lineReceiver, Configuration config, String fileName,
                              TaskPluginCollector taskPluginCollector) {
    List<Configuration> columns = config.getListConfiguration(Key.COLUMN);
    String compress = config.getString(Key.COMPRESS, null);
    // Build a Hive StructObjectInspector from the configured column names and types
    List<String> columnNames = getColumnNames(columns);
    List<ObjectInspector> columnTypeInspectors = getColumnTypeInspectors(columns);
    StructObjectInspector inspector = (StructObjectInspector) ObjectInspectorFactory
            .getStandardStructObjectInspector(columnNames, columnTypeInspectors);

    ParquetHiveSerDe parquetHiveSerDe = new ParquetHiveSerDe();
    MapredParquetOutputFormat outFormat = new MapredParquetOutputFormat();
    if (null != compress && !"NONE".equalsIgnoreCase(compress)) {
        Class<? extends CompressionCodec> codecClass = getCompressCodec(compress);
        if (null != codecClass) {
            outFormat.setOutputCompressorClass(conf, codecClass);
        }
    }
    try {
        // ParquetHiveSerDe reads its schema from these table properties
        Properties colProperties = new Properties();
        colProperties.setProperty("columns", String.join(",", columnNames));
        List<String> colType = Lists.newArrayList();
        columns.forEach(c -> colType.add(c.getString(Key.TYPE)));
        colProperties.setProperty("columns.types", String.join(",", colType));

        // The returned ParquetRecordWriterWrapper also implements the mapred
        // RecordWriter interface, so the cast exposes the two-argument write(key, value)
        RecordWriter writer = (RecordWriter) outFormat.getHiveRecordWriter(conf,
                new Path(fileName), ObjectWritable.class, true, colProperties, Reporter.NULL);
        Record record = null;
        while ((record = lineReceiver.getFromReader()) != null) {
            MutablePair<List<Object>, Boolean> transportResult =
                    transportOneRecord(record, columns, taskPluginCollector);
            // right == true means the record was dirty and has already been collected
            if (!transportResult.getRight()) {
                writer.write(null, parquetHiveSerDe.serialize(transportResult.getLeft(), inspector));
            }
        }
        writer.close(Reporter.NULL);
    } catch (Exception e) {
        String message = String.format("IO exception occurred while writing file [%s], please check your network!", fileName);
        LOG.error(message);
        Path path = new Path(fileName);
        deleteDir(path.getParent());
        throw DataXException.asDataXException(HdfsWriterErrorCode.Write_FILE_IO_ERROR, e);
    }
}
The remaining parameter validation, type checks, and the call into this method live in the HdfsWriter class.
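A minimal sketch of that wiring, assuming the structure of the stock plugin: the fileType whitelist in HdfsWriter.Job.validateParameter gains a "parquet" entry, and HdfsWriter.Task.startWrite gains a branch that dispatches to the new method. Everything except the parFileStartWrite call mirrors the existing TEXT/ORC handling:

// In HdfsWriter.Task.startWrite -- the TEXT and ORC branches already exist;
// HdfsWriter.Job.validateParameter must also be relaxed to accept "parquet" as a fileType.
@Override
public void startWrite(RecordReceiver lineReceiver) {
    String fileType = this.writerSliceConfig.getString(Key.FILE_TYPE).toUpperCase();
    if ("TEXT".equals(fileType)) {
        hdfsHelper.textFileStartWrite(lineReceiver, this.writerSliceConfig, this.fileName,
                this.getTaskPluginCollector());
    } else if ("ORC".equals(fileType)) {
        hdfsHelper.orcFileStartWrite(lineReceiver, this.writerSliceConfig, this.fileName,
                this.getTaskPluginCollector());
    } else if ("PARQUET".equals(fileType)) {
        // new branch: hand parquet files to the method added above
        hdfsHelper.parFileStartWrite(lineReceiver, this.writerSliceConfig, this.fileName,
                this.getTaskPluginCollector());
    }
}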
If you want the files compressed with Snappy, you need to add the following to the job's JSON configuration:
"hadoopConfig": {
"parquet.compression": "SNAPPY",
"hive.exec.compress.output": true,
"mapred.output.compression.codec": "org.apache.hadoop.io.compress.SnappyCodec"
},
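For context, here is a minimal writer block showing where that fragment sits. This is a sketch only: defaultFS, path, fileName, and the column list are placeholders for your environment, the "parquet" fileType value assumes the dispatch branch shown earlier, and "compress": "SNAPPY" feeds the Key.COMPRESS lookup in parFileStartWrite:

"writer": {
    "name": "hdfswriter",
    "parameter": {
        "defaultFS": "hdfs://namenode:8020",
        "fileType": "parquet",
        "path": "/warehouse/demo_table",
        "fileName": "demo",
        "column": [
            { "name": "id",   "type": "bigint" },
            { "name": "name", "type": "string" }
        ],
        "writeMode": "append",
        "fieldDelimiter": "\t",
        "compress": "SNAPPY",
        "hadoopConfig": {
            "parquet.compression": "SNAPPY",
            "hive.exec.compress.output": true,
            "mapred.output.compression.codec": "org.apache.hadoop.io.compress.SnappyCodec"
        }
    }
}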
In my tests, the output was written as valid Parquet files and the Snappy compression was applied correctly.
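If you want to double-check the codec yourself, one option (assuming the parquet-tools utility is available on the client; the file name below is a placeholder) is to dump the file metadata:

parquet-tools meta /warehouse/demo_table/demo_xxx.parquet

Each column chunk in the row-group section of the output should report SNAPPY as its codec.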
This article is original work; if you wish to repost it, please credit the source or contact the author.