Merging multiple HDFS files: how to merge small HDFS files into one large file?

I have a number of small files generated from a Kafka stream, and I would like to merge them into a single file. The merge is date-based: the source folder may contain files from many previous days, but I only want to merge the files for a given date into one file.

Any suggestions?

Solution

Use something like the code below to iterate over the smaller files and aggregate them into a big one (assuming that source contains the HDFS path to your smaller files, and target is the path where you want your big result file):

import org.apache.hadoop.fs.{FileSystem, Path}
import org.apache.spark.sql.SaveMode

val fs = FileSystem.get(spark.sparkContext.hadoopConfiguration)

// Read each small file and append its contents to the target path
fs.listStatus(new Path(source))
  .map(_.getPath.toUri.getPath)
  .foreach(name => spark.read.text(name).coalesce(1).write.mode(SaveMode.Append).text(target))

This example assumes the text file format, but you can just as well read any format Spark supports, and you can even use different formats for source and target.
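
Since the question only wants to merge the files for a given date, here is a minimal sketch of one possible variant, assuming the date appears as a token in the file names (the targetDate value below is a hypothetical placeholder). Filtering the listing first and reading all matching files in a single pass also means the output directory ends up with exactly one part file, rather than one appended part file per input:

import org.apache.hadoop.fs.{FileSystem, Path}
import org.apache.spark.sql.SaveMode

// Hypothetical date token assumed to appear in the file names
val targetDate = "2023-08-15"

val fs = FileSystem.get(spark.sparkContext.hadoopConfiguration)

// Keep only the files whose name contains the requested date
val dayFiles = fs.listStatus(new Path(source))
  .map(_.getPath.toUri.getPath)
  .filter(_.contains(targetDate))

// Read all matching files in one job and coalesce to a single partition,
// so the target directory contains a single part file
spark.read.text(dayFiles: _*)
  .coalesce(1)
  .write.mode(SaveMode.Overwrite)
  .text(target)

If the date is not encoded in the file names, the same filter can instead be applied to the modification time reported by listStatus (FileStatus.getModificationTime).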
