Scenario
The model is Java code run with spark-submit in YARN cluster mode, and its output is written to HDFS. Probably because the data structure is fairly complex, the results were not stored in a Hive table. The consequence is that reading the results back from HDFS later runs into a data-merging problem.
For example, suppose the model output is just three lines of JSON:
{ "description":"desc-1","id":"1","name":"name-1"} { "description":"desc-2","id":"2","name":"name-2"} { "description":"desc-3","id":"3","name":"name-3"}
The code sets the output path to /user/deployer/people-result, but what actually lands on HDFS is a directory with this structure:
```
(base) [deployer@sh01 ~]$ hadoop fs -ls /user/deployer/people-result
Found 3 items
-rw-r--r--   3 deployer supergroup          0 2022-07-05 18:44 /user/deployer/people-result/_SUCCESS
-rw-r--r--   3 deployer supergroup         50 2022-07-05 18:44 /user/deployer/people-result/part-00000-3d314ceb-5fd4-4a13-9d16-da3c838aa13f-c000.json
-rw-r--r--   3 deployer supergroup        100 2022-07-05 18:44 /user/deployer/people-result/part-00001-3d314ceb-5fd4-4a13-9d16-da3c838aa13f-c000.json
```
The given output path is created as a directory, and the data is scattered across several .json part files under it. This layout is standard Spark behavior: each partition of the output is written as its own part file, as the sketch below illustrates. Any later read therefore has to merge these files, and hand-writing the merge logic is both tedious and error-prone.
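For context, here is a minimal sketch of the kind of write that produces this layout. The original job's code is not shown in this post, so the class name, app name, and input path below are hypothetical:

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class WritePeopleResult {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("people-result")  // hypothetical app name
                .getOrCreate();

        // Hypothetical input; the real job computes `result` from its model.
        Dataset<Row> result = spark.read().json("/user/deployer/people-input");

        // Each partition becomes its own part-*.json file under the path,
        // which is why the "file" given here ends up as a directory on HDFS.
        result.write().json("/user/deployer/people-result");

        spark.stop();
    }
}
```

(Calling result.coalesce(1) before the write would produce a single part file, but it funnels all data through one task, so it is usually avoided for large outputs.)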
Solution
The Hadoop Java API already provides a read-and-merge method: FileUtil.copyMerge. Code first:
```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;
import org.junit.jupiter.api.Test;
import org.springframework.boot.test.context.SpringBootTest;

@SpringBootTest
public class DownloadFileFromHdfs {

    @Test
    public void test_download_and_merge() throws IOException {
        String src = "/user/deployer/people-result";            // directory on HDFS
        String dst = "/Users/mac/Downloads/people-result.json"; // single local file
        boolean result = copyMerge(src, dst);
        System.out.println(result);
    }

    private boolean copyMerge(String src, String dst) throws IOException {
        // Loads core-site.xml / hdfs-site.xml from the classpath automatically.
        Configuration conf = new Configuration();
        FileSystem hdfs = FileSystem.get(conf);
        FileSystem local = FileSystem.getLocal(conf);
        final boolean result;
        try {
            Path srcPath = new Path(src);
            Path dstPath = new Path(dst);
            boolean deleteSource = false; // keep the source directory on HDFS
            String addString = null;      // no separator inserted between files
            // Concatenates every file under srcPath into the single dstPath file.
            result = FileUtil.copyMerge(hdfs, srcPath, local, dstPath,
                    deleteSource, conf, addString);
        } finally {
            hdfs.close();
        }
        return result;
    }
}
```
The code above is straightforward, but two points deserve mention:
- The `Configuration` class holds the cluster's settings, so `core-site.xml` and `hdfs-site.xml` must be placed on the classpath beforehand; when the program starts, the values in those XML files are read into the configuration object automatically (see the sketch after this list for an alternative when they are not on the classpath).
- Be clear about which side you are reading from and which you are writing to. In this scenario, src is a directory on HDFS and dst is a single file on the local filesystem.
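If the XML files cannot be placed on the classpath, the configuration can also be pointed at them explicitly. A small sketch; the /etc/hadoop/conf paths are assumptions, not from this post:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

public class HdfsConfigFactory {
    // Sketch only: adjust the paths to wherever the cluster config actually lives.
    public static Configuration fromExplicitFiles() {
        Configuration conf = new Configuration();
        conf.addResource(new Path("/etc/hadoop/conf/core-site.xml"));
        conf.addResource(new Path("/etc/hadoop/conf/hdfs-site.xml"));
        return conf;
    }
}
```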
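One caveat worth flagging: FileUtil.copyMerge was deprecated in Hadoop 2.8 and removed in the 3.x line. On a Hadoop 3 client, the same effect can be achieved by streaming the part files into one output stream yourself. A minimal sketch of that approach, not the original post's code:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.util.Arrays;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class CopyMergeCompat {

    // Rough stand-in for FileUtil.copyMerge on Hadoop 3.x clients.
    public static boolean copyMerge(FileSystem srcFs, Path srcDir,
                                    FileSystem dstFs, Path dstFile,
                                    Configuration conf) throws IOException {
        if (!srcFs.getFileStatus(srcDir).isDirectory()) {
            return false;
        }
        FileStatus[] parts = srcFs.listStatus(srcDir);
        Arrays.sort(parts); // keep part-00000, part-00001, ... in order
        try (OutputStream out = dstFs.create(dstFile)) {
            for (FileStatus part : parts) {
                if (part.isFile()) { // _SUCCESS is empty, so copying it is harmless
                    try (FSDataInputStream in = srcFs.open(part.getPath())) {
                        IOUtils.copyBytes(in, out, conf, false);
                    }
                }
            }
        }
        return true;
    }
}
```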
Result
```
(base) ➜ Downloads cat /Users/mac/Downloads/people-result.json
{ "description":"desc-1","id":"1","name":"name-1"}
{ "description":"desc-2","id":"2","name":"name-2"}
{ "description":"desc-3","id":"3","name":"name-3"}
```
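As a side note, when no custom code is needed, the same merge-and-download can be done from the command line with HDFS's built-in getmerge command:

```
hadoop fs -getmerge /user/deployer/people-result /Users/mac/Downloads/people-result.json
```

The Java API route is still the right fit when the merge has to happen inside an application rather than a shell session.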