Implementing File Slicing and Merging in Java


Splitting

The source file is read in fixed 1 MB blocks, and each block is written out as a separate numbered chunk file (1.rar, 2.rar, ...).

// assumes the surrounding class declares a logger field (e.g. SLF4J) and imports java.io.*
private static void cutFile() throws Exception {
    long startTime = System.currentTimeMillis();
    FileInputStream fis = null;
    FileOutputStream fos = null;
    int i = 1;
    int length;
    byte[] data = new byte[1024 * 1024]; // read up to 1024 * 1024 = 1 MB per chunk

    File file = new File("C:\\Users\\wjq\\Desktop\\test\\testCut.rar");
    try {
        fis = new FileInputStream(file);
        // each pass reads up to 1 MB and writes exactly the bytes read into a numbered chunk file
        while ((length = fis.read(data)) != -1) {
            fos = new FileOutputStream("C:\\Users\\wjq\\Desktop\\test\\" + i + ".rar");
            fos.write(data, 0, length);
            logger.info("Writing chunk " + i + ", size: " + fos.getChannel().size() / 1024 + "KB");
            fos.close();
            i++;
        }
        long endTime = System.currentTimeMillis();
        logger.info("File splitting finished, elapsed time: " + (endTime - startTime) + "ms");
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        // plain null checks: a stream that was never opened must not be closed
        if (fis != null) {
            fis.close();
        }
        if (fos != null) {
            fos.close();
        }
    }
}
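
Since cutFile() writes one chunk per read of its 1 MB buffer, the number of chunk files should equal the integer ceiling of the source size divided by the chunk size. Below is a minimal sketch of that check, assuming the same path and chunk size as above; ChunkCountCheck and expectedChunks are illustrative names, not part of the original code.

import java.io.File;

public class ChunkCountCheck {
    // hypothetical helper: integer ceiling of size / chunkSize,
    // i.e. the number of chunk files cutFile() is expected to create
    static long expectedChunks(File source, long chunkSize) {
        return (source.length() + chunkSize - 1) / chunkSize;
    }

    public static void main(String[] args) {
        File source = new File("C:\\Users\\wjq\\Desktop\\test\\testCut.rar");
        long chunkSize = 1024 * 1024; // 1 MB, same as the buffer in cutFile()
        System.out.println("Expected chunk files: " + expectedChunks(source, chunkSize));
    }
}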

Merging

The numbered chunk files are read back in ascending order and appended to a single output file, reconstructing the original.

private static void conFile() throws Exception {
    long startTime = System.currentTimeMillis();

    byte[] data = new byte[1024 * 1024]; // copy buffer: 1024 * 1024 = 1 MB per read
    int length;
    BufferedInputStream bis = null;
    BufferedOutputStream bos = null;
    try {
        // list only the numbered chunk files, so other files in the directory (e.g. testCut.rar) are ignored
        File file = new File("C:\\Users\\wjq\\Desktop\\test\\");
        String[] listFiles = file.list((dir, name) -> name.matches("\\d+\\.rar"));

        bos = new BufferedOutputStream(new FileOutputStream("C:\\Users\\wjq\\Desktop\\testCon.rar"));

        for (int i = 1; i <= listFiles.length; i++) {
            File chunk = new File("C:\\Users\\wjq\\Desktop\\test\\" + i + ".rar");
            bis = new BufferedInputStream(new FileInputStream(chunk));
            // write only the bytes actually read; writing the whole buffer would pad the final partial chunk
            while ((length = bis.read(data)) != -1) {
                bos.write(data, 0, length);
            }
            bis.close();
            logger.info("Merged chunk " + i + ", size: " + chunk.length() / 1024 + "KB");
        }
        bos.close();
        long endTime = System.currentTimeMillis();
        logger.info("File merge finished, size ==> {}, elapsed ==> {}",
                new File("C:\\Users\\wjq\\Desktop\\testCon.rar").length() / 1024 + "KB",
                (endTime - startTime) + "ms");
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        if (bis != null) {
            bis.close();
        }
        if (bos != null) {
            bos.close();
        }
    }
}
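
A simple way to confirm that the merge reproduced the original byte for byte is to compare checksums of the two files. Below is a minimal sketch using MD5 from java.security, assuming the same file layout as above; MergeCheck and md5Of are illustrative names, not part of the original code.

import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.MessageDigest;

public class MergeCheck {
    // hypothetical helper: hex-encoded MD5 of a file's contents
    static String md5Of(String path) throws Exception {
        MessageDigest md = MessageDigest.getInstance("MD5");
        try (InputStream in = Files.newInputStream(Paths.get(path))) {
            byte[] buf = new byte[1024 * 1024];
            int n;
            while ((n = in.read(buf)) != -1) {
                md.update(buf, 0, n);
            }
        }
        StringBuilder sb = new StringBuilder();
        for (byte b : md.digest()) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        String original = md5Of("C:\\Users\\wjq\\Desktop\\test\\testCut.rar");
        String merged = md5Of("C:\\Users\\wjq\\Desktop\\testCon.rar");
        System.out.println(original.equals(merged)
                ? "Merge verified: checksums match"
                : "Checksum mismatch: the merged file differs from the original");
    }
}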