Preface:
I've recently been building a video site whose admin platform needs to upload audio/video files, so here is a write-up of how to upload large files like these.
Approach: resumable upload (chunked upload)
The logic of a resumable upload:
- Before uploading, compute the file's hash and check which chunks have already been uploaded, so duplicates are skipped
- Once all chunks are uploaded, ask the server to merge them
Tech stack: vue3 + element-plus + vite + web worker + spark-md5
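The chunk-skipping step is not implemented in the demo below (it re-uploads every chunk each time), but the bookkeeping is simple. A minimal sketch, assuming the server can return the names of the chunks it already has; the helper name and the `<hash>-<index>` naming scheme are my own, matching the upload code further down:

```javascript
// Hypothetical helper: given the chunk names the server already stores,
// the file hash and the total chunk count, return the indices that still
// need uploading. Chunk names follow the "<hash>-<index>" scheme used by
// uploadChunks() in the demo below.
function pendingChunkIndices(uploadedNames, hash, total) {
  const uploaded = new Set(uploadedNames);
  const pending = [];
  for (let i = 0; i < total; i++) {
    if (!uploaded.has(`${hash}-${i}`)) pending.push(i);
  }
  return pending;
}
```

Before uploading, the client would ask the server which chunks exist for `hash.value` and only send the pending indices.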
// upload.vue
<template>
<div>
<el-upload class="upload-demo" drag action="#" :auto-upload="false" :limit="1" :on-change="uploadfile">
<el-icon class="el-icon--upload"><upload-filled /></el-icon>
<div class="el-upload__text">Drop a file here or <em>click to upload</em></div>
</el-upload>
</div>
</template>
<script setup lang="ts">
import { ref } from "vue";
import axios from "axios";
const chunks = ref<Blob[]>([]);
const file = ref<File>();
const hash = ref("");
// Custom upload handler (el-upload's on-change callback)
const uploadfile = async option => {
file.value = option.raw;
// Split the file into chunks and compute its hash in a worker
await createFileChunks();
// Upload the chunks, then ask the server to merge them
handleUpload();
};
// Split the file into chunks in a web worker
const createFileChunks = async () => {
return new Promise((resolve, reject) => {
// import.meta.url is the URL of the current module
const worker = new Worker(new URL("./fileWorker.js", import.meta.url), { type: "module" });
worker.postMessage({ file: file.value });
worker.onmessage = e => {
chunks.value = e.data.chunks;
hash.value = e.data.hash;
resolve(true);
};
worker.onerror = e => {
console.log(e, "worker error");
reject(e);
};
});
};
// Upload all chunks, then ask the server to merge them
const handleUpload = async () => {
await uploadChunks();
await axios.post("http://localhost:3000/fileUpload/merge", { hash: hash.value, fileName: file.value.name });
};
// Upload every chunk in parallel
const uploadChunks = async () => {
const requests = chunks.value.map((chunk, index) => {
const formData = new FormData();
formData.append("chunk", chunk);
formData.append("hash", `${hash.value}-${index}`);
return axios.post("http://localhost:3000/fileUpload", formData);
});
await Promise.all(requests);
};
</script>
<style scoped></style>
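One caveat about `uploadChunks` above: `Promise.all` fires every chunk request at once, so a multi-gigabyte file would open hundreds of parallel connections. A small concurrency pool is the usual fix; a minimal sketch (the helper name and the limit of 5 are my own, not part of the demo):

```javascript
// Run an array of task factories (() => Promise) with at most `limit`
// tasks in flight at once; resolves with the results in original order.
async function runWithConcurrency(tasks, limit) {
  const results = new Array(tasks.length);
  let next = 0;
  async function runner() {
    while (next < tasks.length) {
      const i = next++; // claim the next task index
      results[i] = await tasks[i]();
    }
  }
  // Start `limit` runners that drain the shared task queue
  await Promise.all(Array.from({ length: Math.min(limit, tasks.length) }, runner));
  return results;
}
```

In `uploadChunks`, the `map` would then build `() => axios.post(...)` factories instead of firing the requests immediately, and `runWithConcurrency(factories, 5)` would replace `Promise.all(requests)`.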
// fileWorker.js
import SparkMD5 from "spark-md5";
self.onmessage = async function (e) {
const { file } = e.data;
const chunks = [];
const SIZE = 10 * 1024 * 1024; // 10MB
let cur = 0;
while (cur < file.size) {
chunks.push(file.slice(cur, cur + SIZE));
cur += SIZE;
}
try {
const hash = await calculateHash(file);
self.postMessage({ chunks, hash });
} catch (error) {
self.postMessage({ error: error.message });
}
};
function calculateHash(file) {
return new Promise((resolve, reject) => {
const spark = new SparkMD5.ArrayBuffer();
const fileReader = new FileReader();
fileReader.onload = e => {
spark.append(e.target.result);
resolve(spark.end());
};
fileReader.onerror = () => {
reject(new Error("Failed to read the file"));
};
fileReader.readAsArrayBuffer(file);
});
}
That completes the flow.
Why use a Web Worker?
Computing a file's MD5 is pure CPU work that blocks the JS main thread; for a large file, the browser would appear frozen for the whole computation. A Web Worker runs on a separate thread that does not block the main one, so this kind of expensive pure computation can be moved into the worker, which notifies the main thread when it finishes.
That's it: the above is a front-end demo of uploading large files.