File slicing
// Split a File into fixed-size chunks; each chunk carries its own hash so the
// server can verify it. pieceSizes is the chunk size in MiB. The optional
// parameter comes last so callers are not forced to pass it explicitly.
const getSliceFile = async (file: File, fileKey: string, pieceSizes = 50) => {
  const piece = 1024 * 1024 * pieceSizes;
  const totalSize = file.size;
  const fileName = file.name;
  let start = 0;
  let index = 1;
  const chunks = [];
  while (start < totalSize) {
    // The last chunk may be shorter than `piece`.
    const end = Math.min(start + piece, totalSize);
    const blob = file.slice(start, end);
    const hash = (await getHash(blob)) as string;
    chunks.push({
      file: blob,
      index,
      fileName,
      hash,
      fileSizeInByte: totalSize,
      sliceSizeInByte: blob.size,
      fileKey,
    });
    start = end;
    index += 1;
  }
  return chunks;
};
Uploading chunks
- Upload via Promise.all, Promise.allSettled, or plain iteration; when a chunk fails, fall back to resumable upload for the chunks that did not make it
try {
  await Promise.all([...]);
} catch (e) {
  // collect the failed chunks here and retry them (resumable upload)
}
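Promise.allSettled is often the better fit than Promise.all here: it never rejects, so the indices of failed chunks can be collected in a single pass and retried. A sketch, where `uploadChunk` is a stand-in for the real per-chunk request:

```typescript
// Sketch: upload all chunks concurrently and return the indices of the
// chunks that failed, so they can be retried. `uploadChunk` is hypothetical.
const uploadAll = async (
  chunks: { index: number }[],
  uploadChunk: (chunk: { index: number }) => Promise<void>,
): Promise<number[]> => {
  const results = await Promise.allSettled(chunks.map((c) => uploadChunk(c)));
  const failed: number[] = [];
  results.forEach((r, i) => {
    // 'rejected' means this chunk's upload threw or its promise rejected.
    if (r.status === 'rejected') failed.push(chunks[i].index);
  });
  return failed; // an empty array means every chunk succeeded
};
```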
Once every chunk has uploaded successfully, send the merge request
- Keep a counter that is incremented on each successful chunk upload; when it equals the number of chunks, send the merge request
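The counter idea above can be sketched as follows; `uploadOne` and `mergeChunks` are hypothetical stand-ins for the real per-chunk upload and the merge request:

```typescript
// Sketch: increment a counter on each successful upload; the upload that
// brings the counter to `total` triggers the server-side merge.
const uploadThenMerge = async (
  total: number,
  uploadOne: (index: number) => Promise<void>,
  mergeChunks: () => Promise<void>,
): Promise<void> => {
  let done = 0;
  await Promise.all(
    Array.from({ length: total }, (_, i) =>
      uploadOne(i + 1).then(() => {
        done += 1;
        // Uploads finish in any order; only the last one fires the merge.
        if (done === total) return mergeChunks();
      }),
    ),
  );
};
```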
Resumable upload
- Get the set of successfully uploaded chunk indices, either returned by the backend or by saving each index to localStorage as its upload succeeds
- Use that set to find the chunks that have not been uploaded yet, and upload only those
// Build upload tasks only for the chunks whose indices are not in `finish`
// (the set the server or localStorage reports as already uploaded).
const getTasks = (
  files: FileInfo[],
  uploadId: string,
  fileKey: string,
  finish: number[],
): Promise<CommonResponse_LargeFileUploadResponse_>[] => {
  const tasks: Promise<CommonResponse_LargeFileUploadResponse_>[] = [];
  files.forEach((chunk: FileInfo) => {
    // Skip chunks that have already been uploaded successfully.
    if (finish.includes(chunk.index)) {
      return;
    }
    const formData = new FormData();
    formData.append('file', chunk.file);
    // FormData values must be strings or Blobs, so numbers are converted.
    formData.append('sliceIndex', String(chunk.index));
    formData.append('hash', chunk.hash);
    formData.append('uploadId', uploadId);
    formData.append('fileSizeInByte', String(chunk.sliceSizeInByte));
    formData.append('fileKey', fileKey);
    tasks.push(sliceUpload(formData));
  });
  return tasks;
};
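On top of getTasks, a driver can keep retrying until every chunk is in the finished set. A self-contained sketch with an injected uploader (the names here are illustrative; in practice `upload` would wrap sliceUpload):

```typescript
// Sketch: re-upload the chunks that are not yet in `finished` for up to
// `maxRounds` rounds; return the indices that still failed (empty on success).
const uploadWithRetry = async (
  indices: number[],
  upload: (index: number) => Promise<void>,
  maxRounds = 3,
): Promise<number[]> => {
  const finished = new Set<number>();
  for (let round = 0; round < maxRounds && finished.size < indices.length; round++) {
    const pending = indices.filter((i) => !finished.has(i));
    const results = await Promise.allSettled(pending.map((i) => upload(i)));
    results.forEach((r, k) => {
      if (r.status === 'fulfilled') finished.add(pending[k]);
    });
  }
  return indices.filter((i) => !finished.has(i));
};
```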
Resuming upload after a page refresh
- Same idea as resumable upload: persist the indices of successfully uploaded chunks in localStorage, or fetch them from the backend
- Use that list to find the chunks that were never uploaded, and upload only those
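The localStorage variant can be sketched with two small helpers. They are written against a minimal getItem/setItem interface so the logic is testable anywhere; in the browser `store` would simply be `window.localStorage`:

```typescript
// Sketch: persist uploaded chunk indices under a per-file key so the upload
// can resume after a refresh. `store` is window.localStorage in the browser.
interface KVStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

// Record one more successfully uploaded chunk index for this file.
const markUploaded = (store: KVStore, fileKey: string, index: number): void => {
  const done = new Set<number>(JSON.parse(store.getItem(fileKey) ?? '[]'));
  done.add(index);
  store.setItem(fileKey, JSON.stringify(Array.from(done)));
};

// Read back the indices uploaded so far (feeds the `finish` list of getTasks).
const getUploaded = (store: KVStore, fileKey: string): number[] =>
  JSON.parse(store.getItem(fileKey) ?? '[]');
```

Using the file's own hash (or fileKey) as the storage key keeps entries from different files, or different versions of the same file, from colliding.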
Optimizations
Idle-time upload
- Use requestIdleCallback to run hashing or fire requests while the browser is idle
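A sketch of idle-time scheduling. requestIdleCallback is a browser API (not available in Safari or Node.js), so the sketch looks it up on globalThis and falls back to setTimeout where it is missing:

```typescript
// Sketch: run a task when the browser is idle, falling back to setTimeout
// in environments without requestIdleCallback.
const runWhenIdle = (task: () => void): void => {
  const ric = (globalThis as any).requestIdleCallback;
  if (typeof ric === 'function') {
    // The timeout ensures the task eventually runs even on a busy page.
    ric(() => task(), { timeout: 2000 });
  } else {
    setTimeout(task, 0);
  }
};
```

Scheduling each chunk's hashing through runWhenIdle keeps the main thread responsive while a large file is being prepared.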
Web Worker
- Move CPU-heavy work such as chunk hashing into a Web Worker so it does not block the main thread
Limit the number of concurrent requests
- Browsers allow only about 6-10 concurrent connections per origin, so cap the number of in-flight chunk uploads, ideally at no more than three at a time
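The cap can be enforced with a small promise pool. A sketch that runs at most `limit` tasks at a time, e.g. over the task array returned by getTasks (wrapped as thunks so they start lazily):

```typescript
// Sketch: run async task factories with a concurrency cap (default 3), so
// chunk uploads do not exhaust the browser's per-origin connection limit.
const runLimited = async <T>(
  tasks: (() => Promise<T>)[],
  limit = 3,
): Promise<T[]> => {
  const results: T[] = new Array(tasks.length);
  let next = 0;
  // Each worker repeatedly pulls the next unstarted task until none remain,
  // so at most `limit` tasks are in flight at any moment.
  const worker = async (): Promise<void> => {
    while (next < tasks.length) {
      const i = next++;
      results[i] = await tasks[i]();
    }
  };
  await Promise.all(
    Array.from({ length: Math.min(limit, tasks.length) }, worker),
  );
  return results;
};
```

Note that the tasks must be functions that *create* promises; an already-created promise (like the ones getTasks returns) has started running, so to limit concurrency the fetch call itself has to be deferred into the thunk.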