Attachment upload built on Web Worker (supports chunking, pause, instant upload and resume; includes full front-end code and a simple back-end service)



Preface

File upload is a ubiquitous requirement in modern web applications, especially when large files or long-running uploads are involved. Traditional upload approaches, however, often run into problems with performance, stability and user experience. To address these issues, we can use Web Worker to build a fast and reliable attachment-upload feature.

This article shows how to implement attachment upload with a Web Worker, with support for chunking, pausing, instant upload and resuming. Handing the upload work to a Web Worker frees the main thread from the upload logic, avoids blocking the UI, and improves upload concurrency.

We will walk through the whole implementation step by step: how to slice files inside a Web Worker, how to implement pause and resume, and how to improve the user experience along the way. By the end, you should be able to apply the technique to your own projects and improve both the upload experience and its performance.

  • If anything here falls short, corrections are very welcome 😁

I. Web Worker

Web Worker is part of the HTML5 standard, which defines a set of APIs for it.
A Web Worker lets us run an independent thread (a Worker) alongside the main JS thread, i.e. a separate JS script running in the background.
Because a Worker runs on its own thread, it executes concurrently with the main JS thread without blocking it.
So when there is heavy computation to do, we can hand it off to a Worker thread and post the result back to the main thread once the computation is done. The main thread can then focus on business logic instead of spending time on large, complex computations, which reduces blocking, improves efficiency, and naturally makes the page feel smoother.
The problems it solves are:
1 Keeping the page from freezing;
2 Taking advantage of multi-core CPUs to improve JS performance

Reposted from "Web Worker 与 SharedWorker 的介绍和使用" (an introduction to Web Worker and SharedWorker)
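To make the model concrete, here is a minimal, standalone main-thread/Worker pair. The file name worker.js and the squaring task are just placeholders and are not part of the upload code that follows:

// worker.js - runs on its own thread
self.onmessage = (e) => {
    // heavy computation happens here, off the main thread
    const squares = e.data.map((n) => n * n)
    self.postMessage(squares)
}

// main.js - the UI thread stays responsive while the Worker computes
const worker = new Worker(new URL("./worker.js", import.meta.url), { type: "module" })
worker.onmessage = (e) => console.log("result from worker:", e.data)
worker.postMessage([1, 2, 3, 4])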

II. Preparation before uploading

1. Generating the custom file object (fileItem)

First, let's be clear about what the later steps will need, and wrap it all in a generateFileItem method

  • The default chunk size here is 1 MB
const generateFileItem = (file, chunkSize = 1 * 1024 * 1024) => {
    const chunkNum = Math.ceil(file.size / chunkSize) // total number of chunks
    const fileItem = {
        fid: generateUUID(), // unique id; generateUUID is your own helper (a sketch is given after this block)
        hash: "", // file hash
        file, // the File object
        name: file.name, // file name
        size: file.size, // file size
        loaded: 0, // bytes uploaded so far
        progress: 0,
        status: "create", // initial status; "create" means the item has just been created
        response: {}, // responses of the upload requests are collected here
        chunkNum,
        remainder: "", // estimated remaining upload time; also doubles as a status label
        chunks: [], // the chunk list
        uploadedIndexList: [], // indexes of the chunks that have already been uploaded
    }

    let index = 0
    let start = 0
    while (start < file.size) {
        let end = start + chunkSize
        if (end > file.size) end = file.size

        const chunk = {
            index,
            fileName: file.name,
            fileHash: "",
            start,
            end,
            total: file.size,
            chunk: file.slice(start, end),
            chunkNum,
        }

        // collect the chunk produced by file.slice above
        fileItem.chunks.push(chunk)
        start += chunkSize
        index++
    }

    return fileItem
}
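generateUUID itself is not shown in the article; a minimal sketch, assuming a browser or Worker environment, could look like this (crypto.randomUUID when available, otherwise a Math.random based v4 fallback):

const generateUUID = () => {
    // Use the built-in generator when the environment provides it (secure contexts)
    if (typeof crypto !== "undefined" && crypto.randomUUID) return crypto.randomUUID()
    // Fallback: RFC 4122 style v4 id, good enough for client-side keys
    return "xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx".replace(/[xy]/g, (c) => {
        const r = (Math.random() * 16) | 0
        const v = c === "x" ? r : (r & 0x3) | 0x8
        return v.toString(16)
    })
}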

2. Computing the file hash

Here I use spark-md5.
Hashing is asynchronous and takes longer as the file grows; if it ran on the same thread as the upload, it would clearly block the tasks that follow.
So we use a Web Worker and run it on a separate thread.
Create hash.worker.js

  • Note: spark-md5 needs to be imported here
import SparkMD5 from "spark-md5"

self.onmessage = (e) => {
    const spark = new SparkMD5.ArrayBuffer()

    const reader = new FileReader()

    reader.onload = (e) => {
        spark.append(e.target.result)
        self.postMessage({ code: 0, hash: spark.end() })
    }
    reader.onerror = (error) => {
        self.postMessage({ code: -1, error })
    }

    reader.readAsArrayBuffer(e.data)
}

Wrap it in a calcHash method

/**
 * Compute the file hash
 * @param file
 * @returns {Promise<any>}
 */
const calcHash = (file) => {
    return new Promise((resolve, reject) => {
        const worker = new Worker(new URL("@/utils/hash.worker.js", import.meta.url), {
            type: "module",
            name: "calcHash ",
        })
        worker.postMessage(file)
        worker.onmessage = (e) => {
            const { code, hash, error } = e.data
            worker.terminate()
            if (code === 0) resolve(hash)
            else reject(error)
        }
    })
}
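Note that hash.worker.js above reads the whole file into memory before hashing. For very large files you may prefer to feed spark-md5 incrementally; a sketch of such a worker, using the same message protocol and assuming Blob.arrayBuffer() is available, could be:

import SparkMD5 from "spark-md5"

const READ_CHUNK = 2 * 1024 * 1024 // read 2 MB at a time to keep memory flat

self.onmessage = async (e) => {
    const file = e.data
    const spark = new SparkMD5.ArrayBuffer()
    try {
        for (let start = 0; start < file.size; start += READ_CHUNK) {
            const buffer = await file.slice(start, start + READ_CHUNK).arrayBuffer()
            spark.append(buffer) // append each slice to the incremental hash
        }
        self.postMessage({ code: 0, hash: spark.end() })
    } catch (error) {
        self.postMessage({ code: -1, error })
    }
}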

3. The upload requests

Let's first break down the steps of uploading an attachment.

Check the upload progress, that is, whether the file (or which parts of it) already exists on the server.

Only with this information can we decide which chunks still need to be uploaded.

function xhrValidate({ url, data, headers }) {
    return fetch(url, {
        method: "POST",
        headers: {
            ...headers,
            "Content-Type": "application/json",
        },
        body: JSON.stringify(data),
    })
        .then((res) => res.json())
        .then((res) => res.data)
}
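For reference, a call might look like the following; the URL is a placeholder, and the response shape (an array of chunk indexes that still need uploading, unwrapped from res.data) matches the back end shown in section V:

// Ask the server which chunk indexes are still missing for this file
xhrValidate({
    url: "/api/upload/validate", // hypothetical endpoint
    data: {
        hash: "d41d8cd98f00b204e9800998ecf8427e", // the file's MD5
        name: "demo.zip",
        chunkList: [
            { index: 0, size: 1048576 },
            { index: 1, size: 524288 },
        ],
    },
    headers: {},
}).then((missingIndexes) => {
    // e.g. [] means the file already exists ("instant upload"), [0, 1] means everything must be uploaded
    console.log("chunks left to upload:", missingIndexes)
})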

Then comes the request that uploads a single chunk.

Note that we need to be able to abort requests, which is why the xhr objects are cached.
Progress reporting and failure handling are also involved here; both are explained below.

// Cache pool for the xhr objects
// There may be several attachments at once, so we use a Map keyed by fileItem.fid; each file's chunk requests are stored in a Set
const xhrMap = new Map()
/**
 * Estimate the remaining upload time
 * @param loaded bytes uploaded so far
 * @param total total size in bytes
 * @param startTime upload start time (timestamp)
 * @returns {string}
 */
function calcRemainingTime(loaded, total, startTime) {
    const currentTime = Date.now()
    const elapsedTime = currentTime - startTime
    const remainingBytes = total - loaded

    // Average speed in bytes per millisecond
    const speed = loaded / elapsedTime

    // Remaining time in milliseconds
    const remainingTimeMs = remainingBytes / speed

    // Convert the remaining time into hours, minutes and seconds
    let hours = Math.floor(remainingTimeMs / (1000 * 60 * 60))
    let minutes = Math.floor((remainingTimeMs % (1000 * 60 * 60)) / (1000 * 60))
    let seconds = Math.floor((remainingTimeMs % (1000 * 60)) / 1000)

    hours = hours > 0 && hours != Infinity ? hours + "h" : ""
    minutes = minutes > 0 && minutes != Infinity ? minutes + "m" : ""
    seconds = seconds > 0 ? seconds + "s" : "0s"

    // Format the remaining time as an "xxh xxm xxs" style string
    return hours + minutes + seconds
}
/**
 * Called on progress events; updates the overall upload progress
 * @param fileItem
 * @param loaded newly uploaded bytes since the last progress event
 * @param total
 */
function onprogress(fileItem, loaded, total) {
    fileItem.loaded += loaded
    fileItem.progress = Math.min(Math.floor((fileItem.loaded / fileItem.size) * 100), 100)
    fileItem.remainder = calcRemainingTime(fileItem.loaded, fileItem.size, fileItem.startTime)
}
function xhrSlice({ url, fileItem, data, headers, onprogress }) {
    return new Promise(function (resolve, reject) {
        const xhr = new XMLHttpRequest()
        xhr.open("POST", url)

        if (headers instanceof Object) {
            Object.entries(headers).forEach(([key, value]) => {
                xhr.setRequestHeader(key, value) // set request headers
            })
        }

        // Listen to upload progress; event.loaded is cumulative for this request, so report only the delta
        let lastLoaded = 0
        xhr.upload.addEventListener("progress", function (event) {
            if (!event.lengthComputable) return
            onprogress(fileItem, event.loaded - lastLoaded, event.total)
            lastLoaded = event.loaded
        })

        xhr.onreadystatechange = function () {
            if (xhr.readyState === XMLHttpRequest.DONE) {
                const response = JSON.parse(xhr.response || "null")
                // Decide success or failure according to your own API's response shape
                if (xhr.status === 200 && response?.code === 0) {
                    // Upload succeeded, resolve with the response
                    resolve(response)
                } else {
                    // Upload failed: reset this file's progress
                    fileItem.loaded = 0
                    fileItem.progress = 0
                    reject(response)
                }
            }
        }

        // Cache the xhr object so it can be aborted later
        const xhrList = xhrMap.get(fileItem.fid) || new Set()
        xhrList.add(xhr)
        xhrMap.set(fileItem.fid, xhrList)

        xhr.send(data)
    })
}
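Because every pending request is cached in xhrMap, pausing or cancelling a file is just a matter of aborting its xhr objects. The worker later does exactly this inline; as a standalone sketch (the helper name abortFile is mine, not from the original code):

/**
 * Abort all in-flight chunk requests of one file
 * @param fid the fileItem.fid used as the key in xhrMap
 */
function abortFile(fid) {
    const xhrList = xhrMap.get(fid)
    if (!xhrList) return
    xhrList.forEach((xhr) => xhr.abort()) // aborted requests settle immediately
    xhrMap.delete(fid) // drop the Set so it can be rebuilt on resume
}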

Finally, once every chunk is uploaded, we notify the server to merge them.

Whether this request is needed depends on how your back end implements things; it may not be necessary at all.

function xhrMerge({ url, data, headers }) {
    return fetch(url, {
        method: "POST",
        headers: {
            ...headers,
            "Content-Type": "application/json",
        },
        body: JSON.stringify(data),
    }).then((res) => res.json())
}

That's it for the preparation work.

III. Defining the upload Worker script

1. Be clear about the states an upload goes through:

- create: initialized
- hash: computing the hash
- validate: checking file integrity
- upload: uploading
- merge: merging
- success: upload finished

Following these states, we implement the flow step by step and wrap it in a handleUpload method

const handleUpload = async ({
    validateUrl, // file-validation endpoint url
    sliceUrl, // chunk-upload endpoint url
    mergeUrl, // merge endpoint url
    headers, // request headers
    fileItem, // our custom file object
    concurrency = 5, // number of concurrent upload tasks
}) => {
    // By the time this is called the fileItem has already been created, i.e. the "create" step is done
    fileItem.remainder = "Preparing..."
    fileList.add(fileItem)

    // Compute the file hash
    fileItem.status = "hash"
    fileItem.hash = await calcHash(fileItem.file)

    // Check file integrity, i.e. which chunks the server already has
    fileItem.status = "validate"
    const chunkList = fileItem.chunks.map((chunk) => {
        const { start, end, index } = chunk
        return { size: end - start, index }
    })
    // Collect every chunk index into uploadIndexList
    let uploadIndexList = chunkList.map((chunk) => chunk.index)
    if (validateUrl) {
        uploadIndexList = await xhrValidate({
            url: validateUrl,
            data: { hash: fileItem.hash, name: fileItem.name, chunkList },
            headers,
        })
        // Correct the already-uploaded progress
        const uploadedChunks = chunkList.filter((chunk) => !uploadIndexList.includes(chunk.index))
        fileItem.loaded = uploadedChunks.reduce((prev, curr) => prev + curr.size, 0)
    }

    // Start uploading
    fileItem.status = "upload"
    if (uploadIndexList.length) {
        await uploadFile({ url: sliceUrl, headers, fileItem, uploadIndexList, concurrency })

        // All chunks uploaded, ask the server to merge them
        if (mergeUrl) {
            fileItem.status = "merge"
            fileItem.response = await xhrMerge({
                url: mergeUrl,
                data: { hash: fileItem.hash, name: fileItem.name },
                headers,
            })
        }
    } else {
        fileItem.progress = 100
    }
    fileItem.remainder = "Done"
    fileItem.uploadTime = moment().format("YYYY-MM-DD HH:mm:ss")
    fileItem.status = "success"
}

2. Uploading chunks concurrently

const uploadFile = async ({ url, uploadIndexList, fileItem, headers, errMax, concurrency }) => {
    // Indexes of chunks that are already on the server
    fileItem.uploadedIndexList = fileItem.chunks
        .filter((chunk) => !uploadIndexList.includes(chunk.index))
        .map((item) => item.index)

    // Chunks that still need to be uploaded
    const uploadChunks = fileItem.chunks.filter((chunk) => uploadIndexList.includes(chunk.index))

    // Record the start time (used for the remaining-time estimate)
    fileItem.startTime = Date.now()

    const pool = [] // concurrency pool

    // Wait for the current batch of tasks to finish
    async function runTask() {
        await Promise.all(pool)
        // then empty the pool
        pool.length = 0
    }

    for (let index = 0; index < uploadChunks.length; index++) {
        const chunk = uploadChunks[index]
        chunk.fileHash = fileItem.hash

        if (["error", "pause"].includes(fileItem.status)) break

        // Upload the chunk via xhrSlice; formatData (shown in the full listing below) wraps it in a FormData
        let response
        const task = xhrSlice({ url, fileItem, data: formatData(chunk), headers, errMax, onprogress })
            .then((res) => {
                response = res
                fileItem.uploadedIndexList.push(chunk.index)
            })
            .catch((err) => {
                response = err
                return Promise.reject(err)
            })
            .finally(() => {
                // Collect the result: keep a single response, or grow it into an array
                if (Array.isArray(fileItem.response)) fileItem.response.push(response)
                else if (!isEmpty(fileItem.response)) fileItem.response = [fileItem.response, response]
                else fileItem.response = response
            })

        if (pool.length < concurrency) {
            pool.push(task)
        }
        // Note: compare against uploadChunks.length, since that is what we iterate over
        if (pool.length === concurrency || index === uploadChunks.length - 1) {
            await runTask()
        }
    }
}

3. Create upload.worker.js

/**
 * Post the current attachment data to the main thread
 * @param handle
 * @param fileItem
 */
const sendFileItem = (handle, fileItem) => {
    self.postMessage({
        handle,
        fileItem: omit(fileItem, "loaded", "chunks", "chunkNum", "uploadedIndexList"),
    })
}
self.onmessage = (e) => {
    const { handle, validateUrl, sliceUrl, mergeUrl, headers, data, file, fid, chunkSize, concurrency } = e.data || {}

    const proxyItem = [...fileList].find((item) => item.fid === fid) || generateFileItem(file, chunkSize * 1024 * 1024)

    const fileItem = new Proxy(proxyItem, {
        set(target, key, value) {
            Reflect.set(target, key, value)
            if (["status", "response", "remainder", "progress", "uploadTime"].includes(key)) {
                // Push the updated data to the main thread first
                sendFileItem("change", fileItem)
                // Then trigger the callback that matches the current status
                if (target.status !== "error") sendFileItem(target.status, target)
            }
            return true
        },
    })

    if (["pause", "resume", "restart", "stop"].includes(handle)) fileItem.status = handle
    if (["stop", "pause"].includes(handle) && fid) {
        const xhrList = xhrMap.get(fid) || []
        xhrList.forEach((xhr) => xhr.abort())
    } else {
        handleUpload({ validateUrl, sliceUrl, mergeUrl, headers, fileItem, concurrency }).catch((err) => {
            if (err) {
                fileItem.status = "error"
                fileItem.remainder = err.message || "Upload failed"
                sendFileItem("error", err)
                // Abort every request that belongs to this file
                const xhrList = xhrMap.get(fid) || []
                xhrList.forEach((xhr) => xhr.abort())
            }
        })
    }
}
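Section VI wraps this worker in a Pinia store, but it can also be driven directly from the main thread. A minimal sketch (the #file-input element and the endpoint URLs are placeholders):

const uploadWorker = new Worker(new URL("@/utils/upload.worker.js", import.meta.url), { type: "module" })

uploadWorker.onmessage = (e) => {
    // handle is "change", "hash", "validate", "upload", "merge", "success" or "error"
    const { handle, fileItem } = e.data
    console.log(handle, fileItem.progress, fileItem.remainder)
}

document.querySelector("#file-input").addEventListener("change", (event) => {
    const [file] = event.target.files
    uploadWorker.postMessage({
        handle: "upload",
        file,
        chunkSize: 10, // MB; multiplied by 1024 * 1024 inside the worker
        concurrency: 5,
        validateUrl: "/api/upload/validate", // hypothetical endpoints
        sliceUrl: "/api/upload/slice",
        mergeUrl: "/api/upload/merge",
        headers: {},
    })
})

// Pausing later only needs the handle and the fid reported back in onmessage:
// uploadWorker.postMessage({ handle: "pause", fid })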

IV. The complete upload.worker.js

import { generateUUID } from "@/utils/index.js"
import { isEmpty, omit } from "lodash"
import moment from "moment"

const xhrMap = new Map()
const fileList = new Set()

/**
 * Estimate the remaining upload time
 * @param loaded bytes uploaded so far
 * @param total total size in bytes
 * @param startTime upload start time (timestamp)
 * @returns {string}
 */
function calcRemainingTime(loaded, total, startTime) {
    const currentTime = Date.now()
    const elapsedTime = currentTime - startTime
    const remainingBytes = total - loaded

    // Average speed in bytes per millisecond
    const speed = loaded / elapsedTime

    // Remaining time in milliseconds
    const remainingTimeMs = remainingBytes / speed

    // Convert the remaining time into hours, minutes and seconds
    let hours = Math.floor(remainingTimeMs / (1000 * 60 * 60))
    let minutes = Math.floor((remainingTimeMs % (1000 * 60 * 60)) / (1000 * 60))
    let seconds = Math.floor((remainingTimeMs % (1000 * 60)) / 1000)

    hours = hours > 0 && hours != Infinity ? hours + "h" : ""
    minutes = minutes > 0 && minutes != Infinity ? minutes + "m" : ""
    seconds = seconds > 0 ? seconds + "s" : "0s"

    // Format the remaining time as an "xxh xxm xxs" style string
    return hours + minutes + seconds
}

function xhrValidate({ url, data, headers }) {
    return fetch(url, {
        method: "POST",
        headers: {
            ...headers,
            "Content-Type": "application/json",
        },
        body: JSON.stringify(data),
    })
        .then((res) => res.json())
        .then((res) => res.data)
}

/**
 * Upload one chunk
 * @param url request url
 * @param fileItem custom file object
 * @param data FormData containing the chunk and its parameters
 * @param headers request headers
 * @param onprogress progress callback
 * @returns {Promise<object>}
 */
function xhrSlice({ url, fileItem, data, headers, onprogress }) {
    return new Promise(function (resolve, reject) {
        const xhr = new XMLHttpRequest()
        xhr.open("POST", url)

        if (headers instanceof Object) {
            Object.entries(headers).forEach(([key, value]) => {
                xhr.setRequestHeader(key, value) // set request headers
            })
        }

        // event.loaded is cumulative for this request, so report only the delta
        let lastLoaded = 0
        xhr.upload.addEventListener("progress", function (event) {
            if (!event.lengthComputable) return
            onprogress(fileItem, event.loaded - lastLoaded, event.total)
            lastLoaded = event.loaded
        })

        xhr.onreadystatechange = function () {
            if (xhr.readyState === XMLHttpRequest.DONE) {
                const response = JSON.parse(xhr.response || "null")
                if (xhr.status === 200 && response?.code === 0) {
                    resolve(response)
                } else {
                    fileItem.loaded = 0
                    fileItem.progress = 0
                    reject(response)
                }
            }
        }

        // Cache the xhr object so it can be aborted later
        const xhrList = xhrMap.get(fileItem.fid) || new Set()
        xhrList.add(xhr)
        xhrMap.set(fileItem.fid, xhrList)

        xhr.send(data)
    })
}

function xhrMerge({ url, data, headers }) {
    return fetch(url, {
        method: "POST",
        headers: {
            ...headers,
            "Content-Type": "application/json",
        },
        body: JSON.stringify(data),
    }).then((res) => res.json())
}

/**
 * Build the custom file object and its chunks
 * @param file
 * @param chunkSize
 * @returns {*}
 */
const generateFileItem = (file, chunkSize = 1 * 1024 * 1024) => {
    const chunkNum = Math.ceil(file.size / chunkSize) // total number of chunks
    const fileItem = {
        fid: generateUUID(),
        hash: "",
        file,
        name: file.name,
        size: file.size,
        loaded: 0,
        progress: 0,
        status: "create",
        response: {},
        chunkNum,
        chunks: [],
        uploadedIndexList: [],
    }

    let index = 0
    let start = 0
    while (start < file.size) {
        let end = start + chunkSize
        if (end > file.size) end = file.size

        const chunk = {
            index,
            fileName: file.name,
            fileHash: "",
            start,
            end,
            total: file.size,
            chunk: file.slice(start, end),
            chunkNum,
        }

        // collect the chunk produced by file.slice above
        fileItem.chunks.push(chunk)
        start += chunkSize
        index++
    }

    return fileItem
}

/**
 * Called on progress events; updates the overall upload progress
 * @param fileItem
 * @param loaded newly uploaded bytes since the last progress event
 * @param total
 */
function onprogress(fileItem, loaded, total) {
    fileItem.loaded += loaded
    fileItem.progress = Math.min(Math.floor((fileItem.loaded / fileItem.size) * 100), 100)
    fileItem.remainder = calcRemainingTime(fileItem.loaded, fileItem.size, fileItem.startTime)
}

/**
 * Post the current attachment data to the main thread
 * @param handle
 * @param fileItem
 */
const sendFileItem = (handle, fileItem) => {
    self.postMessage({
        handle,
        fileItem: omit(fileItem, "loaded", "chunks", "chunkNum", "uploadedIndexList"),
    })
}

/**
 * Wrap a chunk's fields and file data in a FormData
 * @param data
 * @returns {FormData}
 */
function formatData(data = {}) {
    const formData = new FormData()
    const { index, fileName, fileHash, start, end, total, chunk, chunkNum } = data
    const params = {
        file: chunk,
        name: fileName,
        fileMd5: fileHash,
        fileTotalSize: total,
        chunkIndex: index,
        chunkTotal: chunkNum,
    }
    Object.entries(params).forEach(([key, value]) => {
        if (Array.isArray(value)) {
            value.forEach((item) => {
                formData.append(key, item)
            })
        } else {
            formData.append(key, value)
        }
    })
    return formData
}

/**
 * Upload the chunks of one file
 * @param { string } url request url
 * @param { number[] } uploadIndexList indexes of the chunks that still need uploading
 * @param { object } fileItem custom file object
 * @param { object } headers request headers
 * @param { number } concurrency maximum number of concurrent requests
 * @returns {Promise<void>}
 */
const uploadFile = async ({ url, uploadIndexList, fileItem, headers, errMax, concurrency }) => {
    // Indexes of chunks that are already on the server
    fileItem.uploadedIndexList = fileItem.chunks
        .filter((chunk) => !uploadIndexList.includes(chunk.index))
        .map((item) => item.index)

    // Chunks that still need to be uploaded
    const uploadChunks = fileItem.chunks.filter((chunk) => uploadIndexList.includes(chunk.index))

    // Record the start time (used for the remaining-time estimate)
    fileItem.startTime = Date.now()

    const pool = [] // concurrency pool

    // Wait for the current batch of tasks to finish
    async function runTask() {
        await Promise.all(pool)
        // then empty the pool
        pool.length = 0
    }

    for (let index = 0; index < uploadChunks.length; index++) {
        const chunk = uploadChunks[index]
        chunk.fileHash = fileItem.hash

        if (["error", "pause"].includes(fileItem.status)) break

        // Upload the chunk via xhrSlice
        let response
        const task = xhrSlice({ url, fileItem, data: formatData(chunk), headers, errMax, onprogress })
            .then((res) => {
                response = res
                fileItem.uploadedIndexList.push(chunk.index)
            })
            .catch((err) => {
                response = err
                return Promise.reject(err)
            })
            .finally(() => {
                // Collect the result: keep a single response, or grow it into an array
                if (Array.isArray(fileItem.response)) fileItem.response.push(response)
                else if (!isEmpty(fileItem.response)) fileItem.response = [fileItem.response, response]
                else fileItem.response = response
            })

        if (pool.length < concurrency) {
            pool.push(task)
        }
        // Note: compare against uploadChunks.length, since that is what we iterate over
        if (pool.length === concurrency || index === uploadChunks.length - 1) {
            await runTask()
        }
    }
}

/**
 * Compute the file hash
 * @param file
 * @returns {Promise<unknown>}
 */
const calcHash = (file) => {
    return new Promise((resolve, reject) => {
        const worker = new Worker(new URL("@/utils/hash.worker.js", import.meta.url), {
            type: "module",
            name: "计算文件md5",
        })
        worker.postMessage(file)
        worker.onmessage = (e) => {
            const { code, hash, error } = e.data
            worker.terminate()
            if (code === 0) resolve(hash)
            else reject(error)
        }
    })
}

const handleUpload = async ({
    validateUrl,
    sliceUrl,
    mergeUrl,
    headers,
    fileItem,
    errMax, // forwarded to uploadFile below, so it must be destructured here
    concurrency = 5, // number of concurrent upload tasks
}) => {
    fileItem.remainder = "Preparing..."
    fileList.add(fileItem)

    // Compute the file hash
    fileItem.status = "hash"
    fileItem.hash = await calcHash(fileItem.file)

    // Check file integrity, i.e. which chunks the server already has
    fileItem.status = "validate"
    const chunkList = fileItem.chunks.map((chunk) => {
        const { start, end, index } = chunk
        return { size: end - start, index }
    })

    let uploadIndexList = chunkList.map((chunk) => chunk.index)
    if (validateUrl) {
        uploadIndexList = await xhrValidate({
            url: validateUrl,
            data: { hash: fileItem.hash, name: fileItem.name, chunkList },
            headers,
        })
        // Correct the already-uploaded progress
        const uploadedChunks = chunkList.filter((chunk) => !uploadIndexList.includes(chunk.index))
        fileItem.loaded = uploadedChunks.reduce((prev, curr) => prev + curr.size, 0)
    }

    // Start uploading
    fileItem.status = "upload"
    if (uploadIndexList.length) {
        await uploadFile({ url: sliceUrl, headers, fileItem, uploadIndexList, errMax, concurrency })

        // All chunks uploaded, ask the server to merge them
        if (mergeUrl) {
            fileItem.status = "merge"
            fileItem.response = await xhrMerge({
                url: mergeUrl,
                data: { hash: fileItem.hash, name: fileItem.name },
                headers,
            })
        }
    } else {
        fileItem.progress = 100
    }
    fileItem.remainder = "Done"
    fileItem.uploadTime = moment().format("YYYY-MM-DD HH:mm:ss")
    fileItem.status = "success"
}

/**
 * Receive a message from the main thread and drive the upload
 * @param handle action [upload: start, pause: pause, resume: resume, restart: restart, stop: cancel]
 * @param validateUrl file-validation endpoint
 * @param sliceUrl chunk-upload endpoint
 * @param mergeUrl chunk-merge endpoint
 * @param headers request headers
 * @param file the File to upload
 * @param fid file id (used by pause/resume/restart/stop)
 * @returns {Promise<void>}
 */
self.onmessage = (e) => {
    const { handle, validateUrl, sliceUrl, mergeUrl, headers, file, fid, chunkSize, errMax, concurrency } = e.data || {}

    const proxyItem = [...fileList].find((item) => item.fid === fid) || generateFileItem(file, chunkSize * 1024 * 1024)

    const fileItem = new Proxy(proxyItem, {
        set(target, key, value) {
            Reflect.set(target, key, value)
            if (["status", "response", "remainder", "progress", "uploadTime"].includes(key)) {
                // Push the updated data to the main thread first
                sendFileItem("change", fileItem)
                // Then trigger the callback that matches the current status
                if (target.status !== "error") sendFileItem(target.status, target)
            }
            return true
        },
    })

    if (["pause", "resume", "restart", "stop"].includes(handle)) fileItem.status = handle
    if (["stop", "pause"].includes(handle) && fid) {
        const xhrList = xhrMap.get(fid) || []
        xhrList.forEach((xhr) => xhr.abort())
    } else {
        handleUpload({ validateUrl, sliceUrl, mergeUrl, headers, fileItem, errMax, concurrency }).catch((err) => {
            if (err) {
                fileItem.status = "error"
                fileItem.remainder = err.message || "Upload failed"
                sendFileItem("error", err)
                // Abort every request that belongs to this file
                const xhrList = xhrMap.get(fid) || []
                xhrList.forEach((xhr) => xhr.abort())
            }
        })
    }
}

V. A simple back-end implementation (nodejs + express + express-fileupload)

const {
    mkdirSync,
    statSync,
    rmSync,
    readdirSync,
    readFileSync,
    writeFileSync,
    appendFileSync,
    existsSync,
} = require("fs")
const { resolve } = require("path")

1. File-integrity check

async uploadValidate(req, res) {
        const { hash, name, chunkList } = req.body
        const fileDir = resolve(__dirname,`../../public/upload/${hash}`)
        const filePath = resolve(__dirname,`../../public/upload/${hash}.${name}`)

        if (existsSync(filePath)) {
            res.send({
                code: 0,
                data: [],
                message: "文件已存在"
            })
        } else if(!existsSync(fileDir)) {
            res.send({
                code: 0,
                data: chunkList.map(item => item.index),
                message: "文件不存在"
            })
        } else {
            const files = readdirSync(fileDir)

            // Read the size of each stored chunk so it can be compared against the expected sizes
            const readQueue = Array.from(files).map((file) => {
                return new Promise((resolve, reject) => {
                    const status = statSync(fileDir + "/" + file)
                    resolve({ index: file.split(".")[0], size: status.size })
                })
            });
            const statusList = await Promise.all(readQueue)

            const invalidList = chunkList.filter(chunk => {
                const status = statusList.find(status => chunk.index == status.index)
                // A chunk needs re-uploading when it is missing or its stored size differs from the expected size
                return status ? status.size !== chunk.size : true
            }).map(item => item.index)

            res.send({
                code: 0,
                data: invalidList,
                message: "文件数据缺失"
            })
        }
    }

2. Chunk upload

async uploadSlice(req,res) {
        const {fileName,fileHash,index,start,end} = req.body
        const {chunk} = req.files
        const fileDir = resolve(__dirname,`../../public/upload/${fileHash}`)
        const filePath = `${fileDir}/${index}.${fileName}`

        if(!existsSync(fileDir)) mkdirSync(fileDir)

        if (!existsSync(filePath)) writeFileSync(filePath,chunk.data)
        
        res.send({
            code: 0,
            message: "上传成功!"
        })
    }

3. Chunk merge

  • The merge logic here may have small issues, but the idea is what matters; I'm not a back-end developer, so treat it as throwaway test code
async uploadMerge(req,res) {
        const { hash,name } = req.body
        const fileDir = resolve(__dirname,`../../public/upload/${hash}`)
        const filePath = resolve(__dirname,`../../public/upload/${hash}.${name}`)

        const files = readdirSync(fileDir)
        const fileList = Array.from(files).sort((a,b) => a.split(".")[0] - b.split(".")[0])

        // Read each chunk in order and append it to the target file
        fileList.forEach((file, index) => {
            if (index === 0) writeFileSync(filePath, "")
            const data = readFileSync(fileDir + "/" + file)
            appendFileSync(filePath, data)
        });

        // Merge finished, remove the cached chunk directory
        rmSync(fileDir, { recursive: true })

        res.send({
            code: 0,
            data: filePath,
            message: "合并成功!"
        })
    }
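The article doesn't show how these handlers are mounted; a minimal Express setup could look like the following, assuming the three handlers above are exported from a module (the route paths and port are placeholders):

const express = require("express")
const fileUpload = require("express-fileupload")
// assume uploadValidate / uploadSlice / uploadMerge are exported from this module
const uploadController = require("./uploadController")

const app = express()
app.use(express.json()) // parse the JSON bodies of the validate and merge requests
app.use(fileUpload()) // express-fileupload puts uploaded files on req.files for the slice request

app.post("/api/upload/validate", (req, res) => uploadController.uploadValidate(req, res))
app.post("/api/upload/slice", (req, res) => uploadController.uploadSlice(req, res))
app.post("/api/upload/merge", (req, res) => uploadController.uploadMerge(req, res))

app.listen(3000, () => console.log("upload server listening on port 3000"))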

VI. Using it in a project (js + vue3.3.11 + vite5.2.11 + pinia2.1.7 + arco design2.5 + vxe-table4.5.14)

1. Define the fileStore

import store from "@/store/index.js"
import { defineStore } from "pinia"
import { reactive } from "vue"
import { isFunction } from "lodash"

export const useFileStore = defineStore("file", () => {
    const fileList = reactive([])

    function generateWorker(handler) {
        const worker = new Worker(new URL("@/utils/upload.worker.js", import.meta.url), {
            type: "module",
            name: "上传文件",
        })

        worker.onmessage = (e) => {
            const { handle, fileItem } = e.data
            if (isFunction(handler[handle])) handler[handle](fileItem)

            if (fileItem.fid) {
                const index = fileList.findIndex((item) => item.fid === fileItem.fid)
                if (index < 0) fileList.push(fileItem)
                else fileList.splice(index, 1, fileItem)
            }
        }

        return worker
    }

    function useUpload({ config = {}, ...handler }) {
        let worker

        function upload(file) {
            if (!worker) worker = generateWorker(handler)
            worker.postMessage({ handle: "upload", file, ...config })
        }
        function pause(fid) {
            if (!worker) worker = generateWorker(handler)
            worker.postMessage({ handle: "pause", fid, ...config })
        }
        function resume(fid) {
            if (!worker) worker = generateWorker(handler)
            worker.postMessage({ handle: "resume", fid, ...config })
        }
        function restart(fid) {
            if (!worker) worker = generateWorker(handler)
            worker.postMessage({ handle: "restart", fid, ...config })
        }
        function stop(fid) {
            if (!worker) worker = generateWorker(handler)
            worker.postMessage({ handle: "stop", fid, ...config })
        }

        function destroy() {
            worker?.terminate()
            worker = null
        }

        return {
            upload,
            pause,
            resume,
            restart,
            stop,
            destroy,
        }
    }

    function deleteFile(fileItem) {
        const index = fileList.findIndex((item) => item.fid === fileItem.fid)
        if (index > -1) fileList.splice(index, 1)
    }

    return {
        fileList,
        deleteFile,
        useUpload,
    }
})

export const useFileStoreHook = () => useFileStore(store)

2. Define the UploadTable component

TABLE_OPTIONS

export const TABLE_OPTIONS = {
    border: true,
    autoResize: true,
    syncResize: true,
    stripe: true,
    id: "common-table",
    align: "center",
    height: "auto",
    keepSource: true,
    showOverflow: true,
    columnConfig: {
        useKey: true,
        resizable: true,
    },
    rowConfig: {
        useKey: true,
        keyField: "uuid",
        isHover: true,
        height: 48, // row height; only takes effect together with show-overflow
    },
    pagerConfig: {
        enabled: false,
        pageSize: 10,
        currentPage: 1,
        total: 0,
        pageSizes: [
            { label: "10 条/页", value: 10 },
            { label: "50 条/页", value: 50 },
            { label: "100 条/页", value: 100 },
            { label: "全部", value: -1 },
        ],
        layouts: ["Sizes", "PrevJump", "PrevPage", "Number", "NextPage", "NextJump", "FullJump", "Total"],
        perfect: true,
    }, // pagination config
    columns: [],
    data: [], // table data
}

upload.js

import Compressor from "compressorjs"

/**
 * Open a file-selection dialog
 * @param {object} options
 * @param {boolean} options.multiple allow selecting multiple files
 * @param {boolean} options.directory select a folder instead of files
 * @param {string} options.accept accepted file types
 * @param {string[]} options.whites whitelist of allowed file extensions
 * @returns {Promise<FileList>}
 */
export async function selectFile({ multiple = false, accept = "*", directory = false, whites = [] }) {
    return new Promise((resolve, reject) => {
        const input = document.createElement("input")
        input.style.display = "none"
        input.type = "file"
        input.accept = accept
        input.multiple = multiple
        input.webkitdirectory = directory
        input.mozdirectory = directory
        input.odirectory = directory
        document.body.appendChild(input)
        input.addEventListener("change", (event) => {
            const files = event.target.files
            input.remove()
            // Reject when any selected file's extension is not in the whitelist
            if (whites.length && Array.from(files).some((item) => !whites.includes(item.name.split(".").pop()))) {
                return reject(new Error("File type not allowed"))
            }
            resolve(files)
        })

        // Trigger the file-selection dialog
        input.click()
    })
}

/**
 * Format a byte count as a human-readable size
 * @param bytes
 * @returns {string}
 */
export function formatFileSize(bytes) {
    if (bytes < 1024) {
        return bytes + " B"
    } else if (bytes < 1048576) {
        return (bytes / 1024).toFixed(2) + " KB"
    } else if (bytes < 1073741824) {
        return (bytes / 1048576).toFixed(2) + " MB"
    } else {
        return (bytes / 1073741824).toFixed(2) + " GB"
    }
}

/**
 * Compress an image
 * @param {File} file
 * @param {object} config
 * @param {boolean} config.strict=true whether to output the original image instead of the compressed one when the compressed image is larger
 * @param {number} config.quality=0.8 quality of the compressed image, from 0 (lowest) to 1 (highest)
 * @param {number} config.width=undefined output width; if omitted, the natural width is used, or it is derived from the height option keeping the aspect ratio
 * @param {number} config.height=undefined output height; if omitted, the natural height is used, or it is derived from the width option keeping the aspect ratio
 * @param {number} config.minWidth=0 minimum output width; should be greater than 0 and not greater than maxWidth
 * @param {number} config.minHeight=0 minimum output height; should be greater than 0 and not greater than maxHeight
 * @param {number} config.maxWidth=Infinity maximum output width; wider images are scaled down proportionally
 * @param {number} config.maxHeight=Infinity maximum output height; taller images are scaled down proportionally
 * @param {number} config.convertSize=Infinity files whose type is in convertTypes and whose size exceeds this value are converted to JPEG; set it to Infinity to disable
 * @param {boolean} config.checkOrientation=false check the image's orientation info and auto-rotate if needed
 * @param {boolean} config.retainExif=false whether to keep the image's Exif data after compression
 * @param {"none"|"contain"|"cover"} config.resize="none" how to fit the image into the box given by the width and height options; only used when both are specified
 * @returns {Promise<File>}
 */
export function compressImg(file, config) {
    return new Promise((resolve, reject) => {
        const options = {
            success(result) {
                // Convert the compressed Blob into a File (skip this step if your component accepts a Blob)
                const compressedFile = new File([result], file.name, {
                    type: file.type,
                    lastModified: Date.now(),
                })
                resolve(compressedFile)
            },
            error(e) {
                reject(file)
            },
            convertSize: Infinity,
            checkOrientation: false,
            ...config,
        }
        new Compressor(file, options)
    })
}

/**
 * Read a File as an image data URL
 * @param {File} file
 * @returns {Promise<DataUrl>}
 */
export function readerImg(file) {
    return new Promise((resolve, reject) => {
        const reader = new FileReader()
        reader.onload = function (e) {
            resolve(e.target.result)
        }
        reader.readAsDataURL(file)
    })
}

<script setup>
/**
 * Create: 2024-05-11 16:28
 * Remark: attachment upload table
 * 1. Supports chunked upload of large files
 * 2. Supports optional automatic upload as well as manual upload
 * 3. Supports optional persisted uploads, only within kept-alive (cached) pages
 */
import { ref,reactive,useAttrs,onMounted,onUnmounted,onDeactivated } from "vue"
import ComTitle from "@/components/ComTitle/index.vue"
import { TABLE_OPTIONS } from "@/utils/frame.js"
import { formatFileSize,selectFile } from "@/utils/upload.js"
import { useFileStore } from "@/store/modules/file.js"
import { getToken } from "@/utils/session.js"
import { cloneDeep,isFunction } from "lodash"
import { useUserStore } from "@/store/modules/user.js"

const emit = defineEmits(["change", "success","error"])
const attrs = useAttrs()
const props = defineProps({
    fieldNames: Object,
    fileList: Array,
    formatValue: Function
})
const { userInfo } = useUserStore()
const { useUpload  } = useFileStore()

const fields = reactive({
    uuid: props.fieldNames?.uuid || "uuid",
    path: props.fieldNames?.path || "path",
    name: props.fieldNames?.name || "name",
    uploadUser: props.fieldNames?.uploadUser || "uploadUser",
    uploadTime: props.fieldNames?.uploadTime || "uploadTime",
})

const xGrid = ref()
const tableOptions = reactive({
    ...TABLE_OPTIONS,
    height: undefined,
    maxHeight: 9999,
    rowConfig: {
        keyField: "fid"
    },
    columns: [
        {
            type: "seq",
            title: "glob.Seq",
            width: 60
        },
        {
            field: fields.name,
            title: "glob.FileName",
            minWidth: 200
        },
        {
            field: "size",
            title: "glob.FileSize",
            width: 100,
            formatter({ cellValue }) {
                return formatFileSize(cellValue)
            }
        },
        {
            field: "progress",
            title: "glob.FileProgress",
            width: 200,
            slots: {
                default: "progress"
            }
        },
        {
            field: "remainder",
            title: "glob.FileRemainder",
            width: 100,
        },
        {
            field: fields.uploadUser,
            title: "glob.UploadUser",
            width: 100
        },
        {
            field: fields.uploadTime,
            title: "glob.UploadTime",
            width: 170
        },
        {
            field: "operation",
            title: "glob.Operation",
            width: 120,
            slots: {
                default: "operation"
            }
        }
    ],
    data: [],
    ...attrs,
})

const token = getToken()
const uploadHandler = useUpload({
    config: {
    	validateUrl: "file-validation endpoint url",
        sliceUrl: "chunk-upload endpoint url",
		mergeUrl: "chunk-merge endpoint url",
        headers: {
            Authorization: token ? `Bearer ${token}` : undefined,
        },
        chunkSize: 10
    },
    change(newFileItem) {
        let fileItem = {
            ...cloneDeep(newFileItem),
            [fields.uuid]: newFileItem.uuid,
            [fields.name]: newFileItem.name,
            [fields.path]: newFileItem.path,
            [fields.uploadUser]: userInfo.userName,
            [fields.uploadTime]: newFileItem.uploadTime,
        }
        if (isFunction(props.formatValue)) fileItem = props.formatValue(fileItem)

        const row = xGrid.value.getRowById(fileItem.fid)

        if (row) Object.assign(row, fileItem)
        else tableOptions.data.push(fileItem)

        emit("change", tableOptions.data, row)
    },
    error(err) {
    	emit("error", err)
    },
    success(fileItem) {
    	emit("success", fileItem)
    },
    stop(fileItem) {
        const index = tableOptions.data.findIndex(item => item.fid == fileItem.fid)
        if (index > -1) tableOptions.data.splice(index, 1)
    }
})

const expose = {
    upload: async () => {
        const files = await selectFile({ multiple: true })
        Array.from(files).forEach(file => uploadHandler.upload(file))
    },
    pause: uploadHandler.pause,
    resume: uploadHandler.resume,
    delete(index) {
        tableOptions.data.splice(index, 1)
    },
    restart: uploadHandler.restart,
    stop: uploadHandler.stop,
    destroy: uploadHandler.destroy
}

onMounted(() => {
    Object.assign(expose, xGrid.value)
})

onUnmounted(expose.destroy)
onDeactivated(expose.destroy)

defineExpose(expose)
</script>
<template>
    <vxe-grid ref="xGrid" v-bind="tableOptions">
        <template #progress="{ row }">
            <a-progress :percent="row.progress / 100" />
        </template>
        <template #operation="{ row, rowIndex }">
            <a-space>
                <a-tooltip content="取消">
                    <a-button v-if="row.status === 'upload'" status="danger" @click="expose.stop(row.fid)">
                        <template #icon>
                            <icon-record-stop />
                        </template>
                    </a-button>
                </a-tooltip>
                <a-tooltip content="重新上传">
                    <a-button v-if="row.status === 'error'" type="primary" @click="expose.restart(row.fid)">
                        <template #icon>
                            <icon-refresh />
                        </template>
                    </a-button>
                </a-tooltip>
                <a-tooltip content="删除">
                    <a-button
                        v-if="['success','error'].includes(row.status)"
                        status="danger"
                        @click="expose.delete(rowIndex)"
                    >
                        <template #icon>
                            <icon-delete />
                        </template>
                    </a-button>
                </a-tooltip>
            </a-space>
        </template>
        <template v-for="(slot, key) in $slots" #[key]="bind">
            <slot :name="key" v-bind="bind"></slot>
        </template>
    </vxe-grid>
</template>
<style lang="less" scoped></style>
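For completeness, a parent page could use the component roughly like this; the component path, the button, and the event handlers are assumptions rather than part of the original code:

<script setup>
import { ref } from "vue"
import UploadTable from "@/components/UploadTable/index.vue" // hypothetical path

const uploadTableRef = ref()

function onChange(data) {
    console.log("current attachment list:", data)
}
</script>
<template>
    <a-button type="primary" @click="uploadTableRef.upload()">Select files</a-button>
    <UploadTable ref="uploadTableRef" @change="onChange" @success="onChange" />
</template>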
