No More Waiting! Chunked Large-File Upload + Instant Upload ("秒传") with Spring Boot 3.3

In modern file-upload scenarios, users routinely face large files, network interruptions, and bandwidth wasted on duplicate uploads. To solve these problems, this article builds a high-performance, extensible upload system on Spring Boot 3.3 with the features below. Complete front-end and back-end examples walk you from zero to a practical upload solution, so business systems can plug in large-file handling with minimal effort.

Instant upload ("秒传", deduplicated by MD5)

Chunked upload (resumable transfer for large files)

Chunk merging (assembled on the server)

Building the Spring Boot 3.3 Project

Key dependencies in pom.xml:

<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>cn.hutool</groupId>
        <artifactId>hutool-all</artifactId>
        <version>5.8.25</version>
    </dependency>
</dependencies>
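One detail that bites in practice: Spring Boot caps multipart uploads at 1 MB per file by default, so even the 2 MB chunks used later would be rejected. A minimal `application.properties` sketch (the 100MB values are illustrative, tune them to your chunk and file sizes):

```properties
# Raise the default multipart limits so 2 MB chunks (and whole-file uploads) pass
spring.servlet.multipart.max-file-size=100MB
spring.servlet.multipart.max-request-size=100MB
```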

Defining the File-Info Entity Class FileInfo

package com.icoderoad.model;

import cn.hutool.core.util.IdUtil;

public class FileInfo {
    private String id = IdUtil.fastUUID();
    private String fileName;
    private String fileMd5;
    private Long fileSize;
    private String filePath;

    public FileInfo(String fileName, String fileMd5, Long fileSize, String filePath) {
        this.fileName = fileName;
        this.fileMd5 = fileMd5;
        this.fileSize = fileSize;
        this.filePath = filePath;
    }

    // Getters are required for Jackson to serialize this object in responses
    public String getId() { return id; }
    public String getFileName() { return fileName; }
    public String getFileMd5() { return fileMd5; }
    public Long getFileSize() { return fileSize; }
    public String getFilePath() { return filePath; }
}

The Core Service Class FileService

package com.icoderoad.service;

import cn.hutool.crypto.digest.DigestUtil;
import com.icoderoad.model.FileInfo;
import org.springframework.stereotype.Service;
import org.springframework.web.multipart.MultipartFile;

import java.io.*;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

@Service
public class FileService {
    // In-memory store keyed by file MD5; swap in a database for production use
    private final Map<String, FileInfo> fileStore = new ConcurrentHashMap<>();
    private final String tempDir = System.getProperty("java.io.tmpdir") + File.separator + "chunks";

    public FileInfo findByMd5(String md5) {
        return fileStore.get(md5);
    }

    public FileInfo saveFile(String fileName, String fileMd5, Long fileSize, String filePath) {
        FileInfo info = new FileInfo(fileName, fileMd5, fileSize, filePath);
        fileStore.put(fileMd5, info);
        return info;
    }

    public String calculateMD5(MultipartFile file) throws IOException {
        return DigestUtil.md5Hex(file.getInputStream());
    }

    // Store one chunk as "<index>.part" under a per-file directory
    public void saveChunk(MultipartFile chunk, String identifier, int index) throws IOException {
        File dir = new File(tempDir + File.separator + identifier);
        if (!dir.exists()) dir.mkdirs();
        chunk.transferTo(new File(dir, index + ".part"));
    }

    // Concatenate the chunks in index order into the final file
    public File mergeChunks(String identifier, int totalChunks, String fileName) throws IOException {
        File dir = new File(tempDir + File.separator + identifier);
        File merged = new File(System.getProperty("java.io.tmpdir"), fileName);
        try (FileOutputStream out = new FileOutputStream(merged)) {
            for (int i = 0; i < totalChunks; i++) {
                File chunk = new File(dir, i + ".part");
                try (FileInputStream in = new FileInputStream(chunk)) {
                    byte[] buffer = new byte[1024 * 1024];
                    int len;
                    while ((len = in.read(buffer)) > 0) {
                        out.write(buffer, 0, len);
                    }
                }
            }
        }
        return merged;
    }
}
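The merge loop can be exercised in isolation. The sketch below (plain Java, no Spring; the class name `MergeDemo` and file names are mine) writes two chunk files and stitches them together with the same 1 MB buffered-copy pattern as `mergeChunks`:

```java
import java.io.*;
import java.nio.file.*;

public class MergeDemo {
    // Merge numbered "<i>.part" files from dir into a single output file,
    // using the same buffered-copy loop as FileService.mergeChunks
    static File merge(File dir, int totalChunks, File merged) throws IOException {
        try (FileOutputStream out = new FileOutputStream(merged)) {
            for (int i = 0; i < totalChunks; i++) {
                try (FileInputStream in = new FileInputStream(new File(dir, i + ".part"))) {
                    byte[] buffer = new byte[1024 * 1024];
                    int len;
                    while ((len = in.read(buffer)) > 0) {
                        out.write(buffer, 0, len);
                    }
                }
            }
        }
        return merged;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("chunks-demo");
        Files.write(dir.resolve("0.part"), "Hello, ".getBytes());
        Files.write(dir.resolve("1.part"), "world!".getBytes());
        File merged = merge(dir.toFile(), 2, dir.resolve("merged.txt").toFile());
        System.out.println(Files.readString(merged.toPath())); // prints "Hello, world!"
    }
}
```

Because the chunks are read strictly in index order, the merge is deterministic no matter what order the chunks arrived in.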

The Common Response Wrapper Result

package com.icoderoad.common;

public class Result {
    private boolean success;
    private Object data;
    private String message;

    public Result(boolean success, Object data, String message) {
        this.success = success;
        this.data = data;
        this.message = message;
    }

    public static Result success(Object data) {
        return new Result(true, data, "OK");
    }

    public static Result error(String message) {
        return new Result(false, null, message);
    }

    // Getters are required for Jackson serialization
    public boolean isSuccess() { return success; }
    public Object getData() { return data; }
    public String getMessage() { return message; }
}

The Controller FileController

package com.icoderoad.controller;


import com.icoderoad.common.Result;
import com.icoderoad.model.FileInfo;
import com.icoderoad.service.FileService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.*;
import org.springframework.web.multipart.MultipartFile;


import cn.hutool.crypto.digest.DigestUtil;

import java.io.File;


@RestController
@RequestMapping("/api/file")
public class FileController {


    @Autowired
    private FileService fileService;


    @PostMapping("/check")
    public Result check(@RequestParam("md5") String md5) {
        FileInfo exist = fileService.findByMd5(md5);
        return Result.success(exist);
    }


    @PostMapping("/upload")
    public Result upload(@RequestParam("file") MultipartFile file) {
        try {
            String md5 = fileService.calculateMD5(file);
            FileInfo exist = fileService.findByMd5(md5);
            if (exist != null) return Result.success(exist);
            String path = System.getProperty("java.io.tmpdir") + File.separator + file.getOriginalFilename();
            file.transferTo(new File(path));
            FileInfo saved = fileService.saveFile(file.getOriginalFilename(), md5, file.getSize(), path);
            return Result.success(saved);
        } catch (Exception e) {
            return Result.error("Upload failed: " + e.getMessage());
        }
    }


    @PostMapping("/chunk")
    public Result uploadChunk(@RequestParam("chunk") MultipartFile chunk,
                              @RequestParam("identifier") String identifier,
                              @RequestParam("index") int index) {
        try {
            fileService.saveChunk(chunk, identifier, index);
            return Result.success("Chunk uploaded");
        } catch (Exception e) {
            return Result.error("Chunk upload failed: " + e.getMessage());
        }
    }


    @PostMapping("/merge")
    public Result mergeChunks(@RequestParam("identifier") String identifier,
                               @RequestParam("total") int total,
                               @RequestParam("fileName") String fileName) {
        try {
            File merged = fileService.mergeChunks(identifier, total, fileName);
            String md5 = DigestUtil.md5Hex(merged);
            FileInfo info = fileService.saveFile(fileName, md5, merged.length(), merged.getAbsolutePath());
            return Result.success(info);
        } catch (Exception e) {
            return Result.error("Merge failed: " + e.getMessage());
        }
    }
}
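The instant-upload check keys the in-memory store by the file's MD5 hex digest. If you want to see what that key looks like without pulling in Hutool, `DigestUtil.md5Hex` produces the standard lowercase hex MD5, the same result as the JDK's `MessageDigest`. A dependency-free sketch (the class name `Md5Demo` is mine):

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class Md5Demo {
    // Hex-encode the MD5 digest of a byte array -- the same value the server
    // uses as the deduplication key for instant upload
    static String md5Hex(byte[] data) throws NoSuchAlgorithmException {
        byte[] digest = MessageDigest.getInstance("MD5").digest(data);
        StringBuilder sb = new StringBuilder();
        for (byte b : digest) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        // The well-known MD5 of "hello": 5d41402abc4b2a76b9719d911017c592
        System.out.println(md5Hex("hello".getBytes()));
    }
}
```

Note that MD5 is fine as a dedup fingerprint here, but it is not collision-resistant; if adversarial inputs are a concern, SHA-256 is a drop-in alternative.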

Front End: Chunked Upload and Instant Upload with Bootstrap + SparkMD5 + Axios

<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Large File Upload Demo</title>
<link href="https://cdn.jsdelivr.net/npm/bootstrap@5.3.0/dist/css/bootstrap.min.css" rel="stylesheet">
<script src="https://cdn.jsdelivr.net/npm/axios/dist/axios.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/spark-md5/spark-md5.min.js"></script>
</head>
<body class="container py-5">
<div class="card shadow-lg">
    <div class="card-header bg-primary text-white">
      <h4>Chunked Upload + Instant Upload Demo</h4>
    </div>
    <div class="card-body">
      <div class="mb-3">
        <input type="file" class="form-control" id="fileInput">
      </div>
      <button class="btn btn-success" onclick="upload()">Start Upload</button>
      <div class="mt-3">
        <div class="progress">
          <div id="progressBar" class="progress-bar" role="progressbar" style="width: 0%">0%</div>
        </div>
      </div>
    </div>
</div>

<script>
    const CHUNK_SIZE = 2 * 1024 * 1024; // 2 MB per chunk
    let file = null;

    document.getElementById('fileInput').addEventListener('change', function (e) {
      file = e.target.files[0];
    });

    async function upload() {
      if (!file) return alert('Please select a file first');

      const fileMD5 = await calculateMD5(file);
      const chunkCount = Math.ceil(file.size / CHUNK_SIZE);

      // Instant-upload check: ask the server whether this MD5 already exists
      const checkRes = await axios.post('/api/file/check', null, {
        params: { md5: fileMD5 }
      });

      if (checkRes.data.success && checkRes.data.data) {
        alert('File already exists on the server; instant upload complete!');
        updateProgressBar(100);
        return;
      }

      // Upload each chunk; the MD5 serves as the identifier grouping the chunks
      for (let i = 0; i < chunkCount; i++) {
        const start = i * CHUNK_SIZE;
        const end = Math.min(file.size, start + CHUNK_SIZE);
        const chunk = file.slice(start, end);

        const formData = new FormData();
        formData.append('chunk', chunk);
        formData.append('identifier', fileMD5);
        formData.append('index', i);

        await axios.post('/api/file/chunk', formData);
        updateProgressBar(Math.round(((i + 1) / chunkCount) * 100));
      }

      // Ask the server to assemble the chunks into the final file
      await axios.post('/api/file/merge', null, {
        params: { identifier: fileMD5, total: chunkCount, fileName: file.name }
      });

      alert('Upload and merge complete!');
    }

    // Hash the file incrementally with SparkMD5 so huge files never
    // need to be loaded into memory all at once
    function calculateMD5(file) {
      return new Promise((resolve, reject) => {
        const chunks = Math.ceil(file.size / CHUNK_SIZE);
        let currentChunk = 0;
        const spark = new SparkMD5.ArrayBuffer();
        const fileReader = new FileReader();

        fileReader.onload = e => {
          spark.append(e.target.result);
          currentChunk++;
          if (currentChunk < chunks) {
            loadNext();
          } else {
            resolve(spark.end());
          }
        };

        fileReader.onerror = () => reject('Failed to read file');

        function loadNext() {
          const start = currentChunk * CHUNK_SIZE;
          const end = Math.min(start + CHUNK_SIZE, file.size);
          fileReader.readAsArrayBuffer(file.slice(start, end));
        }

        loadNext();
      });
    }

    function updateProgressBar(percent) {
      const bar = document.getElementById('progressBar');
      bar.style.width = percent + '%';
      bar.innerText = percent + '%';
    }
  </script>
</body>
</html>
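The client slices the file into fixed 2 MB chunks, and the chunk-count arithmetic is ceiling division, the same on any platform. A quick Java sketch of the index math mirroring `Math.ceil(file.size / CHUNK_SIZE)` in the script (the class name `ChunkMath` is mine):

```java
public class ChunkMath {
    static final long CHUNK_SIZE = 2L * 1024 * 1024; // 2 MB, matching the front end

    // Number of chunks needed for a file of the given size (ceiling division,
    // done in integer arithmetic to avoid floating-point rounding)
    static long chunkCount(long fileSize) {
        return (fileSize + CHUNK_SIZE - 1) / CHUNK_SIZE;
    }

    public static void main(String[] args) {
        System.out.println(chunkCount(5L * 1024 * 1024)); // 5 MB -> 3 chunks
        System.out.println(chunkCount(4L * 1024 * 1024)); // 4 MB -> 2 chunks exactly
    }
}
```

The server's merge loop relies on this count matching the `total` parameter sent to `/api/file/merge`, so both sides must agree on the chunk size.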

Conclusion

In this article we built a complete large-file upload system that is efficient, stable, and extensible, well suited to document upload, video management, and asset collection in enterprise systems. Its core strengths:

  • 🧠 Instant upload: avoids duplicate uploads and saves bandwidth and storage
  • 💾 Chunked transfer: handles large files and improves upload stability
  • 🚀 Extensibility: can be combined with Redis, MQ, and similar components to scale concurrent processing
Editor: Wu Xiaoyan. Source: 路条编程