A Resumable File Upload Solution Based on Java 1.8, Vue, and MySQL

In large-file upload scenarios, resumable upload is a key feature for a good user experience. When the network is unstable or the file is very large (videos, archives, and so on), the user does not have to re-upload the whole file; the upload simply continues from the last breakpoint. This article shows how to implement resumable upload with Java 1.8, Vue, and MySQL, covering the complete frontend and backend logic, the database design, and the key technical points.

I. Core Principles of Resumable Upload

The essence of resumable upload is to split a large file into many small chunks, upload them individually, and merge them afterwards. The core mechanisms are (a short sketch of the chunking arithmetic follows this list):

  1. File chunking: split the file into binary chunks of a fixed size (e.g. 5MB)
  2. Unique identification: identify the file by its MD5 or SHA-1 hash to guarantee uniqueness
  3. Breakpoint tracking: record which chunks have already been uploaded so that an upload can continue from where it failed
  4. Chunk merging: once every chunk is uploaded, merge them in order into the original file
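
To make the chunking arithmetic concrete, here is a minimal, illustrative Java sketch (not part of the project code) showing how the chunk count and the byte range of each chunk follow from the file size and a fixed chunk size:

public class ChunkMath {
    public static void main(String[] args) {
        long fileSize = 123L * 1024 * 1024;   // example: a 123MB file
        int chunkSize = 5 * 1024 * 1024;      // 5MB per chunk

        // ceiling division: the last chunk may be smaller than chunkSize
        int totalChunks = (int) ((fileSize + chunkSize - 1) / chunkSize);

        for (int i = 0; i < totalChunks; i++) {
            long start = (long) i * chunkSize;
            long end = Math.min(start + chunkSize, fileSize);
            // chunk i covers bytes [start, end); the client uploads exactly this slice
        }
        System.out.println("totalChunks = " + totalChunks); // 25 for a 123MB file
    }
}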

Compared with a plain upload, resumable upload has the following advantages:

  • After a network interruption, the whole file does not need to be uploaded again
  • Multiple chunks can be uploaded in parallel, improving throughput
  • Uploads can be paused and resumed, improving the user experience

II. Technology Stack

Backend

  • Base environment: Java 1.8, Maven 3.6+
  • Web framework: Spring Boot 2.7.x (compatible with Java 1.8)
  • Database: MySQL 8.0 (stores file metadata and chunk records)
  • ORM framework: MyBatis-Plus (simplifies database access)
  • Utilities
    • Hutool (file operations, hash computation)
    • Commons IO (stream handling)

Frontend

  • Framework: Vue 3 + Vite
  • UI components: Element Plus (upload component)
  • Tools
    • spark-md5 (computes the file's MD5 in the browser)
    • axios (sends the chunk upload requests)

III. Database Design

MySQL stores the metadata for files and chunks; two core tables are needed:

1. File info table (file_info)

Records the overall file information, including its unique identifier, name, and size:

CREATE TABLE `file_info` (
  `id` bigint NOT NULL AUTO_INCREMENT COMMENT 'Primary key',
  `file_md5` varchar(32) NOT NULL COMMENT 'File MD5 hash',
  `file_name` varchar(255) NOT NULL COMMENT 'File name',
  `file_size` bigint NOT NULL COMMENT 'Total file size (bytes)',
  `file_type` varchar(50) DEFAULT NULL COMMENT 'File type',
  `chunk_size` int NOT NULL COMMENT 'Chunk size (bytes)',
  `total_chunks` int NOT NULL COMMENT 'Total number of chunks',
  `storage_path` varchar(255) DEFAULT NULL COMMENT 'Final storage path',
  `upload_status` tinyint NOT NULL DEFAULT 0 COMMENT 'Upload status (0: incomplete, 1: complete)',
  `create_time` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `update_time` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`),
  UNIQUE KEY `uk_file_md5` (`file_md5`) COMMENT 'File MD5 is unique'
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COMMENT='File info table';

2. Chunk info table (file_chunk)

Records the upload status of each chunk:

CREATE TABLE `file_chunk` (
  `id` bigint NOT NULL AUTO_INCREMENT COMMENT 'Primary key',
  `file_id` bigint NOT NULL COMMENT 'References file_info.id',
  `chunk_number` int NOT NULL COMMENT 'Chunk number (starting from 0)',
  `chunk_size` int NOT NULL COMMENT 'Size of this chunk (bytes)',
  `chunk_path` varchar(255) NOT NULL COMMENT 'Temporary storage path of the chunk',
  `upload_time` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`),
  UNIQUE KEY `uk_file_chunk` (`file_id`,`chunk_number`) COMMENT 'Chunk number is unique per file'
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COMMENT='File chunk table';

IV. Backend Implementation (Java)

1. Project configuration

Core pom.xml dependencies

<dependencies>
    <!-- Spring Boot Web -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    
    <!-- MySQL driver -->
    <dependency>
        <groupId>com.mysql</groupId>
        <artifactId>mysql-connector-j</artifactId>
        <scope>runtime</scope>
    </dependency>
    
    <!-- MyBatis-Plus -->
    <dependency>
        <groupId>com.baomidou</groupId>
        <artifactId>mybatis-plus-boot-starter</artifactId>
        <version>3.5.3.1</version>
    </dependency>
    
    <!-- Utility libraries -->
    <dependency>
        <groupId>cn.hutool</groupId>
        <artifactId>hutool-all</artifactId>
        <version>5.8.16</version>
    </dependency>
    <dependency>
        <groupId>commons-io</groupId>
        <artifactId>commons-io</artifactId>
        <version>2.11.0</version>
    </dependency>
    
    <!-- Lombok -->
    <dependency>
        <groupId>org.projectlombok</groupId>
        <artifactId>lombok</artifactId>
        <optional>true</optional>
    </dependency>
</dependencies>

application.yml configuration

spring:
  datasource:
    url: jdbc:mysql://localhost:3306/file_upload?useUnicode=true&characterEncoding=utf8&serverTimezone=Asia/Shanghai
    username: root
    password: root
    driver-class-name: com.mysql.cj.jdbc.Driver

# File storage configuration
file:
  temp-path: ./upload/temp/  # temporary storage path for chunks
  storage-path: ./upload/files/  # final file storage path
  chunk-size: 5242880  # chunk size: 5MB (5*1024*1024)

# MyBatis-Plus configuration
mybatis-plus:
  mapper-locations: classpath*:mapper/**/*.xml
  global-config:
    db-config:
      id-type: auto

2. Entity classes

FileInfo.java

import com.baomidou.mybatisplus.annotation.IdType;
import com.baomidou.mybatisplus.annotation.TableId;
import com.baomidou.mybatisplus.annotation.TableName;
import lombok.Data;

import java.util.Date;

@Data
@TableName("file_info")
public class FileInfo {
    @TableId(type = IdType.AUTO)
    private Long id;
    private String fileMd5;
    private String fileName;
    private Long fileSize;
    private String fileType;
    private Integer chunkSize;
    private Integer totalChunks;
    private String storagePath;
    private Integer uploadStatus; // 0: incomplete, 1: complete
    private Date createTime;
    private Date updateTime;
}

FileChunk.java

import com.baomidou.mybatisplus.annotation.IdType;
import com.baomidou.mybatisplus.annotation.TableId;
import com.baomidou.mybatisplus.annotation.TableName;
import lombok.Data;

import java.util.Date;

@Data
@TableName("file_chunk")
public class FileChunk {
    @TableId(type = IdType.AUTO)
    private Long id;
    private Long fileId;
    private Integer chunkNumber;
    private Integer chunkSize;
    private String chunkPath;
    private Date uploadTime;
}
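
Mapper interfaces

The service layer below references FileInfoMapper and FileChunkMapper, which are not shown in the original text. With MyBatis-Plus these would typically be empty BaseMapper interfaces; a minimal sketch (assumed, one interface per file):

import com.baomidou.mybatisplus.core.mapper.BaseMapper;
import org.apache.ibatis.annotations.Mapper;

// FileInfoMapper.java
@Mapper
public interface FileInfoMapper extends BaseMapper<FileInfo> {
}

// FileChunkMapper.java
@Mapper
public interface FileChunkMapper extends BaseMapper<FileChunk> {
}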

3. Core business logic

FileService.java

import cn.hutool.core.io.FileUtil;
import cn.hutool.core.util.IdUtil;
import com.baomidou.mybatisplus.core.conditions.query.LambdaQueryWrapper;
import com.baomidou.mybatisplus.extension.service.impl.ServiceImpl;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;
import org.springframework.web.multipart.MultipartFile;

import javax.annotation.PostConstruct;
import java.io.File;
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.List;
import java.util.stream.Collectors;

@Service
public class FileService extends ServiceImpl<FileInfoMapper, FileInfo> {

    @Value("${file.temp-path}")
    private String tempPath;
    
    @Value("${file.storage-path}")
    private String storagePath;
    
    @Value("${file.chunk-size}")
    private Integer defaultChunkSize;

    private final FileChunkMapper chunkMapper;

    public FileService(FileChunkMapper chunkMapper) {
        this.chunkMapper = chunkMapper;
    }

    // Initialize the storage directories
    @PostConstruct
    public void init() {
        FileUtil.mkdir(tempPath);
        FileUtil.mkdir(storagePath);
    }

    /**
     * Check whether the file has already been fully or partially uploaded
     */
    public FileInfo checkFile(String fileMd5) {
        return getOne(new LambdaQueryWrapper<FileInfo>()
                .eq(FileInfo::getFileMd5, fileMd5));
    }

    /**
     * Get the chunk numbers that have already been uploaded
     */
    public List<Integer> getUploadedChunks(Long fileId) {
        List<FileChunk> chunks = chunkMapper.selectList(
                new LambdaQueryWrapper<FileChunk>().eq(FileChunk::getFileId, fileId));
        return chunks.stream()
                .map(FileChunk::getChunkNumber)
                .collect(Collectors.toList());
    }

    /**
     * Upload a single chunk
     */
    @Transactional
    public void uploadChunk(MultipartFile file, String fileMd5, Integer chunkNumber) throws IOException {
        // 1. Load the file metadata
        FileInfo fileInfo = checkFile(fileMd5);
        if (fileInfo == null) {
            throw new IllegalArgumentException("文件信息不存在,请先初始化");
        }

        // 2. Save the chunk to the temporary directory
        String chunkFileName = fileMd5 + "_" + chunkNumber;
        Path chunkPath = Paths.get(tempPath, chunkFileName); // Paths.get instead of Path.of for Java 1.8
        file.transferTo(chunkPath);

        // 3. Record the chunk in the database
        FileChunk chunk = new FileChunk();
        chunk.setFileId(fileInfo.getId());
        chunk.setChunkNumber(chunkNumber);
        chunk.setChunkSize((int) file.getSize());
        chunk.setChunkPath(chunkPath.toString());
        chunkMapper.insert(chunk);
    }

    /**
     * Merge all chunks into the final file
     */
    @Transactional
    public void mergeChunks(String fileMd5) throws IOException {
        // 1. Load the file metadata
        FileInfo fileInfo = checkFile(fileMd5);
        if (fileInfo == null) {
            throw new IllegalArgumentException("文件信息不存在");
        }

        // 2. Verify that all chunks have been uploaded
        List<FileChunk> chunks = chunkMapper.selectList(
                new LambdaQueryWrapper<FileChunk>().eq(FileChunk::getFileId, fileInfo.getId()));
        
        if (chunks.size() != fileInfo.getTotalChunks()) {
            throw new IllegalStateException("Not all chunks have been uploaded; cannot merge");
        }

        // 3. Sort the chunks by chunk number
        chunks.sort((c1, c2) -> c1.getChunkNumber().compareTo(c2.getChunkNumber()));

        // 4. Create the target file
        String extension = FileUtil.extName(fileInfo.getFileName());
        String targetFileName = IdUtil.fastSimpleUUID() + (extension.isEmpty() ? "" : "." + extension);
        Path targetPath = Paths.get(storagePath, targetFileName);

        // 5. Append each chunk to the target file in order
        try (FileChannel outChannel = FileChannel.open(targetPath, StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
            for (FileChunk chunk : chunks) {
                Path chunkPath = Paths.get(chunk.getChunkPath());
                try (FileChannel inChannel = FileChannel.open(chunkPath, StandardOpenOption.READ)) {
                    inChannel.transferTo(0, inChannel.size(), outChannel);
                }
                // delete the temporary chunk file
                Files.delete(chunkPath);
            }
        }

        // 6. Update the file status
        fileInfo.setStoragePath(targetPath.toString());
        fileInfo.setUploadStatus(1); // mark as complete
        updateById(fileInfo);

        // 7. Delete the chunk records
        chunkMapper.delete(new LambdaQueryWrapper<FileChunk>().eq(FileChunk::getFileId, fileInfo.getId()));
    }

    /**
     * Initialize the file metadata
     */
    @Transactional
    public FileInfo initFile(String fileMd5, String fileName, Long fileSize, String fileType) {
        FileInfo fileInfo = checkFile(fileMd5);
        if (fileInfo != null) {
            // The file already exists (complete or partially uploaded); return the existing record
            return fileInfo;
        }

        // Calculate the total number of chunks (ceiling division)
        int totalChunks = (int) (fileSize % defaultChunkSize == 0 
                ? fileSize / defaultChunkSize 
                : fileSize / defaultChunkSize + 1);

        // Create a new file record
        fileInfo = new FileInfo();
        fileInfo.setFileMd5(fileMd5);
        fileInfo.setFileName(fileName);
        fileInfo.setFileSize(fileSize);
        fileInfo.setFileType(fileType);
        fileInfo.setChunkSize(defaultChunkSize);
        fileInfo.setTotalChunks(totalChunks);
        fileInfo.setUploadStatus(0); // initial status: incomplete
        save(fileInfo);
        
        return fileInfo;
    }
}

4. Controller implementation

FileController.java

import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;
import org.springframework.web.multipart.MultipartFile;

import java.util.HashMap;
import java.util.List;
import java.util.Map;

@RestController
@RequestMapping("/api/file")
public class FileController {

    private final FileService fileService;

    public FileController(FileService fileService) {
        this.fileService = fileService;
    }

    /**
     * Check the file's upload status
     */
    @GetMapping("/check")
    public ResponseEntity<Map<String, Object>> checkFile(@RequestParam String fileMd5) {
        FileInfo fileInfo = fileService.checkFile(fileMd5);
        Map<String, Object> result = new HashMap<>();
        
        if (fileInfo == null) {
            result.put("exists", false);
        } else {
            result.put("exists", true);
            result.put("uploaded", fileInfo.getUploadStatus() == 1);
            result.put("totalChunks", fileInfo.getTotalChunks());
            if (fileInfo.getUploadStatus() == 0) {
                // get the chunk numbers that have already been uploaded
                List<Integer> uploadedChunks = fileService.getUploadedChunks(fileInfo.getId());
                result.put("uploadedChunks", uploadedChunks);
            }
        }
        
        return ResponseEntity.ok(result);
    }

    /**
     * Initialize the file metadata
     */
    @PostMapping("/init")
    public ResponseEntity<FileInfo> initFile(
            @RequestParam String fileMd5,
            @RequestParam String fileName,
            @RequestParam Long fileSize,
            @RequestParam String fileType) {
        FileInfo fileInfo = fileService.initFile(fileMd5, fileName, fileSize, fileType);
        return ResponseEntity.ok(fileInfo);
    }

    /**
     * Upload a chunk
     */
    @PostMapping("/upload-chunk")
    public ResponseEntity<Void> uploadChunk(
            @RequestParam MultipartFile file,
            @RequestParam String fileMd5,
            @RequestParam Integer chunkNumber) {
        try {
            fileService.uploadChunk(file, fileMd5, chunkNumber);
            return ResponseEntity.ok().build();
        } catch (Exception e) {
            return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).build();
        }
    }

    /**
     * Merge chunks
     */
    @PostMapping("/merge")
    public ResponseEntity<Void> mergeChunks(@RequestParam String fileMd5) {
        try {
            fileService.mergeChunks(fileMd5);
            return ResponseEntity.ok().build();
        } catch (Exception e) {
            return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).build();
        }
    }
}

V. Frontend Implementation (Vue 3)

1. Install dependencies

npm install spark-md5 element-plus axios

2. Core component (FileUploader.vue)

<template>
  <div class="upload-container">
    <el-upload
      ref="uploadRef"
      action=""
      :auto-upload="false"
      :on-change="handleFileChange"
      :show-file-list="false"
      class="upload-demo"
    >
      <el-button type="primary">选择文件</el-button>
    </el-upload>

    <div v-if="file" class="file-info">
      <p>File name: {{ file.name }}</p>
      <p>Size: {{ formatFileSize(file.size) }}</p>
      <el-progress 
        :percentage="progress" 
        :stroke-width="4" 
        style="margin: 10px 0;"
      ></el-progress>
      <el-button 
        @click="handleUpload" 
        :disabled="isUploading"
        :loading="isUploading"
        type="success"
      >
        {{ isUploading ? 'Uploading' : 'Start Upload' }}
      </el-button>
      <el-button 
        @click="handlePause" 
        :disabled="!isUploading || isPaused"
        style="margin-left: 10px;"
      >
        Pause
      </el-button>
      <el-button 
        @click="handleResume" 
        :disabled="!isPaused"
        style="margin-left: 10px;"
      >
        Resume
      </el-button>
    </div>
  </div>
</template>

<script setup>
import { ref } from 'vue';
import SparkMD5 from 'spark-md5';
import { ElMessage, ElProgress, ElButton, ElUpload } from 'element-plus';
import axios from 'axios';

// Upload state
const uploadRef = ref(null);
const file = ref(null);
const fileMd5 = ref('');
const progress = ref(0);
const isUploading = ref(false);
const isPaused = ref(false);
const uploadedChunks = ref([]);
const totalChunks = ref(0);
const chunkSize = 5 * 1024 * 1024; // 5MB chunk size (should stay consistent with the backend file.chunk-size)

// Handle file selection
const handleFileChange = (uploadFile) => {
  file.value = uploadFile.raw;
  progress.value = 0;
  // compute the file MD5
  calculateFileMd5(uploadFile.raw);
};

// Compute the file MD5 with spark-md5, reading the file in slices
const calculateFileMd5 = (file) => {
  const fileReader = new FileReader();
  const spark = new SparkMD5.ArrayBuffer();
  const chunkSize = 2 * 1024 * 1024; // hash the file 2MB at a time
  let offset = 0;

  const loadNextChunk = () => {
    const blob = file.slice(offset, offset + chunkSize);
    fileReader.readAsArrayBuffer(blob);
  };

  fileReader.onload = (e) => {
    spark.append(e.target.result);
    offset += chunkSize;
    
    if (offset < file.size) {
      loadNextChunk();
    } else {
      fileMd5.value = spark.end();
      // once the MD5 is ready, check the file's status on the server
      checkFileStatus();
    }
  };

  loadNextChunk();
};

// Check the file's upload status on the server
const checkFileStatus = async () => {
  try {
    const response = await axios.get('/api/file/check', {
      params: { fileMd5: fileMd5.value }
    });
    
    if (response.data.exists) {
      if (response.data.uploaded) {
        ElMessage.success('File already fully uploaded');
        progress.value = 100;
      } else {
        // some chunks have already been uploaded
        uploadedChunks.value = response.data.uploadedChunks;
        totalChunks.value = response.data.totalChunks;
        progress.value = Math.round((uploadedChunks.value.length / totalChunks.value) * 100);
        ElMessage.info(`Detected ${uploadedChunks.value.length}/${totalChunks.value} chunks already uploaded`);
      }
    } else {
      // the file has never been uploaded; initialize its metadata
      await initFileInfo();
    }
  } catch (error) {
    ElMessage.error('Failed to check file status');
    console.error(error);
  }
};

// Initialize the file metadata on the server
const initFileInfo = async () => {
  try {
    await axios.post('/api/file/init', null, {
      params: {
        fileMd5: fileMd5.value,
        fileName: file.value.name,
        fileSize: file.value.size,
        fileType: file.value.type
      }
    });
  } catch (error) {
    ElMessage.error('Failed to initialize file metadata');
    console.error(error);
  }
};

// Start the upload
const handleUpload = async () => {
  if (!file.value || !fileMd5.value) return;
  
  isUploading.value = true;
  isPaused.value = false;
  
  // fetch the latest upload status (total and uploaded chunks)
  const response = await axios.get('/api/file/check', {
    params: { fileMd5: fileMd5.value }
  });
  
  totalChunks.value = response.data.totalChunks;
  uploadedChunks.value = response.data.uploadedChunks || [];
  
  // upload the chunks that are still missing
  uploadChunks();
};

// Upload the remaining chunks, one batch at a time
const uploadChunks = async () => {
  if (isPaused.value) return;
  
  // work out which chunks still need to be uploaded
  const chunksToUpload = [];
  for (let i = 0; i < totalChunks.value; i++) {
    if (!uploadedChunks.value.includes(i)) {
      chunksToUpload.push(i);
    }
  }
  
  if (chunksToUpload.length === 0) {
    // all chunks are uploaded; ask the server to merge them
    await mergeChunks();
    return;
  }
  
  // upload chunks concurrently (concurrency limited to 3 here)
  const concurrency = 3;
  const chunkPromises = [];
  
  for (let i = 0; i < Math.min(concurrency, chunksToUpload.length); i++) {
    chunkPromises.push(uploadChunk(chunksToUpload[i]));
  }
  
  await Promise.all(chunkPromises);
  
  // recurse to upload the next batch
  uploadChunks();
};

// Upload a single chunk
const uploadChunk = async (chunkNumber) => {
  if (isPaused.value) return;
  
  const start = chunkNumber * chunkSize;
  const end = Math.min(start + chunkSize, file.value.size);
  const chunk = file.value.slice(start, end);
  
  const formData = new FormData();
  formData.append('file', chunk);
  formData.append('fileMd5', fileMd5.value);
  formData.append('chunkNumber', chunkNumber);
  
  try {
    await axios.post('/api/file/upload-chunk', formData, {
      headers: { 'Content-Type': 'multipart/form-data' }
    });
    
    uploadedChunks.value.push(chunkNumber);
    progress.value = Math.round((uploadedChunks.value.length / totalChunks.value) * 100);
  } catch (error) {
    ElMessage.error(`Chunk ${chunkNumber} failed to upload, retrying`);
    console.error(error);
    // retry the current chunk (unbounded here; a retry limit would be wise in production)
    await uploadChunk(chunkNumber);
  }
};

// Ask the server to merge the chunks
const mergeChunks = async () => {
  try {
    await axios.post('/api/file/merge', null, {
      params: { fileMd5: fileMd5.value }
    });
    ElMessage.success('File upload complete');
    isUploading.value = false;
    progress.value = 100;
  } catch (error) {
    ElMessage.error('Failed to merge file');
    console.error(error);
    isUploading.value = false;
  }
};

// Pause the upload
const handlePause = () => {
  isPaused.value = true;
  isUploading.value = false;
};

// Resume the upload
const handleResume = () => {
  isUploading.value = true;
  isPaused.value = false;
  uploadChunks();
};

// Format a file size for display
const formatFileSize = (bytes) => {
  if (bytes < 1024) return bytes + ' B';
  if (bytes < 1024 * 1024) return (bytes / 1024).toFixed(2) + ' KB';
  return (bytes / (1024 * 1024)).toFixed(2) + ' MB';
};
</script>

<style scoped>
.upload-container {
  max-width: 800px;
  margin: 20px auto;
  padding: 20px;
  border: 1px solid #eee;
  border-radius: 4px;
}

.file-info {
  margin-top: 20px;
  padding: 10px;
  border: 1px solid #e5e7eb;
  border-radius: 4px;
}
</style>

VI. Testing and Verification

  1. Basic tests

    • Pick a large file (e.g. a 100MB video) and verify chunk upload and merging
    • Refresh the page during an upload and check that it resumes from the breakpoint
    • Drop and restore the network connection and verify that the upload resumes correctly
  2. Boundary tests

    • Test a file whose size is an exact multiple of the chunk size (see the check after this list)
    • Verify that uploading the same file more than once does not store it twice
    • Test the performance of uploading several large files concurrently
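
For the first boundary case, the chunk-count formula used in FileService.initFile can be checked on its own; a small illustrative snippet (not part of the project code):

public class ChunkBoundaryCheck {
    public static void main(String[] args) {
        int chunkSize = 5 * 1024 * 1024;   // 5MB, as configured in application.yml
        long exact = 20L * chunkSize;      // exactly 100MB: an exact multiple of the chunk size
        long oneMore = exact + 1;          // one extra byte

        // same formula as FileService.initFile
        System.out.println(exact % chunkSize == 0 ? exact / chunkSize : exact / chunkSize + 1);       // prints 20
        System.out.println(oneMore % chunkSize == 0 ? oneMore / chunkSize : oneMore / chunkSize + 1);  // prints 21
    }
}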

VII. Possible Improvements

  1. Adaptive chunk size: adjust the chunk size dynamically based on the file size (larger chunks for small files, smaller chunks for large files)
  2. Persist breakpoints locally: keep the upload progress in localStorage so it is not lost when the page is refreshed
  3. Chunk checksums: compute each chunk's MD5 before upload so the integrity of every transfer can be verified (a server-side sketch follows this list)
  4. Distributed storage: store files in a distributed file system (e.g. MinIO, FastDFS) to allow horizontal scaling
  5. Background merging: merge the chunks in an asynchronous task once they are all uploaded, so the frontend does not have to wait
  6. Upload rate limiting: throttle the upload speed to avoid consuming too much bandwidth
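
For chunk checksums (item 3), the server could recompute each chunk's MD5 with Hutool and compare it against the value reported by the client. A minimal sketch, assuming the client adds an extra chunkMd5 parameter to the upload-chunk request (which is not part of the API shown above):

import cn.hutool.crypto.digest.DigestUtil;
import org.springframework.web.multipart.MultipartFile;

import java.io.IOException;

public class ChunkChecksum {

    /** Returns true if the uploaded bytes hash to the MD5 the client reported. */
    public static boolean verifyChunk(MultipartFile chunk, String expectedMd5) throws IOException {
        String actualMd5 = DigestUtil.md5Hex(chunk.getBytes()); // hash the chunk body
        return actualMd5.equalsIgnoreCase(expectedMd5);
    }
}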

VIII. Summary

This article walked through a resumable-upload implementation based on Java 1.8, Vue, and MySQL, from the database design to the backend services and the frontend uploader. The solution can be refined further according to business needs, for example with access control, file encryption, or upload progress notifications. As a foundational capability for large-file uploads, resumable upload is widely useful in video platforms, cloud storage, and enterprise document management.
