
Example implementation of uploading large files with vue + springboot

Preface

As we all know, uploading large files is a troublesome thing. Uploading a file in a single request works fine for small files, but for large files, network problems, request timeouts, and similar issues can easily cause the upload to fail. So this time, let's look at how to upload large files in a vue + springboot project.

Logic

Uploading a large file generally means uploading it in chunks and then merging all of the slices back into the complete file. It can be implemented according to the following logic:

  • The front end selects the file on the page and slices it with file.slice(). Each slice is usually a fixed size (for example 5MB), and the total number of slices is recorded.
  • Each slice is uploaded to the backend service separately; XMLHttpRequest or a library such as Axios can be used to send the Ajax requests. Every request carries three parameters: the current slice index (starting from 0), the total number of slices, and the slice file data.
  • After receiving a slice, the backend service saves it to a temporary file under the specified path and records the uploaded slice index and upload status. If a slice fails to upload, the front end is notified to retransmit it.
  • When all slices have been uploaded successfully, the backend service reads all slice contents and merges them into the complete file. The merge can be implemented with a BufferedOutputStream.
  • Finally, a response indicating that the file upload succeeded is returned to the front end.

Front end

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>File Upload</title>
    <!-- jQuery is required for the $.ajax / $.post calls below -->
    <script src="https://code.jquery.com/jquery-3.6.0.min.js"></script>
</head>
<body>
    <input type="file" >
    <button onclick="upload()">Upload</button>
    <script>
        function upload() {
            let file = ("fileInput").files[0];
            let chunkSize = 5 * 1024 * 1024; // Slice size is 5MB            let totalChunks = ( / chunkSize); // Calculate the total number of slices            let index = 0;
            while (index < totalChunks) {
                let chunk = (index * chunkSize, (index + 1) * chunkSize);
                let formData = new FormData();
                ("file", chunk);
                ("index", index);
                ("totalChunks", totalChunks);
                // Send an Ajax request to upload slices                $.ajax({
                    url: "/uploadChunk",
                    type: "POST",
                    data: formData,
                    processData: false,
                    contentType: false,
                    success: function () {
                        if (++index >= totalChunks) {
                            // All slices are uploaded and notified the server to merge files                            $.post("/mergeFile", {fileName: }, function () {
                                alert("Upload complete!");
                            })
                        }
                    }
                });
            }
        }
    </script>
</body>
</html>

Back end

Controller layer:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.multipart.MultipartFile;

import java.io.*;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

@RestController
public class FileController {

    @Value("${upload.path}")
    private String uploadPath;

    @Autowired
    private RedisTemplate<String, Object> redisTemplate;

    @PostMapping("/uploadChunk")
    public void uploadChunk(@RequestParam("file") MultipartFile file,
                            @RequestParam("index") int index,
                            @RequestParam("totalChunks") int totalChunks) throws IOException {
        // Save the chunk using "original file name + slice index" as the file name
        String originalName = file.getOriginalFilename();
        String chunkFileName = originalName + "." + index;
        Path tempFile = Paths.get(uploadPath, chunkFileName);
        Files.createDirectories(tempFile.getParent()); // Make sure the upload directory exists
        Files.write(tempFile, file.getBytes());
        // Record the upload status: one Redis list per file, holding the uploaded slice indexes
        redisTemplate.opsForList().rightPush("upload:" + originalName, index);
        // If all slices have been uploaded, trigger the merge
        if (isAllChunksUploaded(originalName, totalChunks)) {
            sendMergeRequest(originalName, totalChunks);
        }
    }

    @PostMapping("/mergeFile")
    public void mergeFile(String fileName) throws IOException {
        // All slices have been uploaded successfully; merge them into the final file
        int totalChunks = getTotalChunks(fileName);
        List<InputStream> chunkStreams = new ArrayList<>();
        for (int i = 0; i < totalChunks; i++) {
            String chunkFileName = fileName + "." + i;
            Path tempFile = Paths.get(uploadPath, chunkFileName);
            chunkStreams.add(Files.newInputStream(tempFile));
        }
        Path destFile = Paths.get(uploadPath, fileName);
        try (OutputStream out = new BufferedOutputStream(Files.newOutputStream(destFile));
             SequenceInputStream seqIn = new SequenceInputStream(Collections.enumeration(chunkStreams));
             BufferedInputStream bufIn = new BufferedInputStream(seqIn)) {
            byte[] buffer = new byte[1024];
            int len;
            while ((len = bufIn.read(buffer)) > 0) {
                out.write(buffer, 0, len);
            }
        }
        // Clean up temporary files and the upload status record
        for (int i = 0; i < totalChunks; i++) {
            String chunkFileName = fileName + "." + i;
            Path tempFile = Paths.get(uploadPath, chunkFileName);
            Files.delete(tempFile);
        }
        redisTemplate.delete("upload:" + fileName);
    }

    private int getTotalChunks(String fileName) {
        // Count the slice files (fileName.0, fileName.1, ...) in the upload directory
        File[] chunks = Paths.get(uploadPath).toFile()
                .listFiles((dir, name) -> name.startsWith(fileName + "."));
        return chunks == null ? 0 : chunks.length;
    }

    private boolean isAllChunksUploaded(String fileName, int totalChunks) {
        // Determine whether all slices have been uploaded
        List<Object> uploadFlags = redisTemplate.opsForList().range("upload:" + fileName, 0, -1);
        return uploadFlags != null && uploadFlags.size() == totalChunks;
    }

    private void sendMergeRequest(String fileName, int totalChunks) {
        // Ask the /mergeFile endpoint to merge the slices without blocking the upload request
        new Thread(() -> {
            try {
                URL url = new URL("http://localhost:8080/mergeFile");
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                conn.setRequestMethod("POST");
                conn.setDoInput(true);
                conn.setDoOutput(true);
                conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded;charset=utf-8");
                OutputStream out = conn.getOutputStream();
                String query = "fileName=" + fileName;
                out.write(query.getBytes(StandardCharsets.UTF_8));
                out.flush();
                out.close();
                BufferedReader br = new BufferedReader(new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8));
                while (br.readLine() != null) { /* drain the response */ }
                br.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }).start();
    }
}

Here upload.path is the directory where uploaded files are saved; it can be configured in application.properties or application.yml. You also need to define a RedisTemplate bean to record the upload status.
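
For example, assuming the property is named upload.path as in the controller above (the directory shown is only an illustration), the entry in application.yml could look like this:

upload:
  path: /data/upload/chunks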

RedisTemplate configuration

To use RedisTemplate, you need to add the following dependency:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>

Then configure the Redis connection in application.yml:

spring:
  redis:
    host: localhost
    port: 6379
    database: 0
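
Spring Boot's auto-configuration only exposes a RedisTemplate<Object, Object> bean, so injecting the RedisTemplate<String, Object> used above usually requires declaring one yourself. Below is a minimal sketch; the class name RedisConfig and the serializer choices are assumptions, adjust them to your project:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.serializer.GenericJackson2JsonRedisSerializer;
import org.springframework.data.redis.serializer.StringRedisSerializer;

@Configuration
public class RedisConfig {

    // Expose a RedisTemplate<String, Object> so the @Autowired fields above can resolve
    @Bean
    public RedisTemplate<String, Object> redisTemplate(RedisConnectionFactory connectionFactory) {
        RedisTemplate<String, Object> template = new RedisTemplate<>();
        template.setConnectionFactory(connectionFactory);
        // Serializer choices are an assumption; pick what fits your data
        template.setKeySerializer(new StringRedisSerializer());
        template.setValueSerializer(new GenericJackson2JsonRedisSerializer());
        return template;
    }
}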

Then use it in your own class

@Component
public class MyClass {

    @Autowired
    private RedisTemplate<String, Object> redisTemplate;

    public void set(String key, Object value) {
        redisTemplate.opsForValue().set(key, value);
    }

    public Object get(String key) {
        return redisTemplate.opsForValue().get(key);
    }
}

Things to note

  • The slice size for each upload needs to be chosen with both upload speed and stability in mind, to avoid upload failures caused by excessive server resource usage or network instability.
  • All slices must be uploaded before merging, and they must be merged in index order; otherwise the resulting file may be incomplete or corrupted.
  • After the upload completes, temporary files need to be cleaned up promptly so they do not fill the server's disk. You can set up a scheduled task to clean out expired temporary files, as sketched below.
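
A minimal sketch of such a cleanup task, assuming the chunk files keep the "name.index" naming used above, that upload.path is the same property as in the controller, and that @EnableScheduling is present on a configuration class:

import java.io.File;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class ChunkCleanupTask {

    @Value("${upload.path}")
    private String uploadPath;

    // Run at the top of every hour and delete chunk files older than 24 hours
    @Scheduled(cron = "0 0 * * * *")
    public void cleanExpiredChunks() {
        File[] files = new File(uploadPath).listFiles();
        if (files == null) {
            return;
        }
        long cutoff = System.currentTimeMillis() - 24 * 60 * 60 * 1000L;
        for (File f : files) {
            // Chunk files end with ".<index>", e.g. "video.mp4.3"
            if (f.getName().matches(".*\\.\\d+$") && f.lastModified() < cutoff) {
                f.delete();
            }
        }
    }
}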

Conclusion

This concludes this article on implementing large file uploads with vue + springboot. For more related content about uploading large files with vue and springboot, please search my previous articles or continue browsing the related articles. I hope you will continue to support me!