SoFunction
Updated on 2025-04-03

Detailed front-end code for chunked upload of large files

Chunked (sharded) upload is a technique that splits a large file into many small parts and uploads them separately. It improves upload efficiency, shortens network transfer time, and makes it easy to resume the upload when the network is unstable or an error occurs partway through.

The steps for uploading large file fragments are as follows:

  • Split the large file into multiple fixed-size chunks; each chunk is usually between tens of KB and several MB.
  • Upload the chunks one by one; HTTP, FTP and other protocols can be used for the transfer.
  • After receiving each chunk, the server uses its MD5 value to decide how to save or merge it.
  • During the upload, a progress record keyed by MD5 marks the chunks that have already been uploaded successfully, so progress can be restored after an interruption.
  • Once all chunks are uploaded, the server merges them into the complete file.
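The first step above, computing the byte range of every chunk, can be sketched in plain JavaScript. `computeChunkRanges` is a hypothetical helper for illustration, not part of the component shown later:

```javascript
// Hypothetical helper: compute the [start, end) byte range of every chunk.
// The last chunk may be smaller than chunkSize.
function computeChunkRanges(fileSize, chunkSize) {
  const ranges = [];
  for (let start = 0; start < fileSize; start += chunkSize) {
    ranges.push([start, Math.min(start + chunkSize, fileSize)]);
  }
  return ranges;
}

// A 10 MB file cut into 4 MB chunks yields three parts: 4 MB, 4 MB and 2 MB.
const ranges = computeChunkRanges(10 * 1024 * 1024, 4 * 1024 * 1024);
console.log(ranges.length); // 3
```

In the browser, each range maps to one `file.slice(start, end)` call, which is exactly what the component's slicing loop does.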

The benefits of chunked upload are:

  • Faster uploads: several chunks can be uploaded at the same time, improving overall speed.
  • Resumable transfer: if the upload is interrupted or a chunk fails, only the missing or failed chunks need to be re-sent, saving network transfer time.
  • Easier management: small chunks are easier to manage and store, avoiding the memory pressure of uploading an entire large file at once.
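Resumable transfer boils down to asking the server which chunks it already holds and re-sending only the rest. A minimal sketch (the helper name and sample indices are made up for illustration):

```javascript
// Hypothetical sketch: the server reports the chunk indices it already
// holds; the client re-uploads only the missing ones.
function chunksToResend(totalChunks, uploadedIndexes) {
  const done = new Set(uploadedIndexes);
  const pending = [];
  for (let i = 0; i < totalChunks; i++) {
    if (!done.has(i)) pending.push(i);
  }
  return pending;
}

// 6 chunks in total; chunks 0, 1 and 4 survived the interruption.
console.log(chunksToResend(6, [0, 1, 4])); // [ 2, 3, 5 ]
```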

Chunked upload is widely used in cloud storage, file transfer and similar scenarios, giving users a better and more efficient upload experience.

View code

Reading and hashing a large file takes time, so a loading state is shown while the chunks are prepared. The following example supports single-file upload only, with upload progress.

<template>
  <div class="slice-upload" v-loading="loading" element-loading-text="Reading file chunks"
    element-loading-spinner="el-icon-loading">
    <form  method="post" style="display: inline-block">
      <el-button size="small" @click="inputChange" class="file-choose-btn" :disabled="uploading">
        Select a file
        <input v-show="false"  ref="fileValue" :accept="accept" type="file" @change="choseFile" />
      </el-button>
    </form> 
    <slot name="status"></slot> 
    <div class="el-upload__tip">
      Please upload a file no larger than <span style="color: #e6a23c">{{ maxCalc }}</span>
    </div>
    <div class="file-list">
      <transition name="list" tag="p">
        <div v-if="file" class="list-item">
          <i class="el-icon-document mr5"></i>
          <span>{{ file.name }}
            <em v-show="uploading" style="color: #67c23a">Uploading...</em></span>
          <span class="percentage">{{ percentage }}%</span>
          <el-progress :show-text="false" :text-inside="false" :stroke-width="2" :percentage="percentage" />
        </div>
      </transition>
    </div>
  </div>
</template>

Logical code

The spark-md5 package is required for computing the file's MD5:

npm install spark-md5
<script>
import SparkMD5 from "spark-md5";
import axios from "axios";
import {
  getImagecheckFile, // Check whether the file was already uploaded (for resumable upload)
  Imageinit,         // Obtain the MinIO multipart upload addresses for the chunks
  Imagecomplete,     // Merge the uploaded chunks
} from "/*Interface Address*/";
export default {
  name: "sliceUpload",
  /**
    * External data
    * @type {Object}
    */
  props: {
    /**
      * @Description
      * Code comment description
      * Interface url
      * @Return
      */
    findFileUrl: String,
    continueUrl: String,
    finishUrl: String,
    removeUrl: String,
    /**
      * @Description
      * Code comment description
      * Maximum uploaded file size 100G
      * @Return
      */
    maxFileSize: {
      type: Number,
      default: 100 * 1024 * 1024 * 1024,
    },
    /**
      * @Description
      * Code comment description
      * Slice size
      * @Return
      */
    sliceSize: {
      type: Number,
      default: 50 * 1024 * 1024,
    },
    /**
      * @Description
      * Code comment description
      * Can upload it
      * @Return
      */
    show: {
      type: Boolean,
      default: true,
    },
    accept: String,
  },
  /**
    * Data definition
    * @type {Object}
    */
  data() {
    return {
      /**
        * @Description
        * Code comment description
        * document
        * @Return
        */
      file: null,      // Source file
      imageSize: 0,    // File size, formatted for display
      uploadId: "",    // Upload id
      fullPath: "",    // Upload address
      uploadUrls: [],  // Collection of chunk upload addresses
      hash: "",        // File MD5
      /**
        * @Description
        * Code comment description
        * Sharded file
        * @Return
        */
      formDataList: [],
      /**
        * @Description
        * Code comment description
        * No shard uploaded
        * @Return
        */
      waitUpLoad: [],
      /**
        * @Description
        * Code comment description
        * Number of not uploaded
        * @Return
        */
      waitNum: NaN,
      /**
        * @Description
        * Code comment description
        * Upload size limit
        * @Return
        */
      limitFileSize: false,
      /**
        * @Description
        * Code comment description
        * Progress bar
        * @Return
        */
      percentage: 0,
      percentageFlage: false,
      /**
        * @Description
        * Code comment description
        * Read loading
        * @Return
        */
      loading: false,
      /**
        * @Description
        * Code comment description
        * Uploading
        * @Return
        */
      uploading: false,
      /**
        * @Description
        * Code comment description
        * Pause upload
        * @Return
        */
      stoped: false,

      /**
        * @Description
        * Code comment description
        * Uploaded file data
        * @Return
        */
      fileData: {
        id: "",
        path: "",
      },
    };
  },
  /**
    * Data monitoring
    * @type {Object}
    */
  watch: {
    // Monitor the upload progress
    waitNum: {
      handler(v, oldVal) {
        let p = Math.round(
          ((this.formDataList.length - v) / this.formDataList.length) * 100
        );
        this.percentage = p > 100 ? 100 : p;
      },
      deep: true,
    },
    show: {
      handler(v, oldVal) {
        if (!v) {
          this.file = null
        }
      },
      deep: true,
    },
  }, 
  /**
    * Method collection
    * @type {Object}
    */
  methods: {
    /**
      * Code comment description
      * Memory filter
      * @param {Number} bytes  File size in bytes
      * @return {String}  Human-readable size
      */
    ramFilter(bytes) {
      if (bytes === 0) return "0";
      var k = 1024;
      let sizes = ["B", "KB", "MB", "GB", "TB", "PB", "EB", "ZB", "YB"];
      let i = Math.floor(Math.log(bytes) / Math.log(k));
      return (bytes / Math.pow(k, i)).toFixed(2) + " " + sizes[i];
    },
    /**
      * Trigger upload file processing
      * @param e
      */
    async choseFile(e) {
      const fileInput = e.target.files[0]; // Get the current file
      if (!fileInput) {
        return;
      }
      this.imageSize = this.ramFilter(fileInput.size); // Record the file size
      const pattern = /[\u4e00-\u9fa5]/;
      if (pattern.test(fileInput?.name)) {
        this.$message.warning("Please don't upload an image file with a Chinese name!");
        return;
      }

      this.file = fileInput; // Kept for later use; could be passed as a function parameter instead
      this.percentage = 0;
      if (fileInput.size < this.maxFileSize) {
        this.loading = true;
        const FileSliceCap = this.sliceSize; // Number of bytes per chunk
        let start = 0; // Where the current chunk starts
        let end = 0;   // Where the current chunk ends
        let i = 0;     // Index of the current chunk
        this.formDataList = []; // Pool storing every chunk
        this.waitUpLoad = [];   // Pool storing the chunks still to be uploaded
        while (end < fileInput.size) {
          /**
            * When the end position exceeds the file size, this is the last chunk
            */
          start = i * FileSliceCap;     // Start position of this chunk
          end = (i + 1) * FileSliceCap; // End position of this chunk
          var fileSlice = fileInput.slice(start, end); // HTML5 slice: cut the file between the start and end bytes
          const formData = new FormData(); // FormData carrying the chunk information to the backend
          // formData.append('fileMd5', this.fileMd5) // MD5 of the whole file so the backend knows which file it is
          formData.append("file", fileSlice);          // The current chunk
          formData.append("chunkNumber", i);           // Which chunk this is
          formData.append("fileName", fileInput.name); // File name, used by the backend to name the chunks
          this.formDataList.push({ key: i, formData }); // Save the chunk into the pool prepared above
          i++;
        }
        // Compute the MD5 of the whole file
        this.computeFileMD5(fileInput, FileSliceCap).then(
          (res) => {
            if (res) {
              this.hash = res;
              // Check via the MD5 value whether the file was already uploaded
              getImagecheckFile({ fileCode: res }).then(
                (res2) => {
                  this.loading = false;
                  /**
                    * After slicing, ask the backend which chunks of this file it already holds.
                    * fileUrl: if an address comes back, the file already exists and is "transferred" instantly.
                    * shardingIndex: the chunks already uploaded, for resuming.
                    */
                  let { fileUrl, shardingIndex } = res2.data;
                  if (!fileUrl) {
                    /**
                      * Resumable upload: the default here assumes nothing has been
                      * uploaded yet. If your backend returns the uploaded chunk
                      * indices, handle the resume at this point.
                      */
                    this.waitUpLoad = this.formDataList; // Default: no chunk has been uploaded yet
                    this.waitNum = this.waitUpLoad.length; // The count drives the percentage display
                    this.getFile();
                  } else {
                    this.formDataList = [{ key: fileUrl }];
                    this.waitNum = 1;
                    this.waitUpLoad = []; // Instant transfer: no chunk needs uploading
                    this.$message.success("The file has been transferred instantly");
                    this.$emit("fileinput", {
                      url: fileUrl,
                      code: this.hash,
                      imageSize: this.imageSize,
                    });
                    this.waitNum = 0; // Drives the progress watcher to 100%
                    this.$refs.fileValue.value = '';
                    this.loading = false;
                    this.uploading = false;
                    return;
                  }
                },
                (err) => {
                  this.$message.error("Failed to obtain file data");
                  this.file = null;
                  this.$refs.fileValue.value = '';
                  this.loading = false;
                  this.uploading = false;
                }
              );
            }
          },
          (err) => {
            this.loading = false;
            this.uploading = false;
            this.$message.error("File reading failed");
          }
        );
      } else {
        this.$message.warning("Please upload files smaller than 100G");
        this.file = null;
        this.$refs.fileValue.value = '';
        this.loading = false;
      }
    },
    // Prepare to upload
    getFile() {
      /**
        * OK button handler
        */
      if (this.file === null) {
        this.$message.warning("Please upload the file first");
        return;
      }
      this.percentageFlage = this.percentage == 100;
      this.sliceFile(); // Upload the chunks
    },
    async sliceFile() {
      /**
        * The file is fully uploaded and a file path was already generated
        */
      if (this.fileData.path) {
        return;
      }
      /**
        * All chunks are uploaded, but the merge has not finished yet
        * and no file path was generated
        */
      if (this.percentageFlage && !this.fileData.path) {
        this.complete();
        return;
      }
      this.uploading = true;
      this.stoped = false;
      // Submit the chunks
      this.upLoadFileSlice();
    },
    async upLoadFileSlice() {
      if (this.stoped) {
        this.uploading = false;
        return;
      }
      /**
        * When the number of remaining chunks reaches 0, the merge endpoint is called
        */
      try {
        let suffix = /\.([0-9A-z]+)$/.exec(this.file.name)[1]; // File extension used as the file type
        let data = {
          bucketName: "static",                  // Bucket name
          contentType: this.file.type || suffix, // File type
          filename: this.file.name,              // File name
          partCount: this.formDataList.length,   // How many chunks the file was split into
        };
        // Get the chunk upload addresses, upload id and file address for this chunk count
        Imageinit(data).then((res) => {
          if (res.code == 200 && res.data) {
            this.uploadId = res.data.uploadId;     // Upload id of this file
            this.fullPath = res.data.fullPath;     // Address used when merging
            this.uploadUrls = res.data.uploadUrls; // One upload address per chunk
            if (this.uploadUrls && this.uploadUrls.length) {
              /**
                * For concurrent upload, use parallelRun instead:
                */
              // this.formDataList.forEach((item, i) => {
              //   item.formData.append("Upurl", this.uploadUrls[i]);
              // });
              // this.parallelRun(this.formDataList)
              // return;
              let i = 0; // Index of the chunk currently being sent
              /**
                * Merge the uploaded chunks
                */
              const complete = () => {
                Imagecomplete({
                  bucketName: "static",    // MinIO bucket name
                  fullPath: this.fullPath, // Upload address returned by Imageinit
                  uploadId: this.uploadId, // Upload id returned by Imageinit
                }).then(
                  (res) => {
                    if (res.data) {
                      this.uploading = false;
                      this.$emit("fileinput", {
                        url: "/*minIo bucket address*/" + this.fullPath, // Final file path for form submission
                        code: this.hash,           // MD5 value for verification
                        imageSize: this.imageSize, // File size
                        name: this.file.name,      // File name
                      });
                      this.$message({
                        type: "success",
                        message: "Image uploaded successfully",
                      });
                      this.$refs.fileValue.value = '';
                      this.loading = false;
                    } else {
                      this.file = null;
                      this.$refs.fileValue.value = '';
                      this.uploading = false;
                      this.$message.error("Merge failed");
                    }
                  },
                  (err) => {
                    this.file = null;
                    this.$refs.fileValue.value = '';
                    this.uploading = false;
                    this.$message.error("Merge failed");
                  }
                );
              };
              /**
                * Upload the chunks one by one
                */
              const send = async () => {
                if (!this.file) {
                  this.$refs.fileValue.value = '';
                  this.uploading = false;
                  return;
                }
                /**
                  * Nothing left to upload: request the merge
                  */
                if (i >= this.waitUpLoad.length) {
                  complete();
                  return;
                }
                if (this.waitNum == 0) return;
                /**
                  * PUT the chunk to its upload address via axios
                  */
                try {
                  axios
                    .put(
                      this.uploadUrls[i],
                      this.waitUpLoad[i].formData.get("file")
                    )
                    .then(
                      (result) => {
                        /* One chunk done: decrease the counter, then send the next one */
                        this.waitNum--;
                        i++;
                        send();
                      },
                      (err) => {
                        this.file = null;
                        this.$refs.fileValue.value = '';
                        this.uploading = false;
                        this.$message.error("Upload failed");
                      }
                    );
                } catch (error) {
                  this.file = null;
                  this.$refs.fileValue.value = '';
                  this.uploading = false;
                  this.$message.error("Upload failed");
                }
              };
              send(); // Start sending
            }
          }
        });
      } catch (error) {
        this.file = null;
        this.$refs.fileValue.value = '';
        this.uploading = false;
        this.$message.error("Upload failed");
      }
    },
   
    inputChange() {
      this.$refs["fileValue"].dispatchEvent(new MouseEvent("click"));
    },   
    /**
      * Concurrent chunk upload
      * requestList: the upload list; max: how many uploads run at the same time
      */
    async parallelRun(requestList, max = 10) {
      const requestSliceList = [];
      for (let i = 0; i < requestList.length; i += max) {
        requestSliceList.push(requestList.slice(i, i + max));
      }

      for (let i = 0; i < requestSliceList.length; i++) {
        const group = requestSliceList[i];
        try {
          // Each chunk carries its own upload address in the "Upurl" field
          const res = await Promise.all(group.map(fn =>
            axios.put(fn.formData.get("Upurl"), fn.formData.get("file"))
          ));
          group.forEach(item => {
            this.waitNum--;
          });
          console.log('The interface returns value:', res);
          if (this.waitNum === 0) {
            // All chunks sent: merge them
            this.complete();
            return;
          }
        } catch (error) {
          console.log(error);
        }
      }
    },
    complete() {
      Imagecomplete({
        bucketName: "static",    // The bucket
        fullPath: this.fullPath, // The path inside the bucket
        uploadId: this.uploadId, // The upload id
      }).then(
        (res) => {
          if (res.data) {
            this.uploading = false;
            this.$emit("fileinput", {
              url: "/*minIo bucket address*/" + this.fullPath, // e.g. 'https://asgad/fileinput' + '/1000/20240701/'
              code: this.hash,           // File MD5 value
              imageSize: this.imageSize, // File size
            });
            this.$message({
              type: "success",
              message: "Image uploaded successfully",
            });
            this.$refs.fileValue.value = '';
            this.loading = false;
          } else {
            this.file = null;
            this.$refs.fileValue.value = '';
            this.uploading = false;
            this.$message.error("Merge failed");
          }
        },
        (err) => {
          this.file = null;
          this.$refs.fileValue.value = '';
          this.uploading = false;
          this.$message.error("Merge failed");
        }
      );
    },
    /**
      * Get the MD5 value of a large file
      * @param {*} file
      * @param {*} n Chunk size in bytes
      */
    computeFileMD5(file, n = 50 * 1024 * 1024) {
      return new Promise((resolve, reject) => {
        let blobSlice =
          File.prototype.slice ||
          File.prototype.mozSlice ||
          File.prototype.webkitSlice;
        let chunkSize = n; // 50MB chunks by default
        let chunks = Math.ceil(file.size / chunkSize); // Number of chunks
        let currentChunk = 0;
        let spark = new SparkMD5.ArrayBuffer();
        let fileReader = new FileReader();
        let that = this;
        fileReader.onload = function (e) {
          spark.append(e.target.result); // Feed the chunk into the hash
          currentChunk++;
          if (currentChunk < chunks && that.file) {
            loadNext();
          } else {
            let md5 = spark.end(); // The final MD5 value
            spark.destroy();       // Release the cache
            if (currentChunk === chunks) {
              resolve(md5);
            } else {
              reject(e);
            }
          }
        };

        fileReader.onerror = function (e) {
          reject(e);
        };

        function loadNext() {
          let start = currentChunk * chunkSize;
          let end =
            start + chunkSize >= file.size ? file.size : start + chunkSize;
          fileReader.readAsArrayBuffer(blobSlice.call(file, start, end));
        }

        loadNext();
      });
    },
  },
};
</script>

Page style

Modify by yourself

<!-- Current component page style definition -->
<style lang="scss" scoped>
.file-choose-btn {
  overflow: hidden;
  position: relative;

  input {
    position: absolute;
    font-size: 100px;
    right: 0;
    top: 0;
    opacity: 0;
    cursor: pointer;
  }
}

.tips {
  margin-top: 30px;
  font-size: 14px;
  font-weight: 400;
  color: #606266;
}

.file-list {
  margin-top: 10px;
}

.list-item {
  display: block;
  margin-right: 10px;
  color: #606266;
  line-height: 25px;
  margin-bottom: 5px;
  width: 90%;

  .percentage {
    float: right;
  }
}

.list-enter-active,
.list-leave-active {
  transition: all 1s;
}

.list-enter,
.list-leave-to

/* .list-leave-active for below version 2.1.8 */
  {
  opacity: 0;
  transform: translateY(-30px);
}
</style>

Summary

This article showed how to implement front-end chunked upload of large files to MinIO. For more on front-end chunked upload and MinIO, please search my previous articles or continue browsing the related articles below. I hope you will keep supporting me!