
Example of React+Koa implementing file upload

Background

Recently, while working on my graduation project, I needed several file upload features, including ordinary file upload, large file upload, and resumable upload (breakpoint resume).

Server-side dependencies

  • koa (framework)
  • koa-router (Koa routing)
  • koa-body (Koa body-parsing middleware, used here to parse multipart POST content)
  • koa-static-cache (Koa static resource middleware, used to serve static resource requests)
  • koa-bodyparser (parses the request body, e.g. the JSON body of the merge request)
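All of the server snippets below live in one Koa application file. For context, a minimal bootstrap wiring these dependencies together might look like the following sketch (the port, the file layout, and the fs-extra dependency behind the fse helpers used in the chunk routes are my assumptions, not part of the original):

const Koa = require('koa');
const Router = require('koa-router');
const KoaBody = require('koa-body');
const KoaStaticCache = require('koa-static-cache');
const bodyParser = require('koa-bodyparser');
const path = require('path');
const fse = require('fs-extra'); // assumed: provides the fse.* helpers used in the chunk routes

const app = new Koa();
const router = new Router();

// ...CORS middleware, static cache, body parser, and routes from the sections below...

app.use(router.routes()).use(router.allowedMethods());
app.listen(3000);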

Configuring cross-domain (CORS) access in the backend

app.use(async (ctx, next) => {
 ctx.set('Access-Control-Allow-Origin', '*');
 ctx.set(
  'Access-Control-Allow-Headers',
  'Content-Type, Content-Length, Authorization, Accept, X-Requested-With, yourHeaderField',
 );
 ctx.set('Access-Control-Allow-Methods', 'PUT, POST, GET, DELETE, OPTIONS');
 if (ctx.method == 'OPTIONS') {
  // Answer preflight requests directly
  ctx.status = 200;
 } else {
  await next();
 }
});

Configuring static resource access in the backend

// Static resource handling
app.use(
 KoaStaticCache('./public', {
  prefix: '/public',
  dynamic: true,
  gzip: true,
 }),
);

Configuring the request body parser in the backend with koa-bodyparser

const bodyParser = require('koa-bodyparser');
app.use(bodyParser());

Front-end dependencies

  • React
  • Antd
  • axios

Normal file upload

Back end

The backend only needs to configure options for koa-body and pass it in as middleware: router.post('url', middleware, callback).

Backend code

// Upload configuration
const uploadOptions = {
 // Support multipart/form-data
 multipart: true,
 formidable: {
  // Upload straight into the public folder for easy access; remember the trailing slash
  uploadDir: path.join(__dirname, '../../public/'),
  // Keep the file extension
  keepExtensions: true,
 },
};
router.post('/upload', KoaBody(uploadOptions), (ctx, next) => {
 // Get the uploaded file
 const file = ctx.request.files.file;
 const fileName = file.path.split('/')[file.path.split('/').length - 1];
 ctx.body = {
  code: 0,
  data: {
   url: `public/${fileName}`,
  },
  message: 'success',
 };
});

Front end

Here I pass the file using FormData. The front end opens the file selector through <input type='file'/>, obtains the selected file in the onChange event via e.target.files[0], then creates a FormData object and appends the file to it with formData.append('file', targetFile).

Front-end code

  const Upload = () => {
    const [url, setUrl] = useState<string>('');
    const handleClickUpload = () => {
      const fileLoader = document.querySelector('#btnFile') as HTMLInputElement;
      if (isNil(fileLoader)) {
        return;
      }
      fileLoader.click();
    };
    const handleUpload = async (e: any) => {
      // Get the uploaded file
      const file = e.target.files[0];
      const formData = new FormData();
      formData.append('file', file);
      // Upload the file
      const { data } = await uploadSmallFile(formData);
      console.log(data);
      setUrl(`${baseURL}${data.url}`);
    };
    return (
      <div>
        <input id="btnFile" type="file" onChange={handleUpload} style={{ display: 'none' }} />
        <Button onClick={handleClickUpload}>Upload small files</Button>
        <img src={url} />
      </div>
    );
  };
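The uploadSmallFile helper and baseURL used above are not shown in the original. A minimal axios-based sketch, with the helper name kept and the server address assumed, could be:

import axios from 'axios';

// Assumed server address; adjust to your deployment
export const baseURL = 'http://localhost:3000';

// POST the FormData to the /upload route defined on the backend;
// resolves to the response body { code, data: { url }, message }
export const uploadSmallFile = (formData: FormData) =>
  axios.post(`${baseURL}/upload`, formData).then((res) => res.data);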

Other optional methods

  • input + form: set the form's action to the backend route, with enctype="multipart/form-data" and method="post" (see the sketch after this list)
  • Use FileReader to read the file data and upload it; browser compatibility is not particularly good
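A minimal sketch of the form-based approach, assuming the /upload route above (the component name is hypothetical):

const NativeFormUpload = () => (
  // The browser submits the form itself; no axios involved
  <form action={`${baseURL}/upload`} method="post" encType="multipart/form-data">
    {/* the field name must match what the backend reads as ctx.request.files.file */}
    <input type="file" name="file" />
    <button type="submit">Upload</button>
  </form>
);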

Upload large files

When uploading a file, the request may time out because the file is too large. In that case you can upload in chunks: split the file into small pieces and send them to the server, with each piece identifying which file it belongs to. After all the pieces have arrived, the backend merges them to complete the transfer.

Front end

  • Getting the file is the same as before and is not repeated here
  • Set a default chunk size, slice the file, name each slice `filename.index.ext`, and request the merge once the whole file has been sent
  const handleUploadLarge = async (e: any) => {
    // Get the uploaded file
    const file = e.target.files[0];
    // Upload the file in chunks
    await uploadEveryChunk(file, 0);
  };
  const uploadEveryChunk = (
    file: File,
    index: number,
  ) => {
    console.log(index);
    const chunkSize = 512; // Slice width
    // [File name, file suffix]
    const [fname, fext] = file.name.split('.');
    // Starting byte of the current slice
    const start = index * chunkSize;
    if (start > file.size) {
      // Past the end of the file: stop the recursive upload and merge
      return mergeLargeFile(file.name);
    }
    const blob = file.slice(start, start + chunkSize);
    // Name each piece
    const blobName = `${fname}.${index}.${fext}`;
    const blobFile = new File([blob], blobName);
    const formData = new FormData();
    formData.append('file', blobFile);
    uploadLargeFile(formData).then((res) => {
      // Recursive chunk upload
      uploadEveryChunk(file, ++index);
    });
  };
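uploadLargeFile and mergeLargeFile are request helpers that the original does not show. A plausible axios sketch targeting the two backend routes described below (helper names kept, baseURL assumed as before):

// POST one chunk, wrapped in FormData, to the chunk-upload route
const uploadLargeFile = (formData: FormData) =>
  axios.post(`${baseURL}/upload_chunk`, formData);

// Ask the server to merge all chunks of the named file
const mergeLargeFile = (fileName: string) =>
  axios.post(`${baseURL}/merge_chunk`, { fileName });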

Back end

The backend needs to provide two interfaces

Upload

Store each uploaded chunk in a folder named after the file so the chunks are easier to merge later.

const uploadStencilPreviewOptions = {
 multipart: true,
 formidable: {
  // Chunk storage location
  uploadDir: path.join(__dirname, '../../temp/'),
  keepExtensions: true,
  maxFieldsSize: 2 * 1024 * 1024,
 },
};

router.post('/upload_chunk', KoaBody(uploadStencilPreviewOptions), async (ctx) => {
 try {
  const file = ctx.request.files.file;
  // [ name, index, ext ] - split the file name
  const fileNameArr = file.name.split('.');

  const UPLOAD_DIR = path.join(__dirname, '../../temp');
  // Directory holding this file's chunks
  const chunkDir = `${UPLOAD_DIR}/${fileNameArr[0]}`;
  if (!fse.existsSync(chunkDir)) {
   // Create a temporary directory for this large file if it does not exist yet
   await fse.mkdirs(chunkDir);
  }
  // Original file name.index - the path and name of each chunk
  const dPath = path.join(chunkDir, fileNameArr[1]);

  // Move the chunk from temp into this upload's temporary directory
  await fse.move(file.path, dPath, { overwrite: true });
  ctx.body = {
   code: 0,
   message: 'File upload successfully',
  };
 } catch (e) {
  ctx.body = {
   code: -1,
   message: `File upload failed:${e.toString()}`,
  };
 }
});

Merge

For the merge request sent by the front end, take the fileName it carries, find the folder of that name in the temp directory where the large file's chunks are cached, read the chunks in index order, and merge them with appendFileSync(path, data) (appending the writes in order merges the file). Then delete the temporary folder to free up space.

router.post('/merge_chunk', async (ctx) => {
 try {
  const { fileName } = ctx.request.body;
  const fname = fileName.split('.')[0];
  const TEMP_DIR = path.join(__dirname, '../../temp');
  const static_preview_url = '/public/previews';
  const STORAGE_DIR = path.join(__dirname, `../..${static_preview_url}`);
  const chunkDir = path.join(TEMP_DIR, fname);
  const chunks = await fse.readdir(chunkDir);
  chunks
   .sort((a, b) => a - b)
   .map((chunkPath) => {
    // Merge the files by appending each chunk in index order
    fse.appendFileSync(
     path.join(STORAGE_DIR, fileName),
     fse.readFileSync(`${chunkDir}/${chunkPath}`),
    );
   });
  // Delete the temporary folder
  fse.removeSync(chunkDir);
  // The URL for accessing the uploaded file
  const url = `http://${ctx.host}${static_preview_url}/${fileName}`;
  ctx.body = {
   code: 0,
   data: { url },
   message: 'success',
  };
 } catch (e) {
  ctx.body = { code: -1, message: `Merge failed:${e.toString()}` };
 }
});

Resumable upload (breakpoint resume)

If the page is refreshed or a temporary failure interrupts the transfer while a large file is uploading, having to start over is a very bad user experience. So we record the position where the transfer stopped and resume from there next time. Here I read and write that record in localStorage:

  const handleUploadLarge = async (e: any) => {
    // Get the uploaded file
    const file = e.target.files[0];
    const record = JSON.parse(localStorage.getItem('uploadRecord') as any);
    if (!isNil(record)) {
      // For simplicity of the demo, hash collisions are ignored and the file name
      // decides whether two files are the same. For large files you could hash
      // (one chunk of the file + the file size) as a sampled identity check
      if (record.name === file.name) {
        return await uploadEveryChunk(file, record.index);
      }
    }
    // Chunk the file from the beginning
    await uploadEveryChunk(file, 0);
  };
  const uploadEveryChunk = (
    file: File,
    index: number,
  ) => {
    const chunkSize = 512; // Slice width
    // [File name, file suffix]
    const [fname, fext] = file.name.split('.');
    // Starting byte of the current slice
    const start = index * chunkSize;
    if (start > file.size) {
      // Past the end of the file: stop the recursive upload and merge
      return mergeLargeFile(file.name).then(() => {
        // Delete the record once the merge succeeds
        localStorage.removeItem('uploadRecord');
      });
    }
    const blob = file.slice(start, start + chunkSize);
    // Name each piece
    const blobName = `${fname}.${index}.${fext}`;
    const blobFile = new File([blob], blobName);
    const formData = new FormData();
    formData.append('file', blobFile);
    uploadLargeFile(formData).then((res) => {
      // Record the position after each chunk is transferred successfully
      localStorage.setItem('uploadRecord', JSON.stringify({
        name: file.name,
        index: index + 1,
      }));
      // Recursive chunk upload
      uploadEveryChunk(file, ++index);
    });
  };

Determining whether two files are the same

You can compute the file's MD5 or another hash. When the file is very large, hashing the whole thing can take a long time, so you can take one chunk of the file plus the file size as a local sample and compare that instead. Below is code that computes the MD5 with the crypto-js library, reading the data with FileReader.

   // Compute the md5 to see if the file already exists
   const sign = file.slice(0, 512);
   const signFile = new File(
    [sign, (file.size as unknown) as BlobPart],
    '',
   );
   const reader = new FileReader();
   reader.onload = function (event) {
    const binary = event?.target?.result;
    const md5 = binary && CryptoJs.MD5(binary as string).toString();
    const record = localStorage.getItem('upLoadMD5');
    if (isNil(md5)) {
     const file = blobToFile(blob, `${getRandomFileName()}.png`);
     return uploadPreview(file, 0, md5);
    }
    const file = blobToFile(blob, `${md5}.png`);
    if (isNil(record)) {
     // Transfer from the beginning and record this md5
     return uploadPreview(file, 0, md5);
    }
    const recordObj = JSON.parse(record);
    if (recordObj.md5 == md5) {
     // Resume the transfer from the recorded position (breakpoint resume)
     return uploadPreview(file, recordObj.index, md5);
    }
    return uploadPreview(file, 0, md5);
   };
   reader.readAsBinaryString(signFile);
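blobToFile, getRandomFileName, and uploadPreview above come from the surrounding project and are not defined in the original snippet; uploadPreview is presumably this project's chunk-upload entry, analogous to uploadEveryChunk earlier. Hypothetical sketches of the two small helpers:

// Hypothetical helper: wrap a Blob into a named File
const blobToFile = (blob: Blob, fileName: string): File =>
  new File([blob], fileName, { type: blob.type });

// Hypothetical helper: random fallback name for when no md5 is available
const getRandomFileName = (): string =>
  `${Date.now()}-${Math.random().toString(36).slice(2, 10)}`;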

Summary

I did not know much about file uploading before. Building this feature gave me a preliminary understanding of the front-end and back-end code involved. These methods are only some of the options, far from all of them, and I hope to keep improving them in future study.
This is the first blog I have written on Nuggets. After starting my internship I realized how limited my knowledge was, and I hope that by keeping up a blog I can organize my knowledge system and record my learning. I also hope the experts reading this will point out any problems they find. Thanks!
