Simple reading
import (
	"bufio"
	"errors"
	"fmt"
	"io"
	"os"
)

func ReadFile(filePath string) (chunks []byte, err error) {
	f, err := os.Open(filePath)
	if err != nil {
		return
	}
	defer f.Close()
	reader := bufio.NewReader(f)
	for {
		dataByte := make([]byte, 5*1024)
		var n int
		n, err = reader.Read(dataByte)
		if n > 0 { // append first: a Read may return data together with an error
			chunks = append(chunks, dataByte[:n]...)
			fmt.Printf("file: %s, len(chunks):%v\n", filePath, len(chunks))
		}
		if err != nil || n == 0 {
			break
		}
	}
	if errors.Is(err, io.EOF) { // EOF simply means the whole file was read
		err = nil
		fmt.Printf("read %s success, len=%v\n", filePath, len(chunks))
		return
	}
	fmt.Println("readFile over")
	return
}
You can see that the larger the file is, the larger chunks grows, because the entire content accumulates in memory. This approach is therefore only suitable for reasonably small files.
Read & Shard Write
Read file stream + shard write (version 1)
import (
	"bufio"
	"fmt"
	"io"
	"os"
)

var bufLen = 2 * 1024 * 1024

func DownLoadFileShardByFilePath1(writerFilePath string, body io.Reader) (err error) {
	f, err := os.OpenFile(writerFilePath, os.O_CREATE|os.O_APPEND|os.O_WRONLY, 0666)
	if err != nil {
		fmt.Println("open err:" + err.Error())
		return
	}
	defer f.Close() // only defer the Close once the open has succeeded
	writer := bufio.NewWriter(f)
	bs := make([]byte, bufLen)
	for {
		var read int
		read, err = body.Read(bs)
		if read > 0 { // write first: a Read may return data together with io.EOF
			if _, werr := writer.Write(bs[:read]); werr != nil {
				fmt.Println("write err:" + werr.Error())
				err = werr
				break
			}
		}
		if err != nil || read == 0 {
			break
		}
	}
	if err == io.EOF {
		err = nil
	}
	if err != nil {
		return
	}
	if err = writer.Flush(); err != nil {
		fmt.Println("writer flush err: ", err.Error())
		return
	}
	fmt.Println("downLoad over")
	return
}
Read file stream + shard write (version 2)
import (
	"bufio"
	"fmt"
	"io"
	"os"
)

var bufLen = 2 * 1024 * 1024

func DownLoadFileShard(writerFilePath string, body io.Reader) (err error) {
	f, err := os.OpenFile(writerFilePath, os.O_CREATE|os.O_APPEND|os.O_WRONLY, 0666)
	if err != nil {
		fmt.Println("open err:" + err.Error())
		return
	}
	defer f.Close()
	bs := make([]byte, bufLen)
	writer := bufio.NewWriter(f)
	for {
		switch read, readErr := body.Read(bs); true {
		case read > 0: // write first: a Read may return data together with io.EOF
			if _, err = writer.Write(bs[:read]); err != nil {
				fmt.Println("write err:" + err.Error())
				return
			}
			if readErr == io.EOF {
				fmt.Println("downLoad over")
				return writer.Flush()
			}
		case readErr == io.EOF:
			fmt.Println("downLoad over")
			return writer.Flush()
		case readErr != nil:
			fmt.Println("read err: ", readErr.Error())
			err = readErr
			return
		}
	}
}
Read file stream + concurrent shard writing
import (
	"bufio"
	"fmt"
	"io"
	"os"
)

type FileShard struct {
	Data []byte
	Err  error
	Code int // 0 = normal, -1 = failed
}

var bufLen = 2 * 1024 * 1024

func DownLoadFileShardCon(writerFilePath string, body io.Reader) (err error) {
	writerFile, err := os.OpenFile(writerFilePath, os.O_CREATE|os.O_APPEND|os.O_WRONLY, 0666)
	if err != nil {
		fmt.Println("open err:" + err.Error())
		return
	}
	defer writerFile.Close()
	ch, complete := make(chan *FileShard), make(chan struct{})
	// consumer: receive shards from ch and write them to the file
	go func() {
		writer := bufio.NewWriter(writerFile)
		for data := range ch { // the loop ends when ch is closed
			if data.Code != 0 {
				err = data.Err
				break
			}
			if _, err = writer.Write(data.Data); err != nil {
				fmt.Println("write err:", err.Error())
				break
			}
		}
		if err == nil {
			err = writer.Flush()
		}
		close(complete)
	}()
	// producer: read shards from body and send them on ch
	go func() {
		bs := make([]byte, bufLen)
		for {
			switch read, readErr := body.Read(bs); true {
			case read > 0:
				data := make([]byte, read)
				copy(data, bs[:read]) // copy the bytes: bs is reused by the next Read
				ch <- &FileShard{Data: data, Code: 0}
				if readErr == io.EOF {
					close(ch)
					return
				}
			case readErr == io.EOF:
				close(ch)
				return
			case readErr != nil:
				ch <- &FileShard{Code: -1, Err: readErr}
				close(ch)
				return
			}
		}
	}()
	// note: if the consumer stops early on a write error the producer can block
	// on its send; a done channel would fix this in production code
	<-complete // wait for the consumer to finish
	fmt.Println("downLoad over")
	return
}
There are many ways to structure the concurrency, depending on how you like to write your code; all roads lead to Rome!
A better copy method: io.Copy
It should be noted that the io package offers another very good method, io.Copy(), which streams data in fixed-size chunks internally, so it avoids holding a large file in memory all at once. It can replace the main loop above, for example:
import (
	"bufio"
	"io"
	"os"
)

func IOCopyExample(writerFilePath string, body io.Reader) (err error) {
	f, err := os.OpenFile(writerFilePath, os.O_CREATE|os.O_TRUNC|os.O_WRONLY, 0666)
	if err != nil {
		return
	}
	defer f.Close()
	writer := bufio.NewWriter(f)
	_, err = io.Copy(writer, body)
	if flushErr := writer.Flush(); err == nil { // don't lose a flush error
		err = flushErr
	}
	return
}
HTTP concurrency and shard download
This is implemented using the HTTP Range request header:
import (
	"fmt"
	"io"
	"net/http"
	"os"
	"strconv"
	"sync"
)

func DownloadFileRange(url, writeFile string) error {
	f, err := os.OpenFile(writeFile, os.O_CREATE|os.O_TRUNC|os.O_WRONLY, 0666)
	if err != nil {
		return err
	}
	defer f.Close()
	resp, err := http.Head(url) // only the headers are needed to learn the size
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	size, err := strconv.Atoi(resp.Header.Get("Content-Length"))
	if err != nil {
		return err
	}
	con := getSize(size) // getSize calculates the number of concurrent requests; specify it in your own way
	var wg sync.WaitGroup
	for i := 0; i < con; i++ {
		start := int64(i) * int64(size/con)
		end := start + int64(size/con) - 1
		if i == con-1 {
			end = int64(size) - 1 // the last shard takes any remainder
		}
		wg.Add(1)
		go func(n int, offset, end int64) {
			defer wg.Done()
			req, err := http.NewRequest(http.MethodGet, url, nil)
			if err != nil {
				return
			}
			req.Header.Set("Range", fmt.Sprintf("bytes=%v-%v", offset, end))
			resp, err := http.DefaultClient.Do(req)
			if err != nil {
				return
			}
			defer resp.Body.Close()
			data, err := io.ReadAll(resp.Body)
			if err != nil {
				// log
				return
			}
			// WriteAt is safe for concurrent use on *os.File;
			// Seek on a shared handle is not
			if _, err = f.WriteAt(data, offset); err != nil {
				// log
			}
		}(i, start, end)
	}
	wg.Wait() // wait for every shard before returning (and before f is closed)
	return nil
}
This concludes the article on implementing concurrent file reading and writing, shard writing, and shard downloading in Go. I hope everyone finds the examples helpful!