1. WaitGroup in Go language concurrency
Go's concurrency building blocks are the goroutine and the channel: one provides concurrency, the other communication. An unbuffered channel also synchronizes its sender and receiver, and in addition the sync package provides several mechanisms for synchronizing goroutines, chiefly WaitGroup.
WaitGroup is used to wait for a collection of goroutines to finish. The main goroutine calls Add to set the number of goroutines to wait for; each goroutine calls Done() when it finishes, and main calls Wait() to block until they have all completed.
The main data structures and operations are as follows:
type WaitGroup struct {
    // contains filtered or unexported fields
}

// Add a waiting signal (delta may be negative)
func (wg *WaitGroup) Add(delta int)

// Release a waiting signal
func (wg *WaitGroup) Done()

// Wait for the counter to reach zero
func (wg *WaitGroup) Wait()
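Before the full example, here is a minimal, self-contained sketch of the Add/Done/Wait pattern. The `squares` helper is introduced here for illustration only; it is not from the article.

```go
package main

import (
	"fmt"
	"sync"
)

// squares computes n squares concurrently, using a WaitGroup
// to wait for every goroutine before returning.
func squares(n int) []int {
	var wg sync.WaitGroup
	results := make([]int, n)
	for i := 0; i < n; i++ {
		wg.Add(1) // register one unit of work before the goroutine starts
		go func(i int) {
			defer wg.Done() // signal completion on exit
			results[i] = i * i
		}(i)
	}
	wg.Wait() // block until every Done has been called
	return results
}

func main() {
	fmt.Println(squares(4)) // [0 1 4 9]
}
```

Note that Add is called before the goroutine is launched; calling it inside the goroutine would race with Wait.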
The following program demonstrates how to use multiple goroutines to work together.
package main

import (
    "net/http"
    "sync"
)

var wg sync.WaitGroup

var urls = []string{
    "/",
    "/",
    "/",
}

func main() {
    for _, url := range urls {
        // Each URL gets its own goroutine; add 1 to wg for each one
        wg.Add(1)
        // Start a goroutine to fetch the URL
        go func(url string) {
            // When the goroutine finishes, decrement wg's counter by 1;
            // wg.Done() is equivalent to wg.Add(-1)
            defer wg.Done()
            // Send an HTTP GET request and print the HTTP status
            resp, err := http.Get(url)
            if err == nil {
                println(resp.Status)
            }
        }(url)
    }
    // Wait for all HTTP fetches to complete
    wg.Wait()
}
Output:
501 Not Implemented
200 OK
200 OK
1.1 No locking
Using sleep calls to wait for other goroutines is not elegant; with WaitGroup, the main goroutine resumes as soon as every goroutine has exited, without having to guess how long to sleep.
package main

import (
    "fmt"
    "sync"
)

var wg sync.WaitGroup

func main() {
    // Starting one goroutine registers +1; starting ten registers +10
    wg.Add(10)
    var count = 0
    for i := 0; i < 10; i++ {
        go func() {
            // Decrement the counter when the goroutine exits
            defer wg.Done()
            for j := 0; j < 100000; j++ {
                count++
            }
        }()
    }
    // Wait for all registered goroutines to finish
    wg.Wait()
    fmt.Println(count) // e.g. 346730 (varies from run to run)
}
Ten goroutines each increment the counter 100,000 times, so ideally count would be 1,000,000; but without a lock the actual result is far smaller and differs between runs. Why? Because count++ is not an atomic operation: several goroutines can read the same value at the same time, each increment it, and write the same result back.
1.2 Mutex
1.2.1 Direct use of locks
The shared resource is the count variable, and the critical section is count++. By locking before entering the critical section, other goroutines block until the holder leaves the critical section and unlocks, which resolves the data race. Go provides this capability through sync.Mutex.
package main

import (
    "fmt"
    "sync"
)

var wg sync.WaitGroup

func main() {
    // Define the lock
    var mu sync.Mutex
    wg.Add(10)
    var count = 0
    for i := 0; i < 10; i++ {
        go func() {
            defer wg.Done()
            for j := 0; j < 100000; j++ {
                // Acquire the lock
                mu.Lock()
                count++
                // Release the lock
                mu.Unlock()
            }
        }()
    }
    wg.Wait()
    fmt.Println(count) // 1000000
}
1.2.2 Using a lock as an embedded field
Embedding a Mutex in a struct lets you call Lock/Unlock directly on that struct.
package main

import (
    "fmt"
    "sync"
)

var wg sync.WaitGroup

type Counter struct {
    sync.Mutex
    Count uint64
}

func main() {
    var counter Counter
    wg.Add(10)
    for i := 0; i < 10; i++ {
        go func() {
            defer wg.Done()
            for j := 0; j < 100000; j++ {
                counter.Lock()
                counter.Count++
                counter.Unlock()
            }
        }()
    }
    wg.Wait()
    fmt.Println(counter.Count) // 1000000
}
1.2.3 Encapsulating lock/unlock into methods
package main

import (
    "fmt"
    "sync"
)

var wg sync.WaitGroup

type Counter struct {
    mu    sync.Mutex
    count uint64
}

func (c *Counter) Incr() {
    c.mu.Lock()
    c.count++
    c.mu.Unlock()
}

func (c *Counter) Count() uint64 {
    c.mu.Lock()
    defer c.mu.Unlock()
    return c.count
}

func main() {
    var counter Counter
    // Starting one goroutine registers +1; starting ten registers +10
    wg.Add(10)
    for i := 0; i < 10; i++ {
        go func() {
            // Decrement the counter when the goroutine exits
            defer wg.Done()
            for j := 0; j < 100000; j++ {
                counter.Incr()
            }
        }()
    }
    // Wait for all registered goroutines to finish
    wg.Wait()
    fmt.Println(counter.Count()) // 1000000
}
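The Mutex examples above all guard a single shared integer. For a counter that simple, the standard library's sync/atomic package is a lock-free alternative worth knowing; the following is a sketch, and `atomicCount` is a helper name introduced here, not from the article.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// atomicCount runs `workers` goroutines that each increment a shared
// counter `incs` times, using an atomic add instead of a Mutex.
func atomicCount(workers, incs int) uint64 {
	var wg sync.WaitGroup
	var count uint64
	wg.Add(workers)
	for i := 0; i < workers; i++ {
		go func() {
			defer wg.Done()
			for j := 0; j < incs; j++ {
				// One atomic read-modify-write; no explicit lock needed
				atomic.AddUint64(&count, 1)
			}
		}()
	}
	wg.Wait()
	return count
}

func main() {
	fmt.Println(atomicCount(10, 100000)) // 1000000
}
```

Atomic operations suit a single word of shared state; once the critical section touches more than one variable, a Mutex as shown above remains the right tool.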
This concludes this article's detailed explanation of WaitGroup usage in Go concurrency. For more on Go's WaitGroup, see the related articles below.