SoFunction
Updated on 2025-03-05

Detailed explanation of how to avoid race conditions and data races in Go concurrent programming

Scenarios where race conditions and data races occur

  • Multiple goroutines read and write the same variable. For example, several goroutines increment the same counter variable simultaneously.
  • Multiple goroutines read and write the same array, slice, or map. For example, several goroutines add or remove elements of the same slice at the same time.
  • Multiple goroutines read and write the same file. For example, several goroutines write data to the same file at the same time.
  • Multiple goroutines read and write the same network connection. For example, several goroutines write data to the same TCP connection at the same time.
  • Multiple goroutines operate on the same channel in conflicting ways. Plain sends and receives on a channel are safe by design, but, for example, closing a channel while another goroutine is still sending on it causes a panic.

So, the key point to understand is this: whenever multiple goroutines access a shared resource concurrently without synchronization, race conditions and data races may occur.
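As a minimal sketch of the lost-update problem described above (the function name racyCount is mine, chosen for illustration): several goroutines increment a shared counter with no synchronization, so increments get lost and the final value is unpredictable. Running such a program with Go's race detector (`go run -race`) should report the race.

```go
package main

import (
	"fmt"
	"sync"
)

// racyCount increments a shared counter from several goroutines without
// any synchronization. The read-modify-write on count is a data race,
// so the final value is usually smaller than goroutines*increments.
func racyCount(goroutines, increments int) int {
	var count int
	var wg sync.WaitGroup
	for i := 0; i < goroutines; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < increments; j++ {
				count++ // unsynchronized: data race
			}
		}()
	}
	wg.Wait()
	return count
}

func main() {
	// Expected 10*1000 = 10000, but lost updates often make it smaller.
	fmt.Println("final count:", racyCount(10, 1000))
}
```

The sections below show the synchronized alternatives that make this count deterministic.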

How to avoid the pitfalls

We now know that when writing concurrent programs, failing to think through how shared resources are accessed and synchronized leads to race conditions and data races. So how do we avoid these traps? The main options are listed below:

  • Mutex: use Mutex from the sync package to lock a shared resource so that only one goroutine can access it at a time.
  • Read-write lock: use RWMutex from the sync package to allow multiple goroutines to read a shared resource simultaneously, while only one goroutine at a time may write to it.
  • Atomic operations: use the operations provided by the sync/atomic package to read and modify shared variables atomically, avoiding data races on them.
  • Channel: use Go's channel mechanism to pass data between goroutines, avoiding direct access to shared resources.
  • WaitGroup: use WaitGroup from the sync package to wait for multiple goroutines to finish before continuing, coordinating the order of execution.
  • Context: use Context from the context package to pass request-scoped information and control the life cycle of multiple goroutines, so that a single blocked goroutine does not stall the whole program.

Practical scenarios

1. Mutex lock

For example, in a web server, multiple goroutines need to update the same global counter variable in order to record the number of site visits.

In this case, if access to the counter is not synchronized, race conditions and data races will occur. Suppose goroutines A and B both read the counter's value N at the same time, each adds 1, and each writes the result back: the counter ends up at N+1 instead of N+2. That is a race condition.

To solve this problem, a lock can be used to make access to the counter mutually exclusive. In Go, sync.Mutex protects shared resources: when a goroutine needs to access the resource, it first acquires the lock, then performs the operation, and finally releases the lock. This ensures that only one goroutine accesses the shared resource at a time, avoiding race conditions and data races.

Look at the following code:

package main

import (
	"fmt"
	"sync"
)

var count int
var mutex sync.Mutex

func main() {
	var wg sync.WaitGroup
	// Start 10 goroutines that concurrently increment the counter
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			// Acquire the lock
			mutex.Lock()
			// Access the counter and increment it
			count++
			// Release the lock
			mutex.Unlock()
			wg.Done()
		}()
	}
	// Wait for all goroutines to finish
	wg.Wait()
	// Print the final value of the counter
	fmt.Println(count)
}

In the code above, a mutex protects access to the count variable. Each goroutine acquires the lock before touching the counter, increments it, and then releases the lock. This keeps the counter consistent and correct, avoiding race conditions and data races.

The idea is to call wg.Add(1) before starting each goroutine to increase the wait group's counter, have each goroutine call wg.Done() when it finishes, and call wg.Wait() in main to block until they have all completed. Finally, the counter's final value is printed.

Note that this scenario and code example only demonstrate how to use a mutex to protect a shared resource; real situations are usually more complicated. For instance, if a lock is acquired too often or held too long, it can hurt the program's performance. In practice, choose the synchronization mechanism that fits the specific situation, balancing correctness against performance.
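A common way to keep lock usage disciplined is to bundle the value and its mutex into one type, so callers can only reach the counter through the locked methods. This is a sketch of that idiom (the Counter type and its method names are mine, not from the example above):

```go
package main

import (
	"fmt"
	"sync"
)

// Counter bundles the value with the mutex that guards it, so callers
// cannot touch the count without going through the locked methods.
type Counter struct {
	mu    sync.Mutex
	value int
}

// Inc increments the counter under the lock.
func (c *Counter) Inc() {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.value++
}

// Value reads the counter under the lock.
func (c *Counter) Value() int {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.value
}

func main() {
	var c Counter
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < 100; j++ {
				c.Inc()
			}
		}()
	}
	wg.Wait()
	fmt.Println(c.Value())
}
```

Because the zero value of sync.Mutex is ready to use, `var c Counter` needs no constructor; note that a Counter must not be copied after first use.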

2. Read-write lock

Here is a code example that uses RWMutex from the sync package to implement a read-write lock:

package main

import (
	"fmt"
	"sync"
	"time"
)

var (
	count  int
	rwLock sync.RWMutex
)

func readData() {
	// Acquire the read lock before reading the shared data
	rwLock.RLock()
	defer rwLock.RUnlock()
	fmt.Println("reading data...")
	time.Sleep(1 * time.Second)
	fmt.Printf("data is %d\n", count)
}

func writeData(n int) {
	// Acquire the write lock before writing the shared data
	rwLock.Lock()
	defer rwLock.Unlock()
	fmt.Println("writing data...")
	time.Sleep(1 * time.Second)
	count = n
	fmt.Printf("data is %d\n", count)
}

func main() {
	// Start 5 reader goroutines
	for i := 0; i < 5; i++ {
		go readData()
	}

	// Start 2 writer goroutines
	for i := 0; i < 2; i++ {
		go writeData(i + 1)
	}

	// Crudely wait for all goroutines to finish
	time.Sleep(5 * time.Second)
}

In this example, five reader goroutines and two writer goroutines all access the shared variable count. The readers acquire the read lock with RLock(), and the writers acquire the write lock with Lock(). With the read-write lock, multiple readers can read the shared data at the same time, while a writer waits until the current readers have released the lock before proceeding, so readers never observe a half-finished write.
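RWMutex pays off most for read-heavy shared state, such as an in-memory cache. Here is a minimal sketch of that pattern (the cache type, newCache, Get, and Set are illustrative names of my own, not part of the example above): many Get calls may hold the read lock in parallel, while Set takes the exclusive write lock.

```go
package main

import (
	"fmt"
	"sync"
)

// cache is a read-mostly map guarded by an RWMutex.
type cache struct {
	mu   sync.RWMutex
	data map[string]string
}

func newCache() *cache {
	return &cache{data: make(map[string]string)}
}

// Get takes the shared read lock, so concurrent Gets do not block each other.
func (c *cache) Get(key string) (string, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	v, ok := c.data[key]
	return v, ok
}

// Set takes the exclusive write lock before mutating the map.
func (c *cache) Set(key, value string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.data[key] = value
}

func main() {
	c := newCache()
	c.Set("lang", "Go")
	if v, ok := c.Get("lang"); ok {
		fmt.Println("lang =", v)
	}
}
```

The lock is needed here because Go maps are not safe for concurrent use; for some read-mostly workloads the standard library's sync.Map is an alternative worth considering.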

3. Atomic operation

Here is a code example that uses the atomic operations provided by the sync/atomic package to implement a concurrency-safe counter:

package main

import (
    "fmt"
    "sync/atomic"
    "time"
)

func main() {
    var counter int64

    // Start 10 goroutines that increment the counter
    for i := 0; i < 10; i++ {
        go func() {
            for j := 0; j < 100; j++ {
                atomic.AddInt64(&counter, 1)
            }
        }()
    }

    // Crudely wait for all goroutines to finish
    time.Sleep(time.Second)

    // Print the counter value
    fmt.Printf("counter: %d\n", atomic.LoadInt64(&counter))
}

In this example, 10 goroutines increment the counter concurrently. Because multiple goroutines operate on the counter at the same time, a data race would occur without synchronization. The atomic operations from the sync/atomic package handle this: AddInt64() increments the counter atomically, ensuring its concurrency safety, and LoadInt64() then reads the final value safely for output.
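Beyond plain atomic adds, sync/atomic also offers compare-and-swap, which lets you build slightly richer lock-free updates. As a sketch (the helper incrementUpTo and its bound are my own invention for illustration), here is a counter that can be incremented concurrently but never past a limit:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// incrementUpTo raises *counter by 1 but never past limit, using a
// compare-and-swap loop: the swap succeeds only if no other goroutine
// changed the value since we read it, so no lock is needed.
func incrementUpTo(counter *int64, limit int64) bool {
	for {
		cur := atomic.LoadInt64(counter)
		if cur >= limit {
			return false // limit reached; give up
		}
		if atomic.CompareAndSwapInt64(counter, cur, cur+1) {
			return true // our increment won
		}
		// Another goroutine changed the value first; reload and retry.
	}
}

func main() {
	var counter int64
	var wg sync.WaitGroup
	// 10 goroutines attempt 100 increments each, but the counter is capped.
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < 100; j++ {
				incrementUpTo(&counter, 500)
			}
		}()
	}
	wg.Wait()
	fmt.Println(atomic.LoadInt64(&counter)) // capped at 500
}
```

A plain AddInt64 could not enforce the cap, because the check and the add would be two separate steps; the CAS loop makes check-then-update a single atomic decision.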

4. Channel

Here is a code example that uses the channel mechanism to implement a concurrency-safe counter:

package main

import (
    "fmt"
    "sync"
)

func main() {
    var counter int

    // Create a buffered channel with a capacity of 10
    ch := make(chan int, 10)

    // Create a wait group to wait for all goroutines to finish
    var wg sync.WaitGroup
    wg.Add(10)

    // Start 10 goroutines that send increments for the counter
    for i := 0; i < 10; i++ {
        go func() {
            for j := 0; j < 10; j++ {
                // Send an increment to the channel
                ch <- 1
            }
            // The task is done; signal the wait group
            wg.Done()
        }()
    }

    // Receive the 100 increments and accumulate them into the counter.
    // This must run while the senders are still going: the buffer only
    // holds 10 values, so calling wg.Wait() first would deadlock.
    for i := 0; i < 100; i++ {
        counter += <-ch
    }

    // Wait for all goroutines to finish
    wg.Wait()

    // Print the counter value
    fmt.Printf("counter: %d\n", counter)
}

In this example, 10 goroutines perform increments concurrently. To avoid touching the shared counter directly, a channel with a buffer of 10 carries the increments, and the main goroutine receives them and accumulates them into the counter while the senders are still running (the buffer only holds 10 of the 100 values, so the channel must be drained before waiting). The wait group then confirms that every goroutine has finished before the final value is printed.
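A common variant of this idea is full confinement: one goroutine owns the counter outright, and everyone else talks to it only through channels, in the spirit of "share memory by communicating". This is a sketch of that pattern (the function counterLoop and the channel names are mine):

```go
package main

import "fmt"

// counterLoop owns the counter: only this goroutine ever touches the
// count variable, so no lock is needed. It sums increments from inc
// until that channel is closed, then reports the total on done.
func counterLoop(inc <-chan int, done chan<- int) {
	count := 0
	for delta := range inc {
		count += delta
	}
	done <- count
}

func main() {
	inc := make(chan int)
	done := make(chan int)
	go counterLoop(inc, done)

	// Send 100 increments; in a real program these could come from
	// many goroutines, as long as inc is closed only after they finish.
	for i := 0; i < 100; i++ {
		inc <- 1
	}
	close(inc) // tell the owner no more increments are coming
	fmt.Println("counter:", <-done)
}
```

Closing the channel is the shutdown signal here; with multiple senders, a WaitGroup over the senders would decide when it is safe to close.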

5. WaitGroup

Here is a code example that uses WaitGroup from the sync package to wait for multiple goroutines to complete before continuing:

package main

import (
    "fmt"
    "sync"
)

func main() {
    var wg sync.WaitGroup

    for i := 1; i <= 3; i++ {
        wg.Add(1) // Increment the counter by 1
        go func(i int) {
            defer wg.Done() // Decrement the counter by 1 when finished
            fmt.Printf("goroutine %d is running\n", i)
        }(i)
    }

    wg.Wait() // Wait for all goroutines to complete
    fmt.Println("all goroutines have completed")
}

In this example, three goroutines run concurrently. wg.Add(1) increments the counter before each one starts, indicating one more goroutine to wait for, and defer wg.Done() inside each goroutine decrements the counter when its task is finished. Finally, wg.Wait() blocks until all goroutines are done, and then "all goroutines have completed" is printed.

6. Context

Here is a code example that uses context.WithCancel to control the life cycle of multiple goroutines:

package main

import (
    "context"
    "fmt"
    "sync"
    "time"
)

func worker(ctx context.Context, id int, wg *sync.WaitGroup) {
    defer wg.Done()

    fmt.Printf("Worker %d started\n", id)

    for {
        select {
        case <-ctx.Done():
            fmt.Printf("Worker %d stopped\n", id)
            return
        default:
            fmt.Printf("Worker %d is running\n", id)
            time.Sleep(time.Second)
        }
    }
}

func main() {
    ctx, cancel := context.WithCancel(context.Background())

    var wg sync.WaitGroup
    for i := 1; i <= 3; i++ {
        wg.Add(1)
        go worker(ctx, i, &wg)
    }

    time.Sleep(3 * time.Second)

    cancel()

    wg.Wait()

    fmt.Println("All workers have stopped")
}

In this example, a cancellable context is created with context.WithCancel and passed from main to several goroutines. Each goroutine works in a for loop: if it receives the ctx.Done() signal, it stops and returns; otherwise it prints that it is running and sleeps for a while. After three seconds, main calls cancel() to signal all goroutines to stop, uses wg.Wait() to wait for them to exit, and then prints "All workers have stopped".

This concludes this article on how to avoid race conditions and data races in Go concurrent programming. For more on Go concurrency, please search my previous articles or continue browsing the related articles below. I hope everyone will continue to support me!