SoFunction
Updated on 2025-03-05

A brief analysis of choosing sync or channel for Go concurrency

How to choose between sync and channel

When I wrote C, we usually communicated between threads through shared memory. For concurrent operations on a piece of data, to keep the data safe and keep the threads synchronized, we would take a mutex lock, do the work, and then unlock.

In Go, however, the recommendation is the other way around: share memory by communicating. Use a channel to synchronize access to the critical section.
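To make "share memory by communicating" concrete, here is a minimal sketch (not from the original article; the function and channel names are illustrative) in which a single goroutine owns a counter and everyone else talks to it only through channels, so no lock is needed:

```go
package main

import "fmt"

// countWithChannel has one owner goroutine hold the counter; other code
// only talks to it over channels. (The owner goroutine is leaked when the
// function returns, which is acceptable for this sketch.)
func countWithChannel(n int) int {
	incr := make(chan int) // increment requests
	read := make(chan int) // read requests, replied to with the current value
	go func() {
		counter := 0 // owned exclusively by this goroutine
		for {
			select {
			case d := <-incr:
				counter += d
			case read <- counter:
			}
		}
	}()
	for i := 0; i < n; i++ {
		incr <- 1
	}
	return <-read
}

func main() {
	fmt.Println(countWithChannel(100)) // prints 100
}
```

Because only the owner goroutine ever touches `counter`, there is no data race by construction; the channel rendezvous is the synchronization.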

That said, a channel in Go is a relatively high-level primitive, and its raw performance naturally cannot match the lock mechanisms in the sync package. Interested readers can write a simple benchmark to confirm this, and we can compare notes in the comments.
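As a rough stand-in for such a benchmark (a real measurement should use `testing.B`; the function names and iteration count here are illustrative, not from the article), the sketch below times n guarded increments with a mutex versus a buffered channel used as a lock:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// timeMutex does n increments guarded by a sync.Mutex and returns the
// elapsed wall time.
func timeMutex(n int) time.Duration {
	var mu sync.Mutex
	counter := 0
	start := time.Now()
	for i := 0; i < n; i++ {
		mu.Lock()
		counter++
		mu.Unlock()
	}
	_ = counter
	return time.Since(start)
}

// timeChannel does the same work using a one-slot buffered channel that
// holds the counter value: receiving acquires, sending releases.
func timeChannel(n int) time.Duration {
	ch := make(chan int, 1)
	ch <- 0
	start := time.Now()
	for i := 0; i < n; i++ {
		v := <-ch   // "acquire": take ownership of the value
		ch <- v + 1 // "release": put the updated value back
	}
	return time.Since(start)
}

func main() {
	const n = 1_000_000
	fmt.Println("mutex:  ", timeMutex(n))
	fmt.Println("channel:", timeChannel(n))
}
```

On typical machines the mutex version is noticeably faster, which matches the claim above; the exact numbers depend on your hardware.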

In addition, when we use the sync package to control synchronization, we do not give up ownership of the structure, and we can still let multiple goroutines access the resources in the critical section safely. So when those conditions match our needs, using the sync package is the more reasonable and efficient choice.

In summary, prefer the sync package to control synchronization when:

  • You do not want to give up ownership of the structure, yet multiple goroutines still need safe access to the critical-section resources.
  • You have higher performance requirements.

Mutex and RWMutex for sync

Looking at the source code of the sync package (xxx\Go\src\sync), we can see that it provides the following:

  • Mutex
  • RWMutex
  • Once
  • Cond
  • Pool
  • the sync/atomic subpackage (atomic operations)

Mutex is the one used most often. Especially early on, when I was not yet comfortable with channels, Mutex felt very easy to reach for. RWMutex is used somewhat less.

Have you ever paid attention to the relative performance of Mutex and RWMutex? Most people reach for a mutex by default. Let's write a demo and compare them.

var (
        mu   sync.Mutex
        murw sync.RWMutex
        tt1  = 1
        tt2  = 2
        tt3  = 3
)

// Use Mutex to control reading data
func BenchmarkReadMutex(b *testing.B) {
        b.RunParallel(func(pp *testing.PB) {
                for pp.Next() {
                        mu.Lock()
                        _ = tt1
                        mu.Unlock()
                }
        })
}

// Use the read lock of RWMutex to control reading data
func BenchmarkReadRWMutex(b *testing.B) {
        b.RunParallel(func(pp *testing.PB) {
                for pp.Next() {
                        murw.RLock()
                        _ = tt2
                        murw.RUnlock()
                }
        })
}

// Use the write lock of RWMutex to control writing data
func BenchmarkWriteRWMutex(b *testing.B) {
        b.RunParallel(func(pp *testing.PB) {
                for pp.Next() {
                        murw.Lock()
                        tt3++
                        murw.Unlock()
                }
        })
}

Three simple benchmarks:

  • read data guarded by a mutex
  • read data guarded by the read lock of a read-write lock
  • write data guarded by the write lock of a read-write lock
$ go test -bench . bbb_test.go --cpu 2
goos: windows
goarch: amd64
cpu: Intel(R) Core(TM)2 Duo CPU     T7700  @ 2.40GHz
BenchmarkReadMutex-2            39638757                30.45 ns/op
BenchmarkReadRWMutex-2          43082371                26.97 ns/op
BenchmarkWriteRWMutex-2         16383997                71.35 ns/op
$ go test -bench . bbb_test.go --cpu 4
goos: windows
goarch: amd64
cpu: Intel(R) Core(TM)2 Duo CPU     T7700  @ 2.40GHz
BenchmarkReadMutex-4            17066666                73.47 ns/op
BenchmarkReadRWMutex-4          43885633                30.33 ns/op
BenchmarkWriteRWMutex-4         10593098               110.3 ns/op
$ go test -bench . bbb_test.go --cpu 8
goos: windows
goarch: amd64
cpu: Intel(R) Core(TM)2 Duo CPU     T7700  @ 2.40GHz
BenchmarkReadMutex-8             8969340               129.0 ns/op
BenchmarkReadRWMutex-8          36451077                33.46 ns/op
BenchmarkWriteRWMutex-8          7728303               158.5 ns/op
$ go test -bench . bbb_test.go --cpu 16
goos: windows
goarch: amd64
cpu: Intel(R) Core(TM)2 Duo CPU     T7700  @ 2.40GHz
BenchmarkReadMutex-16            8533333               132.6 ns/op
BenchmarkReadRWMutex-16         39638757                29.98 ns/op
BenchmarkWriteRWMutex-16         6751646               173.9 ns/op
$ go test -bench . bbb_test.go --cpu 128
goos: windows
goarch: amd64
cpu: Intel(R) Core(TM)2 Duo CPU     T7700  @ 2.40GHz
BenchmarkReadMutex-128          10155368               116.0 ns/op
BenchmarkReadRWMutex-128        35108558                33.27 ns/op
BenchmarkWriteRWMutex-128        6334021               195.3 ns/op

As we can see, at low concurrency the mutex and the read-write lock perform similarly. As concurrency grows, the read lock's performance barely changes, while both the mutex and the write lock of the read-write lock slow down.

The conclusion is clear: read-write locks suit read-heavy, write-light scenarios. Under heavy concurrent reads, multiple goroutines can hold the read lock at the same time, reducing lock contention and waiting time.

With a mutex under concurrency, only one of the contending goroutines can hold the lock at a time; the others block and wait, which hurts performance.
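As an illustration of the read-heavy case, here is a minimal sketch (assumed, not from the article; the `Cache` type is illustrative) of a map guarded by an RWMutex, where concurrent readers share the read lock and writers take the exclusive lock:

```go
package main

import (
	"fmt"
	"sync"
)

// Cache is a minimal read-mostly map guarded by an RWMutex: many readers
// can hold the read lock at once, while a write takes the exclusive lock.
type Cache struct {
	mu sync.RWMutex
	m  map[string]string
}

func NewCache() *Cache { return &Cache{m: make(map[string]string)} }

// Get takes the shared read lock, so concurrent Gets do not serialize.
func (c *Cache) Get(k string) (string, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	v, ok := c.m[k]
	return v, ok
}

// Set takes the exclusive lock, blocking readers only while writing.
func (c *Cache) Set(k, v string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.m[k] = v
}

func main() {
	c := NewCache()
	c.Set("lang", "go")

	var wg sync.WaitGroup
	for i := 0; i < 8; i++ { // concurrent readers share the read lock
		wg.Add(1)
		go func() {
			defer wg.Done()
			if v, ok := c.Get("lang"); ok {
				_ = v
			}
		}()
	}
	wg.Wait()
	v, _ := c.Get("lang")
	fmt.Println(v) // prints go
}
```

If this cache were guarded by a plain Mutex instead, the eight readers would run strictly one after another.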

Next, let's look at what can go wrong when using a mutex carelessly.

What to note when using sync

When using the locks in the sync package, the key rule is: never copy a Mutex or RWMutex after it has been used. (The copylocks check in go vet will flag such copies.)

Write a simple demo:

var mu sync.Mutex

// sync's Mutex and RWMutex must not be copied after first use.
// If you need a copy, make it before the lock has ever been used.
func main() {
        go func(mm sync.Mutex) { // copies mu while it is still unlocked
                for {
                        mm.Lock()
                        time.Sleep(time.Second * 1)
                        fmt.Println("g2")
                        mm.Unlock()
                }
        }(mu)

        mu.Lock()
        go func(mm sync.Mutex) { // copies mu while it is locked: mm starts out locked
                for {
                        mm.Lock()
                        time.Sleep(time.Second * 1)
                        fmt.Println("g3")
                        mm.Unlock()
                }
        }(mu)
        time.Sleep(time.Second * 1)
        fmt.Println("g1")
        mu.Unlock()
        time.Sleep(time.Second * 20)
}

If you run it, you will see that g3 is never printed: the goroutine running g3 is deadlocked and never gets the chance to call Unlock.

Why does this happen? Let's look at the internal structure of Mutex:

//...
// A Mutex must not be copied after first use.
//...
type Mutex struct {
        state int32
        sema  uint32
}

A Mutex internally holds a state (the mutex's state) and sema (the semaphore that controls the mutex). In a freshly initialized Mutex both are 0, but once we call Lock, state becomes Locked. If a goroutine copies the Mutex at that moment and then calls Lock on its own copy, that copy starts out already locked and the goroutine deadlocks. This is the key thing to watch out for.

If goroutines need to share a Mutex, capture it in a closure or pass a pointer to the lock (or to the struct that contains it). That way you avoid surprising results and confusion when using locks.
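A minimal sketch of the fix (the `runWorkers` helper is illustrative; the names g2/g3 are reused from the demo above): pass `*sync.Mutex` so every goroutine shares the same lock state instead of a copy:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// runWorkers shares one lock between two goroutines by passing a pointer,
// so both see the same state and neither deadlocks on a locked copy.
func runWorkers() []string {
	var mu sync.Mutex
	done := make(chan string, 2)

	worker := func(name string, mm *sync.Mutex) { // *sync.Mutex: no copy
		mm.Lock()
		done <- name
		mm.Unlock()
	}

	mu.Lock()
	go worker("g2", &mu)
	go worker("g3", &mu)
	time.Sleep(10 * time.Millisecond) // let both goroutines block on Lock
	mu.Unlock()                       // now they take turns acquiring it

	return []string{<-done, <-done}
}

func main() {
	fmt.Println(runWorkers()) // both g2 and g3 finish, in either order
}
```

Unlike the copying version above, both goroutines complete here, because `&mu` gives them the one real lock rather than a snapshot of its state.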

As for the other members of the sync package, I'm not sure how much everyone uses them, but the one that comes up relatively often is Once. For the rest, you can read the source yourself or leave a message in the comments. Let's look at how Once is used and what to watch out for.

Remember writing C or C++: when a program needed exactly one instance over its lifetime, we would reach for the singleton pattern. sync.Once is a very good fit for implementing a singleton.

It guarantees that a function is executed only once during the program's run, which is more flexible than each package's init function.

Note: if the function executed inside Once panics, it still counts as having been executed once. Any later call, no matter what logic it arrives from, can no longer enter and execute the function.

Once is generally used for one-time initialization or cleanup of resources, to avoid repeating the work. Let's look at a demo:

  • The main function starts 3 goroutines and uses a WaitGroup to wait for them to exit.
  • After starting the goroutines, main waits 2 seconds and then creates and fetches the instance itself.
  • The goroutines also fetch the instance.
  • Whichever goroutine gets into Once first panics after running the initialization logic.
  • That goroutine catches the panic. By then the global instance has already been initialized, and no other goroutine can enter the function inside Once again.
type Instance struct {
        Name string
}

var instance *Instance
var on sync.Once

func GetInstance(num int) *Instance {
        defer func() {
                if err := recover(); err != nil {
                        fmt.Printf("num %d ,get instance and catch error ... \n", num)
                }
        }()
        on.Do(func() {
                instance = &Instance{Name: "A Bingyun native"}
                fmt.Printf("%d enter once ... \n", num)
                panic("panic....")
        })
        return instance
}

func main() {
        var wg sync.WaitGroup
        for i := 0; i < 3; i++ {
                wg.Add(1)
                go func(i int) {
                        ins := GetInstance(i)
                        fmt.Printf("%d: ins:%+v  , p=%p\n", i, ins, ins)
                        wg.Done()
                }(i)
        }
        time.Sleep(time.Second * 2)
        ins := GetInstance(9)
        fmt.Printf("9: ins:%+v  , p=%p\n", ins, ins)
        wg.Wait()
}

From the output we can see that goroutine 0 entered Once and panicked, so the GetInstance call in that goroutine returned nil (the panic unwound before the return statement, and recover does not restore it).

The other goroutines, including the main goroutine, call GetInstance and get the instance address normally. The address is the same everywhere: the global was initialized only once.

$ go run
0 enter once ...
num 0 ,get instance and catch error ...
0: ins:<nil>  , p=0x0
1: ins:&{Name:A Bingyun native}  , p=0xc000086000
2: ins:&{Name:A Bingyun native}  , p=0xc000086000
9: ins:&{Name:A Bingyun native}  , p=0xc000086000
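Since a panic inside Once still counts as "done", an initialization that can fail is sometimes better served by a guard that only records success. The `RetryOnce` type below is a hypothetical sketch, not part of the sync package:

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// errTransient stands in for a failure during initialization.
var errTransient = errors.New("transient failure")

// RetryOnce only records completion when f succeeds, so a failed
// initialization can be retried later (unlike sync.Once, where even a
// panicking call counts as done).
type RetryOnce struct {
	mu   sync.Mutex
	done bool
}

// Do runs f unless a previous call already succeeded; when f returns an
// error, the guard stays open and the next caller retries.
func (o *RetryOnce) Do(f func() error) error {
	o.mu.Lock()
	defer o.mu.Unlock()
	if o.done {
		return nil
	}
	if err := f(); err != nil {
		return err
	}
	o.done = true
	return nil
}

func main() {
	var guard RetryOnce
	attempts := 0
	initFn := func() error {
		attempts++
		if attempts == 1 {
			return errTransient
		}
		return nil
	}
	fmt.Println(guard.Do(initFn)) // first call fails
	fmt.Println(guard.Do(initFn)) // retry succeeds
	fmt.Println(guard.Do(initFn)) // already done; initFn not called again
	fmt.Println(attempts)         // prints 2
}
```

The mutex-plus-flag shape trades a little overhead for retryability; when your init function cannot fail, plain sync.Once remains the simpler choice.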

That concludes this brief analysis of choosing sync or channel for Go concurrency. For more on Go concurrency, please search my earlier articles or continue browsing the related articles below. I hope you'll keep supporting me!