1. sync.Mutex and sync.RWMutex
Mutex is an old friend to us Gophers. When working with goroutines, it is crucial to make sure they don't access resources at the same time, and a mutex helps us do exactly that.
Look at this simple example, where I did not use a mutex to protect our variable a:
var a = 0

func Add() {
    a++
}

func main() {
    for i := 0; i < 500; i++ {
        go Add()
    }

    time.Sleep(5 * time.Second)
    fmt.Println(a)
}
The results of this code are unpredictable. If you are lucky you might get 500, but usually the result will be less than 500. Now, let's enhance our Add function using mutexes:
var mtx = sync.Mutex{}

func Add() {
    mtx.Lock()
    defer mtx.Unlock()
    a++
}
Now the code gives the expected result. But what about sync.RWMutex?
Why use sync.RWMutex?
Imagine you are reading a variable while other goroutines are modifying it: you may get stale data. So, what is the solution to this problem?
Let's step back and use our old mutex to protect a Get() function as well:
func Add() {
    mtx.Lock()
    defer mtx.Unlock()
    a++
}

func Get() int {
    mtx.Lock()
    defer mtx.Unlock()
    return a
}
But the problem here is that if your service calls Get() millions of times and only calls Add() a few times, we are wasting resources: we take an exclusive lock even though most of the time nothing is being modified.
This is where sync.RWMutex appears to save the day. This clever gadget is designed for situations where reads and writes happen concurrently but reads dominate.
var mtx = sync.RWMutex{}

func Add() {
    mtx.Lock()
    defer mtx.Unlock()
    a++
}

func Look() {
    mtx.RLock()
    defer mtx.RUnlock()
    fmt.Println(a)
}
So, what's so great about RWMutex? Well, it allows millions of concurrent reads while ensuring that only one write can be done at a time. Let me clarify how it works:
- While a writer holds the lock, readers are blocked.
- While readers hold the lock, writers are blocked.
- Multiple readers do not block each other.
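To see these rules in action, here is a small self-contained sketch (the counter type and its field names are my own, not from the standard library) where 100 writers increment under the write lock and a reader uses the read lock:

```go
package main

import (
	"fmt"
	"sync"
)

// counter guards its value with an RWMutex so many readers can
// proceed in parallel while each writer gets exclusive access.
type counter struct {
	mu sync.RWMutex
	n  int
}

func (c *counter) Add() {
	c.mu.Lock() // write lock: blocks readers and other writers
	defer c.mu.Unlock()
	c.n++
}

func (c *counter) Get() int {
	c.mu.RLock() // read lock: other readers may hold it concurrently
	defer c.mu.RUnlock()
	return c.n
}

func main() {
	c := &counter{}
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			c.Add()
		}()
	}
	wg.Wait()
	fmt.Println(c.Get()) // always 100: every increment was serialized
}
```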
Oh, by the way, both Mutex and RWMutex implement the sync.Locker interface, whose signature looks like this:
// A Locker represents an object that can be locked and unlocked.
type Locker interface {
    Lock()
    Unlock()
}
If you want to write a function that accepts a Locker, you can then call it with your custom locker or with one of the sync mutexes:
func Add(mtx sync.Locker) {
    mtx.Lock()
    defer mtx.Unlock()
    a++
}
2. sync.WaitGroup
You may have noticed that I used time.Sleep(5 * time.Second) to wait for all goroutines to complete, but honestly, that is a very ugly solution.
This is where sync.WaitGroup comes in:
func main() {
    wg := sync.WaitGroup{}

    for i := 0; i < 500; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            Add()
        }()
    }

    wg.Wait()
    fmt.Println(a)
}
WaitGroup has 3 main methods: Add, Done, and Wait.
First is Add(delta int): this method increases the WaitGroup counter by delta. You usually call it before spawning a goroutine, indicating that there is one more task to complete.
If we move the wg.Add(1) call inside the go func() {}, what do you think will happen?
go func() {
    wg.Add(1)
    defer wg.Done()
    Add()
}()
My linter shouted, "should call wg.Add(1) before starting the goroutine to avoid a race", and at runtime I got: "panic: sync: WaitGroup is reused before previous Wait has returned".
The other two methods are very simple:
Done is called when a goroutine finishes its task; it decrements the counter by one.
Wait blocks the caller until the WaitGroup counter reaches zero, meaning all spawned goroutines have completed their tasks.
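To tie Add, Done, and Wait together, here is a small sketch of a common pattern: fan out work, wait, then collect results. The squareSum helper and its workload are my own invention, purely for illustration:

```go
package main

import (
	"fmt"
	"sync"
)

// squareSum spawns one goroutine per input, waits for all of them
// with a WaitGroup, then sums the results from a buffered channel.
func squareSum(n int) int {
	var wg sync.WaitGroup
	results := make(chan int, n)

	for i := 1; i <= n; i++ {
		wg.Add(1) // one more task to wait for
		go func(v int) {
			defer wg.Done() // mark this task finished
			results <- v * v
		}(i)
	}

	wg.Wait()      // blocks until the counter reaches zero
	close(results) // safe: no goroutine writes after Wait returns

	sum := 0
	for r := range results {
		sum += r
	}
	return sum
}

func main() {
	fmt.Println(squareSum(5)) // 55 (1+4+9+16+25)
}
```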
3. sync.Once
Suppose you have a CreateInstance() function in a package, but you need to make sure it is initialized before use. So you call it multiple times in different places and your implementation looks like this:
var i = 0
var _isInitialized = false

func CreateInstance() {
    if _isInitialized {
        return
    }
    i = GetISomewhere()
    _isInitialized = true
}
But what if multiple goroutines call this method? The line i = GetISomewhere() may run multiple times, even though you want it executed only once for stability.
You can use the mutex we discussed earlier, but the sync package provides a more convenient way:
var i = 0
var once = &sync.Once{}

func CreateInstance() {
    once.Do(func() {
        i = GetISomewhere()
    })
}
Using sync.Once, you can make sure a function is executed only once, regardless of how many times it is called or how many goroutines call it simultaneously.
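A quick runnable sketch of that guarantee (the initialize function and calls counter are my own names): even with 100 racing goroutines, the body passed to once.Do runs exactly once.

```go
package main

import (
	"fmt"
	"sync"
)

var (
	once  sync.Once
	calls = 0
)

// initialize funnels all callers through once.Do, so the function
// body executes exactly once no matter how many goroutines race in.
func initialize() {
	once.Do(func() {
		calls++
	})
}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			initialize()
		}()
	}
	wg.Wait()
	fmt.Println(calls) // 1
}
```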
4. sync.Pool
Imagine you have a pool with a bunch of objects you want to use repeatedly. This can relieve some of the pressure on the garbage collector, especially when the cost of creating and destroying these resources is high.
So, whenever you need an object, you can take it out of the pool. When you're done using it, you can put it back into the pool for reuse later.
var pool = sync.Pool{
    New: func() interface{} {
        return 0
    },
}

func main() {
    pool.Put(1)
    pool.Put(2)
    pool.Put(3)

    a := pool.Get().(int)
    b := pool.Get().(int)
    c := pool.Get().(int)
    fmt.Println(a, b, c) // Output: 1, 3, 2 (order may vary)
}
Remember that the order in which objects come out of the pool is not necessarily the order in which they were put in; even if you run the code above multiple times, the order is random.
Let me share some tips for using sync.Pool:
- It is ideal for objects that exist for a long time and have multiple instances to manage, such as database connections (1000 connections?), worker goroutines, and even buffers.
- Always reset the state of the object before returning it to the pool. This way, you can avoid any unintentional data leaks or strange behavior.
- Don't assume that objects you've put into the pool will still be there later; they may be released at any time without notification.
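The "reset before returning" tip looks like this in practice. Here is a minimal sketch reusing bytes.Buffer values (the render helper is hypothetical, just to show the borrow/reset/return cycle):

```go
package main

import (
	"bytes"
	"fmt"
	"sync"
)

// bufPool hands out *bytes.Buffer values; New is only called
// when the pool has nothing to reuse.
var bufPool = sync.Pool{
	New: func() interface{} {
		return new(bytes.Buffer)
	},
}

// render borrows a buffer, uses it, then resets it before
// returning it to the pool so no data leaks to the next caller.
func render(name string) string {
	buf := bufPool.Get().(*bytes.Buffer)
	defer func() {
		buf.Reset() // wipe old contents before returning to the pool
		bufPool.Put(buf)
	}()

	fmt.Fprintf(buf, "hello, %s", name)
	return buf.String()
}

func main() {
	fmt.Println(render("gopher")) // hello, gopher
	fmt.Println(render("world"))  // hello, world (no leftover data)
}
```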
5. sync.Map
Using a regular map concurrently works a bit like RWMutex semantics: multiple simultaneous reads are fine, but read-write and write-write combinations are not. If such a conflict happens, your service will crash instead of overwriting data or causing unexpected behavior.
This is where sync.Map comes in handy, because it helps us avoid this problem. Let's take a closer look at what sync.Map offers us:
- CompareAndDelete (Go 1.20): deletes the entry for a key if its value matches the old value; returns false if there is no current value or the old value is nil.
- CompareAndSwap (Go 1.20): swaps the old and new values for a key if the stored value matches the old one; the old value must be comparable.
- Swap (Go 1.20): swaps in a new value for a key and returns the previous value, if any.
- LoadOrStore: returns the existing value for the key if present; otherwise stores and returns the given value.
- Range(f func(key, value any) bool): iterates over the map, calling f for each key-value pair; if f returns false, iteration stops.
- Store
- Delete
- Load
- LoadAndDelete
Q: Why don't we use a regular map with a Mutex?
I usually choose a map with an RWMutex, but in some cases I recognize the power of sync.Map. So, where does it really shine?
If you have many goroutines accessing separate keys in a map, a regular map with a single mutex leads to contention, because every write locks the entire map.
sync.Map, on the other hand, uses a more fine-grained locking mechanism that minimizes contention in such scenarios.
6. sync.Cond
Think of sync.Cond as a condition variable that lets multiple goroutines wait for and signal each other. To understand it better, let's see how to use it.
First, we need to create a Cond with sync.NewCond, which takes a Locker:
var mtx sync.Mutex
var cond = sync.NewCond(&mtx)
A goroutine then calls cond.Wait(), waiting for a signal from elsewhere before continuing:
func dummyGoroutine(id int) {
    cond.L.Lock()
    defer cond.L.Unlock()

    fmt.Printf("Goroutine %d is waiting...\n", id)
    cond.Wait()
    fmt.Printf("Goroutine %d received the signal.\n", id)
}
Then another goroutine (such as the main goroutine) calls cond.Signal() to let the waiting goroutine continue:
func main() {
    go dummyGoroutine(1)

    time.Sleep(1 * time.Second)
    fmt.Println("Sending signal...")
    cond.Signal()
    time.Sleep(1 * time.Second)
}
The results are as follows:
Goroutine 1 is waiting...
Sending signal...
Goroutine 1 received the signal.
What if there are multiple goroutines waiting for our signal? This is when we can use Broadcast:
func main() {
    go dummyGoroutine(1)
    go dummyGoroutine(2)

    time.Sleep(1 * time.Second)
    cond.Broadcast() // broadcast to all waiting goroutines
    time.Sleep(1 * time.Second)
}
The results are as follows:
Goroutine 1 is waiting...
Goroutine 2 is waiting...
Goroutine 2 received the signal.
Goroutine 1 received the signal.
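One caveat the examples above gloss over: in real code you usually wrap cond.Wait() in a loop that re-checks a shared condition, because the state may change between the signal and the wakeup. A sketch under that pattern (the ready flag and the waitReady/setReady helpers are my own names):

```go
package main

import (
	"fmt"
	"sync"
)

var (
	mtx   sync.Mutex
	cond  = sync.NewCond(&mtx)
	ready = false
)

// waitReady shows the canonical pattern: re-check the condition in a
// loop, since Wait releases the lock while blocked and the condition
// must be re-verified after reacquiring it.
func waitReady() {
	cond.L.Lock()
	defer cond.L.Unlock()
	for !ready {
		cond.Wait()
	}
	fmt.Println("ready!")
}

// setReady flips the condition under the lock, then wakes all waiters.
func setReady() {
	cond.L.Lock()
	ready = true
	cond.L.Unlock()
	cond.Broadcast() // every waiter wakes and re-checks `ready`
}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			waitReady()
		}()
	}
	setReady()
	wg.Wait()
}
```

Because each waiter re-checks the flag, a goroutine that starts after the broadcast still proceeds correctly instead of waiting forever.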
That concludes this article on the 6 key concepts of Go's sync package for concurrency. For more on the sync package, please search my previous articles or continue browsing the related articles below. I hope everyone will continue to support me!