Spin lock
A spin lock is a locking mechanism proposed to protect shared resources. The thread that is trying to acquire the lock stays active the whole time but performs no useful work, so using this kind of lock causes busy-waiting. Spin locks are in fact similar to mutexes: both are designed to solve mutually exclusive access to a resource, and for either kind of lock there can be at most one holder at any moment, that is, at most one execution unit can obtain the lock. The two differ slightly in their scheduling mechanism, however. With a mutex, if the resource is already occupied, the applicant can only go to sleep. A spin lock does not put the caller to sleep: if the lock is already held by another execution unit, the caller keeps looping to check whether the holder has released it, which is where the word "spin" comes from.
Implementing a spin lock in Golang
```go
import (
	"runtime"
	"sync"
	"sync/atomic"
)

type spinLock uint32

// Lock spins until the flag is flipped from 0 to 1 via CAS, yielding
// the processor on each failed attempt so other goroutines can run.
func (sl *spinLock) Lock() {
	for !atomic.CompareAndSwapUint32((*uint32)(sl), 0, 1) {
		runtime.Gosched()
	}
}

// Unlock releases the lock by resetting the flag to 0.
func (sl *spinLock) Unlock() {
	atomic.StoreUint32((*uint32)(sl), 0)
}

func NewSpinLock() sync.Locker {
	var lock spinLock
	return &lock
}
```
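Because the type has Lock and Unlock methods, it satisfies the standard sync.Locker interface. Here is a minimal usage sketch (the counter example is ours, not from the original; it assumes the definitions above plus the fmt and sync imports):

```go
func main() {
	var (
		lock    = NewSpinLock()
		wg      sync.WaitGroup
		counter int
	)
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < 1000; j++ {
				lock.Lock()
				counter++ // protected by the spin lock
				lock.Unlock()
			}
		}()
	}
	wg.Wait()
	fmt.Println(counter) // always prints 4000
}
```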
Reentrant spin locks and non-reentrant spin locks
A careful look at the code above shows that it does not support reentrancy: if a goroutine that has already acquired the lock tries to acquire it again before releasing it, the second attempt cannot succeed. Because the CAS cannot be satisfied, the second acquisition enters the for loop and spins. With a reentrant lock, the second acquisition would succeed.
Moreover, even if the second acquisition could somehow succeed, the Unlock paired with it would release the lock outright while the outer critical section still holds it, which is unreasonable.
To implement a reentrant lock, we need to record which goroutine owns the lock and introduce a counter that tracks how many times the owner has acquired it.
```go
import (
	"fmt"
	"runtime"
	"strconv"
	"strings"
	"sync/atomic"
)

type spinLock struct {
	lock  uint32 // CAS flag: 0 free, 1 held
	owner int    // id of the goroutine currently holding the lock
	count int    // number of re-entries beyond the first acquisition
}

func (sl *spinLock) Lock() {
	me := GetGoroutineId()
	if sl.owner == me { // the current goroutine already holds the lock: bump the count and return
		sl.count++
		return
	}
	// Not held by us: spin via CAS until the lock is acquired
	for !atomic.CompareAndSwapUint32(&sl.lock, 0, 1) {
		runtime.Gosched()
	}
	sl.owner = me
}

func (sl *spinLock) Unlock() {
	if sl.owner != GetGoroutineId() {
		panic("illegalMonitorStateError")
	}
	if sl.count > 0 {
		// count > 0 means the current goroutine acquired the lock more than
		// once; simulate a release by decrementing count
		sl.count--
	} else {
		// count == 0: really release the lock, so that the number of
		// acquisitions matches the number of releases
		sl.owner = 0
		atomic.StoreUint32(&sl.lock, 0)
	}
}

// GetGoroutineId parses the current goroutine's id out of the stack
// header produced by runtime.Stack ("goroutine 18 [running]: ...").
func GetGoroutineId() int {
	defer func() {
		if err := recover(); err != nil {
			fmt.Printf("panic recover: panic info: %v\n", err)
		}
	}()
	var buf [64]byte
	n := runtime.Stack(buf[:], false)
	idField := strings.Fields(strings.TrimPrefix(string(buf[:n]), "goroutine "))[0]
	id, err := strconv.Atoi(idField)
	if err != nil {
		panic(fmt.Sprintf("cannot get goroutine id: %v", err))
	}
	return id
}

func NewSpinLock() *spinLock {
	var lock spinLock
	return &lock
}
```
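A quick sketch of the reentrant behavior (our own example, assuming the code above):

```go
func main() {
	sl := NewSpinLock()
	sl.Lock()
	sl.Lock()   // re-entry: count becomes 1 instead of spinning forever
	sl.Unlock() // count drops back to 0; the lock is still held
	sl.Unlock() // the lock is actually released here
	fmt.Println("released")
}
```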
Other variants of spin lock
1. TicketLock
TicketLock mainly solves the issue of fairness.
Idea: whenever a thread tries to acquire the lock, it is assigned an incrementing id, which we call its queue number. The lock itself keeps a corresponding service number, and whenever a thread releases the lock, the service number is incremented. If the service number then matches some thread's queue number, that thread obtains the lock. Because queue numbers are handed out in increasing order, the thread that requested the lock first is guaranteed to obtain it first, which achieves fairness.
It can be pictured as queuing for service at a bank. Each waiting customer represents a thread requesting the lock, and the service window represents the lock. Whenever the window finishes serving one customer, the service number is incremented by one, and among all the waiting customers, only the one whose queue number matches the service number is served.
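A hedged sketch of this idea in Go (the ticketLock type and its field names are illustrative, not from the original article):

```go
import (
	"runtime"
	"sync/atomic"
)

// ticketLock: ticket is the next queue number to hand out,
// serving is the queue number currently being served.
type ticketLock struct {
	ticket  uint32
	serving uint32
}

func (tl *ticketLock) Lock() {
	// Draw a queue number; AddUint32 returns the incremented value,
	// so subtract one to get the number we were assigned.
	me := atomic.AddUint32(&tl.ticket, 1) - 1
	// Spin until the service number reaches our queue number.
	for atomic.LoadUint32(&tl.serving) != me {
		runtime.Gosched()
	}
}

func (tl *ticketLock) Unlock() {
	// Releasing the lock increments the service number, which lets
	// the goroutine holding the next queue number stop spinning.
	atomic.AddUint32(&tl.serving, 1)
}
```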
2. CLHLock
The CLH lock is a scalable, high-performance, fair spin lock based on a linked list. Each applicant thread spins only on a local variable, continually polling the state of its predecessor; when it observes that the predecessor has released the lock, it stops spinning and acquires the lock.
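A minimal CLH sketch in Go, assuming Go 1.19+ atomic.Pointer and a per-goroutine node passed explicitly through Lock/Unlock (all names here are illustrative):

```go
import (
	"runtime"
	"sync/atomic"
)

type clhNode struct {
	locked uint32 // 1 while the owning goroutine holds or waits for the lock
}

// clhLock keeps only a tail pointer; the queue is implicit, since each
// waiter remembers its predecessor locally instead of being linked to.
type clhLock struct {
	tail atomic.Pointer[clhNode]
}

func NewCLHLock() *clhLock {
	l := &clhLock{}
	l.tail.Store(new(clhNode)) // sentinel node, initially unlocked
	return l
}

// Lock appends node to the implicit queue and spins on the predecessor's
// flag; it returns the predecessor's node, which the caller may reuse
// for its next acquisition.
func (l *clhLock) Lock(node *clhNode) *clhNode {
	atomic.StoreUint32(&node.locked, 1)
	pred := l.tail.Swap(node)
	for atomic.LoadUint32(&pred.locked) == 1 {
		runtime.Gosched() // spin until the predecessor releases
	}
	return pred
}

// Unlock clears this goroutine's own flag; the successor observes the
// change and stops spinning.
func (l *clhLock) Unlock(node *clhNode) {
	atomic.StoreUint32(&node.locked, 0)
}
```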
3. MCSLock
MCSLock spins on a node local to the thread itself rather than on its predecessor's node, as in the sketch below.
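A matching MCS sketch under the same assumptions as the CLH one (Go 1.19+ atomic.Pointer, explicit per-goroutine nodes, illustrative names); note the explicit next pointer and that each waiter spins on its own node:

```go
import (
	"runtime"
	"sync/atomic"
)

type mcsNode struct {
	next   atomic.Pointer[mcsNode] // explicit link to the successor
	locked uint32                  // 1 while this waiter must keep spinning
}

type mcsLock struct {
	tail atomic.Pointer[mcsNode]
}

func (l *mcsLock) Lock(node *mcsNode) {
	node.next.Store(nil)
	atomic.StoreUint32(&node.locked, 1)
	pred := l.tail.Swap(node)
	if pred == nil {
		return // queue was empty: the lock is acquired immediately
	}
	pred.next.Store(node) // link ourselves behind the predecessor
	for atomic.LoadUint32(&node.locked) == 1 {
		runtime.Gosched() // spin on our own node, not the predecessor's
	}
}

func (l *mcsLock) Unlock(node *mcsNode) {
	succ := node.next.Load()
	if succ == nil {
		// No visible successor: try to swing the tail back to empty.
		if l.tail.CompareAndSwap(node, nil) {
			return
		}
		// A successor is mid-enqueue; wait for it to link itself.
		for succ = node.next.Load(); succ == nil; succ = node.next.Load() {
			runtime.Gosched()
		}
	}
	atomic.StoreUint32(&succ.locked, 0) // hand the lock to the successor
}
```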
4. CLHLock and MCSLock
Both are based on linked lists. The difference is that CLHLock uses an implicit linked list and has no real field pointing to the successor node, whereas MCSLock uses an explicit linked list and does have a field pointing to its successor.
Each thread's lock state is kept in its own node, and since every thread spins on a separate node, this solves TicketLock's multiprocessor cache-synchronization problem.
Spinlock and mutex lock
- Both spin locks and mutex locks are mechanisms designed to protect shared resources.
- Whether it is a spin lock or a mutex lock, at any time, there can only be one holder at most.
- The thread that acquires the mutex lock will enter a sleep state if the lock has been occupied; the thread that acquires the spin lock will not sleep, but will loop around and wait for the lock to be released.
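In Go, both behaviors can hide behind the same interface: sync.Mutex and the spinLock above each implement sync.Locker, so one can be swapped for the other without changing the calling code. A hedged sketch (withLock is our own helper, not a standard-library function):

```go
// withLock runs fn while holding l; l may be a *sync.Mutex (sleeping)
// or the spin lock from earlier (busy-waiting), since both satisfy
// sync.Locker.
func withLock(l sync.Locker, fn func()) {
	l.Lock()
	defer l.Unlock()
	fn()
}
```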
Summary
- Spinlock: When a thread acquires a lock, if the lock is held by another thread, the current thread will loop and wait until the lock is acquired.
- While waiting on a spin lock, the thread's state does not change: it stays in user mode and remains active.
- If a spin lock is held for too long, the other threads waiting to acquire it burn CPU for nothing.
- A spin lock by itself guarantees neither fairness nor reentrancy.
- On top of a basic spin lock, locks with fairness and reentrancy can be built.
- TicketLock: achieves fairness for spin locks with a scheme much like taking a number at a bank, but because every waiter continuously reads serviceNum, each read and write must be kept coherent across all processor caches, which causes heavy system-bus and memory traffic and greatly reduces overall system performance.
- CLHLock and MCSLock use linked lists to avoid that cache-synchronization traffic, greatly improving performance. The difference is that CLHLock polls the state of its predecessor's node, whereas MCSLock checks the lock state of its own node.
- CLHLock is problematic on NUMA architectures. On a cache-less NUMA system, since CLHLock spins on its predecessor's node and a processor reaches its own local memory faster than another node's memory across the interconnect, CLHLock is not the optimal spin lock on NUMA.