1. What is cache breakdown
Cache breakdown happens on a hotspot key. A cache is normally used to absorb the large-scale concurrency on that key, but the moment the cached entry expires, all of those concurrent requests miss the cache at once and hit the database directly.
To avoid cache breakdown, one option is to set the hotspot key to never expire; another is to use Go's golang.org/x/sync/singleflight package (or a small hand-rolled equivalent, implemented later in this article) to collapse the concurrent requests into a single load.
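As a minimal sketch of the second option, the snippet below puts the official golang.org/x/sync/singleflight package in front of a database load on a cache miss. getFromCache and loadFromDB are hypothetical placeholders for a real cache client and query, not part of the original article.

package main

import (
	"errors"
	"fmt"

	"golang.org/x/sync/singleflight"
)

// sg collapses concurrent loads of the same key into a single call.
var sg singleflight.Group

// getFromCache and loadFromDB are hypothetical stand-ins for a real cache
// client and database query.
func getFromCache(key string) (string, error) { return "", errors.New("cache miss") }
func loadFromDB(key string) (string, error)   { return "value-for-" + key, nil }

// Get reads through the cache; on a miss, only one goroutine per key actually
// queries the database, and every other concurrent caller shares its result.
func Get(key string) (string, error) {
	if v, err := getFromCache(key); err == nil {
		return v, nil
	}
	v, err, _ := sg.Do(key, func() (interface{}, error) {
		return loadFromDB(key)
	})
	if err != nil {
		return "", err
	}
	return v.(string), nil
}

func main() {
	fmt.Println(Get("hot-key"))
}

During a miss storm, every concurrent Get for the same key funnels into that single loadFromDB call, which is exactly the anti-breakdown behaviour described above.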
2. Principle
When multiple concurrent requests ask for the data of a key whose cache entry has expired, only one of them actually fetches the data; the other requests block and wait, then share the result returned by that first request.
3. Implementation
package singleflight

import (
	"sync"
)

// WaitCount and DirectCount are debug counters. They are incremented without
// synchronization, so their values are only approximate under heavy concurrency.
var WaitCount int
var DirectCount int

// Caller holds the result of one execution of fn for a key.
type Caller struct {
	val interface{}
	err error
	wg  sync.WaitGroup
}

// Group deduplicates concurrent calls that share the same key.
type Group struct {
	mu sync.Mutex
	m  map[string]*Caller
}

// Do executes fn once per key at a time. Concurrent callers with the same key
// block until the first call finishes and then share its result.
func (g *Group) Do(key string, fn func() (interface{}, error)) (interface{}, error) {
	g.mu.Lock()
	if g.m == nil {
		g.m = make(map[string]*Caller)
	}
	if c, ok := g.m[key]; ok {
		// Another goroutine is already executing fn for this key:
		// block and wait for its result instead of calling fn again.
		g.mu.Unlock()
		c.wg.Wait()
		WaitCount++
		return c.val, c.err
	}

	// No call in flight for this key: execute fn directly.
	c := &Caller{}
	c.wg.Add(1)
	g.m[key] = c
	g.mu.Unlock()

	c.val, c.err = fn()
	c.wg.Done()

	g.mu.Lock()
	delete(g.m, key)
	g.mu.Unlock()

	DirectCount++
	return c.val, c.err
}
Test (in a _test.go file in the same package):

package singleflight

import (
	"fmt"
	"sync"
	"testing"
)

func TestGroup_Do(t *testing.T) {
	sg := &Group{}
	wg := sync.WaitGroup{}
	for i := 0; i < 10000; i++ {
		fn := func() (interface{}, error) {
			return i, nil
		}
		wg.Add(1)
		go func() {
			defer wg.Done()
			got, err := sg.Do("test-key", fn)
			_, _ = got, err
			//fmt.Println("got:", i)
		}()
	}
	wg.Wait()
	fmt.Println("waitCount:", WaitCount)
	fmt.Println("DirectCount:", DirectCount)
}
Output (the exact counts vary between runs; since WaitCount and DirectCount are incremented without synchronization, they may also not add up to exactly 10000):
waitCount: 8323
DirectCount: 1401
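To show where this fits in a cache-miss path, here is a rough sketch (not from the original article) that puts the Group above between a cache and the database. cacheGet, cacheSet and dbQuery are hypothetical placeholders for a real cache client (e.g. Redis) and query, and the code is assumed to live in the same package as Group.

// cacheGet, cacheSet and dbQuery are placeholder stand-ins; a real cache
// client and database layer would go here.
func cacheGet(key string) (interface{}, bool) { return nil, false } // always a miss in this sketch
func cacheSet(key string, v interface{})      {}                    // no-op placeholder
func dbQuery(key string) (interface{}, error) { return "row-for-" + key, nil }

var group Group

// GetArticle serves from the cache when possible; on a miss it lets exactly
// one goroutine per key query the database and repopulate the cache, while
// concurrent callers for the same key wait and reuse that result.
func GetArticle(key string) (interface{}, error) {
	if v, ok := cacheGet(key); ok {
		return v, nil
	}
	return group.Do(key, func() (interface{}, error) {
		v, err := dbQuery(key)
		if err != nil {
			return nil, err
		}
		cacheSet(key, v)
		return v, nil
	})
}

Combined with a sensible expiry policy, this keeps a sudden wave of cache misses on a hotspot key down to a single database query.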
This concludes this article on implementing singleflight in Go to prevent cache breakdown. For more on singleflight and cache breakdown in Go, search my previous articles or browse the related articles below; I hope you will continue to support me.