A concurrency-unsafe Memo
First, an example to demonstrate function memoization:
```go
// A Memo caches the results of calling a Func.
type Memo struct {
	f     Func
	cache map[string]result
}

// Func is the type of the function to memoize.
type Func func(key string) (interface{}, error)

type result struct {
	value interface{}
	err   error
}

func New(f Func) *Memo {
	return &Memo{f: f, cache: make(map[string]result)}
}

// NOTE: not concurrency-safe!
func (memo *Memo) Get(key string) (interface{}, error) {
	res, ok := memo.cache[key]
	if !ok {
		res.value, res.err = memo.f(key)
		memo.cache[key] = res
	}
	return res.value, res.err
}
```
Here the function `f` is a heavyweight computation that is expensive to call, so we cache its results in a `map` to speed up subsequent calls. This is function memoization.
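For a concrete sense of what such an `f` might look like, here is a hypothetical expensive `Func`; the name `httpGetBody` and the choice of an HTTP fetch are illustrative assumptions, not part of the code above:

```go
import (
	"io"
	"net/http"
)

// httpGetBody is a hypothetical expensive Func: it fetches a URL and
// returns its body. Every call pays a full network round trip, which
// is exactly the kind of work worth memoizing.
func httpGetBody(url string) (interface{}, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body) // []byte satisfies the interface{} result
}
```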
Each call to `Get` first consults the cache in `memo`; if no result is found, it must call `f` to compute the result and record it in the cache.
The `Get` method above updates `cache` without any synchronization, so the entire `Get` method is not concurrency-safe.
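A minimal sketch of how the race shows up in practice, assuming the `Memo` above and the hypothetical `httpGetBody` from earlier; running it with `go run -race` typically reports a data race on the cache map:

```go
import (
	"fmt"
	"log"
	"sync"
)

func main() {
	m := New(httpGetBody)
	urls := []string{
		"https://golang.org",
		"https://godoc.org",
		"https://golang.org", // repeated key: should be served from the cache
	}
	var wg sync.WaitGroup
	for _, url := range urls {
		wg.Add(1)
		go func(url string) {
			defer wg.Done()
			value, err := m.Get(url) // concurrent, unsynchronized map access
			if err != nil {
				log.Print(err)
				return
			}
			fmt.Printf("%s, %d bytes\n", url, len(value.([]byte)))
		}(url)
	}
	wg.Wait()
}
```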
Safe but pseudo-concurrent Memo
Consider locking on every call to `Get`:
```go
type Memo struct {
	f     Func
	mu    sync.Mutex // guards cache
	cache map[string]result
}

// Get is concurrency-safe.
func (memo *Memo) Get(key string) (value interface{}, err error) {
	memo.mu.Lock()
	res, ok := memo.cache[key]
	if !ok {
		res.value, res.err = memo.f(key)
		memo.cache[key] = res
	}
	memo.mu.Unlock()
	return res.value, res.err
}
```
Since every call acquires the mutex, `Get` serializes requests that could otherwise have run in parallel.
A Memo that may perform redundant computation
Consider the following improvement:
```go
func (memo *Memo) Get(key string) (value interface{}, err error) {
	memo.mu.Lock()
	res, ok := memo.cache[key]
	memo.mu.Unlock()
	if !ok {
		res.value, res.err = memo.f(key)

		// Between the two critical sections, several goroutines
		// may race to compute f(key) and update the map.
		memo.mu.Lock()
		memo.cache[key] = res
		memo.mu.Unlock()
	}
	return res.value, res.err
}
```
This version acquires the lock twice: the first time to query the cache, and the second time to update the cache when the query found nothing. Between the two critical sections, several goroutines may compute `f(key)` for the same key, so work can be duplicated. Ideally we would avoid this redundant work; this property is sometimes called duplicate suppression.
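One hypothetical way to observe the duplication is to count how often `f` actually runs when many goroutines request the same key at once; the `slow` function and the count of ten goroutines below are illustrative assumptions:

```go
import (
	"fmt"
	"sync"
	"sync/atomic"
	"time"
)

func main() {
	var calls int64
	// slow simulates an expensive function and counts its invocations.
	slow := func(key string) (interface{}, error) {
		atomic.AddInt64(&calls, 1)
		time.Sleep(10 * time.Millisecond)
		return key + "!", nil
	}
	m := New(slow)

	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			m.Get("same-key") // every goroutine asks for the same key
		}()
	}
	wg.Wait()
	// With the two-critical-section Get, this often prints a number
	// greater than 1: several goroutines computed f("same-key").
	fmt.Println("f was called", calls, "times")
}
```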
Duplicate suppression through a channel
In the fourth version of the cache, each `entry` gains a new channel, `ready`. After the entry's `result` field has been set, the channel is closed; waiting goroutines receive this broadcast and can then read the result from the `entry`.
```go
// Func is the type of the function to memoize.
type Func func(string) (interface{}, error)

type result struct {
	value interface{}
	err   error
}

type entry struct {
	res   result
	ready chan struct{} // closed when res is ready
}

func New(f Func) *Memo {
	return &Memo{f: f, cache: make(map[string]*entry)}
}

type Memo struct {
	f     Func
	mu    sync.Mutex        // guards cache
	cache map[string]*entry // the cache now holds entries
}

func (memo *Memo) Get(key string) (value interface{}, err error) {
	memo.mu.Lock()
	e := memo.cache[key]
	if e == nil {
		// This is the first request for this key.
		// This goroutine becomes responsible for computing
		// the value and broadcasting the ready condition.
		e = &entry{ready: make(chan struct{})}
		memo.cache[key] = e
		memo.mu.Unlock()

		e.res.value, e.res.err = memo.f(key)

		close(e.ready) // broadcast ready condition
	} else {
		// This is a repeat request for this key.
		memo.mu.Unlock()

		<-e.ready // wait for ready condition
	}
	return e.res.value, e.res.err
}
```
When `Get` finds no record for a key in the `memo` cache, it constructs an `entry` and puts it in the cache, but at this point the result for `key` has not yet been computed. If other goroutines call `Get` for the same `key` in the meantime, they block at the `<-e.ready` statement, waiting to receive from the channel. Only when the computation completes and the goroutine responsible for it closes the channel can the other goroutines proceed and read the result.
- When a goroutine queries a key that has no result yet, it creates an `entry`, puts it in the cache, releases the lock, and calls `f` to perform the computation. Once finished, it updates the corresponding `entry` and closes its `ready` channel;
- When a goroutine queries a key whose `entry` already exists, it immediately releases the lock and waits for that entry's channel to be closed (the close-as-broadcast idiom is sketched right after this list).
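The close-as-broadcast idiom at the heart of this version can be shown in isolation: closing a channel unblocks every goroutine receiving from it, so one producer can wake any number of waiters at once. This is a standalone sketch, not part of the Memo code:

```go
import (
	"fmt"
	"sync"
	"time"
)

func main() {
	ready := make(chan struct{})
	var wg sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			<-ready // blocks until the channel is closed
			fmt.Println("waiter", id, "saw the broadcast")
		}(i)
	}
	time.Sleep(50 * time.Millisecond) // pretend to compute the result
	close(ready)                      // broadcast: all waiters proceed at once
	wg.Wait()
}
```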
Another design: sharing memory by communicating
The versions above share variables and guard them with locks; an alternative approach is communicating sequential processes (CSP).
In the new design, the `map` variable is confined to a single monitor goroutine, and callers of `Get` send it messages.
```go
// Func is the type of the function to memoize.
type Func func(key string) (interface{}, error)

// A result is the result of calling a Func.
type result struct {
	value interface{}
	err   error
}

type entry struct {
	res   result
	ready chan struct{} // closed when res is ready
}

// A request is a message requesting that the Func be applied to key.
type request struct {
	key      string
	response chan<- result // the client wants a single result
}

type Memo struct{ requests chan request }

// New returns a memoization of f. Clients must subsequently call Close.
func New(f Func) *Memo {
	memo := &Memo{requests: make(chan request)}
	go memo.server(f)
	return memo
}

func (memo *Memo) Get(key string) (interface{}, error) {
	response := make(chan result)
	memo.requests <- request{key, response}
	res := <-response
	return res.value, res.err
}

func (memo *Memo) Close() { close(memo.requests) }

func (memo *Memo) server(f Func) {
	cache := make(map[string]*entry)
	for req := range memo.requests {
		e := cache[req.key]
		if e == nil {
			// This is the first request for this key.
			e = &entry{ready: make(chan struct{})}
			cache[req.key] = e
			go e.call(f, req.key) // call f(key)
		}
		go e.deliver(req.response)
	}
}

func (e *entry) call(f Func, key string) {
	// Evaluate the function.
	e.res.value, e.res.err = f(key)
	// Broadcast the ready condition.
	close(e.ready)
}

func (e *entry) deliver(response chan<- result) {
	// Wait for the ready condition.
	<-e.ready
	// Send the result to the client.
	response <- e.res
}
```
The `Get` method creates a `response` channel, wraps it in a request, sends the request to the monitor goroutine, and then reads from its own `response` channel.
The monitor goroutine (the `server` method) reads from the `requests` channel in a loop until the channel is closed. For each request, it first consults the cache, creating and inserting a new `entry` if none is found:
- If the monitor goroutine does not find the key, it creates an `entry`, puts it in the cache, and calls `go e.call(f, req.key)` to start a goroutine that computes the result and closes the `ready` channel. It also calls `go e.deliver(req.response)`, which waits for the `ready` channel to be closed and then sends the result into the `response` channel;
- If the monitor goroutine finds the result directly in the cache, the `entry` found for the `key` already contains a closed channel, so the `go e.deliver(req.response)` call can put the result into the `response` channel immediately.
To sum up, `server` is responsible for reading requests from the `requests` channel; for a `key` whose result has not yet been computed, it creates a new goroutine to perform the computation, and it answers each request through the `response` channel carried in the request.
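A minimal usage sketch of the channel-based Memo follows; the anonymous slow function is an illustrative assumption. `Get` may be called from any number of goroutines, and `Close` shuts the monitor down when the cache is no longer needed:

```go
import (
	"fmt"
	"sync"
	"time"
)

func main() {
	m := New(func(key string) (interface{}, error) {
		time.Sleep(10 * time.Millisecond) // simulate expensive work
		return key + "!", nil
	})
	defer m.Close() // stops the monitor goroutine's range loop

	var wg sync.WaitGroup
	for _, key := range []string{"a", "b", "a"} { // "a" repeats: computed once
		wg.Add(1)
		go func(key string) {
			defer wg.Done()
			v, err := m.Get(key)
			fmt.Println(key, v, err)
		}(key)
	}
	wg.Wait()
}
```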
Further refinements could limit the number of goroutines performing computations, or use the `context` package to control the lifetime of `server`, and so on.
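As one hedged sketch of such a refinement, the server loop could watch a context's `Done` channel alongside the `requests` channel; `serverCtx` is a hypothetical variant, not part of the original code:

```go
import "context"

// serverCtx is a hypothetical variant of server whose lifetime is bound
// to ctx: cancelling the context stops the monitor goroutine even if the
// requests channel is never closed.
func (memo *Memo) serverCtx(ctx context.Context, f Func) {
	cache := make(map[string]*entry)
	for {
		select {
		case req, ok := <-memo.requests:
			if !ok {
				return // Close was called
			}
			e := cache[req.key]
			if e == nil {
				e = &entry{ready: make(chan struct{})}
				cache[req.key] = e
				go e.call(f, req.key)
			}
			go e.deliver(req.response)
		case <-ctx.Done():
			return // context cancelled: stop serving requests
		}
	}
}
```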