groupcache Introduction
Using a cache in a software system can reduce response time, improve the user experience, and relieve pressure on particular system modules.
groupcache is an open-source cache component. Unlike memcached and Redis, groupcache does not need to be deployed separately; it is used as a library inside your program, which simplifies deploying the programs we develop.
This article analyzes the key parts of the groupcache source code: the LRU cache definition and how the same key is loaded only once.
Implementation of cache filling and load suppression
The implementation of the `load` function mentioned in the previous article, and the logic of cache filling, are both reflected here.
groupcache tries to avoid fetching data from the backing source. When data is missing locally, it is first requested from a peer; a peer hit is filled directly into the local cache, while a peer miss triggers a load from the source. This is the cache-filling logic.
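As a rough sketch of that fill order, the following uses simplified stand-in types (a plain map and function values, not groupcache's actual API) to show local-first, then peer, then source:

```go
package main

import "fmt"

// getter simulates asking a peer for a value; it reports whether the
// peer had it. This is a simplified stand-in, not groupcache's API.
type getter func(key string) (string, bool)

// get implements the fill order: local cache, then peer, then source.
// A peer hit or a source load is filled into the local cache.
func get(key string, local map[string]string, peer getter, source func(string) string) string {
	if v, ok := local[key]; ok {
		return v // local hit
	}
	if v, ok := peer(key); ok {
		local[key] = v // peer hit: fill locally
		return v
	}
	v := source(key) // miss everywhere: load from the source
	local[key] = v
	return v
}

func main() {
	local := map[string]string{}
	peer := func(key string) (string, bool) { return "", false } // peer always misses
	source := func(key string) string { return "value-of-" + key }

	fmt.Println(get("a", local, peer, source)) // loaded from the source
	fmt.Println(get("a", local, peer, source)) // now a local hit
}
```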
Load suppression, i.e. avoiding duplicate loads of the same key, is implemented by the `singleflight` package.
There are two main structures in this package:
`call` is used to store the result (`val`) and the error (`err`); each in-flight key corresponds to one `call` instance. Its `wg` is used to make other requests wait.
```go
type call struct {
	wg  sync.WaitGroup
	val interface{}
	err error
}
```
`Group` is used to store all `call` instances, recording all in-flight requests.
```go
type Group struct {
	mu sync.Mutex       // protects m
	m  map[string]*call // lazily initialized
}
```
The `Do` method is the implementation of load suppression.
When a request comes in, the mutex is locked first and the `m` map is lazily initialized. The map's key is the requested `key`, and its value is the corresponding `*call`.
```go
g.mu.Lock()
if g.m == nil {
	g.m = make(map[string]*call)
}
```
If a load for this key is already in flight, the mutex acquired in the previous step is released, the caller waits for the existing load to finish, and its result is returned.
```go
if c, ok := g.m[key]; ok {
	g.mu.Unlock()
	c.wg.Wait()
	return c.val, c.err
}
```
If no load for the key is in flight, a new `call` instance is created and recorded in the map, `wg.Add(1)` is called so that later requests for the same key will block, and then the mutex acquired in the previous step is released.
```go
c := new(call)
c.wg.Add(1)
g.m[key] = c
g.mu.Unlock()
```
Then the supplied function is called (the author does not restrict this function to data loading; by passing in different `func`s, other kinds of work can be suppressed the same way), its result is assigned to the `call`, and once the result is obtained `wg.Done()` ends the blocking.
```go
c.val, c.err = fn()
c.wg.Done()
```
Finally, the record is deleted from the `map`:
```go
g.mu.Lock()
delete(g.m, key)
g.mu.Unlock()
```
This function is built mainly on `sync.WaitGroup` blocking, which is also the part beginners find hardest to understand.
Imagine a scenario:
In the college dormitory, you and your roommate have to go to the cafeteria to buy lunch, and you say to your roommate, "Just go by yourself, bring me one." Then you wait in the dormitory for your roommate to come back.
In this scene, you and your roommate are two requests for the same key; your waiting in the dormitory is the `wg.Wait()` block.
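The whole mechanism can be exercised with a small self-contained sketch. The `Group` and `Do` below are assembled from the snippets walked through above; the atomic counter and the short sleep are added only for demonstration, to show that concurrent callers share one load:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
	"time"
)

type call struct {
	wg  sync.WaitGroup
	val interface{}
	err error
}

type Group struct {
	mu sync.Mutex       // protects m
	m  map[string]*call // lazily initialized
}

// Do runs fn once per key at a time; concurrent callers for the
// same key wait for the in-flight call and share its result.
func (g *Group) Do(key string, fn func() (interface{}, error)) (interface{}, error) {
	g.mu.Lock()
	if g.m == nil {
		g.m = make(map[string]*call)
	}
	if c, ok := g.m[key]; ok {
		g.mu.Unlock()
		c.wg.Wait()
		return c.val, c.err
	}
	c := new(call)
	c.wg.Add(1)
	g.m[key] = c
	g.mu.Unlock()

	c.val, c.err = fn()
	c.wg.Done()

	g.mu.Lock()
	delete(g.m, key)
	g.mu.Unlock()

	return c.val, c.err
}

func main() {
	var g Group
	var loads int64
	var wg sync.WaitGroup
	start := make(chan struct{})
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			<-start // release all goroutines at roughly the same time
			v, _ := g.Do("key", func() (interface{}, error) {
				atomic.AddInt64(&loads, 1)
				time.Sleep(10 * time.Millisecond) // keep the load in flight so others pile up
				return "result", nil
			})
			if v != "result" {
				panic("unexpected value")
			}
		}()
	}
	close(start)
	wg.Wait()
	// Typically 1: the overlapping calls were suppressed into one load.
	fmt.Println("loads:", atomic.LoadInt64(&loads))
}
```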
cache (LRU)
The main cache and hot cache mentioned in the previous article are both implemented by this LRU cache.
The cache implementation relies on a doubly linked list. The fields of `Cache` are:
`MaxEntries`: the maximum number of entries the cache may hold.
`OnEvicted`: a callback executed when an eviction occurs (i.e., when `MaxEntries` is reached).
`ll`: the doubly linked list itself.
`cache`: a map from key to the corresponding element in the linked list.
```go
type Cache struct {
	// MaxEntries is the maximum number of cache entries before
	// an item is evicted. Zero means no limit.
	MaxEntries int

	// OnEvicted optionally specifies a callback function to be
	// executed when an entry is purged from the cache.
	OnEvicted func(key Key, value interface{})

	ll    *list.List
	cache map[interface{}]*list.Element
}
```
When adding, the `cache` map is lazily initialized first. If the `key` already exists, its element is moved to the front of the list (the linked list has no index; "position" is used only for ease of understanding) and its value is updated.
If the key does not exist, it is inserted directly at the front.
If the length after insertion exceeds the limit, a cleanup is performed.
```go
func (c *Cache) Add(key Key, value interface{}) {
	if c.cache == nil {
		c.cache = make(map[interface{}]*list.Element)
		c.ll = list.New()
	}
	if ee, ok := c.cache[key]; ok {
		c.ll.MoveToFront(ee)
		ee.Value.(*entry).value = value
		return
	}
	ele := c.ll.PushFront(&entry{key, value})
	c.cache[key] = ele
	if c.MaxEntries != 0 && c.ll.Len() > c.MaxEntries {
		c.RemoveOldest()
	}
}
```
Cleanup removes the element at the tail of the list, which explains why elements are moved to the front on every operation.
```go
func (c *Cache) RemoveOldest() {
	if c.cache == nil {
		return
	}
	ele := c.ll.Back()
	if ele != nil {
		c.removeElement(ele)
	}
}
```
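To make the eviction behavior concrete, here is a self-contained sketch that reproduces the `Cache`, `Add`, and `RemoveOldest` shown above and fills in a minimal `Get` and the `removeElement` helper (written here for illustration) so the example runs on its own:

```go
package main

import (
	"container/list"
	"fmt"
)

type Key interface{}

type entry struct {
	key   Key
	value interface{}
}

type Cache struct {
	MaxEntries int
	OnEvicted  func(key Key, value interface{})
	ll         *list.List
	cache      map[interface{}]*list.Element
}

func (c *Cache) Add(key Key, value interface{}) {
	if c.cache == nil {
		c.cache = make(map[interface{}]*list.Element)
		c.ll = list.New()
	}
	if ee, ok := c.cache[key]; ok {
		c.ll.MoveToFront(ee)
		ee.Value.(*entry).value = value
		return
	}
	ele := c.ll.PushFront(&entry{key, value})
	c.cache[key] = ele
	if c.MaxEntries != 0 && c.ll.Len() > c.MaxEntries {
		c.RemoveOldest()
	}
}

// Get looks up a key; a hit also refreshes its recency.
func (c *Cache) Get(key Key) (interface{}, bool) {
	if c.cache == nil {
		return nil, false
	}
	if ele, ok := c.cache[key]; ok {
		c.ll.MoveToFront(ele)
		return ele.Value.(*entry).value, true
	}
	return nil, false
}

func (c *Cache) RemoveOldest() {
	if c.cache == nil {
		return
	}
	if ele := c.ll.Back(); ele != nil {
		c.removeElement(ele)
	}
}

// removeElement unlinks the element, drops its map entry,
// and fires the eviction callback if one is set.
func (c *Cache) removeElement(e *list.Element) {
	c.ll.Remove(e)
	kv := e.Value.(*entry)
	delete(c.cache, kv.key)
	if c.OnEvicted != nil {
		c.OnEvicted(kv.key, kv.value)
	}
}

func main() {
	c := &Cache{MaxEntries: 2}
	c.Add("a", 1)
	c.Add("b", 2)
	c.Get("a")    // touch "a" so "b" becomes the oldest
	c.Add("c", 3) // exceeds MaxEntries: the tail ("b") is evicted
	_, ok := c.Get("b")
	fmt.Println("b still cached:", ok) // false: "b" was evicted
}
```

Note how touching `"a"` before inserting `"c"` changes which entry sits at the tail, so `"b"` is the one evicted.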
This concludes the detailed look at groupcache as a cache component for Go. For more on using go groupcache, please follow my other related articles!