SoFunction
Updated on 2025-03-01

Cache implementation details in C# .NET

1. Basic concepts of caching

A cache is a simple but very effective concept: record process data and reuse the results of expensive operations. When we perform a heavy operation, we save the result in our cache container. The next time we need that result, we pull it out of the cache container instead of doing the heavy lifting again.

For example, to get a person's avatar, you might need a trip to the database. Instead of performing that trip every time, we save the avatar in the cache and pull it from memory every time we need it.

Caching is ideal for data that does not change frequently. Or even better, never changes. Constantly changing data, such as the current machine's time, should not be cached, otherwise you will get wrong results.

2. Types of cache

There are 3 types of caches:

  • In-Memory Cache: Used to implement a cache in a single process. When the process terminates, the cache terminates with it. If you run the same process on multiple servers, each server has its own separate cache.
  • Persistent in-process cache: This means backing up the cache outside of process memory, perhaps in a file or in a database. It is more difficult, but the cache is not lost if your process restarts. Best when cache items are expensive to obtain and your process tends to restart a lot.
  • Distributed Cache: This means you want a shared cache for multiple machines, usually several servers. With a distributed cache, the data is stored in an external service. If one server saves a cache item, other servers can use it as well. Services like Redis [1] are perfect for this.
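To make the distributed option concrete, here is a minimal sketch using the `IDistributedCache` abstraction from `Microsoft.Extensions.Caching.Distributed` (the implementation behind it could be Redis, SQL Server, or anything else registered in DI). `SharedAvatarCache` and `GetOrCreateAsync` are illustrative names, not part of any library:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Distributed;

// Sketch: keys and values go through a shared backing store,
// so every server sees the same cached data.
public class SharedAvatarCache
{
    private readonly IDistributedCache _cache;

    public SharedAvatarCache(IDistributedCache cache) => _cache = cache;

    public async Task<byte[]> GetOrCreateAsync(string userId, Func<Task<byte[]>> createItem)
    {
        // IDistributedCache stores raw byte[] values.
        byte[] avatar = await _cache.GetAsync(userId);
        if (avatar == null)
        {
            avatar = await createItem();
            await _cache.SetAsync(userId, avatar, new DistributedCacheEntryOptions
            {
                // Expire after an hour no matter what (an arbitrary choice here).
                AbsoluteExpirationRelativeToNow = TimeSpan.FromHours(1)
            });
        }
        return avatar;
    }
}
```

Note that values must be serialized to `byte[]`, which is one of the costs of going distributed.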

We will only discuss in-process caches here.

3. Early practices of in-process caching

Let's create a very simple cache implementation using C#:

public class NaiveCache<TItem>
{
    private readonly Dictionary<object, TItem> _cache = new Dictionary<object, TItem>();

    public TItem GetOrCreate(object key, Func<TItem> createItem)
    {
        if (!_cache.ContainsKey(key))
        {
            _cache[key] = createItem();
        }
        return _cache[key];
    }
}

usage:

var _avatarCache = new NaiveCache<byte[]>();
// ...
var myAvatar = _avatarCache.GetOrCreate(userId, () => _database.GetAvatar(userId));

This simple code solves a key problem: to obtain a user's avatar, only the first request actually performs a trip to the database. The avatar data (a byte[]) is then saved in process memory, and all subsequent requests for the avatar are pulled from memory, saving time and resources.

But, like most things in programming, nothing is that simple. The above solution is inadequate for several reasons. For one, this implementation is not thread-safe: exceptions may occur when it is used from multiple threads. Besides that, cached items stay in memory forever, which is actually very bad.
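The thread-safety part alone is easy to fix. Here is a sketch of the same naive cache built on `ConcurrentDictionary` (the name `ThreadSafeNaiveCache` is mine, for illustration). It still has the "items live forever" problem, and note the caveat in the comment about `GetOrAdd`:

```csharp
using System;
using System.Collections.Concurrent;

public class ThreadSafeNaiveCache<TItem>
{
    private readonly ConcurrentDictionary<object, TItem> _cache =
        new ConcurrentDictionary<object, TItem>();

    public TItem GetOrCreate(object key, Func<TItem> createItem)
    {
        // Thread-safe, but GetOrAdd can still invoke createItem more than
        // once under contention; it only guarantees a single value is stored.
        return _cache.GetOrAdd(key, _ => createItem());
    }
}
```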

This is why we should delete items from the cache:

  • The cache takes up memory, which will eventually lead to out-of-memory exceptions and crashes.
  • High memory consumption can cause GC pressure (also known as memory pressure). In this state, the garbage collector works harder than it should, hurting performance.
  • If the data changes, the cache may need to be refreshed. Our caching infrastructure should support this capability.

To deal with these issues, cache frameworks have eviction policies (aka removal policies): rules for removing items from the cache according to some logic. Common eviction policies are:

  • An Absolute Expiration policy removes an item from the cache after a fixed amount of time, no matter what.
  • A Sliding Expiration policy removes an item from the cache if it wasn't accessed within a fixed window of time. So if I set the expiration to 1 minute, the item stays in the cache as long as I use it every 30 seconds. Once I don't use it for more than a minute, the item is evicted.
  • A Size Limit policy limits the cache's memory size.

Now that we know what we need, let's keep looking for better solutions.

4. Better solutions

As a blogger, I was somewhat dismayed to find that Microsoft has already created a great cache implementation. It robbed me of the joy of creating a similar implementation myself, but at least it meant less work in writing this blog post.

I'll show you Microsoft's solution, how to use it effectively, and then how to improve it in some scenarios.

System.Runtime.Caching/MemoryCache vs Microsoft.Extensions.Caching.Memory

Microsoft has 2 solutions, in 2 different NuGet packages, for caching: System.Runtime.Caching (which provides MemoryCache) and Microsoft.Extensions.Caching.Memory (which provides IMemoryCache). Both are great. Per Microsoft's suggestion, prefer Microsoft.Extensions.Caching.Memory, because it has better integration with ASP.NET Core: it can easily be injected through ASP.NET Core's dependency injection mechanism.
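As a sketch of that integration, the registration is one call to `AddMemoryCache()`, and consumers just take an `IMemoryCache` constructor parameter (the `AvatarService` class below is a hypothetical consumer, not from any library):

```csharp
using Microsoft.Extensions.Caching.Memory;
using Microsoft.Extensions.DependencyInjection;

// Register the cache once at startup (in ASP.NET Core this would be
// done on the host's service collection).
var services = new ServiceCollection();
services.AddMemoryCache();
var provider = services.BuildServiceProvider();

// The container can now hand an IMemoryCache to anything that asks for one.
var service = new AvatarService(provider.GetRequiredService<IMemoryCache>());

// A hypothetical consumer that receives the cache by constructor injection.
public class AvatarService
{
    private readonly IMemoryCache _cache;

    public AvatarService(IMemoryCache cache) => _cache = cache;
}
```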

1. Basic IMemoryCache example

Here is a basic example:

public class SimpleMemoryCache<TItem>
{
    private MemoryCache _cache = new MemoryCache(new MemoryCacheOptions());

    public TItem GetOrCreate(object key, Func<TItem> createItem)
    {
        TItem cacheEntry;
        if (!_cache.TryGetValue(key, out cacheEntry)) // Look for cache key.
        {
            // Key not in cache, so get data.
            cacheEntry = createItem();

            // Save data in cache.
            _cache.Set(key, cacheEntry);
        }
        return cacheEntry;
    }
}

usage:

var _avatarCache = new SimpleMemoryCache<byte[]>();
// ...
var myAvatar = _avatarCache.GetOrCreate(userId, () => _database.GetAvatar(userId));

This is very similar to my NaiveCache, so what has changed? Well, for one, this is a thread-safe implementation. You can call it safely from multiple threads at once.

The second thing is that MemoryCache allows all the eviction policies we talked about earlier.

Here is an example:

2. IMemoryCache with eviction strategy

public class MemoryCacheWithPolicy<TItem>
{
    private MemoryCache _cache = new MemoryCache(new MemoryCacheOptions()
    {
        SizeLimit = 1024
    });

    public TItem GetOrCreate(object key, Func<TItem> createItem)
    {
        TItem cacheEntry;
        if (!_cache.TryGetValue(key, out cacheEntry)) // Look for cache key.
        {
            // Key not in cache, so get data.
            cacheEntry = createItem();

            var cacheEntryOptions = new MemoryCacheEntryOptions()
                .SetSize(1) // Size amount
                // Priority on removing when reaching size limit (memory pressure)
                .SetPriority(CacheItemPriority.High)
                // Keep in cache for this time, reset time if accessed.
                .SetSlidingExpiration(TimeSpan.FromSeconds(2))
                // Remove from cache after this time, regardless of sliding expiration
                .SetAbsoluteExpiration(TimeSpan.FromSeconds(10));

            // Save data in cache.
            _cache.Set(key, cacheEntry, cacheEntryOptions);
        }
        return cacheEntry;
    }
}
  • SizeLimit is added to MemoryCacheOptions. This adds a size-based policy to our cache container. The size has no unit; instead, we set a size amount on each cache entry. In this case, we set the amount to 1 with SetSize(1), which means the cache is limited to 1024 items.
  • When we reach the size limit, which cache item should be removed first? You can set each entry's priority with .SetPriority(). The levels are Low, Normal, High, and NeverRemove.
  • SetSlidingExpiration(TimeSpan.FromSeconds(2)) sets the sliding expiration to 2 seconds, meaning that if an item is not accessed within 2 seconds, it will be removed.
  • SetAbsoluteExpiration(TimeSpan.FromSeconds(10)) sets the absolute expiration to 10 seconds, meaning the item will be evicted within 10 seconds whether or not it was accessed.
  • In addition to the options in the example, you can register a RegisterPostEvictionCallback delegate, which will be called when an item is evicted.

This is a very comprehensive feature set. It makes you wonder whether there is anything left to add. There are actually a few things.
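The eviction callback mentioned above can be sketched like this (the key name and value here are made up for the example; note the callback is invoked after the removal, not inline with it):

```csharp
using System;
using Microsoft.Extensions.Caching.Memory;

var cache = new MemoryCache(new MemoryCacheOptions());

var options = new MemoryCacheEntryOptions()
    .SetAbsoluteExpiration(TimeSpan.FromSeconds(10))
    // Called after the entry is removed, with the reason
    // (Expired, Capacity, Removed, ...).
    .RegisterPostEvictionCallback((key, value, reason, state) =>
    {
        Console.WriteLine($"Entry {key} was evicted: {reason}");
    });

cache.Set("user-1-avatar", new byte[] { 1, 2, 3 }, options);
cache.Remove("user-1-avatar"); // eventually triggers the callback
```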

3. Problems and missing functions

There are several important missing parts in this implementation.

  • While you can set a size limit, the cache doesn't actually monitor GC pressure. If it did, it could tighten its policies when the pressure is high and relax them when the pressure is low.
  • When multiple threads request the same item at the same time, the later requests don't wait for the first to complete; the item gets created multiple times. For example, suppose we are caching avatars, and getting an avatar from the database takes 10 seconds. If we request the avatar 2 seconds after the first request, the second request will check whether the avatar is cached (it isn't yet) and start another trip to the database.

Here is an implementation, wrapping MemoryCache, that solves this problem completely:

public class WaitToFinishMemoryCache<TItem>
{
    private MemoryCache _cache = new MemoryCache(new MemoryCacheOptions());
    private ConcurrentDictionary<object, SemaphoreSlim> _locks = new ConcurrentDictionary<object, SemaphoreSlim>();

    public async Task<TItem> GetOrCreate(object key, Func<Task<TItem>> createItem)
    {
        TItem cacheEntry;

        if (!_cache.TryGetValue(key, out cacheEntry)) // Look for cache key.
        {
            SemaphoreSlim mylock = _locks.GetOrAdd(key, k => new SemaphoreSlim(1, 1));

            await mylock.WaitAsync();
            try
            {
                if (!_cache.TryGetValue(key, out cacheEntry))
                {
                    // Key not in cache, so get data.
                    cacheEntry = await createItem();
                    _cache.Set(key, cacheEntry);
                }
            }
            finally
            {
                mylock.Release();
            }
        }

        return cacheEntry;
    }
}

usage:

var _avatarCache = new WaitToFinishMemoryCache<byte[]>();
// ...
var myAvatar = await _avatarCache.GetOrCreate(userId, async () => await _database.GetAvatar(userId));

4. Code description

This implementation locks the creation of an item, and the lock is specific to the key. For example, if we are waiting to fetch Alex's avatar, we can still get John's or Sarah's cached values on another thread.

The dictionary _locks stores all the locks. Regular locks don't work with async/await, so we need to use SemaphoreSlim [5].

There are 2 checks of if (!_cache.TryGetValue(key, out cacheEntry)) to see whether the value is already cached. The one inside the lock is the one that ensures the item is created only once; the one outside the lock is an optimization.

5. When to use WaitToFinishMemoryCache

This implementation obviously has some overhead. Let's consider when it's even necessary.

Use WaitToFinishMemoryCache in the following cases:

  • When creating an item has some cost and you want to create it as few times as possible.
  • When creating an item takes a long time.
  • When you must ensure an item is created only once per key.

Do not use WaitToFinishMemoryCache if:

  • There is no danger of multiple threads accessing the same cache entry.
  • You don't mind creating the item multiple times. For example, if one extra trip to the database won't change much.

Summary:

Caching is a very powerful pattern, but it is also dangerous and has its own complexities. Cache too much and you can cause GC pressure; cache too little and you can cause performance problems. Then there is distributed caching, a whole new world to explore. Such is the software development career: there is always something new to learn.

This is the end of this article on the details of cache implementation in C# .NET. For more related content on cache implementation in C# .NET, please search my previous articles or continue browsing the related articles below. I hope you will support me in the future!