1. Why use Redis as cache?
Compared with traditional databases, Redis has the following advantages:
- Low latency & high throughput: Redis operates in memory, so reads and writes are far faster than in disk-based databases.
- Supports multiple data structures: String, Hash, List, Set, Sorted Set and other rich data types, suitable for different cache scenarios.
- Persistence support: AOF and RDB can optionally be used to persist data and prevent data loss.
- Distributed support: Supports master-slave replication, sentinel mode and cluster mode, and can scale horizontally.
- Rich expiration strategies: Supports multiple cache eviction policies to keep memory usage under control.
2. Common usage patterns of Redis cache
Redis is generally used as a cache in the Look-aside Cache or Write-through Cache pattern.
2.1. Look-aside Cache
Principle:
- First query the Redis cache and return the value directly on a hit;
- On a cache miss, query the database, then write the result into Redis so later reads can hit the cache.
Code example (using Python + Redis):
```python
import redis
import time

# Connect to Redis
cache = redis.Redis(host='localhost', port=6379, decode_responses=True)

def get_data_from_db(key):
    """Simulate a database query"""
    time.sleep(1)  # Simulated query delay
    return f"Value of {key}"

def get_data(key):
    """Check Redis first; on a miss, query the database and store the result in Redis"""
    value = cache.get(key)
    if value is None:
        print("Cache Miss, fetching from DB...")
        value = get_data_from_db(key)
        cache.setex(key, 3600, value)  # Set a 1-hour expiration
    else:
        print("Cache Hit!")
    return value

# Test
print(get_data("user:1001"))
print(get_data("user:1001"))
```
Advantages:
- Well suited to read-heavy, write-light scenarios, such as hot-data queries.
- The cache validity period can be controlled, avoiding long-term storage of stale data.
Shortcomings:
- May suffer from cache penetration, cache breakdown and cache avalanche problems (explained in detail later).
2.2. Write-through cache
Principle:
- When writing data, update the database and Redis at the same time to keep them consistent;
- When reading, check Redis first; a hit returns directly, while a miss falls back to the database and refreshes the cache.
Code example:
```python
def update_data(key, value):
    """Update the database, then update the cache"""
    print("Updating database...")
    time.sleep(1)  # Simulate the database write delay
    cache.setex(key, 3600, value)  # Update the cache immediately
    print("Cache updated!")

# Test
update_data("user:1001", "Updated Value")
print(get_data("user:1001"))  # Should return the new value
```
Advantages:
- Suitable for scenarios with similar read and write frequency, such as e-commerce inventory or user account balances.
- Because the cache is updated on every write, it reduces cache breakdown problems.
Shortcomings:
- Every write must also update the cache, which increases write pressure.
3. Solve FAQs on Caching
3.1. Cache penetration
Problem:
- The requested data does not exist in the database, so every request misses the cache and goes straight to the database.
- Database pressure can spike sharply, potentially bringing the database down.
Solution:
Cache null values: for keys whose query result is empty, store a null marker in Redis as well, to avoid querying the database repeatedly:
```python
value = cache.get("user:9999")
if value is None:
    db_value = get_data_from_db("user:9999")
    if db_value is None:
        cache.setex("user:9999", 3600, "NULL")  # Cache a null marker
    else:
        cache.setex("user:9999", 3600, db_value)
```
Bloom filter: before querying Redis, use a Bloom filter to check whether the key could possibly exist; if not, reject the request immediately.
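In production a Redis-side Bloom filter (e.g. the RedisBloom module's BF.ADD/BF.EXISTS) is typical; to illustrate the idea, here is a minimal pure-Python sketch (the class and its parameters are hypothetical, chosen for the example):

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter sketch: k hash positions over a fixed bit array."""
    def __init__(self, size=8192, num_hashes=4):
        self.size = size
        self.num_hashes = num_hashes
        self.bits = bytearray(size // 8)

    def _positions(self, key):
        # Derive k independent bit positions from salted SHA-256 digests
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, key):
        for pos in self._positions(key):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, key):
        # False means definitely absent; True means possibly present
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(key))

bf = BloomFilter()
bf.add("user:1001")
print(bf.might_contain("user:1001"))  # True
print(bf.might_contain("user:9999"))  # almost certainly False
```

A "definitely absent" answer lets you skip both Redis and the database, which is exactly what blocks penetration by non-existent keys.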
3.2. Cache breakdown
Problem:
- When a hot key expires, a flood of concurrent requests all query the database at the same time, overloading it.
Solution:
Set a reasonable expiration time, and add random jitter so multiple keys do not expire at the same moment.
Mutex lock: after the cache expires, only one thread rebuilds the cache while the others wait:
```python
# Try to acquire the lock; nx=True means "set only if not exists",
# ex=10 adds a timeout so a crashed worker cannot hold the lock forever
lock = cache.set("lock:user:1001", 1, nx=True, ex=10)
if lock:
    value = get_data_from_db("user:1001")
    cache.setex("user:1001", 3600, value)  # Update the cache
    cache.delete("lock:user:1001")         # Release the lock
```
3.3. Cache Avalanche
Problem:
- A large number of cache keys expire at the same time, sending a surge of requests directly to the database and risking an outage.
Solution:
- Set different expiration times for cache keys (e.g. 3600 + random(600) seconds).
- Use a Redis cluster to spread the caching load.
- Preload (warm up) data and refresh the cache periodically to avoid large-scale expiration.
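The jittered-expiration idea can be sketched as a small helper (names and constants here are illustrative):

```python
import random

BASE_TTL = 3600        # one hour
JITTER_SECONDS = 600   # up to 10 extra minutes

def jittered_ttl(base=BASE_TTL, jitter=JITTER_SECONDS):
    """Return a TTL with random jitter so keys written together
    do not all expire at the same moment."""
    return base + random.randint(0, jitter)

# Usage with the cache from earlier, e.g.:
#   cache.setex(key, jittered_ttl(), value)
print([jittered_ttl() for _ in range(5)])
```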
4. Redis Advanced Optimization Tips
4.1. Use appropriate data structures
- String: suitable for simple key-value storage, such as user-profile caches.
- Hash: suitable for storing structured data.
- List: suitable for message queues.
- Set: suitable for deduplication.
- Sorted Set: suitable for leaderboards and rankings.
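To see why a Sorted Set fits rankings, here is a plain-Python sketch that mirrors the semantics of the Redis commands ZADD and ZREVRANGE (the `Leaderboard` class is hypothetical, shown only to illustrate the idea; it runs without a Redis server):

```python
class Leaderboard:
    """Mimics a Redis sorted set: members ordered by score."""
    def __init__(self):
        self.scores = {}  # member -> score

    def zadd(self, member, score):
        self.scores[member] = score

    def zrevrange(self, start, stop):
        """Return members ranked by score, highest first (like ZREVRANGE)."""
        ranked = sorted(self.scores, key=self.scores.get, reverse=True)
        return ranked[start:stop + 1]

lb = Leaderboard()
lb.zadd("alice", 120)
lb.zadd("bob", 95)
lb.zadd("carol", 150)
print(lb.zrevrange(0, 1))  # top two players: ['carol', 'alice']
```

In Redis itself the same calls would be `cache.zadd("board", {"alice": 120})` and `cache.zrevrange("board", 0, 1)`, with the ordering maintained server-side.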
4.2. Redis LRU eviction policy
When memory reaches the configured limit, Redis can automatically evict the least recently used keys:
```
CONFIG SET maxmemory-policy allkeys-lru
```
4.3. Adopt Redis distributed architecture
- Master-slave replication: suitable for read-heavy, write-light scenarios.
- Redis Sentinel: provides automatic failover.
- Redis Cluster: supports sharded storage.
Summary
As an efficient cache, Redis can greatly improve data access speed and reduce database pressure. In practice, however, you need to combine caching patterns, eviction policies and a distributed architecture, and guard against cache penetration, breakdown and avalanche, to build a highly available, high-performance cache system.
The above is a detailed explanation of how to use Redis as an efficient cache. For more on Redis caching, please see my other related articles!