1. Optimization of data structure
1. Use the most compact data structure that fits the data. For example, if you need to track a set of user IDs, store them in one large hash (or set) rather than creating a separate structure per user; consolidating many small keys into one structure eliminates the fixed per-key overhead that each top-level key carries.
2. Use integer encoding. For example, when storing a user's age, store it as an integer rather than a string: Redis keeps small integer values in a compact shared encoding, which saves memory.
3. Use Redis's HyperLogLog (PFADD/PFCOUNT) for cardinality estimation; it uses a small, bounded amount of memory (at most about 12 KB per key) instead of storing every unique value.
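In Redis this is as simple as PFADD/PFCOUNT. To illustrate why the estimate fits in a few kilobytes, here is a minimal, simplified HyperLogLog-style sketch in pure Python; it is not Redis's actual implementation, and the register count and bias corrections are deliberately simplified:

```python
import hashlib
import math

def hll_estimate(items, b=10):
    """Approximate the number of distinct items using 2**b registers.

    Simplified HyperLogLog-style sketch: each item is hashed, the low b
    bits pick a register, and the register keeps the maximum "rank"
    (position of the lowest set bit in the remaining hash bits) seen.
    """
    m = 1 << b                        # number of registers (2**10 = 1024)
    registers = [0] * m
    for item in items:
        h = int(hashlib.sha1(str(item).encode()).hexdigest(), 16)
        idx = h & (m - 1)             # low b bits choose the register
        w = h >> b                    # remaining bits feed the rank
        rank = 1
        while w & 1 == 0 and rank < 64:
            w >>= 1
            rank += 1
        registers[idx] = max(registers[idx], rank)
    alpha = 0.7213 / (1 + 1.079 / m)  # standard HLL bias-correction constant
    raw = alpha * m * m / sum(2.0 ** -r for r in registers)
    zeros = registers.count(0)
    if raw <= 2.5 * m and zeros:      # small-range correction (linear counting)
        return m * math.log(m / zeros)
    return raw
```

With b=10 the whole state is 1024 small registers no matter how many items are counted, which is why PFCOUNT stays accurate to within a few percent while a Redis HyperLogLog key never exceeds roughly 12 KB.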
2. Enable compression and compact encodings
Redis does not LZF-compress string values in memory; its LZF compression applies to RDB snapshot files, controlled by the rdbcompression parameter (on by default).
What does shrink in-memory data are the compact listpack/ziplist encodings used for small hashes, lists, and sorted sets, whose size thresholds (for example hash-max-listpack-entries) are tunable. Note that activerehashing controls incremental rehashing of the main key space, not compression.
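For reference, the relevant redis.conf directives look like the following. The threshold values shown are the illustrative stock defaults, not tuned recommendations, and the parameter names use the listpack spelling introduced in Redis 7 (older versions use the ziplist equivalents):

```conf
# Compress RDB snapshot files with LZF (on by default)
rdbcompression yes

# Keep hashes in the compact listpack encoding while they stay small
hash-max-listpack-entries 128
hash-max-listpack-value 64

# Same idea for sorted sets
zset-max-listpack-entries 128
zset-max-listpack-value 64

# For lists: how many quicklist nodes to leave uncompressed at each end
list-compress-depth 0
```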
3. Set the appropriate expiration time
Set an appropriate expiration time (TTL) for cached data to keep stale keys from accumulating indefinitely.
This ensures that memory is freed promptly when data is no longer needed; a maxmemory eviction policy can serve as a backstop for keys without a TTL.
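TTLs are set per key (EXPIRE, or SET with the EX option), and a memory cap with an eviction policy in redis.conf acts as a backstop. The 2gb figure below is purely illustrative:

```conf
# Hard cap on memory; once reached, the eviction policy below applies
maxmemory 2gb

# Evict the least recently used keys among those that have a TTL set
maxmemory-policy volatile-lru
```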
4. Sharding
Shard the data across multiple Redis instances so that each instance stores only part of the dataset.
This reduces the memory required per instance, which matters especially in large-scale deployments.
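This is how Redis Cluster itself distributes keys: it computes CRC16 of the key modulo 16384 hash slots and assigns slot ranges to instances. A minimal sketch of that slot computation in pure Python (ignoring the hash-tag rule, under which only the part of the key between { and } is hashed):

```python
def crc16(data: bytes) -> int:
    """CRC16-CCITT (XMODEM), the checksum Redis Cluster uses for key slotting."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def hash_slot(key: str, num_slots: int = 16384) -> int:
    """Map a key to one of the 16384 Redis Cluster hash slots."""
    return crc16(key.encode()) % num_slots
```

Each of N instances then owns a contiguous range of the 16384 slots, so every instance holds roughly 1/N of the keys, and therefore roughly 1/N of the memory.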
5. Use persistence appropriately
If you use Redis's persistence mechanisms, consider RDB snapshots, which periodically dump the in-memory dataset to disk so it can be recovered when needed. Be aware that the fork used for snapshotting can temporarily increase memory usage through copy-on-write.
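A typical persistence block in redis.conf looks like this; the snapshot intervals shown are the Redis 7 stock defaults and are given only for illustration:

```conf
# RDB: snapshot if >=1 key changed in 3600s, >=100 in 300s, >=10000 in 60s
save 3600 1 300 100 60 10000

# AOF can be combined with (or used instead of) RDB for finer-grained recovery
appendonly no
```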
6. Memory defragmentation
Periodically run the MEMORY DOCTOR command to diagnose memory problems, including fragmentation; note that it only reports issues, it does not repair them.
Fragmentation itself can be reduced by enabling active defragmentation (available since Redis 4.0), or, as a last resort, by restarting the instance or migrating the data to a fresh one so that memory is reallocated compactly.
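Active defragmentation is controlled in redis.conf (Redis 4.0+, and it requires a build using jemalloc); the thresholds below are the stock defaults:

```conf
# Enable active (online) defragmentation
activedefrag yes

# Don't bother below this amount of fragmented memory
active-defrag-ignore-bytes 100mb

# Start defragmenting at 10% fragmentation, apply maximum effort at 100%
active-defrag-threshold-lower 10
active-defrag-threshold-upper 100
```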
7. Client and configuration optimization
1. Watch the input and output buffer memory of TCP connections to the Redis server, especially with geographically remote master-replica links or highly concurrent clients.
2. Use the client-output-buffer-limit parameter to cap output buffer usage, and avoid attaching too many replica nodes to a single master.
3. For the replication backlog buffer (a reusable fixed-size buffer available since v2.8), configure repl-backlog-size so that brief disconnections can be served by partial rather than full resynchronization.
4. For the AOF buffer, plan memory headroom according to the AOF rewrite duration and the volume of writes accumulated during the rewrite.
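The buffer-related directives mentioned above live in redis.conf. The client-output-buffer-limit values shown are the stock defaults; repl-backlog-size, by contrast, usually needs to be raised well above its 1mb default on busy or high-latency links (64mb below is illustrative):

```conf
# class  hard-limit  soft-limit  soft-seconds
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60

# Replication backlog: should cover the writes accumulated during a
# replica's typical disconnect window so partial resync can succeed
repl-backlog-size 64mb
```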
Summary
The above is based on personal experience; I hope it serves as a useful reference, and I appreciate your continued support.