Implement a cache using ConcurrentHashMap
1. ConcurrentHashMap itself is thread-safe.
2. Annotate the init method with @PostConstruct so it runs when the bean is created. The init method creates the ConcurrentHashMap with an initial capacity and starts a background thread that periodically flushes and clears the cache (flushAll); see the first sketch after this list.
3. If one key has to hold many messages and concurrency is very high, consider bucketing: use several ConcurrentHashMaps, pick the map by hashing the key, and then use another key to hash inside the chosen map (see the bucketing sketch after this list).
4. When adding an element to the cache, if the number of entries in the map exceeds a threshold, trigger a flush and clean-up of the cache.
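A minimal sketch of such a Spring bean, under the assumptions above. The class name MessageCache, the method flushAll, the capacity, threshold, and flush interval are all illustrative, not taken from the original:

```java
import jakarta.annotation.PostConstruct;   // javax.annotation.PostConstruct on older Spring

import org.springframework.stereotype.Component;

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

@Component
public class MessageCache {

    private static final int INITIAL_CAPACITY = 1024;   // illustrative value
    private static final int FLUSH_THRESHOLD = 10_000;  // illustrative value

    private ConcurrentHashMap<String, String> cache;
    private ScheduledExecutorService flusher;

    @PostConstruct
    public void init() {
        // Pre-size the map and start a background thread that flushes periodically.
        cache = new ConcurrentHashMap<>(INITIAL_CAPACITY);
        flusher = Executors.newSingleThreadScheduledExecutor();
        flusher.scheduleWithFixedDelay(this::flushAll, 1, 1, TimeUnit.MINUTES);
    }

    public void add(String key, String value) {
        cache.put(key, value);
        // Step 4: if the map grows past the threshold, trigger an extra flush.
        if (cache.size() > FLUSH_THRESHOLD) {
            flushAll();
        }
    }

    private void flushAll() {
        // Persist or forward the entries somewhere durable, then clear the map.
        // (Persistence details are outside the scope of this sketch.)
        cache.clear();
    }
}
```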
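For step 3, a sketch of the bucketing idea, assuming a fixed number of shards chosen by hashing the outer key; the shard count of 16 and the key names are illustrative:

```java
import java.util.concurrent.ConcurrentHashMap;

public class ShardedCache<V> {

    private static final int SHARDS = 16; // illustrative shard count

    // One ConcurrentHashMap per bucket, to spread contention across maps.
    @SuppressWarnings("unchecked")
    private final ConcurrentHashMap<String, V>[] buckets = new ConcurrentHashMap[SHARDS];

    public ShardedCache() {
        for (int i = 0; i < SHARDS; i++) {
            buckets[i] = new ConcurrentHashMap<>();
        }
    }

    // The outer key (e.g. the id) decides which map to use ...
    private ConcurrentHashMap<String, V> bucketFor(String outerKey) {
        return buckets[Math.floorMod(outerKey.hashCode(), SHARDS)];
    }

    // ... and a second key (e.g. a message id) is hashed inside that map.
    public void put(String outerKey, String innerKey, V value) {
        bucketFor(outerKey).put(innerKey, value);
    }

    public V get(String outerKey, String innerKey) {
        return bucketFor(outerKey).get(innerKey);
    }
}
```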
Difficulty One
- The cache must handle heavy reads, heavy writes, and high concurrency.
- ConcurrentHashMap is a good fit. It hashes by id, and there can be multiple messages for the same id, so the value of the ConcurrentHashMap should be a collection that is thread-safe and efficient under concurrency.
- Use ConcurrentLinkedQueue as that collection (see the sketch below).
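A minimal sketch of this structure, assuming messages are plain strings keyed by id; the class and method names are illustrative:

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

public class IdMessageCache {

    // One thread-safe queue per id, because a single id can carry many messages.
    private final ConcurrentHashMap<String, Queue<String>> cache = new ConcurrentHashMap<>();

    public void add(String id, String message) {
        // computeIfAbsent creates the queue atomically the first time an id is seen,
        // so concurrent adds for the same id all land in the same queue.
        cache.computeIfAbsent(id, k -> new ConcurrentLinkedQueue<>()).add(message);
    }

    public Queue<String> get(String id) {
        return cache.get(id);
    }
}
```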
Difficulty Two
When a new version is released, the machine is restarted and the cache instance is destroyed. How do we avoid losing the cached data?
Scenario:
- Messages are consumed from MQ and put into the cache.
- Use @PreDestroy on a destroy method that sets a flag to false. The method that adds to the cache first checks the flag; if it is false, it throws an exception immediately, so the MQ message is not acked (see the sketch below).
- The flag should be volatile.
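A sketch of that shutdown guard, assuming the queue-per-id cache from above; the class name, exception type, and uploadAll behavior are illustrative:

```java
import jakarta.annotation.PreDestroy;   // javax.annotation.PreDestroy on older Spring

import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

public class GuardedMessageCache {

    private final ConcurrentHashMap<String, Queue<String>> cache = new ConcurrentHashMap<>();

    // volatile so consumer threads immediately see the flag flipped by the shutdown thread.
    private volatile boolean accepting = true;

    public void add(String id, String message) {
        if (!accepting) {
            // The MQ listener does not ack the message, so it is redelivered after restart.
            throw new IllegalStateException("cache is shutting down, message rejected");
        }
        cache.computeIfAbsent(id, k -> new ConcurrentLinkedQueue<>()).add(message);
    }

    @PreDestroy
    public void destroy() {
        accepting = false;   // stop accepting new messages
        uploadAll();         // flush what is already cached before the instance is destroyed
    }

    private void uploadAll() {
        // Persist or forward the cached entries somewhere durable, then clear the map.
        cache.clear();
    }
}
```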
Difficulty Three
- The destroy method sets the flag to false, while the highly concurrent add method checks it. A thread may pass the check (seeing true) but not yet have put its message into the cache when destroy flips the flag to false, so a small number of messages can still be lost.
- One idea is a CountDownLatch: wait until add has put all in-flight messages into the cache, then execute uploadAll to flush and clear everything. The key question is: what should the initial count of the CountDownLatch be?
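The article leaves that question open. Since the number of in-flight add() calls cannot be known when destroy() runs, one way to sidestep a fixed initial count, offered here as an assumption rather than the original author's solution, is to count in-flight adds dynamically with an AtomicInteger and have destroy() wait until the count drains to zero before flushing:

```java
import jakarta.annotation.PreDestroy;   // javax.annotation.PreDestroy on older Spring

import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicInteger;

public class DrainingMessageCache {

    private final ConcurrentHashMap<String, Queue<String>> cache = new ConcurrentHashMap<>();
    private final AtomicInteger inFlight = new AtomicInteger();
    private volatile boolean accepting = true;

    public void add(String id, String message) {
        inFlight.incrementAndGet();          // register this call before checking the flag
        try {
            if (!accepting) {
                throw new IllegalStateException("cache is shutting down, message rejected");
            }
            cache.computeIfAbsent(id, k -> new ConcurrentLinkedQueue<>()).add(message);
        } finally {
            inFlight.decrementAndGet();      // always deregister, even when the add is rejected
        }
    }

    @PreDestroy
    public void destroy() {
        accepting = false;                   // reject new messages from now on
        while (inFlight.get() > 0) {         // wait for adds that already passed the flag check
            try {
                Thread.sleep(10);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                break;
            }
        }
        uploadAll();                         // no add can slip in after this flush
    }

    private void uploadAll() {
        // Persist or forward the cached entries, then clear the map.
        cache.clear();
    }
}
```

Because add() increments the counter before reading the flag, any thread that passes the check is guaranteed to be counted, so destroy() cannot flush while such a put is still in progress.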
Summary
The above is based on my personal experience; I hope it serves as a useful reference.