Implementing a Two-Level Cache with Guava and Redis
1. Purpose
Why not just use a local cache such as HashMap or ConcurrentHashMap?
ConcurrentHashMap, like HashMap, keeps its entries for the lifetime of the process: unless remove() is called explicitly, the cached data is never released, and there is no built-in expiry or eviction.
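To make that concrete, here is a small, self-contained sketch (the class name and timings are illustrative, not from the original project) contrasting a plain ConcurrentHashMap with a Guava cache that can evict entries on its own:

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;

public class EvictionDemo {

    public static void main(String[] args) throws InterruptedException {
        // A plain ConcurrentHashMap keeps every entry until remove()/clear() is called.
        Map<String, String> plainMap = new ConcurrentHashMap<>();
        plainMap.put("token-1", "user-1");   // stays forever unless removed explicitly

        // A Guava cache can evict on its own: by size and by time since the last write.
        Cache<String, String> guavaCache = CacheBuilder.newBuilder()
                .maximumSize(1000)
                .expireAfterWrite(2, TimeUnit.SECONDS)
                .build();
        guavaCache.put("token-1", "user-1");

        Thread.sleep(2500);
        System.out.println(plainMap.get("token-1"));            // user-1 (never released)
        System.out.println(guavaCache.getIfPresent("token-1")); // null (expired and reclaimed)
    }
}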
What are the problems with using only a Guava local cache?
Whether the service runs as a single node or as a cluster (in a cluster each node effectively holds a near-identical copy of the Guava cache), Guava cannot cope once the cached data grows to an unpredictable size; and in a microservice setup there is no shared, global cache at all. If the data volume can grow without bound and without control, a purely local cache is still not recommended.
What are the problems with using only a Redis cache?
Every lookup still has to go over the network to Redis, and under very heavy traffic (e.g. hot searches) it is easy to trigger a cache avalanche, which in turn can bring the servers down.
In summary, combining the two, with Guava as the first-level (local) cache and Redis as the second-level (shared) cache, makes the caching layer considerably more reliable while still taking the pressure off the database.
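As a rough sketch of what that read path looks like (the class, the method names, and the use of Spring's StringRedisTemplate here are illustrative; the actual project code later in this article uses its own helpers):

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import org.springframework.data.redis.core.StringRedisTemplate;

import java.util.concurrent.TimeUnit;

/** Two-level read path: Guava (L1, in-JVM) -> Redis (L2, shared) -> database. */
public class TwoLevelCache {

    private final StringRedisTemplate redisTemplate;  // L2: shared Redis
    private final LoadingCache<String, String> l1;    // L1: local Guava cache

    public TwoLevelCache(StringRedisTemplate redisTemplate) {
        this.redisTemplate = redisTemplate;
        this.l1 = CacheBuilder.newBuilder()
                .maximumSize(10_000)
                .expireAfterWrite(5, TimeUnit.MINUTES)
                .build(new CacheLoader<String, String>() {
                    @Override
                    public String load(String key) {
                        // L1 miss: fall through to Redis
                        String value = redisTemplate.opsForValue().get(key);
                        if (value == null) {
                            // L2 miss: fall through to the database and backfill Redis
                            value = loadFromDatabase(key);
                            redisTemplate.opsForValue().set(key, value, 30, TimeUnit.MINUTES);
                        }
                        return value;
                    }
                });
    }

    public String get(String key) throws Exception {
        return l1.get(key);  // hits Guava first; the loader runs only on an L1 miss
    }

    private String loadFromDatabase(String key) {
        // placeholder for the real database query
        return "value-of-" + key;
    }
}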
2. An example two-level cache scenario
The company sells a camera, and I have one installed in a vacation house of mine that is usually unoccupied. The camera supports alarms such as abnormal-person detection, power-outage alarms, abnormal-signal alarms, and motion-capture alarms (other settings are skipped here; if an alarm of the same type still persists, it is reported again after 5 seconds).
Now there is a requirement: on the platform I can configure which alarm types I actually want reported (otherwise, if I go back to the house for a weekend, it keeps raising alarms to the platform and disturbing my rest). When an incoming alarm matches one of the alarm types I configured, it is pushed to the home page of my platform.
// The alarm system's code is omitted here. The alarm system pushes alarm messages through the RocketMQ topic: alarm-camera
@Configuration
public class RocketMqConsumer {

    // Note: Utils, SpringContextUtils, the WebSocket component and the getter names on the
    // domain objects stand in for the project's own classes, whose receivers were omitted
    // in the original listing.
    private static final Logger logger = LoggerFactory.getLogger(RocketMqConsumer.class);

    @PostConstruct  // invoked on startup
    public void init() {
        pullAlarm();
        logger.info("RocketMQ alarm consumer started successfully!");
    }

    /**
     * pullAlarm: pull the alarm source data.
     * @author liaokh
     * @since JDK 1.8
     */
    public static void pullAlarm() {
        new Thread() {
            @Override
            public void run() {
                logger.info("------------------ starting alarm-camera consumer ------------------");
                try {
                    // Declare and initialize a consumer
                    DefaultMQPushConsumer consumer =
                            new DefaultMQPushConsumer("rocketmq-consumer-dev-camera" + "-alarm");
                    // Set the NameServer address
                    consumer.setNamesrvAddr("My RocketMQ server address");
                    // Broadcast mode: every consumer instance in the consumer group consumes each message once
                    consumer.setMessageModel(MessageModel.BROADCASTING);
                    // Consumption strategy:
                    // CONSUME_FROM_LAST_OFFSET  (default) start from the tail of the queue, i.e. skip historical messages
                    // CONSUME_FROM_FIRST_OFFSET start from the head of the queue, i.e. consume all historical messages still stored on the broker
                    // CONSUME_FROM_TIMESTAMP    start from a point in time, used together with setConsumeTimestamp(); defaults to half an hour ago
                    consumer.setConsumeFromWhere(ConsumeFromWhere.CONSUME_FROM_LAST_OFFSET);
                    // Subscribe to the topic and tags; "*" means all tags
                    consumer.subscribe("alarm-camera", "*");
                    // Register a listener that does the actual message processing
                    consumer.registerMessageListener(new MessageListenerConcurrently() {
                        @Override
                        public ConsumeConcurrentlyStatus consumeMessage(List<MessageExt> msgs,
                                                                        ConsumeConcurrentlyContext context) {
                            for (MessageExt msg : msgs) {
                                try {
                                    String tag = msg.getTags();
                                    String alarmJson = new String(msg.getBody());
                                    logger.info("Received alarm-camera data: tag:" + tag + " alarmJson:" + alarmJson);
                                    AlarmMQResp alarm = JSON.parseObject(alarmJson, AlarmMQResp.class);
                                    // Check whether the current alarm type is in the list configured by the user.
                                    // Get the camera (and from it the user) by device number; this core data is also served from the cache.
                                    Camera cameraEntity = Utils.getCameraById(alarm.getCameraId());
                                    UserAlarm userAlarm = Utils.getUserAlarm(cameraEntity.getUserId());
                                    if (userAlarm == null || StringUtils.isEmpty(userAlarm.getAlarmType())) {
                                        logger.info("Device " + alarm.getCameraId()
                                                + ": the user has not configured any alarm type to push");
                                        return ConsumeConcurrentlyStatus.CONSUME_SUCCESS;
                                    }
                                    boolean isReturn = true;
                                    // Filter against the user's configured alarm-type list
                                    String[] userAlarmArr = userAlarm.getAlarmType().split(",");
                                    for (String s : userAlarmArr) {
                                        if (alarm.getAlarmType().equals(s)) {
                                            // Matched: this alarm needs to be pushed
                                            isReturn = false;
                                        }
                                    }
                                    if (isReturn) {
                                        // No match: nothing to push, just acknowledge the message
                                        logger.info("The user of this device has not configured this alarm type for pushing");
                                        return ConsumeConcurrentlyStatus.CONSUME_SUCCESS;
                                    }
                                    WebSocket webSocket = SpringContextUtils.getBean(WebSocket.class);
                                    // Build the business message
                                    JSONObject obj = new JSONObject();
                                    obj.put("cmd", "alarm");                     // business type
                                    obj.put("msgId", alarm.getMsgId());          // message id
                                    obj.put("msgTxt", JSON.toJSONString(alarm)); // message content
                                    // Push to this single user
                                    webSocket.sendOneMessage(cameraEntity.getUserId(), obj.toJSONString());
                                } catch (Exception e) {
                                    logger.error("Request exception", e);
                                }
                            }
                            // Return the consumption status: success
                            return ConsumeConcurrentlyStatus.CONSUME_SUCCESS;
                        }
                    });
                    // Call start() to launch the consumer
                    consumer.start();
                    logger.info("RocketMQ consumer created successfully");
                } catch (Exception e) {
                    logger.error("Request exception", e);
                }
            }
        }.start();
    }
}
The consumer above processes the alarm messages pushed by the alarm system. If a message matches an alarm type the user has configured, it is pushed to the front end over WebSocket (for now it is enough to know that WebSocket maintains a long-lived connection with the front end; see the relevant articles if you want the details of why and how it is used).
As mentioned in the example above, an alarm of the same type that persists is reported again every 5 seconds. We want to avoid querying the configuration table in the database for every incoming MQ message: with only a handful of users the alarm traffic is negligible, but if the camera sells well and reaches a large user base, the message volume becomes enormous, and doing a database lookup for every push decision performs very poorly at that scale.
The simple fix is to keep this configuration data in a cache. Whether you use only Guava, only Redis, the combination described in this article, or evolve the approach as the project grows, is up to you.
@Component
public class Utils {

    private static final Logger logger = LoggerFactory.getLogger(Utils.class);

    // Note: RedisUtils stands in for the project's own Redis helper, whose name was
    // omitted in the original listing.

    /**
     * Get the user's alarm configuration.
     */
    public static UserAlarm getUserAlarm(String userId) {
        if (StringUtils.isEmpty(userId)) {
            return null;
        }
        UserAlarm userAlarm = null;
        try {
            // First level: Guava
            userAlarm = GuavaCacheUtils.userAlarmCache.get(userId).orNull();
            if (null == userAlarm) {
                // Drop the empty Guava entry so the next call can reload it
                GuavaCacheUtils.userAlarmCache.invalidate(userId);
                // Second level: try Redis
                String userAlarmJson = RedisUtils.hget("alarm_camera", userId);
                userAlarm = JSON.parseObject(userAlarmJson, UserAlarm.class);
            }
        } catch (ExecutionException e) {
            logger.error("Exception while reading the user configuration cache", e);
        }
        return userAlarm;
    }

    /**
     * Get camera information.
     */
    public static Camera getCameraById(String cameraId) {
        Camera camera = null;
        try {
            camera = GuavaCacheUtils.cameraCache.get(cameraId).orNull();
        } catch (ExecutionException e) {
            logger.error("Exception while reading the camera cache", e);
        }
        return camera; // the original returned an undefined "device" variable; "camera" is what is meant
    }
}
/**
 * ClassName: GuavaCacheUtils
 * @version
 * @since JDK 1.8
 * @see "java (JVM) cache storage"
 */
@Component
public class GuavaCacheUtils {

    private static final Logger logger = LoggerFactory.getLogger(GuavaCacheUtils.class);

    // Note: SpringContextUtils and RedisUtils stand in for the project's own helpers,
    // whose names were omitted in the original listing.

    /**
     * User alarm-push configuration cache.
     * expireAfterAccess: an entry that is not accessed for 10 minutes is evicted and re-fetched on the next access.
     * load: executed when the key is not in the cache (query the database and put the result into the cache).
     */
    public static LoadingCache<String, Optional<UserAlarm>> userAlarmCache = CacheBuilder.newBuilder()
            .expireAfterAccess(10, TimeUnit.MINUTES)
            .build(new CacheLoader<String, Optional<UserAlarm>>() {
                @Override
                public Optional<UserAlarm> load(String userId) throws Exception {
                    UserAlarm userAlarm = SpringContextUtils.getBean(UserAlarmService.class)
                            .getOne(new LambdaQueryWrapper<UserAlarm>()
                                    .eq(UserAlarm::getUserId, userId));
                    return Optional.fromNullable(userAlarm);
                }
            });

    /**
     * Camera device information cache.
     */
    public static LoadingCache<String, Optional<Camera>> cameraCache = CacheBuilder.newBuilder()
            .expireAfterAccess(10, TimeUnit.MINUTES)
            .build(new CacheLoader<String, Optional<Camera>>() {
                @Override
                public Optional<Camera> load(String cameraId) throws Exception {
                    String cameraJson = RedisUtils.hget("camera", cameraId);
                    Camera camera = JSON.parseObject(cameraJson, Camera.class);
                    return Optional.fromNullable(camera);
                }
            });
}
@Service
public class UserAlarmServiceImpl extends ServiceImpl<UserAlarmMapper, UserAlarm> implements UserAlarmService {

    // Note: RedisUtils stands in for the project's own Redis helper, whose name was
    // omitted in the original listing.

    // Add a new user alarm configuration
    @Override
    public String insert(UserAlarm userAlarm) {
        try {
            this.save(userAlarm);
            // Write it into Redis immediately
            RedisUtils.hset("alarm_camera", userAlarm.getUserId(), JSON.toJSONString(userAlarm));
        } catch (Exception e) {
            return "Failed";
        }
        return "Successful";
    }

    // Modify an existing user alarm configuration
    @Override
    public String update(UserAlarm userAlarm) {
        try {
            UpdateWrapper<UserAlarm> wrapper = new UpdateWrapper<>();
            wrapper.set("alarmType", userAlarm.getAlarmType());
            wrapper.eq("user_id", userAlarm.getUserId());
            this.update(userAlarm, wrapper);
            // Update Redis immediately as well
            RedisUtils.hset("alarm_camera", userAlarm.getUserId(), JSON.toJSONString(userAlarm));
        } catch (Exception e) {
            return "Failed";
        }
        return "Successful";
    }
}
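One thing to watch with this write path: the update only refreshes Redis, so a node's Guava (first-level) cache can keep serving the old configuration until its 10-minute window lapses. A possible refinement, not part of the original code, is to drop the local entry right after the write:

// Possible refinement (not in the original code): after saving to the database and Redis,
// also invalidate the stale first-level entry so the next read reloads the fresh value.
GuavaCacheUtils.userAlarmCache.invalidate(userAlarm.getUserId());

// Note: in a cluster every node holds its own Guava copy, so invalidating all of them
// would need a broadcast (for example a Redis pub/sub message that each node subscribes to).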
3. Guava parameter mechanism
#Eviction mechanism
- expireAfterAccess: an entry is evicted if it has not been read or written within the specified period.
- expireAfterWrite: an entry is evicted if it has not been updated (written) within the specified period; reads do not reset this timer (see the sketch below).
- refreshAfterWrite: an entry becomes eligible for refresh a fixed time after its last update.
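To make the difference between the access clock and the write clock concrete, here is a small, self-contained sketch (the class name and durations are illustrative, not from the original project):

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

import java.util.concurrent.TimeUnit;

/** Sketch: reads reset the expireAfterAccess clock but not the expireAfterWrite clock. */
public class ExpiryClocksDemo {

    public static void main(String[] args) throws InterruptedException {
        Cache<String, String> byAccess = CacheBuilder.newBuilder()
                .expireAfterAccess(2, TimeUnit.SECONDS)  // evicted 2 s after the last read OR write
                .build();
        Cache<String, String> byWrite = CacheBuilder.newBuilder()
                .expireAfterWrite(2, TimeUnit.SECONDS)   // evicted 2 s after the last write; reads do not help
                .build();

        byAccess.put("k", "v");
        byWrite.put("k", "v");

        // Keep reading for 3 seconds: the access-based entry stays alive, the write-based one expires.
        for (int i = 0; i < 3; i++) {
            Thread.sleep(1000);
            byAccess.getIfPresent("k");
            byWrite.getIfPresent("k");
        }
        System.out.println(byAccess.getIfPresent("k")); // "v"  (each read reset the 2 s access timer)
        System.out.println(byWrite.getIfPresent("k"));  // null (evicted 2 s after the write)
    }
}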
#Refresh mechanism
- expireAfterAccess: if the entry is not read within the configured time, it is evicted and reloaded on the next access.
- expireAfterWrite / refreshAfterWrite: reads within the configured time do not postpone the reload; once the time since the last write has elapsed, the entry is reloaded regardless of whether the underlying database value changed and regardless of any reads in between (see the sketch below for the difference between the two).
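The practical difference between the two write-based settings is what callers see while the new value is being loaded. A minimal sketch, assuming an asynchronous reload(); the durations, class name and the queryDatabase placeholder are illustrative:

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import com.google.common.util.concurrent.ListenableFuture;
import com.google.common.util.concurrent.ListeningExecutorService;
import com.google.common.util.concurrent.MoreExecutors;

import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

/**
 * Sketch: with refreshAfterWrite, the first read after the interval triggers a reload and the
 * old value can keep being served while reload() runs asynchronously. With expireAfterWrite,
 * the entry is simply gone after the interval and the next read blocks on load().
 */
public class RefreshSketch {

    private static final ListeningExecutorService pool =
            MoreExecutors.listeningDecorator(Executors.newFixedThreadPool(2));

    public static LoadingCache<String, String> cache = CacheBuilder.newBuilder()
            .refreshAfterWrite(5, TimeUnit.MINUTES)  // refresh is triggered lazily, by the first read after 5 min
            .expireAfterWrite(30, TimeUnit.MINUTES)  // hard limit: after 30 min without a write the entry is dropped
            .build(new CacheLoader<String, String>() {
                @Override
                public String load(String key) {
                    // Cold miss: the calling thread waits for this query.
                    return queryDatabase(key);
                }

                @Override
                public ListenableFuture<String> reload(String key, String oldValue) {
                    // Refresh: reload in the background; readers keep getting oldValue until it finishes.
                    return pool.submit(() -> queryDatabase(key));
                }
            });

    private static String queryDatabase(String key) {
        // placeholder for the real database/Redis lookup
        return "fresh-value-of-" + key;
    }
}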
/**
 * ClassName: GuavaCacheUtils
 * @version
 * @since JDK 1.8
 * @see "java (JVM) cache storage"
 */
@Component
public class GuavaCacheUtils {

    private static final Logger logger = LoggerFactory.getLogger(GuavaCacheUtils.class);

    // Note: RedisUtils stands in for the project's own Redis helper, whose name was
    // omitted in the original listing.

    /**
     * LoadingCache used as a login cache (built with chained calls).
     * removalListener: listener invoked after an entry has been removed from the cache.
     * build: builds the cache instance.
     */
    public static LoadingCache<String, Optional<User>> loginCache = CacheBuilder.newBuilder()
            .expireAfterAccess(720, TimeUnit.MINUTES)
            .removalListener(new MyRemovalListener())
            .build(new CacheLoader<String, Optional<User>>() {
                @Override
                public Optional<User> load(String token) throws Exception {
                    User user = null;
                    try {
                        // Look the token up in Redis
                        String loginJson = RedisUtils.get(token);
                        user = JSON.parseObject(loginJson, User.class);
                    } catch (Exception e) {
                        logger.error("Login cache query exception", e);
                    }
                    return Optional.fromNullable(user);
                }
            });

    /**
     * MyRemovalListener: a custom removal listener. It must implement RemovalListener<K, V>,
     * where K and V are the key and value generics of the cache.
     * Optional: mainly used to avoid NullPointerExceptions and simplify null checks.
     * notification.getCause(): the reason the entry was removed.
     */
    private static class MyRemovalListener implements RemovalListener<String, Optional<User>> {
        @Override
        public void onRemoval(RemovalNotification<String, Optional<User>> notification) {
            if (RemovalCause.EXPIRED == notification.getCause()) {
                String token = notification.getKey();
                // Expire the corresponding Redis entry as well
                RedisUtils.expire(token, 0);
            }
        }
    }
}
Summary
The above is based on my personal experience. I hope it gives you a useful reference, and I hope you will continue to support me.