Question: I set the lock wait time fairly high, but I would rather have the request fail immediately instead of waiting. My view is that it should fail fast: first, if too many requests contend for the same lock without rate limiting, the server's connection count will quickly hit the maximum and the service will become unavailable; second, waiting is pointless and only inflates the interface's response time. A real-time interface should return an error right away; if you don't want to return an error, push the work onto a message queue instead.
What happens if the Redis lock wait time is too long
In a Spring Boot application, setting the Redis lock wait time too long can lead to the following problems:
- Wasted resources: threads blocked waiting for a lock cannot process other tasks, reducing the overall performance and throughput of the system.
- Increased response time: user requests wait longer for a response, degrading the user experience. Under high concurrency, the added latency can noticeably hurt user satisfaction.
- Deadlock risk: if the lock-release mechanism is flawed, long waits widen the window in which a deadlock can occur and leave the system unable to make progress.
- Higher system load: many threads parked waiting on locks increase the load on the system and can consume excessive CPU and memory.
- Delayed business logic: critical business operations may be held up waiting for locks, hurting the timeliness and correctness of business processes.
- Slow recovery: after a failure, long lock waits can slow down recovery, reducing the availability and stability of the system.
To avoid these problems, set the Redis lock wait time to a sensible value tuned to your business needs and system capacity. You can also consider other distributed-lock implementations, such as ZooKeeper, to improve reliability and performance.
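The fail-fast approach argued for in the question can be sketched with `java.util.concurrent.ReentrantLock` as a local stand-in for a distributed lock client (Redisson's `RLock.tryLock(waitTime, leaseTime, unit)` follows the same pattern); the `handleRequest` name and the 100 ms wait below are illustrative choices, not part of the original article:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class FailFastLockDemo {

    private static final ReentrantLock LOCK = new ReentrantLock();

    // Try to acquire the lock with a short wait; fail fast instead of queueing.
    static String handleRequest() {
        boolean acquired = false;
        try {
            // A short wait (100 ms) keeps threads from piling up behind a hot lock.
            acquired = LOCK.tryLock(100, TimeUnit.MILLISECONDS);
            if (!acquired) {
                return "busy"; // caller can retry later or hand the work to a queue
            }
            return "ok"; // critical section would run here
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return "interrupted";
        } finally {
            if (acquired) {
                LOCK.unlock();
            }
        }
    }

    public static void main(String[] args) {
        System.out.println(handleRequest()); // uncontended, so prints "ok"
    }
}
```

Returning "busy" immediately is what keeps threads free under contention; the caller decides whether to surface an error or enqueue the request.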
Maximum number of connections in Spring Boot
In Spring Boot, Tomcat's connection handling is controlled by properties such as:

```properties
server.tomcat.max-threads=200
server.tomcat.min-spare-threads=10
server.tomcat.accept-count=100
```

- `max-threads`: the maximum number of threads in the Tomcat thread pool; the default is 200.
- `min-spare-threads`: the minimum number of idle threads kept in the Tomcat thread pool; the default is 10.
- `accept-count`: the number of incoming connection requests Tomcat will queue once all processing threads are in use; the default is 100.
What happens when the number of requests exceeds the maximum number of threads
When the number of requests exceeds `max-threads`, the Tomcat server cannot process all of them immediately. Concretely:
- Request queuing: excess requests enter the wait queue until a thread becomes available; once the queue (sized by `accept-count`) is also full, new requests are rejected.
- Response delays: because requests must queue before being processed, response times grow and users may perceive noticeable latency.
- Resource exhaustion: if requests keep exceeding the maximum thread count, server resources such as memory and CPU can be exhausted, degrading performance and potentially crashing the server.
- Error responses: in extreme cases, Tomcat may return an error such as HTTP 503 (Service Unavailable), indicating the server is temporarily unable to handle requests.
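The queue-then-reject behavior described above can be reproduced with a plain `ThreadPoolExecutor`, whose pool size and queue capacity play the same roles as Tomcat's `max-threads` and `accept-count` (the sizes and names here are illustrative, not Tomcat's actual internals):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolSaturationDemo {

    // Submits 'total' long-running tasks to a pool with 2 workers (max-threads)
    // and a queue of 2 (accept-count); returns how many submissions were rejected.
    static int submitTasks(int total) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 2, 0L, TimeUnit.MILLISECONDS, new ArrayBlockingQueue<>(2));
        CountDownLatch release = new CountDownLatch(1);
        int rejected = 0;
        for (int i = 0; i < total; i++) {
            try {
                pool.execute(() -> {
                    try {
                        release.await(); // keep the worker busy
                    } catch (InterruptedException ignored) {
                        Thread.currentThread().interrupt();
                    }
                });
            } catch (RejectedExecutionException e) {
                rejected++; // both the pool and the queue are full
            }
        }
        release.countDown();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return rejected;
    }

    public static void main(String[] args) throws InterruptedException {
        // 2 tasks run, 2 queue, and the remaining 2 are rejected
        System.out.println("rejected=" + submitTasks(6));
    }
}
```

With 6 submissions, two tasks occupy the workers, two sit in the queue, and the last two are rejected, which is exactly the Tomcat failure mode described above.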
To avoid these problems, the following measures can be taken:
- Increase the `max-threads` value, keeping the server's hardware limits in mind.
- Optimize application code so that each request is cheaper to process.
- Use load balancing to spread requests across multiple servers.
- Apply rate limiting to keep excessive traffic from flooding the server.
How to rate-limit an interface in Spring Boot
In Spring Boot, there are several ways to rate-limit an interface. Here are a few common approaches:
1. Use Spring AOP and Guava RateLimiter
Guava provides a RateLimiter class that makes rate limiting straightforward, and you can combine it with Spring AOP to limit specific interfaces.
Steps:
Add the Guava dependency:

```xml
<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>30.1.1-jre</version>
</dependency>
```
Create a rate-limit annotation:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
public @interface RateLimit {
    double value(); // permits per second
}
```
Create an AOP aspect that applies the rate-limiting logic:

```java
import com.google.common.util.concurrent.RateLimiter;
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.reflect.MethodSignature;
import org.springframework.stereotype.Component;

import java.lang.reflect.Method;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

@Aspect
@Component
public class RateLimitAspect {

    // One RateLimiter per annotated method, created lazily.
    private final Map<String, RateLimiter> limiters = new ConcurrentHashMap<>();

    @Around("@annotation(rateLimit)")
    public Object around(ProceedingJoinPoint joinPoint, RateLimit rateLimit) throws Throwable {
        MethodSignature signature = (MethodSignature) joinPoint.getSignature();
        Method method = signature.getMethod();
        String key = method.getDeclaringClass().getName() + "." + method.getName();
        RateLimiter rateLimiter = limiters.computeIfAbsent(key, k -> RateLimiter.create(rateLimit.value()));
        if (!rateLimiter.tryAcquire()) {
            throw new RuntimeException("Rate limit exceeded");
        }
        return joinPoint.proceed();
    }
}
```
Apply the annotation to the interfaces that need rate limiting:

```java
@RestController
public class MyController {

    @RateLimit(1.0) // at most 1 request per second
    @GetMapping("/limited")
    public String limitedEndpoint() {
        return "This endpoint is rate limited";
    }
}
```
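For intuition about what `RateLimiter#tryAcquire` is doing behind the annotation, here is a minimal token-bucket sketch in plain Java. It is a simplified toy, not Guava's actual smooth-rate implementation, and the class and field names are illustrative:

```java
public class SimpleTokenBucket {

    private final double permitsPerSecond; // refill rate
    private final double capacity;         // maximum stored tokens
    private double available;              // tokens currently in the bucket
    private long lastRefillNanos;

    public SimpleTokenBucket(double permitsPerSecond, double capacity) {
        this.permitsPerSecond = permitsPerSecond;
        this.capacity = capacity;
        this.available = capacity; // start full
        this.lastRefillNanos = System.nanoTime();
    }

    // Non-blocking acquire: refill based on elapsed time, then take one
    // token if available, otherwise refuse without waiting.
    public synchronized boolean tryAcquire() {
        long now = System.nanoTime();
        available = Math.min(capacity,
                available + (now - lastRefillNanos) / 1e9 * permitsPerSecond);
        lastRefillNanos = now;
        if (available >= 1.0) {
            available -= 1.0;
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        SimpleTokenBucket bucket = new SimpleTokenBucket(1.0, 1.0); // 1 permit/second
        System.out.println(bucket.tryAcquire()); // true  - first call takes the token
        System.out.println(bucket.tryAcquire()); // false - bucket not yet refilled
    }
}
```

The second call fails because less than a second has passed since the first; after the refill interval elapses, `tryAcquire` would succeed again.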
2. Use Spring Cloud Gateway
If you use Spring Cloud Gateway, you can configure it to apply rate limiting.
Steps:
Add the Spring Cloud Gateway dependency:

```xml
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-gateway</artifactId>
</dependency>
```
Configure rate limiting in `application.yml`. The `RequestRateLimiter` filter takes `redis-rate-limiter.replenishRate` (tokens added per second) and `redis-rate-limiter.burstCapacity` (maximum tokens):

```yaml
spring:
  cloud:
    gateway:
      routes:
        - id: limited_route
          uri: http://localhost:8080
          predicates:
            - Path=/limited
          filters:
            - name: RequestRateLimiter
              args:
                redis-rate-limiter.replenishRate: 1
                redis-rate-limiter.burstCapacity: 1
```
Make sure the Redis dependency and configuration are in place, because Spring Cloud Gateway's rate limiting relies on Redis.
3. Use a third-party library such as Bucket4j
Bucket4j is a Java library that implements token-bucket rate limiting.
Steps:
Add the Bucket4j dependency:

```xml
<dependency>
    <groupId>com.github.vladimir-bukhtoyarov</groupId>
    <artifactId>bucket4j-core</artifactId>
    <version>7.0.0</version>
</dependency>
```
Create a rate-limit filter:

```java
import io.github.bucket4j.Bandwidth;
import io.github.bucket4j.Bucket;
import io.github.bucket4j.Refill;
import org.springframework.core.Ordered;
import org.springframework.core.annotation.Order;
import org.springframework.http.HttpStatus;
import org.springframework.stereotype.Component;
import org.springframework.web.filter.OncePerRequestFilter;

import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;
import java.time.Duration;

@Component
@Order(Ordered.HIGHEST_PRECEDENCE)
public class RateLimitFilter extends OncePerRequestFilter {

    private final Bucket bucket;

    public RateLimitFilter() {
        // Capacity of 1 token, refilled at 1 token per second
        Bandwidth limit = Bandwidth.classic(1, Refill.greedy(1, Duration.ofSeconds(1)));
        this.bucket = Bucket.builder().addLimit(limit).build();
    }

    @Override
    protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response,
                                    FilterChain filterChain) throws ServletException, IOException {
        if (bucket.tryConsume(1)) {
            filterChain.doFilter(request, response);
        } else {
            response.setStatus(HttpStatus.TOO_MANY_REQUESTS.value());
            response.getWriter().write("Rate limit exceeded");
        }
    }
}
```
Register the filter:

```java
import org.springframework.boot.web.servlet.FilterRegistrationBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class FilterConfig {

    @Bean
    public FilterRegistrationBean<RateLimitFilter> rateLimitFilter() {
        FilterRegistrationBean<RateLimitFilter> registrationBean = new FilterRegistrationBean<>();
        registrationBean.setFilter(new RateLimitFilter());
        registrationBean.addUrlPatterns("/limited"); // limit only this endpoint
        return registrationBean;
    }
}
```
That concludes this article on setting the wait time for Redis concurrent locks in Spring Boot.