
Example code for implementing common rate limiting algorithms in Golang

Rate limiting is a tool that is often used in projects. It is generally used to limit the frequency of user requests, and it can also prevent system crashes caused by sudden traffic spikes, or stabilize the speed of message processing.

This article implements common rate limiting algorithms in Go. The code references the articles "Interviewer: Come on, young man! Please use 5 common rate limiting algorithms!" and "Interview must-have: Explanation of 4 classic rate limiting algorithms". If you need a Java implementation or a more detailed introduction to the algorithms, you can read those two articles.

Fixed window

Each time a new window opens, requests can pass through until the window's upper limit is reached; once the window's time span elapses, a fresh window starts with its counter reset.

The main weakness of this algorithm is the boundary problem: if traffic is concentrated around the junction of two windows, the burst traffic can be twice the upper limit. For example, with a limit of 100 requests per second, 100 requests in the last 100 ms of one window plus 100 requests in the first 100 ms of the next window means 200 requests pass within roughly 200 ms.

package limiter

import (
   "sync"
   "time"
)

// FixedWindowLimiter is a fixed window rate limiter
type FixedWindowLimiter struct {
   limit    int           // Window request upper limit
   window   time.Duration // Window time size
   counter  int           // Counter
   lastTime time.Time     // The last request time
   mutex    sync.Mutex    // Avoid concurrency problems
}

func NewFixedWindowLimiter(limit int, window time.Duration) *FixedWindowLimiter {
   return &FixedWindowLimiter{
      limit:    limit,
      window:   window,
      lastTime: time.Now(),
   }
}

func (l *FixedWindowLimiter) TryAcquire() bool {
   l.mutex.Lock()
   defer l.mutex.Unlock()
   // Get the current time
   now := time.Now()
   // If the current window has expired, reset the counter and open a new window
   if now.Sub(l.lastTime) > l.window {
      l.counter = 0
      l.lastTime = now
   }
   // If the window request limit has been reached, the request fails
   if l.counter >= l.limit {
      return false
   }
   // Otherwise, increment the counter and the request succeeds
   l.counter++
   return true
}

Sliding window

A sliding window is similar to a fixed window; it simply divides the large window into multiple small windows and slides forward one small window at a time, which avoids the doubled burst traffic at window boundaries.

A fixed window can be seen as a special case of a sliding window in which the small window is the same size as the large window.

Window algorithms still have one problem: once traffic reaches the upper limit, all subsequent requests within the window are rejected.

package limiter

import (
   "errors"
   "sync"
   "time"
)

// SlidingWindowLimiter is a sliding window rate limiter
type SlidingWindowLimiter struct {
   limit        int           // Window request upper limit
   window       int64         // Window time size
   smallWindow  int64         // Small window time size
   smallWindows int64         // Number of small windows
   counters     map[int64]int // Small window counters
   mutex        sync.Mutex    // Avoid concurrency problems
}

// NewSlidingWindowLimiter creates a sliding window rate limiter
func NewSlidingWindowLimiter(limit int, window, smallWindow time.Duration) (*SlidingWindowLimiter, error) {
   // The window time must be divisible by the small window time
   if window%smallWindow != 0 {
      return nil, errors.New("window cannot be split by integers")
   }

   return &SlidingWindowLimiter{
      limit:        limit,
      window:       int64(window),
      smallWindow:  int64(smallWindow),
      smallWindows: int64(window / smallWindow),
      counters:     make(map[int64]int),
   }, nil
}

func (l *SlidingWindowLimiter) TryAcquire() bool {
   l.mutex.Lock()
   defer l.mutex.Unlock()

   // Get the current small window value
   currentSmallWindow := time.Now().UnixNano() / l.smallWindow * l.smallWindow
   // Get the starting small window value
   startSmallWindow := currentSmallWindow - l.smallWindow*(l.smallWindows-1)

   // Calculate the total number of requests in the current window,
   // deleting counters that have slid out of the window
   var count int
   for smallWindow, counter := range l.counters {
      if smallWindow < startSmallWindow {
         delete(l.counters, smallWindow)
      } else {
         count += counter
      }
   }

   // If the window request limit has been reached, the request fails
   if count >= l.limit {
      return false
   }
   // Otherwise, increment the current small window counter and the request succeeds
   l.counters[currentSmallWindow]++
   return true
}

Leaky bucket

The leaky bucket works like a bucket with a hole in it: incoming requests are water poured into the bucket, and the rate at which requests are processed is the rate at which water leaks out.

It is mainly suitable for services that process requests at a relatively stable rate. In practice it is paired with the producer-consumer pattern: requests are placed in a queue, so that consumers can process them at a relatively stable rate.

package limiter

import (
   "sync"
   "time"
)

// LeakyBucketLimiter is a leaky bucket rate limiter
type LeakyBucketLimiter struct {
   peakLevel       int        // The highest water level
   currentLevel    int        // Current water level
   currentVelocity int        // Water outflow rate per second
   lastTime        time.Time  // Last time water was released
   mutex           sync.Mutex // Avoid concurrency problems
}

func NewLeakyBucketLimiter(peakLevel, currentVelocity int) *LeakyBucketLimiter {
   return &LeakyBucketLimiter{
      peakLevel:       peakLevel,
      currentVelocity: currentVelocity,
      lastTime:        time.Now(),
   }
}

func (l *LeakyBucketLimiter) TryAcquire() bool {
   l.mutex.Lock()
   defer l.mutex.Unlock()

   // Try to release water
   now := time.Now()
   // Time elapsed since water was last released
   interval := now.Sub(l.lastTime)
   if interval >= time.Second {
      // Current water level - elapsed time (seconds) * outflow rate
      l.currentLevel = maxInt(0, l.currentLevel-int(interval/time.Second)*l.currentVelocity)
      l.lastTime = now
   }

   // If the highest water level has been reached, the request fails
   if l.currentLevel >= l.peakLevel {
      return false
   }
   // Otherwise, raise the current water level by 1 and the request succeeds
   l.currentLevel++
   return true
}

func maxInt(a, b int) int {
   if a > b {
      return a
   }
   return b
}

Token bucket

In contrast to the leaky bucket algorithm, the token bucket continuously adds tokens to the bucket, and each request must take a token from the bucket; only requests that obtain a token are accepted.

Because tokens can accumulate in the bucket in advance, it allows a certain amount of burst traffic to pass.

package limiter

import (
   "sync"
   "time"
)

// TokenBucketLimiter is a token bucket rate limiter
type TokenBucketLimiter struct {
   capacity      int        // Bucket capacity
   currentTokens int        // Current number of tokens
   rate          int        // Token issuance rate per second
   lastTime      time.Time  // The last time tokens were issued
   mutex         sync.Mutex // Avoid concurrency problems
}

func NewTokenBucketLimiter(capacity, rate int) *TokenBucketLimiter {
   return &TokenBucketLimiter{
      capacity: capacity,
      rate:     rate,
      lastTime: time.Now(),
   }
}

func (l *TokenBucketLimiter) TryAcquire() bool {
   l.mutex.Lock()
   defer l.mutex.Unlock()

   // Try to issue tokens
   now := time.Now()
   // Time elapsed since tokens were last issued
   interval := now.Sub(l.lastTime)
   if interval >= time.Second {
      // Current tokens + elapsed time (seconds) * issuance rate, capped at capacity
      l.currentTokens = minInt(l.capacity, l.currentTokens+int(interval/time.Second)*l.rate)
      l.lastTime = now
   }

   // If there are no tokens, the request fails
   if l.currentTokens == 0 {
      return false
   }
   // Otherwise, take a token and the request succeeds
   l.currentTokens--
   return true
}

func minInt(a, b int) int {
   if a < b {
      return a
   }
   return b
}

Sliding log

The sliding log is similar to the sliding window algorithm, but it is mainly used in multi-level rate limiting scenarios, such as an SMS verification code limited to once per minute, 10 times per hour, and 20 times per day.

The algorithm flow is the same as the sliding window, except that multiple policies can be specified, and when a request fails, the caller must be told which policy it violated.

package limiter

import (
   "errors"
   "fmt"
   "sort"
   "sync"
   "time"
)

// ViolationStrategyError reports which strategy a request violated
type ViolationStrategyError struct {
   Limit  int           // Window request upper limit
   Window time.Duration // Window time size
}

func (e *ViolationStrategyError) Error() string {
   return fmt.Sprintf("violation strategy that limit = %d and window = %d", e.Limit, e.Window)
}

// SlidingLogLimiterStrategy is one strategy of the sliding log rate limiter
type SlidingLogLimiterStrategy struct {
   limit        int   // Window request upper limit
   window       int64 // Window time size
   smallWindows int64 // Number of small windows
}

func NewSlidingLogLimiterStrategy(limit int, window time.Duration) *SlidingLogLimiterStrategy {
   return &SlidingLogLimiterStrategy{
      limit:  limit,
      window: int64(window),
   }
}

// SlidingLogLimiter is a sliding log rate limiter
type SlidingLogLimiter struct {
   strategies  []*SlidingLogLimiterStrategy // List of sliding log rate limiter strategies
   smallWindow int64                        // Small window time size
   counters    map[int64]int                // Small window counters
   mutex       sync.Mutex                   // Avoid concurrency problems
}

func NewSlidingLogLimiter(smallWindow time.Duration, strategies ...*SlidingLogLimiterStrategy) (*SlidingLogLimiter, error) {
   // Copy the strategies to avoid external modification
   strategies = append(make([]*SlidingLogLimiterStrategy, 0, len(strategies)), strategies...)

   // At least one strategy must be set
   if len(strategies) == 0 {
      return nil, errors.New("must be set strategies")
   }

   // Sort the strategies: larger windows first; for equal windows, larger limits first
   sort.Slice(strategies, func(i, j int) bool {
      a, b := strategies[i], strategies[j]
      if a.window == b.window {
         return a.limit > b.limit
      }
      return a.window > b.window
   })

   for i, strategy := range strategies {
      // As the window time gets smaller, the window limit should also get smaller
      if i > 0 {
         if strategy.limit >= strategies[i-1].limit {
            return nil, errors.New("the smaller window should be the smaller limit")
         }
      }
      // The window time must be divisible by the small window time
      if strategy.window%int64(smallWindow) != 0 {
         return nil, errors.New("window cannot be split by integers")
      }
      strategy.smallWindows = strategy.window / int64(smallWindow)
   }

   return &SlidingLogLimiter{
      strategies:  strategies,
      smallWindow: int64(smallWindow),
      counters:    make(map[int64]int),
   }, nil
}

func (l *SlidingLogLimiter) TryAcquire() error {
   l.mutex.Lock()
   defer l.mutex.Unlock()

   // Get the current small window value
   currentSmallWindow := time.Now().UnixNano() / l.smallWindow * l.smallWindow
   // Get the starting small window value for each strategy
   startSmallWindows := make([]int64, len(l.strategies))
   for i, strategy := range l.strategies {
      startSmallWindows[i] = currentSmallWindow - l.smallWindow*(strategy.smallWindows-1)
   }

   // Calculate the total number of requests in each strategy's current window,
   // deleting counters that have slid out of the largest window
   counts := make([]int, len(l.strategies))
   for smallWindow, counter := range l.counters {
      if smallWindow < startSmallWindows[0] {
         delete(l.counters, smallWindow)
         continue
      }
      for i := range l.strategies {
         if smallWindow >= startSmallWindows[i] {
            counts[i] += counter
         }
      }
   }

   // If any strategy's window request limit has been reached, the request
   // fails and the violated strategy is returned
   for i, strategy := range l.strategies {
      if counts[i] >= strategy.limit {
         return &ViolationStrategyError{
            Limit:  strategy.limit,
            Window: time.Duration(strategy.window),
         }
      }
   }

   // Otherwise, increment the current small window counter and the request succeeds
   l.counters[currentSmallWindow]++
   return nil
}

Summary

  • If you need a simple and efficient algorithm, use the fixed window, but beware that it can let through twice the intended traffic in a burst around window boundaries.
  • The sliding window avoids the doubled burst problem, but once the limit is reached it may reject traffic for the rest of the window.
  • If smoother traffic is needed, use the leaky bucket algorithm together with the producer-consumer pattern.
  • If a certain amount of burst traffic is acceptable, use the token bucket algorithm.
  • For multi-level rate limiting scenarios, the sliding log is the better fit.

All source code: /jiaxwu/limiter

This concludes the article on example Go code for implementing common rate limiting algorithms. For more on rate limiting algorithms in Go, please search for my previous articles or continue browsing the related articles below. I hope everyone will support me in the future!