SoFunction
Updated on 2025-04-14

Implementing RedLock in Redis: example code

A real example: your team has just launched a flash-sale system that uses Redis locks to prevent overselling. Everything ran fine in the test environment, but on the night of the promotion an item with 100 units in stock mysteriously oversold. Checking the logs revealed that at the moment users were frantically clicking, the Redis master node suddenly went down, and the newly promoted master had not yet received the lock information, so two users acquired the "same lock" at the same time.

This is a pitfall many developers have hit: you assume everything is fine once you use a Redis distributed lock, when in fact any of the following can invalidate it:

  • The master node crashes right after granting you the lock. The slave that takes over is baffled: "What lock? Never heard of it."
  • Your program stalls mid-processing (a GC pause, for example). By the time it comes back, the lock expired long ago.
  • The lock information does not propagate fully across the network, and multiple clients each believe they hold the lock.

To address these headaches, the author of Redis proposed the RedLock algorithm. In short, "don't put all your eggs in one basket": let multiple independent Redis nodes vote on lock ownership, and only consider the lock held when more than half of them agree.

But the scheme has also sparked heated debate; some even call it "mathematically unsafe". This article will, in the plainest possible terms:

  • Show why traditional Redis locks tend to fall over in cluster environments
  • Break down RedLock, a "majority rules" solution
  • Walk through implementing RedLock in Java, step by step
  • Show how the Redisson framework simplifies using RedLock

After reading it, you will understand that there is no perfect distributed lock, only choices that fit the scenario. The next time you design a system, you will at least know how much the lock in your hand can be trusted.

Flaws and challenges of cluster locks

In a Redis Cluster environment, a traditional SETNX-style distributed lock has a fatal flaw: master-slave failover can invalidate the lock.

Steps to reproduce the problem:

  • Client A runs SET key random_val NX PX 30000 on the master node and acquires the lock

  • The master node goes down; Redis Cluster triggers a failover and a slave is promoted to the new master

  • Because Redis master-slave replication is asynchronous, the lock may not yet have been replicated to the new master

  • Client B requests the lock for the same resource from the new master and acquires it, producing a data race

# Master node: write the lock
SET resource_1 8a3e72 NX PX 10000
OK

# The master goes down; a slave is promoted, but the lock was never replicated
# The new master handles client B's request
SET resource_1 5b9fd2 NX PX 10000
OK  # The lock is acquired a second time!
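The failure mode above can be reproduced with plain Java maps standing in for the two Redis instances. This is only an illustrative sketch (the class and method names are mine, and real replication is asynchronous network I/O, not a shared map), but it captures why an unreplicated lock lets a second client in:

```java
import java.util.HashMap;
import java.util.Map;

// Simulation of the failover scenario: two maps stand in for a Redis
// master and a slave whose replication has not caught up yet.
public class FailoverSimulation {

    /** SET key value NX: succeeds only if the key is absent. */
    static boolean setNx(Map<String, String> node, String key, String value) {
        return node.putIfAbsent(key, value) == null;
    }

    public static void main(String[] args) {
        Map<String, String> master = new HashMap<>();
        Map<String, String> slave = new HashMap<>(); // lock not replicated yet

        boolean clientA = setNx(master, "resource_1", "8a3e72"); // OK on the master
        // The master crashes before replicating; the slave is promoted
        Map<String, String> newMaster = slave;
        boolean clientB = setNx(newMaster, "resource_1", "5b9fd2"); // also OK!

        System.out.println(clientA && clientB); // true: two holders of "one" lock
    }
}
```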

Design and implementation of RedLock

With N independent Redis nodes (not in Cluster mode), the client is considered to have acquired the lock only when it locks successfully on more than half of the nodes and the total time taken is less than the lock's validity period.

Detailed explanation of implementation steps

Suppose you deploy 5 Redis nodes (N=5):

  • Get the current time: record a start time T1 (millisecond precision)

  • Request the lock from every node in turn:

SET lock_key $value NX PX $ttl
  • value: a globally unique value (such as a UUID)
  • ttl: the lock's automatic release time (such as 10 seconds)
  • Calculate the lock's validity: the client computes the total time spent acquiring the lock, T_elapsed = T2 - T1 (T2 is the time of the last response)

  • The lock is considered acquired only if both of the following hold:

    Number of nodes that granted the lock ≥ 3 (N/2 + 1)

    T_elapsed < ttl (ensuring the lock has not already expired)

  • On success, proceed to operate on the shared resource

  • Release the lock: send a Lua script to every node to delete the lock (the value must match)

if ("get",KEYS[1]) == ARGV[1] then
   return ("del",KEYS[1])
else
   return 0
end
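The acceptance check in the steps above (majority of nodes, total time under the TTL) can be sketched as pure Java. The class and method names here are mine, not from any library:

```java
// Minimal sketch of the RedLock acquisition check.
public class RedLockValidity {

    /**
     * Decides whether a RedLock acquisition attempt succeeded.
     *
     * @param totalNodes   N, the number of independent Redis nodes
     * @param successCount nodes that answered OK to SET ... NX PX
     * @param elapsedMs    T_elapsed = T2 - T1, total time spent asking all nodes
     * @param ttlMs        the lock's automatic release time
     */
    static boolean lockAcquired(int totalNodes, int successCount, long elapsedMs, long ttlMs) {
        int quorum = totalNodes / 2 + 1; // majority, e.g. 3 of 5
        return successCount >= quorum && elapsedMs < ttlMs;
    }

    public static void main(String[] args) {
        // 5 nodes, 3 OKs, 48ms spent against a 10s TTL: acquired
        System.out.println(lockAcquired(5, 3, 48, 10_000));    // true
        // Only 2 OKs: no majority, not acquired
        System.out.println(lockAcquired(5, 2, 48, 10_000));    // false
        // Majority, but acquisition alone took longer than the TTL
        System.out.println(lockAcquired(5, 3, 12_000, 10_000)); // false
    }
}
```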

The NPC controversy

Since its inception, the RedLock algorithm has been dogged by three core points of contention: N (network delay), P (process pause), and C (clock drift). These real-world uncertainties undermine RedLock's absolute safety in the mathematical sense.

Network delay: the fatal time gap

Problem scenario

  • The client acquires the lock on nodes A, B, and C; the total time is 48ms (less than the 50ms TTL)

  • But because of cross-datacenter network fluctuations, the actual expiry time of the lock differs from node to node:

    • Node A records the lock expiring at client local time + 50ms = T+50
    • Due to network delay, node B's lock actually expires at T+52
    • Due to network congestion, node C's lock actually expires as early as T+48
  • In the window between T+48 and T+50, the client believes the lock is still valid, but node C's lock has already expired

Consequence:
Other clients may acquire node C's lock during this window, splitting the lock state so that multiple clients enter the critical section at the same time.
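One way to reason about this, following the conservative rule in the original RedLock description (remaining validity = TTL minus elapsed acquisition time minus a clock-drift allowance), sketched in plain Java with illustrative names:

```java
// Sketch: the client must assume every node started its TTL countdown at T1
// (the earliest possible moment), because the node the request reached first
// also expires first.
public class LockValidityWindow {

    /** Conservative remaining validity after acquisition, in milliseconds. */
    static long safeValidityMs(long ttlMs, long elapsedMs, long driftMs) {
        return ttlMs - elapsedMs - driftMs;
    }

    public static void main(String[] args) {
        // TTL 50ms, acquisition took 48ms, 2ms drift allowance:
        // the safe window is already gone even though every node said OK.
        System.out.println(safeValidityMs(50, 48, 2));       // 0
        // A healthier ratio: TTL 10s, acquisition 48ms, 100ms drift allowance
        System.out.println(safeValidityMs(10_000, 48, 100)); // 9852
    }
}
```

The practical lesson is that the TTL should be much larger than the worst-case acquisition time, or the lock is unusable the moment it is granted.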

Process pause: Schrödinger's lock

Classic Case

// Pseudocode: execute business logic after acquiring the lock
if (lock.tryLock()) {
    // A Full GC fires here, pausing the process for 300ms
    doSomething();

    // By now the lock has expired, but the client is still writing data
    updateInventory();
}

Key timeline

  • T0: Acquire lock (TTL=200ms)
  • T0+100ms: Enter GC pause for 300ms
  • T0+400ms: GC ends, continue to execute business logic
  • The lock already expired at T0+200ms, but at T0+400ms the client still believes it holds the lock

Data disaster:
Other clients may modify the data between T0+200ms and T0+400ms, leaving the final result in a mess.
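The timeline above reduces to plain arithmetic (no Redis involved; the class and method names are mine). The lock's deadline passes while the JVM is paused, so the client resumes without realizing it has lost the lock:

```java
public class GcPauseTimeline {

    /** How far past the lock's expiry the client is when it wakes up. */
    static long msPastExpiry(long t0, long ttlMs, long pauseStartMs, long pauseMs) {
        long deadline = t0 + ttlMs;             // lock silently expires here
        long resumeAt = pauseStartMs + pauseMs; // client wakes up here
        return resumeAt - deadline;
    }

    public static void main(String[] args) {
        // T0=0, TTL=200ms, GC starts at T0+100ms and lasts 300ms:
        // the client resumes at T0+400ms, 200ms after the lock expired.
        System.out.println(msPastExpiry(0, 200, 100, 300)); // 200
    }
}
```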

Clock drift: space-time distortion

Measured clock offsets on physical machines:

Node     Clock error range   Common trigger
Node A   ±200ms/min          Virtual machine clock not synchronized
Node B   ±500ms/day          NTP service exception
Node C   ±10s/hour           Host hardware clock failure

Chain reaction:

  • The client computes the lock's validity period from its local clock (say, valid until T+1000ms)
  • But node B's clock runs 30 seconds ahead of real time, so the expiry it enforces works out to T+1000ms - 30000ms = T-29000ms in the client's frame
  • Node B therefore releases the lock well within the period the client still believes it is valid
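The chain reaction can be expressed as one line of arithmetic. This is an illustrative sketch (names are mine; a 1-second TTL is assumed for round numbers):

```java
public class ClockDriftEffect {

    /** When a node's wall clock runs aheadMs fast, the expiry it enforces
     *  lands aheadMs earlier in the client's time frame. */
    static long expiryInClientFrame(long clientSetTime, long ttlMs, long aheadMs) {
        return clientSetTime + ttlMs - aheadMs;
    }

    public static void main(String[] args) {
        // A 1000ms TTL on a node whose clock is 30s fast: in the client's
        // frame the lock "expires" at T-29000ms, i.e. it is already gone
        // before the client thinks it even started.
        System.out.println(expiryInClientFrame(0, 1_000, 30_000)); // -29000
    }
}
```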

Head-to-head confrontation between industry leaders

Martin Kleppmann (author of Designing Data-Intensive Applications)

"The assumption RedLock depends on, that the client can accurately sense how long the lock survives, simply cannot be guaranteed in an asynchronous distributed system. Even without node failures, NPC problems make the lock's state uncertain."

Antirez (author of Redis), in rebuttal

"In engineering practice, the risks can be kept under control by:

  • Using temperature-compensated atomic clock hardware

  • Disabling step (jump) adjustments in the NTP service

  • Monitoring process pauses (for example, via GC log analysis)

  • Adding a redundancy buffer to the lock's TTL (for example, an extra 20%)"
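That last mitigation can be made concrete with back-of-envelope arithmetic. A hedged sketch (the numbers and names are illustrative): size the TTL from the expected critical-section time plus the suggested 20% redundancy, instead of picking it blindly.

```java
public class TtlBudget {

    /** TTL = expected critical-section time + 20% redundancy buffer. */
    static long bufferedTtlMs(long expectedWorkMs) {
        return expectedWorkMs + expectedWorkMs / 5;
    }

    public static void main(String[] args) {
        // 10s of expected work -> 12s TTL
        System.out.println(bufferedTtlMs(10_000)); // 12000
    }
}
```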

Implementing RedLock in Java

Implementing RedLock with the Jedis client:

import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.params.SetParams;

import java.util.Collections;
import java.util.List;
import java.util.UUID;

/**
 * Handwritten RedLock with Jedis
 */
public class JedisRedLock {

    public static final int EXPIRE_TIME = 30_000;

    private final List<JedisPool> jedisPoolList;

    private final String lockKey;

    private final String lockValue;

    public JedisRedLock(List<JedisPool> jedisPoolList, String lockKey) {
        this.jedisPoolList = jedisPoolList;
        this.lockKey = lockKey;
        this.lockValue = UUID.randomUUID().toString();
    }

    public void lock() {
        while (!tryLock()) {
            try {
                Thread.sleep(100); // Wait briefly after a failed attempt
            } catch (InterruptedException e) {
                throw new RuntimeException(e);
            }
        }
    }

    public boolean tryLock() {
        long startTime = System.currentTimeMillis();
        int successCount = 0;
        boolean acquired = false;
        try {
            for (JedisPool jedisPool : jedisPoolList) {
                try (Jedis jedis = jedisPool.getResource()) {
                    // Atomic locking: SET lockKey UUID NX PX expireTime
                    String result = jedis.set(lockKey, lockValue,
                            SetParams.setParams().nx().px(EXPIRE_TIME));
                    if ("OK".equals(result)) {
                        successCount++;
                    }
                }
            }
            // Total time spent acquiring the lock
            long elapsedTime = System.currentTimeMillis() - startTime;

            // Verification: a majority of nodes succeeded and the total time is below the TTL
            acquired = successCount >= (jedisPoolList.size() / 2 + 1) && elapsedTime < EXPIRE_TIME;
            return acquired;
        } finally {
            // If acquisition failed (no majority, or it took too long),
            // immediately release any locks already obtained
            if (!acquired) {
                unlock();
            }
        }
    }

    public void unlock() {
        String script = "if redis.call('get', KEYS[1]) == ARGV[1] then " +
                "return redis.call('del', KEYS[1]) else return 0 end";

        for (JedisPool jedisPool : jedisPoolList) {
            try (Jedis jedis = jedisPool.getResource()) {
                jedis.eval(script, Collections.singletonList(lockKey),
                        Collections.singletonList(lockValue));
            }
        }
    }

}

Using the handwritten RedLock:

import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

/**
 * Using the handwritten RedLock
 */
public class JedisRedLockDemo {

    private volatile static int count;

    public static void main(String[] args) throws InterruptedException {
        List<JedisPool> jedisPoolList = new ArrayList<>();
        jedisPoolList.add(new JedisPool(new JedisPoolConfig(), "127.0.0.1", 6379));
        jedisPoolList.add(new JedisPool(new JedisPoolConfig(), "127.0.0.1", 6380));
        jedisPoolList.add(new JedisPool(new JedisPoolConfig(), "127.0.0.1", 6381));
        jedisPoolList.add(new JedisPool(new JedisPoolConfig(), "127.0.0.1", 6382));
        jedisPoolList.add(new JedisPool(new JedisPoolConfig(), "127.0.0.1", 6383));

        int threadCount = 3;
        CountDownLatch countDownLatch = new CountDownLatch(threadCount);
        ExecutorService executorService = Executors.newFixedThreadPool(threadCount);

        for (int i = 0; i < threadCount; i++) {
            executorService.execute(() -> {
                JedisRedLock jedisRedLock = new JedisRedLock(jedisPoolList, "lock-key");
                jedisRedLock.lock();
                try {
                    System.out.println(Thread.currentThread().getName() + " acquired the lock, executing business logic...");
                    try {
                        TimeUnit.SECONDS.sleep(3);
                    } catch (InterruptedException e) {
                        throw new RuntimeException(e);
                    }
                    System.out.println(Thread.currentThread().getName() + " finished business logic, releasing the lock...");
                    count++;
                } finally {
                    jedisRedLock.unlock();
                }
                countDownLatch.countDown();
            });
        }

        countDownLatch.await();
        executorService.shutdown();
        System.out.println(count);
    }

}

Using RedLock in Redisson

Redisson ships with a RedLock implementation that handles node communication and lock renewal automatically:

import org.redisson.Redisson;
import org.redisson.RedissonRedLock;
import org.redisson.api.RLock;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

/**
 * Using RedLock with Redisson
 */
public class RedissonRedLockDemo {

    private volatile static int count;

    public static void main(String[] args) throws InterruptedException {

        List<String> serverList = Arrays.asList("redis://127.0.0.1:6379", "redis://127.0.0.1:6380", "redis://127.0.0.1:6381",
                "redis://127.0.0.1:6382", "redis://127.0.0.1:6383");

        List<RedissonClient> redissonClientList = new ArrayList<>(serverList.size());
        for (String server : serverList) {
            Config config = new Config();
            config.useSingleServer()
                    .setAddress(server);

            redissonClientList.add(Redisson.create(config));
        }

        List<RLock> lockList = new ArrayList<>(redissonClientList.size());
        for (RedissonClient redissonClient : redissonClientList) {
            lockList.add(redissonClient.getLock("java-lock"));
        }

        int threadCount = 3;
        CountDownLatch countDownLatch = new CountDownLatch(threadCount);
        ExecutorService executorService = Executors.newFixedThreadPool(threadCount);

        for (int i = 0; i < threadCount; i++) {
            executorService.execute(() -> {
                RedissonRedLock redissonRedLock = new RedissonRedLock(lockList.toArray(new RLock[0]));
                redissonRedLock.lock();
                try {
                    System.out.println(Thread.currentThread().getName() + " acquired the lock, executing business logic...");
                    try {
                        TimeUnit.SECONDS.sleep(3);
                    } catch (InterruptedException e) {
                        throw new RuntimeException(e);
                    }
                    System.out.println(Thread.currentThread().getName() + " finished business logic, releasing the lock...");
                    count++;
                } finally {
                    redissonRedLock.unlock();
                }
                countDownLatch.countDown();
            });
        }

        countDownLatch.await();
        executorService.shutdown();
        System.out.println(count);

        for (RedissonClient redissonClient : redissonClientList) {
            redissonClient.shutdown();
        }
    }

}

Redisson advantages

  • Automatic renewal: the watchdog mechanism extends the lock's validity period while it is held
  • Simplified API: the underlying details are encapsulated, with support for asynchronous/reactive programming
  • Fault tolerance: downed nodes are skipped automatically, as long as a majority still succeeds

Summary

RedLock significantly improves the reliability of distributed locks through its multi-node voting mechanism, but you must weigh that against its implementation complexity and operational cost. RedLock is recommended in the following scenarios:

  • Deployment spans datacenters or regions
  • The business has extremely high data-consistency requirements
  • You already have the capability to operate independent Redis nodes

For most scenarios, prefer a mature framework such as Redisson rather than reinventing the wheel. If the consistency requirements are extreme, consider solutions built on consensus algorithms such as ZooKeeper/etcd.

This concludes the article on implementing RedLock with Redis. For more on this topic, please search my previous articles or continue browsing the related articles below. I hope you will keep supporting me!