SoFunction
Updated on 2025-04-13

C++ synchronization methods in multi-threaded concurrency scenarios

If access to global variables in a multithreaded program is not properly synchronized (for example, with mutexes or atomic variables), race conditions arise when multiple threads access and modify those variables at the same time. Such race conditions can lead to a range of nondeterministic and serious consequences.

In C++, mutually exclusive access to global variables can be achieved with mutexes, atomic operations, and read-write locks.

1. The consequences of lack of synchronization control

1. Data Race

A data race occurs when multiple threads access the same variable concurrently and at least one of them writes to it without synchronization. Without a synchronization mechanism, the threads' operations on the global variable can interfere with each other, producing unpredictable values.

Example:

#include <iostream>
#include <thread>

int globalVar = 0;

void increment() {
    globalVar++;
}

int main() {
    std::thread t1(increment);
    std::thread t2(increment);
    t1.join();
    t2.join();

    std::cout << "Global variable: " << globalVar << std::endl;
    return 0;
}

Explanation:

  • In the above code, globalVar++ is not an atomic operation: it consists of three steps (read, increment, write back). Here t1 and t2 may read the value of globalVar at the same time and both write back the same updated value, so the final result can be smaller than the expected 2. This is a typical data race.

2. Inconsistent State

Without synchronization control, multiple threads may read and write a global variable simultaneously, leaving it in an inconsistent state that does not match expectations.

Example: suppose a program maintains a global counter. Without a lock to ensure thread safety, the counter may be left with a meaningless value when two threads run simultaneously.

#include <iostream>
#include <thread>

int counter = 0;

void increment() {
    for (int i = 0; i < 100000; ++i) {
        counter++;  // Non-thread-safe operation
    }
}

int main() {
    std::thread t1(increment);
    std::thread t2(increment);

    t1.join();
    t2.join();

    std::cout << "Counter: " << counter << std::endl;
    return 0;
}

Explanation:

  • Without synchronization, counter++ can cause multiple threads to read the same value of counter and write back the same updated value, so the final value of counter ends up much smaller than the expected 200000.
  • This can cause business-logic errors in the program, especially if the global variable identifies a critical state.

3. Crash or program undefined behavior

Due to data races or inconsistent state, the program may enter an unpredictable state and crash. The value of a global variable can be corrupted by multi-threaded contention, resulting in undefined behavior.

For example:

  • Access to freed memory: one thread modifies the global variable and frees the associated memory while other threads still try to access that memory.
  • Memory overwrite: multiple threads modify global variables at the same time, so different threads' operations overwrite each other, which can lead to crashes.

2. Synchronization with the mutex std::mutex

std::mutex is a mechanism in the C++ standard library for avoiding race conditions when multiple threads access the same resource (such as a global variable) at the same time.

Here is an example showing how to use std::mutex to protect a global variable:

#include <iostream>
#include <thread>
#include <mutex>

std::mutex mtx;      // Define the global mutex
int globalVar = 0;   // Define the global variable

void threadFunction() {
    std::lock_guard<std::mutex> lock(mtx);  // Lock to ensure mutual exclusion
    // Access and modify the global variable
    ++globalVar;
    std::cout << "Global variable: " << globalVar << std::endl;
    // The lock is automatically released when lock_guard leaves scope
}

int main() {
    std::thread t1(threadFunction);
    std::thread t2(threadFunction);

    t1.join();
    t2.join();

    return 0;
}

Explanation:

  • std::mutex: protects shared resources (such as global variables).
  • std::lock_guard<std::mutex>: a RAII-style wrapper that locks on construction and unlocks on destruction, ensuring thread safety.
  • threadFunction: each thread acquires the mutex before accessing globalVar, which guarantees the global variable is never accessed and modified by two threads at once.

Using std::mutex prevents different threads from racing to access the global variable, avoiding errors and inconsistencies.

If you need finer-grained control, consider std::unique_lock; it is more flexible than std::lock_guard and allows manual control over acquiring and releasing the lock.

3. Synchronization with the exclusive lock std::unique_lock

std::unique_lock is a mutex wrapper in the C++11 standard library that provides more flexible lock management than std::lock_guard. It allows manual control over acquiring and releasing the lock, rather than only releasing it automatically at the end of the object's lifetime (as std::lock_guard does). This makes it suitable for more complex scenarios, such as locking and unlocking several times within the same scope, or performing other operations while holding the lock.

Key features of std::unique_lock:

  • Manual control of lock acquisition and release: std::unique_lock supports manual unlocking and re-locking, making it more flexible than std::lock_guard.
  • Deferred locking and early unlocking: you can defer locking when the object is created, or release the lock manually after locking.
  • Condition variable support: std::unique_lock works with condition variables, which std::lock_guard cannot do.

Basic usage:

1. Automatic locking during construction:

By default, std::unique_lock locks the mutex automatically during construction.

#include <iostream>
#include <thread>
#include <mutex>

std::mutex mtx;

void threadFunction() {
    std::unique_lock<std::mutex> lock(mtx);  // Locks automatically during construction
    std::cout << "Thread is running\n";
    // Critical-section work
    // The lock is released automatically when the lock object goes out of scope
}

int main() {
    std::thread t1(threadFunction);
    std::thread t2(threadFunction);

    t1.join();
    t2.join();

    return 0;
}

2. Manual unlocking and re-locking:

std::unique_lock allows you to unlock and re-lock manually, which is very useful in scenarios where the lock needs to be released temporarily.

#include <iostream>
#include <thread>
#include <mutex>

std::mutex mtx;

void threadFunction() {
    std::unique_lock<std::mutex> lock(mtx);  // Locks automatically during construction
    std::cout << "Thread is running\n";

    // Critical-section work
    lock.unlock();  // Manual unlock

    std::cout << "Lock released temporarily\n";

    // Work outside the critical section

    lock.lock();  // Re-lock

    std::cout << "Lock acquired again\n";
    // Critical-section work continues
}

int main() {
    std::thread t1(threadFunction);
    std::thread t2(threadFunction);

    t1.join();
    t2.join();

    return 0;
}

3. Delay locking:

std::unique_lock also allows deferred locking: pass std::defer_lock to the constructor to create an unlocked std::unique_lock, then call lock() manually later to acquire the lock.

#include <iostream>
#include <thread>
#include <mutex>

std::mutex mtx;

void threadFunction() {
    std::unique_lock<std::mutex> lock(mtx, std::defer_lock);  // Defer locking
    std::cout << "Thread is preparing to run\n";

    // Do some work that does not require the lock

    lock.lock();  // Manual locking
    std::cout << "Thread is running under lock\n";

    // Critical-section work
}

int main() {
    std::thread t1(threadFunction);
    std::thread t2(threadFunction);

    t1.join();
    t2.join();

    return 0;
}

4. Condition variables:

std::unique_lock is ideal for use with condition variables because it supports manually unlocking and re-locking the mutex. This is exactly what condition variables need: the mutex must be released while waiting for the condition and re-acquired when the condition is met.

#include <iostream>
#include <thread>
#include <mutex>
#include <chrono>
#include <condition_variable>

std::mutex mtx;
std::condition_variable cv;
bool ready = false;

void threadFunction() {
    std::unique_lock<std::mutex> lock(mtx);  // Lock
    while (!ready) {   // Wait for ready to become true
        cv.wait(lock); // Wait: unlocks the mutex and suspends the thread
    }
    std::cout << "Thread is running\n";
}

void notify() {
    std::this_thread::sleep_for(std::chrono::seconds(1));  // Simulate some work
    std::cout << "Notifying the threads\n";
    std::unique_lock<std::mutex> lock(mtx);  // Lock
    ready = true;
    cv.notify_all();  // Notify all waiting threads
}

int main() {
    std::thread t1(threadFunction);
    std::thread t2(threadFunction);

    std::thread notifier(notify);

    t1.join();
    t2.join();
    notifier.join();

    return 0;
}

Explanation:

  • std::condition_variable and std::unique_lock:

    • In threadFunction, cv.wait(lock) releases the lock while the thread waits to be notified on the condition variable.
    • std::unique_lock automatically releases the lock when wait is called and re-acquires it when wait returns, which makes it the natural choice for condition variables.
  • cv.notify_all(): notifies all threads waiting on this condition; t1 and t2 resume once the condition is met.

4. Synchronization with the shared lock std::shared_mutex

std::shared_mutex is a synchronization primitive introduced in C++17. It provides a read-write lock mechanism: multiple threads may share read access to the same resource, while only one thread at a time may write to it exclusively. Compared with the traditional std::mutex (which supports only exclusive locking), std::shared_mutex can improve concurrency, especially when read operations far outnumber writes.

How std::shared_mutex works:

  • Shared lock: multiple threads can acquire the shared lock at the same time, meaning multiple threads can read the shared resource concurrently without conflict.
  • Exclusive lock: only one thread can acquire the exclusive lock; a write operation blocks all other operations (reads and writes alike) to ensure data consistency.

Use std::shared_mutex:

std::shared_mutex is used with two kinds of locks:

  • std::unique_lock<std::shared_mutex>: Used to acquire exclusive locks.
  • std::shared_lock<std::shared_mutex>: Used to acquire a shared lock.

1. Basic usage examples:

#include <iostream>
#include <thread>
#include <shared_mutex>
#include <vector>

std::shared_mutex mtx;  // Define a shared_mutex
int sharedData = 0;

void readData(int threadId) {
    std::shared_lock<std::shared_mutex> lock(mtx);  // Acquire the shared lock
    std::cout << "Thread " << threadId << " is reading data: " << sharedData << std::endl;
}

void writeData(int threadId, int value) {
    std::unique_lock<std::shared_mutex> lock(mtx);  // Acquire the exclusive lock
    sharedData = value;
    std::cout << "Thread " << threadId << " is writing data: " << sharedData << std::endl;
}

int main() {
    std::vector<std::thread> threads;

    // Start multiple threads for read operations
    for (int i = 0; i < 5; ++i) {
        threads.push_back(std::thread(readData, i));
    }

    // Start a thread for the write operation
    threads.push_back(std::thread(writeData, 100, 42));

    // Wait for all threads to finish
    for (auto& t : threads) {
        t.join();
    }

    return 0;
}

Explanation:

  • Shared lock (std::shared_lock): readData uses std::shared_lock to acquire the shared lock, allowing multiple threads to read sharedData at the same time, because concurrent reads are safe.
  • Exclusive lock (std::unique_lock): writeData uses std::unique_lock to acquire the exclusive lock, ensuring that only one thread writes sharedData at a time; the write blocks all other threads (both readers and writers).

2. Concurrent control of multiple read threads and a single write thread:

In this example, multiple reader threads can run in parallel because they all acquire the shared lock. Only while the writer thread holds the exclusive lock are all other threads (readers and writers alike) blocked.

  • Write operation: acquires the exclusive lock; all reads and writes are blocked until the write completes.
  • Read operation: multiple threads can acquire the shared lock at the same time; reads proceed as long as no write is in progress.

3. The conflict between shared locks and exclusive locks:

  • Shared lock: Multiple threads can acquire shared locks at the same time, as long as no thread holds the exclusive lock. Shared locks do not block other shared lock requests.
  • Exclusive Lock: When a thread holds an exclusive lock, any other thread's shared lock or exclusive lock request will be blocked until the exclusive lock is released.

4. Usage scenarios:

std::shared_mutex is mainly suited to read-heavy, write-light scenarios. Suppose a resource (such as a cache or a data structure) is read by multiple threads most of the time but occasionally needs to be updated. In this case, std::shared_mutex lets multiple read operations execute in parallel while avoiding unnecessary blocking from write operations.

For example:

  • Cache data reading: Multiple threads can read data in the cache concurrently, and when the cache needs to be updated, the exclusive lock ensures data consistency.
  • Concurrent query and modification of database: Multiple threads can concurrently query the database, but only one thread can perform write operations.

5. Comparison between std::shared_mutex and std::mutex:

  • std::mutex: Provides exclusive locks, suitable for scenarios where write operations are frequent and no concurrent reading is required. Each time the lock is added, other threads cannot enter the critical section.
  • std::shared_mutex: Suitable for scenarios where more reads and fewer writes, allowing multiple threads to read shared resources at the same time, but write operations will block all other operations.

6. Performance considerations:

  • When reads are frequent: std::shared_mutex can improve concurrency, because multiple threads can read the data at the same time.
  • When writes are frequent: performance may be lower than std::mutex, because writes require exclusive access and block all other operations.

7. Condition variables:

Like std::mutex, std::shared_mutex can also be used with condition variables, but it requires std::condition_variable_any (std::condition_variable works only with std::unique_lock<std::mutex>), and each thread must lock and unlock the appropriate lock type.

#include <iostream>
#include <thread>
#include <shared_mutex>
#include <condition_variable>

std::shared_mutex mtx;
std::condition_variable_any cv;
int sharedData = 0;

void readData() {
    std::shared_lock<std::shared_mutex> lock(mtx);  // Acquire the shared lock
    while (sharedData == 0) {  // Wait for data to be available
        cv.wait(lock);         // Wait for data to be written
    }
    std::cout << "Reading data: " << sharedData << std::endl;
}

void writeData(int value) {
    std::unique_lock<std::shared_mutex> lock(mtx);  // Acquire the exclusive lock
    sharedData = value;
    std::cout << "Writing data: " << sharedData << std::endl;
    cv.notify_all();  // Notify all waiting threads
}

int main() {
    std::thread reader(readData);
    std::thread writer(writeData, 42);

    reader.join();
    writer.join();

    return 0;
}

Explanation:

  • std::shared_lock: a shared read lock, allowing multiple threads to read simultaneously.
  • cv.wait(lock): waits for the condition to change while holding the shared lock; the lock is released during the wait and re-acquired afterwards.
  • cv.notify_all(): notifies all waiting threads, waking them to continue execution.

5. Synchronization with std::atomic

std::atomic is a template introduced in the C++11 standard for atomic operations. An atomic operation cannot be interrupted during execution, which guarantees data consistency and correctness.

std::atomic provides basic atomic operations that are indivisible and thread-safe in a multi-threaded environment. It is mainly used for data synchronization and coordination, avoiding the performance bottlenecks of traditional synchronization primitives (such as locks and condition variables).

Basic concepts of atomic operations:

  • Atomicity: an operation cannot be interrupted during execution, ensuring there are no race conditions on shared variables between threads.
  • Memory ordering: controls the execution order of operations and the visibility of shared data. std::atomic allows explicitly specifying synchronization behavior between threads through memory orders.

Atomic operations provided by std::atomic:

  • Load: Read data from atomic variables.
  • Store: Store data into atomic variables.

std::atomic Supported Memory Ordering:

  • std::memory_order_acquire: ensures that operations after the load are not reordered before it.
  • std::memory_order_release: ensures that operations before the store are not reordered after it.

Usually, when synchronizing with std::atomic, the common pattern is to use memory_order_release on the store and memory_order_acquire on the load, especially in producer-consumer or similar synchronization patterns.

memory_order_release and memory_order_acquire are generally used in combination.

This combination is to ensure consistency in memory order and ensure correct visibility of data. Specifically:

  • memory_order_release: when performing a store, it guarantees that none of the operations before the store (such as data writes) are reordered after it, ensuring the current thread's writes are visible to other threads. Thus, any thread that observes the stored value also sees all writes that preceded the store.

  • memory_order_acquire: when performing a load, it guarantees that none of the operations after the load (such as data reads) are reordered before it. Once the load reads the value, all writes made before the matching release store (including writes to shared variables) are visible to the current thread.

The two are used in conjunction to ensure synchronization between threads and avoid data race conditions.

Specific scenarios

Consider a producer-consumer model where the producer is responsible for writing data and notifying consumers, and the consumer is responsible for reading data and processing it.

Example:

#include <iostream>
#include <atomic>
#include <thread>

std::atomic<int> data(0);
std::atomic<bool> ready(false);

void consumer() {
    while (!ready.load(std::memory_order_acquire)) {
        // Wait for ready to become true
    }
    std::cout << "Data: " << data.load(std::memory_order_relaxed) << std::endl;
}

void producer() {
    data.store(42, std::memory_order_relaxed);     // Write the data
    ready.store(true, std::memory_order_release);  // Set ready to true
}

int main() {
    std::thread t1(consumer);
    std::thread t2(producer);

    t1.join();
    t2.join();

    return 0;
}

Explanation:

  • ready.store(true, std::memory_order_release): when the producer writes ready with memory_order_release, all operations before the store (such as the write to data) become visible to any thread that observes ready == true.

  • ready.load(std::memory_order_acquire): when the consumer reads ready with memory_order_acquire, it is guaranteed to see everything the producer did before its store to ready (such as the value of data).

This combination ensures that the producer's writes (e.g. data.store(42)) are visible to the consumer: after reading ready as true, the consumer can safely read the updated data.

This concludes the article on C++ synchronization methods in multi-threaded concurrency scenarios. For more related C++ multi-threaded synchronization content, please search previous articles or continue browsing the related articles below. I hope you will continue to offer your support!