In a multithreaded program, if access to global variables is not properly synchronized (with mutexes, atomic variables, and so on), race conditions occur when multiple threads access and modify the variables at the same time. Such race conditions can lead to a range of unpredictable and serious consequences.
In C++, synchronized access to global variables can be achieved with mutexes, read-write locks, and atomic operations.
1. Consequences of missing synchronization
1. Data race
A data race occurs when multiple threads access the same variable concurrently and at least one of them writes it without synchronization. Because there is no synchronization mechanism, the threads' operations on the global variable can interfere with each other, leaving the variable with an unpredictable value.
Example:
```cpp
#include <iostream>
#include <thread>

int globalVar = 0;

void increment() {
    globalVar++;
}

int main() {
    std::thread t1(increment);
    std::thread t2(increment);
    t1.join();
    t2.join();
    std::cout << "Global variable: " << globalVar << std::endl;
    return 0;
}
```
Result:
- In the code above, `globalVar++` is not an atomic operation: it consists of three steps (read, add, write back). Here `t1` and `t2` may read `globalVar` at the same time and then both write back their modified values, so the final result can be smaller than the expected 2. This is a typical data race.
2. Inconsistent State
Without synchronization, multiple threads may read and write a global variable simultaneously, leaving it in an inconsistent state, so that the final value does not match expectations.
Example: suppose a program maintains a global counter. Without a lock to ensure thread safety, the counter can end up with a meaningless value when two threads run at the same time.
```cpp
#include <iostream>
#include <thread>

int counter = 0;

void increment() {
    for (int i = 0; i < 100000; ++i) {
        counter++; // not thread-safe
    }
}

int main() {
    std::thread t1(increment);
    std::thread t2(increment);
    t1.join();
    t2.join();
    std::cout << "Counter: " << counter << std::endl;
    return 0;
}
```
Result:
- Without synchronization, `counter++` lets both threads read the same counter value and write back the same updated value, so the final value of `counter` ends up far smaller than the expected 200000.
- This can cause business-logic errors in the program, especially when a global variable represents a critical piece of state.
3. Crashes or undefined behavior
Because of data races and inconsistent state, the program can enter an unpredictable state and crash. The value of a global variable may be corrupted by concurrent modification, resulting in undefined behavior.
For example:
- Use-after-free: one thread modifies a global pointer and frees the memory it referenced, while other threads still try to access that memory.
- Overwritten updates: multiple threads modify global variables at the same time, so the threads' operations overwrite each other, corrupting state and potentially crashing the program.
2. Mutex synchronization with std::mutex
`std::mutex` is the C++ standard library's mechanism for avoiding race conditions when multiple threads access the same resource (such as a global variable) at the same time.
Here is an example showing how to use `std::mutex` to protect a global variable:
```cpp
#include <iostream>
#include <thread>
#include <mutex>

std::mutex mtx;     // global mutex
int globalVar = 0;  // global variable

void threadFunction() {
    std::lock_guard<std::mutex> lock(mtx); // lock to ensure mutual exclusion
    // access and modify the global variable
    ++globalVar;
    std::cout << "Global variable: " << globalVar << std::endl;
    // the lock is released automatically when lock_guard leaves scope
}

int main() {
    std::thread t1(threadFunction);
    std::thread t2(threadFunction);
    t1.join();
    t2.join();
    return 0;
}
```
Explanation:
- `std::mutex`: protects the shared resource (the global variable).
- `std::lock_guard<std::mutex>`: an RAII-style wrapper that locks on construction and unlocks on destruction, ensuring thread safety.
- In `threadFunction`, each thread acquires the mutex before touching `globalVar`, which guarantees that the global variable is never accessed and modified by two threads at once.
Using `std::mutex` prevents errors and inconsistencies caused by threads competing to access global variables.
When you need finer-grained control, consider `std::unique_lock`: it is more flexible than `std::lock_guard` and allows manual control over acquiring and releasing the lock.
3. Exclusive lock synchronization with std::unique_lock
`std::unique_lock` is a mutex wrapper in the C++11 standard library that offers more flexible lock management than `std::lock_guard`. It allows the lock to be acquired and released manually, rather than only at the end of the object's lifetime (as `std::lock_guard` does). This makes it suitable for more complex scenarios, such as locking and unlocking several times within one scope, or performing other work while the lock is temporarily released.
Key features of std::unique_lock:
- Manual control over the lock: it supports unlocking and re-locking on demand, which `std::lock_guard` does not.
- Deferred locking and early unlocking: you can defer locking at construction, or release a held lock before the scope ends.
- Condition-variable support: `std::unique_lock` works with condition variables, which `std::lock_guard` cannot.
Basic usage:
1. Automatic locking on construction:
By default, `std::unique_lock` locks the mutex in its constructor.
```cpp
#include <iostream>
#include <thread>
#include <mutex>

std::mutex mtx;

void threadFunction() {
    std::unique_lock<std::mutex> lock(mtx); // locks on construction
    std::cout << "Thread is running\n";     // critical section
    // the lock is released automatically when `lock` goes out of scope
}

int main() {
    std::thread t1(threadFunction);
    std::thread t2(threadFunction);
    t1.join();
    t2.join();
    return 0;
}
```
2. Manual unlocking and re-locking:
`std::unique_lock` lets you unlock and re-lock while it is alive, which is very useful when the lock needs to be released temporarily.
#include <iostream> #include <thread> #include <mutex> std::mutex mtx; void threadFunction() { std::unique_lock<std::mutex> lock(mtx); // Automatic locking during construction std::cout << "Thread is running\n"; // Operation of critical zones (); // Manual unlock std::cout << "Lock released temporarily\n"; // Operations outside the critical area (); // Re-lock std::cout << "Lock acquired again\n"; // Critical area operation continues} int main() { std::thread t1(threadFunction); std::thread t2(threadFunction); (); (); return 0; }
3. Deferred locking:
`std::unique_lock` also supports deferred locking: passing `std::defer_lock` to the constructor creates an unlocked `std::unique_lock`, and you can call `lock()` manually later.
```cpp
#include <iostream>
#include <thread>
#include <mutex>

std::mutex mtx;

void threadFunction() {
    std::unique_lock<std::mutex> lock(mtx, std::defer_lock); // deferred locking
    std::cout << "Thread is preparing to run\n"; // work that needs no lock

    lock.lock();                                 // manual lock
    std::cout << "Thread is running under lock\n"; // critical section
}

int main() {
    std::thread t1(threadFunction);
    std::thread t2(threadFunction);
    t1.join();
    t2.join();
    return 0;
}
```
4. Condition variables:
`std::unique_lock` is ideal for use with condition variables because it supports unlocking and re-locking the mutex. This is exactly what condition variables require: the mutex must be released while waiting for the condition and re-acquired when the condition is met.
```cpp
#include <iostream>
#include <thread>
#include <mutex>
#include <chrono>
#include <condition_variable>

std::mutex mtx;
std::condition_variable cv;
bool ready = false;

void threadFunction() {
    std::unique_lock<std::mutex> lock(mtx); // lock
    while (!ready) {   // wait for ready to become true
        cv.wait(lock); // releases the mutex and suspends the thread
    }
    std::cout << "Thread is running\n";
}

void notify() {
    std::this_thread::sleep_for(std::chrono::seconds(1)); // simulate some work
    std::cout << "Notifying the threads\n";
    std::unique_lock<std::mutex> lock(mtx); // lock
    ready = true;
    cv.notify_all(); // notify all waiting threads
}

int main() {
    std::thread t1(threadFunction);
    std::thread t2(threadFunction);
    std::thread notifier(notify);
    t1.join();
    t2.join();
    notifier.join();
    return 0;
}
```
Explanation:
- `std::condition_variable` with `std::unique_lock`: in `threadFunction`, `cv.wait(lock)` releases the lock and blocks until the condition variable is notified.
- `std::unique_lock` automatically releases the lock when `wait` is called and re-acquires it when `wait` returns, which makes it the natural choice for working with condition variables.
- `cv.notify_all()`: notifies all threads waiting on this condition; `t1` and `t2` resume execution once the condition holds.
4. Shared lock synchronization with std::shared_mutex
`std::shared_mutex` is a synchronization primitive introduced in C++17. It provides a read-write lock: multiple threads may read the same resource concurrently, while only one thread at a time may write to it exclusively. Compared with the traditional `std::mutex` (which only supports exclusive locking), `std::shared_mutex` improves concurrency, especially when read operations far outnumber writes.
How std::shared_mutex works:
- Shared lock: several threads can hold the shared lock at the same time, so several threads can read the shared resource concurrently without conflict.
- Exclusive lock: only one thread can hold the exclusive lock; a write operation blocks all other operations (reads and writes) to keep the data consistent.
Using std::shared_mutex:
`std::shared_mutex` is used with two lock types:
- `std::unique_lock<std::shared_mutex>`: acquires the exclusive lock.
- `std::shared_lock<std::shared_mutex>`: acquires the shared lock.
1. Basic usage examples:
```cpp
#include <iostream>
#include <thread>
#include <shared_mutex>
#include <vector>

std::shared_mutex mtx; // shared_mutex protecting sharedData
int sharedData = 0;

void readData(int threadId) {
    std::shared_lock<std::shared_mutex> lock(mtx); // acquire the shared lock
    std::cout << "Thread " << threadId << " is reading data: " << sharedData << std::endl;
}

void writeData(int threadId, int value) {
    std::unique_lock<std::shared_mutex> lock(mtx); // acquire the exclusive lock
    sharedData = value;
    std::cout << "Thread " << threadId << " is writing data: " << sharedData << std::endl;
}

int main() {
    std::vector<std::thread> threads;

    // start several reader threads
    for (int i = 0; i < 5; ++i) {
        threads.push_back(std::thread(readData, i));
    }

    // start one writer thread
    threads.push_back(std::thread(writeData, 100, 42));

    // wait for all threads to finish
    for (auto& t : threads) {
        t.join();
    }
    return 0;
}
```
Explanation:
- Shared lock (`std::shared_lock`): `readData` uses `std::shared_lock` to acquire the shared lock, which allows multiple threads to read `sharedData` at the same time, since concurrent reads are safe.
- Exclusive lock (`std::unique_lock`): `writeData` uses `std::unique_lock` to acquire the exclusive lock, which ensures only one thread at a time can write `sharedData`; while it is held, all other threads (readers and writers) are blocked.
2. Concurrency control for multiple readers and a single writer:
In this example, the reader threads can run in parallel because they all take the shared lock. Only while the writer thread holds the exclusive lock are the other threads (readers or writers) blocked.
- Write operation: acquires the exclusive lock; all reads and writes are blocked until the write completes.
- Read operation: several threads can hold the shared lock at once; readers proceed only when no write is in progress.
3. How shared and exclusive locks conflict:
- Shared lock: multiple threads can hold the shared lock simultaneously as long as no thread holds the exclusive lock; shared locks do not block other shared-lock requests.
- Exclusive lock: while a thread holds the exclusive lock, every other thread's shared-lock or exclusive-lock request blocks until the exclusive lock is released.
4. Usage scenarios:
`std::shared_mutex` is mainly suited to read-mostly workloads. Suppose a resource (such as a cache or a data structure) is read by many threads most of the time but only occasionally updated. In that case, `std::shared_mutex` lets the reads run in parallel while the occasional write still gets exclusive access, avoiding unnecessary blocking.
For example:
- Cache reads: many threads can read cached data concurrently; when the cache must be updated, the exclusive lock ensures data consistency.
- Concurrent database query and modification: many threads can query concurrently, but only one thread at a time performs writes.
5. std::shared_mutex vs. std::mutex:
- `std::mutex`: provides only an exclusive lock; appropriate when writes are frequent and concurrent reads are not needed. While it is locked, no other thread can enter the critical section.
- `std::shared_mutex`: appropriate for read-mostly workloads; many threads can read the shared resource at the same time, but a write operation blocks everything else.
6. Performance considerations:
- When reads dominate: `std::shared_mutex` improves concurrency, because multiple threads can read the data at the same time.
- When writes dominate: performance may be lower than `std::mutex`, because each write needs exclusive access and blocks all other operations.
7. Condition variables:
Like `std::mutex`, `std::shared_mutex` can be used together with condition variables. Note, however, that `std::condition_variable` only works with `std::unique_lock<std::mutex>`, so with a `std::shared_mutex` you must use `std::condition_variable_any`, and each thread must lock and unlock the appropriate lock type.
```cpp
#include <iostream>
#include <thread>
#include <shared_mutex>
#include <condition_variable>

std::shared_mutex mtx;
std::condition_variable_any cv;
int sharedData = 0;

void readData() {
    std::shared_lock<std::shared_mutex> lock(mtx); // acquire the shared lock
    while (sharedData == 0) { // wait until data is available
        cv.wait(lock);        // releases the lock while waiting
    }
    std::cout << "Reading data: " << sharedData << std::endl;
}

void writeData(int value) {
    std::unique_lock<std::shared_mutex> lock(mtx); // acquire the exclusive lock
    sharedData = value;
    std::cout << "Writing data: " << sharedData << std::endl;
    cv.notify_all(); // notify all waiting threads
}

int main() {
    std::thread reader(readData);
    std::thread writer(writeData, 42);
    reader.join();
    writer.join();
    return 0;
}
```
Explanation:
- `std::shared_lock`: a shared read lock that lets multiple threads read simultaneously.
- `cv.wait(lock)`: waits for the condition to change, releasing the shared lock until notified.
- `cv.notify_all()`: wakes all waiting threads so they can continue execution.
5. Atomic synchronization with std::atomic
`std::atomic` is a type introduced in C++11 for atomic operations: operations that cannot be interrupted partway through, which guarantees the consistency and correctness of the data.
`std::atomic` provides a set of basic atomic operations that are indivisible and thread-safe in a multithreaded environment. It is mainly used for data synchronization and coordination, avoiding the performance overhead of traditional synchronization primitives such as locks and condition variables.
Basic concepts of atomic operations:
- Atomicity: an operation cannot be interrupted mid-way, so there are no race conditions on the shared variable between threads.
- Memory ordering: controls the order in which operations execute and become visible across threads. `std::atomic` lets you specify the synchronization behavior between threads explicitly through a memory order.
Atomic operations provided by std::atomic:
- Load: read the value of the atomic variable.
- Store: write a value into the atomic variable.
- Read-modify-write operations such as `fetch_add`, `exchange`, and `compare_exchange_strong`, which read, modify, and write back the value as one indivisible step.
Memory orderings supported by std::atomic:
- `std::memory_order_acquire`: used on a load; guarantees that no subsequent operation is reordered before the load.
- `std::memory_order_release`: used on a store; guarantees that no preceding operation is reordered after the store.
When synchronizing with `std::atomic`, the common pattern is to use `memory_order_release` on the `store` and `memory_order_acquire` on the `load`, especially in producer-consumer or similar synchronization patterns. `memory_order_release` and `memory_order_acquire` are generally used as a pair.
This combination guarantees a consistent memory order and correct visibility of data. Specifically:
- `memory_order_release`: on a `store`, ensures that all operations before the store (such as data writes) are not reordered after it, so the current thread's earlier writes become visible to any thread that observes the store.
- `memory_order_acquire`: on a `load`, ensures that all operations after the load are not reordered before it, so once the thread has read the stored value, it also sees all the writes (including writes to shared variables) that the storing thread performed before its release store.
Used together, they synchronize the two threads and prevent data races on the values published this way.
A concrete scenario
Consider a producer-consumer model: the producer writes data and signals that it is ready, and the consumer waits for the signal and then reads and processes the data.
Example:
```cpp
#include <iostream>
#include <atomic>
#include <thread>

std::atomic<int> data(0);
std::atomic<bool> ready(false);

void consumer() {
    while (!ready.load(std::memory_order_acquire)) {
        // spin until ready becomes true
    }
    std::cout << "Data: " << data.load(std::memory_order_relaxed) << std::endl;
}

void producer() {
    data.store(42, std::memory_order_relaxed);    // write the data
    ready.store(true, std::memory_order_release); // publish: set ready to true
}

int main() {
    std::thread t1(consumer);
    std::thread t2(producer);
    t1.join();
    t2.join();
    return 0;
}
```
Explanation:
- `ready.store(true, std::memory_order_release)`: the producer uses `memory_order_release` when writing `ready`, which means that once `ready` is set to `true`, all operations before it (such as the write to `data`) are visible to the consumer thread.
- `ready.load(std::memory_order_acquire)`: the consumer uses `memory_order_acquire` when reading `ready`, which guarantees that after it observes `true`, it can see all the modifications the producer made before its store to `ready`, including the value of `data`.
This pairing ensures that the producer's writes (such as `data.store(42)`) are visible to the consumer, so after observing `ready`, the consumer can safely read the updated `data`.
This concludes the overview of synchronization methods for multi-threaded concurrency scenarios in C++.