Introduction
In C++, thread-safety issues arise when two or more threads access shared data. Without proper synchronization, one thread may start reading the data before another thread has finished modifying it, leading to inconsistent data and unpredictable program behavior. To solve this problem, C++ provides several thread synchronization and mutual exclusion mechanisms.
1. Mutex
A mutex (mutual exclusion lock) is a synchronization mechanism used to prevent multiple threads from accessing a shared resource at the same time. In C++, you can create a mutex with the std::mutex class.
```cpp
#include <iostream>
#include <thread>
#include <mutex>

std::mutex mtx;      // Global mutex
int shared_data = 0; // Shared data

void thread_func() {
    for (int i = 0; i < 10000; ++i) {
        mtx.lock();    // Acquire ownership of the mutex
        ++shared_data; // Modify shared data
        mtx.unlock();  // Release ownership of the mutex
    }
}

int main() {
    std::thread t1(thread_func);
    std::thread t2(thread_func);
    t1.join();
    t2.join();
    std::cout << shared_data << std::endl; // Output 20000
    return 0;
}
```
In the above code, we create a global mutex mtx and shared data shared_data. We then call mtx.lock() and mtx.unlock() in thread_func to protect access to shared_data, ensuring that only one thread can modify shared_data at any time.
2. Lock
In addition to using mutexes directly, C++ also provides two lock wrappers, std::lock_guard and std::unique_lock, which automatically manage ownership of a mutex.
```cpp
#include <iostream>
#include <thread>
#include <mutex>

std::mutex mtx;      // Global mutex
int shared_data = 0; // Shared data

void thread_func() {
    for (int i = 0; i < 10000; ++i) {
        std::lock_guard<std::mutex> lock(mtx); // The lock acquires the mutex on construction
        ++shared_data;                         // Modify shared data
        // The lock automatically releases the mutex when it leaves scope
    }
}

int main() {
    std::thread t1(thread_func);
    std::thread t2(thread_func);
    t1.join();
    t2.join();
    std::cout << shared_data << std::endl; // Output 20000
    return 0;
}
```
In the above code, we use std::lock_guard to automatically manage ownership of the mutex. When a std::lock_guard object is created, it automatically acquires ownership of the mutex, and when the object leaves its scope, it automatically releases that ownership. This way, we do not need to call lock() and unlock() manually, which avoids deadlocks caused by forgetting to release a mutex.
3. Condition Variable
Condition variables are a synchronization mechanism used to coordinate threads based on changes to a shared condition. In C++, you can create a condition variable with the std::condition_variable class.
```cpp
#include <iostream>
#include <thread>
#include <mutex>
#include <condition_variable>

std::mutex mtx;             // Global mutex
std::condition_variable cv; // Global condition variable
bool ready = false;         // Shared condition

void print_id(int id) {
    std::unique_lock<std::mutex> lock(mtx); // Acquire ownership of the mutex
    while (!ready) {   // If the condition is not met
        cv.wait(lock); // Wait for a notification on the condition variable
    }
    // Once notified and the condition holds, continue executing
    std::cout << "thread " << id << '\n';
}

void go() {
    std::unique_lock<std::mutex> lock(mtx); // Acquire ownership of the mutex
    ready = true;    // Modify the shared condition
    cv.notify_all(); // Notify all waiting threads
}

int main() {
    std::thread threads[10];
    for (int i = 0; i < 10; ++i)
        threads[i] = std::thread(print_id, i);
    std::cout << "10 threads ready to race...\n";
    go(); // Start the race
    for (auto& th : threads)
        th.join();
    return 0;
}
```
In the above code, we create a global mutex mtx, a global condition variable cv, and a shared condition ready. In print_id, we call cv.wait(lock) to wait for a notification on the condition variable, and continue executing once the notification arrives and the condition is met. In go, we modify the shared condition and call cv.notify_all() to wake all waiting threads.
4. Atomic Operation
Atomic operations are special operations that can safely read and write data in a multithreaded environment without mutexes or locks. In C++, you can create atomic types with the std::atomic class template.
```cpp
#include <iostream>
#include <thread>
#include <atomic>

std::atomic<int> shared_data(0); // Shared data

void thread_func() {
    for (int i = 0; i < 10000; ++i) {
        ++shared_data; // Atomic operation
    }
}

int main() {
    std::thread t1(thread_func);
    std::thread t2(thread_func);
    t1.join();
    t2.join();
    std::cout << shared_data << std::endl; // Output 20000
    return 0;
}
```
In the above code, we create shared data shared_data of an atomic type. Then, in thread_func, we use ++shared_data to perform an atomic increment. We do not need a mutex or lock, yet it is still guaranteed that only one thread modifies shared_data at any time.
5. Comparison
| Strategy | Advantages | Shortcomings |
|---|---|---|
| Single global mutex | Simple | Can cause serious performance problems and reduce concurrency |
| Multiple mutexes | Improves concurrency | Increases program complexity; deadlocks must be avoided |
| Atomic operations | Improve concurrency and avoid mutex overhead | Increase program complexity; require understanding of atomic operations |
| Read-write lock | Improves concurrency, especially when reads outnumber writes | Increases program complexity; locks must be managed and deadlocks avoided |
Case example
Suppose we are developing an online chat server that needs to handle a large number of concurrent connections. Each connection has an associated user object, and the user object contains the user's status information, such as user name, online status, etc.
In this case, we can use the multiple-mutex strategy. We can divide the user objects into several groups, each with an associated mutex. When a thread needs to access a user object, it only locks the mutex of that object's group, not a single lock over all user objects. In this way, different threads can access user objects in different groups at the same time, improving concurrency.
At the same time, we can also use the read-write lock strategy. In most cases, threads only need to read a user's status information without modifying it. So we can use a read-write lock, allowing multiple threads to read a user object simultaneously while requiring an exclusive lock when modifying it.
In practice, we may need to use both strategies in combination to achieve the best results.
6. Going Further: Atomic Operations + Locks
Atomic operations and locks are two different thread synchronization mechanisms, which can be used alone or together, depending on your application scenario.
Atomic operations are a low-level synchronization mechanism that guarantees a read or write of a single memory location is indivisible: no other thread can observe the operation half-done. Atomic operations are often used to implement higher-level synchronization mechanisms such as locks and condition variables.
A lock is a higher-level synchronization mechanism that makes access to a section of code or to multiple memory locations mutually exclusive: at any time, only one thread can execute the lock-protected code or access the lock-protected memory.
If you use atomic operations together with locks, you need to make sure your code uses both mechanisms correctly. For example, if you use atomic operations inside a lock-protected section, the atomic operations must not violate the semantics of the lock: at any time, only one thread may execute the lock-protected code.
Here is an example of using atomic operations and locks:
```cpp
#include <iostream>
#include <thread>
#include <mutex>
#include <atomic>

std::mutex mtx;              // Global mutex
std::atomic<int> counter(0); // Atomic counter

void thread_func() {
    for (int i = 0; i < 10000; ++i) {
        std::lock_guard<std::mutex> lock(mtx); // Acquire ownership of the mutex
        ++counter; // Atomic operation
        // The lock automatically releases the mutex when it leaves scope
    }
}

int main() {
    std::thread t1(thread_func);
    std::thread t2(thread_func);
    t1.join();
    t2.join();
    std::cout << counter << std::endl; // Output 20000
    return 0;
}
```
In the above code, we use std::lock_guard to acquire ownership of the mutex, then use ++counter to perform an atomic increment. This ensures both that only one thread executes the lock-protected code at any time and that the increment of counter is atomic. (Note that in this particular example the lock is redundant for correctness, since the atomic increment alone is already thread-safe; the point is only that the two mechanisms can coexist.)
Overall, atomic operations and locks can be used together, but you need to make sure your code correctly understands and uses both synchronization mechanisms.
Summary
In C++, when two or more threads need to access shared data, thread synchronization and mutual exclusion mechanisms such as mutexes, locks, condition variables, and atomic operations can be used to ensure thread safety. Which mechanism to choose depends on the specific application scenario and requirements.
The above is a detailed introduction to and comparison of four methods of thread synchronization and mutual exclusion in C++. For more on C++ thread synchronization and mutual exclusion, see my other related articles!