Introduction
In iOS development, time-consuming operations such as network requests and image downloads are unavoidable. Running them on the main thread causes the UI to lag, so we move them to a background thread and return to the main thread to display the result once the work finishes.
Multithreading runs through the entire development process. iOS offers several multithreading APIs: NSThread, GCD, and NSOperation, of which the most commonly used is GCD.
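For example, the typical pattern looks like this (a minimal sketch; `imageURL` and `imageView` are hypothetical placeholder names, not from any particular project):

```objc
// Download an image off the main thread, then update the UI on the main thread.
dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0), ^{
    NSData *data = [NSData dataWithContentsOfURL:imageURL]; // slow, blocking download
    UIImage *image = [UIImage imageWithData:data];
    dispatch_async(dispatch_get_main_queue(), ^{
        imageView.image = image; // UIKit must only be touched on the main thread
    });
});
```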
Processes and threads
Before we dig into GCD, let's first look at processes and threads, and at how they relate and differ.
1. Process definition
- A process is an application that is running on the system.
- Each process is independent; each runs in its own dedicated and protected memory space.
- Inter-process communication generally uses URL schemes, UIPasteboard, the Keychain, UIActivityViewController, and so on.
2. Definition of threads
- A thread is the basic execution unit of a process; all of a process's tasks are executed on threads.
- For a process to execute any task it must have threads; a process contains at least one thread.
- One thread is created by default when a program launches: the main thread.
- Communication between threads generally uses the performSelector family of methods (a sketch follows this list).
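A minimal sketch of inter-thread communication with performSelector (the method and property names `downloadImage`, `showImage:`, `imageURL`, and `imageView` are hypothetical):

```objc
// Runs on a background thread; pushes its result to the main thread.
- (void)downloadImage {
    NSData *data = [NSData dataWithContentsOfURL:self.imageURL];
    UIImage *image = [UIImage imageWithData:data];
    // waitUntilDone:NO returns immediately; showImage: runs on the main thread.
    [self performSelectorOnMainThread:@selector(showImage:)
                           withObject:image
                        waitUntilDone:NO];
}

- (void)showImage:(UIImage *)image {
    self.imageView.image = image; // UI work happens on the main thread
}
```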
3. The relationship between processes and threads
- 1. Threads are the execution units of a process; all of a process's tasks are executed on threads.
- 2. A thread is the smallest unit of CPU scheduling and dispatch (the process is the unit of resource allocation).
- 3. An app on the phone corresponds to one process; a process can contain multiple threads, but must contain at least one.
- 4. Threads within the same process share the process's resources.
4. Multi-threading
At any instant, a single CPU core can process only one thread, so only one thread is actually executing. "Concurrent" multi-threaded execution is really the CPU scheduling (switching) rapidly between multiple threads; if the switching is fast enough, it creates the illusion that the threads execute simultaneously.
If there are very many threads, the CPU has to schedule among all of them, which consumes a large amount of CPU resources and reduces how often each individual thread gets scheduled (so each thread executes less efficiently).
Advantages of multithreading:
- 1. Can appropriately improve the program's execution efficiency.
- 2. Can appropriately improve resource utilization (CPU and memory).
- 3. Threads are destroyed automatically after the tasks on them finish executing.
Disadvantages of multithreading:
- 1. Starting a thread costs memory (by default the main thread's stack occupies 1 MB and a child thread's 512 KB); opening a large number of threads occupies a large amount of memory and degrades performance. Creating a thread also takes time, roughly 90 microseconds.
- 2. The more threads there are, the less often each one is scheduled, the lower each thread's execution efficiency, and the greater the CPU's scheduling overhead.
- 3. Program design becomes more complex, e.g. communication between threads and sharing data across multiple threads.
The life cycle of a thread is: New → Ready → Running → Blocked → Dead.
- New: the thread object is instantiated.
- Ready: a start message is sent to the thread object, which is added to the schedulable thread pool to wait for CPU scheduling.
- Running: the CPU schedules and executes threads from the schedulable thread pool. Until a thread finishes executing, its state may switch back and forth between Ready and Running; these transitions are handled by the CPU and programmers cannot intervene.
- Blocked: when a predetermined condition is met, a thread can be blocked with sleep or a lock: sleepForTimeInterval: (sleep for a given duration), sleepUntilDate: (sleep until a given date), or @synchronized(self) (a mutex); see the sketch after this list.
- Dead: normal death, when the thread finishes executing its task; or abnormal death, when the thread aborts itself internally after some condition is met, or the thread object is terminated from another thread (e.g. the main thread).
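A small sketch of the blocking mechanisms mentioned above (`ticketCount` is a hypothetical shared counter, used only for illustration):

```objc
// Sleeping the current thread (both calls block until the time elapses).
[NSThread sleepForTimeInterval:2.0];
[NSThread sleepUntilDate:[NSDate dateWithTimeIntervalSinceNow:2.0]];

// A mutex: a thread that reaches this block while another thread holds the
// lock on `self` is blocked until the lock is released.
@synchronized (self) {
    self.ticketCount -= 1; // hypothetical shared state
}
```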
5. Time slice
Time slice: the CPU switches rapidly among multiple tasks; the interval it stays on each task before switching is the time slice.
The number of tasks a device can truly execute in parallel is limited. Use [NSProcessInfo processInfo].activeProcessorCount to check the number of active cores on the current device, which is the maximum number of threads that can execute simultaneously. For example, a maximum concurrency of 8 means an 8-core CPU. If 10 threads are started at the same time, the CPU lets the threads take turns executing for short periods through time-slice rotation.
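For example, you can query the active core count like this:

```objc
// Number of cores currently available, i.e. the upper bound on threads that
// can truly run in parallel on this device.
NSUInteger cores = [NSProcessInfo processInfo].activeProcessorCount;
NSLog(@"active processor count: %lu", (unsigned long)cores);
```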
6. Thread pool
GCD maintains a thread pool internally in order to reuse threads: when a thread needs to be started, GCD first looks in the pool for an already-created idle thread, which saves both memory and the time it takes to create a thread.
When a task arrives, the pool roughly decides as follows (a sketch demonstrating the resulting thread reuse follows this list):
- Are all core threads busy executing tasks? If not, create a new worker thread to execute the task; if so, continue to the next check.
- Is the pool's work queue saturated? If not, store the task in the work queue; if so, continue to the next check.
- Are all threads in the pool currently executing? If not, schedule an idle thread to execute the task; if so, hand the task over to the saturation policy.
The GCD thread pool caches at most 64 threads, meaning at most 64 threads can exist and execute at the same time; how many actually run in parallel is determined by the CPU.
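Thread reuse is easy to observe. The sketch below (an illustrative experiment, not libdispatch source) dispatches 20 blocks to the global concurrent queue and logs the thread each block runs on; the same NSThread objects recur, because the pool hands the blocks to a small set of reused worker threads.

```objc
dispatch_queue_t global = dispatch_get_global_queue(QOS_CLASS_DEFAULT, 0);
dispatch_group_t group = dispatch_group_create();
for (int i = 0; i < 20; i++) {
    dispatch_group_async(group, global, ^{
        // Several of the 20 blocks report the same thread object.
        NSLog(@"task %d on %@", i, [NSThread currentThread]);
    });
}
dispatch_group_wait(group, DISPATCH_TIME_FOREVER); // wait for all blocks to finish
```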
GCD
GCD stands for Grand Central Dispatch. It is a pure C API that provides many powerful functions.
Advantages of GCD:
- GCD is Apple's solution for multi-core parallel computing
- GCD will automatically utilize more CPU cores (such as dual-core and quad-core)
- GCD will automatically manage the life cycle of threads (create threads, schedule tasks, destroy threads)
- The programmer only needs to tell GCD what tasks to perform and does not need to write any thread-management code. The core of GCD: add tasks to a queue and specify the function used to execute them.
1. Tasks
A task is an operation to execute: the piece of code run on a thread, which in GCD is placed in a block. There are two ways to execute a task: synchronous execution (sync) and asynchronous execution (async).
- Synchronous (sync): the task is added to the specified queue, and the caller waits until that task has finished before continuing, blocking the current thread. Sync can only execute tasks on the current thread (which is not necessarily the main thread) and does not have the ability to start a new thread.
- Asynchronous (async): the call returns immediately and the caller continues without waiting, so the current thread is not blocked. Async has the ability to start a new thread to execute the task (though it does not necessarily do so). Unless the task is added to the main queue, async executes it on a child thread. A minimal sketch contrasting the two follows this list.
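A minimal sketch contrasting sync and async (the log order noted in the comments assumes this runs on the main thread):

```objc
dispatch_queue_t queue = dispatch_get_global_queue(QOS_CLASS_DEFAULT, 0);

// async: returns immediately and may run the block on another thread.
dispatch_async(queue, ^{
    NSLog(@"async task on %@", [NSThread currentThread]);
});
NSLog(@"after async"); // typically logs before the async task

// sync: blocks the calling thread until the block finishes; no new thread is
// started, so the block runs on the current thread here.
dispatch_sync(queue, ^{
    NSLog(@"sync task on %@", [NSThread currentThread]);
});
NSLog(@"after sync"); // always logs after the sync task
```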
2. Queue
Dispatch Queue: the queue here is the waiting queue used to store tasks awaiting execution. A queue is a special linear list that follows the FIFO (first in, first out) principle: new tasks are always inserted at the tail of the queue, and tasks are always read from the head; each time a task is read, it is removed from the queue. A queue's job is to store tasks; it has nothing to do with threads.
There are two types of queues in GCD: serial queues and concurrent queues. Both follow the FIFO principle; the main differences between them are the execution order and the number of threads they open.
- Serial Dispatch Queue: only one task can execute at a time; the next task starts only after the current one finishes. (Only one thread is opened, and tasks execute on it one after another.) The main queue is a serial queue on the main thread, created for us automatically by the system.
- Concurrent Dispatch Queue: multiple tasks are allowed to execute concurrently at the same time. (Multiple threads can be opened, and tasks execute simultaneously.) The concurrency of a concurrent queue takes effect only with the asynchronous (dispatch_async) function. A quick demonstration follows.
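A quick demonstration of the difference (the queue labels are arbitrary examples):

```objc
dispatch_queue_t serial = dispatch_queue_create("com.example.serial", DISPATCH_QUEUE_SERIAL);
dispatch_queue_t concurrent = dispatch_queue_create("com.example.concurrent", DISPATCH_QUEUE_CONCURRENT);

for (int i = 0; i < 3; i++) {
    // Serial + async: one worker thread, tasks finish strictly in order 0, 1, 2.
    dispatch_async(serial, ^{ NSLog(@"serial %d", i); });
}
for (int i = 0; i < 3; i++) {
    // Concurrent + async: multiple threads may run the tasks; order not guaranteed.
    dispatch_async(concurrent, ^{ NSLog(@"concurrent %d", i); });
}
```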
3. Deadlock
If a task running on a serial queue contains, inside its own block, a task dispatched synchronously to the same queue, the result is a deadlock: dispatch_sync demands that the inner task execute immediately, but the queue obeys the FIFO (first in, first out) principle, so the serial queue must finish the outer task before starting the inner one, and each ends up waiting on the other forever. A minimal reproduction:
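```objc
dispatch_queue_t serial = dispatch_queue_create("com.example.serial", DISPATCH_QUEUE_SERIAL);
dispatch_async(serial, ^{
    NSLog(@"outer task started");
    // dispatch_sync cannot return until the inner block runs, but the serial
    // queue cannot start the inner block until the outer block returns: deadlock.
    dispatch_sync(serial, ^{
        NSLog(@"inner task"); // never executes
    });
    NSLog(@"never reached");
});
```

(The queue label above is an arbitrary example. The same deadlock occurs when dispatch_sync targets the main queue from the main thread.)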
Next, let's look at the source code to see what GCD's serial and concurrent queues actually do and how they differ.
```objc
dispatch_queue_t main = dispatch_get_main_queue();                                    // Main queue
dispatch_queue_t global = dispatch_get_global_queue(0, 0);                            // Global queue
dispatch_queue_t serial = dispatch_queue_create("WT", DISPATCH_QUEUE_SERIAL);         // Serial queue
dispatch_queue_t concurrent = dispatch_queue_create("WT", DISPATCH_QUEUE_CONCURRENT); // Concurrent queue
```

```c
// Main queue source code
dispatch_queue_main_t
dispatch_get_main_queue(void)
{
    return DISPATCH_GLOBAL_OBJECT(dispatch_queue_main_t, _dispatch_main_q);
}

struct dispatch_queue_static_s _dispatch_main_q = {
    DISPATCH_GLOBAL_OBJECT_HEADER(queue_main),
#if !DISPATCH_USE_RESOLVERS
    .do_targetq = _dispatch_get_default_queue(true),
#endif
    .dq_state = DISPATCH_QUEUE_STATE_INIT_VALUE(1) |
            DISPATCH_QUEUE_ROLE_BASE_ANON,
    .dq_label = "com.apple.main-thread",
    .dq_atomic_flags = DQF_THREAD_BOUND | DQF_WIDTH(1),
    .dq_serialnum = 1,
};

// Global queue source code
dispatch_queue_global_t
dispatch_get_global_queue(intptr_t priority, uintptr_t flags)
{
    dispatch_qos_t qos = _dispatch_qos_from_queue_priority(priority);
    // ...omitted...
    return _dispatch_get_root_queue(qos, flags & DISPATCH_QUEUE_OVERCOMMIT);
}

#define DISPATCH_QUEUE_WIDTH_FULL 0x1000ull
#define DISPATCH_QUEUE_WIDTH_POOL (DISPATCH_QUEUE_WIDTH_FULL - 1)

struct dispatch_queue_global_s _dispatch_root_queues[] = {
#define _DISPATCH_ROOT_QUEUE_IDX(n, flags) \
    ((flags & DISPATCH_PRIORITY_FLAG_OVERCOMMIT) ? \
        DISPATCH_ROOT_QUEUE_IDX_##n##_QOS_OVERCOMMIT : \
        DISPATCH_ROOT_QUEUE_IDX_##n##_QOS)
#define _DISPATCH_ROOT_QUEUE_ENTRY(n, flags, ...) \
    [_DISPATCH_ROOT_QUEUE_IDX(n, flags)] = { \
        DISPATCH_GLOBAL_OBJECT_HEADER(queue_global), \
        .dq_state = DISPATCH_ROOT_QUEUE_STATE_INIT_VALUE, \
        .do_ctxt = _dispatch_root_queue_ctxt(_DISPATCH_ROOT_QUEUE_IDX(n, flags)), \
        .dq_atomic_flags = DQF_WIDTH(DISPATCH_QUEUE_WIDTH_POOL), \
        .dq_priority = flags | ((flags & DISPATCH_PRIORITY_FLAG_FALLBACK) ? \
                _dispatch_priority_make_fallback(DISPATCH_QOS_##n) : \
                _dispatch_priority_make(DISPATCH_QOS_##n, 0)), \
        __VA_ARGS__ \
    }
    _DISPATCH_ROOT_QUEUE_ENTRY(MAINTENANCE, 0,
        .dq_label = "com.apple.root.maintenance-qos",
        .dq_serialnum = 4,
    ),
    // ...omitted entries...
    _DISPATCH_ROOT_QUEUE_ENTRY(USER_INTERACTIVE, DISPATCH_PRIORITY_FLAG_OVERCOMMIT,
        .dq_label = "com.apple.root.user-interactive-qos.overcommit",
        .dq_serialnum = 15,
    ),
};

// Serial queue attribute
#define DISPATCH_QUEUE_SERIAL NULL
// Concurrent queue attribute
#define DISPATCH_QUEUE_CONCURRENT \
        DISPATCH_GLOBAL_OBJECT(dispatch_queue_attr_t, _dispatch_queue_attr_concurrent)

dispatch_queue_t
dispatch_queue_create(const char *label, dispatch_queue_attr_t attr)
{
    return _dispatch_lane_create_with_target(label, attr,
            DISPATCH_TARGET_QUEUE_DEFAULT, true);
}

#if OS_OBJECT_USE_OBJC
#define DISPATCH_GLOBAL_OBJECT(type, object) ((OS_OBJECT_BRIDGE type)&(object))
#endif
```
```c
#define DISPATCH_QUEUE_WIDTH_MAX (DISPATCH_QUEUE_WIDTH_FULL - 2)

static dispatch_queue_t
_dispatch_lane_create_with_target(const char *label, dispatch_queue_attr_t dqa,
        dispatch_queue_t tq, bool legacy)
{
    // For a serial queue, dqai is {}
    dispatch_queue_attr_info_t dqai = _dispatch_queue_attr_to_info(dqa);

    // ...omitted: parameters are normalized according to dqai...

    dispatch_lane_t dq = _dispatch_object_alloc(vtable,
            sizeof(struct dispatch_lane_s));
    // Initialize the queue: width is DISPATCH_QUEUE_WIDTH_MAX for a concurrent
    // queue and 1 for a serial queue
    _dispatch_queue_init(dq, dqf,
            dqai.dqai_concurrent ? DISPATCH_QUEUE_WIDTH_MAX : 1,
            DISPATCH_QUEUE_ROLE_INNER |
            (dqai.dqai_inactive ? DISPATCH_QUEUE_INACTIVE : 0));

    // ...omitted...

    return _dispatch_trace_queue_create(dq)._dq;
}

dispatch_queue_attr_info_t
_dispatch_queue_attr_to_info(dispatch_queue_attr_t dqa)
{
    dispatch_queue_attr_info_t dqai = { };

    // A serial queue (attr == NULL) returns {} directly
    if (!dqa) return dqai;

#if DISPATCH_VARIANT_STATIC
    if (dqa == &_dispatch_queue_attr_concurrent) {
        // Concurrent queue
        dqai.dqai_concurrent = true;
        return dqai;
    }
#endif

    // ...a series of operations...

    return dqai;
}

static inline dispatch_queue_class_t
_dispatch_queue_init(dispatch_queue_class_t dqu, dispatch_queue_flags_t dqf,
        uint16_t width, uint64_t initial_state_bits)
{
    uint64_t dq_state = DISPATCH_QUEUE_STATE_INIT_VALUE(width);
    dispatch_queue_t dq = dqu._dq;

    dispatch_assert((initial_state_bits & ~(DISPATCH_QUEUE_ROLE_MASK |
            DISPATCH_QUEUE_INACTIVE)) == 0);

    if (initial_state_bits & DISPATCH_QUEUE_INACTIVE) {
        dq->do_ref_cnt += 2; // rdar://8181908 see _dispatch_lane_resume
        if (dx_metatype(dq) == _DISPATCH_SOURCE_TYPE) {
            dq->do_ref_cnt++; // released when DSF_DELETED is set
        }
    }

    dq_state |= initial_state_bits;
    dq->do_next = DISPATCH_OBJECT_LISTLESS;
    dqf |= DQF_WIDTH(width); // Serial queue: DQF_WIDTH(1), same as the main queue
    os_atomic_store2o(dq, dq_atomic_flags, dqf, relaxed);
    dq->dq_state = dq_state;
    dq->dq_serialnum =
            os_atomic_inc_orig(&_dispatch_queue_serial_numbers, relaxed);
    return dqu;
}
```
We can see that at initialization the main queue and a created serial queue get DQF_WIDTH(1), the global concurrent queues get DQF_WIDTH(DISPATCH_QUEUE_WIDTH_POOL) (0x1000 - 1 = 0xFFF), and a created concurrent queue gets DQF_WIDTH(DISPATCH_QUEUE_WIDTH_MAX) (0x1000 - 2 = 0xFFE).
The main queue's serial number is 1, the global (root) queues are numbered 4 through 15, and the other numbers can be found in the Dispatch source files.
Summary
A serial queue is like a single-lane road and a concurrent queue like a multi-lane highway. Both are FIFO structures, but a serial queue hands its tasks to a single thread, which executes them in the order they were enqueued, while a concurrent queue can hand its tasks to multiple threads waiting to execute them; the order in which tasks complete then depends on how the threads are scheduled and how complex each task is.
The above is a detailed, example-driven exploration of multi-threaded GCD queues in iOS development. For more on iOS multithreading and GCD, see my other related articles!