Preface
There is a common scenario in iOS network programming: we need to run two requests in parallel and wait for both to finish before moving on to the next step. Here are some common ways to handle it, each of which is also easy to get wrong:
- DispatchGroup: Put multiple requests into a group via the GCD mechanism, pairing `enter()` and `leave()` calls, and process the combined results once the group reports that all of them have succeeded.
- OperationQueue: Instantiate an Operation object for each request, add these objects to an OperationQueue, and control the execution order through dependencies between them.
- Synchronous DispatchQueue: Avoid data races by funneling access through serial queues and NSLock, achieving safe synchronized access to shared state from multiple threads.
- Third-party libraries: Futures/Promises and reactive programming libraries provide higher-level concurrency abstractions.
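As a rough sketch of the DispatchGroup approach, with dispatched blocks standing in for real network requests (the names and placeholder work here are illustrative, not from the original article):

```swift
import Foundation

let group = DispatchGroup()
let lock = NSLock()          // protects `results` across threads
var results: [String] = []

for name in ["request 1", "request 2"] {
    group.enter()            // register a unit of work with the group
    DispatchQueue.global().async {
        // ... perform the request, then record its result ...
        lock.lock()
        results.append("\(name) finished")
        lock.unlock()
        group.leave()        // mark this unit of work as done
    }
}

// In a real app you would use group.notify(queue:) instead of blocking;
// wait() just keeps this sketch linear.
group.wait()
print(results.count)         // 2
```

The pitfall the text alludes to is visible here: every `enter()` must be balanced by exactly one `leave()` on every code path, or the group never completes (or completes too early).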
Over many years of practice, I have come to realize that each of the above approaches has flaws. In addition, it is hard to use these libraries entirely correctly.
Challenges in concurrent programming
Thinking concurrently is hard: most of the time, we read code the way we read a story, from the first line to the last. If the logic of the code is not linear, it becomes much harder to understand. Debugging and tracing a program through multiple classes and frameworks is already a headache in a single-threaded environment; in a multi-threaded environment it can be nearly impossible.
Data races: in a multi-threaded environment, concurrent reads are thread-safe, but writes are not. If multiple threads access the same memory at the same time and at least one of them writes to it, a data race occurs, leading to potentially corrupted data.
Understanding the dynamic behavior of a multi-threaded program is not easy, and pinpointing the threads responsible for a data race is even harder. Although we can eliminate data races with mutual exclusion, keeping the locking correct in the face of future modifications is very difficult.
Difficult to test: many concurrency problems never show up during development. Although Xcode and LLVM provide tools such as Thread Sanitizer to detect them, debugging and tracing these problems is still very difficult, because in a concurrent environment the application is affected not only by its own code but also by the system's scheduling.
Simple way to deal with concurrent situations
Given the complexity of concurrent programming, how should we handle multiple requests in parallel?
The easiest way is to avoid writing parallel code altogether and instead chain the requests together linearly:
```swift
let session = URLSession.shared
session.dataTask(with: request1) { data, response, error in
    // check for errors
    // parse the response data

    session.dataTask(with: request2) { data, response, error in
        // check for errors
        // parse the response data

        // if everything succeeded...
        completionHandler(result1, result2)
    }.resume()
}.resume()
```
To keep the code concise, many details are omitted here, such as error handling and request cancellation. But this code has a real problem: it serializes unrelated requests. For example, if the server supports HTTP/2, we give up HTTP/2's ability to multiplex multiple requests over the same connection, and linear processing also means we don't take advantage of the processor's parallelism.
Misunderstanding about URLSession
I had been writing nested requests like the above precisely to avoid possible data races and thread-safety issues. That is, I assumed that if the code were changed to make the requests concurrently, no longer nested, the two completion handlers might write to the same memory at the same time, and data races are notoriously hard to reproduce and debug.
A feasible solution to that problem is a lock: allow only one thread at a time to write to the shared memory. Using a lock is conceptually simple: acquire the lock, execute the code, release the lock. Of course, there are some tricks to using locks entirely correctly.
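The acquire/execute/release cycle can be sketched with NSLock (the `Counter` type here is my own illustration, not from the article):

```swift
import Foundation

final class Counter {
    private let lock = NSLock()
    private var value = 0

    func increment() {
        lock.lock()              // acquire the lock
        value += 1               // execute the critical section
        lock.unlock()            // release the lock
    }

    var current: Int {
        lock.lock()
        defer { lock.unlock() }  // defer guarantees release on every exit path
        return value
    }
}

let counter = Counter()
// 10_000 concurrent increments; without the lock, some would be lost.
DispatchQueue.concurrentPerform(iterations: 10_000) { _ in
    counter.increment()
}
print(counter.current)           // 10000
```

One of the "tricks" alluded to above is visible in `current`: using `defer` so the lock is released no matter how the scope is exited.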
But according to the URLSession documentation, there is a simpler answer for concurrent requests:
init(configuration: URLSessionConfiguration, delegate: URLSessionDelegate?, delegateQueue queue: OperationQueue?)
[…]
queue : An operation queue for scheduling the delegate calls and completion handlers. The queue should be a serial queue, in order to ensure the correct ordering of callbacks. If nil, the session creates a serial operation queue for performing all delegate method calls and completion handler calls.
This means that the callbacks of every URLSession instance, including the shared singleton, will never execute concurrently, unless you explicitly pass a concurrent queue as the queue parameter.
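We can observe this guarantee directly: when delegateQueue is nil, the queue the session creates for its callbacks is serial (this check is my own illustration, not from the article):

```swift
import Foundation
#if canImport(FoundationNetworking)
import FoundationNetworking    // URLSession lives here on Linux
#endif

// Passing nil for delegateQueue lets the session create its own queue...
let session = URLSession(configuration: .default,
                         delegate: nil,
                         delegateQueue: nil)

// ...and that queue is serial: at most one callback runs at a time.
print(session.delegateQueue.maxConcurrentOperationCount)  // 1
```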
Extending URLSession to support concurrency
Based on this new understanding of URLSession, let's extend it to support thread-safe concurrent requests (complete code address).
```swift
enum URLResult {
    case response(Data, URLResponse)
    case error(Error, Data?, URLResponse?)
}

extension URLSession {
    @discardableResult
    func get(_ url: URL, completionHandler: @escaping (URLResult) -> Void) -> URLSessionDataTask
}

// Example
let zen = URL(string: "/zen")!

URLSession.shared.get(zen) { result in
    // process the result
}
```
First, we use a simple URLResult enum to model the different results we can get from a URLSessionDataTask callback. This enumeration helps us simplify the handling of multiple concurrent request results. To keep the article short, I haven't included the full implementation of `get(_:completionHandler:)` here: it issues a GET request for the given URL, calls `resume()` automatically, and finally wraps the outcome in a URLResult.
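Since the article omits the implementation of `get(_:completionHandler:)`, here is one plausible sketch of it (the URLResult enum is repeated so the snippet stands alone; the real implementation in the linked code may differ):

```swift
import Foundation
#if canImport(FoundationNetworking)
import FoundationNetworking
#endif

enum URLResult {
    case response(Data, URLResponse)
    case error(Error, Data?, URLResponse?)
}

extension URLSession {
    @discardableResult
    func get(_ url: URL, completionHandler: @escaping (URLResult) -> Void) -> URLSessionDataTask {
        var request = URLRequest(url: url)
        request.httpMethod = "GET"          // explicit, although GET is the default
        let task = dataTask(with: request) { data, response, error in
            if let data = data, let response = response {
                completionHandler(.response(data, response))
            } else {
                // URLSession guarantees error is non-nil when data/response are missing
                completionHandler(.error(error!, data, response))
            }
        }
        task.resume()                       // the caller doesn't need to resume manually
        return task
    }
}
```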
```swift
@discardableResult
func get(_ left: URL, _ right: URL,
         completionHandler: @escaping (URLResult, URLResult) -> Void)
    -> (URLSessionDataTask, URLSessionDataTask) {
}
```
This API accepts two URL parameters and returns two URLSessionDataTask instances. The following is the first part of the function's implementation:
```swift
precondition(delegateQueue.maxConcurrentOperationCount == 1,
             "URLSession's delegateQueue must be configured with a maxConcurrentOperationCount of 1.")
```
Because a concurrent OperationQueue can still be passed in when instantiating a URLSession, we need the precondition above to rule that case out.
```swift
var results: (left: URLResult?, right: URLResult?) = (nil, nil)

func continuation() {
    guard case let (left?, right?) = results else { return }
    completionHandler(left, right)
}
```
Next, this code is added to the implementation. It defines a tuple variable holding the results to be returned, plus a helper function, nested inside our method, that checks whether both requests have produced a result.
```swift
let left = get(left) { result in
    results.left = result
    continuation()
}

let right = get(right) { result in
    results.right = result
    continuation()
}

return (left, right)
```
Finally, this code completes the implementation: we issue the two requests separately and invoke the completion handler once, after both have finished. It is worth noting that `continuation()` is called twice, which is how we determine whether all requests are complete:
- On the first call, one of the two results is still nil, so the guard fails and the callback is not executed.
- On the second call, both results are present and the callback fires.
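The twice-called continuation pattern can be demonstrated standalone, with plain strings standing in for URLResult values (this demo is my own, not from the article):

```swift
var results: (left: String?, right: String?) = (nil, nil)
var delivered: [(String, String)] = []

func continuation() {
    // Fires only when both optionals are non-nil.
    guard case let (left?, right?) = results else { return }
    delivered.append((left, right))
}

results.left = "first response"
continuation()            // right is still nil: guard fails, nothing happens

results.right = "second response"
continuation()            // both present: the "callback" runs exactly once

print(delivered.count)    // 1
```

Because URLSession's delegate queue is serial, the two assignments to `results` can never interleave, which is what makes this unlocked shared tuple safe in the real implementation.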
Next we can test this code through a simple request:
```swift
extension URLResult {
    var string: String? {
        guard case let .response(data, _) = self,
            let string = String(data: data, encoding: .utf8)
            else { return nil }
        return string
    }
}

URLSession.shared.get(zen, zen) { left, right in
    guard case let (quote1?, quote2?) = (left.string, right.string)
        else { return }

    print(quote1, quote2, separator: "\n")
    // Approachable is better than simple.
    // Practicality beats purity.
}
```
Parallel Paradox
I've found that the simplest and most elegant way to solve parallel problems is to write as little concurrent code as possible; our processors are very good at executing linear code. Paradoxically, though, splitting large blocks of code or large tasks into many small ones that can run in parallel often makes the code more readable and maintainable.
Summary
That's all for this article. I hope it is of some value for your study or work. If you have any questions, feel free to leave a message. Thank you for your support.
Author: Adam Sharp, time: 2017/9/21
Translation: BigNerdCoding. If there are any errors, please point them out. Original link