This article describes how to implement multitasking between processes in Python using a queue. It is shared here for your reference; the details are as follows:
1. Inter-process multitasking through a queue
import multiprocessing


def download_data(q):
    """Download data"""
    # Simulate downloading data from the web
    data = [11, 22, 33, 44]
    # Write the data into the queue
    for temp in data:
        q.put(temp)
    print("---- data download complete and deposited into queue ----")


def analysis_data(q):
    """Data processing"""
    waitting_analysis_data = list()
    # Get data from the queue
    while True:
        data = q.get()
        waitting_analysis_data.append(data)
        if q.empty():
            break
    print(waitting_analysis_data)


def main():
    # 1. Create a queue
    q = multiprocessing.Queue()
    # 2. Create two processes, passing the queue to both as an argument
    q1 = multiprocessing.Process(target=download_data, args=(q,))
    q2 = multiprocessing.Process(target=analysis_data, args=(q,))
    q1.start()
    q2.start()


if __name__ == '__main__':
    main()
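One caveat about the example above: stopping the consumer when q.empty() returns True is racy, because the queue can momentarily be empty while the producer is still writing. A common alternative, shown here as a minimal sketch rather than as part of the original example, is to have the producer send a sentinel value such as None when it is finished:

import multiprocessing


def download_data(q):
    for temp in [11, 22, 33, 44]:
        q.put(temp)
    q.put(None)  # sentinel: tells the consumer there is no more data


def analysis_data(q):
    results = []
    while True:
        data = q.get()    # blocks until an item is available
        if data is None:  # sentinel received, producer is finished
            break
        results.append(data)
    print(results)


def main():
    q = multiprocessing.Queue()
    p1 = multiprocessing.Process(target=download_data, args=(q,))
    p2 = multiprocessing.Process(target=analysis_data, args=(q,))
    p1.start()
    p2.start()
    p1.join()
    p2.join()


if __name__ == '__main__':
    main()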
2. The process pool (Pool)
In a real program, the workload varies: at busy times there may be thousands of tasks to execute, while at idle times there may be only a few. Does that mean we should create thousands of processes whenever there are thousands of tasks? No. First, creating and destroying processes both take time. Second, even if thousands of processes are started, the operating system cannot run them all at once, which hurts the program's efficiency. So we cannot simply open and close a process for every task without limit. What do we do instead?
This is where the process pool comes in: define a pool and put a fixed number of processes in it. When a task arrives, a process is taken from the pool to handle it; when the task finishes, the process is not destroyed but returned to the pool to wait for the next task. If there are many tasks and all the processes in the pool are busy, new tasks must wait until a process finishes its previous task and becomes free. In other words, the number of processes in the pool is fixed, so at most that fixed number of processes run at the same time. This keeps scheduling simple for the operating system, saves the cost of repeatedly creating and destroying processes, and still achieves a degree of concurrency.
Example:
from multiprocessing import Pool
import os, time, random


def worker(msg):
    t_start = time.time()
    print("Process %s started with process number %d." % (msg, os.getpid()))
    # random.random() randomly generates a floating point number between 0 and 1
    time.sleep(random.random() * 2)
    t_stop = time.time()
    print("Process", msg, "execution complete, time %0.2f." % (t_stop - t_start))


def main():
    # Define a process pool with a maximum of 3 processes
    po = Pool(3)
    for i in range(10):
        # apply_async(target to call, (tuple of arguments passed to target,))
        # Each loop iteration hands the task to a free child process in the pool
        po.apply_async(worker, (i,))
    print("----start----")
    # Close the process pool; po will not accept new requests after closing
    po.close()
    # Wait for all child processes in po to finish executing; must come after close()
    po.join()
    print("----end----")


if __name__ == '__main__':
    main()
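Note that apply_async also returns an AsyncResult object, so if the worker produces a value you can collect it after the pool has been joined. Here is a minimal sketch of that pattern; the squaring worker is just an illustration, not part of the original article:

from multiprocessing import Pool


def square(x):
    return x * x


def main():
    po = Pool(3)
    # Submit the tasks and keep the AsyncResult handles
    results = [po.apply_async(square, (i,)) for i in range(10)]
    po.close()
    po.join()
    # .get() returns the worker's return value (it would also re-raise
    # any exception the worker raised)
    print([r.get() for r in results])


if __name__ == '__main__':
    main()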
I hope this article is helpful to you in your Python programming.