When it comes to controllable multi-threading in shell, most of the solutions shared online are based on pipeline (FIFO) control. Zhang Ge's blog has tried that approach before and shared it in "Shell+Curl website health status check script, capturing the missing sites of China Blog Alliance"; interested readers can take a look.
Sharing an entry-level controllable multi-threaded shell script solution
Below, Zhang Ge's blog shares another entry-level controllable multi-threaded shell script solution that is easier to understand: split the task into pieces and tackle each piece separately.
Let me first describe the scenario:
One day at Tencent I was given a task: from a Linux server, ping several thousand IPs and record only the ones that respond. Testing a single IP is trivial, and a few thousand is still manageable one by one, but what if there were far more?
Given how simple this case is, I gave up the pipeline scheme I had used before and instead took the task-splitting, divide-and-conquer route.
Simple idea:
Following the "strategy" of task splitting, I first save these thousands of IPs into an iplist file, then write a split function that divides the file into multiple temporary IP lists, and finally loop over these temporary files, sending each loop to the background, which achieves multi-threading in disguise.
Specific code:
#!/bin/sh
# Text-splitting function: split file $1 into $2 parts
SplitFile() {
    linenum=`wc -l $1 | awk '{print $1}'`
    if [ $linenum -le $2 ]
    then
        echo "The lines of this file is less than $2, are you kidding me..."
        exit
    fi
    Split=`expr $linenum / $2`
    Num1=1
    FileNum=1
    test -d SplitFile || mkdir -p SplitFile
    rm -rf SplitFile/*
    while [ $Num1 -lt $linenum ]
    do
        Num2=`expr $Num1 + $Split`
        sed -n "${Num1},${Num2}p" $1 > SplitFile/$1-$FileNum
        Num1=`expr $Num2 + 1`
        FileNum=`expr $FileNum + 1`
    done
}

# Define some variables
SPLIT_NUM=${1:-10}   # Parameter 1: how many parts to split into, i.e. how many "threads" to start; default 10
FILE=${2:-iplist}    # Parameter 2: the file to split; default iplist

# Split the file
SplitFile $FILE $SPLIT_NUM

# Loop over the temporary IP files
for iplist in $(ls ./SplitFile/*)
do
    # Ping every IP in this temporary file (the whole while loop is sent to the background)
    cat $iplist | while read ip
    do
        # Append reachable IPs to a log file (the log name ping_ok.log is illustrative; the original name was lost)
        ping -c 4 -w 4 $ip >/dev/null && echo $ip | tee -ai ping_ok.log
    done &   # The trailing & puts the nested while loop in the background
done
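One caveat with the script above: because every while loop is backgrounded, the script itself returns immediately while the pings are still running. If you want it to block until every background loop has finished (for example, before post-processing the log), a common refinement, not part of the original script, is to append a wait after the for loop:

# Appended after the final "done" of the for loop (optional refinement, not in the original):
wait    # block until every backgrounded while loop has exited
echo "All ping tasks finished"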
After saving the code as a script and executing it with 100 as the first argument (the number of parts, and therefore "threads"; the second argument defaults to iplist), the process is as follows:
First, the iplist is cut into 100 parts, which are stored in the SplitFile directory (see the illustrative listing after this walkthrough).
Then a for loop walks through these split files, and for each one a while loop pings the IPs it contains in the background.
Because each while loop is sent to the background, the for loop launches 100 while loops at once, which is effectively 100 threads, so the speed is in a different league from pinging one IP at a time.
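To make the walkthrough concrete, this is roughly what the working directory looks like right after the split step, assuming the default iplist file name (the listing is illustrative, not captured output):

$ ls SplitFile/ | head -3
iplist-1
iplist-2
iplist-3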
The number of parts you split the file into is the number of "threads" you start. Clearly this task-splitting idea is not as sophisticated as the pipeline solution, but it is simpler, easier to understand, more general, and well suited to entry-level multi-threaded tasks.
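For readers curious about the pipeline solution mentioned at the top, here is a minimal sketch of the usual FIFO-based concurrency limiter in bash. It is a generic illustration of the technique, not the exact script from the linked post; THREADS, the FIFO path, and the use of iplist as input are my assumptions:

#!/bin/bash
# Generic FIFO-based concurrency limiter (illustrative sketch)
THREADS=10
tmp_fifo=/tmp/$$.fifo

mkfifo "$tmp_fifo"
exec 6<>"$tmp_fifo"        # bind the FIFO to file descriptor 6 for reading and writing
rm -f "$tmp_fifo"          # the descriptor stays usable even after the file is removed

# Put one "token" into the FIFO for each allowed concurrent job
for i in $(seq $THREADS); do
    echo >&6
done

while read ip
do
    read -u 6              # take a token; blocks when $THREADS jobs are already running
    {
        ping -c 4 -w 4 "$ip" >/dev/null && echo "$ip"
        echo >&6           # return the token once this job finishes
    } &
done < iplist

wait                       # wait for the remaining background jobs
exec 6>&-                  # close the descriptor

The FIFO acts as a token pool: a job can only start after taking a token, and it returns the token when it finishes, so at most THREADS pings run at any moment. This is more flexible than task splitting, but as the post says, the splitting approach is easier to follow for simple jobs.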