1. Preparation
Prepare the data:
Generate random numbers, write them to a file, and then read the data back:
// NewIntRandm generates `number` random integers below maxrandm and stores them in a txt file.
func NewIntRandm(fileName string, number, maxrandm int) {
	filename := fileName
	file, err := os.Create(filename)
	if err != nil {
		return
	}
	defer file.Close()
	r := rand.New(rand.NewSource(time.Now().UnixNano()))
	rans := make([]string, 0, number)
	for i := 0; i < number; i++ {
		rans = append(rans, strconv.Itoa(r.Intn(maxrandm)))
	}
	file.WriteString(strings.Join(rans, " "))
}

// SavaRandmInt saves an integer slice into the file.
func SavaRandmInt(fileName string, data []int) {
	if fileName == "" || len(data) == 0 {
		return
	}
	var file *os.File
	var openerr error
	file, openerr = os.OpenFile(fileName, os.O_WRONLY|os.O_TRUNC, 0644) // os.Open would be read-only and could not be written to
	if openerr != nil {
		var newerr error
		file, newerr = os.Create(fileName)
		if newerr != nil {
			return
		}
	}
	defer file.Close()
	rans := make([]string, 0, len(data))
	for _, v := range data {
		rans = append(rans, strconv.Itoa(v))
	}
	file.WriteString(strings.Join(rans, " "))
}
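The main function in step 3 reads these numbers back, but the article never shows that reader. Here is a minimal sketch of what it might look like, assuming the space-separated format written above; the name ReadRandmInt and the error handling are my own, not the author's code.

// ReadRandmInt reads back the space-separated integers written by
// NewIntRandm / SavaRandmInt. Hypothetical helper: the original reader
// is not shown in the article.
func ReadRandmInt(fileName string) []int {
	raw, err := ioutil.ReadFile(fileName) // needs "io/ioutil"
	if err != nil {
		return nil
	}
	fields := strings.Fields(string(raw)) // split on any whitespace
	data := make([]int, 0, len(fields))
	for _, f := range fields {
		if v, convErr := strconv.Atoi(f); convErr == nil {
			data = append(data, v) // silently skip anything that is not an integer
		}
	}
	return data
}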
Prepare the timing helper:
package util

import "time"

type Stopwatch struct {
	start time.Time
	stop  time.Time
}

func (s *Stopwatch) Start() {
	s.start = time.Now()
}

func (s *Stopwatch) Stop() {
	s.stop = time.Now()
}

// Nanoseconds
func (s Stopwatch) RuntimeNs() int64 {
	return s.stop.UnixNano() - s.start.UnixNano()
}

// Microseconds
func (s Stopwatch) RuntimeUs() float64 {
	return float64(s.stop.UnixNano()-s.start.UnixNano()) / 1000.00
}

// Milliseconds
func (s Stopwatch) RuntimeMs() float64 {
	return float64(s.stop.UnixNano()-s.start.UnixNano()) / 1000000.00
}

// Seconds (the original divided by 1e10, which is off by a factor of ten; 1e9 nanoseconds make a second)
func (s Stopwatch) RuntimeS() float64 {
	return float64(s.stop.UnixNano()-s.start.UnixNano()) / 1000000000.00
}
2. Start writing the sorts
I imitated the approach of the sort package in the golang standard library: expose a small interface and write the sorting implementations against it.
package sort // package main

type Interface interface {
	// Len returns the length of the data.
	Len() int
	// Less compares the values at index i and index j. If your
	// implementation returns true when the value at i is greater,
	// these sorts run ascending; otherwise descending.
	Less(i, j int) bool
	// Swap exchanges the values at index i and index j.
	Swap(i, j int)
}

// BubbleSort: bubble sort.
func BubbleSort(data Interface) {
	n := data.Len()
	for index := 0; index < n; index++ {
		for j := index + 1; j < n; j++ {
			if data.Less(index, j) {
				data.Swap(index, j)
			}
		}
	}
}

// SelectSort is faster than the bubble sort above: finding the target
// element only means remembering its subscript, so there is no element
// exchange on every comparison.
func SelectSort(data Interface) {
	n := data.Len()
	var min int
	for index := 0; index < n; index++ {
		min = index
		for j := index + 1; j < n; j++ {
			if data.Less(min, j) {
				min = j
			}
		}
		data.Swap(index, min)
	}
}

// InsertSort: insertion sort (the original spelled it InsertSrot).
func InsertSort(data Interface) {
	count := data.Len()
	for index := 1; index < count; index++ {
		for j := index; j > 0 && data.Less(j, j-1); j-- { // j > 0 is a boundary guard: the subscript must not go below 0
			data.Swap(j, j-1)
		}
	}
}

// ShellSort: Shell (diminishing increment) sort.
func ShellSort(data Interface) {
	n := data.Len()
	h := 1
	for h < n/3 {
		h = 3*h + 1
	}
	for h > 0 {
		for index := h; index < n; index++ {
			for j := index; j >= h && data.Less(j, j-h); j -= h { // j >= h is the boundary guard here
				data.Swap(j, j-h)
			}
		}
		h = h / 3
	}
}

// QuickSort: quick sort.
func QuickSort(data Interface) {
	n := data.Len()
	low, row := 0, n-1
	quickSort(data, low, row)
}

func quickSort(data Interface, low, row int) {
	if low < row {
		i, j, x, last := low, row, low, 0 // x = low: the first element is the pivot ("the pit"); last records which side the pivot was swapped to most recently
		for i < j {
			for i < j && data.Less(x, j) { // scan from the right for an element that belongs in the pit at the front
				j--
			}
			if i < j {
				data.Swap(i, j)
				i++
				x = j
				last = 1
			}
			for i < j && data.Less(i, x) { // scan from the left, sending elements past the pivot into the pit at the back
				i++
			}
			if i < j {
				data.Swap(i, j)
				j--
				x = i
				last = -1
			}
		}
		if last == 1 {
			data.Swap(j, x)
		} else if last == -1 {
			data.Swap(i, x)
		}
		quickSort(data, low, i-1)
		quickSort(data, i+1, row)
	}
}

// HeapSort: control ascending or descending order through the Less method.
func HeapSort(data Interface) {
	makeHeap(data)
	n := data.Len()
	for i := n - 1; i >= 1; i-- {
		data.Swap(0, i)
		heapFixdown(data, 0, i)
	}
}

func makeHeap(data Interface) {
	n := data.Len()
	for i := (n - 1) >> 1; i >= 0; i-- {
		heapFixdown(data, i, n)
	}
}

func heapFixdown(data Interface, r, n int) {
	root := r // the root of the subtree being fixed
	for {
		leftChildIndex := root<<1 + 1
		if leftChildIndex >= n {
			break
		}
		if leftChildIndex+1 < n && data.Less(leftChildIndex+1, leftChildIndex) {
			leftChildIndex++
		}
		if data.Less(root, leftChildIndex) {
			return
		}
		data.Swap(leftChildIndex, root)
		root = leftChildIndex
	}
}
3. Get started
// Implement the sorting interface first.
type InSort []int

func (is InSort) Len() int { return len(is) }

// Descending order: Less returns true when the value at i is larger.
func (is InSort) Less(i, j int) bool { return is[i] > is[j] }

func (is InSort) Swap(i, j int) { is[i], is[j] = is[j], is[i] }

func main() {
	fileName := "" // path of the random-number file (left blank in the original)
	// util.NewIntRandm(fileName, 100000, 10000) // generate the random numbers with the wrapper from step 1
	insort := InSort(ReadRandmInt(fileName)) // read the generated random numbers back (reader sketched in step 1)
	fmt.Println(len(insort))
	t := new(util.Stopwatch) // the timing wrapper from step 1
	t.Start()
	// sort.BubbleSort(insort) // start sorting: 519.8732 ms
	sort.QuickSort(insort) // start sorting: 7.0267 ms
	t.Stop()
	fmt.Println(t.RuntimeMs(), "ms")
	fmt.Println("", insort)
}
Quick sort: array of 10000 in 7.0267 ms; array of 100000 in 37.7612 ms
Heap sort: array of 10000 in 10.0039 ms; array of 1000000 in 358.6429 ms
Here is some more data from my tests:
HeapSort(insort)                    // heap sort: 10000 numbers 4.0013 ms; 100000 numbers 54.0659 ms; 500000 numbers 208.1511 ms. Very stable.
quickSort(insort, 0, len(insort)-1) // quick sort: 10000 numbers 3.0017 ms; 100000 numbers 33.0222 ms; 500000 numbers 150.1096 ms; 100000 numbers 94.0823 ms. Very stable.
SelectSort(insort)                  // selection sort: 10000 numbers 130.8017 ms; 100000 numbers takes a very long time.
BubbleSort(insort)                  // bubble sort: 10000 numbers 203.5344 ms; 100000 numbers 187.7438 ms.
InsertSort(insort)                  // insertion sort: 10000 numbers 858.6085 ms; 100000 numbers takes a long time.
ShellSort(insort)                   // Shell sort: 10000 numbers 10.9876 ms; 100000 numbers 46.0322 ms, very stable in this range; 500000 numbers 141.8833 ms, relatively stable.
sort.Sort(insort)                   // the golang standard library sort: 10000 numbers 6.0062 ms; 100000 numbers 19.9988 ms~89.0574 ms, unstable; 500000 numbers 358.2536 ms, stable.
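If you want to reproduce numbers like these, time each algorithm on its own fresh copy of the data, otherwise every run after the first sorts already-sorted input. A small driver sketch using the Stopwatch from step 1 (the helper name runBench is my own; sort here is the article's sorting package, not the standard library):

// runBench times one sorting function on a private copy of the data,
// so every algorithm sees the same unsorted input.
func runBench(name string, data InSort, sortFn func(sort.Interface)) {
	buf := make(InSort, len(data))
	copy(buf, data)
	t := new(util.Stopwatch)
	t.Start()
	sortFn(buf)
	t.Stop()
	fmt.Println(name, buf.Len(), "elements:", t.RuntimeMs(), "ms")
}

// usage:
// runBench("heap", insort, sort.HeapSort)
// runBench("shell", insort, sort.ShellSort)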
Supplement: Comparison of the advantages and disadvantages of golang timing tasks
Golang timed tasks are driven by loops, and there are three common implementations:
1. The time.Sleep method:
for {
	time.Sleep(3 * time.Second) // the original omitted the duration; any d works
	fmt.Println("I perform tasks on schedule")
}
2. The time.Tick function:
t1 := time.Tick(3 * time.Second)
for {
	select {
	case <-t1:
		fmt.Println("t1 timer")
	}
}
3. Still in the Tick family, you can also use the time.NewTicker function to obtain the Ticker struct first and then block listening on its channel. This method lets you stop the timer manually, reducing memory waste once the task is no longer needed.
t := time.NewTicker(3 * time.Second)
for {
	select {
	case <-t.C:
		fmt.Println("t1 timer")
		t.Stop() // manually stop the timer (as written this fires once; normally you would Stop on a shutdown path)
	}
}
The second and third methods fall into the same category, as the snippet below makes explicit.
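They are the same category quite literally: in the standard library source of the Go versions discussed here, Tick is a thin convenience wrapper that returns the channel of a Ticker it never stops.

// From the standard library (time/tick.go); exact wording varies by version.
func Tick(d Duration) <-chan Time {
	if d <= 0 {
		return nil
	}
	return NewTicker(d).C
}

Because nothing keeps a reference to the Ticker behind Tick, it can never be stopped or collected; method 3 holds on to the Ticker precisely so it can call Stop.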
The implementation principle of these three timers
Generally speaking, when you use Sleep to run a timed task, others will advise you not to complete timed tasks that way. But why can't the Sleep function be used for timed tasks? What disadvantages does it have compared with the Tick function? To answer that we need to read the source code and analyze the trade-offs between them.
First, let's study the Tick function: func Tick(d Duration) <-chan Time
Calling the Tick function returns a channel of Time values. If you understand channels a little, your first thought will be: since a channel is returned, a goroutine must be created somewhere inside the call, responsible for sending data into the channel and waking the blocked task. After reading the source code, I did find that a goroutine is spawned to handle the timed tasks.
By this understanding, using Tick costs an extra goroutine, so its efficiency and memory footprint could hardly beat the Sleep function. We have to keep reading the source code to get at the truth.
I won't walk through the straightforward call chain; instead I will introduce the core structure and methods (some judgment code is deleted, and my explanations are written in the comments):
func (tb *timersBucket) addtimerLocked(t *timer) {
	t.i = len(tb.t)        // index of the new timed task inside the timersBucket
	tb.t = append(tb.t, t) // append the current timed task to the timersBucket
	siftupTimer(tb.t, t.i) // maintain the min-heap of timers (a quaternary heap); the sort key is when, i.e. the next time the task fires
	if !tb.created {
		tb.created = true
		go timerproc(tb) // if no goroutine manages the timed tasks yet, create the one that does; this is the core
	}
}
timersBucket, as the name suggests, is a bucket of timers: a runtime-global variable invisible to outside code. Whenever a new timed task is created, its timer is added to the timer slice inside the timersBucket.
The timersBucket structure is as follows:
type timersBucket struct {
	lock mutex    // taken when a new timed task is added (the contention point is maintaining the heap)
	t    []*timer // the timer slice, organized as a quaternary (four-ary) min-heap
	// other bookkeeping fields (created, sleeping, waitnote, ...) are omitted here
}
A detailed look at func timerproc(tb *timersBucket)
It can be called the timed-task processor. All timed tasks are added to the timersBucket and wait in this function to be processed. The waiting timers form a min-heap keyed by the when field (the moment the task should run, an integer in nanoseconds). Each time the timer at the top of the heap is processed, its when field is advanced by the task's repeat interval (the d parameter of Tick(d Duration)), and the heap is then fixed up so that the timer with the smallest when is back on top. When no timer in the heap is ready (there are timers, but their execution time has not arrived), the processor computes delta, the difference between the when of the top timer and the current time, sleeps for delta, and waits to be woken by the scheduler. The core code is as follows (comments are at the end of each line; some judgment code and non-core code that hurt readability have been removed):
func timerproc(tb *timersBucket) {
	for {
		lock(&tb.lock)     // take the lock
		now := nanotime()  // current time in nanoseconds
		delta := int64(-1) // gap between the soonest timer and now
		for {
			if len(tb.t) == 0 {
				delta = -1
				break // no executable timer at all; leave the inner loop
			}
			t := tb.t[0]
			delta = t.when - now // take the timer at the top of the heap and compute its distance from now
			if delta > 0 {
				break // delta > 0 means its time has not arrived yet; break out and sleep for delta
			}
			if t.period > 0 {
				// leave in heap but adjust next time to fire
				t.when += t.period * (1 + -delta/t.period) // compute the next moment this task fires
				siftdownTimer(tb.t, 0)                     // fix up the heap
			} else {
				// remove from heap: no period means the task runs only once (the Sleep / time.After case)
				last := len(tb.t) - 1
				if last > 0 {
					tb.t[0] = tb.t[last]
					tb.t[0].i = 0
				}
				tb.t[last] = nil
				tb.t = tb.t[:last]
				if last > 0 {
					siftdownTimer(tb.t, 0)
				}
				t.i = -1 // mark as removed
			}
			f := t.f
			arg := t.arg
			seq := t.seq
			unlock(&tb.lock) // release the lock
			f(arg, seq)      // run the timer's function; for Tick this sends a Time value on the channel and wakes the blocked goroutine
			lock(&tb.lock)
		}
		if delta < 0 {
			// No timers left - put goroutine to sleep.
			tb.rescheduling = true
			goparkunlock(&tb.lock, "timer goroutine (idle)", traceEvGoBlock, 1)
			continue // delta < 0 means there is no timed task at present; park until one arrives
		}
		tb.sleeping = true
		tb.sleepUntil = now + delta
		unlock(&tb.lock)
		notetsleepg(&tb.waitnote, delta) // sleep for delta; after waking, the timer at the top of the heap is ready to run
	}
}
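To see what the rescheduling line t.when += t.period * (1 + -delta/t.period) does, take numbers of my own: period = 3s, and the processor wakes 7s after the timer was due, so delta = -7s. Integer division gives -delta/period = 2, so when advances by 3 * (1 + 2) = 9s, landing 2s from now and still on the original 3-second grid; the two fully missed ticks are dropped rather than delivered in a burst.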
At this point, the main functions involved have been covered. To sum up: when the first timed task is started, a single dedicated goroutine is created to manage the timers, and every timer is processed inside that goroutine.
Next, let's read the Sleep source. The core code is as follows:
//go:linkname timeSleep time.Sleep
func timeSleep(ns int64) {
	gp := getg()
	t := gp.timer // the runtime caches one timer per goroutine
	if t == nil {
		t = new(timer)
		gp.timer = t
	}
	*t = timer{}             // reset the timed task
	t.when = nanotime() + ns // compute the moment the task fires
	t.f = goroutineReady     // the function to execute: make this goroutine runnable again
	t.arg = gp
	tb := t.assignBucket()
	lock(&tb.lock)
	tb.addtimerLocked(t)                               // join the timer heap, to be handled in the timed-task processing goroutine
	goparkunlock(&tb.lock, "sleep", traceEvGoSleep, 2) // park and wait for the timer goroutine to wake us
}
Having read the core code of Sleep, did you suddenly realize how similar it is to the Tick function? Both create a timer and hand it to the timed-task processing goroutine. The neat part is that the timers produced by these two functions are placed in the same timer heap, waiting to be handled by the same processing goroutine.
Comparison of advantages and disadvantages, recommendations for use
Now we know that the timer structures used by Tick, Sleep and the other time functions are all handed to the same goroutine for uniform processing, so at first glance there seems to be no difference between using Tick and using Sleep.
In fact, there is a difference. Sleep parks the calling goroutine and has to be woken up by the scheduler, while the Tick function blocks the current goroutine on a channel receive until the timed task fires. I am not yet clear on how golang's channel blocking and scheduler sleep differ in resource consumption, so I cannot give advice on that point.
However, using a channel to block the goroutine is more flexible: combined with select you can set a timeout and a default branch, you can shut the timer down actively, and you avoid creating a new timer on every iteration (which saves system memory, and garbage collection also takes time). A sketch follows.
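As a minimal sketch of that flexibility (the function and channel names are mine, not from the article): a periodic worker that allocates its Ticker once, reacts to an external shutdown signal through select, and releases the timer actively.

package main

import (
	"fmt"
	"time"
)

// worker ticks once per second until something closes stop.
func worker(stop <-chan struct{}) {
	ticker := time.NewTicker(1 * time.Second)
	defer ticker.Stop() // actively shut the timer down when the worker exits
	for {
		select {
		case <-ticker.C:
			fmt.Println("run the periodic task")
		case <-stop:
			return // external shutdown; no timer is leaked
		}
	}
}

func main() {
	stop := make(chan struct{})
	go worker(stop)
	time.Sleep(3500 * time.Millisecond) // let it tick a few times
	close(stop)
}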
Therefore, it is recommended to use Tick / NewTicker to complete scheduled tasks.
The above is my personal experience. I hope it can give you a reference, and I hope you will support me more. If there are any mistakes or incomplete considerations, please do not hesitate to point them out.