Preface
Sorting is something every programming language has to deal with. In this article I will share several classic sorting algorithms implemented in Go and explain how to generate random numbers. Without further ado, let's walk through the details together.
Classic sorting algorithm
Learning algorithms is important and a key measure of a programmer's skill. Algorithms cannot be learned by rote memorization; you need to understand the ideas behind them so you can apply them flexibly in real development.
Seven classic sorting algorithms
- Insertion sort
- Selection sort
- Bubble sort
- Shell sort
- Merge sort
- Heap sort
- Quick sort
Insertion sort
Let's first consider a question: given an array of length n whose first n-1 elements are already in ascending order, how do we sort the whole array?
1. Scan positions 1 through n-1 and find the position k where the nth element should be placed
2. Shift the elements from position k through n-1 one place to the right
3. The whole array of length n is now in ascending order
Specific implementation method:
```go
package main

import "fmt"

func insertionSort(arr []int) {
	for i := 1; i < len(arr); i++ {
		value := arr[i]
		for j := i - 1; j >= 0; j-- {
			if value < arr[j] {
				// shift arr[j] one place right and keep value at position j
				arr[j+1], arr[j] = arr[j], value
			} else {
				break
			}
		}
	}
}

func main() {
	arr := []int{6, 5, 4, 3, 2, 1, 0}
	insertionSort(arr)
	fmt.Println("Sorted arr:", arr)
}
```
Complexity:
Time complexity: O(n²)
Space complexity: extra space O(1)
Big O notation is commonly used in computer science to express the complexity of algorithms, including:
Time complexity: a measure of the algorithm's running time
Space complexity: a measure of the space the algorithm occupies, such as memory or disk.
Generally speaking, Big O notation expresses the worst-case complexity.
The same applies to algorithm analysis. Suppose you search for a certain number among n numbers. In the best case the first number is the one you want, and the time complexity is O(1). If the last number is the one you are looking for, the time complexity is O(n), the worst case. The average running time is viewed from the perspective of probability: if the number is equally likely to appear at every position, the average number of comparisons is n/2.
The average running time is the most meaningful measure because it is the expected running time. In practice, however, the average running time is hard to obtain by analysis and is usually estimated by running experiments on a certain amount of data. The worst-case running time is a guarantee: the running time will never exceed it. In applications this is the most important requirement, and unless otherwise specified, the running time we refer to is the worst-case running time; that is, the time complexity is the worst-case time complexity.
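The best-case/worst-case distinction above can be illustrated with a simple linear search (a minimal sketch; the function name `linearSearch` is mine, not from the article):

```go
package main

import "fmt"

// linearSearch returns the index of target in arr, or -1 if it is absent.
// Best case: target is the first element, one comparison (O(1)).
// Worst case: target is last or absent, n comparisons (O(n)).
func linearSearch(arr []int, target int) int {
	for i, v := range arr {
		if v == target {
			return i
		}
	}
	return -1
}

func main() {
	arr := []int{7, 3, 9, 1, 5}
	fmt.Println(linearSearch(arr, 7)) // first element: best case
	fmt.Println(linearSearch(arr, 5)) // last element: worst case
	fmt.Println(linearSearch(arr, 8)) // absent: also worst case, returns -1
}
```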
Common algorithm time complexity is from small to large:
O(1) < O(log₂n) < O(n) < O(n·log₂n) < O(n²) < O(n³) < O(2ⁿ)
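This ordering can be checked numerically by evaluating each complexity function at a concrete n (a small illustrative sketch; the helper name `at` is mine):

```go
package main

import (
	"fmt"
	"math"
)

// at evaluates the common complexity functions at n, in the order
// log₂n, n, n·log₂n, n², 2ⁿ, so the growth ordering can be seen directly.
func at(n float64) []float64 {
	return []float64{math.Log2(n), n, n * math.Log2(n), n * n, math.Pow(2, n)}
}

func main() {
	fmt.Println(at(8))  // [3 8 24 64 256]
	fmt.Println(at(16)) // [4 16 64 256 65536]
}
```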
Here O is the conventional symbol for complexity, used much like the name of a function that expresses complexity in terms of the input size.
Both complexities are estimates.
The way to estimate them is to derive a complexity formula from the logic of the code.
For time complexity, what mainly matters is loops whose iteration count depends on a variable.
For example, for i := 0; i < n; i++ {...} can be understood as O(n).
And although x = n + 1; y = x + 1; z = x + y consists of three statements, there is no loop, so it is understood as O(1).
For space complexity, what mainly matters is allocations whose size depends on a variable.
For example, x := make([]int, n) can be understood as O(n).
Although var x, y, z int declares three variables, the amount of space does not depend on the input, so it is understood as O(1).
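To make this concrete, here is a small Go sketch contrasting the two cases (the function names `sum` and `firstN` are mine, chosen for illustration):

```go
package main

import "fmt"

// sum loops n times: O(n) time, but only O(1) extra space.
func sum(n int) int {
	total := 0
	for i := 0; i < n; i++ {
		total += i
	}
	return total
}

// firstN allocates a slice whose length grows with n: O(n) space.
func firstN(n int) []int {
	x := make([]int, n) // allocation size depends on n
	for i := range x {
		x[i] = i
	}
	return x
}

func main() {
	fmt.Println(sum(5))         // 0+1+2+3+4 = 10
	fmt.Println(len(firstN(5))) // slice of length 5
}
```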
Big O is a mathematical symbol used to describe the asymptotic behavior of a function. It can describe behavior as the argument tends to infinity or as it tends to zero, depending on whether you use it for algorithms or for error terms in mathematical function estimation.
Let's take a look at our insertion sort:
- When the array is in reverse order, the time complexity is O(n²)
- When the array is almost sorted, the time complexity approaches O(n)
In addition, the per-element overhead of insertion sort is very small, so its constant factor can be thought of as roughly 1.
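The best- and worst-case behavior can be observed directly with a version of insertion sort instrumented to count element shifts (the counter and the name `insertionSortCount` are my additions for illustration, not part of the original code):

```go
package main

import "fmt"

// insertionSortCount sorts arr in place and returns the number of
// element shifts performed, as a proxy for the amount of work done.
func insertionSortCount(arr []int) int {
	shifts := 0
	for i := 1; i < len(arr); i++ {
		value := arr[i]
		j := i - 1
		for j >= 0 && value < arr[j] {
			arr[j+1] = arr[j] // shift one element right
			shifts++
			j--
		}
		arr[j+1] = value
	}
	return shifts
}

func main() {
	sorted := []int{0, 1, 2, 3, 4, 5}
	reversed := []int{5, 4, 3, 2, 1, 0}
	fmt.Println(insertionSortCount(sorted))   // 0 shifts: already sorted, ~O(n)
	fmt.Println(insertionSortCount(reversed)) // n*(n-1)/2 = 15 shifts: O(n²)
}
```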
In practical applications, constants are also a very important factor. Some algorithms have low complexity but high constants; coupled with the characteristics of data, sometimes they are not as good as algorithms with higher complexity but low constants.
In the process of understanding insertion sort, you should also grasp a general algorithmic idea:
- Decompose the problem into subproblems
- Find the initial state of the problem
- Starting from the initial state, solve the subproblems step by step to reach the final solution
In practical applications, there are several key points to consider to choose algorithms:
- Complexity: including time complexity, space complexity, constants, etc.
- Implementation complexity: an algorithm that is hard to implement, test, or maintain is also a big problem.
- Applicability: Are there more appropriate algorithms in specific business scenarios?
In general, we need to analyze each situation on its own merits and solve the problem as simply as possible while meeting business needs.
Generating random numbers in an interval in Go
```go
package main

import (
	"fmt"
	"math/rand"
)

// RandInt64 returns a random number in the half-open interval [min, max).
// Parameters:
//   min: minimum value (inclusive)
//   max: maximum value (exclusive)
// If the arguments are invalid (min >= max, or either is 0), it returns max.
func RandInt64(min, max int64) int64 {
	if min >= max || min == 0 || max == 0 {
		return max
	}
	return rand.Int63n(max-min) + min
}

func main() {
	fmt.Println(RandInt64(10, 20)) // some value in [10, 20)
}
```
Reference article: 【BAT Backend Introduction】Lesson 2: Arrays and Sorting
Summary
That concludes this article. I hope its content is of some reference value for your study or work. If you have any questions, feel free to leave a message. Thank you for your support.