Linear search is the most straightforward way to find an element in a collection or array.
Understand linear search through code
What does "linear" mean here? It's easiest to see in code.
First, you need a collection or array. Where does it come from? Just generate a fixed-length array of random integers. Then enter a search key: if the key is found, return the index of the element; if not, return -1. It's that simple.
```csharp
class Program
{
    private static int[] arr;
    private static Random r = new Random();

    static void Main(string[] args)
    {
        SeedData(10);
        ShowArray();
        Console.WriteLine();
        Console.WriteLine("Please enter the integer to be found:");
        var temp = Convert.ToInt32(Console.ReadLine());
        Console.WriteLine("The index of {0} is {1}", temp, LinearSearch(temp));
        Console.ReadKey();
    }

    // Linear search
    private static int LinearSearch(int key)
    {
        for (int i = 0; i < arr.Length; i++)
        {
            if (arr[i] == key)
                return i; // Return the index if found
        }
        return -1; // If not found, return -1
    }

    // Seed the array with random data
    private static void SeedData(int size)
    {
        arr = new int[size];
        for (int i = 0; i < size; i++)
        {
            arr[i] = r.Next(1, 100);
        }
    }

    // Print all elements of the array
    private static void ShowArray()
    {
        foreach (var item in arr)
        {
            Console.Write(item + " ");
        }
    }
}
```
From this we can define "linear search": given a search key, we examine each element of the collection in sequence (in practice, by loop traversal). If the key is found, the index of the element is returned; if not, a sentinel value such as -1 is returned.
Time complexity
Linear search is only the simplest search algorithm; there are others that are more sophisticated. How do we measure the efficiency of different algorithms? The answer is "time complexity". Like any concept, it comes from practice rather than out of thin air.
O(1)
Suppose an array has 10 elements and you need to check whether the first element equals the second. The algorithm needs only one comparison to get the result. What if the array has 100 elements? Still one comparison. The number of operations has nothing to do with the size of the array, so such an algorithm is said to run in "constant time". To express this concisely, people write it as O(1).
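As a minimal sketch of the idea (the method name is invented for illustration, not part of the article's program), a constant-time check looks like this:

```csharp
// One comparison, regardless of how long the array is: O(1).
// Hypothetical helper for illustration only.
static bool FirstTwoEqual(int[] arr)
{
    return arr[0] == arr[1]; // exactly one comparison whether arr has 10 or 10 million elements
}
```

Whether the array holds 10 elements or 10 million, this method performs the same single comparison, which is exactly what O(1) describes.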
Although O(1) is intuitive, it is easy to misread. Some people think O(1) applies only when the algorithm runs exactly once. That is taking the notation too literally. A more accurate reading of O(1) is: the number of operations is a constant and does not grow with the size of the array.
O(n)
The excitement of life comes from variety. Suppose an array has 10 elements and you need to check whether the first element equals any of the others: the algorithm must run 9 times. If the array has 100 elements, it runs 99 times; in general, n-1 times, so the number of operations grows with n. People write this algorithm as O(n) (the -1 can be ignored), read as "order n".
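A small sketch of such an O(n) check (the method name is invented for illustration):

```csharp
// Compare the first element against every other element: n-1 comparisons, O(n).
// Hypothetical helper for illustration only.
static bool FirstEqualsAnyOther(int[] arr)
{
    for (int i = 1; i < arr.Length; i++) // runs arr.Length - 1 times
    {
        if (arr[0] == arr[i]) return true;
    }
    return false;
}
```

Doubling the array doubles the number of comparisons in the worst case, which is the hallmark of linear time.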
O(n²)
Suppose another algorithm needs to compare whether any element in the array equals any other element. The first element must be compared with the n-1 elements after it, the second with the n-2 elements after it, and so on: (n-1) + (n-2) + ... + 2 + 1. This sum can be worked out with pen and paper and simplifies to n²/2 - n/2. As n grows, the constant factor and the lower-order term can be ignored, so the algorithm is written O(n²), called "quadratic running time" and read as "order n²".
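The comparison pattern just described is essentially a duplicate check. A minimal sketch (the method name is invented for illustration):

```csharp
// Check whether any element equals any other:
// (n-1) + (n-2) + ... + 1 = n²/2 - n/2 comparisons in the worst case, i.e. O(n²).
// Hypothetical helper for illustration only.
static bool HasDuplicate(int[] arr)
{
    for (int i = 0; i < arr.Length; i++)
    {
        for (int j = i + 1; j < arr.Length; j++) // inner loop shrinks by one each pass
        {
            if (arr[i] == arr[j]) return true;
        }
    }
    return false;
}
```

The nested loops are the giveaway: for each of the n outer iterations, the inner loop does up to n-1 comparisons, giving the quadratic total.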
When n is in the tens of thousands, an O(n²) algorithm poses no real problem for today's personal computers, which execute billions of operations per second. But if n reaches the millions, the algorithm needs trillions of operations and may take hours to run. And if n reaches the billions, the computation could take decades. Every algorithm therefore has its scope of application.
Now "time complexity" can be defined a bit more completely: it is a function that quantitatively describes the running time of an algorithm. Unless stated otherwise, time complexity refers to the worst case.
Common time complexities are:
- O(1), constant order, such as hash table lookup
- O(log₂n), logarithmic order, such as binary search
- O(n), linear order, such as linear search
- O(nlog₂n), linearithmic order, such as the average complexity of quicksort
- O(n²), quadratic order, such as bubble sort
- O(n³), cubic order, such as the Floyd algorithm for shortest paths
- O(n^k), k-th power order
- O(2^n), exponential order, such as the Tower of Hanoi
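For contrast with linear search, here is a minimal sketch of the binary search mentioned above. It assumes a sorted array and halves the search range each step, which is where O(log₂n) comes from:

```csharp
// Binary search on a sorted array: the range halves each iteration, so O(log₂n).
// Returns the index of key, or -1 if not found (same convention as LinearSearch).
static int BinarySearch(int[] sorted, int key)
{
    int lo = 0, hi = sorted.Length - 1;
    while (lo <= hi)
    {
        int mid = lo + (hi - lo) / 2; // avoids overflow of (lo + hi)
        if (sorted[mid] == key) return mid;
        if (sorted[mid] < key) lo = mid + 1;
        else hi = mid - 1;
    }
    return -1;
}
```

For an array of a million elements, binary search needs at most about 20 comparisons, while linear search may need a million.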
What is an algorithm
An algorithm is a description of the steps for solving a specific problem. In a computer it appears as a finite sequence of instructions, each of which represents one or more operations.
  sum = 1 + 2 + 3 + ... + 100
  sum = 100 + 99 + 98 + ... + 1
2*sum = 101 * 100 = 10100
  sum = 10100 / 2 = 5050
The above is an algorithm for summing 1 to 100. Yes, it was devised in the 18th century by a child in a small German town, a boy named Gauss. Magical!
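Gauss's pairing trick generalizes to any n: the sum of 1 to n is n(n+1)/2, which turns an O(n) loop into an O(1) formula. A one-line sketch (the method name is invented for illustration):

```csharp
// Sum 1..n in O(1) using Gauss's formula, instead of O(n) with a loop.
// Hypothetical helper for illustration only.
static int GaussSum(int n)
{
    return n * (n + 1) / 2; // n(n+1) is always even, so integer division is exact
}
```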
That is the entire content of this article. I hope it is of some reference value for your study or work. Thank you for your support.