1. Greedy Algorithm
The greedy algorithm is an algorithm design method for solving optimization problems. Its core idea is to make the locally optimal choice at each step, in the hope of eventually reaching a globally optimal solution. This section introduces the principle and implementation steps of the greedy algorithm and provides C# and Java implementation examples.
1.1 Principle
The greedy algorithm is based on locally optimal choices: by picking the best available option at each step, it hopes to arrive at a globally optimal solution. It does not revisit past choices or consider future consequences; it focuses only on the locally optimal decision at the current moment.
1.2 Implementation steps
- Problem modeling: Abstract the problem into a set of choices and constraints.
- Choose a greedy strategy: Decide how to pick the locally optimal option at each step; the strategy must be designed around the structure of the specific problem.
- Check feasibility: Verify that the current choice satisfies the problem's constraints.
- Update the state: Update the problem state according to the choice just made.
- Repeat steps 2-4: Keep choosing, checking feasibility and updating the state until the termination condition is met (a minimal sketch of this loop follows, before the knapsack examples).
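As a small illustration of this loop, consider making change with a canonical coin system such as {1, 5, 10, 25}, where always taking the largest coin that still fits happens to give the minimum number of coins. The class and method names in this C# sketch are invented for the illustration and are not part of the knapsack example that follows.

using System;
using System.Collections.Generic;

class GreedyChange
{
    // Makes change for `amount` by always taking the largest
    // denomination that still fits (greedy choice + feasibility check).
    static List<int> MakeChange(int amount, int[] denominations)
    {
        Array.Sort(denominations);
        Array.Reverse(denominations);            // largest coin first
        var coins = new List<int>();
        foreach (int coin in denominations)
        {
            while (amount >= coin)               // feasibility check
            {
                coins.Add(coin);                 // greedy choice
                amount -= coin;                  // update the state
            }
        }
        return coins;
    }

    static void Main()
    {
        // 63 = 25 + 25 + 10 + 1 + 1 + 1 with coins {1, 5, 10, 25}.
        Console.WriteLine(string.Join(" + ", MakeChange(63, new[] { 1, 5, 10, 25 })));
    }
}

For non-canonical coin systems this same loop can fail to find the minimum number of coins, which is exactly the applicability caveat discussed at the end of this section.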
1.3 C# implementation example
Suppose we want to solve the knapsack problem: given a set of items and a knapsack capacity, select items to put into the knapsack so that the total value is as large as possible without exceeding the capacity. The greedy strategy below sorts the items by value per unit weight and takes each item that still fits.
using System;
using System.Collections.Generic;

class GreedyAlgorithm
{
    public static List<Item> Knapsack(List<Item> items, int capacity)
    {
        // Greedy strategy: consider items in descending order of value per unit weight.
        items.Sort((a, b) => b.ValuePerWeight.CompareTo(a.ValuePerWeight));

        List<Item> selectedItems = new List<Item>();
        int currentWeight = 0;

        foreach (var item in items)
        {
            // Feasibility check: only take the item if it still fits in the knapsack.
            if (currentWeight + item.Weight <= capacity)
            {
                selectedItems.Add(item);
                currentWeight += item.Weight;
            }
        }
        return selectedItems;
    }
}

class Item
{
    public string Name { get; set; }
    public int Weight { get; set; }
    public int Value { get; set; }
    public double ValuePerWeight => (double)Value / Weight;
}

class Program
{
    static void Main()
    {
        List<Item> items = new List<Item>
        {
            new Item { Name = "Item1", Weight = 2, Value = 10 },
            new Item { Name = "Item2", Weight = 3, Value = 5 },
            new Item { Name = "Item3", Weight = 5, Value = 15 },
        };
        int capacity = 7;

        List<Item> selectedItems = GreedyAlgorithm.Knapsack(items, capacity);

        Console.WriteLine("Selected Items:");
        foreach (var item in selectedItems)
        {
            Console.WriteLine($"{item.Name} (Weight: {item.Weight}, Value: {item.Value})");
        }
    }
}
1.4 Java implementation example
Taking the same knapsack problem as an example, here is a Java implementation:
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

class GreedyAlgorithm {
    public static List<Item> knapsack(List<Item> items, int capacity) {
        // Greedy strategy: consider items in descending order of value per unit weight.
        Collections.sort(items, Comparator.comparingDouble(Item::getValuePerWeight).reversed());
        List<Item> selectedItems = new ArrayList<>();
        int currentWeight = 0;
        for (Item item : items) {
            // Feasibility check: only take the item if it still fits in the knapsack.
            if (currentWeight + item.getWeight() <= capacity) {
                selectedItems.add(item);
                currentWeight += item.getWeight();
            }
        }
        return selectedItems;
    }
}

class Item {
    private String name;
    private int weight;
    private int value;

    public Item(String name, int weight, int value) {
        this.name = name;
        this.weight = weight;
        this.value = value;
    }

    public String getName() { return name; }
    public int getWeight() { return weight; }
    public int getValue() { return value; }
    public double getValuePerWeight() { return (double) value / weight; }
}

public class Main {
    public static void main(String[] args) {
        List<Item> items = new ArrayList<>();
        items.add(new Item("Item1", 2, 10));
        items.add(new Item("Item2", 3, 5));
        items.add(new Item("Item3", 5, 15));
        int capacity = 7;

        List<Item> selectedItems = GreedyAlgorithm.knapsack(items, capacity);

        System.out.println("Selected Items:");
        for (Item item : selectedItems) {
            System.out.println(item.getName() + " (Weight: " + item.getWeight() + ", Value: " + item.getValue() + ")");
        }
    }
}
The above example demonstrates how to attack the knapsack problem with a greedy algorithm, selecting items so that the total value in the knapsack is large. Note that the value-per-weight strategy is only guaranteed to be optimal for the fractional knapsack problem; for the 0/1 knapsack shown here it is a heuristic. For example, with capacity 50 and items of (weight, value) = (10, 60), (20, 100), (30, 120), the ratio-greedy choice yields a value of 160 while the optimal selection yields 220. In general, whether a greedy algorithm applies depends on the structure of the problem; it is not suitable for every optimization problem.
2. Dynamic Programming
Dynamic programming is an algorithm design method used to solve optimization problems. It decomposes the problem into subproblems and solves the original problem by solving the subproblems and reusing their results, which avoids repeated calculation and improves efficiency. This section introduces the principle and implementation steps of dynamic programming and provides C# and Java implementation examples.
2.1 Principle
The core idea of dynamic programming is to use the solutions of already-solved subproblems to construct the solution of the original problem, thereby avoiding repeated calculation. A problem is usually suited to dynamic programming when it satisfies two conditions:
- Optimal substructure: the optimal solution of the problem can be constructed from the optimal solutions of its subproblems.
- Overlapping subproblems: the problem can be broken down into subproblems that recur many times, so each subproblem only needs to be solved once and its result reused (see the memoized sketch below).
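To make the overlapping-subproblem condition concrete, here is a minimal top-down (memoized) Fibonacci sketch in C#. The names FibMemo and memo are invented for this illustration; section 2.3 gives the bottom-up version used in this article.

using System;
using System.Collections.Generic;

class MemoizationDemo
{
    // Cache of already-solved subproblems: n -> fib(n).
    private static readonly Dictionary<int, long> memo = new Dictionary<int, long>();

    public static long FibMemo(int n)
    {
        if (n <= 1) return n;                 // smallest subproblems
        if (memo.TryGetValue(n, out long v))  // reuse an overlapping subproblem
            return v;
        long result = FibMemo(n - 1) + FibMemo(n - 2);
        memo[n] = result;                     // store the solution for reuse
        return result;
    }

    static void Main()
    {
        Console.WriteLine(FibMemo(40)); // O(n) subproblem evaluations instead of O(2^n) calls
    }
}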
2.2 Implementation steps
- Problem modeling: Divide the problem into subproblems, defining the state and the state transition equation.
- Initialization: Set the boundary conditions, usually the solutions to the smallest subproblems.
- State transition: Compute the solutions of the subproblems recursively or iteratively according to the transition relation, saving the results in a table.
- Solve the original problem: Gradually construct the optimal solution of the original problem from the subproblem solutions.
- Return result: Return the optimal solution to the original problem.
2.3 C# implementation example
Suppose we want to solve the classic Fibonacci sequence problem and calculate the nth Fibonacci number.
using System;

class DynamicProgramming
{
    public static long Fibonacci(int n)
    {
        if (n <= 1) return n;
        long[] fib = new long[n + 1];   // table of subproblem solutions
        fib[0] = 0;
        fib[1] = 1;
        for (int i = 2; i <= n; i++)
        {
            fib[i] = fib[i - 1] + fib[i - 2];   // state transition
        }
        return fib[n];
    }
}

class Program
{
    static void Main()
    {
        int n = 10;
        long result = DynamicProgramming.Fibonacci(n);
        Console.WriteLine($"Fibonacci({n}) = {result}");
    }
}
2.4 Java implementation example
Here is the corresponding Java implementation:
public class DynamicProgramming {
    public static long fibonacci(int n) {
        if (n <= 1) return n;
        long[] fib = new long[n + 1];   // table of subproblem solutions
        fib[0] = 0;
        fib[1] = 1;
        for (int i = 2; i <= n; i++) {
            fib[i] = fib[i - 1] + fib[i - 2];   // state transition
        }
        return fib[n];
    }

    public static void main(String[] args) {
        int n = 10;
        long result = fibonacci(n);
        System.out.println("Fibonacci(" + n + ") = " + result);
    }
}
The above example demonstrates how to compute the nth Fibonacci number using dynamic programming. By saving intermediate results, repeated calculation is avoided and efficiency is improved: the table-based version runs in O(n) time, and since only the previous two values are needed, the space can also be reduced to O(1). Dynamic programming can be used to solve many complex problems and is an important algorithm design method.
3. Divide and Conquer Algorithm
Divide and conquer is an algorithm design method that decomposes a problem into subproblems, solves the subproblems, and merges their solutions to obtain the solution of the original problem. This section introduces the principle and implementation steps of the divide and conquer algorithm and provides C# and Java implementation examples.
3.1 Principle
The core idea of the divide and conquer algorithm is to decompose the problem into several smaller subproblems, solve them independently, and then merge their solutions into the solution of the original problem. A problem is usually suited to divide and conquer when it satisfies three conditions (see the sketch after this list):
- The problem can be broken down into several smaller subproblems of the same form.
- The subproblems can be solved recursively.
- The solutions of the subproblems can be combined into the solution of the original problem.
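As a minimal illustration of these three conditions, the following C# sketch (the helper name MaxOf is invented for this example) finds the maximum of an array by splitting it in half, solving each half recursively, and merging the two partial answers with Math.Max:

using System;

class DivideAndConquerDemo
{
    // Returns the maximum of arr[lo..hi] (inclusive) by divide and conquer.
    static int MaxOf(int[] arr, int lo, int hi)
    {
        if (lo == hi) return arr[lo];            // base case: a single element
        int mid = (lo + hi) / 2;
        int leftMax = MaxOf(arr, lo, mid);       // solve the left subproblem
        int rightMax = MaxOf(arr, mid + 1, hi);  // solve the right subproblem
        return Math.Max(leftMax, rightMax);      // merge the two solutions
    }

    static void Main()
    {
        int[] arr = { 12, 11, 13, 5, 6, 7 };
        Console.WriteLine(MaxOf(arr, 0, arr.Length - 1)); // prints 13
    }
}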
3.2 Implementation steps
- Problem modeling: Divide the original problem into subproblems, defining the state and the recursive relationship between them.
- Recursive solution: Solve each subproblem recursively until it is small enough to be solved directly.
- Merge subproblem solutions: Combine the solutions of the subproblems into the solution of the original problem.
- Return result: Return the solution to the original problem.
3.3 C# implementation example
Suppose we want to sort an array of integers using merge sort.
using System;

class DivideAndConquer
{
    public static void MergeSort(int[] arr)
    {
        if (arr.Length <= 1) return;                    // base case: already sorted

        int mid = arr.Length / 2;
        int[] left = new int[mid];
        int[] right = new int[arr.Length - mid];
        for (int i = 0; i < mid; i++) left[i] = arr[i];
        for (int i = mid; i < arr.Length; i++) right[i - mid] = arr[i];

        MergeSort(left);                                // solve the subproblems
        MergeSort(right);
        Merge(arr, left, right);                        // merge the subproblem solutions
    }

    private static void Merge(int[] arr, int[] left, int[] right)
    {
        int i = 0, j = 0, k = 0;
        while (i < left.Length && j < right.Length)
        {
            if (left[i] < right[j]) arr[k++] = left[i++];
            else arr[k++] = right[j++];
        }
        while (i < left.Length) arr[k++] = left[i++];
        while (j < right.Length) arr[k++] = right[j++];
    }
}

class Program
{
    static void Main()
    {
        int[] arr = { 12, 11, 13, 5, 6, 7 };
        DivideAndConquer.MergeSort(arr);

        Console.WriteLine("Sorted array:");
        foreach (var num in arr)
        {
            Console.Write(num + " ");
        }
    }
}
3.4 Java implementation example
Here is the corresponding Java implementation:
public class DivideAndConquer {
    public static void mergeSort(int[] arr) {
        if (arr.length <= 1) return;                    // base case: already sorted

        int mid = arr.length / 2;
        int[] left = new int[mid];
        int[] right = new int[arr.length - mid];
        System.arraycopy(arr, 0, left, 0, mid);
        System.arraycopy(arr, mid, right, 0, arr.length - mid);

        mergeSort(left);                                // solve the subproblems
        mergeSort(right);
        merge(arr, left, right);                        // merge the subproblem solutions
    }

    private static void merge(int[] arr, int[] left, int[] right) {
        int i = 0, j = 0, k = 0;
        while (i < left.length && j < right.length) {
            if (left[i] < right[j]) arr[k++] = left[i++];
            else arr[k++] = right[j++];
        }
        while (i < left.length) arr[k++] = left[i++];
        while (j < right.length) arr[k++] = right[j++];
    }

    public static void main(String[] args) {
        int[] arr = { 12, 11, 13, 5, 6, 7 };
        mergeSort(arr);

        System.out.println("Sorted array:");
        for (int num : arr) {
            System.out.print(num + " ");
        }
    }
}
The above example demonstrates how to sort an array of integers using the divide and conquer approach. By splitting the array in half, sorting each half recursively, and merging the sorted halves, merge sort achieves O(n log n) running time. Divide and conquer can be used to solve many complex problems and is an important algorithm design method.
4. Backtracking Algorithm
Backtracking is an algorithm design method used to solve combinatorial and search problems. It builds a solution incrementally by trying the available options one at a time, and goes back to the previous step to try a different option whenever the current partial solution cannot be extended or violates the constraints. This section introduces the principle and implementation steps of the backtracking algorithm and provides C# and Java implementation examples.
4.1 Principle
The core idea of the backtracking algorithm is depth-first search: it explores the problem's solution-space tree recursively or iteratively. During the search, if the current path cannot satisfy the requirements of the problem, the algorithm goes back to the previous step and tries other possibilities, until a solution is found or it is determined that no solution exists. Backtracking is typically suited to the following kinds of problems:
- Combinatorial problems: selecting elements from a set to form combinations, such as permutations, subsets, and combination sum.
- Search problems: searching a state space for a solution, such as the Eight Queens problem, Sudoku, and maze problems.
4.2 Implementation steps
- Problem modeling: Abstract the problem into a state-space tree, defining the states, choices, constraints, and goal.
- Select a path: From the current state, pick one of the possible choices and move forward.
- Recurse or iterate: Enter the next state, recursively or iteratively, and continue selecting.
- Check constraints: At each step, check whether the problem's constraints are satisfied; if not, backtrack to the previous step.
- Find a solution or backtrack: If a solution is found, record or process it; if the current path cannot be extended or all possibilities have been explored, backtrack.
- Return result: Return the final solution(s) or the processed result (an Eight Queens sketch follows this list).
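As another instance of this choose/check/backtrack loop, here is a minimal C# sketch that counts the solutions of the Eight Queens problem mentioned above; the names Solve and IsSafe are invented for this illustration.

using System;

class NQueens
{
    // Counts the solutions to the N-Queens problem by backtracking.
    // cols[r] holds the column chosen for the queen placed in row r.
    static int Solve(int n, int row, int[] cols)
    {
        if (row == n) return 1;                   // all rows filled: one complete solution
        int count = 0;
        for (int c = 0; c < n; c++)               // try every column in this row
        {
            if (IsSafe(row, c, cols))             // check constraints
            {
                cols[row] = c;                    // choose
                count += Solve(n, row + 1, cols); // explore the next row
                // cols[row] is simply overwritten on the next attempt (implicit undo)
            }
        }
        return count;                             // backtrack to the previous row
    }

    static bool IsSafe(int row, int col, int[] cols)
    {
        for (int r = 0; r < row; r++)
        {
            if (cols[r] == col || Math.Abs(cols[r] - col) == row - r)
                return false;                     // same column or same diagonal
        }
        return true;
    }

    static void Main()
    {
        Console.WriteLine(Solve(8, 0, new int[8])); // prints 92
    }
}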
4.3 C# implementation example
Suppose we want to solve the combination sum problem: find all combinations of numbers from the array (each number may be reused) whose sum equals the target value.
using System;
using System.Collections.Generic;

class Backtracking
{
    public static IList<IList<int>> CombinationSum(int[] candidates, int target)
    {
        IList<IList<int>> result = new List<IList<int>>();
        List<int> current = new List<int>();
        CombinationSumHelper(candidates, target, 0, current, result);
        return result;
    }

    private static void CombinationSumHelper(int[] candidates, int target, int start, List<int> current, IList<IList<int>> result)
    {
        if (target == 0)
        {
            result.Add(new List<int>(current));      // found a valid combination
            return;
        }
        for (int i = start; i < candidates.Length; i++)
        {
            if (target - candidates[i] >= 0)         // check constraints
            {
                current.Add(candidates[i]);          // choose
                CombinationSumHelper(candidates, target - candidates[i], i, current, result);
                current.RemoveAt(current.Count - 1); // backtrack
            }
        }
    }
}

class Program
{
    static void Main()
    {
        int[] candidates = { 2, 3, 6, 7 };
        int target = 7;
        IList<IList<int>> result = Backtracking.CombinationSum(candidates, target);

        Console.WriteLine("Combination Sum:");
        foreach (var list in result)
        {
            Console.WriteLine(string.Join(", ", list));
        }
    }
}
4.4 Java implementation example
Here is the corresponding Java implementation:
import java.util.ArrayList;
import java.util.List;

public class Backtracking {
    public static List<List<Integer>> combinationSum(int[] candidates, int target) {
        List<List<Integer>> result = new ArrayList<>();
        List<Integer> current = new ArrayList<>();
        combinationSumHelper(candidates, target, 0, current, result);
        return result;
    }

    private static void combinationSumHelper(int[] candidates, int target, int start, List<Integer> current, List<List<Integer>> result) {
        if (target == 0) {
            result.add(new ArrayList<>(current));    // found a valid combination
            return;
        }
        for (int i = start; i < candidates.length; i++) {
            if (target - candidates[i] >= 0) {       // check constraints
                current.add(candidates[i]);          // choose
                combinationSumHelper(candidates, target - candidates[i], i, current, result);
                current.remove(current.size() - 1);  // backtrack
            }
        }
    }

    public static void main(String[] args) {
        int[] candidates = { 2, 3, 6, 7 };
        int target = 7;
        List<List<Integer>> result = combinationSum(candidates, target);

        System.out.println("Combination Sum:");
        for (List<Integer> list : result) {
            System.out.println(list);
        }
    }
}
The above example demonstrates how to solve the combination sum problem with a backtracking algorithm, finding all combinations in the array whose sum equals the target value. By repeatedly choosing a path and backtracking when a partial solution cannot be extended, all solutions can be found. Backtracking is a powerful tool for combinatorial and search problems.
5. Summary
The greedy algorithm solves optimization problems by making the locally optimal choice at each step, in the hope of reaching a globally optimal solution. Dynamic programming breaks a problem into subproblems and solves the original problem from the subproblem solutions, avoiding repeated calculation. The divide and conquer algorithm decomposes a problem into subproblems, solves them, and merges their solutions to obtain the solution of the original problem. The backtracking algorithm builds solutions step by step by trying the available options and backtracking when necessary, and is suited to combinatorial and search problems. These methods have different application areas and implementation steps; the appropriate one can be chosen according to the characteristics of the problem.
This concludes this article on the basics of algorithm design and analysis. For more related content on algorithm design and analysis, please search my previous articles or continue browsing the related articles below. I hope you will continue to support me!