Preface
When developing backend services, we often need to export list data to Excel. Sometimes a field in the list cannot be fetched through a SQL join and must be queried separately for each row. Looping over the list and querying rows one at a time inevitably takes a long time, so how can we speed up the export? (The examples below use a Go server.)
1. Goroutine
The first idea that comes to mind is to launch a goroutine for each item, so the per-row queries in the loop run concurrently:
```go
package main

import (
	"fmt"
	"math/rand"
	"strconv"
	"time"
)

type Card struct {
	Name    string  `json:"name"`
	Balance float64 `json:"balance"`
}

func main() {
	// Get the card list data
	list := getList()
	var data = make([]Card, 0, len(list))
	for _, val := range list {
		go func(card Card) {
			// Query the balance service and record the result
			// (note: concurrent append is also a data race; later sections fix this)
			var balance = getBalance()
			data = append(data, Card{
				Name:    card.Name,
				Balance: balance,
			})
		}(val)
	}
	fmt.Printf("Data: %+v\n", data)
}

// getList builds the card list
func getList() []Card {
	var list = make([]Card, 0)
	for i := 0; i < 10000; i++ {
		list = append(list, Card{
			Name: "Card-" + strconv.Itoa(i+1),
		})
	}
	return list
}

// getBalance simulates a slow balance query (sleeps 100 microseconds)
func getBalance() float64 {
	time.Sleep(time.Microsecond * 100)
	return float64(rand.Int63n(1000))
}
```
Run the code above and the output is `Data: []`. Why? The goroutines need time to do their work, but the loop finishes and `main` reaches the print statement before any of them complete. How can we make sure every result is in place before printing?
2. sync.WaitGroup
The fix is to synchronize the tasks with a wait group. `sync.WaitGroup` guarantees that, in a concurrent environment, we block until a known number of tasks have all completed:
```go
func main() {
	list := getList() // Get the card list data
	var data = make([]Card, 0, len(list))
	var mu sync.Mutex     // protects data: concurrent append without it is a data race
	var wg sync.WaitGroup // declare a wait group
	for _, val := range list {
		wg.Add(1) // increment the wait group before each task starts
		go func(card Card) {
			defer wg.Done() // decrement the wait group when this goroutine finishes
			// Query the balance (sleeps 100 microseconds) and record the result
			var balance = getBalance()
			mu.Lock()
			data = append(data, Card{
				Name:    card.Name,
				Balance: balance,
			})
			mu.Unlock()
		}(val)
	}
	wg.Wait() // wait for all tasks to complete
	fmt.Printf("Data: %+v\n", data)
}
```
Now the program prints all of the data, but a careful look shows that the rows come out in a scrambled order, which does not meet the business requirement: the export should keep the order of the original list. How can we improve this further?
3. Data sorting
As noted above, the balance data is unordered after concurrent processing. Since we know the number of rows up front, we can initialize the slice with both len and cap equal to len(list), and replace the `append` with an indexed write, `data[k] = ...`. Each goroutine then writes only to its own slot, and the output preserves the order of `list`:
```go
func main() {
	list := getList()                  // Get the card list data
	var data = make([]Card, len(list)) // len == cap == len(list)
	var wg sync.WaitGroup              // declare a wait group
	for k, val := range list {
		wg.Add(1) // increment the wait group before each task starts
		go func(k int, card Card) {
			defer wg.Done() // decrement the wait group when this goroutine finishes
			// Query the balance (sleeps 100 microseconds) and write by index:
			// each goroutine owns its own slot, so no lock is needed
			var balance = getBalance()
			data[k] = Card{
				Name:    card.Name,
				Balance: balance,
			}
		}(k, val)
	}
	wg.Wait() // wait for all tasks to complete
	fmt.Printf("Data: %+v\n", data)
}
```
Run the code above and the rows come out in the desired order. But the next export may carry far more rows, and spawning one goroutine per row would consume too many resources and bring a cascade of problems. How do we cap the number of goroutines?
4. Limit the number of coroutines
As everyone knows, too many goroutines consume too many resources and can starve other work. Here we use a buffered channel as a counting semaphore to cap how many goroutines run at once:
```go
// pool limits the number of concurrently running goroutines
type pool struct {
	queue chan int
	wg    *sync.WaitGroup
}

func main() {
	list := getList() // Get the card list data
	var data = make([]Card, len(list))
	// The channel's buffer caps concurrency at 500 goroutines
	var gl = &pool{queue: make(chan int, 500), wg: &sync.WaitGroup{}}
	for k, val := range list {
		gl.queue <- 1 // acquire a slot before each task starts (blocks while 500 are running)
		gl.wg.Add(1)  // increment the wait group
		go func(k int, card Card) {
			defer func() {
				<-gl.queue   // release the slot when finished
				gl.wg.Done() // decrement the wait group
			}()
			// Query the balance (sleeps 100 microseconds) and write by index
			var balance = getBalance()
			data[k] = Card{
				Name:    card.Name,
				Balance: balance,
			}
		}(k, val)
	}
	gl.wg.Wait() // wait for all tasks to complete
	fmt.Printf("Data: %+v\n", data)
}
```
With a buffered channel you can set the maximum number of concurrent goroutines yourself. This looks fine now, but if a goroutine panics while fetching its data, the whole program crashes.
5. Handling panics in goroutines
A panic raised inside a goroutine must be caught inside that same goroutine, using a deferred call to recover():
```go
func main() {
	list := getList() // Get the card list data
	var data = make([]Card, len(list))
	var gl = &pool{queue: make(chan int, 500), wg: &sync.WaitGroup{}} // cap concurrency at 500
	for k, val := range list {
		gl.queue <- 1 // acquire a slot before each task starts
		gl.wg.Add(1)  // increment the wait group
		go func(k int, card Card) {
			// Recover from a panic inside this goroutine so the program does not crash
			defer func() { recover() }()
			defer func() {
				<-gl.queue   // release the slot when finished
				gl.wg.Done() // decrement the wait group
			}()
			// Query the balance and write by index
			var balance = getBalance()
			data[k] = Card{
				Name:    card.Name,
				Balance: balance,
			}
		}(k, val)
	}
	gl.wg.Wait() // wait for all tasks to complete
	fmt.Printf("Data: %+v\n", data)
}

// getBalance now panics to demonstrate recovery
func getBalance() float64 {
	panic("Get balance panic")
	// unreachable while the panic above is in place:
	time.Sleep(time.Microsecond * 100)
	return float64(rand.Int63n(1000))
}
```
With `defer` and `recover()` inside each goroutine, a panic thrown there is caught, and the program keeps running instead of crashing.
Summary
Goroutines trade resources for speed: they process data concurrently and raise throughput, but too many of them will crowd out other services on the same machine. When using goroutines heavily, remember to cap their number and to handle panics inside each goroutine, otherwise a single panic will crash the whole program.
That concludes this look at the pitfalls of processing data concurrently in Go. For more on Go goroutines and data processing, see the related articles on this site. I hope you found it useful!