SoFunction
Updated on 2025-03-03

Golang: limiting the number of concurrent operations at the same time

Go's concurrency support is very powerful, and the cost of creating a goroutine is extremely small. One important reason is that Go uses growable stacks (segmented stacks in early versions), so each goroutine starts with only a few kilobytes. Goroutines are also scheduled at the language level, which avoids most of the switching overhead between kernel mode and user mode that OS threads incur, so creation stays cheap.

Because of this, we can spin up a huge number of goroutines in an instant, and it is quite easy to do.

At first glance this seems to contradict the title. After praising goroutines so much, am I not encouraging you to create more of them? No: goroutines really are cheap, but if their number is not limited, there is a high chance of running into other, unpredictable errors.

For example, in the web world, each connection on Linux/Unix is equivalent to opening a file and occupies a file descriptor. The system sets an upper limit on file descriptors, which we can view with ulimit -n. If we blindly fire off a huge number of connection requests at once, they will hit that limit and instantly report an error:

2018/06/30 10:09:54 dial tcp :8080: socket: too many open files

The error message above came from a tool I wrote that requests a server in a loop:

package main

import (
  "fmt"
  "log"
  "net"
  "strconv"
  "sync"
)

const (
  MAX_CONCURRENCY = 10000 // total number of requests to fire
)

var waitGroup sync.WaitGroup

func main() {
  concurrency()
  waitGroup.Wait()
}

// request opens a network connection to the server
func request(currentCount int) {
  defer waitGroup.Done()
  fmt.Println("request" + strconv.Itoa(currentCount) + "\r")
  conn, err := net.Dial("tcp", ":8080")
  if err != nil {
    log.Println(err)
    return
  }
  defer conn.Close()
}

// concurrency fires all requests concurrently
func concurrency() {
  for i := 0; i < MAX_CONCURRENCY; i++ {
    waitGroup.Add(1)
    go request(i)
  }
}
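Incidentally, the file-descriptor ceiling that ulimit -n reports can also be read from inside a Go program. Here is a minimal sketch using syscall.Getrlimit; the helper name openFileLimit is my own, and RLIMIT_NOFILE is POSIX-specific, so this works on Linux/macOS but not Windows:

```go
package main

import (
	"fmt"
	"syscall"
)

// openFileLimit returns the soft and hard limits on open file descriptors,
// the same value that `ulimit -n` reports in the shell.
func openFileLimit() (soft, hard uint64, err error) {
	var rl syscall.Rlimit
	if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &rl); err != nil {
		return 0, 0, err
	}
	return rl.Cur, rl.Max, nil
}

func main() {
	soft, hard, err := openFileLimit()
	if err != nil {
		fmt.Println("getrlimit:", err)
		return
	}
	fmt.Println("soft limit:", soft, "hard limit:", hard)
}
```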

Building a server with Go is very simple; here is the server code:

package main

import (
  "fmt"
  "io"
  "net"
  "os"
)

func checkErr(err error) {
  if err != nil {
    fmt.Fprintln(os.Stderr, err)
  }
}

func main() {
  listener, err := net.Listen("tcp", ":8080")
  checkErr(err)
  for {
    conn, err := listener.Accept()
    checkErr(err)
    go func(conn net.Conn) {
      defer conn.Close()
      _, err := io.WriteString(conn, "welcome!")
      checkErr(err)
    }(conn)
  }
}

Now back to the topic: we can see that rushing ahead without limits has real downsides. To solve this, we can cap the number of concurrent operations at the same time using a channel, which works much like a semaphore.

Create a buffered channel, where CHANNEL_CACHE is the maximum concurrency allowed at the same time.

A brief note on why the channel's element type is an empty struct: in this scenario (limiting concurrency) the data sent through the channel does not matter; we only need a notification effect (just like signalling a friend to get up with a missed call rather than an actual conversation, saving the phone bill). An empty struct occupies no memory at all, so it is the natural choice here.
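We can confirm the zero-size claim directly with unsafe.Sizeof; a minimal sketch (the helper name emptyStructSize is my own):

```go
package main

import (
	"fmt"
	"unsafe"
)

// emptyStructSize reports how many bytes an empty struct occupies.
func emptyStructSize() uintptr {
	return unsafe.Sizeof(struct{}{})
}

func main() {
	fmt.Println(emptyStructSize()) // prints 0

	// A chan struct{} still delivers a notification even though it carries no data.
	done := make(chan struct{}, 1)
	done <- struct{}{}
	<-done
	fmt.Println("notified")
}
```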

const (
  CHANNEL_CACHE = 200
)
var tmpChan = make(chan struct{}, CHANNEL_CACHE)

When establishing a connection to the server, write it like this (isn't it very similar to a semaphore?):

tmpChan <- struct{}{}
conn, err := net.Dial("tcp", ":8080")
<-tmpChan

This way, the concurrency at any moment is limited to CHANNEL_CACHE.

Each goroutine started by the loop sends a value into the channel before requesting the server. If the buffer is full, that means CHANNEL_CACHE goroutines are already connecting to the server, and the send blocks. When one of those goroutines finishes, it reads an empty struct back out of the channel; at that point one of the blocked goroutines succeeds in sending its empty struct and can go on to establish its connection.
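To see that this really caps concurrency, here is a self-contained sketch (the names runTasks, limit, and peak are my own, not from the article) that tracks the peak number of goroutines inside the limited section with sync/atomic:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
	"time"
)

// runTasks starts `total` goroutines but lets at most `limit` of them run the
// "work" section at the same time; it returns the observed peak concurrency.
func runTasks(total, limit int) int64 {
	sem := make(chan struct{}, limit)
	var wg sync.WaitGroup
	var cur, peak int64

	for i := 0; i < total; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			sem <- struct{}{}        // acquire: blocks once `limit` goroutines hold a slot
			defer func() { <-sem }() // release

			n := atomic.AddInt64(&cur, 1)
			// record the highest concurrency seen so far
			for {
				p := atomic.LoadInt64(&peak)
				if n <= p || atomic.CompareAndSwapInt64(&peak, p, n) {
					break
				}
			}
			time.Sleep(10 * time.Millisecond) // simulated work
			atomic.AddInt64(&cur, -1)
		}()
	}
	wg.Wait()
	return atomic.LoadInt64(&peak)
}

func main() {
	fmt.Println("peak concurrency:", runTasks(100, 5))
}
```

The returned peak never exceeds the channel's capacity, which is exactly the guarantee the buffered-channel semaphore provides.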

Here is the complete code:

package main

import (
  "fmt"
  "log"
  "net"
  "strconv"
  "sync"
)

const (
  MAX_CONCURRENCY = 10000 // total number of requests to fire
  CHANNEL_CACHE   = 200   // maximum concurrency at the same time
)

var tmpChan = make(chan struct{}, CHANNEL_CACHE)
var waitGroup sync.WaitGroup

func main() {
  concurrency()
  waitGroup.Wait()
}

// request opens a network connection, limited by tmpChan
func request(currentCount int) {
  defer waitGroup.Done()
  fmt.Println("request" + strconv.Itoa(currentCount) + "\r")
  tmpChan <- struct{}{}
  conn, err := net.Dial("tcp", ":8080")
  <-tmpChan
  if err != nil {
    log.Println(err)
    return
  }
  defer conn.Close()
}

// concurrency fires all requests concurrently
func concurrency() {
  for i := 0; i < MAX_CONCURRENCY; i++ {
    waitGroup.Add(1)
    go request(i)
  }
}

Now you can run your concurrent code happily!

Supplement: Golang limiting N tasks running simultaneously

Without further ado, let's look at the code:

package main

import (
  "fmt"
  "sync"
  "time"
)

func main() {
  var wg sync.WaitGroup

  sem := make(chan struct{}, 2) // allow up to 2 concurrent executions at the same time
  taskNum := 10

  for i := 0; i < taskNum; i++ {
    wg.Add(1)

    go func(id int) {
      defer wg.Done()

      sem <- struct{}{}        // acquire the semaphore
      defer func() { <-sem }() // release the semaphore

      // do something for the task
      time.Sleep(time.Second * 2)
      fmt.Println(id, time.Now())
    }(i)
  }
  wg.Wait()
}

The above is my personal experience. I hope it gives you a useful reference, and I hope you will support me. If there are mistakes or things I have not fully considered, please feel free to point them out.