SoFunction
Updated on 2025-04-08

Detailed explanation of concurrency, parallelism, and the global lock in Ruby

Preface

This article introduces concurrency, parallelism, and the global lock (GIL) in Ruby, shared for your reference and study. Without further ado, let's look at the details together.

Concurrency and parallelism

During development we constantly run into two concepts: concurrency and parallelism. Almost every article on the topic makes the same point: concurrency does not equal parallelism. How should we understand that sentence?

  • Concurrency: a chef receives orders from 2 guests at the same time, and both need to be handled.
  • Sequential execution: if there is only one chef, he can only finish one order after the other.
  • Parallel execution: if there are two chefs, the two orders can be cooked at the same time, one chef per order.

Mapping this example onto web development, you can read it like this:

  • Concurrency: the server receives requests from two clients at the same time.
  • Sequential execution: the server has only one process (thread) to handle requests, so the second request has to wait until the first one finishes.
  • Parallel execution: the server has two processes (threads) to handle requests, so both can be responded to with no ordering problem.

Based on the example above, how do we simulate this kind of concurrent behavior in Ruby? See the following code:

1. Sequential execution

Simulate operations when there is only one thread.

require 'benchmark'

def f1
 puts "sleep 3 seconds in f1\n"
 sleep 3
end

def f2
 puts "sleep 2 seconds in f2\n"
 sleep 2 
end

Benchmark.bm do |b|
  b.report do
    f1
    f2
  end
end
## 
## user  system  total  real
## sleep 3 seconds in f1
## sleep 2 seconds in f2
## 0.000000 0.000000 0.000000 ( 5.009620)

The code above is simple: sleep simulates a time-consuming operation. Executed sequentially, the total time is roughly the sum of the two sleeps (about 5 seconds).

2. Parallel execution

Simulate the operation with multiple threads.

# Continuing from the code above
Benchmark.bm do |b|
  b.report do
    threads = []
    threads << Thread.new { f1 }
    threads << Thread.new { f2 }
    threads.each(&:join)
  end
end
##
## user  system  total  real
## sleep 3 seconds in f1
## sleep 2 seconds in f2
## 0.000000 0.000000 0.000000 ( 3.005115)

We can see that the multithreaded version takes about as long as f1 alone, which is what we expected: multithreading gives us this kind of parallelism.

Ruby's multithreading copes well with IO blocking: while one thread is in an IO-blocked state, other threads can keep running, which greatly shortens the overall processing time.

Threads in Ruby

The code examples above use Ruby's Thread class, which makes it easy to write multithreaded programs. Ruby threads are a lightweight and efficient way to introduce this kind of concurrency into your code.
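As a quick illustration of the Thread API used throughout this article (Thread.new to spawn a thread, #join to wait for it, #value to collect the block's result), here is a minimal sketch:

```ruby
# Thread.new starts the block in a new thread and returns immediately.
t = Thread.new { 21 * 2 }

# #join blocks until the thread finishes; #value joins and then
# returns the block's return value.
t.join
result = t.value  # => 42
```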

Next, let's describe a concurrency scenario.

def thread_test
  time = Time.now
  # spawn several threads (the count is elided in the original; 10 assumed)
  threads = 10.times.map do
    Thread.new do
      sleep 3
    end
  end
  puts "You can see me without waiting 3 seconds: #{Time.now - time}"
  threads.each(&:join)
  puts "Now you have to wait 3 seconds to see me: #{Time.now - time}"
end
thread_test
## You can see me without waiting 3 seconds: 8.6e-05
## Now you have to wait 3 seconds to see me: 3.003699

The creation of a Thread is non-blocking, so the first line of text is printed immediately. This simulates concurrent behavior: each thread sleeps for 3 seconds, and while blocked, the threads run in parallel, so the total wait is about 3 seconds rather than 30.
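The non-blocking nature of Thread.new can be observed directly with Thread#alive?, which reports whether a thread is still running. A small sketch (the 0.2-second sleep is just a stand-in for work):

```ruby
t = Thread.new { sleep 0.2 }
before = t.alive?   # true: Thread.new returned while the thread still runs
t.join
after = t.alive?    # false: the thread has finished
```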

So, at this point, have we achieved the ability to run in parallel?

Unfortunately, what I showed above only demonstrates that we can simulate parallelism when threads are blocking. Let's look at another example:

require 'benchmark'

def multiple_threads
  count = 0
  # 4 threads, each incrementing a shared counter
  # (iteration counts are elided in the original; 1,000,000 per thread assumed)
  threads = 4.times.map do
    Thread.new do
      1_000_000.times { count += 1 }
    end
  end
  threads.each(&:join)
end

def single_threads
  count = 0
  4_000_000.times { count += 1 }
end

Benchmark.bm do |b|
  b.report { multiple_threads }
  b.report { single_threads }
end
##  user  system  total  real
## 0.600000 0.010000 0.610000 ( 0.607230)
## 0.610000 0.000000 0.610000 ( 0.623237)

From this we can see that even though we split the same task across 4 threads, the time did not decrease at all. Why is this?

Because of the global interpreter lock (GIL)!

Global lock

The Ruby most of us use (MRI, the C implementation) has a mechanism called the GIL (Global Interpreter Lock).

Even if we want to use multiple threads to run code in parallel, because of this global lock only one thread can execute Ruby code at a time. Which thread gets to run depends on the scheduling of the underlying operating system.

Even with multiple CPUs, they merely give the scheduler a few more places to run each thread; they do not let two Ruby threads execute at once.

In our code above, only one thread at a time can execute count += 1.

Ruby's multithreading therefore cannot make use of multiple CPU cores. The overall time spent does not shrink with multithreading; on the contrary, thread-switching overhead may make it slightly longer.
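Because of this, CPU-bound work in MRI is usually parallelized with processes rather than threads: each forked process gets its own interpreter and its own GIL. A minimal sketch (POSIX-only; the two-way split and the pipe protocol here are my own illustration, not from the article):

```ruby
# Each child computes a partial sum and writes it back through a pipe.
reader, writer = IO.pipe

pids = 2.times.map do |i|
  Process.fork do
    reader.close
    # CPU-bound work, done in a separate process with its own GIL
    range = i.zero? ? (1..500_000) : (500_001..1_000_000)
    writer.puts range.sum
    writer.close
  end
end

writer.close
pids.each { |pid| Process.wait(pid) }
total = reader.read.split.map(&:to_i).sum  # => 500000500000
```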

But with sleep earlier, we clearly did achieve parallelism!

This is a clever part of Ruby's design: blocking operations can run in parallel, because the GIL is released around them. This includes reading and writing files and making network requests.

require 'benchmark'
require 'net/http'

# Simulate network requests
# (the URL and request count are elided in the original;
# a placeholder URL and 10 requests are assumed here)
def multiple_threads
  uri = URI("http://example.com/")
  threads = 10.times.map do
    Thread.new do
      Net::HTTP.get(uri)
    end
  end
  threads.each(&:join)
end

def single_threads
  uri = URI("http://example.com/")
  10.times { Net::HTTP.get(uri) }
end

Benchmark.bm do |b|
  b.report { multiple_threads }
  b.report { single_threads }
end

##  user  system  total  real
## 0.240000 0.110000 0.350000 ( 3.659640)
## 0.270000 0.120000 0.390000 ( 14.167703)

The program blocks during the network requests, and under Ruby these blocked periods run in parallel, so the total time is greatly shortened.
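When collecting results from many IO-bound threads like this, Ruby's built-in thread-safe Queue is a convenient meeting point, since several threads can push into it without extra locking. A small sketch (the sleep stands in for the network request above):

```ruby
results = Queue.new

threads = 3.times.map do |i|
  Thread.new do
    sleep 0.1         # stand-in for a blocking network request
    results << i * 2  # Queue#<< is safe to call from several threads
  end
end
threads.each(&:join)

collected = []
collected << results.pop until results.empty?
collected.sort  # => [0, 2, 4]
```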

GIL's Thoughts

So, since this GIL lock exists, does that mean our code is automatically thread-safe?

Unfortunately, no. The interpreter can switch to another thread during execution, and if threads share state such as instance or class variables, you can still run into trouble.

So, when exactly will the GIL switch to another thread while Ruby code is executing?

There are several well-defined switch points:

  • On method calls and method returns, the interpreter checks whether the current thread has held the GIL past its time slice and whether another thread should be scheduled.
  • All IO-related operations release the GIL so other threads can work.
  • C extensions can manually release the GIL in their code.
  • A harder one to understand: when the Ruby stack enters the C stack, GIL checks can also be triggered.

An example

@a = 1
r = []
threads = 10.times.map do |e|
  Thread.new {
    @c = 1
    @c += @a
    r << [e, @c]
  }
end
threads.each(&:join)
r
## [[3, 2], [1, 2], [2, 2], [0, 2], [5, 2], [6, 2], [7, 2], [8, 2], [9, 2], [4, 2]]

Although the order of e in r varies, the value of @c stays at 2: each thread runs its whole block without interruption, so no thread switch happens between writing @c and reading it.

If we add an operation to the thread body that can trigger a GIL switch, for example printing to the screen with puts:

@a = 1
r = []
threads = 10.times.map do |e|
  Thread.new {
    @c = 1
    puts @c
    @c += @a
    r << [e, @c]
  }
end
threads.each(&:join)
r
## [[2, 2], [0, 2], [4, 3], [5, 4], [7, 5], [9, 6], [1, 7], [3, 8], [6, 9], [8, 10]]

The puts is an IO operation, so it gives the GIL a chance to switch threads between @c = 1 and @c += @a, and the data becomes inconsistent.
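The standard fix is to make the read-modify-write atomic with a Mutex, so that a GIL switch in the middle no longer matters. A sketch of the fix, reusing the same variables as the example above:

```ruby
@a = 1
r = []
lock = Mutex.new

threads = 10.times.map do |e|
  Thread.new do
    # synchronize makes the whole read-modify-write sequence atomic,
    # even though puts can still trigger a GIL switch point
    lock.synchronize do
      @c = 1
      puts @c
      @c += @a
      r << [e, @c]
    end
  end
end
threads.each(&:join)
r  # every entry's second element is 2 again, even with puts inside
```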

Summary

Most web applications are IO-intensive. Using Ruby's multi-process plus multi-threading model can greatly improve system throughput, because while one Ruby thread is in an IO-blocked state the others can keep running, reducing the overall impact of IO blocking. However, because of Ruby's GIL (Global Interpreter Lock), MRI Ruby cannot truly use multiple threads for parallel computation.

PS: JRuby is said to remove the GIL and be truly multithreaded; it can not only cope with IO blocking but also make full use of multi-core CPUs to speed up computation. I plan to look into it further.

Conclusion

That's all for this article. I hope its content is of some reference value for your study or work. If you have any questions, feel free to leave a message. Thank you for your support.