
Detailed explanation of how to optimize memory and garbage collector management in Go

Stacks and heaps in Go

This post won't delve into the internal workings of the garbage collector, as a large number of articles and the official documentation already cover that topic. However, I will introduce the key concepts needed to follow the topics explored in this article.

In Go, data lives in one of two main memory areas: the stack and the heap.

Generally speaking, data whose size and lifetime the Go compiler can predict is stored on the stack. This includes local function variables, function parameters, return values, and so on.

The stack is managed automatically and follows the last-in-first-out (LIFO) principle. When a function is called, all of its associated data is placed on top of the stack, and when the function completes, that data is removed. The stack is efficient, with minimal memory-management overhead: storing and retrieving data on the stack is fast.

Nevertheless, not all program data can live on the stack. Data that changes dynamically during execution, or data that must be accessible beyond the scope of a function, cannot be placed on the stack because the compiler cannot predict its usage. Such data finds its home in the heap.

Accessing and managing data on the heap is more resource-intensive than on the stack.
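
To make the escape idea concrete, here is a minimal sketch (illustrative code, not from the article's examples): a value returned by value can stay on the stack, while a value whose address outlives the function is moved to the heap by the compiler.

package main

import "fmt"

// stackOnly returns a value by copy; the local variable is not referenced
// after the function returns, so it can live on the stack.
func stackOnly() int {
  x := 42
  return x
}

// escapesToHeap returns a pointer to a local variable. The value must
// outlive the function call, so the compiler moves it to the heap.
func escapesToHeap() *int {
  y := 42
  return &y
}

func main() {
  fmt.Println(stackOnly(), *escapesToHeap())
}

Compiling this with go build -gcflags=-m (described below) should report that y is moved to the heap.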

Stack and heap allocation

As mentioned earlier, the stack holds values with a predictable size and lifetime. Examples include local variables declared inside a function, such as basic data types (numbers, booleans), function parameters, and function return values (as long as no references to them remain after the function returns).

The Go compiler takes various nuances into account when deciding whether to allocate data on the stack or the heap. For example, preallocated slices of up to 64 KB are placed on the stack, while slices larger than 64 KB go to the heap. A similar rule applies to arrays: arrays larger than 10 MB are sent to the heap.

To determine where a particular variable is allocated, you can use escape analysis. To do this, inspect the application by compiling it from the command line with the -gcflags=-m flag:

go build -gcflags=-m 

Let's compile the following application with the -gcflags=-m flag:

package main

func main() {
  // 1310720 ints (8 bytes each on a 64-bit platform) = exactly 10 MB: stays on the stack.
  var arrayBefore10Mb [1310720]int
  arrayBefore10Mb[0] = 1

  // One element more than 10 MB: moved to the heap.
  var arrayAfter10Mb [1310721]int
  arrayAfter10Mb[0] = 1

  // 8192 ints = exactly 64 KB: stays on the stack.
  sliceBefore64 := make([]int, 8192)
  // One element more than 64 KB: escapes to the heap.
  sliceOver64 := make([]int, 8193)
  sliceOver64[0] = sliceBefore64[0]
}

The results show that arrayAfter10Mb is moved to the heap because its size exceeds 10 MB, whereas arrayBefore10Mb stays on the stack. Likewise, sliceBefore64 is not moved to the heap because its size does not exceed 64 KB, while sliceOver64 is stored in the heap.

For a more in-depth look at heap allocation, see the documentation.

Garbage collector: managing the heap

An effective way to deal with the heap is to avoid using it. But what if the data has entered the heap?

Unlike the stack, the heap has no fixed size and grows continuously. The heap is home to dynamically created objects such as structs, slices, maps, and large blocks of memory that cannot fit within the stack's constraints.

The garbage collector is the only tool for reclaiming heap memory and preventing it from filling up completely.

Understand the garbage collector

A garbage collector (commonly known as GC) is a dedicated system designed to identify and free dynamically allocated memory.

Go uses a garbage collection algorithm based on tracing and the mark-and-sweep method. During the mark phase, the garbage collector marks the data actively used by the application as the active (live) heap. Then, during the sweep phase, the GC traverses the unmarked memory and makes it available for reuse.

However, the operation of the garbage collector comes at a cost, consuming two important system resources: CPU time and physical memory.

The memory managed by the garbage collector includes:

  • Active heap memory (memory marked "active" in the previous garbage collection cycle).
  • New heap memory (heap memory not yet analyzed by the garbage collector).
  • Metadata storage, which is usually trivial compared to the first two entities.

The CPU time consumed by the garbage collector depends on how it operates. Some garbage collector implementations (known as "stop-the-world" collectors) completely pause program execution during garbage collection, so CPU time is spent on non-productive work.

In Go, the garbage collector is not a pure "stop-the-world" collector: most of its work, including heap marking, runs concurrently with the application. However, it still imposes some restrictions and briefly pauses the running code several times per cycle.

With these concepts in place, let's go a step further.

Managing the garbage collector

The garbage collector in Go can be controlled through a specific parameter: the GOGC environment variable or its functional equivalent, the SetGCPercent function from the runtime/debug package.

The GOGC parameter defines the percentage of new heap memory, relative to the active heap memory, at which garbage collection is triggered.

By default, GOGC is set to 100, meaning that garbage collection is triggered when the amount of new memory reaches 100% of the active heap memory.
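
As a quick sketch of the two configuration paths (the value 50 here is arbitrary and only for illustration): the same setting can be made either with the environment variable, e.g. GOGC=50 ./app, where app is your compiled binary, or from code via debug.SetGCPercent.

package main

import "runtime/debug"

func main() {
  // Equivalent to starting the program with GOGC=50:
  // trigger a collection when new heap memory reaches 50% of the active heap.
  previous := debug.SetGCPercent(50)
  _ = previous // SetGCPercent returns the previously set value.

  // ... the rest of the application ...
}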

Let's consider a sample program and track how the heap size changes using the go tool. We will use Go version 1.20.1 to run the program.

In this example, the performMemoryIntensiveTask function consumes a large amount of heap-allocated memory. The program starts a worker pool with NumWorkers workers and a task queue whose size equals NumTasks.

package main

import (
 "fmt"
 "os"
 "runtime/debug"
 "runtime/trace"
 "sync"
 "time"
)

const (
 NumWorkers    = 4     // Number of workers.
 NumTasks      = 500   // Number of tasks.
 MemoryIntense = 10000 // Size of memory-intensive task (number of elements).
)

func main() {
 // Write to the trace file (the file name is chosen for this example).
 f, _ := os.Create("trace.out")
 trace.Start(f)
 defer trace.Stop()

 // Set the target percentage for the garbage collector. Default is 100%.
 debug.SetGCPercent(100)

 // Task queue and result queue.
 taskQueue := make(chan int, NumTasks)
 resultQueue := make(chan int, NumTasks)

 // Start workers.
 var wg sync.WaitGroup
 wg.Add(NumWorkers)
 for i := 0; i < NumWorkers; i++ {
  go worker(taskQueue, resultQueue, &wg)
 }

 // Send tasks to the queue.
 for i := 0; i < NumTasks; i++ {
  taskQueue <- i
 }
 close(taskQueue)

 // Close the result queue once all workers are done.
 go func() {
  wg.Wait()
  close(resultQueue)
 }()

 // Process the results.
 for result := range resultQueue {
  fmt.Println("Result:", result)
 }

 fmt.Println("Done!")
}

// Worker function.
func worker(tasks <-chan int, results chan<- int, wg *sync.WaitGroup) {
 defer wg.Done()

 for task := range tasks {
  result := performMemoryIntensiveTask(task)
  results <- result
 }
}

// performMemoryIntensiveTask is a memory-intensive function.
func performMemoryIntensiveTask(task int) int {
 // Create a large slice.
 data := make([]int, MemoryIntense)
 for i := 0; i < MemoryIntense; i++ {
  data[i] = i + task
 }

 // Simulate latency.
 time.Sleep(10 * time.Millisecond)

 // Calculate the result.
 result := 0
 for _, value := range data {
  result += value
 }
 return result
}

To trace the program's execution, the output is written to a trace file:

// Writing to the trace file.
f, _ := os.Create("trace.out")
trace.Start(f)
defer trace.Stop()

By using go tool trace, we can observe fluctuations in the heap size and analyze the garbage collector's behavior in the program.

Please note that the specific details and features of go tool trace may vary between Go versions, so it is recommended to consult the official documentation for version-specific information.
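
As a lighter-weight alternative to the trace, the runtime can also print a one-line summary of every garbage collection cycle to stderr when the program is started with the GODEBUG=gctrace=1 environment variable; the exact output format varies between Go versions, and the binary name app below is only an example.

go build -o app main.go
GODEBUG=gctrace=1 ./app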

GOGC default value

The GOGC parameter can be set with the SetGCPercent function from the runtime/debug package. By default, GOGC is set to 100%.

To run our program (saved as main.go in this example), use the following command:

go run main.go

After the program has executed, a trace file (trace.out in our example) will be generated. To analyze it, execute the following command:

go tool trace trace.out

With a GOGC value of 100, the garbage collector was invoked 16 times, consuming a total of 14 milliseconds in our example.
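
If you prefer a programmatic check instead of reading the trace, the runtime exposes GC counters through runtime.ReadMemStats; a minimal sketch that could be appended to the end of main:

package main

import (
  "fmt"
  "runtime"
)

func main() {
  // ... run the workload here ...

  var m runtime.MemStats
  runtime.ReadMemStats(&m)
  fmt.Println("completed GC cycles:", m.NumGC)
  fmt.Println("total GC pause (ns):", m.PauseTotalNs)
  fmt.Println("current heap alloc (bytes):", m.HeapAlloc)
}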

Increase GC frequency

If we set the value to 10% with debug.SetGCPercent(10) and run the code again, the garbage collector is invoked more frequently. In this case, the garbage collector is triggered when the new heap memory reaches 10% of the active heap size.

In other words, if the active heap size is 10 MB, the garbage collector starts when the new heap memory reaches 1 MB.

When the GOGC value is 10, the garbage collector is called 38 times, and the total garbage collection time is 28 ms.

Reduce GC frequency

Running the same program with the value set to 1000% via debug.SetGCPercent(1000) causes the garbage collector to fire only when the new heap memory reaches 1000% of the active heap size.

In this case, the garbage collector is activated once and executed for 2 milliseconds.

Disable GC

You can also disable the garbage collector entirely by setting GOGC=off or calling debug.SetGCPercent(-1).

With the GC turned off, the application's heap size grows continuously until the program finishes.
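
If the GC is disabled, heap memory can still be reclaimed at points you choose by calling runtime.GC() explicitly; here is a minimal sketch of that pattern (processBatch is a hypothetical placeholder for a memory-intensive step):

package main

import (
  "runtime"
  "runtime/debug"
)

func main() {
  // Disable automatic garbage collection.
  debug.SetGCPercent(-1)

  for i := 0; i < 10; i++ {
    processBatch(i)

    // Reclaim heap memory explicitly between batches.
    runtime.GC()
  }
}

// processBatch is a hypothetical memory-intensive step.
func processBatch(n int) {
  data := make([]byte, 10<<20) // allocate roughly 10 MB per batch
  data[0] = byte(n)
}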

Heap memory usage

In real applications, live heap memory is not allocated as regularly and predictably as it appears in the trace.

The active heap can change dynamically from one garbage collection cycle to the next, and under certain conditions its absolute value can spike.

To simulate this, we run the program in a container with a memory limit, which can result in an out-of-memory (OOM) error.

In this example, the program runs in a container with a memory limit of 10 MB for testing. The Dockerfile description is as follows:

FROM golang:latest as builder

WORKDIR /src
COPY . .

RUN go env -w GO111MODULE=on

RUN go mod vendor
RUN CGO_ENABLED=0 GOOS=linux go build -mod=vendor -a -installsuffix cgo -o app ./cmd/

FROM golang:latest
WORKDIR /root/
COPY --from=builder /src/app .
EXPOSE 8080
CMD ["./app"]

The docker-compose description is:

version: '3'
services:
 my-app:
   build:
     context: .
     dockerfile: Dockerfile
   ports:
     - 8080:8080
   deploy:
     resources:
       limits:
         memory: 10M

Starting the container with GOGC set to 1000% (debug.SetGCPercent(1000)) causes an OOM error:

docker-compose build
docker-compose up

The container crashes with exit code 137, indicating that the process was killed because it ran out of memory.

Avoid OOM errors

Starting with Go version 1.19, Golang introduced a "soft memory limit" via the GOMEMLIMIT option. This feature uses the GOMEMLIMIT environment variable to set an overall memory limit for the Go runtime, for example GOMEMLIMIT=8MiB, where 8 MiB is the memory limit.

This mechanism is designed to address the OOM problem: with GOMEMLIMIT enabled, the garbage collector is invoked regularly to keep the heap size within the limit and avoid memory overload.
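
The limit can also be set from code via debug.SetMemoryLimit, available since Go 1.19; here is a minimal sketch using the 8 MiB figure from the example above:

package main

import "runtime/debug"

func main() {
  // Equivalent to GOMEMLIMIT=8MiB: a soft limit of 8 MiB
  // for all memory managed by the Go runtime.
  debug.SetMemoryLimit(8 << 20)

  // Passing a negative value does not change the limit
  // and simply returns the current one.
  current := debug.SetMemoryLimit(-1)
  _ = current

  // ... the rest of the application ...
}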

Performance trade-offs

GOMEMLIMIT is powerful, but it is also a double-edged sword. It can lead to a situation known as a "death spiral": when the overall memory usage approaches GOMEMLIMIT due to live heap growth or persistent goroutine leaks, the garbage collector is invoked constantly in an attempt to stay under the limit.

Frequent garbage collector calls can increase CPU usage and degrade program performance. Unlike an OOM error, a death spiral is difficult to detect and fix.

GOMEMLIMIT does not provide a 100% guarantee that the memory limit will be strictly enforced; memory usage can exceed the limit. In return, the runtime caps the CPU time the garbage collector may consume, to prevent it from eating up excessive resources.

Where to apply GOMEMLIMIT and GOGC

GOMEMLIMIT has advantages in a variety of situations:

  • When running an application in a container with limited memory, it is good practice to set GOMEMLIMIT and leave 5-10% of the memory free.
  • Real-time management of GOMEMLIMIT can be useful when dealing with resource-intensive code.
  • When an application runs as a one-off script inside a container, disabling the garbage collector while still setting GOMEMLIMIT can improve performance without letting the application exceed the container's resource limits (see the sketch below).
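
For the last point, one possible way to wire this up in the container from the earlier example is through environment variables in the compose file; this is only a sketch, and the 9MiB value is an assumption chosen to leave headroom below the 10M container limit:

services:
  my-app:
    environment:
      - GOGC=off         # disable regular GC cycles
      - GOMEMLIMIT=9MiB  # keep the runtime below the 10M container limit
    deploy:
      resources:
        limits:
          memory: 10M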

Avoid GOMEMLIMIT when:

  • Do not define memory limits when your program is already close to the memory limit of its operating environment.
  • Avoid enforcing memory limits when deploying programs in an execution environment that you do not supervise, especially if the program's memory consumption is directly related to its input data. This is especially important for tools such as command line interfaces or desktop applications.

Clearly, with a deliberate approach we can effectively control program-specific settings such as the garbage collector and GOMEMLIMIT. Nevertheless, it is crucial to thoroughly evaluate how these settings are applied.

The above is a detailed explanation of how to optimize memory and manage the garbage collector in Go. For more information about the Go garbage collector, please check out my other related articles!