
Detailed explanation of Golang memory allocation mechanism

The basic principles of memory allocation

In computer science, memory allocation refers to the process of reserving storage space for a program's variables and data structures. In Golang, memory allocation is managed by the Go runtime system (runtime) and falls into two main categories:

  • Heap memory allocation: The heap is a dynamically allocated memory region used to store objects and data structures created while the program runs. Objects created with new, make, or referenced through pointers may end up on the heap, depending on the compiler's escape analysis.
  • Stack memory allocation: The stack stores per-function data such as local variables and return addresses. Values of basic types (integers, floating-point numbers, booleans, etc.) and small, short-lived objects are usually allocated on the stack (see the sketch after this list).
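
A minimal sketch of the two cases. Whether a value lands on the stack or the heap is ultimately decided by the compiler's escape analysis, so the comments indicate the typical outcome rather than a guarantee:

```go
package main

import "fmt"

// sum's local variable does not outlive the call, so it typically stays on the stack.
func sum(a, b int) int {
	total := a + b // basic type, usually stack-allocated
	return total
}

// newBuffer returns a slice whose backing array outlives the call,
// so the allocation typically escapes to the heap.
func newBuffer(n int) []byte {
	buf := make([]byte, n)
	return buf
}

func main() {
	fmt.Println(sum(1, 2))
	fmt.Println(len(newBuffer(1024)))
}
```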

Go's memory allocator is modeled on "tcmalloc" (thread-caching malloc), which was originally developed at Google. tcmalloc's design goal is to reduce contention for global locks and improve the performance of multi-threaded programs. Go's allocator builds on this idea and has several key components:

  • M: Represents an operating system thread (machine).
  • P: Represents a logical processor, which manages a set of local caches.
  • G: Represents a goroutine, the smallest unit of Go program execution.

Each P has its own memory cache (mcache) for fast allocation of small objects. When the mcache runs out of free space, the P obtains more memory from the central cache (mcentral). If mcentral is also insufficient, the allocator requests more memory from the operating system.
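
The allocator's behavior can be observed from a running program through runtime.MemStats. A small sketch (the fields shown are from the standard runtime package):

```go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	// Allocate some small objects so the statistics are non-trivial.
	data := make([][]byte, 0, 1000)
	for i := 0; i < 1000; i++ {
		data = append(data, make([]byte, 128))
	}

	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	fmt.Printf("HeapAlloc: %d bytes\n", m.HeapAlloc) // bytes of live heap objects
	fmt.Printf("HeapSys:   %d bytes\n", m.HeapSys)   // heap memory obtained from the OS
	fmt.Printf("Mallocs:   %d\n", m.Mallocs)         // cumulative count of heap allocations
	fmt.Printf("NumGC:     %d\n", m.NumGC)           // completed GC cycles
	_ = data
}
```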

Golang Memory Allocation Mechanism

  • Small object allocation: small objects (generally no larger than 32KB) are allocated through the P's mcache. The mcache holds a series of fixed-size memory blocks called "spans"; each span serves objects of a single size. When a goroutine needs to allocate a small object, the allocator finds the span of the matching size and hands out a slot from it.
  • Large object allocation: large objects (greater than 32KB) are allocated directly from the heap. Large objects are allocated and freed less frequently than small ones, and handling them directly on the heap reduces fragmentation and management complexity (a small benchmark sketch follows this list).
  • Memory allocation optimization: to reduce the cost of memory allocation, Go's allocator applies several optimizations:
    • Size class allocation: to reduce memory fragmentation and improve reuse, Go groups objects into a set of size classes; objects of each size class are allocated from the corresponding span.
    • Object alignment: Go ensures that objects are properly aligned in memory, which improves CPU cache efficiency.
    • Batch allocation: when the spans in an mcache run out, the allocator fetches several spans from mcentral at once rather than one at a time, which reduces the number of interactions with mcentral.
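
A hedged micro-benchmark sketch contrasting small allocations (served from the mcache's size classes) with large allocations (greater than 32KB, served directly from the heap). The exact numbers depend on the machine and Go version; save it as a _test.go file:

```go
package alloc

import "testing"

var sink interface{}

// BenchmarkSmallAlloc allocates 64-byte objects, which fit a small size class
// and are served from the per-P mcache.
func BenchmarkSmallAlloc(b *testing.B) {
	for i := 0; i < b.N; i++ {
		sink = make([]byte, 64)
	}
}

// BenchmarkLargeAlloc allocates 64KB objects, which exceed the 32KB small-object
// threshold and are allocated directly from the heap.
func BenchmarkLargeAlloc(b *testing.B) {
	for i := 0; i < b.N; i++ {
		sink = make([]byte, 64*1024)
	}
}
```

Running `go test -bench=. -benchmem` reports the time and bytes allocated per operation for each case.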

Garbage Collection (GC)

Go's garbage collector is a concurrent mark-and-sweep collector. A collection cycle is divided into several phases:

  • Mark setup (STW): the garbage collector briefly stops all goroutines (stop the world), quickly scans stacks and global variables, and marks the root objects as reachable.
  • Concurrent mark: goroutines resume execution while the garbage collector completes the marking work concurrently in the background.
  • Sweep phase: unmarked (unreachable) objects are reclaimed; sweeping is also usually performed concurrently.

Go's garbage collector is designed for low latency and minimizes interference with program execution.
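
A small sketch showing two standard ways to interact with the collector: adjusting the GC target with debug.SetGCPercent (the equivalent of the GOGC environment variable) and forcing a cycle with runtime.GC. The value 50 is only an example:

```go
package main

import (
	"fmt"
	"runtime"
	"runtime/debug"
)

var sink []byte

func main() {
	// Make the collector run when the heap grows 50% over the live set
	// (the default is 100, i.e. GOGC=100).
	old := debug.SetGCPercent(50)
	fmt.Println("previous GOGC:", old)

	// Allocate enough garbage to give the collector something to do.
	for i := 0; i < 100000; i++ {
		sink = make([]byte, 256)
	}

	// Force a collection and report how many cycles have completed.
	runtime.GC()
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	fmt.Println("completed GC cycles:", m.NumGC)
}
```

Running the program with the environment variable GODEBUG=gctrace=1 set prints a summary line for each collection, including pause times.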

Memory escape

In Go, the compiler tries to allocate memory on the stack whenever possible, because stack allocation and reclamation are very fast. However, not every allocation can live on the stack. When the compiler cannot prove that an object's lifetime is confined to the scope in which it is defined, the object is allocated on the heap instead; this is called "memory escape".
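
A minimal sketch of a typical escape: returning a pointer to a local variable forces that variable onto the heap. The compiler's escape-analysis decisions can be inspected with `go build -gcflags=-m`:

```go
package main

import "fmt"

// stackOnly's local variable does not outlive the call, so it can stay on the stack.
func stackOnly() int {
	x := 42
	return x
}

// escapes returns the address of a local variable; the variable must outlive
// the call, so the compiler moves it to the heap.
func escapes() *int {
	x := 42
	return &x
}

func main() {
	fmt.Println(stackOnly(), *escapes())
}
```

Building with `go build -gcflags=-m` prints messages such as "moved to heap: x" for the escaping case.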

Factors influencing memory allocation

Memory allocation performance can be affected by several factors, including the following:

  • Allocation frequency: frequent allocation and freeing increase the garbage collector's workload, which hurts performance.
  • Object size: allocating large objects is usually slower than allocating small ones, because large objects do not go through the mcache.
  • Object lifetime: long-lived objects can keep memory usage high because they are not reclaimed for a long time.

Best practices for memory allocation

To optimize memory allocation, consider the following approaches:

  • Reuse objects: reduce the number of allocations by reusing existing objects.
  • Pool resources: use sync.Pool to reuse temporary objects (see the sketch after this list).
  • Avoid memory escape: reduce unnecessary escapes by limiting pointer usage and closure captures.
  • Choose appropriate data structures: pick the right data structure to reduce memory usage and fragmentation.
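
A hedged sketch of pooling with sync.Pool from the standard library; the 4KB buffer size is only an example value:

```go
package main

import (
	"fmt"
	"sync"
)

// bufPool hands out reusable 4KB byte slices instead of allocating a new one
// for every request.
var bufPool = sync.Pool{
	New: func() interface{} {
		return make([]byte, 4096)
	},
}

func handle(data string) int {
	buf := bufPool.Get().([]byte) // reuse a buffer if one is available
	defer bufPool.Put(buf)        // return it to the pool when done

	n := copy(buf, data)
	return n
}

func main() {
	fmt.Println(handle("hello, pool"))
}
```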

Summary

Go's memory allocator is designed for concurrency and multithreading, and it provides efficient memory allocation through a series of optimizations. Allocation performance depends not only on the allocator itself but also on how the program is designed and written. In day-to-day development, tools such as pprof can be used to analyze and optimize a program's memory usage. Through practice and analysis, we can gain a deeper understanding of Go's memory management mechanism.
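
As an illustration of memory profiling, here is a minimal sketch using the standard net/http/pprof package; the address localhost:6060 is a conventional example value:

```go
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof/ handlers on the default mux
)

func main() {
	// Expose profiling endpoints; the heap profile is served at
	// http://localhost:6060/debug/pprof/heap
	log.Fatal(http.ListenAndServe("localhost:6060", nil))
}
```

The heap profile can then be examined with `go tool pprof http://localhost:6060/debug/pprof/heap`.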

This concludes the detailed explanation of Golang's memory allocation mechanism. For more on the topic, please see my other related articles!