SoFunction
Updated on 2025-04-11

Implementation of map scaling in Golang

Basic Analysis

In Go's runtime source (src/runtime/map.go), scaling is handled by functions prefixed with grow.

Growth is triggered while inserting elements, in the mapassign method:

func mapassign(t *maptype, h *hmap, key unsafe.Pointer) unsafe.Pointer {
  ...
 if !h.growing() && (overLoadFactor(h.count+1, h.B) || tooManyOverflowBuckets(h.noverflow, h.B)) {
  hashGrow(t, h)
  goto again
 }
  ...
}
func (h *hmap) growing() bool {
 return h.oldbuckets != nil
}
func overLoadFactor(count int, B uint8) bool {
 return count > bucketCnt && uintptr(count) > loadFactorNum*(bucketShift(B)/loadFactorDen)
}
func tooManyOverflowBuckets(noverflow uint16, B uint8) bool {
 if B > 15 {
  B = 15
 }
 return noverflow >= uint16(1)<<(B&15)
} 

The core here is the logic that decides whether to grow:

Not currently growing: the condition is that h.oldbuckets is nil, i.e. no grow is already in progress.

Should the map grow: the condition is count > 6.5 * 2^B, where count is the number of elements in the map and 2^B is the size of the hash bucket array only, not including overflow buckets.

Should the map "shrink" (same-size grow): the condition is that the number of overflow buckets is >= 2^B, with B capped at 15, so at most 32768 (1<<15).
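The two trigger conditions above can be sketched in user-land Go. This is a simplified model of the runtime's checks, not the runtime itself; the constants mirror the current runtime, where bucketCnt is 8 and the 6.5 load factor is expressed as 13/2:

```go
package main

import "fmt"

// Simplified model of the runtime's growth triggers (not the real runtime API).
const (
	bucketCnt     = 8  // entries per bucket
	loadFactorNum = 13 // load factor 6.5 expressed as 13/2
	loadFactorDen = 2
)

// overLoadFactor reports whether count elements spread over 2^B buckets
// exceed the 6.5 average-load threshold.
func overLoadFactor(count int, B uint8) bool {
	return count > bucketCnt && uintptr(count) > loadFactorNum*((uintptr(1)<<B)/loadFactorDen)
}

// tooManyOverflowBuckets reports whether overflow buckets are roughly as
// numerous as the buckets themselves (B capped at 15).
func tooManyOverflowBuckets(noverflow uint16, B uint8) bool {
	if B > 15 {
		B = 15
	}
	return noverflow >= uint16(1)<<(B&15)
}

func main() {
	// With B=5 there are 32 buckets; the threshold is 13*(32/2) = 208 elements.
	fmt.Println(overLoadFactor(208, 5)) // false: 208 is not > 208
	fmt.Println(overLoadFactor(209, 5)) // true: crosses the 6.5 load factor
	// 32 overflow buckets at B=5 reaches 2^B, triggering a same-size grow.
	fmt.Println(tooManyOverflowBuckets(32, 5))
}
```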

Note that both growing and "shrinking" are handled by the same hashGrow method:

func hashGrow(t *maptype, h *hmap) {
 bigger := uint8(1)
 if !overLoadFactor(h.count+1, h.B) {
  bigger = 0
  h.flags |= sameSizeGrow
 }
  ...
}

For a real grow, bigger is 1, i.e. B becomes B+1, doubling the number of hash buckets. If the load-factor condition is not met, it is a same-size grow: the capacity of the hash table stays unchanged.

So the main difference between growing and "shrinking" a map is whether the capacity changes. Since a same-size grow does not change the capacity at all, the memory footprint does not decrease either.

Hidden dangers

This design carries an operational risk: memory is not released when elements are deleted, so the total allocated memory keeps growing. If you carelessly use a map as a long-lived key/value store without managing it, it is easy to blow up memory.
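This retention is easy to observe. The sketch below fills a map, deletes every key, and compares live heap sizes; the exact numbers will vary by machine and Go version, but the heap held after deletion stays close to the peak because the buckets are still referenced by the map:

```go
package main

import (
	"fmt"
	"runtime"
)

// heapAlloc forces a GC and returns the live heap size in bytes.
func heapAlloc() uint64 {
	runtime.GC()
	var ms runtime.MemStats
	runtime.ReadMemStats(&ms)
	return ms.HeapAlloc
}

func main() {
	base := heapAlloc()

	m := make(map[int][128]byte)
	for i := 0; i < 100_000; i++ {
		m[i] = [128]byte{}
	}
	grown := heapAlloc()

	for i := 0; i < 100_000; i++ {
		delete(m, i)
	}
	deleted := heapAlloc()

	// The bucket array is still referenced by m, so the heap stays large
	// even though the map is logically empty.
	fmt.Printf("len(m)=%d\n", len(m))
	fmt.Printf("after insert: +%d KiB; after deleting everything: +%d KiB still held\n",
		(grown-base)/1024, (deleted-base)/1024)
	runtime.KeepAlive(m)
}
```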

In other words, Go maps currently implement only a "pseudo-shrink" that handles the case of too many overflow buckets. Even when this shrink is triggered, the memory occupied by the hash bucket array stays the same (a same-size grow).

To achieve a "true shrink", Go contributor @josharian says the only available workaround is to create a new map and copy the elements over from the old one.

Examples are as follows:

old := make(map[int]int, 9999999)
shrunk := make(map[int]int, len(old))
for k, v := range old {
    shrunk[k] = v
}
old = shrunk
...
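The workaround above can be wrapped in a small generic helper. This is my own sketch, requiring Go 1.18+ generics; the name shrinkMap is hypothetical, not a standard-library API:

```go
package main

import "fmt"

// shrinkMap copies m into a freshly allocated map sized to its current
// length, so the old oversized bucket array can be garbage-collected once
// all other references to it are gone. Hypothetical helper, not stdlib.
func shrinkMap[K comparable, V any](m map[K]V) map[K]V {
	shrunk := make(map[K]V, len(m))
	for k, v := range m {
		shrunk[k] = v
	}
	return shrunk
}

func main() {
	big := make(map[int]int, 1_000_000)
	big[1] = 1
	big = shrinkMap(big) // the old storage becomes unreachable
	fmt.Println(len(big), big[1])
}
```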

Why shrinking is not supported

The following analysis is based mainly on these two issues and proposals:

《runtime: shrink map as elements are deleted[1]》
《proposal: runtime: add way to clear and reuse a map's working storage[2]》

Shrinking maps has proven hard to deal with. The earliest issue dates back to 2016; several proposals have been put forward since, but all were rejected for various reasons.

Simply put, no good way to implement it has been found. There is a clear implementation cost, and there is no convenient way to tell the Go runtime either of the following:

  • Keep the storage around, I want to reuse the map right away.
  • Release the storage quickly, the map will be much smaller from now on.

The crux of the problem is that any in-progress growth must complete before the next one starts. "Growth" here covers the whole complex process: from small to large, from one size to the same size, and potentially from large to small.

There are many such cases to handle, which is why the discussion has dragged on for so long.

This concludes this article on the implementation of map scaling in Golang. For more on Golang map scaling, please search my previous articles or continue browsing the related articles below. I hope you will keep supporting me!