SoFunction
Updated on 2025-03-09

Native Memory Tracking Area Example Analysis

Compiler

Compiler is the memory used by the JIT compiler threads when compiling code.

View NMT details:

[0x0000ffff93e3acc0] Thread::allocate(unsigned long, bool, MemoryType)+0x348
[0x0000ffff9377a498] CompileBroker::make_compiler_thread(char const*, CompileQueue*, CompilerCounters*, AbstractCompiler*, Thread*)+0x120
[0x0000ffff9377ce98] CompileBroker::init_compiler_threads(int, int)+0x148
[0x0000ffff9377d400] CompileBroker::compilation_init()+0xc8
                             (malloc=37KB type=Thread #12)

Tracing the call chain:

InitializeJVM -> Threads::create_vm -> CompileBroker::compilation_init -> CompileBroker::init_compiler_threads -> CompileBroker::make_compiler_thread

Following this chain, we find that the number of threads passed to make_compiler_thread is calculated in compilation_init():

# hotspot/src/share/vm/compiler/
void CompileBroker::compilation_init() {
  ......
  // No need to initialize compilation system if we do not use it.
  if (!UseCompiler) {
    return;
  }
#ifndef SHARK
  // Set the interface to the current compiler(s).
  int c1_count = CompilationPolicy::policy()->compiler_count(CompLevel_simple);
  int c2_count = CompilationPolicy::policy()->compiler_count(CompLevel_full_optimization);
  ......
  // Start the CompilerThreads
  init_compiler_threads(c1_count, c2_count);
  ......
}

To trace the calculation of c1_count and c2_count, note that the compilation policy (CompilationPolicy) is first set during JVM initialization (Threads::create_vm -> init_globals -> compilationPolicy_init):

# hotspot/src/share/vm/runtime/
void Arguments::set_tiered_flags() {
  // With tiered, set default policy to AdvancedThresholdPolicy, which is 3.
  if (FLAG_IS_DEFAULT(CompilationPolicyChoice)) {
    FLAG_SET_DEFAULT(CompilationPolicyChoice, 3);
  }
  ......
}
# hotspot/src/share/vm/runtime/
// Determine compilation policy based on command line argument
void compilationPolicy_init() {
  CompilationPolicy::set_in_vm_startup(DelayCompilationDuringStartup);
  switch(CompilationPolicyChoice) {
  ......
  case 3:
#ifdef TIERED
    CompilationPolicy::set_policy(new AdvancedThresholdPolicy());
#else
    Unimplemented();
#endif
    break;
  ......
  }
  CompilationPolicy::policy()->initialize();
}

Since tiered compilation is enabled by default, CompilationPolicyChoice is 3 and the compilation policy is AdvancedThresholdPolicy. Look at the relevant source (compilationPolicy_init -> AdvancedThresholdPolicy::initialize):

# hotspot/src/share/vm/runtime/
void AdvancedThresholdPolicy::initialize() {
  // Turn on ergonomic compiler count selection
  if (FLAG_IS_DEFAULT(CICompilerCountPerCPU) && FLAG_IS_DEFAULT(CICompilerCount)) {
    FLAG_SET_DEFAULT(CICompilerCountPerCPU, true);
  }
  int count = CICompilerCount;
  if (CICompilerCountPerCPU) {
    // Simple log n seems to grow too slowly for tiered, try something faster: log n * log log n
    int log_cpu = log2_int(os::active_processor_count());
    int loglog_cpu = log2_int(MAX2(log_cpu, 1));
    count = MAX2(log_cpu * loglog_cpu, 1) * 3 / 2;
  }
  set_c1_count(MAX2(count / 3, 1));
  set_c2_count(MAX2(count - c1_count(), 1));
  ......
}

We can see that when neither -XX:CICompilerCountPerCPU nor -XX:CICompilerCount is set manually, the JVM turns on CICompilerCountPerCPU and recalculates the number of compiler threads from the CPU count, no longer using the default CICompilerCount value (3). The formula is roughly log n * log log n * 3/2 (log base 2, n = number of CPUs). The machine I used has 64 CPUs, which works out to 18 compiler threads. The total is then split between C1 and C2 at a ratio of 1:2, giving the c1_count and c2_count requested above.
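As a sanity check, here is a minimal Java sketch of that calculation (the method names are my own; the arithmetic mirrors the AdvancedThresholdPolicy::initialize snippet above):

```java
public class CompilerThreadCount {
    // floor(log2(n)), mirroring HotSpot's log2_int
    static int log2int(int n) {
        int r = 0;
        while ((n >>= 1) > 0) r++;
        return r;
    }

    // total compiler thread count: max(log n * log log n, 1) * 3 / 2
    static int totalCount(int cpus) {
        int logCpu = log2int(cpus);
        int loglogCpu = log2int(Math.max(logCpu, 1));
        return Math.max(logCpu * loglogCpu, 1) * 3 / 2;
    }

    static int c1Count(int cpus) { return Math.max(totalCount(cpus) / 3, 1); }
    static int c2Count(int cpus) { return Math.max(totalCount(cpus) - c1Count(cpus), 1); }

    public static void main(String[] args) {
        System.out.println(totalCount(64));                       // 18 on a 64-CPU machine
        System.out.println(c1Count(64) + ":" + c2Count(64));      // 6:12, i.e. a 1:2 split
    }
}
```

Running it for 64 CPUs reproduces the 18 threads (6 for C1, 12 for C2) observed above.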

Use jinfo -flag CICompilerCount to verify the number of compilation threads of the JVM process at this time:

jinfo -flag CICompilerCount <pid>
-XX:CICompilerCount=18

So we can control the number of JVM compiler threads by explicitly setting -XX:CICompilerCount, thereby limiting the memory used by the Compiler area (though this memory is relatively small anyway).

We can also reduce memory usage by turning off tiered compilation with -XX:-TieredCompilation. Of course, whether to turn it off depends on actual business needs, and the memory saved is really tiny.

Compiler threads are threads too, so we could also save memory by setting a smaller -XX:VMThreadStackSize. However, shrinking the stacks of VM threads is a dangerous operation, and setting this parameter for that reason is not recommended.

Internal

Internal contains memory used by the command line parser, JVMTI, PerfData, and memory allocated by Unsafe, etc.

The command line parser parses the JVM's command line arguments during VM initialization and acts on them; for example, it is what parses -XX:NativeMemoryTracking=detail itself.

JVMTI (JVM Tool Interface) is a programming interface used to develop and monitor JVMs. It provides some methods to check the JVM status and control the operation of the JVM. For details, please see the official JVMTI documentation [1].

PerfData is a file the JVM uses to record metric data. If -XX:+UsePerfData is enabled (it is on by default), the JVM maps the file {tmpdir}/hsperfdata_<user>/<pid> via mmap (that is, using os::reserve_memory and os::commit_memory mentioned above); jstat displays the various metrics of a JVM process by reading the data in PerfData.
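To see those backing files for yourself, here is a small sketch (class and method names are my own; note HotSpot may use /tmp directly rather than java.io.tmpdir on some platforms) that lists the hsperfdata directory for the current user:

```java
import java.nio.file.*;

public class PerfDataFile {
    // {tmpdir}/hsperfdata_<user> -- the directory the JVM mmaps PerfData files into
    static Path perfDataDir() {
        return Paths.get(System.getProperty("java.io.tmpdir"),
                "hsperfdata_" + System.getProperty("user.name"));
    }

    public static void main(String[] args) throws Exception {
        Path dir = perfDataDir();
        System.out.println("PerfData dir: " + dir);
        if (Files.isDirectory(dir)) {
            // each file is named after a JVM pid; jstat reads these files
            try (DirectoryStream<Path> s = Files.newDirectoryStream(dir)) {
                for (Path p : s) {
                    System.out.println(p.getFileName() + " -> " + Files.size(p) + " bytes");
                }
            }
        }
    }
}
```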

It should be noted that {tmpdir}/hsperfdata_<user>/<pid> and {tmpdir}/.java_pid<pid> are not the same thing: the latter is used for communication by the Attach mechanism, similar in spirit to a Unix Domain Socket, although real Unix Domain Socket support (JEP 380 [2]) only arrived in JDK 16.

When working with NIO we often use ByteBuffer; ByteBuffer.allocateDirect / DirectByteBuffer allocates native memory via Unsafe.allocateMemory (which ends in malloc). Although the DirectByteBuffer object itself still lives in the heap, the address it holds maps native memory allocated off-heap. NMT records the memory allocated by Unsafe_AllocateMemory under Internal (jstat's use of PerfData also goes through ByteBuffer).

Note that memory allocated by Unsafe_AllocateMemory is categorized by NMT as Internal before JDK11, but as Other from JDK11 on.

For example, the same trace performed in JDK11:

[0x0000ffff8c0b4a60] Unsafe_AllocateMemory0+0x60
[0x0000ffff6b822fbc]
                             (malloc=393218KB type=Other #3)

A brief look at the relevant source code:

# 
    public static ByteBuffer allocateDirect(int capacity) {
        return new DirectByteBuffer(capacity);
    }
# 
    DirectByteBuffer(int cap) {                   // package-private
        ......
        long base = 0;
        try {
            base = unsafe.allocateMemory(size);
        }
        ......
# 
  public native long allocateMemory(long bytes);
# hotspot/src/share/vm/prims/
UNSAFE_ENTRY(jlong, Unsafe_AllocateMemory(JNIEnv *env, jobject unsafe, jlong size))
  UnsafeWrapper("Unsafe_AllocateMemory");
  size_t sz = (size_t)size;
  ......
  sz = round_to(sz, HeapWordSize);
  void* x = os::malloc(sz, mtInternal);
  ......
UNSAFE_END

Generally speaking, the command line parser, JVMTI, etc. do not request much memory. What deserves attention is off-heap memory requested through Unsafe_AllocateMemory (for example, when the business uses Netty), which we can verify with a simple example.

The JVM startup parameters for this example are: -Xmx1G -Xms1G -XX:+UseG1GC -XX:MaxMetaspaceSize=256M -XX:ReservedCodeCacheSize=256M -XX:NativeMemoryTracking=detail (note that -XX:MaxDirectMemorySize=256M is deliberately not set, so direct memory is not limited separately):

import java.nio.ByteBuffer;

public class ByteBufferTest {
    private static final int _1M = 1024 * 1024;
    private static ByteBuffer allocateBuffer_1 = ByteBuffer.allocateDirect(128 * _1M);
    private static ByteBuffer allocateBuffer_2 = ByteBuffer.allocateDirect(256 * _1M);
    public static void main(String[] args) throws Exception {
        System.out.println("MaxDirect memory: " + sun.misc.VM.maxDirectMemory() + " bytes");
        System.out.println("Direct allocation: " + (allocateBuffer_1.capacity() + allocateBuffer_2.capacity()) + " bytes");
        System.out.println("Native memory used: " + sun.misc.SharedSecrets.getJavaNioAccess().getDirectBufferPool().getMemoryUsed() + " bytes");
        Thread.sleep(6000000);
    }
}

View the output:

MaxDirect memory: 1073741824 bytes
Direct allocation: 402653184 bytes
Native memory used: 402653184 bytes

View NMT details:

-                  Internal (reserved=405202KB, committed=405202KB)
                            (malloc=405170KB #3605) 
                            (mmap: reserved=32KB, committed=32KB) 
                   ......
                   [0x0000ffffbb599190] Unsafe_AllocateMemory+0x1c0
                   [0x0000ffffa40157a8]
                             (malloc=393216KB type=Internal #2)
                   ......
                   [0x0000ffffbb04b3f8] GenericGrowableArray::raw_allocate(int)+0x188
                   [0x0000ffffbb4339d8] PerfDataManager::add_item(PerfData*, bool) [clone .constprop.16]+0x108
                   [0x0000ffffbb434118] PerfDataManager::create_string_variable(CounterNS, char const*, int, char const*, Thread*)+0x178
                   [0x0000ffffbae9d400] CompilerCounters::CompilerCounters(char const*, int, Thread*) [clone .part.78]+0xb0
                             (malloc=3KB type=Internal #1)
                   ......

We can see that the ByteBuffer.allocateDirect used in the code (which internally does new DirectByteBuffer(capacity)), i.e. the off-heap memory requested via Unsafe_AllocateMemory, is recorded by NMT under Internal: 128 MB + 256 MB = 384 MB = 393216 KB = 402653184 bytes.
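The unit conversion is easy to double-check (class name mine):

```java
public class DirectSizeCheck {
    public static void main(String[] args) {
        long bytes = (128L + 256L) * 1024 * 1024;  // the two direct buffers
        System.out.println(bytes);                 // 402653184 bytes
        System.out.println(bytes / 1024 + " KB");  // 393216 KB, as recorded by NMT
    }
}
```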

Of course we can use the parameter -XX:MaxDirectMemorySize to limit the maximum memory requested by Direct Buffer.

Symbol

Symbol is the memory used by the symbol tables in the JVM. There are two main symbol tables in HotSpot: the SymbolTable and the StringTable.

As we know, a compiled Java class has a constant pool containing many string constants. To save memory, HotSpot stores these string constants as Symbol objects in a HashTable structure, the SymbolTable. If a string can be looked up in the SymbolTable (SymbolTable::lookup), it is reused; if not, a new Symbol is created (SymbolTable::new_symbol).

Besides the SymbolTable there is its twin, the StringTable (structurally almost identical; both are HashTables), which is the string constant pool we usually talk about. In everyday business development it is the StringTable we deal with. Again out of memory-saving considerations, HotSpot lets us put strings into the StringTable via String.intern() so that they can be reused.
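A minimal demonstration of that reuse (class name mine):

```java
public class InternDemo {
    public static void main(String[] args) {
        String a = new String("hello"); // a distinct object on the Java heap
        String b = a.intern();          // StringTable lookup; returns the canonical copy
        String c = "hello";             // the literal is already in the StringTable
        System.out.println(a == c);     // false: different heap objects
        System.out.println(b == c);     // true: same StringTable entry
    }
}
```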

Write a simple example:

import java.util.UUID;

public class StringTableTest {
    public static void main(String[] args) throws Exception {
        while (true) {
            // unique suffix so each intern() adds a new entry
            // (the original suffix generator was elided; UUID is used here)
            String str = new String("StringTestData_" + UUID.randomUUID());
            str.intern();
        }
    }
}

After starting the program, we can use jcmd <pid> VM.native_memory baseline to create a baseline for easy comparison; after waiting a while, use jcmd <pid> VM.native_memory summary.diff/detail.diff to compare against the baseline. The comparison shows:

Total: reserved=2831553KB +20095KB, committed=1515457KB +20095KB
......
-                    Symbol (reserved=18991KB +17144KB, committed=18991KB +17144KB)
                            (malloc=18504KB +17144KB #2307 +2143)
                            (arena=488KB #1)
......
[0x0000ffffa2aef4a8] BasicHashtable<(MemoryType)9>::new_entry(unsigned int)+0x1a0
[0x0000ffffa2aef558] Hashtable::new_entry(unsigned int, oopDesc*)+0x28
[0x0000ffffa2fbff78] StringTable::basic_add(int, Handle, unsigned short*, int, unsigned int, Thread*)+0xe0
[0x0000ffffa2fc0548] StringTable::intern(Handle, unsigned short*, int, Thread*)+0x1a0
                             (malloc=17592KB type=Symbol +17144KB #2199 +2143)
......

The memory of the JVM process grew by 20095KB over this period, most of it requested by Symbol (17144KB); the specific request site is StringTable::intern, which keeps asking for memory.

If our program misuses String.intern(), or a JDK intern-related bug causes abnormal memory growth, this approach makes it easy to help locate the problem.

It should be noted that the -XX:StringTableSize parameter provided by the virtual machine does NOT limit the maximum memory size of the StringTable. If we add -XX:StringTableSize=10M, restart the JVM process, and check the NMT tracking after a while:

-                    Symbol (reserved=100859KB +17416KB, committed=100859KB +17416KB)
                            (malloc=100371KB +17416KB #2359 +2177)
                            (arena=488KB #1)
......
[0x0000ffffa30c14a8] BasicHashtable<(MemoryType)9>::new_entry(unsigned int)+0x1a0
[0x0000ffffa30c1558] Hashtable::new_entry(unsigned int, oopDesc*)+0x28
[0x0000ffffa3591f78] StringTable::basic_add(int, Handle, unsigned short*, int, unsigned int, Thread*)+0xe0
[0x0000ffffa3592548] StringTable::intern(Handle, unsigned short*, int, Thread*)+0x1a0
                             (malloc=18008KB type=Symbol +17416KB #2251 +2177)

We can see that the StringTable takes far more than 10M. Looking at what this parameter actually does:

# hotspot/src/share/vm/classfile/
  StringTable() : RehashableHashtable<oop, mtSymbol>((int)StringTableSize,
                              sizeof (HashtableEntry<oop, mtSymbol>)) {}
  StringTable(HashtableBucket<mtSymbol>* t, int number_of_entries)
    : RehashableHashtable<oop, mtSymbol>((int)StringTableSize, sizeof (HashtableEntry<oop, mtSymbol>), t,
                     number_of_entries) {}

Because the StringTable is stored as a HashTable in HotSpot, the -XX:StringTableSize parameter is actually the length (bucket count) of that HashTable. If the value is set too small, hash conflicts will be very frequent even after the HashTable rehashes, causing performance degradation and possibly an increase in the time spent entering SafePoints. If this happens, you can increase the value.

  • -XX:StringTableSize defaults to 1009 on 32-bit systems, and defaults to 60013 on 64-bit systems:const int defaultStringTableSize = NOT_LP64(1009) LP64_ONLY(60013);
  • In G1, the -XX:+UseStringDeduplication parameter can be used to enable automatic string deduplication (off by default), and -XX:StringDeduplicationAgeThreshold controls the GC age threshold at which strings become candidates for deduplication.
  • Similar to -XX:StringTableSize, we can control the length of the SymbolTable table through -XX:SymbolTableSize.

If we are on JDK11 or later, besides NMT we can directly use the commands jcmd <pid> VM.stringtable and jcmd <pid> VM.symboltable to check the usage of both:

StringTable statistics:
Number of buckets       :  16777216 = 134217728 bytes, each 8
Number of entries       :     39703 =    635248 bytes, each 16
Number of literals      :     39703 =   2849304 bytes, avg  71.765
Total footprint         :           = 137702280 bytes
Average bucket size     :     0.002
Variance of bucket size :     0.002
Std. dev. of bucket size:     0.049
Maximum bucket size     :         2
SymbolTable statistics:
Number of buckets       :     20011 =    160088 bytes, each 8
Number of entries       :     20133 =    483192 bytes, each 24
Number of literals      :     20133 =    753832 bytes, avg  37.443
Total footprint         :           =   1397112 bytes
Average bucket size     :     1.006
Variance of bucket size :     1.013
Std. dev. of bucket size:     1.006
Maximum bucket size     :         9
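The derived numbers in that output can be reproduced with a little arithmetic (class name mine): the bucket array costs 8 bytes per bucket, and the average bucket size is entries divided by buckets:

```java
public class TableStats {
    public static void main(String[] args) {
        long buckets = 16_777_216, entries = 39_703;      // from the StringTable statistics above
        System.out.println(buckets * 8);                  // 134217728 bytes for the bucket array
        System.out.printf("%.3f%n", (double) entries / buckets);  // average bucket size: 0.002
    }
}
```

The oversized -XX:StringTableSize=10M explains the 16777216 buckets and the huge, almost-empty bucket array.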

Native Memory Tracking

The Native Memory Tracking area is the memory requested by the NMT feature itself once the JVM process enables NMT.

When MemTracker::init() runs, the JVM uses tracking_level() -> init_tracking_level() to obtain the tracking level we configured (such as summary or detail), then passes the level to MallocTracker::initialize(level) and VirtualMemoryTracker::initialize(level) for judgment. Only when level >= summary does the VM allocate the memory NMT itself needs, such as the VirtualMemoryTracker, MallocMemorySummary, and MallocSiteTable (created only at detail level), to record the various data NMT tracks.

# /hotspot/src/share/vm/services/
void MemTracker::init() {
  NMT_TrackingLevel level = tracking_level();
  ......
}
# /hotspot/src/share/vm/services/
static inline NMT_TrackingLevel tracking_level() {
    if (_tracking_level == NMT_unknown) {
      // No fencing is needed here, since JVM is in single-threaded
      // mode.
      _tracking_level = init_tracking_level();
      _cmdline_tracking_level = _tracking_level;
    }
    return _tracking_level;
  }
# /hotspot/src/share/vm/services/
NMT_TrackingLevel MemTracker::init_tracking_level() {
  NMT_TrackingLevel level = NMT_off;
  ......
  if (os::getenv(buf, nmt_option, sizeof(nmt_option))) {
    if (strcmp(nmt_option, "summary") == 0) {
      level = NMT_summary;
    } else if (strcmp(nmt_option, "detail") == 0) {
#if PLATFORM_NATIVE_STACK_WALKING_SUPPORTED
      level = NMT_detail;
#else
      level = NMT_summary;
#endif // PLATFORM_NATIVE_STACK_WALKING_SUPPORTED
    } 
   ......
  }
  ......
  if (!MallocTracker::initialize(level) ||
      !VirtualMemoryTracker::initialize(level)) {
    level = NMT_off;
  }
  return level;
}
# /hotspot/src/share/vm/services/
bool MallocTracker::initialize(NMT_TrackingLevel level) {
  if (level >= NMT_summary) {
    MallocMemorySummary::initialize();
  }
  if (level == NMT_detail) {
    return MallocSiteTable::initialize();
  }
  return true;
}
void MallocMemorySummary::initialize() {
  assert(sizeof(_snapshot) >= sizeof(MallocMemorySnapshot), "Sanity Check");
  // Uses placement new operator to initialize static area.
  ::new ((void*)_snapshot)MallocMemorySnapshot();
}
# 
bool VirtualMemoryTracker::initialize(NMT_TrackingLevel level) {
  if (level >= NMT_summary) {
    VirtualMemorySummary::initialize();
  }
  return true;
}

When we execute jcmd <pid> VM.native_memory summary/detail, the NMTDCmd::report method obtains different data depending on the level:

At summary level, MemSummaryReporter::report() is used to obtain the data stored by VirtualMemoryTracker, MallocMemorySummary, etc.;

At detail level, MemDetailReporter::report() is used to obtain the data stored by VirtualMemoryTracker, MallocMemorySummary, MallocSiteTable, etc.

# hotspot/src/share/vm/services/
void NMTDCmd::execute(DCmdSource source, TRAPS) {
  ......
  if (_summary.value()) {
    report(true, scale_unit);
  } else if (_detail.value()) {
    if (!check_detail_tracking_level(output())) {
      return;
    }
    report(false, scale_unit);
  }
  ......
}

void NMTDCmd::report(bool summaryOnly, size_t scale_unit) {
  MemBaseline baseline;
  if (baseline.baseline(summaryOnly)) {
    if (summaryOnly) {
      MemSummaryReporter rpt(baseline, output(), scale_unit);
      rpt.report();
    } else {
      MemDetailReporter rpt(baseline, output(), scale_unit);
      rpt.report();
    }
  }
}

Generally, NMT itself occupies relatively little memory and does not need much attention.

Arena Chunk

Arena Chunk is memory in chunks (memory blocks) allocated by the JVM through Arenas; when execution exits a scope or leaves a code area, memory is released back to these chunks, and the chunks can then be reused by other subsystems. Note that the Arena and Chunk counted here are the Arena and Chunk defined by HotSpot, not the Arena and Chunk concepts in glibc.

We will find that there will be a lot of allocation information about Arena Chunk in the NMT details:

[0x0000ffff935906e0] ChunkPool::allocate(unsigned long, AllocFailStrategy::AllocFailEnum)+0x158
[0x0000ffff9358ec14] Arena::Arena(MemoryType, unsigned long)+0x18c
......

In the JVM, we use ChunkPool to manage and reuse these chunks, for example, when we create threads:

# /hotspot/src/share/vm/runtime/
Thread::Thread() {
  ......
  set_resource_area(new (mtThread)ResourceArea());
  ......
  set_handle_area(new (mtThread) HandleArea(NULL));
  ......

Here, the ResourceArea is resource space allocated to the thread; ResourceObj objects generally live there (such as runtime information C1/C2 need to access during optimization). The HandleArea stores the handles held by the thread, which keep the referenced objects alive. Both are Arenas, and an Arena requests new Chunk memory blocks through ChunkPool::allocate. Many other places in the JVM also use Arenas; for example, JMX, OopMap, and other related operations all go through the ChunkPool.

A sharp-eyed reader may have noticed that Chunk memory blocks are "usually" requested via ChunkPool::allocate. Indeed, besides ChunkPool::allocate, the JVM has another way to request an Arena Chunk: directly using glibc's malloc. The JVM provides a control parameter for this, UseMallocOnly:

develop(bool, UseMallocOnly, false,                                       \
          "Use only malloc/free for allocation (no resource area/arena)") 

We can see that this parameter is a develop-level flag, which we normally cannot use, because "VM option 'UseMallocOnly' is develop and is available only in debug version of VM". That is, this parameter can only be enabled in a debug build of the JVM.

Some readers may wonder whether -XX:+IgnoreUnrecognizedVMOptions (which lets the JVM accept flags not available in a release build) can enable UseMallocOnly in a normal release JVM. Unfortunately, although UseMallocOnly can be turned on this way, it will not actually take effect, because the source logic is as follows:

# hotspot/src/share/vm/memory/
void* Amalloc(size_t x, AllocFailType alloc_failmode = AllocFailStrategy::EXIT_OOM) {
    assert(is_power_of_2(ARENA_AMALLOC_ALIGNMENT) , "should be a power of 2");
    x = ARENA_ALIGN(x);
    // debug version restriction
    debug_only(if (UseMallocOnly) return malloc(x);)
    if (!check_for_overflow(x, "Arena::Amalloc", alloc_failmode))
      return NULL;
    NOT_PRODUCT(inc_bytes_allocated(x);)
    if (_hwm + x > _max) {
      return grow(x, alloc_failmode);
    } else {
      char *old = _hwm;
      _hwm += x;
      return old;
    }
  }

So even if we successfully enable UseMallocOnly, only a debug build (debug_only) will actually use malloc for Arena allocation.

We can compare the NMT logs of a normal (release) JVM started with -XX:+IgnoreUnrecognizedVMOptions -XX:+UseMallocOnly against those of a debug (fastdebug/slowdebug) JVM started with -XX:+UseMallocOnly:

# Normal (release) JVM, startup parameters: -XX:+IgnoreUnrecognizedVMOptions -XX:+UseMallocOnly
......
[0x0000ffffb7d16968] ChunkPool::allocate(unsigned long, AllocFailStrategy::AllocFailEnum)+0x158
[0x0000ffffb7d15f58] Arena::grow(unsigned long, AllocFailStrategy::AllocFailEnum)+0x50
[0x0000ffffb7fc4888] Dict::Dict(int (*)(void const*, void const*), int (*)(void const*), Arena*, int)+0x138
[0x0000ffffb85e5968] Type::Initialize_shared(Compile*)+0xb0
                             (malloc=32KB type=Arena Chunk #1)
......                             
# debug version JVM, startup parameters: -XX:+UseMallocOnly
......
[0x0000ffff8dfae910] Arena::malloc(unsigned long)+0x74
[0x0000ffff8e2cb3b8] Arena::Amalloc_4(unsigned long, AllocFailStrategy::AllocFailEnum)+0x70
[0x0000ffff8e2c9d5c] Dict::Dict(int (*)(void const*, void const*), int (*)(void const*), Arena*, int)+0x19c
[0x0000ffff8e97c3d0] Type::Initialize_shared(Compile*)+0x9c
                             (malloc=5KB type=Arena Chunk #1)
......                             

We can clearly observe the difference in the call chains: the former still requests memory via ChunkPool::allocate, while the latter uses Arena::malloc. Looking at the Arena::malloc code:

# hotspot/src/share/vm/memory/
void* Arena::malloc(size_t size) {
  assert(UseMallocOnly, "shouldn't call");
  // use malloc, but save pointer in res. area for later freeing
  char** save = (char**)internal_malloc_4(sizeof(char*));
  return (*save = (char*)os::malloc(size, mtChunk));
}

We can see that this code allocates memory via os::malloc and, when freeing, passes the memory directly to os::free. For example, the UseMallocOnly code for freeing memory:

# hotspot/src/share/vm/memory/
// debugging code
inline void Arena::free_all(char** start, char** end) {
  for (char** p = start; p < end; p++) if (*p) os::free(*p);
}

So the JVM provides us with two ways to manage Arena Chunk memory:

  • Through ChunkPool pooling, it is managed by the JVM itself;
  • Manage directly through Glibc's malloc/free.

In practice, however, only the first is generally used, and the objects managed by the ChunkPool are usually small. Overall, Arena Chunk will not use much memory.
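To make the pooling idea concrete, here is a minimal sketch in Java (my own illustration, not HotSpot code): freed chunks go onto a free list and are handed back out before any new allocation is made, which is exactly why ChunkPool reuse keeps the Arena Chunk footprint small.

```java
import java.util.ArrayDeque;

public class ChunkPool {
    private final int chunkSize;
    private final ArrayDeque<byte[]> free = new ArrayDeque<>(); // returned chunks, ready for reuse
    int allocations = 0;                                        // fresh allocations actually performed

    ChunkPool(int chunkSize) { this.chunkSize = chunkSize; }

    byte[] allocate() {
        byte[] c = free.poll();          // try the free list first
        if (c == null) {                 // only allocate when the pool is empty
            allocations++;
            c = new byte[chunkSize];
        }
        return c;
    }

    void release(byte[] c) { free.push(c); }  // back to the pool, not to the OS

    public static void main(String[] args) {
        ChunkPool pool = new ChunkPool(4096);
        byte[] a = pool.allocate();
        pool.release(a);
        byte[] b = pool.allocate();      // reuses a; no new allocation
        System.out.println(pool.allocations); // 1
        System.out.println(a == b);           // true
    }
}
```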

Unknown

Unknown covers the following situations:

  • When the memory category cannot be determined;
  • When Arena is used as a stack or value object;
  • When the type information has not arrived yet.

NMT Untrackable Memory

It should be noted that NMT can only track memory allocations made by JVM code; allocations made outside the JVM cannot be traced:

  • Some third-party native codes that use JNI to request memory, such as some libraries loaded.
  • The standard Java class library, typically file-stream and similar operations (e.g., ZipInputStream and DirectoryStream).

You can use the operating system's memory tools to assist in troubleshooting, or use LD_PRELOAD to hook the malloc family with jemalloc/google-perftools (tcmalloc) in place of glibc's malloc to help track native memory allocation.

Due to limited space, I will share with you "cases of using NMT to assist in troubleshooting memory problems" in the next article, so stay tuned!

References

[1] /javase/8/do…

[2] /jeps/380

The above is the detailed content of the Native Memory Tracking Area Example Analysis. For more information about Native Memory Tracking areas, please follow my other related articles!