Memory optimization is an important part of Android performance work. It mainly involves two tasks:
1. Optimizing RAM, i.e. reducing runtime memory. The goal is to prevent OOM exceptions and to lower the chance of the process being killed by the Low Memory Killer (LMK) for using too much memory. Unreasonable memory usage also greatly increases the number of GC passes, which makes the application stutter.
2. Optimizing ROM, i.e. reducing the application's installed size. The goal here is to shrink the space the application occupies and avoid installation failures caused by insufficient storage.
This article focuses on the first point and summarizes techniques for reducing an application's runtime memory. We will not elaborate on concepts such as PSS and USS or on Android's memory management; if you are interested in that background, see the reference articles listed at the end.
Detecting and fixing memory leaks
A memory leak, simply put, occurs when coding mistakes or system behavior leave direct or indirect references to an object, so the system can never reclaim it. Memory leaks easily hide logic bugs, and they raise the application's peak memory usage and the probability of OOM. A leak is a bug and must be fixed.
There are many well-known causes of memory leaks; the real focus of our work, however, is establishing a closed loop that both discovers and fixes them.
1. Monitoring scheme for memory leakage
Square's open-source library LeakCanary is an excellent choice. It tracks the life cycle of an Activity or other object through weak references; when a leak is found, it computes the shortest reference path to the leaked object using the HAHA library and reports it through a notification.
The flow for judging and handling a leak is shown in the figure below; note which process each step runs in (the main process uses an IdleHandler, and the HAHA analysis runs in a separate process):
Before LeakCanary was released, WeChat already had its own memory leak monitoring system. It differs from LeakCanary roughly as follows:
- On Android 4.0 and above, WeChat likewise registers an ActivityLifecycleCallbacks listener. Below 4.0 we resorted to reflecting the mInstrumentation object in ActivityThread. WeChat has since moved to supporting only API level 15 and above, which simplifies things considerably.
- Although LeakCanary uses an IdleHandler and a subprocess, dumping the hprof still visibly freezes the application (the VM suspends all threads). On some phones, such as Samsung models, the system caches the most recent Activity, so WeChat adopts a stricter detection mode: a leak must be confirmed three times, with five new Activities created since, to rule out the system cache.
- When WeChat finds a suspected leak, it pops up a dialog; only when we actively confirm does it dump and upload the hprof snapshot. The analysis of false alarms, leak chains, and so on is done on the server side.
In fact, with some simple customization of LeakCanary, we can implement this closed loop of memory leak monitoring ourselves.
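The detection principle is easy to sketch in plain Java: watch an object that ought to be dead through a WeakReference; if the object survives a forced GC, it is still strongly reachable somewhere and is a leak suspect. This is only an illustration of the idea — the `LeakWatcher` class below is a hypothetical name, not LeakCanary's actual API.

```java
import java.lang.ref.WeakReference;

// Illustrative sketch of weak-reference leak detection (principle only;
// LeakWatcher is a made-up name, not LeakCanary's real API).
public class LeakWatcher {

    /** Start watching an object that should become unreachable (e.g. a destroyed Activity). */
    public WeakReference<Object> watch(Object suspect) {
        return new WeakReference<>(suspect);
    }

    /** After destruction, force GC and check whether the watched object was reclaimed. */
    public boolean isLeaked(WeakReference<Object> ref) {
        for (int i = 0; i < 10 && ref.get() != null; i++) {
            System.gc();  // request collection; weakly reachable objects get cleared
            try { Thread.sleep(20); } catch (InterruptedException ignored) { }
        }
        return ref.get() != null;  // still reachable after GC => leak suspect
    }
}
```

LeakCanary runs an equivalent check from an IdleHandler on the main thread, and a confirmed suspect triggers the heap dump and HAHA shortest-path analysis described above.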
2. Hack fixes for system memory leaks
AndroidExcludedRefs lists cases where references cannot be released because of bugs in the system itself, and for most of them it suggests hack-style workarounds. WeChat likewise applies hack-style fixes for TextLine, InputMethodManager, and AudioManager.
3. Reclaiming memory as a fallback
A leaked Activity keeps the Bitmaps, DrawingCaches, and similar resources it references from being released, which puts great pressure on memory. Fallback recovery means that for an Activity known to have leaked, we try to reclaim the resources it holds, so that only an empty Activity shell leaks and the memory pressure is reduced.
The method is simple: in the Activity's onDestroy, walk the view tree from the root view and recursively release every image, background, DrawingCache, listener, and other resource held by each child view, turning the Activity into an empty shell. If it then leaks, it at least no longer pins the image resources.
```java
// ... inside the recursive walk over the view tree ...
Drawable d = imageView.getDrawable();
if (d != null) {
    d.setCallback(null);   // break the drawable's back-reference to the view
}
imageView.setImageDrawable(null);
// ...
```
In general, the goal is not merely to know some leak fixes; more important is building, through daily testing and monitoring, a complete closed-loop system for detecting and fixing memory leaks.
Some ways to reduce runtime memory
Once we can ensure the application has no memory leaks, we need other ways to lower its runtime memory. Mostly, the aim is to reduce the probability of the application hitting OOM.
Android OOM:
- Before Android 3.0, OOM occurs when dalvik allocated + external allocated + newly requested size >= the dalvik heap limit, where bitmap pixel data counts against external.
- From Android 3.0 on, the external counter was abandoned and allocations such as bitmaps moved onto dalvik's Java heap; OOM occurs as soon as allocated + newly requested size >= the dalvik heap limit (the ART runtime keeps the same accounting rule as dalvik).
1. Reduce the memory occupied by bitmap
When it comes to memory, bitmaps are invariably the biggest consumer. Several points about bitmap memory are worth making:
1. Prevent bitmaps from exhausting memory and causing OOM
On Android 2.x systems, opening the hidden inNativeAlloc option via reflection keeps newly created bitmaps from being counted against external. On Android 4.x systems, Facebook's Fresco library can be used to place image data in native memory instead of the Java heap.
2. Load images at the size actually needed
That is, a decoded image should be no larger than the view that displays it. Before loading an image into memory, compute a suitable inSampleSize scaling ratio to avoid decoding needlessly large images. We can also subclass Drawable and ImageView for verification: for example, in the Activity's onDestroy, compare each image's size with its view's size, and report or warn when the image is larger.
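The inSampleSize computation itself is plain arithmetic and can be sketched independently of the Android APIs; on a device, the returned value would be assigned to BitmapFactory.Options.inSampleSize before decoding (the helper below is illustrative):

```java
// Compute a power-of-two inSampleSize so the decoded bitmap stays at least
// as large as the requested view size (the value would go into
// BitmapFactory.Options.inSampleSize on a device).
public class SampleSize {
    public static int calculateInSampleSize(int srcWidth, int srcHeight,
                                            int reqWidth, int reqHeight) {
        int inSampleSize = 1;
        if (srcHeight > reqHeight || srcWidth > reqWidth) {
            int halfHeight = srcHeight / 2;
            int halfWidth = srcWidth / 2;
            // Keep doubling until the next step would drop below the requested size.
            while ((halfHeight / inSampleSize) >= reqHeight
                    && (halfWidth / inSampleSize) >= reqWidth) {
                inSampleSize *= 2;
            }
        }
        return inSampleSize;
    }
}
```

For a 2048x1536 source shown in a 512x384 view, this yields 4, so the decode uses one sixteenth of the full-size pixel memory.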
3. A unified bitmap loader
Picasso and Fresco are both well-known loading libraries, and WeChat has its own, ImageLoader. The advantage of a loading library is that it hides version differences and size handling from the caller. With a unified bitmap loader, if an OOM occurs while decoding a bitmap, the loader can retry after clearing its cache or after downgrading the bitmap format (ARGB_8888 / RGB_565 / ARGB_4444 / ALPHA_8).
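The degrade-and-retry idea might be sketched as follows; the `Decoder` interface and `Format` enum here are invented stand-ins for a real decode call such as BitmapFactory.decodeFile, so treat this as an outline rather than any particular library's API:

```java
import java.util.Arrays;
import java.util.List;

// Sketch of a loader that retries a failed decode with cheaper pixel formats.
// Decoder is a hypothetical stand-in for a real decode call.
public class RetryingLoader {
    public enum Format { ARGB_8888, RGB_565, ARGB_4444, ALPHA_8 }

    public interface Decoder<T> {
        T decode(String path, Format format) throws OutOfMemoryError;
    }

    /** Try formats from highest to lowest quality; clear caches between attempts. */
    public static <T> T load(String path, Decoder<T> decoder, Runnable clearCaches) {
        List<Format> fallbacks = Arrays.asList(
                Format.ARGB_8888, Format.RGB_565, Format.ARGB_4444, Format.ALPHA_8);
        for (Format f : fallbacks) {
            try {
                return decoder.decode(path, f);
            } catch (OutOfMemoryError oom) {
                clearCaches.run();  // free the bitmap cache before the cheaper retry
            }
        }
        throw new OutOfMemoryError("decode failed at all formats: " + path);
    }
}
```

Putting the retry in one place is exactly what a unified loader buys: every caller gets the degradation strategy without knowing it exists.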
4. Wasted pixels in images
For .9.png images, designers often leave large runs of repeated pixels in both the stretched and non-stretched areas. By reading each image's ARGB pixel values, finding contiguous regions of identical pixels, and applying a custom heuristic, we can decide whether those regions can be scaled down. The key is doing this work systematically, so that problems are discovered and fixed promptly.
A good ImageLoader can hide image-loading details from its users and can also build adaptive sizing, quality selection, and the like into the framework.
2. Monitoring our own memory usage
System callbacks such as onLowMemory describe the system as a whole; they do not reflect how close this process's dalvik heap is to its OOM limit, and there is no callback that tells us to free memory in time. With a mechanism that monitors the process's heap usage in real time and notifies the relevant modules to release memory once a set threshold is reached, OOMs will drop sharply.
- Implementation principle
This is actually quite simple: obtain maxMemory through Runtime; totalMemory - freeMemory is the dalvik memory currently in use.
```java
Runtime.getRuntime().maxMemory();
Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory();
```
- Operation method
We can sample this value periodically (every three minutes while in the foreground). When it reaches a dangerous level (for example 80%), we proactively release our various cached resources (the bitmap cache being the biggest), and at the same time trim the application's memory to accelerate collection.
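A minimal sketch of that check in plain Java (the 80% threshold is the illustrative figure from above; on Android the check would be re-posted to a Handler every few minutes while the app is in the foreground):

```java
// Periodic heap check: when usage crosses the danger threshold, run a trim
// callback that clears caches. On a device this measures the dalvik heap;
// on a desktop JVM it measures the Java heap the same way.
public class DalvikHeapMonitor {
    private static final double DANGER_RATIO = 0.8;  // illustrative threshold

    /** Fraction of the process heap limit currently in use. */
    public static double usedRatio() {
        Runtime rt = Runtime.getRuntime();
        long used = rt.totalMemory() - rt.freeMemory();  // currently allocated heap
        return (double) used / rt.maxMemory();           // maxMemory(): the OOM ceiling
    }

    /** Run the trim callback (clear bitmap caches etc.) when usage crosses the threshold. */
    public static boolean checkAndTrim(Runnable trim) {
        if (usedRatio() >= DANGER_RATIO) {
            trim.run();
            return true;
        }
        return false;
    }
}
```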
3. Use multi-process
For components such as WebView and the gallery, which leak memory at the system level or simply use a great deal of it, we can run them in a separate process. WeChat currently places them in a separate tools process.
4. Report OOM details
When an OOM crash occurs, we should upload detailed memory-related information so that we can reconstruct the specific memory situation at that moment.
Other techniques such as largeHeap, inBitmap, SparseArray, and Protobuf will not be described one by one. A piecemeal, patch-it-where-it-hurts approach to code optimization is not recommended; we should instead focus on building a reasonable framework and monitoring system that promptly surfaces problems such as over-large bitmaps, wasted pixels, excessive memory usage, and OOMs.
GC optimization
Java has a GC mechanism, and GC implementations can differ substantially across system versions. Regardless of version, though, heavy GC activity noticeably eats into the frame interval (16 ms): the more GC work done within a frame, the less time remains for computation, rendering, and other operations.
1. Types of GC
There are the following types of GC; among them GC_FOR_ALLOC runs synchronously and has the greatest impact on the application's frame rate.
- GC_FOR_ALLOC
Easily triggered when heap memory is insufficient, especially when allocating new objects. If you want to speed up startup, you can raise the initial heap size (dalvik.vm.heapstartsize) so that fewer GC_FOR_ALLOC passes occur during startup. Note that this GC runs synchronously; if there is still not enough space afterwards, the heap is expanded.
- GC_EXPLICIT
This GC can be triggered explicitly, for example by calling System.gc(). The GC thread's priority is fairly low, so collection may not start immediately. Do not assume that making the call will improve the memory situation right away.
- GC_CONCURRENT
Triggered when the size of allocated objects exceeds 384 KB; note that this one runs asynchronously. If you see large numbers of repeated concurrent GCs, objects larger than 384 KB are probably being allocated over and over, usually temporary objects created repeatedly. The hint for us is that object reuse is insufficient.
- GC_EXTERNAL_ALLOC (deprecated after Android 3.0)
Triggered when a native-layer memory allocation fails. If GPU textures, bitmaps, or similar resources are not released, this type of GC is often triggered frequently.
2. Memory jitter phenomenon
Memory churn ("memory jitter") happens when large numbers of objects are created and immediately released in a short period. The burst of objects rapidly fills the heap; when the threshold is reached and not enough space remains, a GC is triggered, and the freshly created objects are reclaimed almost at once. Even if each allocation is tiny, together they pressure the heap and trigger more GCs of other types. These operations can hurt the frame rate enough for users to perceive the problem.
Through Memory Monitor we can track memory changes across the whole app. If memory rises and falls repeatedly within a short period, memory churn is likely.
3. GC optimization
Through Heap Viewer we can inspect the current memory snapshot, which makes it easy to compare and find objects that may have leaked. The more important tool is Allocation Tracker, which records the type, stack, and size of allocated objects. Mobile QQ has a statistics tool that groups allocations by (type & stack) (taking the top five stack frames), counts the number and total size per group, sorts by count and size, and then optimizes round by round from the most frequent and largest downwards, combined with code analysis.
This way, when memory churn occurs, we can quickly see which allocations are causing the frequent GCs. In general, pay attention to the following aspects:
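The grouping idea behind such a statistics tool can be sketched as follows; the record format (type, top stack frame, size) is invented for illustration, standing in for data exported from Allocation Tracker:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch: aggregate allocation records by (type, top-of-stack) and sort by
// count then total size, so the worst offenders come first.
public class AllocStats {
    public static class Bucket {
        public int count;
        public long totalBytes;
    }

    /** Each record is {type, topStackFrame, sizeInBytes}. */
    public static List<Map.Entry<String, Bucket>> aggregate(List<String[]> records) {
        Map<String, Bucket> buckets = new HashMap<>();
        for (String[] rec : records) {
            String key = rec[0] + " @ " + rec[1];  // group by type & stack
            Bucket b = buckets.computeIfAbsent(key, k -> new Bucket());
            b.count++;
            b.totalBytes += Long.parseLong(rec[2]);
        }
        List<Map.Entry<String, Bucket>> sorted = new ArrayList<>(buckets.entrySet());
        // Most frequent (then largest) first: optimize from top to bottom, round by round.
        sorted.sort((x, y) -> {
            if (y.getValue().count != x.getValue().count)
                return Integer.compare(y.getValue().count, x.getValue().count);
            return Long.compare(y.getValue().totalBytes, x.getValue().totalBytes);
        });
        return sorted;
    }
}
```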
- String concatenation optimization
Avoid building strings with the + operator; use StringBuilder instead, and reduce reallocations by setting its capacity at initialization. Note also that if the Printer callback in Looper is enabled, considerably more string concatenation takes place:
```java
// From AOSP Looper.loop(): each dispatched message concatenates strings.
final Printer logging = me.mLogging;
if (logging != null) {
    logging.println(">>>>> Dispatching to " + msg.target + " " +
            msg.callback + ": " + msg.what);
}
```
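In application code the fix is mechanical: presize the StringBuilder so its internal buffer never has to grow while appending. A small illustrative helper:

```java
// Pre-sizing the builder avoids repeated internal array copies while appending.
public class JoinDemo {
    public static String join(String[] parts, char sep) {
        int capacity = parts.length;                 // room for separators
        for (String p : parts) capacity += p.length();
        StringBuilder sb = new StringBuilder(capacity);
        for (int i = 0; i < parts.length; i++) {
            if (i > 0) sb.append(sep);
            sb.append(parts[i]);
        }
        return sb.toString();
    }
}
```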
- File-reading optimization: use a ByteArrayPool, set a suitable initial capacity, and reduce buffer expansion
- Resource reuse
Establish a global cache pool to reuse object types that are frequently allocated and released
- Reduce unnecessary or unreasonable objects
For example, in onDraw and getView, allocations should be minimized and objects reused wherever possible. Much of this is a matter of logic, such as repeatedly allocating local variables inside a loop.
- Choose suitable data structures: use SparseArray, SparseBooleanArray, and LongSparseArray in place of HashMap
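Several of the bullets above amount to the same move: reuse instead of reallocate. Below is a minimal, illustrative object pool (not thread-safe; a real implementation would add synchronization and a reset step, along the lines of the support library's Pools.SimplePool):

```java
import java.util.ArrayDeque;

// Minimal object pool: acquire() reuses a released instance when available,
// so hot paths (onDraw, getView, loops) stop churning the heap.
public class ObjectPool<T> {
    public interface Factory<T> { T create(); }

    private final ArrayDeque<T> free = new ArrayDeque<>();
    private final Factory<T> factory;
    private final int maxSize;

    public ObjectPool(Factory<T> factory, int maxSize) {
        this.factory = factory;
        this.maxSize = maxSize;
    }

    public T acquire() {
        T obj = free.poll();
        return obj != null ? obj : factory.create();
    }

    public void release(T obj) {
        if (free.size() < maxSize) free.push(obj);  // cap the pool to bound memory
    }
}
```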
Summary
We cannot cover every memory optimization technique one by one, and as Android versions change, many methods become outdated. What matters more, I think, is being able to keep discovering problems and to monitor them in a fine-grained way, rather than being stuck forever firefighting whichever hole appears next. Two suggestions:
1. Prefer existing tools first. We engineers love to reinvent the wheel; it is better to spend that energy improving existing tools and contributing back for every other coder's benefit. Life is hard enough; why should one coder make things harder for another!
2. Don't obsess over individual fixes; more important is establishing a reasonable framework that avoids problems, or at least discovers them in time.
The current WeChat memory monitoring system still has its unsatisfactory aspects, and we will keep optimizing it in the days ahead.
That is all for this article; I hope it helps you optimize Android memory.