Source code based on: Android R
0. Preface
02-09 17:15:20.886446 1164 13478 I ActivityManager: Low on memory:
02-09 17:15:20.887069 1164 13478 I ActivityManager: ntv ?? 986102: android.hardware.camera.provider@2.4-service_64 (pid 389) native
02-09 17:15:20.887069 1164 13478 I ActivityManager: ntv ?? 54670: logd (pid 261) native
02-09 17:15:20.887069 1164 13478 I ActivityManager: ntv ?? 17231: surfaceflinger (pid 421) native
02-09 17:15:20.887069 1164 13478 I ActivityManager: ntv ?? 10194: zygote (pid 367) native
02-09 17:15:20.887069 1164 13478 I ActivityManager: ntv ?? 10051: webview_zygote (pid 1696) native
02-09 17:15:20.887069 1164 13478 I ActivityManager: ntv ?? 9442: qcrild (pid 716) native
02-09 17:15:20.887069 1164 13478 I ActivityManager: ntv ?? 8538: qcrild (pid 681) native
02-09 17:15:20.887069 1164 13478 I ActivityManager: ntv ?? 7767: cameraserver (pid 578) native
02-09 17:15:20.887069 1164 13478 I ActivityManager: ntv ?? 7081: audioserver (pid 418) native
02-09 17:15:20.887069 1164 13478 I ActivityManager: ntv ?? 6399: mediaserver (pid 647) native
02-09 17:15:20.887069 1164 13478 I ActivityManager: ntv ?? 6032: zygote64 (pid 366) native
02-09 17:15:20.887069 1164 13478 I ActivityManager: ntv ?? 5666: netmgrd (pid 673) native
02-09 17:15:20.887069 1164 13478 I ActivityManager: ntv ?? 4992: media.metrics (pid 645) native
02-09 17:15:20.887069 1164 13478 I ActivityManager: ntv ?? 4738: media.swcodec (pid 678) native
02-09 17:15:20.887069 1164 13478 I ActivityManager: ntv ?? 4687: init (pid 1) native
02-09 17:15:20.887069 1164 13478 I ActivityManager: ntv ?? 4634: android.hardware.gnss@2.1-service-qti (pid 393) native
02-09 17:15:20.887069 1164 13478 I ActivityManager: ntv ?? 4579: android.hardware.audio.service (pid 386) native
02-09 17:15:20.887069 1164 13478 I ActivityManager: ntv ?? 4565: media.extractor (pid 637) native
02-09 17:15:20.887069 1164 13478 I ActivityManager: ntv ?? 4131: imsdaemon (pid 668) native
...
02-09 17:15:20.887217 1164 13478 I ActivityManager: vis IMPF 7024: com.android.smspush (pid 2446) service
02-09 17:15:20.887217 1164 13478 I ActivityManager: com.android.smspush/.WapPushManager<=Proc{1902:com.android.phone/1001}
02-09 17:15:20.887217 1164 13478 I ActivityManager: prcp IMPB 65781: com.sohu.inputmethod.sogou (pid 11697) service
02-09 17:15:20.887217 1164 13478 I ActivityManager: com.sohu.inputmethod.sogou/.SogouIME<=Proc{1164:system/1000}
02-09 17:15:20.887217 1164 13478 I ActivityManager: prcp IMPB 13053: com.android.webview:sandboxed_process0:org.chromium.content.app.SandboxedProcessService0:0 (pid 12509) service
02-09 17:15:20.887217 1164 13478 I ActivityManager: com.sohu.inputmethod.sogou/org.chromium.content.app.SandboxedProcessService0:0<=Proc{11697:com.sohu.inputmethod.sogou/u0a151}
02-09 17:15:20.887217 1164 13478 I ActivityManager: svc SVC 22194: com.sohu.inputmethod.sogou:push_service (pid 12588) started-services
02-09 17:15:20.887217 1164 13478 I ActivityManager: prev LAST 19057: com.android.packageinstaller (pid 11380) previous
02-09 17:15:20.887217 1164 13478 I ActivityManager: 2227718: TOTAL
Recently on a project I kept running into "Low on memory" prints in logcat, so this post digs into them alongside the source code.
It is roughly split into the following parts:
- What is the information in the log derived from?
- Beyond that information, what else is worth noting?
- How does this log get triggered?
1. reportMemUsage
The strings in the log are easy to locate in the source; all of the information is assembled in reportMemUsage(). Since the function does quite a lot, it is analyzed below section by section.
Code path: frameworks/base/services/core/java/com/android/server/am/ActivityManagerService.java (abbreviated AMS below)
void reportMemUsage(ArrayList<ProcessMemInfo> memInfos) {
The parameter is discussed further below; for now, take it as the ProcessMemInfo of every process on the system's LRU list.
step1. Organize the ProcessMemInfo list
final SparseArray<ProcessMemInfo> infoMap = new SparseArray<>(memInfos.size());
for (int i=0, N=memInfos.size(); i<N; i++) {
ProcessMemInfo mi = memInfos.get(i);
infoMap.put(mi.pid, mi);
}
Each ProcessMemInfo is stored into a new SparseArray, keyed by pid.
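Outside the Android framework, the same indexing step can be sketched with a plain HashMap standing in for SparseArray (the ProcessMemInfo class below is a hypothetical minimal stand-in, not the framework type):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class InfoMapSketch {
    // Hypothetical minimal stand-in for the framework's ProcessMemInfo.
    static class ProcessMemInfo {
        final int pid;
        final String name;
        ProcessMemInfo(int pid, String name) {
            this.pid = pid;
            this.name = name;
        }
    }

    // Mirror of step 1: index the ProcessMemInfo list by pid so that
    // later steps can look a process up in O(1).
    static Map<Integer, ProcessMemInfo> buildInfoMap(List<ProcessMemInfo> memInfos) {
        Map<Integer, ProcessMemInfo> infoMap = new HashMap<>(memInfos.size());
        for (ProcessMemInfo mi : memInfos) {
            infoMap.put(mi.pid, mi);
        }
        return infoMap;
    }
}
```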
step2. Update the CPU state
Next, updateCpuStatsNow() refreshes the current CPU state. Its body is not quoted here; if you are interested it is also in ActivityManagerService.java.
The main thing to note is the member variable mProcessCpuTracker, whose data is refreshed through its update() function:
void updateCpuStatsNow() {
synchronized (mProcessCpuTracker) {
...
mProcessCpuTracker.update();
---->update()
frameworks/base/core/java/com/android/internal/os/ProcessCpuTracker.java
public void update() {
...
if (Process.readProcFile("/proc/stat", SYSTEM_CPU_FORMAT,
null, sysCpu, null)) {
...
}
...
try {
mCurPids = collectStats("/proc", -1, mFirst, mCurPids, mProcStats);
} finally {
StrictMode.setThreadPolicy(savedPolicy);
}
...
final float[] loadAverages = mLoadAverageData;
if (Process.readProcFile("/proc/loadavg", LOAD_AVERAGE_FORMAT,
null, null, loadAverages)) {
...
}
...
}
- Read /proc/stat for system CPU figures such as user time, system time, iowait time, irq time, and idle time;
- collectStats() gathers the stats of every process under /proc;
- Read /proc/loadavg; it is also printed by dumpsys cpuinfo and during an ANR;
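The /proc/loadavg read can be sketched in isolation; parseLoadAvg below is a hypothetical helper mimicking what ProcessCpuTracker's LOAD_AVERAGE_FORMAT extracts (the first three float fields, i.e. the 1-, 5-, and 15-minute load averages):

```java
public class LoadAvgParser {
    // Parse the first three fields of a /proc/loadavg line: the 1-, 5-
    // and 15-minute load averages. A typical line looks like
    // "0.52 0.61 0.73 2/1024 12345".
    static float[] parseLoadAvg(String line) {
        String[] fields = line.trim().split("\\s+");
        return new float[] {
                Float.parseFloat(fields[0]),
                Float.parseFloat(fields[1]),
                Float.parseFloat(fields[2]),
        };
    }
}
```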
step3. Get each process's PSS
// If a pid is not among the LRU processes, it is treated as a native
// process: a ProcessMemInfo object is created for it and added to memInfos
for (int i = 0; i < statsCount; i++) {
ProcessCpuTracker.Stats st = stats.get(i);
long pss = Debug.getPss(st.pid, swaptrackTmp, memtrackTmp);
if (pss > 0) {
if (infoMap.indexOfKey(st.pid) < 0) {
ProcessMemInfo mi = new ProcessMemInfo(st.name, st.pid,
ProcessList.NATIVE_ADJ, -1, "native", null);
mi.pss = pss;
mi.swapPss = swaptrackTmp[1];
mi.memtrack = memtrackTmp[0];
memInfos.add(mi);
}
}
}
long totalPss = 0;
long totalSwapPss = 0;
long totalMemtrack = 0;
// Fully account pss, swapPss and memtrack for every process in memInfos.
// LRU processes not yet measured get their values via Debug.getPss()
for (int i=0, N=memInfos.size(); i<N; i++) {
ProcessMemInfo mi = memInfos.get(i);
if (mi.pss == 0) {
mi.pss = Debug.getPss(mi.pid, swaptrackTmp, memtrackTmp);
mi.swapPss = swaptrackTmp[1];
mi.memtrack = memtrackTmp[0];
}
totalPss += mi.pss; // total PSS over all processes
totalSwapPss += mi.swapPss; // total swap PSS over all processes
totalMemtrack += mi.memtrack; // total memtrack over all processes
}
step4. Sort ProcessMemInfo by oom adj
Collections.sort(memInfos, new Comparator<ProcessMemInfo>() {
@Override public int compare(ProcessMemInfo lhs, ProcessMemInfo rhs) {
if (lhs.oomAdj != rhs.oomAdj) {
return lhs.oomAdj < rhs.oomAdj ? -1 : 1;
}
if (lhs.pss != rhs.pss) {
return lhs.pss < rhs.pss ? 1 : -1;
}
return 0;
}
});
Entries are ordered by oom adj from small to large; when the oom adj is equal, they are ordered by PSS from large to small.
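The same ordering can be demonstrated with a small stand-alone sketch (Info is a hypothetical stand-in holding only the two fields the comparator reads):

```java
import java.util.Comparator;
import java.util.List;

public class OomSortSketch {
    // Hypothetical stand-in holding only the fields the comparator reads.
    static class Info {
        final String name;
        final int oomAdj;
        final long pss;
        Info(String name, int oomAdj, long pss) {
            this.name = name;
            this.oomAdj = oomAdj;
            this.pss = pss;
        }
    }

    // Same ordering as step 4: oomAdj ascending; ties broken by pss descending.
    static void sortByOomAdj(List<Info> list) {
        list.sort(new Comparator<Info>() {
            @Override public int compare(Info lhs, Info rhs) {
                if (lhs.oomAdj != rhs.oomAdj) {
                    return lhs.oomAdj < rhs.oomAdj ? -1 : 1;
                }
                if (lhs.pss != rhs.pss) {
                    return lhs.pss < rhs.pss ? 1 : -1;
                }
                return 0;
            }
        });
    }
}
```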
step5. Update the tag builder
reportMemUsage() maintains several important StringBuilder objects; they hold the information later printed to logcat and written into the DropBox file.
The tag builder here prepares information for the later DropBox entry.
for (int i=0, N=memInfos.size(); i<N; i++) {
ProcessMemInfo mi = memInfos.get(i);
// Accumulate the PSS of all CACHED processes
if (mi.oomAdj >= ProcessList.CACHED_APP_MIN_ADJ) {
cachedPss += mi.pss;
}
// log needed for DropBox
if (mi.oomAdj != ProcessList.NATIVE_ADJ
&& (mi.oomAdj < ProcessList.SERVICE_ADJ
|| mi.oomAdj == ProcessList.HOME_APP_ADJ
|| mi.oomAdj == ProcessList.PREVIOUS_APP_ADJ)) {
if (lastOomAdj != mi.oomAdj) {
lastOomAdj = mi.oomAdj;
if (mi.oomAdj <= ProcessList.FOREGROUND_APP_ADJ) {
tag.append(" / ");
}
if (mi.oomAdj >= ProcessList.FOREGROUND_APP_ADJ) {
if (firstLine) {
stack.append(":");
firstLine = false;
}
stack.append("\n\t at ");
} else {
stack.append("$");
}
} else {
tag.append(" ");
stack.append("$");
}
if (mi.oomAdj <= ProcessList.FOREGROUND_APP_ADJ) {
appendMemBucket(tag, mi.pss, mi.name, false);
}
appendMemBucket(stack, mi.pss, mi.name, true);
if (mi.oomAdj >= ProcessList.FOREGROUND_APP_ADJ
&& ((i+1) >= N || memInfos.get(i+1).oomAdj != lastOomAdj)) {
stack.append("(");
for (int k=0; k<DUMP_MEM_OOM_ADJ.length; k++) {
if (DUMP_MEM_OOM_ADJ[k] == mi.oomAdj) {
stack.append(DUMP_MEM_OOM_LABEL[k]);
stack.append(":");
stack.append(DUMP_MEM_OOM_ADJ[k]);
}
}
stack.append(")");
}
}
cachedPss: the accumulated PSS of all cached processes;
step6. Update fullNativeBuilder, shortNativeBuilder, and fullJavaBuilder
// appendMemInfo() records each process's adj, pss, name, memtrack, pid, adjType, adjReason, etc.
appendMemInfo(fullNativeBuilder, mi);
// memInfos was sorted by oomAdj above, so native processes come first
if (mi.oomAdj == ProcessList.NATIVE_ADJ) {
// The short form only has native processes that are >= 512K.
if (mi.pss >= 512) {
appendMemInfo(shortNativeBuilder, mi);
} else {
extraNativeRam += mi.pss;
extraNativeMemtrack += mi.memtrack;
}
} else { // after the native processes are done, account the Java processes
// native processes with PSS below 512K are summed up and appended
// to the very end of shortNativeBuilder as one entry
if (extraNativeRam > 0) {
appendBasicMemEntry(shortNativeBuilder, ProcessList.NATIVE_ADJ,
-1, extraNativeRam, extraNativeMemtrack, "(Other native)");
shortNativeBuilder.append('\n');
extraNativeRam = 0;
}
// account the Java process
appendMemInfo(fullJavaBuilder, mi);
}
- appendMemInfo() puts all of the needed information into fullNativeBuilder;
- Native processes at or above 512KB are also recorded in shortNativeBuilder;
- Native processes below 512KB are only summed up, and finally appended to shortNativeBuilder as a single "(Other native)" entry;
- Java process information goes into fullJavaBuilder;
fullJavaBuilder.append(" ");
ProcessList.appendRamKb(fullJavaBuilder, totalPss);
fullJavaBuilder.append(": TOTAL");
if (totalMemtrack > 0) {
fullJavaBuilder.append(" (");
fullJavaBuilder.append(stringifyKBSize(totalMemtrack));
fullJavaBuilder.append(" memtrack)");
}
fullJavaBuilder.append("\n");
At the end of fullJavaBuilder, the final total PSS and total memtrack are appended.
The statistics above are taken from the PSS perspective; those below are taken from the kernel's meminfo perspective.
step7. Read /proc/meminfo
MemInfoReader memInfo = new MemInfoReader();
memInfo.readMemInfo();
final long[] infos = memInfo.getRawInfo();
/proc/meminfo is read into the MemInfoReader member variable mInfos and fetched through getRawInfo();
step8. Print part of the meminfo data
memInfoBuilder.append(" MemInfo: ");
memInfoBuilder.append(stringifyKBSize(infos[Debug.MEMINFO_SLAB])).append(" slab, ");
memInfoBuilder.append(stringifyKBSize(infos[Debug.MEMINFO_SHMEM])).append(" shmem, ");
memInfoBuilder.append(stringifyKBSize(
infos[Debug.MEMINFO_VM_ALLOC_USED])).append(" vm alloc, ");
memInfoBuilder.append(stringifyKBSize(
infos[Debug.MEMINFO_PAGE_TABLES])).append(" page tables ");
memInfoBuilder.append(stringifyKBSize(
infos[Debug.MEMINFO_KERNEL_STACK])).append(" kernel stack\n");
memInfoBuilder.append(" ");
memInfoBuilder.append(stringifyKBSize(infos[Debug.MEMINFO_BUFFERS])).append(" buffers, ");
memInfoBuilder.append(stringifyKBSize(infos[Debug.MEMINFO_CACHED])).append(" cached, ");
memInfoBuilder.append(stringifyKBSize(infos[Debug.MEMINFO_MAPPED])).append(" mapped, ");
memInfoBuilder.append(stringifyKBSize(infos[Debug.MEMINFO_FREE])).append(" free\n");
After " MemInfo: " the following fields are printed:
- Slab: all slab pages, both reclaimable and unreclaimable;
- Shmem: shared memory pages;
- VmallocUsed: memory already in use inside the vmalloc area;
- PageTables: all pages used for page tables;
- KernelStack: memory used by kernel stacks;
- Buffers: the block-layer cache;
- Cached: page-cache pages; a superset of Mapped;
- Mapped: all page cache mapped into user address space;
- MemFree: the remaining free physical memory;
For the exact meaning of these fields, see the article "Linux kernel parameters: meminfo".
The log looks like:
09-07 13:31:19.592 1000 896 17159 I ActivityManager: MemInfo: 260,664K slab, 11,208K shmem, 81,256K vm alloc, 31,436K page tables 24,080K kernel stack
09-07 13:31:19.592 1000 896 17159 I ActivityManager: 204K buffers, 131,584K cached, 83,920K mapped, 13,164K free
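The raw /proc/meminfo lines behind these figures have the shape `MemFree:   13164 kB`. A minimal sketch of how such lines could be parsed, as a simplified, hypothetical version of what MemInfoReader.readMemInfo() does:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MemInfoParser {
    // Turn "Key:   value kB" lines into a map of kilobyte values.
    // (Real MemInfoReader parses a fixed set of fields into a long[].)
    static Map<String, Long> parse(List<String> lines) {
        Map<String, Long> out = new HashMap<>();
        for (String line : lines) {
            String[] parts = line.split(":");
            if (parts.length != 2) continue;
            // Strip the trailing " kB" unit and surrounding whitespace.
            String value = parts[1].trim().replace("kB", "").trim();
            out.put(parts[0].trim(), Long.parseLong(value));
        }
        return out;
    }
}
```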
step9. Account the ZRAM memory
if (infos[Debug.MEMINFO_ZRAM_TOTAL] != 0) {
memInfoBuilder.append(" ZRAM: ");
memInfoBuilder.append(stringifyKBSize(infos[Debug.MEMINFO_ZRAM_TOTAL]));
memInfoBuilder.append(" RAM, ");
memInfoBuilder.append(stringifyKBSize(infos[Debug.MEMINFO_SWAP_TOTAL]));
memInfoBuilder.append(" swap total, ");
memInfoBuilder.append(stringifyKBSize(infos[Debug.MEMINFO_SWAP_FREE]));
memInfoBuilder.append(" swap free\n");
}
Three values are read:
- ZRAM: the third value of the node /sys/block/zram0/mm_stat, the memory occupied by ZRAM itself;
- SwapTotal: the total memory available for swap;
- SwapFree: the swap memory still free;
The log looks like:
02-09 17:15:20.887254 1164 13478 I ActivityManager: ZRAM: 166,244K RAM, 1,572,860K swap total, 1,047,972K swap free
step10. Account Free RAM
memInfoBuilder.append(" Free RAM: ");
memInfoBuilder.append(stringifyKBSize(cachedPss + memInfo.getCachedSizeKb()
+ memInfo.getFreeSizeKb()));
memInfoBuilder.append("\n");
The Free RAM shown here is the memory the system currently considers usable. It consists of:
- the PSS of cached processes;
- the system cache: Cached + SReclaimable + Buffers - Mapped;
- MemFree;
On top of MemFree, this counts both user-space cached processes and the kernel's caches, on the assumption that all of it can be reclaimed; in practice the truly usable memory never quite reaches this value.
The log looks like:
02-09 17:15:20.887254 1164 13478 I ActivityManager: Free RAM: 1,126,524K
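The Free RAM arithmetic can be written out explicitly; the helper names below are hypothetical, but the formulas follow the two sums described above:

```java
public class FreeRamSketch {
    // Free RAM as reported in step 10 (all values in kB):
    // cached app PSS + reclaimable system cache + truly free pages.
    static long freeRamKb(long cachedPss, long systemCachedKb, long memFreeKb) {
        return cachedPss + systemCachedKb + memFreeKb;
    }

    // The "system cached" term, derived from /proc/meminfo:
    // Cached + SReclaimable + Buffers - Mapped.
    static long systemCachedKb(long cached, long sReclaimable, long buffers, long mapped) {
        return cached + sReclaimable + buffers - mapped;
    }
}
```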
step11. Account the memory used by ION
long kernelUsed = memInfo.getKernelUsedSizeKb();
final long ionHeap = Debug.getIonHeapsSizeKb();
if (ionHeap > 0) {
final long ionMapped = Debug.getIonMappedSizeKb();
final long ionUnmapped = ionHeap - ionMapped;
final long ionPool = Debug.getIonPoolsSizeKb();
memInfoBuilder.append(" ION: ");
memInfoBuilder.append(stringifyKBSize(ionHeap + ionPool));
memInfoBuilder.append("\n");
// Note: mapped ION memory is not accounted in PSS due to VM_PFNMAP flag being
// set on ION VMAs, therefore consider the entire ION heap as used kernel memory
kernelUsed += ionHeap;
}
The variable ionHeap reads the node /sys/kernel/ion/total_heaps_kb;
the variable ionPool reads the node /sys/kernel/ion/total_pools_kb;
their sum is the memory used by ION.
The log looks like:
02-09 17:15:20.887254 1164 13478 I ActivityManager: ION: 149,764K
step12. Account Used RAM
memInfoBuilder.append(" Used RAM: ");
memInfoBuilder.append(stringifyKBSize(
totalPss - cachedPss + kernelUsed));
memInfoBuilder.append("\n");
In step 11, memInfo.getKernelUsedSizeKb() computed the kernel's kernelUsed, and ionHeap was added on top to get the final kernelUsed.
Used RAM is then the memory used by user space plus the memory used by the kernel:
- user space: totalPss - cachedPss
- kernel: Shmem + SUnreclaimable + VmallocUsed + PageTables + KernelStack + ionHeap;
Note that the user-space part, usedPss = totalPss - cachedPss, still contains totalSwapPss, memory that actually sits compressed inside ZRAM, so this figure is larger than the real value. A more accurate figure would be:
totalPss - totalSwapPss + ZRAM
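Both formulas can be made concrete in a small sketch (helper names are hypothetical):

```java
public class UsedRamSketch {
    // Used RAM as reported in step 12 (all values in kB).
    static long usedRamKb(long totalPss, long cachedPss, long kernelUsed) {
        return totalPss - cachedPss + kernelUsed;
    }

    // The more accurate user-space figure suggested in the text:
    // swapped-out PSS lives compressed in ZRAM, so subtract it and
    // count the ZRAM footprint instead.
    static long userUsedKbAccurate(long totalPss, long totalSwapPss, long zramUsedKb) {
        return totalPss - totalSwapPss + zramUsedKb;
    }
}
```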
step13. Account Lost RAM
memInfoBuilder.append(" Lost RAM: ");
memInfoBuilder.append(stringifyKBSize(memInfo.getTotalSizeKb()
- (totalPss - totalSwapPss) - memInfo.getFreeSizeKb() - memInfo.getCachedSizeKb()
- kernelUsed - memInfo.getZramTotalSizeKb()));
memInfoBuilder.append("\n");
Lost RAM = MemTotal - (totalPss - totalSwapPss) - MemFree - Cached - kernelUsed - zramUsed
totalSwapPss is subtracted from totalPss because that memory has been compressed into ZRAM; accounting for it through the ZRAM term instead is more accurate.
In fact, totalPss - totalSwapPss + ZRAM can be viewed as the memory consumed by user space.
So why does Lost RAM sometimes come out negative?
From the formula above, the problem clearly lies in how kernelUsed and getCachedSizeKb() are computed.
kernelUsed is:
Shmem + SUnreclaimable + VmallocUsed + PageTables + KernelStack + ionHeap;
getCachedSizeKb() is:
KReclaimable + Buffers + Cached - Mapped;
and KReclaimable already contains SReclaimable as well as ION memory, so part of that memory ends up subtracted twice.
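The Lost RAM formula can be written out directly (a sketch with hypothetical helper names); note the result can indeed go negative when kernelUsed and the cached figure double-count memory:

```java
public class LostRamSketch {
    // Lost RAM per step 13: whatever part of MemTotal is not explained
    // by user PSS (net of swapped-out PSS), free pages, the page cache,
    // accounted kernel memory, or ZRAM (all values in kB).
    static long lostRamKb(long memTotal, long totalPss, long totalSwapPss,
            long memFree, long cachedKb, long kernelUsed, long zramTotal) {
        return memTotal - (totalPss - totalSwapPss) - memFree - cachedKb
                - kernelUsed - zramTotal;
    }
}
```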
step14. The logcat print
Slog.i(TAG, "Low on memory:");
Slog.i(TAG, shortNativeBuilder.toString());
Slog.i(TAG, fullJavaBuilder.toString());
Slog.i(TAG, memInfoBuilder.toString());
These are the lines seen in logcat, covering three aspects:
- shortNativeBuilder records every native process above 512KB; the ones below 512KB appear collectively as "Other native";
- fullJavaBuilder records the information of every Java process;
- memInfoBuilder records the system-wide /proc/meminfo statistics;
Note the following lines in logcat:
02-09 17:15:20.887217 1164 13478 I ActivityManager: pers PER 83753: com.android.systemui (pid 1520) fixed
02-09 17:15:20.887217 1164 13478 I ActivityManager: pers PER 51579: com.android.phone (pid 1902) fixed
Look at the two columns right after the tag: pers and PER correspond to the oom adj and the proc state respectively.
First, the strings corresponding to each oom adj:
- cch ---- CACHED_APP_MIN_ADJ
- svcb ---- SERVICE_B_ADJ
- prev ---- PREVIOUS_APP_ADJ
- home ---- HOME_APP_ADJ
- svc ---- SERVICE_ADJ
- hvy ---- HEAVY_WEIGHT_APP_ADJ
- bkup ---- BACKUP_APP_ADJ
- prcl ---- PERCEPTIBLE_LOW_APP_ADJ
- prcp ---- PERCEPTIBLE_APP_ADJ
- vis ---- VISIBLE_APP_ADJ
- fg ---- FOREGROUND_APP_ADJ
- psvc ---- PERSISTENT_SERVICE_ADJ
- pers ---- PERSISTENT_PROC_ADJ
- sys ---- SYSTEM_ADJ
- ntv ---- NATIVE_ADJ
Next, the strings corresponding to each proc state:
- PER ---- PROCESS_STATE_PERSISTENT
- PERU ---- PROCESS_STATE_PERSISTENT_UI
- TOP ---- PROCESS_STATE_TOP
- BTOP ---- PROCESS_STATE_BOUND_TOP
- FGS ---- PROCESS_STATE_FOREGROUND_SERVICE
- BFGS ---- PROCESS_STATE_BOUND_FOREGROUND_SERVICE
- IMPF ---- PROCESS_STATE_IMPORTANT_FOREGROUND
- IMPB ---- PROCESS_STATE_IMPORTANT_BACKGROUND
- TRNB ---- PROCESS_STATE_TRANSIENT_BACKGROUND
- BKUP ---- PROCESS_STATE_BACKUP
- SVC ---- PROCESS_STATE_SERVICE
- RCVR ---- PROCESS_STATE_RECEIVER
- TPSL ---- PROCESS_STATE_TOP_SLEEPING
- HVY ---- PROCESS_STATE_HEAVY_WEIGHT
- HOME ---- PROCESS_STATE_HOME
- LAST ---- PROCESS_STATE_LAST_ACTIVITY
- CAC ---- PROCESS_STATE_CACHED_ACTIVITY
- CACC ---- PROCESS_STATE_CACHED_ACTIVITY_CLIENT
- CRE ---- PROCESS_STATE_CACHED_RECENT
- CEM ---- PROCESS_STATE_CACHED_EMPTY
- NONE ---- PROCESS_STATE_CACHED_EMPTY
- anything else ---- ??
step15. Assemble dropBuilder
The stack builder, fullNativeBuilder, fullJavaBuilder, and memInfoBuilder built above are assembled into dropBuilder, which produces the DropBox output.
2. Triggering reportMemUsage
Next, let's analyze what triggers these prints.
2.1 reportMemUsage is driven by the REPORT_MEM_USAGE_MSG message
From the code, reportMemUsage() is triggered in exactly one place: the REPORT_MEM_USAGE_MSG case in MainHandler:
case REPORT_MEM_USAGE_MSG: {
final ArrayList<ProcessMemInfo> memInfos = (ArrayList<ProcessMemInfo>)msg.obj;
Thread thread = new Thread() {
@Override public void run() {
reportMemUsage(memInfos);
}
};
thread.start();
break;
}
The message carries one argument, which becomes the parameter of reportMemUsage(); also note that the function runs in a separate thread.
2.2 The message is sent from doLowMemReportIfNeededLocked
Likewise, the message is sent from only one place, doLowMemReportIfNeededLocked():
final void doLowMemReportIfNeededLocked(ProcessRecord dyingProc) {
// If there are no longer any background processes running,
// and the app that died was not running instrumentation,
// then tell everyone we are now low on memory.
if (!mProcessList.haveBackgroundProcessLocked()) {
boolean doReport = "1".equals(SystemProperties.get(SYSTEM_DEBUGGABLE, "0"));
if (doReport) {
long now = SystemClock.uptimeMillis();
if (now < (mLastMemUsageReportTime+5*60*1000)) {
doReport = false;
} else {
mLastMemUsageReportTime = now;
}
}
final ArrayList<ProcessMemInfo> memInfos
= doReport ? new ArrayList<ProcessMemInfo>(mProcessList.getLruSizeLocked())
: null;
EventLogTags.writeAmLowMemory(mProcessList.getLruSizeLocked());
long now = SystemClock.uptimeMillis();
for (int i = mProcessList.mLruProcesses.size() - 1; i >= 0; i--) {
ProcessRecord rec = mProcessList.mLruProcesses.get(i);
if (rec == dyingProc || rec.thread == null) {
continue;
}
if (doReport) {
memInfos.add(new ProcessMemInfo(rec.processName, rec.pid, rec.setAdj,
rec.setProcState, rec.adjType, rec.makeAdjReason()));
}
if ((rec.lastLowMemory+mConstants.GC_MIN_INTERVAL) <= now) {
// The low memory report is overriding any current
// state for a GC request. Make sure to do
// heavy/important/visible/foreground processes first.
if (rec.setAdj <= ProcessList.HEAVY_WEIGHT_APP_ADJ) {
rec.lastRequestedGc = 0;
} else {
rec.lastRequestedGc = rec.lastLowMemory;
}
rec.reportLowMemory = true;
rec.lastLowMemory = now;
mProcessesToGc.remove(rec);
addProcessToGcListLocked(rec);
}
}
if (doReport) {
Message msg = mHandler.obtainMessage(REPORT_MEM_USAGE_MSG, memInfos);
mHandler.sendMessage(msg);
}
scheduleAppGcsLocked();
}
}
A few details in this function deserve attention:
- For the mem usage report to be printed, ro.debuggable must be set to 1 (the default is 0), so the print only appears on userdebug or eng builds;
- The print appears at most once every 5 minutes;
- Only processes that are still alive are included in the report;
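The debuggable check plus the 5-minute rate limit can be sketched as a small stand-alone class mirroring the gate in doLowMemReportIfNeededLocked() (a hypothetical sketch, not the AMS code itself):

```java
public class LowMemReportThrottle {
    static final long REPORT_INTERVAL_MS = 5 * 60 * 1000;

    private long mLastMemUsageReportTime = 0;

    // Report only on debuggable builds, and at most once per interval.
    // nowUptimeMs stands in for SystemClock.uptimeMillis().
    boolean shouldReport(boolean debuggable, long nowUptimeMs) {
        if (!debuggable) {
            return false;
        }
        if (nowUptimeMs < mLastMemUsageReportTime + REPORT_INTERVAL_MS) {
            return false;
        }
        mLastMemUsageReportTime = nowUptimeMs;
        return true;
    }
}
```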
2.3 How doLowMemReportIfNeededLocked is triggered
The function is reached from two callers:
- killAllBackgroundProcesses(): a public AMS interface, invoked explicitly by callers
- appDiedLocked(): the trigger path this post focuses on
2.4 appDiedLocked
private final class AppDeathRecipient implements IBinder.DeathRecipient {
final ProcessRecord mApp;
final int mPid;
final IApplicationThread mAppThread;
AppDeathRecipient(ProcessRecord app, int pid,
IApplicationThread thread) {
...
}
@Override
public void binderDied() {
...
synchronized(ActivityManagerService.this) {
appDiedLocked(mApp, mPid, mAppThread, true, null);
}
}
}
To understand appDiedLocked(), start from ActivityThread.main(): after a new process is created it attaches to AMS, and AMS runs attachApplicationLocked(), where a death notification is registered on the application's binder:
private boolean attachApplicationLocked(@NonNull IApplicationThread thread,
int pid, int callingUid, long startSeq) {
...
final String processName = app.processName;
try {
AppDeathRecipient adr = new AppDeathRecipient(
app, pid, thread);
thread.asBinder().linkToDeath(adr, 0);
app.deathRecipient = adr;
} catch (RemoteException e) {
...
}
...
The IApplicationThread thread is created inside ActivityThread, so it belongs to the newly created process (for example a freshly started app process). The client side is the ApplicationThreadProxy object held by AMS, which lives in system_server. So when the binder server side dies (the app process dies), system_server receives the death notification and carries out the cleanup, which is where appDiedLocked() comes in.