1. Purpose
dumpsys meminfo shows the memory usage of the system and of every process.
The -S option additionally dumps SwapPss.
2. Concepts and Terms
- USS: memory used exclusively by the process
- PSS: USS + shared memory / number of processes sharing it
- RSS: USS + shared memory
- VSS: RSS + virtual memory that is allocated but not yet backed by physical memory
3. Sample Output
Total PSS by process:
41743 kB: com.csr.BTApp (pid 1078)
36924 kB: com.android.launcher (pid 2683)
35452 kB: android.process.acore (pid 1042)
16094 kB: system (pid 782)
11609 kB: com.android.systemui (pid 851)
8564 kB: com.baidu.input (pid 2999)
5298 kB: com.android.phone (pid 959)
4443 kB: com.apical.dreamthemetime (pid 4448)
4203 kB: com.csr.csrservices (pid 982)
4130 kB: com.apical.apicalradio (pid 4518)
Total PSS by OOM adjustment:
16094 kB: System
16094 kB: system (pid 782)
21110 kB: Persistent
11609 kB: com.android.systemui (pid 851)
5298 kB: com.android.phone (pid 959)
4203 kB: com.csr.csrservices (pid 982)
36924 kB: Foreground
36924 kB: com.android.launcher (pid 2683)
85759 kB: Perceptible
41743 kB: com.csr.BTApp (pid 1078)
35452 kB: android.process.acore (pid 1042)
8564 kB: com.baidu.input (pid 2999)
4443 kB: A Services
4443 kB: com.apical.dreamthemetime (pid 4448)
4130 kB: Background
4130 kB: com.apical.apicalradio (pid 4518)
Total PSS by category:
56020 kB: Dalvik
30214 kB: Other dev
27716 kB: Native
24504 kB: Cursor
13198 kB: Unknown
7723 kB: Other mmap
6895 kB: .so mmap
1232 kB: .apk mmap
888 kB: .dex mmap
36 kB: .ttf mmap
34 kB: Ashmem
0 kB: .jar mmap
Total PSS: 168460 kB
4. How It Works
The per-process PSS and RSS figures come from /proc/$pid/smaps.
The final totals (Total RAM, Free RAM, etc.) come from /proc/meminfo.
5. Flow
1. Service registration
public static void main(String[] args) {
// MIUI ADD: Init miui service stubs
SystemServerStub.get();
new SystemServer().run();
}
private void run() {
// Start services.
try {
t.traceBegin("StartServices");
startBootstrapServices(t); // continues below
startCoreServices(t);
startOtherServices(t);
} catch (Throwable ex) {
Slog.e("System", "******************************************");
Slog.e("System", "************ Failure starting system services", ex);
throw ex;
} finally {
t.traceEnd(); // StartServices
}
}
private void startBootstrapServices(@NonNull TimingsTraceAndSlog t) {
mActivityManagerService = ActivityManagerService.Lifecycle.startService(
mSystemServiceManager, atm);
......
// Set up the Application instance for the system process and get started.
t.traceBegin("SetSystemProcess");
mActivityManagerService.setSystemProcess(); // continues below
t.traceEnd();
}
public void setSystemProcess() {
...
ServiceManager.addService("meminfo", new MemBinder(this)); // registering the "meminfo" service routes dump requests to MemBinder
}
2. Service invocation
main()  // type = Type::DUMP
--| Dumpsys::main(int argc, char* const argv[])  // args: meminfo -S
----| Dumpsys::startDumpThread(Type type, const String16& serviceName, const Vector<String16>& args)
------| service->dump(remote_end.get(), args)  // "meminfo" resolves to the remote dumpApplicationMemoryUsage
--------| mActivityManagerService.dumpApplicationMemoryUsage(fd, pw, " ", args, false, null, asProto)
----------| dumpApplicationMemoryUsage(fd, pw, prefix, opts, innerArgs, brief, procs, categoryPw)
------------| dumpApplicationMemoryUsageHeader(pw, uptime, realtime, opts.isCheckinRequest, opts.isCompact)
------------| Debug.getMemoryInfo(pid, mi)  // native, implemented in C++
--------------| android_os_Debug_getDirtyPagesPid(JNIEnv *env, jobject clazz, jint pid, jobject object)
----------------| load_maps(pid, stats, &foundSwapPss)
----------------| read_memtrack_memory(pid, &graphics_mem)
------------| format and print the collected memory info in order
----| Dumpsys::stopDumpThread(dumpComplete);
In ActivityManagerService.java:
if (collectNative) { // this block runs when printing memory info for all processes
mi = null;
final Debug.MemoryInfo[] memInfos = new Debug.MemoryInfo[1];
mAppProfiler.forAllCpuStats((st) -> {
if (st.vsize > 0 && procMemsMap.indexOfKey(st.pid) < 0) {
long memtrackGraphics = 0;
long memtrackGl = 0;
if (memInfos[0] == null) {
memInfos[0] = new Debug.MemoryInfo();
}
final Debug.MemoryInfo info = memInfos[0];
if (!brief && !opts.oomOnly) {
if (!Debug.getMemoryInfo(st.pid, info)) {
return;
}
memtrackGraphics = info.getOtherPrivate(Debug.MemoryInfo.OTHER_GRAPHICS);
memtrackGl = info.getOtherPrivate(Debug.MemoryInfo.OTHER_GL);
} else {
long pss = Debug.getPss(st.pid, tmpLong, memtrackTmp);
if (pss == 0) {
return;
}
info.nativePss = (int) pss;
info.nativePrivateDirty = (int) tmpLong[0];
info.nativeRss = (int) tmpLong[2];
memtrackGraphics = memtrackTmp[1];
memtrackGl = memtrackTmp[2];
}
final long myTotalPss = info.getTotalPss();
final long myTotalSwapPss = info.getTotalSwappedOutPss();
final long myTotalRss = info.getTotalRss();
ss[INDEX_TOTAL_PSS] += myTotalPss;
ss[INDEX_TOTAL_SWAP_PSS] += myTotalSwapPss;
ss[INDEX_TOTAL_RSS] += myTotalRss;
ss[INDEX_TOTAL_NATIVE_PSS] += myTotalPss;
ss[INDEX_TOTAL_MEMTRACK_GRAPHICS] += memtrackGraphics;
ss[INDEX_TOTAL_MEMTRACK_GL] += memtrackGl;
MemItem pssItem = new MemItem(st.name + " (pid " + st.pid + ")",
st.name, myTotalPss, info.getSummaryTotalSwapPss(), myTotalRss,
st.pid, false);
procMems.add(pssItem);
ss[INDEX_NATIVE_PSS] += info.nativePss;
ss[INDEX_NATIVE_SWAP_PSS] += info.nativeSwappedOutPss;
ss[INDEX_NATIVE_RSS] += info.nativeRss;
ss[INDEX_DALVIK_PSS] += info.dalvikPss;
ss[INDEX_DALVIK_SWAP_PSS] += info.dalvikSwappedOutPss;
ss[INDEX_DALVIK_RSS] += info.dalvikRss;
for (int j = 0; j < dalvikSubitemPss.length; j++) { // PSS of the remaining (sub-heap) categories; covered below under the Java-to-native parameter passing
dalvikSubitemPss[j] += info.getOtherPss(
Debug.MemoryInfo.NUM_OTHER_STATS + j);
dalvikSubitemSwapPss[j] +=
info.getOtherSwappedOutPss(Debug.MemoryInfo.NUM_OTHER_STATS + j);
dalvikSubitemRss[j] += info.getOtherRss(Debug.MemoryInfo.NUM_OTHER_STATS
+ j);
}
ss[INDEX_OTHER_PSS] += info.otherPss;
ss[INDEX_OTHER_SWAP_PSS] += info.otherSwappedOutPss;
ss[INDEX_OTHER_RSS] += info.otherRss;
for (int j = 0; j < Debug.MemoryInfo.NUM_OTHER_STATS; j++) {
long mem = info.getOtherPss(j);
miscPss[j] += mem;
ss[INDEX_OTHER_PSS] -= mem;
mem = info.getOtherSwappedOutPss(j);
miscSwapPss[j] += mem;
ss[INDEX_OTHER_SWAP_PSS] -= mem;
mem = info.getOtherRss(j);
miscRss[j] += mem;
ss[INDEX_OTHER_RSS] -= mem;
}
oomPss[0] += myTotalPss;
oomSwapPss[0] += myTotalSwapPss;
if (oomProcs[0] == null) {
oomProcs[0] = new ArrayList<MemItem>();
}
oomProcs[0].add(pssItem);
oomRss[0] += myTotalRss;
}
});
ArrayList<MemItem> catMems = new ArrayList<MemItem>();
catMems.add(new MemItem("Native", "Native",
ss[INDEX_NATIVE_PSS], ss[INDEX_NATIVE_SWAP_PSS], ss[INDEX_NATIVE_RSS], -1));
final int dalvikId = -2;
catMems.add(new MemItem("Dalvik", "Dalvik", ss[INDEX_DALVIK_PSS],
ss[INDEX_DALVIK_SWAP_PSS], ss[INDEX_DALVIK_RSS], dalvikId));
catMems.add(new MemItem("Unknown", "Unknown", ss[INDEX_OTHER_PSS],
ss[INDEX_OTHER_SWAP_PSS], ss[INDEX_OTHER_RSS], -3));
for (int j=0; j<Debug.MemoryInfo.NUM_OTHER_STATS; j++) {
String label = Debug.MemoryInfo.getOtherLabel(j);
catMems.add(new MemItem(label, label, miscPss[j], miscSwapPss[j], miscRss[j], j));
}
if (dalvikSubitemPss.length > 0) {
// Add dalvik subitems.
for (MemItem memItem : catMems) {
int memItemStart = 0, memItemEnd = 0;
if (memItem.id == dalvikId) {
memItemStart = Debug.MemoryInfo.OTHER_DVK_STAT_DALVIK_START;
memItemEnd = Debug.MemoryInfo.OTHER_DVK_STAT_DALVIK_END;
} else if (memItem.id == Debug.MemoryInfo.OTHER_DALVIK_OTHER) {
memItemStart = Debug.MemoryInfo.OTHER_DVK_STAT_DALVIK_OTHER_START;
memItemEnd = Debug.MemoryInfo.OTHER_DVK_STAT_DALVIK_OTHER_END;
} else if (memItem.id == Debug.MemoryInfo.OTHER_DEX) {
memItemStart = Debug.MemoryInfo.OTHER_DVK_STAT_DEX_START;
memItemEnd = Debug.MemoryInfo.OTHER_DVK_STAT_DEX_END;
} else if (memItem.id == Debug.MemoryInfo.OTHER_ART) {
memItemStart = Debug.MemoryInfo.OTHER_DVK_STAT_ART_START;
memItemEnd = Debug.MemoryInfo.OTHER_DVK_STAT_ART_END;
} else {
continue; // No subitems, continue.
}
memItem.subitems = new ArrayList<MemItem>();
for (int j=memItemStart; j<=memItemEnd; j++) {
final String name = Debug.MemoryInfo.getOtherLabel(
Debug.MemoryInfo.NUM_OTHER_STATS + j);
memItem.subitems.add(new MemItem(name, name, dalvikSubitemPss[j],
dalvikSubitemSwapPss[j], dalvikSubitemRss[j], j));
}
}
}
ArrayList<MemItem> oomMems = new ArrayList<MemItem>();
for (int j=0; j<oomPss.length; j++) {
if (oomPss[j] != 0) {
String label = opts.isCompact ? DUMP_MEM_OOM_COMPACT_LABEL[j]
: DUMP_MEM_OOM_LABEL[j];
MemItem item = new MemItem(label, label, oomPss[j], oomSwapPss[j], oomRss[j],
DUMP_MEM_OOM_ADJ[j]);
item.subitems = oomProcs[j];
oomMems.add(item);
}
}
if (!opts.isCompact) {
pw.println();
}
if (!brief && !opts.oomOnly && !opts.isCompact) {
pw.println();
pw.println("Total RSS by process:");
dumpMemItems(pw, " ", "proc", procMems, true, opts.isCompact, false, false);
pw.println();
}
if (!opts.isCompact) {
pw.println("Total RSS by OOM adjustment:");
}
dumpMemItems(pw, " ", "oom", oomMems, false, opts.isCompact, false, false);
if (!brief && !opts.oomOnly) {
PrintWriter out = categoryPw != null ? categoryPw : pw;
if (!opts.isCompact) {
out.println();
out.println("Total RSS by category:");
}
dumpMemItems(out, " ", "cat", catMems, true, opts.isCompact, false, false);
}
// the -S option takes effect here: opts.dumpSwapPss = true makes SwapPss print alongside PSS
opts.dumpSwapPss = opts.dumpSwapPss && hasSwapPss && ss[INDEX_TOTAL_SWAP_PSS] != 0;
if (!brief && !opts.oomOnly && !opts.isCompact) {
pw.println();
pw.println("Total PSS by process:"); // key output: all processes, sorted by descending PSS
dumpMemItems(pw, " ", "proc", procMems, true, opts.isCompact, true,
opts.dumpSwapPss); // prints each process's memory info in PSS order
pw.println();
}
if (!opts.isCompact) {
pw.println("Total PSS by OOM adjustment:");
}
dumpMemItems(pw, " ", "oom", oomMems, false, opts.isCompact, true, opts.dumpSwapPss);
if (!brief && !opts.oomOnly) {
PrintWriter out = categoryPw != null ? categoryPw : pw;
if (!opts.isCompact) {
out.println();
out.println("Total PSS by category:");
}
dumpMemItems(out, " ", "cat", catMems, true, opts.isCompact, true,
opts.dumpSwapPss);
}
if (!opts.isCompact) {
pw.println();
}
MemInfoReader memInfo = new MemInfoReader(); // reads system-wide totals from /proc/meminfo
memInfo.readMemInfo();
if (ss[INDEX_TOTAL_NATIVE_PSS] > 0) {
synchronized (mProcessStats.mLock) {
final long cachedKb = memInfo.getCachedSizeKb();
final long freeKb = memInfo.getFreeSizeKb();
final long zramKb = memInfo.getZramTotalSizeKb();
final long kernelKb = memInfo.getKernelUsedSizeKb();
EventLogTags.writeAmMeminfo(cachedKb * 1024, freeKb * 1024, zramKb * 1024,
kernelKb * 1024, ss[INDEX_TOTAL_NATIVE_PSS] * 1024);
mProcessStats.addSysMemUsageLocked(cachedKb, freeKb, zramKb, kernelKb,
ss[INDEX_TOTAL_NATIVE_PSS]);
}
}
if (!brief) {
if (!opts.isCompact) {
pw.print("Total RAM: "); pw.print(stringifyKBSize(memInfo.getTotalSizeKb()));
pw.print(" (status ");
mAppProfiler.dumpLastMemoryLevelLocked(pw);
pw.print(" Free RAM: ");
/*
free mem = cached pss + cached kernel + free + ion cached + gpu cached
(sum of the system's cached app PSS, plus MemFree from /proc/meminfo, plus
Cached + Buffers - Mapped, plus memory held by the display ION and GPU modules)
*/
pw.print(stringifyKBSize(cachedPss + memInfo.getMoreCachedSizeKb()
+ memInfo.getFreeSizeKb()));
// pw.print(stringifyKBSize(cachedPss + memInfo.getCachedSizeKb()
// + memInfo.getFreeSizeKb()));
pw.print(" (");
pw.print(stringifyKBSize(cachedPss));
pw.print(" cached pss + ");
pw.print(stringifyKBSize(memInfo.getMoreCachedSizeKb()));
// pw.print(stringifyKBSize(memInfo.getCachedSizeKb()));
pw.print(" cached kernel + ");
pw.print(stringifyKBSize(memInfo.getFreeSizeKb()));
pw.println(" free)");
} else {
pw.print("ram,"); pw.print(memInfo.getTotalSizeKb()); pw.print(",");
pw.print(cachedPss + memInfo.getMoreCachedSizeKb()
+ memInfo.getFreeSizeKb()); pw.print(",");
// pw.print(cachedPss + memInfo.getCachedSizeKb()
// + memInfo.getFreeSizeKb()); pw.print(",");
pw.println(ss[INDEX_TOTAL_PSS] - cachedPss);
}
}
long kernelUsed = memInfo.getKernelUsedSizeKb();
final long ionHeap = Debug.getIonHeapsSizeKb();
final long ionPool = Debug.getIonPoolsSizeKb();
final long dmabufMapped = Debug.getDmabufMappedSizeKb();
if (ionHeap >= 0 && ionPool >= 0) {
final long ionUnmapped = ionHeap - dmabufMapped;
pw.print(" ION: ");
pw.print(stringifyKBSize(ionHeap + ionPool));
pw.print(" (");
pw.print(stringifyKBSize(dmabufMapped));
pw.print(" mapped + ");
pw.print(stringifyKBSize(ionUnmapped));
pw.print(" unmapped + ");
pw.print(stringifyKBSize(ionPool));
pw.println(" pools)");
kernelUsed += ionUnmapped;
// Note: mapped ION memory is not accounted in PSS due to VM_PFNMAP flag being
// set on ION VMAs, however it might be included by the memtrack HAL.
// Replace memtrack HAL reported Graphics category with mapped dmabufs
ss[INDEX_TOTAL_PSS] -= ss[INDEX_TOTAL_MEMTRACK_GRAPHICS];
ss[INDEX_TOTAL_PSS] += dmabufMapped;
} else {
final long totalExportedDmabuf = Debug.getDmabufTotalExportedKb();
if (totalExportedDmabuf >= 0) {
final long dmabufUnmapped = totalExportedDmabuf - dmabufMapped;
pw.print("DMA-BUF: ");
pw.print(stringifyKBSize(totalExportedDmabuf));
pw.print(" (");
pw.print(stringifyKBSize(dmabufMapped));
pw.print(" mapped + ");
pw.print(stringifyKBSize(dmabufUnmapped));
pw.println(" unmapped)");
// Account unmapped dmabufs as part of kernel memory allocations
kernelUsed += dmabufUnmapped;
// Replace memtrack HAL reported Graphics category with mapped dmabufs
ss[INDEX_TOTAL_PSS] -= ss[INDEX_TOTAL_MEMTRACK_GRAPHICS];
ss[INDEX_TOTAL_PSS] += dmabufMapped;
}
// totalDmabufHeapExported is included in totalExportedDmabuf above and hence do not
// need to be added to kernelUsed.
final long totalDmabufHeapExported = Debug.getDmabufHeapTotalExportedKb();
if (totalDmabufHeapExported >= 0) {
pw.print("DMA-BUF Heaps: ");
pw.println(stringifyKBSize(totalDmabufHeapExported));
}
final long totalDmabufHeapPool = Debug.getDmabufHeapPoolsSizeKb();
if (totalDmabufHeapPool >= 0) {
pw.print("DMA-BUF Heaps pool: ");
pw.println(stringifyKBSize(totalDmabufHeapPool));
}
}
final long gpuUsage = Debug.getGpuTotalUsageKb();
if (gpuUsage >= 0) {
final long gpuPrivateUsage = Debug.getGpuPrivateMemoryKb();
if (gpuPrivateUsage >= 0) {
final long gpuDmaBufUsage = gpuUsage - gpuPrivateUsage;
pw.print(" GPU: ");
pw.print(stringifyKBSize(gpuUsage));
pw.print(" (");
pw.print(stringifyKBSize(gpuDmaBufUsage));
pw.print(" dmabuf + ");
pw.print(stringifyKBSize(gpuPrivateUsage));
pw.println(" private)");
// Replace memtrack HAL reported GL category with private GPU allocations and
// account it as part of kernel memory allocations
ss[INDEX_TOTAL_PSS] -= ss[INDEX_TOTAL_MEMTRACK_GL];
kernelUsed += gpuPrivateUsage;
} else {
pw.print(" GPU: "); pw.println(stringifyKBSize(gpuUsage));
}
}
// Note: ION/DMA-BUF heap pools are reclaimable and hence, they are included as part of
// memInfo.getCachedSizeKb().
final long lostRAM = memInfo.getTotalSizeKb()
- (ss[INDEX_TOTAL_PSS] - ss[INDEX_TOTAL_SWAP_PSS])
- memInfo.getFreeSizeKb() - memInfo.getCachedSizeKb()
- kernelUsed - memInfo.getZramTotalSizeKb();
if (!opts.isCompact) {
pw.print(" Used RAM: "); pw.print(stringifyKBSize(ss[INDEX_TOTAL_PSS] - cachedPss
+ kernelUsed)); pw.print(" (");
pw.print(stringifyKBSize(ss[INDEX_TOTAL_PSS] - cachedPss)); pw.print(" used pss + ");
pw.print(stringifyKBSize(kernelUsed)); pw.print(" kernel)\n");
pw.print(" Lost RAM: "); pw.println(stringifyKBSize(lostRAM));
} else {
pw.print("lostram,"); pw.println(lostRAM);
}
if (!brief) {
if (memInfo.getZramTotalSizeKb() != 0) {
if (!opts.isCompact) {
pw.print(" ZRAM: ");
pw.print(stringifyKBSize(memInfo.getZramTotalSizeKb()));
pw.print(" physical used for ");
pw.print(stringifyKBSize(memInfo.getSwapTotalSizeKb()
- memInfo.getSwapFreeSizeKb()));
pw.print(" in swap (");
pw.print(stringifyKBSize(memInfo.getSwapTotalSizeKb()));
pw.println(" total swap)");
} else {
pw.print("zram,"); pw.print(memInfo.getZramTotalSizeKb()); pw.print(",");
pw.print(memInfo.getSwapTotalSizeKb()); pw.print(",");
pw.println(memInfo.getSwapFreeSizeKb());
}
}
final long[] ksm = getKsmInfo();
if (!opts.isCompact) {
if (ksm[KSM_SHARING] != 0 || ksm[KSM_SHARED] != 0 || ksm[KSM_UNSHARED] != 0
|| ksm[KSM_VOLATILE] != 0) {
pw.print(" KSM: "); pw.print(stringifyKBSize(ksm[KSM_SHARING]));
pw.print(" saved from shared ");
pw.print(stringifyKBSize(ksm[KSM_SHARED]));
pw.print(" "); pw.print(stringifyKBSize(ksm[KSM_UNSHARED]));
pw.print(" unshared; ");
pw.print(stringifyKBSize(
ksm[KSM_VOLATILE])); pw.println(" volatile");
}
pw.print(" Tuning: ");
pw.print(ActivityManager.staticGetMemoryClass());
pw.print(" (large ");
pw.print(ActivityManager.staticGetLargeMemoryClass());
pw.print("), oom ");
pw.print(stringifySize(
mProcessList.getMemLevel(ProcessList.CACHED_APP_MAX_ADJ), 1024));
pw.print(", restore limit ");
pw.print(stringifyKBSize(mProcessList.getCachedRestoreThresholdKb()));
if (ActivityManager.isLowRamDeviceStatic()) {
pw.print(" (low-ram)");
}
if (ActivityManager.isHighEndGfx()) {
pw.print(" (high-end-gfx)");
}
pw.println();
} else {
pw.print("ksm,"); pw.print(ksm[KSM_SHARING]); pw.print(",");
pw.print(ksm[KSM_SHARED]); pw.print(","); pw.print(ksm[KSM_UNSHARED]);
pw.print(","); pw.println(ksm[KSM_VOLATILE]);
pw.print("tuning,");
pw.print(ActivityManager.staticGetMemoryClass());
pw.print(',');
pw.print(ActivityManager.staticGetLargeMemoryClass());
pw.print(',');
pw.print(mProcessList.getMemLevel(ProcessList.CACHED_APP_MAX_ADJ)/1024);
if (ActivityManager.isLowRamDeviceStatic()) {
pw.print(",low-ram");
}
if (ActivityManager.isHighEndGfx()) {
pw.print(",high-end-gfx");
}
pw.println();
}
}
}
The per-process reading details live in android_os_Debug.cpp: android_os_Debug_getDirtyPagesPid() reads /proc/$pid/smaps plus the GL/EGL counters, and stores the results by type.
static jboolean android_os_Debug_getDirtyPagesPid(JNIEnv *env, jobject clazz,
jint pid, jobject object)
{
bool foundSwapPss;
stats_t stats[_NUM_HEAP];
memset(&stats, 0, sizeof(stats));
// read /proc/$pid/smaps; the per-heap totals are accumulated into the stats_t array
if (!load_maps(pid, stats, &foundSwapPss)) {
return JNI_FALSE;
}
/* Fetch this pid's graphics memory into graphics_mem, filling indices 17, 18
and 19 that smaps does not cover. After this, categories 0-19 are complete.
Background on how this data is obtained:
https://blog.csdn.net/msf568834002/article/details/78881341
https://www.cnblogs.com/pyjetson/p/14769359.html#23-read_memtrack_memory
*/
if (read_memtrack_memory(pid, &graphics_mem) == 0) {
stats[HEAP_GRAPHICS].pss = graphics_mem.graphics;
stats[HEAP_GRAPHICS].privateDirty = graphics_mem.graphics;
stats[HEAP_GRAPHICS].rss = graphics_mem.graphics;
stats[HEAP_GL].pss = graphics_mem.gl;
stats[HEAP_GL].privateDirty = graphics_mem.gl;
stats[HEAP_GL].rss = graphics_mem.gl;
stats[HEAP_OTHER_MEMTRACK].pss = graphics_mem.other;
stats[HEAP_OTHER_MEMTRACK].privateDirty = graphics_mem.other;
stats[HEAP_OTHER_MEMTRACK].rss = graphics_mem.other;
}
/* _NUM_CORE_HEAP = HEAP_NATIVE + 1 = 3 and _NUM_EXCLUSIVE_HEAP = HEAP_OTHER_MEMTRACK + 1 = 20,
so this loop folds the data of heaps 3..19 into heap 0 (HEAP_UNKNOWN).
*/
for (int i=_NUM_CORE_HEAP; i<_NUM_EXCLUSIVE_HEAP; i++) {
stats[HEAP_UNKNOWN].pss += stats[i].pss;
stats[HEAP_UNKNOWN].swappablePss += stats[i].swappablePss;
stats[HEAP_UNKNOWN].rss += stats[i].rss;
stats[HEAP_UNKNOWN].privateDirty += stats[i].privateDirty;
stats[HEAP_UNKNOWN].sharedDirty += stats[i].sharedDirty;
stats[HEAP_UNKNOWN].privateClean += stats[i].privateClean;
stats[HEAP_UNKNOWN].sharedClean += stats[i].sharedClean;
stats[HEAP_UNKNOWN].swappedOut += stats[i].swappedOut;
stats[HEAP_UNKNOWN].swappedOutPss += stats[i].swappedOutPss;
}
/* Write the stats into the Java object for HEAP_UNKNOWN (0), HEAP_DALVIK (1) and HEAP_NATIVE (2).
Because the previous loop folded heaps 3..19 into HEAP_UNKNOWN, these three slots now cover all of 0..19.
They are stored through stat_fields[0]/[1]/[2].
*/
for (int i=0; i<_NUM_CORE_HEAP; i++) {
env->SetIntField(object, stat_fields[i].pss_field, stats[i].pss);
env->SetIntField(object, stat_fields[i].pssSwappable_field, stats[i].swappablePss);
env->SetIntField(object, stat_fields[i].rss_field, stats[i].rss);
env->SetIntField(object, stat_fields[i].privateDirty_field, stats[i].privateDirty);
env->SetIntField(object, stat_fields[i].sharedDirty_field, stats[i].sharedDirty);
env->SetIntField(object, stat_fields[i].privateClean_field, stats[i].privateClean);
env->SetIntField(object, stat_fields[i].sharedClean_field, stats[i].sharedClean);
env->SetIntField(object, stat_fields[i].swappedOut_field, stats[i].swappedOut);
env->SetIntField(object, stat_fields[i].swappedOutPss_field, stats[i].swappedOutPss);
}
// foundSwapPss is set in load_maps when any SwapPss value is greater than 0
env->SetBooleanField(object, hasSwappedOutPss_field, foundSwapPss);
// next, fill in the otherStats field of the Java MemoryInfo object
jintArray otherIntArray = (jintArray)env->GetObjectField(object, otherStats_field);
// get a raw pointer to the Java otherStats array
jint* otherArray = (jint*)env->GetPrimitiveArrayCritical(otherIntArray, 0);
if (otherArray == NULL) {
return JNI_FALSE;
}
int j = 0;
/* The Java otherStats field is an int[32*9]: a single flat array holding nine counters per heap entry.
This loop stores heaps 3..(_NUM_HEAP-1), so heap and sub-heap indices both land here and some values are stored twice.
Note: on the Java side the indices are therefore offset by 3.
*/
for (int i=_NUM_CORE_HEAP; i<_NUM_HEAP; i++) {
otherArray[j++] = stats[i].pss;
otherArray[j++] = stats[i].swappablePss;
otherArray[j++] = stats[i].rss;
otherArray[j++] = stats[i].privateDirty;
otherArray[j++] = stats[i].sharedDirty;
otherArray[j++] = stats[i].privateClean;
otherArray[j++] = stats[i].sharedClean;
otherArray[j++] = stats[i].swappedOut;
otherArray[j++] = stats[i].swappedOutPss;
}
// release the critical pointer to the Java otherStats array
env->ReleasePrimitiveArrayCritical(otherIntArray, otherArray, 0);
return JNI_TRUE;
}
The stats_t type:
struct stats_t {
int pss; // accumulates MemUsage.pss
/* If the VMA's is_swappable == true:
sharing_proportion = (MemUsage.pss - MemUsage.uss) / (MemUsage.shared_clean + MemUsage.shared_dirty);
swappablePss accumulates (sharing_proportion * MemUsage.shared_clean) + MemUsage.private_clean;
if is_swappable is false, this field stays 0. */
int swappablePss;
int rss; // accumulates MemUsage.rss
int privateDirty; // accumulates MemUsage.private_dirty
int sharedDirty; // accumulates MemUsage.shared_dirty
int privateClean; // accumulates MemUsage.private_clean
int sharedClean; // accumulates MemUsage.shared_clean
int swappedOut; // accumulates MemUsage.swap
int swappedOutPss; // accumulates MemUsage.swap_pss
};
The MemUsage values come from each Vma:
struct Vma {
uint64_t start;
uint64_t end;
uint64_t offset;
uint16_t flags;
std::string name;
uint64_t inode;
bool is_shared;
Vma() : start(0), end(0), offset(0), flags(0), name(""), inode(0), is_shared(false) {}
Vma(uint64_t s, uint64_t e, uint64_t off, uint16_t f, const std::string& n,
uint64_t iNode, bool is_shared)
: start(s), end(e), offset(off), flags(f), name(n), inode(iNode), is_shared(is_shared) {}
~Vma() = default;
void clear() { memset(&usage, 0, sizeof(usage)); }
// Memory usage of this mapping.
MemUsage usage;
};
struct MemUsage {
uint64_t vss;//"Size:"
uint64_t rss;//"Rss:"
uint64_t pss;//"Pss:"
uint64_t uss;//"Private_Clean:"+"Private_Dirty:"
uint64_t swap;//"Swap:"
uint64_t swap_pss;//"SwapPss:"
uint64_t private_clean;//"Private_Clean:"
uint64_t private_dirty;//"Private_Dirty:"
uint64_t shared_clean;//"Shared_Clean:"
uint64_t shared_dirty;//"Shared_Dirty:"
uint64_t anon_huge_pages;//"AnonHugePages:"
uint64_t shmem_pmd_mapped;//"ShmemPmdMapped:"
uint64_t file_pmd_mapped;//"FilePmdMapped:"
uint64_t shared_hugetlb;//"Shared_Hugetlb:"
uint64_t private_hugetlb;//"Private_Hugetlb:"
uint64_t thp;
MemUsage()
: vss(0),
rss(0),
pss(0),
uss(0),
swap(0),
swap_pss(0),
private_clean(0),
private_dirty(0),
shared_clean(0),
shared_dirty(0),
anon_huge_pages(0),
shmem_pmd_mapped(0),
file_pmd_mapped(0),
shared_hugetlb(0),
private_hugetlb(0),
thp(0) {}
~MemUsage() = default;
void clear() {
vss = rss = pss = uss = swap = swap_pss = 0;
private_clean = private_dirty = shared_clean = shared_dirty = 0;
}
};
Finally, the VMA name to heap mapping:
Heap (and flags) | VMA name pattern |
HEAP_STACK | starts with "[stack" or "[anon:stack_and_tls:" |
HEAP_SO, is_swappable | ends with ".so" |
HEAP_JAR, is_swappable | ends with ".jar" |
HEAP_APK, is_swappable | ends with ".apk" |
HEAP_TTF, is_swappable | ends with ".ttf" |
HEAP_DEX, sub_heap = HEAP_DEX_APP_DEX | ends with ".odex", or ends with ".dex" and name length > 4 |
HEAP_DEX, is_swappable | ends with ".vdex"; sub_heap = HEAP_DEX_BOOT_VDEX if the path contains "@boot", "/boot" or "/apex", else HEAP_DEX_APP_VDEX |
HEAP_OAT, is_swappable | ends with ".oat" |
HEAP_ART, is_swappable | ends with ".art" or ".art]"; sub_heap = HEAP_ART_BOOT if the path contains "@boot", "/boot" or "/apex", else HEAP_ART_APP |
HEAP_UNKNOWN_DEV | starts with "/dev/" |
HEAP_GL_DEV | starts with "/dev/kgsl-3d0" |
HEAP_CURSOR | starts with "/dev/ashmem/CursorWindow" |
HEAP_DALVIK_OTHER, sub_heap = HEAP_DALVIK_OTHER_ZYGOTE_CODE_CACHE | starts with "/dev/ashmem/jit-zygote-cache" |
HEAP_ASHMEM | starts with "/dev/ashmem" |
HEAP_DALVIK_OTHER, sub_heap = HEAP_DALVIK_OTHER_APP_CODE_CACHE | starts with "/memfd:jit-cache" |
HEAP_DALVIK_OTHER, sub_heap = HEAP_DALVIK_OTHER_ZYGOTE_CODE_CACHE | starts with "/memfd:jit-zygote-cache" |
HEAP_DALVIK_OTHER, sub_heap = HEAP_DALVIK_OTHER_LINEARALLOC | starts with "[anon:dalvik-LinearAlloc" |
HEAP_DALVIK, sub_heap = HEAP_DALVIK_NORMAL | starts with "[anon:dalvik-alloc space" or "[anon:dalvik-main space" |
HEAP_DALVIK, sub_heap = HEAP_DALVIK_LARGE | starts with "[anon:dalvik-large object space" or "[anon:dalvik-free list large object space" |
HEAP_DALVIK, sub_heap = HEAP_DALVIK_NON_MOVING | starts with "[anon:dalvik-non moving space" |
HEAP_DALVIK, sub_heap = HEAP_DALVIK_ZYGOTE | starts with "[anon:dalvik-zygote space" |
HEAP_DALVIK_OTHER, sub_heap = HEAP_DALVIK_OTHER_INDIRECT_REFERENCE_TABLE | starts with "[anon:dalvik-indirect ref" |
HEAP_DALVIK_OTHER, sub_heap = HEAP_DALVIK_OTHER_APP_CODE_CACHE | starts with "[anon:dalvik-jit-code-cache" or "[anon:dalvik-data-code-cache" |
HEAP_DALVIK_OTHER, sub_heap = HEAP_DALVIK_OTHER_COMPILER_METADATA | starts with "[anon:dalvik-CompilerMetadata" |
HEAP_DALVIK_OTHER | starts with "[anon:dalvik-" |
HEAP_UNKNOWN, sub_heap = HEAP_DALVIK_OTHER_ACCOUNTING | starts with "[anon:" |
HEAP_UNKNOWN_MAP (16) | none of the above, but the name is non-empty |
HEAP_SO (9) | this VMA's start equals the previous mapping's end and the previous mapping was a .so |
6. Summary
Data sources:
Output label | Sub-label | Meaning | Source / computation |
Total RAM: | | All memory managed by the system; usually not equal to the machine's physical RAM (physical RAM - kernel reserved = system total) | "MemTotal:" in /proc/meminfo |
Free RAM: | | sum of items 1 + 2 + 3 below | |
 | 1. cached pss | PSS of processes at OOM adj 900 and above, generally processes switched to the background, reclaimable at any time | sum of the "cached" entries under "Total PSS by OOM adjustment" |
 | 2. cached kernel | | "Buffers:" + "Cached:" + "KReclaimable:" in /proc/meminfo ("SReclaimable:" is used when "KReclaimable:" is 0) |
 | 3. free | | "MemFree:" in /proc/meminfo |
DMA-BUF: | | | sum of the size of every buffer directory under /sys/kernel/dmabuf/buffers |
 | 1. mapped | | walk every /proc/$pid/maps file; for each line with a dmabuf name, add (end address - start address) as its size; sum over all pids and divide by 1024 |
 | 2. unmapped | | DMA-BUF total - the mapped part |
DMA-BUF Heaps: | | | for each heap name under /dev/dma_heap, add up the sizes of the buffers whose /sys/kernel/dmabuf/buffers/$id/exporter_name matches it |
DMA-BUF Heaps pool: | | | /sys/kernel/dma_heap/total_pools_kb |
GPU: | | | /sys/fs/bpf/map_gpu_mem_gpu_mem_total_map |
 | dmabuf | | GPU total - private |
 | private | | global total of GPU-private (GL) memory from memtrack_proc_get (the memtrack HAL defines PID 0 as GL memory) |
Used RAM: | | | |
 | used pss | | (pss + swapPss summed over all /proc/$pid/smaps) - (cachedPss from the OOM breakdown) - (EGL and GL from the category breakdown) + (the mapped part of DMA-BUF) |
 | kernel (kernelUsed) | | "Shmem:" + "SUnreclaim:" + "VmallocUsed:" + "PageTables:" + the unmapped DMA-BUF part + the GPU private part; "KernelStack:" is also added when !Debug.isVmapStack(), but apart from the very first call that value is 0 |
Lost RAM: | | | Total RAM - (pure Pss summed over all /proc/$pid/smaps) - the free part of Free RAM - ("Buffers:" + ("KReclaimable:", or "SReclaimable:" when it is 0) + "Cached:" - "Mapped:") - the kernel part of Used RAM - the physical-used part of ZRAM |
ZRAM: | physical used | | the third number in /sys/block/zram$/mm_stat; if mm_stat does not exist, the value of mem_used_total |
 | in swap | | "SwapTotal:" - "SwapFree:" |
 | total swap | | "SwapTotal:" |
Tuning: | | the maximum memory a single app may use | SystemProperties.get("dalvik.vm.heapgrowthlimit", ""), which reads the Android property via property_get |
 | (large | | SystemProperties.get("dalvik.vm.heapsize", "16m"): the maximum memory for a single process, but heapgrowthlimit takes precedence when set. heapsize is the uncapped ceiling for one VM/process; each Android app runs in its own VM, so one app crashing cannot bring down the whole system. |
 | ), oom | | ProcessList.CACHED_APP_MAX_ADJ (999) is compared against each entry of the mOomAdj array (0, 100, 200, 250, 900, 950) and the matching mOomMinFree[index] is returned, else mOomMinFree[mOomAdj.length - 1] |
 | restore limit | | the oom value / 3 |
 | (low-ram) | | printed when ActivityManager.isLowRamDeviceStatic(): RAM below 1GB |
 | (high-end-gfx) | | printed when ActivityManager.isHighEndGfx(): RAM above 1GB plus a few extra conditions (hard-coded to be satisfied in this code) |