Simpleperf summary

Simpleperf

Android Studio includes a graphical front end to Simpleperf, documented in Inspect CPU activity with CPU Profiler. Most users prefer using that graphical front end rather than using Simpleperf directly.

If you prefer the command line, you can use Simpleperf directly. Simpleperf is a general-purpose command-line CPU profiling tool, included in the NDK for Mac, Linux, and Windows.

For the full documentation, start with the Simpleperf README.

Simpleperf tips and tricks

If you are new to Simpleperf, try the especially useful commands below. For more commands and options, see the Simpleperf commands and options reference.

Find the shared libraries taking the longest execution time

You can run this command to see which .so files take the largest percentage of execution time (based on CPU cycle counts). It is a good first command to run when starting a profiling session.

$ simpleperf report --sort dso

Find the functions taking the longest execution time

Once you know which shared library takes the most execution time, you can run this command to see the percentage of time spent executing each function in that .so file.

$ simpleperf report --dsos library.so --sort symbol

Find the percentage of time spent in each thread

Execution time in a .so file can be split across multiple threads. You can run this command to see the percentage of time spent in each thread.

$ simpleperf report --sort tid,comm

Find the percentage of time spent in each object module

After finding the threads that take most of the execution time, you can use this command to isolate the object modules taking the longest execution time on those threads.

$ simpleperf report --tids threadID --sort dso

Learn how function calls are related

A call graph visualizes the stack traces that Simpleperf records during a profiling session.

You can use the report -g command to print a call graph and see which functions are called by other functions. This helps determine whether a function itself is slow or whether one or more of the functions it calls are slow.

$ simpleperf report -g

You can also use the Python script report.py -g to launch an interactive tool that displays functions. You can click each function to see how much time is spent in its child functions.
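For example, after pulling the profiling data to the host, you can run the script from the simpleperf scripts directory (the --gui flag, described in the Simpleperf docs, opens an interactive window):

# Print the call graph in the terminal.
$ ./report.py -g

# Show the call graph in an interactive window.
$ ./report.py -g --gui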

Profiling apps built with Unity

If you are profiling an app built with Unity, make sure the app is built with debug symbols by following these steps:

  1. Open the Android project in the Unity editor.
  2. In the Build Settings window for the Android platform, make sure the Development Build option is checked.
  3. Click Player Settings and set the Stripping Level property to Disabled.

 

Simpleperf

Simpleperf is a native CPU profiling tool for Android. It can be used to profile both Android applications and native processes running on Android. It can profile both Java and C++ code on Android. The simpleperf executable can run on Android >=L, and Python scripts can be used on Android >= N.

Simpleperf is part of the Android Open Source Project. The source code is here. The latest document is here.

Contents

Introduction

An introduction slide deck is here.

Simpleperf contains two parts: the simpleperf executable and Python scripts.

The simpleperf executable works similarly to linux-tools-perf, but has some specific features for the Android profiling environment:

  1. It collects more info in profiling data. Since the common workflow is “record on the device, and report on the host”, simpleperf not only collects samples in profiling data, but also collects needed symbols, device info and recording time.

  2. It delivers new features for recording.

    1. When recording dwarf based call graph, simpleperf unwinds the stack before writing a sample to file. This is to save storage space on the device.
    2. Support tracing both on CPU time and off CPU time with --trace-offcpu option.
    3. Support recording callgraphs of JITed and interpreted Java code on Android >= P.
  3. It relates closely to the Android platform.

    1. It is aware of the Android environment, e.g. using system properties to enable profiling, and using run-as to profile in an application's context.
    2. Supports reading symbols and debug information from the .gnu_debugdata section, because system libraries are built with .gnu_debugdata section starting from Android O.
    3. Supports profiling shared libraries embedded in apk files.
    4. It uses the standard Android stack unwinder, so its results are consistent with all other Android tools.
  4. It builds executables and shared libraries for different usages.

    1. Builds static executables on the device. Since static executables don't rely on any library, simpleperf executables can be pushed on any Android device and used to record profiling data.
    2. Builds executables on different hosts: Linux, Mac and Windows. These executables can be used to report on hosts.
    3. Builds report shared libraries on different hosts. The report library is used by different Python scripts to parse profiling data.

Detailed documentation for the simpleperf executable is here.

Python scripts are split into three parts according to their functions:

  1. Scripts used for recording, like app_profiler.py, run_simpleperf_without_usb_connection.py.

  2. Scripts used for reporting, like report.py, report_html.py, inferno.

  3. Scripts used for parsing profiling data, like simpleperf_report_lib.py.

Detailed documentation for the Python scripts is here.

Tools in simpleperf

The simpleperf executables and Python scripts are located in simpleperf/ in ndk releases, and in system/extras/simpleperf/scripts/ in AOSP. Their functions are listed below.

bin/: contains executables and shared libraries.

bin/android/${arch}/simpleperf: static simpleperf executables used on the device.

bin/${host}/${arch}/simpleperf: simpleperf executables used on the host, only supports reporting.

bin/${host}/${arch}/libsimpleperf_report.${so/dylib/dll}: report shared libraries used on the host.

*.py, inferno, purgatorio: Python scripts used for recording and reporting. Details are in scripts_reference.md.

Android application profiling

See android_application_profiling.md.

Android platform profiling

See android_platform_profiling.md.

Executable commands reference

See executable_commands_reference.md.

Scripts reference

See scripts_reference.md.

View the profile

See view_the_profile.md.

Answers to common issues

Why do we suggest profiling on Android >= N devices?

  1. Running on a device reflects a real running situation, so we suggest profiling on real devices instead of emulators.
  2. To profile Java code, we need ART running in oat mode, which is only available >= L for rooted devices, and >= N for non-rooted devices.
  3. Old Android versions are likely to be shipped with old kernels (< 3.18), which may not support profiling features like recording dwarf based call graphs.
  4. Old Android versions are likely to be shipped with Arm32 chips. In Arm32 mode, recording stack frame based call graphs doesn't work well.

Suggestions about recording call graphs

Below are our experiences with dwarf based call graphs and stack frame based call graphs.

dwarf based call graphs:

  1. Need support of debug information in binaries.
  2. Usually behave well on both ARM and ARM64, for both fully compiled Java code and C++ code.
  3. Can only unwind 64K of stack for each sample, so they usually can't show a complete flamegraph. But that is usually enough to identify hot places.
  4. Take more CPU time than stack frame based call graphs, so the suggested sample frequency is 1000 Hz (at most 1000 samples per second).

stack frame based call graphs:

  1. Need support of stack frame registers.
  2. Don't work well on ARM, because ARM is short of registers, and ARM and THUMB code use different stack frame registers, so the kernel can't unwind a user stack containing both ARM and THUMB code.
  3. Also don't work well on fully compiled Java code on ARM64, because the ART compiler doesn't reserve stack frame registers.
  4. Work well when profiling native programs on ARM64. One example is profiling surfaceflinger. They usually show a complete flamegraph when they work well.
  5. Take less CPU time than dwarf based call graphs. So the sample frequency can be 4000 Hz or higher.

So if you need to profile code on ARM or profile fully compiled Java code, dwarf based call graphs may be better. If you need to profile C++ code on ARM64, stack frame based call graphs may be better. In any case, you can always try dwarf based call graphs first, because they produce reasonable results when given properly unstripped binaries. If that doesn't work well enough, try stack frame based call graphs instead.
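For example, the two recording modes look like this (the process id is a placeholder; the same commands appear in the record command reference below):

# Record a dwarf based call graph.
$ simpleperf record -p <pid> -g --duration 10

# Record a stack frame based call graph.
$ simpleperf record -p <pid> --call-graph fp --duration 10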

Simpleperf may need unstripped native binaries on the device to generate good dwarf based call graphs. This can be done by downloading unstripped native libraries to the device, as here.

Why can't we always get complete DWARF-based call graphs?

DWARF-based call graphs are generated by unwinding thread stacks. When a sample is generated, up to 64KB stack data is dumped by the kernel. By unwinding the stack based on dwarf information, we get a callchain. But the thread stack can be much longer than 64KB. In that case, we can't unwind to the thread start point.

To alleviate the problem, simpleperf joins callchains after recording them. If two callchains of a thread have an entry containing the same ip and sp address, simpleperf tries to join them to make the callchains longer. So the longer we run, the more samples we get, and the more likely it is to get complete callchains; but it's still not guaranteed to get complete call graphs.

How to solve missing symbols in report?

The simpleperf record command collects symbols on device in perf.data. But if the native libraries you use on device are stripped, this will result in a lot of unknown symbols in the report. A solution is to build binary_cache on host.

# Collect binaries needed by perf.data in binary_cache/.
$ ./binary_cache_builder.py -lib NATIVE_LIB_DIR,...

The NATIVE_LIB_DIRs passed in the -lib option are the directories containing unstripped native libraries on the host. After running it, the native libraries containing symbol tables are collected in binary_cache/ for use when reporting.

$ ./report.py --symfs binary_cache

# report_html.py searches binary_cache/ automatically, so you don't need to
# pass it any argument.
$ ./report_html.py

Fix broken callchain stopped at C functions

When using dwarf based call graphs, simpleperf generates callchains during recording to save space. The debug information needed to unwind C functions is in the .debug_frame section, which is usually stripped from native libraries in apks. To fix this, we can download unstripped versions of the native libraries to the device, and ask simpleperf to use them when recording.

To use simpleperf directly:

# create native_libs dir on device, and push unstripped libs in it (nested dirs are not supported).
$ adb shell mkdir /data/local/tmp/native_libs
$ adb push <unstripped_dir>/*.so /data/local/tmp/native_libs
# run simpleperf record with --symfs option.
$ adb shell simpleperf record xxx --symfs /data/local/tmp/native_libs

To use app_profiler.py:

$ ./app_profiler.py -lib <unstripped_dir>

Show annotated source code and disassembly

To show hot places at source code and instruction level, we need to show source code and disassembly with event count annotation. Simpleperf supports showing annotated source code and disassembly for C++ code and fully compiled Java code. Simpleperf supports two ways to do it:

  1. Through report_html.py:

    1. Generate perf.data and pull it on host.
    2. Generate binary_cache, containing elf files with debug information. Use -lib option to add libs with debug info. Do it with binary_cache_builder.py -i perf.data -lib <dir_of_lib_with_debug_info>.
    3. Use report_html.py to generate report.html with annotated source code and disassembly, as described here.
  2. Through pprof.

    1. Generate perf.data and binary_cache as above.
    2. Use pprof_proto_generator.py to generate a pprof proto file.
    3. Use pprof to report a function with annotated source code, as described here.
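As a sketch, the pprof route (assuming perf.data is already on the host and a directory with debug info is available) looks like:

# Build binary_cache/ with debug info, generate pprof.profile, then view it in pprof.
$ ./binary_cache_builder.py -i perf.data -lib <dir_of_lib_with_debug_info>
$ ./pprof_proto_generator.py
$ pprof -http=:8080 pprof.profile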

Bugs and contribution

Bugs and feature requests can be submitted at https://github.com/android/ndk/issues. Patches can be uploaded to android-review.googlesource.com as here, or sent to email addresses listed here.

If you want to compile simpleperf C++ source code, follow the steps below:

  1. Download AOSP main branch as here.
  2. Build simpleperf.
$ . build/envsetup.sh
$ lunch aosp_arm64-userdebug
$ mmma system/extras/simpleperf -j30

If built successfully, out/target/product/generic_arm64/system/bin/simpleperf is for ARM64, and out/target/product/generic_arm64/system/bin/simpleperf32 is for ARM.


 

Android application profiling

This section shows how to profile an Android application. Some examples are here.

Profiling an Android application involves three steps:

  1. Prepare an Android application.
  2. Record profiling data.
  3. Report profiling data.

Contents

Prepare an Android application

Based on the profiling situation, we may need to customize the build script to generate an apk file specifically for profiling. Below are some suggestions.

  1. If you want to profile a debug build of an application:

For the debug build type, Android Studio sets android:debuggable="true" in AndroidManifest.xml, enables JNI checks and may not optimize C/C++ code. It can be profiled by simpleperf without any change.

  2. If you want to profile a release build of an application:

For the release build type, Android Studio sets android:debuggable="false" in AndroidManifest.xml, disables JNI checks and optimizes C/C++ code. However, security restrictions mean that only apps with android:debuggable set to true can be profiled. So simpleperf can only profile a release build in one of the following three circumstances: If you are on a rooted device, you can profile any app.

If you are on Android >= Q, you can add the profileableFromShell flag in AndroidManifest.xml; this makes a release app profileable by preinstalled profiling tools. In this case, simpleperf downloaded by adb will invoke the simpleperf preinstalled in the system image to profile the app.

<manifest ...>
    <application ...>
      <profileable android:shell="true" />
    </application>
</manifest>

If you are on Android >= O, we can use wrap.sh to profile a release build. Step 1: Add android:debuggable="true" in AndroidManifest.xml to enable profiling.

<manifest ...>
    <application android:debuggable="true" ...>

Step 2: Add wrap.sh in the lib/arch directories. wrap.sh runs the app without passing any debug flags to ART, so the app runs as a release app. The wrap.sh files can be added with the gradle script below in app/build.gradle.

android {
    buildTypes {
        release {
            sourceSets {
                release {
                    resources {
                        srcDir {
                            "wrap_sh_lib_dir"
                        }
                    }
                }
            }
        }
    }
}

task createWrapShLibDir {
    for (String abi : ["armeabi", "armeabi-v7a", "arm64-v8a", "x86", "x86_64"]) {
        def dir = new File("app/wrap_sh_lib_dir/lib/" + abi)
        dir.mkdirs()
        def wrapFile = new File(dir, "wrap.sh")
        wrapFile.withWriter { writer ->
            writer.write('#!/system/bin/sh\n\$@\n')
        }
    }
}
  3. If you want to profile C/C++ code:

Android Studio strips the symbol table and debug info of native libraries in the apk. So the profiling results may contain unknown symbols or broken callgraphs. To fix this, we can pass app_profiler.py a directory containing unstripped native libraries via the -lib option. Usually the directory can be the path of your Android Studio project.
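For example (the package name is the one used by the demo below; the directory is your Android Studio project containing the unstripped .so files):

$ ./app_profiler.py -p simpleperf.example.cpp -lib path_of_android_studio_project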

  4. If you want to profile Java code:

On Android >= P, simpleperf supports profiling Java code, no matter whether it is executed by the interpreter, or JITed, or compiled into native instructions. So you don't need to do anything.

On Android O, simpleperf supports profiling Java code which is compiled into native instructions, and it also needs wrap.sh to use the compiled Java code. To compile Java code, we can pass app_profiler.py the --compile_java_code option.

On Android N, simpleperf supports profiling Java code that is compiled into native instructions. To compile Java code, we can pass app_profiler.py the --compile_java_code option.
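A minimal invocation on Android N or O might be (package name as in the demo below):

$ ./app_profiler.py -p simpleperf.example.cpp --compile_java_code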

On Android <= M, simpleperf doesn't support profiling Java code.

Below I use application SimpleperfExampleCpp. It builds an app-debug.apk for profiling.

$ git clone https://android.googlesource.com/platform/system/extras
$ cd extras/simpleperf/demo
# Open SimpleperfExampleCpp project with Android studio, and build this project
# successfully, otherwise the `./gradlew` command below will fail.
$ cd SimpleperfExampleCpp

# On windows, use "gradlew" instead.
$ ./gradlew clean assemble
$ adb install -r app/build/outputs/apk/debug/app-debug.apk

Record and report profiling data

We can use app_profiler.py to profile Android applications.

# Cd to the directory of simpleperf scripts. Record perf.data.
# -p option selects the profiled app using its package name.
# --compile_java_code option compiles Java code into native instructions, which isn't needed on
# Android >= P.
# -a option selects the Activity to profile.
# -lib option gives the directory to find debug native libraries.
$ ./app_profiler.py -p simpleperf.example.cpp -a .MixActivity -lib path_of_SimpleperfExampleCpp

This will collect profiling data in perf.data in the current directory, and related native binaries in binary_cache/.

Normally we need to use the app when profiling, otherwise we may record no samples. But in this case, the MixActivity starts a busy thread. So we don't need to use the app while profiling.

# Report perf.data in stdio interface.
$ ./report.py
Cmdline: /data/data/simpleperf.example.cpp/simpleperf record ...
Arch: arm64
Event: task-clock:u (type 1, config 1)
Samples: 10023
Event count: 10023000000

Overhead  Command     Pid   Tid   Shared Object              Symbol
27.04%    BusyThread  5703  5729  /system/lib64/libart.so    art::JniMethodStart(art::Thread*)
25.87%    BusyThread  5703  5729  /system/lib64/libc.so      long StrToI<long, ...
...

report.py reports profiling data in stdio interface. If there are a lot of unknown symbols in the report, check here.

# Report perf.data in html interface.
$ ./report_html.py

# Add source code and disassembly. Change the path of source_dirs if it is not correct.
$ ./report_html.py --add_source_code --source_dirs path_of_SimpleperfExampleCpp \
      --add_disassembly

report_html.py generates report in report.html, and pops up a browser tab to show it.

Record and report call graph

We can record and report call graphs as below.

# Record dwarf based call graphs: add "-g" in the -r option.
$ ./app_profiler.py -p simpleperf.example.cpp \
        -r "-e task-clock:u -f 1000 --duration 10 -g" -lib path_of_SimpleperfExampleCpp

# Record stack frame based call graphs: add "--call-graph fp" in the -r option.
$ ./app_profiler.py -p simpleperf.example.cpp \
        -r "-e task-clock:u -f 1000 --duration 10 --call-graph fp" \
        -lib path_of_SimpleperfExampleCpp

# Report call graphs in stdio interface.
$ ./report.py -g

# Report call graphs in python Tk interface.
$ ./report.py -g --gui

# Report call graphs in html interface.
$ ./report_html.py

# Report call graphs in flamegraphs.
# On Windows, use inferno.bat instead of ./inferno.sh.
$ ./inferno.sh -sc

Report in html interface

We can use report_html.py to show profiling results in a web browser. report_html.py integrates chart statistics, sample table, flamegraphs, source code annotation and disassembly annotation. It is the recommended way to show reports.

$ ./report_html.py

Show flamegraph

To show flamegraphs, we need to first record call graphs. Flamegraphs are shown by report_html.py in the “Flamegraph” tab. We can also use inferno to show flamegraphs directly.

# On Windows, use inferno.bat instead of ./inferno.sh.
$ ./inferno.sh -sc

We can also build flamegraphs using https://github.com/brendangregg/FlameGraph. Please make sure you have perl installed.

$ git clone https://github.com/brendangregg/FlameGraph.git
$ ./report_sample.py --symfs binary_cache >out.perf
$ FlameGraph/stackcollapse-perf.pl out.perf >out.folded
$ FlameGraph/flamegraph.pl out.folded >a.svg

Report in Android Studio

The simpleperf report-sample command can convert perf.data into the protobuf format accepted by the Android Studio CPU profiler. The conversion can be done either on the device or on the host. If you have more symbol info on the host, prefer doing it on the host with the --symdir option.

$ simpleperf report-sample --protobuf --show-callchain -i perf.data -o perf.trace
# Then open perf.trace in Android Studio to show it.
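A sketch of the host-side conversion, assuming unstripped binaries were collected in binary_cache/:

# On the host, reusing symbols collected in binary_cache/.
$ simpleperf report-sample --protobuf --show-callchain -i perf.data -o perf.trace --symdir binary_cache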

Deobfuscate Java symbols

Java symbols may be obfuscated by ProGuard. To restore the original symbols in a report, we can pass a ProGuard mapping file to the report scripts or the report-sample command via --proguard-mapping-file.

$ ./report_html.py --proguard-mapping-file proguard_mapping_file.txt

Record both on CPU time and off CPU time

We can record both on CPU time and off CPU time.

First check if trace-offcpu feature is supported on the device.

$ ./run_simpleperf_on_device.py list --show-features
dwarf-based-call-graph
trace-offcpu

If trace-offcpu is supported, it will be shown in the feature list. Then we can try it.

$ ./app_profiler.py -p simpleperf.example.cpp -a .SleepActivity \
    -r "-g -e task-clock:u -f 1000 --duration 10 --trace-offcpu" \
    -lib path_of_SimpleperfExampleCpp
$ ./report_html.py --add_disassembly --add_source_code \
    --source_dirs path_of_SimpleperfExampleCpp

Profile from launch

We can profile from launch of an application.

# Start simpleperf recording, then start the Activity to profile.
$ ./app_profiler.py -p simpleperf.example.cpp -a .MainActivity

# We can also start the Activity on the device manually.
# 1. Make sure the application isn't running and isn't in the recent apps list.
# 2. Start simpleperf recording.
$ ./app_profiler.py -p simpleperf.example.cpp
# 3. Start the app manually on the device.

Control recording in application code

Simpleperf supports controlling recording from application code. Below is the workflow:

  1. Run api_profiler.py prepare -p <package_name> to allow an app to record itself using simpleperf. By default, the permission is reset after device reboot, so we need to run the script every time the device reboots. But on Android >= 13, we can use the --days option to set how long we want the permission to last.

  2. Link simpleperf app_api code in the application. The app needs to be debuggable or profileableFromShell as described here. Then the app can use the api to start/pause/resume/stop recording. To start recording, the app_api forks a child process running simpleperf, and uses pipe files to send commands to the child process. After recording, a profiling data file is generated.

  3. Run api_profiler.py collect -p <package_name> to collect profiling data files to host.

Examples are CppApi and JavaApi in demo.
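A minimal sketch of the surrounding commands (package name taken from the demo apps):

# Allow the app to record itself. Run again after each reboot, or use --days on Android >= 13.
$ ./api_profiler.py prepare -p simpleperf.example.cpp
# ... run the app, which starts/stops recording through the app_api ...
# Pull the generated profiling data files to the host.
$ ./api_profiler.py collect -p simpleperf.example.cpp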

Parse profiling data manually

We can also write python scripts to parse profiling data manually, by using simpleperf_report_lib.py. Examples are report_sample.py, report_html.py.


Android platform profiling

Contents

General Tips

Here are some tips for Android platform developers, who build and flash system images on rooted devices:

  1. After running adb root, simpleperf can be used to profile any process or system wide.
  2. It is recommended to use the latest simpleperf available in AOSP main, if you are not working on the current main branch. Scripts are in system/extras/simpleperf/scripts, binaries are in system/extras/simpleperf/scripts/bin/android.
  3. It is recommended to use app_profiler.py for recording, and report_html.py for reporting. Below is an example.
# Record surfaceflinger process for 10 seconds with dwarf based call graph. More examples are in
# scripts reference in the doc.
$ ./app_profiler.py -np surfaceflinger -r "-g --duration 10"

# Generate html report.
$ ./report_html.py
  4. Since Android >= O has symbols for system libraries on device, we don't need to use unstripped binaries in $ANDROID_PRODUCT_OUT/symbols to report call graphs. However, they are needed to add source code and disassembly (with line numbers) in the report. Below is an example.
# Record with app_profiler.py or simpleperf on the device, generating perf.data on the host.
$ ./app_profiler.py -np surfaceflinger -r "--call-graph fp --duration 10"

# Collect unstripped binaries from $ANDROID_PRODUCT_OUT/symbols to binary_cache/.
$ ./binary_cache_builder.py -lib $ANDROID_PRODUCT_OUT/symbols

# Report source code and disassembly. Disassembling all binaries is slow, so it's better to add
# --binary_filter option to only disassemble selected binaries.
$ ./report_html.py --add_source_code --source_dirs $ANDROID_BUILD_TOP --add_disassembly \
  --binary_filter surfaceflinger.so

Start simpleperf from system_server process

Sometimes we want to profile a process/system-wide when a special situation happens. In this case, we can add code starting simpleperf at the point where the situation is detected.

  1. Disable SELinux by adb shell setenforce 0, because SELinux only allows simpleperf to run in shell or in debuggable/profileable apps.

  2. Add the code below at the point where the special situation is detected.

try {
  // for capability check
  Os.prctl(OsConstants.PR_CAP_AMBIENT, OsConstants.PR_CAP_AMBIENT_RAISE,
           OsConstants.CAP_SYS_PTRACE, 0, 0);
  // Write to /data instead of /data/local/tmp. Because /data can be written by system user.
  Runtime.getRuntime().exec("/system/bin/simpleperf record -g -p " + String.valueOf(Process.myPid())
            + " -o /data/perf.data --duration 30 --log-to-android-buffer --log verbose");
} catch (Exception e) {
  Slog.e(TAG, "error while running simpleperf");
  e.printStackTrace();
}

Hardware PMU counter limit

When monitoring instruction and cache related perf events (in the hw/cache/raw/pmu categories of the list cmd), these events are mapped to PMU counters on each cpu core. But each core only has a limited number of PMU counters. If the number of events is larger than the number of PMU counters, the counters are multiplexed among events, which probably isn't what we want. We can use simpleperf stat --print-hw-counter to show the hardware counters (per core) available on the device.

On Pixel devices, the number of PMU counters on each core is usually 7, 4 of which are used by the kernel to monitor memory latency. So only 3 counters are available, and it's fine to monitor up to 3 PMU events at the same time. To monitor more than 3 events, the --use-devfreq-counters option can be used to borrow from the counters used by the kernel.
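For example (the event list is only an illustration; any events from the list cmd can be used):

# Show hardware counters available on each core.
$ simpleperf stat --print-hw-counter

# Monitor more than 3 PMU events at once by borrowing counters used by the kernel.
$ simpleperf stat -e cpu-cycles,instructions,branch-misses,cache-misses --use-devfreq-counters \
      -p 7394 --duration 10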

Get boot-time profile

On userdebug/eng devices, we can get boot-time profile via simpleperf.

Step 1. With adb root, set the options used to record the boot-time profile. Simpleperf stores the options in a persist property, persist.simpleperf.boot_record.

# simpleperf boot-record --enable "-a -g --duration 10 --exclude-perf"

Step 2. Reboot the device. When booting, init finds that the persist property is set, so it forks a background process to run simpleperf to record boot-time profile. init starts simpleperf at zygote-start stage, right after zygote is started.

$ adb reboot

Step 3. After boot, the boot-time profile is stored in /data/simpleperf_boot_data. Then we can pull the profile to host to report.

$ adb shell ls /data/simpleperf_boot_data
perf-20220126-11-47-51.data
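To report it on the host, one possible sequence (using the file name from the listing above) is:

$ adb pull /data/simpleperf_boot_data/perf-20220126-11-47-51.data .
$ ./report_html.py -i perf-20220126-11-47-51.data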

In an example boot-time profile, the timestamps show that the first sample is generated at about 4.5s after booting.


Executable commands reference

Contents

How simpleperf works

Modern CPUs have a hardware component called the performance monitoring unit (PMU). The PMU has several hardware counters, counting events like how many cpu cycles have happened, how many instructions have executed, or how many cache misses have happened.

The Linux kernel wraps these hardware counters into hardware perf events. In addition, the Linux kernel also provides hardware independent software events and tracepoint events. The Linux kernel exposes all events to userspace via the perf_event_open system call, which is used by simpleperf.

Simpleperf has three main commands: stat, record and report.

The stat command gives a summary of how many events have happened in the profiled processes in a time period. Here’s how it works:

  1. Given user options, simpleperf enables profiling by making a system call to the kernel.
  2. The kernel enables counters while the profiled processes are running.
  3. After profiling, simpleperf reads counters from the kernel, and reports a counter summary.

The record command records samples of the profiled processes in a time period. Here’s how it works:

  1. Given user options, simpleperf enables profiling by making a system call to the kernel.
  2. Simpleperf creates mapped buffers between simpleperf and the kernel.
  3. The kernel enables counters while the profiled processes are running.
  4. Each time a given number of events happen, the kernel dumps a sample to the mapped buffers.
  5. Simpleperf reads samples from the mapped buffers and stores profiling data in a file called perf.data.

The report command reads perf.data and any shared libraries used by the profiled processes, and outputs a report showing where the time was spent.
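A minimal end-to-end sketch of the three commands (the process id is a placeholder) looks like:

# Count events in process 7394 for 10 seconds.
$ simpleperf stat -p 7394 --duration 10

# Record samples of process 7394 for 10 seconds into perf.data.
$ simpleperf record -p 7394 --duration 10

# Report where the recorded time was spent.
$ simpleperf report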

Commands

Simpleperf supports several commands, listed below:

The debug-unwind command: debug/test dwarf based offline unwinding, used for debugging simpleperf.
The dump command: dumps content in perf.data, used for debugging simpleperf.
The help command: prints help information for other commands.
The kmem command: collects kernel memory allocation information (will be replaced by Python scripts).
The list command: lists all event types supported on the Android device.
The record command: profiles processes and stores profiling data in perf.data.
The report command: reports profiling data in perf.data.
The report-sample command: reports each sample in perf.data, used for supporting integration of
                           simpleperf in Android Studio.
The stat command: profiles processes and prints counter summary.

Each command supports different options, which can be seen through help message.

# List all commands.
$ simpleperf --help

# Print help message for record command.
$ simpleperf record --help

The sections below describe the most frequently used commands: list, stat, record and report.

The list command

The list command lists all events available on the device. Different devices may support different events because they have different hardware and kernels.

$ simpleperf list
List of hw-cache events:
  branch-loads
  ...
List of hardware events:
  cpu-cycles
  instructions
  ...
List of software events:
  cpu-clock
  task-clock
  ...

On ARM/ARM64, the list command also shows a list of raw events; they are the events supported by the ARM PMU on the device. The kernel has wrapped some of them into hardware events and hw-cache events. For example, raw-cpu-cycles is wrapped into cpu-cycles, and raw-instruction-retired is wrapped into instructions. The raw events are provided in case we want to use an event supported on the device that is unfortunately not wrapped by the kernel.
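For example, a raw event can be passed to -e just like a wrapped event (use an event name shown by the list command on your device):

# Count a raw PMU event directly.
$ simpleperf stat -e raw-instruction-retired -p 11904 --duration 10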

The stat command

The stat command is used to get event counter values of the profiled processes. By passing options, we can select which events to use, which processes/threads to monitor, how long to monitor and the print interval.

# Stat using default events (cpu-cycles,instructions,...), and monitor process 7394 for 10 seconds.
$ simpleperf stat -p 7394 --duration 10
Performance counter statistics:

 1,320,496,145  cpu-cycles         # 0.131736 GHz                     (100%)
   510,426,028  instructions       # 2.587047 cycles per instruction  (100%)
     4,692,338  branch-misses      # 468.118 K/sec                    (100%)
886.008130(ms)  task-clock         # 0.088390 cpus used               (100%)
           753  context-switches   # 75.121 /sec                      (100%)
           870  page-faults        # 86.793 /sec                      (100%)

Total test time: 10.023829 seconds.

Select events to stat

We can select which events to use via -e.

# Stat event cpu-cycles.
$ simpleperf stat -e cpu-cycles -p 11904 --duration 10

# Stat event cache-references and cache-misses.
$ simpleperf stat -e cache-references,cache-misses -p 11904 --duration 10

When running the stat command, if the number of hardware events is larger than the number of hardware counters available in the PMU, the kernel shares hardware counters between events, so each event is only monitored for part of the total time. In the example below, there is a percentage at the end of each row, showing the percentage of the total time that each event was actually monitored.

# Stat using event cache-references, cache-references:u,....
$ simpleperf stat -p 7394 -e cache-references,cache-references:u,cache-references:k \
      -e cache-misses,cache-misses:u,cache-misses:k,instructions --duration 1
Performance counter statistics:

4,331,018  cache-references     # 4.861 M/sec    (87%)
3,064,089  cache-references:u   # 3.439 M/sec    (87%)
1,364,959  cache-references:k   # 1.532 M/sec    (87%)
   91,721  cache-misses         # 102.918 K/sec  (87%)
   45,735  cache-misses:u       # 51.327 K/sec   (87%)
   38,447  cache-misses:k       # 43.131 K/sec   (87%)
9,688,515  instructions         # 10.561 M/sec   (89%)

Total test time: 1.026802 seconds.

In the example above, each event is monitored about 87% of the total time. But there is no guarantee that any pair of events are always monitored at the same time. If we want to have some events monitored at the same time, we can use --group.

# Stat using event cache-references, cache-references:u,....
$ simpleperf stat -p 7964 --group cache-references,cache-misses \
      --group cache-references:u,cache-misses:u --group cache-references:k,cache-misses:k \
      -e instructions --duration 1
Performance counter statistics:

3,638,900  cache-references     # 4.786 M/sec          (74%)
   65,171  cache-misses         # 1.790953% miss rate  (74%)
2,390,433  cache-references:u   # 3.153 M/sec          (74%)
   32,280  cache-misses:u       # 1.350383% miss rate  (74%)
  879,035  cache-references:k   # 1.251 M/sec          (68%)
   30,303  cache-misses:k       # 3.447303% miss rate  (68%)
8,921,161  instructions         # 10.070 M/sec         (86%)

Total test time: 1.029843 seconds.

Select target to stat

We can select which processes or threads to monitor via -p or -t. Monitoring a process is the same as monitoring all threads in the process. Simpleperf can also fork a child process to run the new command and then monitor the child process.

# Stat process 11904 and 11905.
$ simpleperf stat -p 11904,11905 --duration 10

# Stat thread 11904 and 11905.
$ simpleperf stat -t 11904,11905 --duration 10

# Start a child process running `ls`, and stat it.
$ simpleperf stat ls

# Stat the process of an Android application. This only works for debuggable apps on non-rooted
# devices.
$ simpleperf stat --app simpleperf.example.cpp

# Stat system wide using -a.
$ simpleperf stat -a --duration 10

Decide how long to stat

When monitoring existing threads, we can use --duration to decide how long to monitor. When monitoring a child process running a new command, simpleperf monitors until the child process ends. In this case, we can use Ctrl-C to stop monitoring at any time.

# Stat process 11904 for 10 seconds.
$ simpleperf stat -p 11904 --duration 10

# Stat until the child process running `ls` finishes.
$ simpleperf stat ls

# Stop monitoring using Ctrl-C.
$ simpleperf stat -p 11904 --duration 10
^C

If you want to write a script to control how long to monitor, you can send one of SIGINT, SIGTERM, SIGHUP signals to simpleperf to stop monitoring.
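For example, a script could stop a long-running stat with a signal (a sketch using standard shell job control):

# Start stat in the background, then stop it after 5 seconds with SIGINT.
$ simpleperf stat -p 11904 --duration 120 &
$ sleep 5 && kill -INT $!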

Decide the print interval

When monitoring perf counters, we can also use --interval to decide the print interval.

# Print stat for process 11904 every 300ms.
$ simpleperf stat -p 11904 --duration 10 --interval 300

# Print system wide stat at interval of 300ms for 10 seconds. Note that system wide profiling needs
# root privilege.
$ su 0 simpleperf stat -a --duration 10 --interval 300

Display counters in systrace

Simpleperf can also work with systrace to dump counters in the collected trace. Below is an example to do a system wide stat.

# Capture instructions (kernel only) and cache misses with interval of 300 milliseconds for 15
# seconds.
$ su 0 simpleperf stat -e instructions:k,cache-misses -a --interval 300 --duration 15
# On host launch systrace to collect trace for 10 seconds.
(HOST)$ external/chromium-trace/systrace.py --time=10 -o new.html sched gfx view
# Open the collected new.html in browser and perf counters will be shown up.

Show event count per thread

By default, the stat cmd outputs an event count sum for all monitored targets. But when the --per-thread option is used, the stat cmd outputs an event count for each thread in the monitored targets. It can be used to find busy threads in a process or system wide. With the --per-thread option, the stat cmd opens a perf_event_file for each existing thread. If a monitored thread creates new threads, event counts for the new threads are added to the monitored thread by default, or omitted if the --no-inherit option is also used.

# Print event counts for each thread in process 11904. Event counts for threads created after
# stat cmd will be added to threads creating them.
$ simpleperf stat --per-thread -p 11904 --duration 1

# Print event counts for all threads running in the system every 1s. Threads not running will not
# be reported.
$ su 0 simpleperf stat --per-thread -a --interval 1000 --interval-only-values

# Print event counts for all threads running in the system every 1s. Event counts for threads
# created after stat cmd will be omitted.
$ su 0 simpleperf stat --per-thread -a --interval 1000 --interval-only-values --no-inherit

Show event count per core

By default, the stat cmd outputs an event count sum for all monitored cpu cores. But when the --per-core option is used, the stat cmd outputs an event count for each core. It can be used to see how events are distributed across cores. When statting a non-system-wide target with the --per-core option, simpleperf creates a perf event for each monitored thread on each core. When a thread is in the running state, perf events on all cores are enabled, but only the perf event on the core running the thread is in the running state. So the percentage comment shows runtime_on_a_core / runtime_on_all_cores. Note that the percentage is still affected by hardware counter multiplexing. Check the simpleperf log output for ways to distinguish it.

# Print event counts for each cpu running threads in process 11904.
# A percentage shows runtime_on_a_cpu / runtime_on_all_cpus.
$ simpleperf stat --per-core -p 11904 --duration 1
Performance counter statistics:

# cpu       count  event_name   # percentage = event_run_time / enabled_time
  7    56,552,838  cpu-cycles   #   (60%)
  3    25,958,605  cpu-cycles   #   (20%)
  0    22,822,698  cpu-cycles   #   (15%)
  1     6,661,495  cpu-cycles   #   (5%)
  4     1,519,093  cpu-cycles   #   (0%)

Total test time: 1.001082 seconds.

# Print event counts for each cpu system wide.
$ su 0 simpleperf stat --per-core -a --duration 1

# Print cpu-cycle event counts for each cpu for each thread running in the system.
$ su 0 simpleperf stat -e cpu-cycles -a --per-thread --per-core --duration 1

The record command

The record command is used to dump samples of the profiled processes. Each sample can contain information like the time at which the sample was generated, the number of events since the last sample, the program counter of a thread, and the call chain of a thread.

By passing options, we can select which events to use, which processes/threads to monitor, what frequency to dump samples, how long to monitor, and where to store samples.

# Record on process 7394 for 10 seconds, using default event (cpu-cycles), using default sample
# frequency (4000 samples per second), writing records to perf.data.
$ simpleperf record -p 7394 --duration 10
simpleperf I cmd_record.cpp:316] Samples recorded: 21430. Samples lost: 0.

Select events to record

By default, the cpu-cycles event is used to evaluate consumed cpu cycles. But we can also use other events via -e.

# Record using event instructions.
$ simpleperf record -e instructions -p 11904 --duration 10

# Record using task-clock, which shows the passed CPU time in nanoseconds.
$ simpleperf record -e task-clock -p 11904 --duration 10

Select target to record

The way to select target in record command is similar to that in the stat command.

# Record process 11904 and 11905.
$ simpleperf record -p 11904,11905 --duration 10

# Record thread 11904 and 11905.
$ simpleperf record -t 11904,11905 --duration 10

# Record a child process running `ls`.
$ simpleperf record ls

# Record the process of an Android application. This only works for debuggable apps on non-rooted
# devices.
$ simpleperf record --app simpleperf.example.cpp

# Record system wide.
$ simpleperf record -a --duration 10

Set the frequency to record

We can set the frequency to dump records via -f or -c. For example, -f 4000 means dumping approximately 4000 records every second when the monitored thread runs. If a monitored thread runs 0.2s in one second (it can be preempted or blocked in other times), simpleperf dumps about 4000 * 0.2 / 1.0 = 800 records every second. Another way is using -c. For example, -c 10000 means dumping one record whenever 10000 events happen.

# Record with sample frequency 1000: sample 1000 times every second running.
$ simpleperf record -f 1000 -p 11904,11905 --duration 10

# Record with sample period 100000: sample 1 time every 100000 events.
$ simpleperf record -c 100000 -t 11904,11905 --duration 10

To avoid taking too much time generating samples, kernel >= 3.10 sets a max percentage of cpu time used for generating samples (default is 25%), and decreases the max allowed sample frequency when hitting that limit. Simpleperf uses the --cpu-percent option to adjust it, but it needs either root privilege or to be on Android >= Q.

# Record with sample frequency 10000, with max allowed cpu percent to be 50%.
$ simpleperf record -f 10000 -p 11904,11905 --duration 10 --cpu-percent 50

Decide how long to record

The way to decide how long to monitor in record command is similar to that in the stat command.

# Record process 11904 for 10 seconds.
$ simpleperf record -p 11904 --duration 10

# Record until the child process running `ls` finishes.
$ simpleperf record ls

# Stop monitoring using Ctrl-C.
$ simpleperf record -p 11904 --duration 10
^C

If you want to write a script to control how long to monitor, you can send one of SIGINT, SIGTERM, SIGHUP signals to simpleperf to stop monitoring.

Set the path to store profiling data

By default, simpleperf stores profiling data in perf.data in the current directory. But the path can be changed using -o.

# Write records to data/perf2.data.
$ simpleperf record -p 11904 -o data/perf2.data --duration 10

Record call graphs

A call graph is a tree showing function call relations. Below is an example.

main() {
    FunctionOne();
    FunctionTwo();
}
FunctionOne() {
    FunctionTwo();
    FunctionThree();
}
a call graph:
    main-> FunctionOne
       |    |
       |    |-> FunctionTwo
       |    |-> FunctionThree
       |
       |-> FunctionTwo

A call graph shows how a function calls other functions, and a reversed call graph shows how a function is called by other functions. To show a call graph, we need to first record it, then report it.

There are two ways to record a call graph: one is recording a dwarf based call graph, the other is recording a stack frame based call graph. Recording dwarf based call graphs needs debug information in native binaries, while recording stack frame based call graphs needs support of stack frame registers.

# Record a dwarf based call graph
$ simpleperf record -p 11904 -g --duration 10

# Record a stack frame based call graph
$ simpleperf record -p 11904 --call-graph fp --duration 10

Here are some suggestions about recording call graphs.

Record both on CPU time and off CPU time

Simpleperf is a CPU profiler, which generates samples for a thread only when it is running on a CPU. But sometimes we want to know where the thread time is spent off-cpu (like preempted by other threads, blocked in IO or waiting for some events). To support this, simpleperf added a --trace-offcpu option to the record command. When --trace-offcpu is used, simpleperf does the following things:

  1. Only the cpu-clock/task-clock event is allowed to be used with --trace-offcpu. This lets simpleperf generate on-cpu samples for the cpu-clock event.
  2. Simpleperf also monitors sched:sched_switch event, which will generate a sched_switch sample each time the monitored thread is scheduled off cpu.
  3. Simpleperf also records context switch records. So it knows when the thread is scheduled back on a cpu.

The samples and context switch records simpleperf collects for a thread give two types of samples:

  1. on-cpu samples generated for cpu-clock event. The period value in each sample means how many nanoseconds are spent on cpu (for the callchain of this sample).
  2. off-cpu (sched_switch) samples generated for the sched:sched_switch event. The period value is calculated by simpleperf as the timestamp of the next switch-on record minus the timestamp of the current sample. So the period value in each sample means how many nanoseconds are spent off cpu (for the callchain of this sample).

Note: in reality, switch-on records and samples may be lost. To mitigate the loss of accuracy, we calculate the period of an off-cpu sample as the timestamp of the next switch-on record or sample minus the timestamp of the current sample.

When reporting via python scripts, simpleperf_report_lib.py provides SetTraceOffCpuMode() method to control how to report the samples:

  1. on-cpu mode: only report on-cpu samples.
  2. off-cpu mode: only report off-cpu samples.
  3. on-off-cpu mode: report both on-cpu and off-cpu samples, which can be split by event name.
  4. mixed-on-off-cpu mode: report on-cpu and off-cpu samples under the same event name.

If not set, mixed-on-off-cpu mode will be used to report.

When using report_html.py, inferno and report_sample.py, the report mode can be set by --trace-offcpu option.

Below are some examples recording and reporting trace offcpu profiles.

# Check if --trace-offcpu is supported by the kernel (should be available on kernel >= 4.2).
$ simpleperf list --show-features
trace-offcpu
...

# Record with --trace-offcpu.
$ simpleperf record -g -p 11904 --duration 10 --trace-offcpu -e cpu-clock

# Record system wide with --trace-offcpu.
$ simpleperf record -a -g --duration 3 --trace-offcpu -e cpu-clock

# Record with --trace-offcpu using app_profiler.py.
$ ./app_profiler.py -p com.google.samples.apps.sunflower \
    -r "-g -e cpu-clock:u --duration 10 --trace-offcpu"

# Report on-cpu samples.
$ ./report_html.py --trace-offcpu on-cpu
# Report off-cpu samples.
$ ./report_html.py --trace-offcpu off-cpu
# Report on-cpu and off-cpu samples under different event names.
$ ./report_html.py --trace-offcpu on-off-cpu
# Report on-cpu and off-cpu samples under the same event name.
$ ./report_html.py --trace-offcpu mixed-on-off-cpu

The report command

The report command is used to report profiling data generated by the record command. The report contains a table of sample entries. Each sample entry is a row in the report. The report command groups samples belonging to the same process, thread, library and function into the same sample entry, then sorts the sample entries based on the event count each entry has.

By passing options, we can decide how to filter out uninteresting samples, how to group samples into sample entries, and where to find profiling data and binaries.

Below is an example. Records are grouped into 4 sample entries, each entry a row. There are several columns, each showing a piece of information belonging to a sample entry. The first column is Overhead, which shows the percentage of total events attributed to the current sample entry. As the perf event is cpu-cycles, the overhead is the percentage of CPU cycles used in each function.

# Reports perf.data, using only records sampled in libsudo-game-jni.so, grouping records using
# thread name(comm), process id(pid), thread id(tid), function name(symbol), and showing sample
# count for each row.
$ simpleperf report --dsos /data/app/com.example.sudogame-2/lib/arm64/libsudo-game-jni.so \
      --sort comm,pid,tid,symbol -n
Cmdline: /data/data/com.example.sudogame/simpleperf record -p 7394 --duration 10
Arch: arm64
Event: cpu-cycles (type 0, config 0)
Samples: 28235
Event count: 546356211

Overhead  Sample  Command    Pid   Tid   Symbol
59.25%    16680   sudogame  7394  7394  checkValid(Board const&, int, int)
20.42%    5620    sudogame  7394  7394  canFindSolution_r(Board&, int, int)
13.82%    4088    sudogame  7394  7394  randomBlock_r(Board&, int, int, int, int, int)
6.24%     1756    sudogame  7394  7394  @plt

Set the path to read profiling data

By default, the report command reads profiling data from perf.data in the current directory. But the path can be changed using -i.

$ simpleperf report -i data/perf2.data

Set the path to find binaries

To report function symbols, simpleperf needs to read the executable binaries used by the monitored processes to get symbol tables and debug information. By default, the paths are those of the executable binaries used by the monitored processes while recording. However, these binaries may not exist on the reporting machine or may not contain symbol tables and debug information. So we can use --symfs to redirect the paths.

# In this case, when simpleperf wants to read executable binary /A/b, it reads file in /A/b.
$ simpleperf report

# In this case, when simpleperf wants to read executable binary /A/b, it prefers file in
# /debug_dir/A/b to file in /A/b.
$ simpleperf report --symfs /debug_dir

# Read symbols for system libraries built locally. Note that this is not needed since Android O,
# which ships symbols for system libraries on device.
$ simpleperf report --symfs $ANDROID_PRODUCT_OUT/symbols

Filter samples

When reporting, not all records are of interest. The report command supports four filters to select samples of interest.

# Report records in threads having name sudogame.
$ simpleperf report --comms sudogame

# Report records in process 7394 or 7395
$ simpleperf report --pids 7394,7395

# Report records in thread 7394 or 7395.
$ simpleperf report --tids 7394,7395

# Report records in libsudo-game-jni.so.
$ simpleperf report --dsos /data/app/com.example.sudogame-2/lib/arm64/libsudo-game-jni.so

Group samples into sample entries

The report command uses --sort to decide how to group sample entries.

# Group records based on their process id: records having the same process id are in the same
# sample entry.
$ simpleperf report --sort pid

# Group records based on their thread id and thread comm: records having the same thread id and
# thread name are in the same sample entry.
$ simpleperf report --sort tid,comm

# Group records based on their binary and function: records in the same binary and function are in
# the same sample entry.
$ simpleperf report --sort dso,symbol

# Default option: --sort comm,pid,tid,dso,symbol. Group records in the same thread, and belong to
# the same function in the same binary.
$ simpleperf report

Report call graphs

To report a call graph, please make sure the profiling data is recorded with call graphs, as here.

$ simpleperf report -g

Scripts reference

Contents

Record a profile

app_profiler.py

app_profiler.py is used to record profiling data for Android applications and native executables.

# Record an Android application.
$ ./app_profiler.py -p simpleperf.example.cpp

# Record an Android application with Java code compiled into native instructions.
$ ./app_profiler.py -p simpleperf.example.cpp --compile_java_code

# Record the launch of an Activity of an Android application.
$ ./app_profiler.py -p simpleperf.example.cpp -a .SleepActivity

# Record a native process.
$ ./app_profiler.py -np surfaceflinger

# Record a native process given its pid.
$ ./app_profiler.py --pid 11324

# Record a command.
$ ./app_profiler.py -cmd \
    "dex2oat --dex-file=/data/local/tmp/app-debug.apk --oat-file=/data/local/tmp/a.oat"

# Record an Android application, and use -r to send custom options to the record command.
$ ./app_profiler.py -p simpleperf.example.cpp \
    -r "-e cpu-clock -g --duration 30"

# Record both on CPU time and off CPU time.
$ ./app_profiler.py -p simpleperf.example.cpp \
    -r "-e task-clock -g -f 1000 --duration 10 --trace-offcpu"

# Save profiling data in a custom file (like perf_custom.data) instead of perf.data.
$ ./app_profiler.py -p simpleperf.example.cpp -o perf_custom.data

Profile from launch of an application

Sometimes we want to profile the launch-time of an application. To support this, we added --app in the record command. The --app option sets the package name of the Android application to profile. If the app is not already running, the record command will poll for the app process in a loop with an interval of 1ms. So to profile from launch of an application, we can first start the record command with --app, then start the app. Below is an example.

$ ./run_simpleperf_on_device.py record --app simpleperf.example.cpp \
    -g --duration 1 -o /data/local/tmp/perf.data
# Start the app manually or using the `am` command.

To make it convenient to use, app_profiler.py supports using the -a option to start an Activity after recording has started.

$ ./app_profiler.py -p simpleperf.example.cpp -a .MainActivity

api_profiler.py

api_profiler.py is used to control recording in application code. It does preparation work before recording, and collects profiling data files after recording.

Here are the details.

run_simpleperf_without_usb_connection.py

run_simpleperf_without_usb_connection.py records profiling data while the USB cable isn't connected. api_profiler.py may be more suitable, since it also doesn't need the USB cable when recording. Below is an example.

$ ./run_simpleperf_without_usb_connection.py start -p simpleperf.example.cpp
# After the command finishes successfully, unplug the USB cable, run the
# SimpleperfExampleCpp app. After a few seconds, plug in the USB cable.
$ ./run_simpleperf_without_usb_connection.py stop
# It may take a while to stop recording. After that, the profiling data is collected in perf.data
# on host.

binary_cache_builder.py

The binary_cache directory is a directory holding binaries needed by a profiling data file. The binaries are expected to be unstripped, having debug information and symbol tables. The binary_cache directory is used by report scripts to read symbols of binaries. It is also used by report_html.py to generate annotated source code and disassembly.

By default, app_profiler.py builds the binary_cache directory after recording. But we can also build binary_cache for existing profiling data files using binary_cache_builder.py. It is useful when you record profiling data using simpleperf record directly, to do system wide profiling or record without the USB cable connected.

binary_cache_builder.py can either pull binaries from an Android device, or find binaries in directories on the host (via -lib).

# Generate binary_cache for perf.data, by pulling binaries from the device.
$ ./binary_cache_builder.py

# Generate binary_cache, by pulling binaries from the device and finding binaries in
# SimpleperfExampleCpp.
$ ./binary_cache_builder.py -lib path_of_SimpleperfExampleCpp

run_simpleperf_on_device.py

This script pushes the simpleperf executable to the device, and runs a simpleperf command on the device. It is more convenient than running adb commands manually.
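For example, it can run the list command on the device, as used earlier in this document:

$ ./run_simpleperf_on_device.py list --show-features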

Viewing the profile

Scripts in this section are for viewing the profile or converting profile data into formats used by external UIs. For recommended UIs, see view_the_profile.md.

report.py

report.py is a wrapper of the report command on the host. It accepts all options of the report command.

# Report call graph
$ ./report.py -g

# Report call graph in a GUI window implemented by Python Tk.
$ ./report.py -g --gui

report_html.py

report_html.py generates report.html based on the profiling data. Then the report.html can show the profiling result without depending on other files. So it can be shown in local browsers or passed to other machines. Depending on which command-line options are used, the content of the report.html can include: chart statistics, sample table, flamegraphs, annotated source code for each function, annotated disassembly for each function.

# Generate chart statistics, sample table and flamegraphs, based on perf.data.
$ ./report_html.py

# Add source code.
$ ./report_html.py --add_source_code --source_dirs path_of_SimpleperfExampleCpp

# Add disassembly.
$ ./report_html.py --add_disassembly

# Adding disassembly for all binaries can cost a lot of time. So we can choose to only add
# disassembly for selected binaries.
$ ./report_html.py --add_disassembly --binary_filter libgame.so

# report_html.py accepts more than one recording data file.
$ ./report_html.py -i perf1.data perf2.data

Below is an example of generating html profiling results for SimpleperfExampleCpp.

$ ./app_profiler.py -p simpleperf.example.cpp
$ ./report_html.py --add_source_code --source_dirs path_of_SimpleperfExampleCpp \
    --add_disassembly

After opening the generated report.html in a browser, there are several tabs:

The first tab is “Chart Statistics”. You can click the pie chart to show the time consumed by each process, thread, library and function.

The second tab is “Sample Table”. It shows the time taken by each function. By clicking one row in the table, we can jump to a new tab called “Function”.

The third tab is “Flamegraph”. It shows the graphs generated by inferno.

The fourth tab is “Function”. It only appears when users click a row in the “Sample Table” tab. It shows information about a function, including:

  1. A flamegraph showing functions called by that function.
  2. A flamegraph showing functions calling that function.
  3. Annotated source code of that function. It only appears when there are source code files for that function.
  4. Annotated disassembly of that function. It only appears when there are binaries containing that function.

inferno

inferno is a tool used to generate a flamegraph in an HTML file.

# Generate flamegraph based on perf.data.
# On Windows, use inferno.bat instead of ./inferno.sh.
$ ./inferno.sh -sc --record_file perf.data

# Record a native program and generate flamegraph.
$ ./inferno.sh -np surfaceflinger

purgatorio

purgatorio is a visualization tool to show samples in time order.

pprof_proto_generator.py

It converts a profiling data file into pprof.proto, a format used by pprof.

# Convert perf.data in the current directory to pprof.proto format.
$ ./pprof_proto_generator.py
# Show report in pdf format.
$ pprof -pdf pprof.profile

# Show report in html format. To show disassembly, add --tools option like:
#  --tools=objdump:<ndk_path>/toolchains/llvm/prebuilt/linux-x86_64/aarch64-linux-android/bin
# To show annotated source or disassembly, select `top` in the view menu, click a function and
# select `source` or `disassemble` in the view menu.
$ pprof -http=:8080 pprof.profile

gecko_profile_generator.py

Converts perf.data to Gecko Profile Format, the format read by Firefox Profiler.

Firefox Profiler is a powerful general-purpose profiler UI which runs locally in any browser (not just Firefox), with:

  • Per-thread tracks
  • Flamegraphs
  • Search, focus for specific stacks
  • A time series view for seeing your samples in timestamp order
  • Filtering by thread and duration

Usage:

# Record a profile of your application
$ ./app_profiler.py -p simpleperf.example.cpp

# Convert and gzip.
$ ./gecko_profile_generator.py -i perf.data | gzip > gecko-profile.json.gz

Then open gecko-profile.json.gz in Firefox Profiler.

report_sample.py

report_sample.py converts a profiling data file into the perf script text format output by linux-perf-tool.

This format can be imported into tools such as FlameGraph (shown below) and FlameScope (see the FlameScope section later in this document).

# Record a profile to perf.data
$ ./app_profiler.py <args>

# Convert perf.data in the current directory to a format used by FlameGraph.
$ ./report_sample.py --symfs binary_cache >out.perf

$ git clone https://github.com/brendangregg/FlameGraph.git
$ FlameGraph/stackcollapse-perf.pl out.perf >out.folded
$ FlameGraph/flamegraph.pl out.folded >a.svg

stackcollapse.py

stackcollapse.py converts a profiling data file (perf.data) to Brendan Gregg's “Folded Stacks” format.

Folded Stacks are lines of semicolon-delimited stack frames, root to leaf, followed by a count of events sampled in that stack, e.g.:

BusyThread;__start_thread;__pthread_start(void*);java.lang.Thread.run 17889729

All similar stacks are aggregated and sample timestamps are unused.
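
To make the format concrete, below is a minimal Python sketch (not part of simpleperf) that reads a folded-stacks file and prints the five heaviest stacks. The file name profile.folded is just an example.

#!/usr/bin/env python3
# Minimal sketch (not part of simpleperf): read a folded-stacks file and
# print the five heaviest stacks. Each line is expected to look like
# "frame1;frame2;...;frameN count", as described above.
import sys
from collections import Counter

def read_folded(path):
    totals = Counter()
    with open(path) as f:
        for line in f:
            line = line.rstrip()
            if not line:
                continue
            stack, count = line.rsplit(' ', 1)  # the count is the last field
            totals[stack] += int(count)
    return totals

if __name__ == '__main__':
    path = sys.argv[1] if len(sys.argv) > 1 else 'profile.folded'
    for stack, count in read_folded(path).most_common(5):
        print('%12d  %s' % (count, stack.split(';')[-1]))  # leaf frame only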

Folded Stacks format is readable by the FlameGraph toolkit and other tools that consume folded stacks.

Example:

# Record a profile to perf.data
$ ./app_profiler.py <args>

# Convert to Folded Stacks format
$ ./stackcollapse.py --kernel --jit | gzip > profile.folded.gz

# Visualise with FlameGraph with Java Stacks and nanosecond times
$ git clone https://github.com/brendangregg/FlameGraph.git
$ gunzip -c profile.folded.gz \
    | FlameGraph/flamegraph.pl --color=java --countname=ns \
    > profile.svg

simpleperf_report_lib.py

simpleperf_report_lib.py is a Python library used to parse profiling data files generated by the record command. Internally, it uses libsimpleperf_report.so to do the work. Generally, for each profiling data file, we create an instance of ReportLib, pass it the file path (via SetRecordFile). Then we can read all samples through GetNextSample(). For each sample, we can read its event info (via GetEventOfCurrentSample), symbol info (via GetSymbolOfCurrentSample) and call chain info (via GetCallChainOfCurrentSample). We can also get some global information, like record options (via GetRecordCmd), the arch of the device (via GetArch) and meta strings (via MetaInfo).
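
For illustration, below is a minimal sketch of that workflow, intended to be run from the simpleperf scripts directory so that the module is importable. It only uses the methods named above; the field names on the returned structures (sample.time, sample.tid, event.name, symbol.dso_name, symbol.symbol_name, callchain.nr) follow those used by report_sample.py and should be checked against the library shipped with your NDK.

#!/usr/bin/env python3
# Minimal sketch of using simpleperf_report_lib.py to walk the samples in
# perf.data. Field names follow report_sample.py and may differ between
# NDK versions.
from simpleperf_report_lib import ReportLib

lib = ReportLib()
lib.SetRecordFile('perf.data')          # path of the profiling data file
print('record cmd :', lib.GetRecordCmd())
print('device arch:', lib.GetArch())
print('meta info  :', lib.MetaInfo())

while True:
    sample = lib.GetNextSample()
    if sample is None:                  # no more samples
        break
    event = lib.GetEventOfCurrentSample()
    symbol = lib.GetSymbolOfCurrentSample()
    callchain = lib.GetCallChainOfCurrentSample()
    print('%d %d %s %s %s (callchain depth %d)' % (
        sample.time, sample.tid, event.name,
        symbol.dso_name, symbol.symbol_name, callchain.nr))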

Examples of using simpleperf_report_lib.py are in report_sample.py, report_html.py, pprof_proto_generator.py and inferno/inferno.py.


View the profile

Contents

Introduction

After using simpleperf record or app_profiler.py, we get a profile data file. The file contains a list of samples. Each sample has a timestamp, a thread id, a callstack, events (like cpu-cycles or cpu-clock) used in this sample, etc. We have many choices for viewing the profile. We can show samples in chronological order, or show aggregated flamegraphs. We can show reports in text format, or in some interactive UIs.

The sections below describe some recommended UIs for viewing the profile. Google developers can find more examples in go/gmm-profiling.

Continuous PProf UI (great flamegraph UI, but only available internally)

PProf is a mature profiling technology used extensively on Google servers, with a powerful flamegraph UI offering strong drilldown, search, pivot, profile diff, and graph visualisation.

We can use pprof_proto_generator.py to convert profiles into pprof.profile protobufs for use in pprof.

# Output all threads, broken down by threadpool.
./pprof_proto_generator.py

# Use proguard mapping.
./pprof_proto_generator.py --proguard-mapping-file proguard.map

# Just the main (UI) thread (query by thread name):
./pprof_proto_generator.py --comm com.example.android.displayingbitmaps

This will print some debug logs like “Failed to read symbols”; this is usually OK, unless those symbols are hotspots.

Upload pprof.profile to http://pprof/ UI:

# Upload all threads in profile, grouped by threadpool.
# This is usually a good default, combining threads with similar names.
pprof --flame --tagroot threadpool pprof.profile

# Upload all threads in profile, grouped by individual thread name.
pprof --flame --tagroot thread pprof.profile

# Upload all threads in profile, without grouping by thread.
pprof --flame pprof.profile

This will output a URL, for example: https://pprof.corp.google.com/?id=589a60852306144c880e36429e10b166

Firefox Profiler (great chronological UI)

We can view Android profiles using Firefox Profiler. This does not require installing Firefox -- Firefox Profiler is just a website; you can open it in any browser.

Firefox Profiler has a great chronological view, as it doesn't pre-aggregate similar stack traces like pprof does.

We can use gecko_profile_generator.py to convert raw perf.data files into a Firefox Profile, with Proguard deobfuscation.

# Create Gecko Profile
./gecko_profile_generator.py | gzip > gecko_profile.json.gz

# Create Gecko Profile using Proguard map
./gecko_profile_generator.py --proguard-mapping-file proguard.map | gzip > gecko_profile.json.gz

Then drag-and-drop gecko_profile.json.gz into Firefox Profiler.

Firefox Profiler supports:

  1. Aggregated Flamegraphs
  2. Chronological Stackcharts

And allows filtering by:

  1. Individual threads
  2. Multiple threads (Ctrl+Click thread names to select many)
  3. Timeline period
  4. Stack frame text search

FlameScope (great jank-finding UI)

Netflix's FlameScope is a rough, proof-of-concept UI that lets you spot repeating patterns of work by laying out the profile as a subsecond heatmap.

In the heatmap, each vertical stripe is one second, and each cell is 10ms. Redder cells have more samples. See FlameScope Pattern Recognition for how to spot patterns.

In an example 60-second startup profile of the DisplayBitmaps app, you can see:

The thick red vertical line on the left is startup. The long white vertical sections show the app is mostly idle, waiting for commands from instrumented tests. The periodic red blocks that follow show the app busy handling those commands.

Click the start and end cells of a duration to see a flamegraph for that duration.

Install and run Flamescope:

git clone https://github.com/Netflix/flamescope ~/flamescope
cd ~/flamescope
pip install -r requirements.txt
npm install
npm run webpack
python3 run.py

Then open FlameScope in-browser: http://localhost:5000/.

FlameScope can read gzipped perf script format profiles. Convert simpleperf perf.data to this format with report_sample.py, and place it in Flamescope's examples directory:

# Create `Linux perf script` format profile.
report_sample.py | gzip > ~/flamescope/examples/my_simpleperf_profile.gz

# Create `Linux perf script` format profile using Proguard map.
report_sample.py \
  --proguard-mapping-file proguard.map \
  | gzip > ~/flamescope/examples/my_simpleperf_profile.gz

Open the profile “as Linux Perf”, and click start and end sections to get a flamegraph of that timespan.

To investigate UI Thread Jank, filter to UI thread samples only:

report_sample.py \
  --comm com.example.android.displayingbitmaps \
  | gzip > ~/flamescope/examples/uithread.gz

Once you've identified the timespan of interest, consider also zooming into that section with Firefox Profiler, which has a more powerful flamegraph viewer.

Differential FlameGraph

See Brendan Gregg's Differential Flame Graphs blog.

Use Simpleperf's stackcollapse.py to convert perf.data to Folded Stacks format for the FlameGraph toolkit.

Consider diffing both directions: After minus Before, and Before minus After.

If you've recorded before and after your optimisation as perf_before.data and perf_after.data, and you're only interested in the UI thread:

# Generate before and after folded stacks from perf.data files
./stackcollapse.py --kernel --jit -i perf_before.data \
  --proguard-mapping-file proguard_before.map \
  --comm com.example.android.displayingbitmaps \
  > perf_before.folded
./stackcollapse.py --kernel --jit -i perf_after.data \
  --proguard-mapping-file proguard_after.map \
  --comm com.example.android.displayingbitmaps \
  > perf_after.folded

# Generate diff reports
FlameGraph/difffolded.pl -n perf_before.folded perf_after.folded \
  | FlameGraph/flamegraph.pl > diff1.svg
FlameGraph/difffolded.pl -n --negate perf_after.folded perf_before.folded \
  | FlameGraph/flamegraph.pl > diff2.svg

Android Studio Profiler

Android Studio Profiler supports recording and reporting profiles of app processes. It supports several recording methods, including one using simpleperf as backend. You can use Android Studio Profiler for both recording and reporting.

In Android Studio: open View -> Tool Windows -> Profiler, then click + -> Your Device -> Profileable Processes -> Your App.

Click into “CPU” Chart

Choose Callstack Sample Recording. Even if you're using Java, this provides better observability into ART, malloc, and the kernel.

Click Record, run your test on the device, then Stop when you're done.

Click on a thread track, then “Flame Chart” to see a chronological chart on the left and an aggregated flamechart on the right.

If you want more flexibility in recording options, or want to add proguard mapping file, you can record using simpleperf, and report using Android Studio Profiler.

We can use simpleperf report-sample to convert perf.data to trace files for Android Studio Profiler.

# Convert perf.data to perf.trace for Android Studio Profiler.
# If on Mac/Windows, use simpleperf host executable for those platforms instead.
bin/linux/x86_64/simpleperf report-sample --show-callchain --protobuf -i perf.data -o perf.trace

# Convert perf.data to perf.trace using proguard mapping file.
bin/linux/x86_64/simpleperf report-sample --show-callchain --protobuf -i perf.data -o perf.trace \
    --proguard-mapping-file proguard.map

In Android Studio: Open File -> Open -> Select perf.trace

Simpleperf HTML Report

Simpleperf can generate its own HTML Profile, which is able to show Android-specific information and separate flamegraphs for all threads, with a much rougher flamegraph UI.

This UI is fairly rough; we recommend using the Continuous PProf UI or Firefox Profiler instead. But it's useful for a quick look at your data.

Each of the following commands takes ./perf.data as input and outputs ./report.html.

# Make an HTML report.
./report_html.py

# Make an HTML report with Proguard mapping.
./report_html.py --proguard-mapping-file proguard.map

This will print some debug logs like “Failed to read symbols”; this is usually OK, unless those symbols are hotspots.

See also report_html.py's README and report_html.py -h.

PProf Interactive Command Line

Unlike the Continuous PProf UI, the pprof command line is publicly available, and allows drilldown, pivoting and filtering.

The session below demonstrates filtering to stack frames containing processBitmap.

$ pprof pprof.profile
(pprof) show=processBitmap
(pprof) top
Active filters:
   show=processBitmap
Showing nodes accounting for 2.45s, 11.44% of 21.46s total
      flat  flat%   sum%        cum   cum%
     2.45s 11.44% 11.44%      2.45s 11.44%  com.example.android.displayingbitmaps.util.ImageFetcher.processBitmap

Then show the tags of those frames, to tell which threads they are running on:

(pprof) tags
 pid: Total 2.5s
      2.5s (  100%): 31112

 thread: Total 2.5s
         1.4s (57.21%): AsyncTask #3
         1.1s (42.79%): AsyncTask #4

 threadpool: Total 2.5s
             2.5s (  100%): AsyncTask #%d

 tid: Total 2.5s
      1.4s (57.21%): 31174
      1.1s (42.79%): 31175

Contrast with another method:

(pprof) show=addBitmapToCache
(pprof) top
Active filters:
   show=addBitmapToCache
Showing nodes accounting for 1.05s, 4.88% of 21.46s total
      flat  flat%   sum%        cum   cum%
     1.05s  4.88%  4.88%      1.05s  4.88%  com.example.android.displayingbitmaps.util.ImageCache.addBitmapToCache

For more information, see the pprof README.

Simpleperf Report Command Line

The simpleperf report command reports profiles in text format.

You can call simpleperf report directly or call it via report.py.

# Report symbols in table format.
$ ./report.py --children

# Report call graph.
$ bin/linux/x86_64/simpleperf report -g -i perf.data

See also report command's README and report.py -h.

Custom Report Interface

If the UIs above can't fulfill your needs, you can use simpleperf_report_lib.py to parse perf.data, extract sample information, and feed it to any view you like.

See simpleperf_report_lib.py's README for more details.
