From: https://elinux.org/Kernel_Small_Stacks
Static stack-usage analysis of an ELF image: use the kernel's own scripts/checkstack.pl.
Static call-depth analysis of an ELF image: use the third-party tool stack_size.
Dynamic monitoring of each task's stack usage: use the kernel config option CONFIG_DEBUG_STACK_USAGE.
Details follow:
checkstack.pl
The kernel source includes a script to perform static stack analysis called scripts/checkstack.pl.
Usage is as follows:
$(CROSS_COMPILE)objdump -d vmlinux | scripts/checkstack.pl [arch]
Replace [arch] with the architecture of the kernel being analyzed. Several architectures are supported, including arm, mips and x86. You should use a cross-objdump that matches the architecture you compiled the kernel for. For example, if you used arm-gnueabi-linux-gcc as your compiler, you would use arm-gnueabi-linux-objdump as your objdump program. It should have been included in your cross-compiler toolchain package.
Below is some sample output from using checkstack.pl. Note that the disassembly is first saved to an assembly file (.S), and then piped to checkstack.pl. You can examine that assembly file to see in detail which instructions reserve space on the stack, for the routines of interest reported by checkstack.pl.
An item in brackets is a module name, in the case of a loadable module. The number at the end is the stack depth detected for the function. The leading value is the address of the stack reservation code.
$ arm-eabi-objdump -d vmlinux > vmlinux-arm.S
$ cat vmlinux-arm.S | scripts/checkstack.pl arm
0x0012c858 nlmclnt_reclaim [vmlinux-arm.o]:             720
0x0025748c do_tcp_getsockopt.clone.11 [vmlinux-arm.o]:  552
0x00258d04 do_tcp_setsockopt.clone.14 [vmlinux-arm.o]:  544
0x000b2db4 do_sys_poll [vmlinux-arm.o]:                 532
0x00138744 semctl_main.clone.7 [vmlinux-arm.o]:         532
0x00138ec4 sys_semtimedop [vmlinux-arm.o]:              484
0x000c5618 default_file_splice_read [vmlinux-arm.o]:    436
0x00251de4 do_ip_setsockopt.clone.22 [vmlinux-arm.o]:   416
0x00191fd4 extract_buf [vmlinux-arm.o]:                 408
0x0019bc24 loop_get_status_old [vmlinux-arm.o]:         396
0x000e6f88 do_task_stat [vmlinux-arm.o]:                380
0x0019b8f0 loop_set_status_old [vmlinux-arm.o]:         380
0x002078f0 snd_ctl_elem_add_user [vmlinux-arm.o]:       376
0x0026267c tcp_make_synack [vmlinux-arm.o]:             372
0x00127be4 nfs_dns_parse [vmlinux-arm.o]:               368
0x000b2240 do_select [vmlinux-arm.o]:                   340
0x001f6f10 mmc_blk_issue_rw_rq [vmlinux-arm.o]:         340
0x001726a0 fb_set_var [vmlinux-arm.o]:                  336
0x000c58d0 __generic_file_splice_read [vmlinux-arm.o]:  316
0x0022a074 dev_seq_printf_stats [vmlinux-arm.o]:        316
0x0006383c tracing_splice_read_pipe [vmlinux-arm.o]:    308
0x000c53c8 vmsplice_to_pipe [vmlinux-arm.o]:            308
0x002512b4 do_ip_getsockopt [vmlinux-arm.o]:            304
0x00225f68 skb_splice_bits [vmlinux-arm.o]:             300
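To see why a routine lands in this list, look at its prologue in the disassembly. Here is a minimal, hypothetical illustration (foo() is made up, not a function from the output above): a large local buffer forces the compiler to reserve a big stack frame, and on ARM the reservation shows up as a "sub sp, sp, #N" instruction, which is the pattern checkstack.pl looks for.

/* Hypothetical example -- not taken from the kernel source above.
 * A large on-stack array forces a large stack frame. */
#include <string.h>

void foo(const char *src)
{
	char buf[512];                  /* 512 bytes reserved on the stack */

	strncpy(buf, src, sizeof(buf) - 1);
	buf[sizeof(buf) - 1] = '\0';
	/* ... use buf ... */
}

/* In the vmlinux-arm.S disassembly the prologue would appear roughly as:
 *
 *   foo:
 *       push  {r4, lr}
 *       sub   sp, sp, #520      <-- the stack reservation checkstack.pl matches
 *
 * and checkstack.pl would report a line such as:
 *   0x00123456 foo [vmlinux-arm.o]:  520
 * (address and exact depth are illustrative only)
 */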
stack_size
Below are some results for static analysis of function stack depth in the Linux kernel, using 'stack_size'. (stack_size is a custom tool written by Tim Bird, before he found out about checkstack.pl.)
See this kernel mailing list message for a patch containing 'stack_size': https://lkml.org/lkml/2011/10/18/479
ARM results
The following results include the reduction in size for 'struct poll_wqueues':
$ ./stack_size vmlinux-arm
============ RESULTS ===============
number of functions      = 14371
max function stack depth = 736
function with max depth  = nlmclnt_reclaim

Function Name               Stack Depth
=====================       ===========
__generic_file_splice_read  352
do_select                   376
loop_set_status_old         392
snd_ctl_elem_add_user       408
extract_buf                 432
default_file_splice_read    472
sys_semtimedop              520
semctl_main.clone.7         560
do_sys_poll                 568
nlmclnt_reclaim             736
x86_64 results
$ ./show_stacks_x86_64.py vmlinux-x86_64.o
============ RESULTS ===============
number of functions      = 29587
max function stack depth = 1208
function with max depth  = security_load_policy

Function Name               Stack Depth
=====================       ===========
x86_schedule_events         632
drm_crtc_helper_set_mode    632
sys_semtimedop              664
do_task_stat                712
node_read_meminfo           760
default_file_splice_read    792
do_select                   920
nlmclnt_reclaim             936
do_sys_poll                 1048
security_load_policy        1208
CONFIG_DEBUG_STACK_USAGE
There is a kernel feature to output the stack usage of each process, as well as the process that has used the most stack in the system. It is controlled by the kernel configuration option CONFIG_DEBUG_STACK_USAGE.
This option modifies the process creation path so that each new stack is initialized to all zeros. At any time, a request can be made to measure the stack depth of all running processes. The depth is calculated by counting the zero bytes from the end of thread_info up to the first non-zero word on each stack.
In more detail, it does the following:
- at process creation time, fills the stack with zeros (kernel/fork.c)
- on sysrq 't', shows the free stack space for each process, via a call to stack_not_used() (kernel/sched.c)
- without this option, the free-space value is shown as 0
- defines check_stack_usage(), which prints a message each time a new stack low-water mark is hit
- the low-water mark appears to be global across all stacks
- check_stack_usage() is only called on process exit, so you might not learn about a problem process until very late
- stack_not_used() is defined in include/linux/sched.h; it counts the number of zero bytes from the end of thread_info upward (see the sketch after this list)
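As a rough sketch of how the measurement works, modeled on the include/linux/sched.h of kernels of that era (canary handling, stack direction, and field details vary across versions and architectures), stack_not_used() simply walks upward from the lowest stack address until it finds a word the task has actually written:

/* Simplified sketch, modeled on older kernels' include/linux/sched.h.
 * Assumes a downward-growing stack with thread_info at the low end. */
#ifdef CONFIG_DEBUG_STACK_USAGE
static inline unsigned long stack_not_used(struct task_struct *p)
{
	unsigned long *n = end_of_stack(p);   /* lowest usable stack word */

	do {
		n++;        /* skip zero words the task never touched */
	} while (!*n);

	/* bytes between the stack end and the deepest point ever written */
	return (unsigned long)n - (unsigned long)end_of_stack(p);
}
#endif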
As the system runs, any time the stack low-water mark is exceeded, the kernel prints a report to the kernel message log. This can be viewed with the dmesg command.
Here is example output, grepping the kernel message log for "greatest":
# dmesg | grep greatest
kworker/u:0 used greatest stack depth: 10564 bytes left
busybox used greatest stack depth: 9512 bytes left
busybox used greatest stack depth: 9504 bytes left
grep used greatest stack depth: 9372 bytes left
init used greatest stack depth: 9028 bytes left
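These lines come from check_stack_usage(). A hedged sketch of the mechanism, modeled on the kernel/exit.c of that era (exact format strings and locking differ between versions): the function is run when a task exits, and prints only when that task beat the global low-water mark.

/* Simplified sketch, modeled on kernel/exit.c; called from do_exit(). */
#ifdef CONFIG_DEBUG_STACK_USAGE
static void check_stack_usage(void)
{
	static DEFINE_SPINLOCK(low_water_lock);
	static int lowest_to_date = THREAD_SIZE;  /* global across all tasks */
	unsigned long free;

	free = stack_not_used(current);
	if (free >= lowest_to_date)
		return;

	spin_lock(&low_water_lock);
	if (free < lowest_to_date) {
		printk(KERN_WARNING "%s used greatest stack depth: "
		       "%lu bytes left\n", current->comm, free);
		lowest_to_date = free;
	}
	spin_unlock(&low_water_lock);
}
#endif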
To get a report on the stack usage of currently running processes, you use 't' with sysrq. For example:
$ echo t >/proc/sysrq-trigger
A stack dump for each process is shown, along with stack usage information.
Here is some sample output:
$ echo t >/proc/sysrq-trigger
$ dmesg | grep -v '[[]'
  task              PC        stack  pid father
init            S 802af8b0     932     1      0 0x00000000
kthreadd        S 802af8b0    2496     2      0 0x00000000
ksoftirqd/0     S 802af8b0    2840     3      2 0x00000000
kworker/0:0     S 802af8b0    2776     4      2 0x00000000
kworker/u:0     S 802af8b0    2548     5      2 0x00000000
migration/0     S 802af8b0    2704     6      2 0x00000000
migration/1     S 802af8b0    2704     7      2 0x00000000
kworker/1:0     S 802af8b0    2560     8      2 0x00000000
ksoftirqd/1     S 802af8b0    3024     9      2 0x00000000
khelper         S 802af8b0    2824    10      2 0x00000000
sync_supers     S 802af8b0    2872    11      2 0x00000000
bdi-default     S 802af8b0    2584    12      2 0x00000000
kblockd         S 802af8b0    2824    13      2 0x00000000
khubd           S 802af8b0    2744    14      2 0x00000000
rpciod          S 802af8b0    3024    15      2 0x00000000
kworker/0:1     S 802af8b0    1240    16      2 0x00000000
kswapd0         S 802af8b0    2848    17      2 0x00000000
fsnotify_mark   S 802af8b0    2632    18      2 0x00000000
nfsiod          S 802af8b0    3024    19      2 0x00000000
kworker/u:1     S 802af8b0    2840    20      2 0x00000000
hoge            S 802af8b0    3024    23      2 0x00000000
kworker/1:1     S 802af8b0    1716    24      2 0x00000000
flush-0:13      S 802af8b0    2528    28      2 0x00000000
telnetd         S 802af8b0    1848    48      1 0x00000000
ash             R running     1264    56      1 0x00000000