Files under the Linux /proc directory

Contents

1 Files under /proc/sys/vm/
1.1 dirty_background_bytes: dirty memory limit in bytes
1.2 dirty_background_ratio: dirty memory limit as a percentage
1.3 min_free_kbytes: minimum amount of memory reserved for critical allocations
1.4 watermark_scale_factor: adjusting the watermark scale factor
1.5 drop_caches
1.6 numa_zonelist_order: setting the zone priority order
1.7 hugepages_treat_as_movable: allowing huge pages to be allocated from the movable zone
1.8 compact_memory: triggering memory compaction
1.9 compact_unevictable_allowed: whether memory compaction may move unevictable pages
1.10 extfrag_threshold: setting the external fragmentation threshold
1.11 oom_kill_allocating_task: OOM killer related
1.12 oom_dump_tasks: OOM killer related
1.13 panic_on_oom
1.14 legacy_va_layout
1.15 Other files under /proc/sys/vm/
2 /proc/slabinfo: list of all active caches
3 /proc/buddyinfo: current state of the buddy system
4 /proc/pid/maps
5 /proc/meminfo
6 /proc/ksyms or /proc/kallsyms
7 Files under /proc/sys/kernel/
7.1 /proc/sys/kernel/sched_*: process scheduling
7.1.1 sched_rt_period_us and sched_rt_runtime_us
7.1.2 sched_cfs_bandwidth_slice_us: fair bandwidth slice
7.1.3 sched_autogroup_enabled: toggle for automatic process grouping
7.1.4 sched_min_granularity_ns: minimum run time / minimum scheduling granularity
7.1.5 sched_latency_ns: period in which all processes on a run queue run once
7.1.6 sched_features: features supported by the scheduler
7.1.7 sched_wakeup_granularity_ns: base amount of time a woken process should run
7.1.8 sched_child_runs_first
7.1.9 sched_compat_yield
7.1.10 sched_migration_cost
7.1.11 sched_nr_migrate: maximum number of processes migrated to another CPU at once
7.1.12 sched_tunable_scaling
7.2 sysrq
7.3 randomize_va_space: whether address-space randomization is enabled
8 /proc/filesystems: listing registered filesystem types
9 /proc/sys/kernel/sysrq and /proc/sysrq-trigger: debugging and rescuing a dying system (the System Request key)
9.1 Overview
9.2 Usage summary
10 Files under /proc/irq/
10.1 /proc/irq/irq_id/smp_affinity and /proc/irq/irq_id/smp_affinity_list: interrupt affinity
10.2 /proc/irq/irq_id/spurious: interrupt count and unhandled interrupt count
11 Files under /proc/<pid>/ and /proc/self/
11.1 Overview of /proc/self/
11.2 /proc/<pid>/fd: a directory describing the files the process has open
11.3 /proc/<pid>/latency: shows which code paths cause large latencies
11.4 /proc/<pid>/limits: resource limits of the current process
11.5 /proc/<pid>/schedstat: scheduling time statistics for the current process
11.6 /proc/<pid>/sched
12 /proc/schedstat
12.1 Related code
12.2 CPU statistics
12.3 Domain statistics
13 /proc/stat: statistics since boot (can be used together with vmstat -s)
13.1 Related code
13.2 Per-CPU statistics
13.3 Interrupt and process statistics
14 /proc/sched_debug: detailed scheduling information
14.1 Related code
14.2 System time statistics in sched_debug (ktime / sched_clk / jiffies, etc.)
14.3 Key process-scheduling parameters in sched_debug
14.4 Per-CPU scheduling information in sched_debug
14.5 Fair scheduling class (CFS) information in sched_debug
14.6 Fair scheduling class (CFS) task-group information in sched_debug
14.7 Real-time task information in sched_debug
14.8 Deadline task information in sched_debug
14.9 Details of all runnable tasks on each CPU in sched_debug
15 /proc/uptime
16 /proc/acpi/


 

1 Files under /proc/sys/vm/

1.1 dirty_background_bytes: dirty memory limit in bytes

Contains the amount of dirty memory at which the pdflush background writeback daemon will start writeback.

Note: dirty_background_bytes is the counterpart of dirty_background_ratio. Only one of them may be specified at a time. When one sysctl is written it is immediately taken into account to evaluate the dirty memory limits and the other appears as 0 when read.

                                                              Documentation/sysctl/vm.txt

When the amount of dirty memory exceeds dirty_background_bytes, the kernel's flusher threads start writing back dirty pages.


Note: dirty_background_bytes is the counterpart of dirty_background_ratio; only one of them can be set at a time. When one of the two files is written, it immediately takes effect for computing the dirty memory limits, and the other reads back as 0.

                                                               https://blog.csdn.net/chongyang198999/article/details/48707735

1.2 dirty_background_ratio: dirty memory limit as a percentage

Contains, as a percentage of total system memory, the number of pages at which the pdflush background writeback daemon will start writing out dirty data.

                                                               Documentation/sysctl/vm.txt

When the percentage of dirty pages (relative to all available memory, i.e. free pages + reclaimable pages) reaches dirty_background_ratio, the kernel's flusher threads start writing back dirty data. Note that "all available memory" is not the same as total system memory.

                                                                https://blog.csdn.net/chongyang198999/article/details/48707735
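To see the counterpart behavior of these two files from a shell, a minimal sketch (run as root; the 64 MB figure is just an example value):

cat /proc/sys/vm/dirty_background_ratio    # e.g. 10
cat /proc/sys/vm/dirty_background_bytes    # 0 (only one of the pair is in effect)

echo $((64 * 1024 * 1024)) > /proc/sys/vm/dirty_background_bytes

cat /proc/sys/vm/dirty_background_ratio    # now reads back as 0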

1.3 min_free_kbytes: minimum amount of memory reserved for critical allocations

See 《深入 LINUX 内核架构》P116 and 《Linux 内核深度解析》P156.

1.4 watermark_scale_factor: adjusting the watermark scale factor

See 《深入 LINUX 内核架构》P117 and 《Linux 内核深度解析》P156.

1.5 drop_caches

Writing to this will cause the kernel to drop clean caches, dentries and inodes from memory, causing that memory to become free. (dentries are the directory entry cache; see 《深入 LINUX 内核架构》P431.)

To free pagecache:
        echo 1 > /proc/sys/vm/drop_caches
To free dentries and inodes:
        echo 2 > /proc/sys/vm/drop_caches
To free pagecache, dentries and inodes:
        echo 3 > /proc/sys/vm/drop_caches

As this is a non-destructive operation and dirty objects are not freeable, the user should run `sync' first.

                                                        Documentation/sysctl/vm.txt
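Putting that advice together with the values above, the usual sequence syncs first so dirty data is written back before the caches are dropped (run as root):

sync
echo 3 > /proc/sys/vm/drop_caches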

1.6 numa_zonelist_order: setting the zone priority order

《Linux 内核深度解析》P155

1.7 hugepages_treat_as_movable: allowing huge pages to be allocated from the movable zone

《Linux 内核深度解析》P289

1.8 compact_memory: triggering memory compaction

Writing any integer to /proc/sys/vm/compact_memory (the value itself is meaningless) triggers memory compaction.

                                                      《Linux 内核深度解析》P291
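For a rough before/after view of the effect, /proc/buddyinfo can be sampled around the write (run as root; the value 1 is arbitrary, any integer works):

cat /proc/buddyinfo
echo 1 > /proc/sys/vm/compact_memory
cat /proc/buddyinfo    # counts of higher-order free blocks usually go up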

1.9 compact_unevictable_allowed: whether memory compaction may move unevictable pages

Controls whether memory compaction is allowed to move unevictable pages; 1 means allowed.

                                                       《Linux 内核深度解析》P292

1.10 extfrag_threshold: setting the external fragmentation threshold

Range 0–1000, default 500.         《Linux 内核深度解析》P292

1.11 oom_kill_allocating_task: OOM killer related

Whether to kill the task that is allocating memory and thereby triggered the out-of-memory condition.

                                                        《Linux 内核深度解析》P338

1.12 oom_dump_tasks: OOM killer related

Whether the OOM killer dumps the memory usage of all user processes when it kills a process.

1.13 panic_on_oom

Whether the kernel panics (and the system reboots) when memory is exhausted.   《Linux 内核深度解析》P338

1.14 legacy_va_layout

1: use the legacy process virtual address space layout.      《深入 LINUX 内核架构》P235

1.15 Other files under /proc/sys/vm/

Documentation/sysctl/vm.txt
https://blog.csdn.net/chongyang198999/article/details/48707735

 

2 /proc/slabinfo: list of all active caches

《深入 LINUX 内核架构》P208
《深入理解 LINUX 内核》P329

 

3 /proc/buddyinfo: current state of the buddy system

《深入 LINUX 内核架构》P161

 

4 /proc/pid/maps

《LINUX 设备驱动程序》(第三版)P415

 

5 /proc/meminfo

References:

《Linux 性能优化》P58
Documentation/filesystems/proc.txt
http://blog.lujun9972.win/blog/2018/04/17/meminfo%E6%96%87%E4%BB%B6%E8%AF%A6%E8%A7%A3/

MemTotal: 3076224 kB

Total physical memory in the system.

Total usable ram (i.e. physical ram minus a few reserved bits and the kernel binary code)

MemFree: 732380 kB

Total free physical memory.

The sum of LowFree+HighFree

MemAvailable: 1881552 kB

An estimate of how much memory is available for starting new applications, without swapping. Calculated from MemFree, SReclaimable, the size of the file LRU lists, and the low watermarks in each zone.

The estimate takes into account that the system needs some page cache to function well, and that not all reclaimable slab will be reclaimable, due to items being in use. The impact of those factors will vary from system to system.

Buffers: 200936 kB

Memory buffering pending disk write operations.

Relatively temporary storage for raw disk blocks shouldn't get tremendously large (20MB or so)

Cached: 1059168 kB

Memory used to cache data read from disk.

in-memory cache for files read from the disk (the pagecache). Doesn't include SwapCached

SwapCached: 0 kB

Memory that is present both in swap and in physical memory.

Memory that once was swapped out, is swapped back in but still also is in the swapfile (if memory is needed it doesn't need to be swapped out AGAIN because it is already in the swapfile. This saves I/O)

Active: 1553376 kB

Memory currently in active use.

Memory that has been used more recently and usually not reclaimed unless absolutely necessary.

Inactive: 545804 kB

Memory that is currently inactive and is a candidate for reclaim or swap.

Memory which has been less recently used. It is more eligible to be reclaimed for other purposes

Active(anon): 844232 kB

Memory labeled (anon) is anonymous memory; memory labeled (file) is file-backed memory. The difference between the two is whether the contents of the physical memory are associated with a file on disk.

 

Anonymous memory is memory allocated on the process heap, e.g. with malloc.

File-backed memory is the memory holding the disk cache and "file mappings" (file contents on disk associated directly with virtual addresses of a user process); its contents correspond to files on the physical disk.

 

The difference between Active and Inactive is whether the memory holds recently used data. When physical memory runs short and in-use memory has to be freed, Inactive memory is released first.

The Linux kernel uses four LRU lists to track these four kinds of pages; a page is normally 4 KB.

Inactive(anon): 34976 kB

Active(file): 709144 kB

Inactive(file): 510828 kB

Unevictable: 6444 kB

Some pages cannot be reclaimed; these pages are not kept on the LRU lists but are accounted separately as Unevictable.

Mlocked: 6444 kB

 

SwapTotal: 1046524 kB

Total swap space (in KB).

SwapFree: 1046524 kB

Free swap space (in KB).

Dirty: 92 kB

Memory waiting to be written back to disk.

Writeback: 0 kB

Memory currently being written back to disk.

AnonPages: 845516 kB

The Linux kernel's rmap (reverse mapping) mechanism records, for each anonymous physical page, which process and which virtual address it is mapped into. AnonPages is the total number of pages recorded by rmap.

Mapped: 412668 kB

Memory mapped into process address spaces with mmap.

Shmem: 36624 kB

Memory used by tmpfs.

 

tmpfs uses physical memory to provide a RAM-disk-like filesystem. Files stored on tmpfs are kept in the disk cache, so they belong to the "buffers + cached" category. But because they have no backing content on disk, they are recorded on the anonymous LRU lists rather than on the file-backed LRU lists. This is where the identity buffers + cached = Active(file) + Inactive(file) + Shmem comes from.
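As a sanity check of that identity on a live system, a small awk sketch over /proc/meminfo (small deviations are normal because the counters are not sampled atomically):

awk '/^Buffers:/         {b  = $2}
     /^Cached:/          {c  = $2}
     /^Active\(file\)/   {af = $2}
     /^Inactive\(file\)/ {inf = $2}
     /^Shmem:/           {s  = $2}
     END {printf "Buffers+Cached = %d kB,  Active(file)+Inactive(file)+Shmem = %d kB\n", b + c, af + inf + s}' /proc/meminfo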

Slab: 151924 kB

Total memory used by the kernel slab allocator (in KB).

SReclaimable: 121244 kB

Slab memory that holds no active objects and can be reclaimed.

SUnreclaim: 30680 kB

Slab memory whose objects are active and cannot be reclaimed.

KernelStack: 7920 kB

KernelStack is the memory used for kernel stacks.

Because user processes are constantly switched in and out, the kernel sets up a separate kernel stack for each of them, so KernelStack grows every time a new process is started.

PageTables: 35192 kB

Memory used by the kernel to hold page tables.

NFS_Unstable: 0 kB

 

Bounce: 0 kB

 

WritebackTmp: 0 kB

 

CommitLimit: 2584636 kB

 

Committed_AS: 3759092 kB

An estimate of the memory needed by the current workload. Normally the kernel overcommits, expecting applications not to use all of the memory they allocate. If every application actually used everything it allocated, this is how much physical memory you would need.

VmallocTotal: 34359738367 kB

Total size of the vmalloc memory area.

VmallocUsed: 0 kB

Amount of the vmalloc area in use.

VmallocChunk: 0 kB

Largest contiguous free block in the vmalloc area.

HardwareCorrupted: 0 kB

 

AnonHugePages: 0 kB

 

CmaTotal: 0 kB

 

CmaFree: 0 kB

 

HugePages_Total: 0

Total number of huge pages in the pool.

HugePages_Free: 0

Number of free huge pages.

HugePages_Rsvd: 0

 

HugePages_Surp: 0

 

Hugepagesize: 2048 kB

 

DirectMap4k: 106368 kB

 

DirectMap2M: 3039232 kB

 

6 /proc/ksyms or /proc/kallsyms

《LINUX 设备驱动程序》(第三版)P226

https://blog.csdn.net/diy534/article/details/6941001?utm_source=blogxgwz7

7 Files under /proc/sys/kernel/

7.1 /proc/sys/kernel/sched_*: process scheduling

7.1.1 sched_rt_period_us and sched_rt_runtime_us

Specify the global bandwidth for real-time processes.

The bandwidth consists of two parameters, a period and a runtime, i.e. the total time all real-time processes may run within each period.
The period is set via the file "/proc/sys/kernel/sched_rt_period_us".
The runtime is set via the file "/proc/sys/kernel/sched_rt_runtime_us".

                                                            《Linux 内核深度解析》P85

kernel_src_dir/Documentation/scheduler/sched-rt-group.txt
https://blog.csdn.net/adaptiver/article/details/6585372

7.1.2 sched_cfs_bandwidth_slice_us: fair bandwidth slice

The fair bandwidth slice is the amount of runtime a fair run queue gets from its task group each time it requests runtime. The default is 5 ms; it can be changed via the file "/proc/sys/kernel/sched_cfs_bandwidth_slice_us".

                                                           《Linux 内核深度解析》P90

7.1.3 sched_autogroup_enabled: toggle for automatic process grouping

《Linux 内核深度解析》P60

7.1.4 sched_min_granularity_ns: minimum run time / minimum scheduling granularity

The minimum time a process runs, to prevent overly frequent switching. On interactive systems (such as desktops) it can be set lower so that interaction gets a faster response (see check_preempt_tick in the periodic scheduler).

                                                              https://blog.csdn.net/wudongxu/article/details/8574753

To keep context switches from becoming too frequent, a process should run for at least a short stretch once scheduled; this length of time is called the minimum scheduling granularity.

                                                           《Linux内核深度解析》P59

7.1.5 sched_latency_ns: period in which all processes on a run queue run once

The period within which every process on a run queue runs once; it depends on the number of processes on the queue.

If the number of processes exceeds sched_nr_latency (this value cannot be set via /proc; it is derived as (sysctl_sched_latency + sysctl_sched_min_granularity - 1) / sysctl_sched_min_granularity), the scheduling period is sched_min_granularity_ns * the number of processes on the run queue, and sysctl_sched_latency plays no role.

Otherwise, when the queue holds fewer than sched_nr_latency processes, the period is sysctl_sched_latency. Clearly, the smaller this value, the smaller the sched_nr_latency a run queue supports; and the smaller sysctl_sched_min_granularity, the larger sched_nr_latency can be, meaning each process gets less time within the period, consistent with the discussion of sched_min_granularity_ns above. sched_nr_latency can also serve as a baseline for CPU load: if the CPU load exceeds it, the CPU is oversubscribed. See the sketch below for the period computation.

                                                               https://blog.csdn.net/wudongxu/article/details/8574753
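To make the rule concrete, a shell sketch that reads the two tunables and computes the scheduling period; the run-queue length here is an assumed value for illustration:

latency=$(cat /proc/sys/kernel/sched_latency_ns)
min_gran=$(cat /proc/sys/kernel/sched_min_granularity_ns)
nr_latency=$(( (latency + min_gran - 1) / min_gran ))

nr_running=8    # assumed number of processes on the run queue
if [ "$nr_running" -gt "$nr_latency" ]; then
    period=$(( min_gran * nr_running ))    # queue longer than sched_nr_latency
else
    period=$latency                        # one sched_latency_ns period
fi
echo "sched_nr_latency=$nr_latency period=${period}ns"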

7.1.6 sched_features: features supported by the scheduler

This variable lists the features the scheduler supports, such as GENTLE_FAIR_SLEEPERS (gentle compensation for sleeping processes), START_DEBIT (schedule new processes as early as possible), and WAKEUP_PREEMPT (whether a woken process may preempt the currently running one); see the definitions in the kernel's sched_features.h.

                                                               https://blog.csdn.net/wudongxu/article/details/8574753

7.1.7 sched_wakeup_granularity_ns: base amount of time a woken process should run

This variable is the base amount of time a newly woken process should run. It is only used to decide whether the woken process should preempt the current one; it does not represent the minimum time the process can run (that is sysctl_sched_min_granularity). The smaller the value, the more likely preemption becomes (see wakeup_gran and wakeup_preempt_entity).

                                                              https://blog.csdn.net/wudongxu/article/details/8574753

7.1.8 sched_child_runs_first

This variable controls whether a newly created child preempts its parent, even if the parent's vruntime is smaller than the child's. This reduces fairness but can reduce copy-on-write overhead (see 《深入Linux内核架构》P51); which setting is better depends on the workload (see task_fork_fair).

                                                              https://blog.csdn.net/wudongxu/article/details/8574753

7.1.9 sched_compat_yield

This parameter makes the sched_yield() system call more effective, so that it uses less CPU; applications that depend on sched_yield for better performance may consider setting it to 1.

                                                              https://blog.csdn.net/wudongxu/article/details/8574753

7.1.10 sched_migration_cost

This variable is used to decide whether a process is still cache-hot: if its running time so far (now - p->se.exec_start) is less than this value, the kernel assumes its code is still in the cache, so the process is considered hot and is skipped when migrating tasks.

                                                              https://blog.csdn.net/wudongxu/article/details/8574753

7.1.11 sched_nr_migrate: maximum number of processes migrated to another CPU at once

When load-balancing on a multi-CPU system, the maximum number of processes moved to another CPU in one go.

                                                              https://blog.csdn.net/wudongxu/article/details/8574753

7.1.12 sched_tunable_scaling

Selects how the kernel scales sched_min_granularity, sched_latency and sched_wakeup_granularity with machine size: 0 = no adjustment, 1 = scale by the base-2 logarithm of the number of CPUs, 2 = scale linearly with the number of CPUs.

                                                              https://blog.csdn.net/wudongxu/article/details/8574753

 

7.2 sysrq

See the /proc/sysrq-trigger section (section 9) below.

7.3 randomize_va_space: whether address-space randomization is enabled

《深入 LINUX 内核架构》P235

《Linux 内核深度解析》P118

 

8 /proc/filesystems: listing registered filesystem types

《Linux 内核深度解析》P564

9 /proc/sys/kernel/sysrq and /proc/sysrq-trigger: debugging and rescuing a dying system (the System Request key)

9.1 Overview

Note that the value in /proc/sys/kernel/sysrq only affects invocation via the keyboard.

                                      https://blog.csdn.net/skdkjzz/article/details/50426397

Because the SysRq functions are so useful, they are also made available to system administrators who cannot reach the console. /proc/sysrq-trigger is a write-only /proc entry point: writing the corresponding character to this file triggers the corresponding SysRq action. This entry point is always available, even when SysRq from the console is disabled.

                                     《LINUX 设备驱动程序》(第三版)P100

《LINUX 设备驱动程序》(第三版)P100
《Linux 内核设计与实现》P302
kernel_src_dir/Documentation/sysrq.txt
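Before the key combinations work from the console, SysRq usually has to be enabled (run as root; 1 enables all functions, and many distributions default to a restrictive bitmask):

cat /proc/sys/kernel/sysrq         # current setting
echo 1 > /proc/sys/kernel/sysrq    # enable all SysRq functions from the keyboard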

9.2 Usage summary

Each function can be triggered either from the keyboard, with <Alt> + SysRq(Print Screen) + <key>, or by writing the key character to the trigger file, e.g. echo b > /proc/sysrq-trigger. The functions are (see kernel_src_dir/Documentation/sysrq.txt):

'b'     - Will immediately reboot the system without syncing or unmounting your disks.
'c'     - Will perform a system crash by a NULL pointer dereference. A crashdump will be taken if configured.
'd'     - Shows all locks that are held.
'e'     - Send a SIGTERM to all processes, except for init.
'f'     - Will call oom_kill to kill a memory hog process.
'g'     - Used by kgdb (kernel debugger).
'h'     - Will display help (actually any other key than those listed here will also display help).
'i'     - Send a SIGKILL to all processes, except for init.
'j'     - Forcibly "Just thaw it" - filesystems frozen by the FIFREEZE ioctl.
'k'     - Secure Access Key (SAK): kills all programs on the current virtual console. NOTE: see the important comments in the SAK section of sysrq.txt.
'l'     - Shows a stack backtrace for all active CPUs.
'm'     - Will dump current memory info to your console.
'n'     - Used to make RT tasks nice-able.
'o'     - Will shut your system off (if configured and supported).
'p'     - Will dump the current registers and flags to your console.
'q'     - Will dump per CPU lists of all armed hrtimers (but NOT regular timer_list timers) and detailed information about all clockevent devices.
'r'     - Turns off keyboard raw mode and sets it to XLATE.
's'     - Will attempt to sync all mounted filesystems.
't'     - Will dump a list of current tasks and their information to your console.
'u'     - Will attempt to remount all mounted filesystems read-only.
'v'     - Forcefully restores framebuffer console; also causes ETM buffer dump [ARM-specific].
'w'     - Dumps tasks that are in uninterruptable (blocked) state.
'x'     - Used by xmon interface on ppc/powerpc platforms.
'y'     - Show global CPU registers [SPARC-64 specific].
'z'     - Dump the ftrace buffer.
'0'-'9' - Sets the console log level, controlling which kernel messages will be printed to your console.

 

10 Files under /proc/irq/

10.1 /proc/irq/irq_id/smp_affinity and /proc/irq/irq_id/smp_affinity_list: interrupt affinity

On a multiprocessor system, the administrator can set the interrupt affinity, i.e. which processors the interrupt controller may forward a given interrupt to. There are two ways:
    <1> Write the file "/proc/irq/irq_id/smp_affinity"; the argument is a bitmask.
    <2> Write the file "/proc/irq/irq_id/smp_affinity_list"; the argument is a list of processors.


For example, to allow the interrupt controller to forward Linux interrupt number 32 to processors 0-3, either of the following works:
    <1> echo 0f > /proc/irq/32/smp_affinity
    <2> echo 0-3 > /proc/irq/32/smp_affinity_list

                                              《Linux 内核深度解析》P431

The kernel provides a function for setting interrupt affinity:
int irq_set_affinity(unsigned int irq, const struct cpumask *cpumask);

                                               《Linux 内核深度解析》P432

10.2 /proc/irq/irq_id/spurious: interrupt count and unhandled interrupt count

Related code

File: kernel/irq/proc.c

Function: register_irq_proc()

 

Sample file contents:

root@ubuntu:~# cat /proc/irq/1/spurious 
count 1230 
unhandled 1 
last_unhandled 4294669484 ms 
root@ubuntu:~#

 

11 Files under /proc/<pid>/ and /proc/self/

11.1 Overview of /proc/self/

As is well known, /proc/$pid/ exposes information about a given process, such as its memory mappings and CPU affinity. A process that wants this information about itself could also go through /proc/$pid/, but that requires obtaining its own pid, which may change across fork, daemonization, and so on. To make this easier, Linux provides /proc/self/: a special directory whose contents differ depending on which process accesses it, equivalent to /proc/<own pid>/. A process can therefore read its own information through /proc/self/ without having to look up its pid each time.

                                     https://blog.csdn.net/dillanzhou/article/details/82876575
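The behavior is easy to observe from a shell: /proc/self is a symbolic link whose target is the pid of whichever process resolves it, so two different commands see two different targets:

ls -l /proc/self        # the link target is the pid of this ls
readlink /proc/self     # a different pid: that of this readlink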

11.2 /proc/<pid>/fd: a directory describing the files the process has open

https://github.com/NanXiao/gnu-linux-proc-pid-intro

11.3 /proc/<pid>/latency: shows which code paths cause large latencies

First enable collection with "echo 1 > /proc/sys/kernel/latencytop".

# cat /proc/2948/latency
Latency Top version : v0.1
30667 10650491 4891 poll_schedule_timeout do_sys_poll SyS_poll system_call_fastpath 0x7f636573dc1d
8 105 44 futex_wait_queue_me futex_wait do_futex SyS_futex system_call_fastpath 0x7f6365a167bc

The first three numbers on each line are the number of times that backtrace was hit, the total latency (in microseconds), and the maximum single latency (in microseconds); the rest of the line is the full call stack.

https://github.com/NanXiao/gnu-linux-proc-pid-intro

11.4 /proc/<pid>/limits: resource limits of the current process

https://github.com/NanXiao/gnu-linux-proc-pid-intro

11.5 /proc/<pid>/schedstat: scheduling time statistics for the current process

# cat /proc/3116/schedstat 
28944927 12616364 50
#

There are three fields in this file correlating for that process to:

    1) time spent on the cpu         // equals se.sum_exec_runtime in /proc/<pid>/sched, except that the sched value is this one divided by 1,000,000

    2) time spent waiting on a runqueue               // equals se.statistics.wait_sum in /proc/<pid>/sched

    3) # of timeslices run on this cpu
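The first two values are in nanoseconds (the third is a count); a one-liner sketch that converts them to milliseconds, here for the shell's own pid via $$:

awk '{printf "on-cpu %.3f ms, runqueue wait %.3f ms, %d timeslices\n", $1/1e6, $2/1e6, $3}' /proc/$$/schedstat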

11.6 /proc/<pid>/sched

Most of the fields are computed in sched.c and sched_fair.c; search those files for a field name to see how it is calculated.

      https://blog.csdn.net/wudongxu/article/details/8574755?utm_medium=distribute.pc_relevant.none-task-blog-title-2&spm=1001.2101.3001.4242

Sample output of cat /proc/<pid>/sched, with a note after each field:

root@ubuntu:/proc/2021# cat sched

accounts-daemon (2021, #threads: 3)

-------------------------------------------------------------------

 

se.exec_start : 39576884.154813

 

Time at which this process most recently started executing (updated on every update_curr).

se.vruntime : 3486.779924

Virtual runtime.

se.sum_exec_runtime : 260.673586

Cumulative physical (wall-clock) running time.

se.statistics.sum_sleep_runtime : 39552774.057551

 

se.statistics.wait_start : 0.000000

Time the process was most recently enqueued.

se.statistics.sleep_start : 39576884.154813

Time the process was most recently dequeued and put into the S (interruptible sleep) state.

se.statistics.block_start : 0.000000

Time the process was most recently dequeued and put into the D (uninterruptible sleep) state.

se.statistics.sleep_max : 19864302.863301

Longest time spent in the S state.

se.statistics.block_max : 184.392282

Longest time spent in the D state.

se.statistics.exec_max : 5.516069

Longest single stretch of execution.

se.statistics.slice_max : 0.907005

Longest timeslice ever obtained.

se.statistics.wait_max : 7.028090

Longest wait on the run queue.

se.statistics.wait_sum : 87.225409

Cumulative wait time on the run queue.

se.statistics.wait_count : 412

Cumulative number of waits.

se.statistics.iowait_sum : 2397.986958

I/O wait time.

se.statistics.iowait_count : 46

Number of I/O waits, i.e. the number of io_schedule calls.

se.nr_migrations : 11

Incremented each time the process has to be migrated to another CPU.

se.statistics.nr_migrations_cold : 0

 

se.statistics.nr_failed_migrations_affine : 0

Number of migration attempts that failed the CPU-affinity check (the process has CPU affinity set).

se.statistics.nr_failed_migrations_running : 17

 

se.statistics.nr_failed_migrations_hot : 25

Number of migration attempts that failed because the process was cache-hot.

se.statistics.nr_forced_migrations : 0

Number of forced migrations performed while the process was cache-hot, after repeated load-balancing failures.

se.statistics.nr_wakeups : 201

Cumulative number of wakeups (from non-runnable to runnable).

se.statistics.nr_wakeups_sync : 2

Number of synchronous wakeups, i.e. A wakes B and A immediately goes to sleep.

se.statistics.nr_wakeups_migrate : 0

Number of wakeups where the process was scheduled on a different CPU than the one it slept on.

se.statistics.nr_wakeups_local : 149

Number of local wakeups (the process runs on the waking CPU).

se.statistics.nr_wakeups_remote : 52

Cumulative number of remote (non-local) wakeups.

se.statistics.nr_wakeups_affine : 0

Number of wakeups that took the task's cache affinity into account.

se.statistics.nr_wakeups_affine_attempts : 0

 

se.statistics.nr_wakeups_passive : 0

 

se.statistics.nr_wakeups_idle : 0

 

avg_atom : 0.648441

Average time per scheduling period for this process: sum_exec_runtime / nr_switches.

avg_per_cpu : 23.697598

 

nr_switches : 402

Total context switches, voluntary plus involuntary.

nr_voluntary_switches : 202

Voluntary switches (caused by prev->state being non-runnable).

nr_involuntary_switches : 200

Involuntary switches.

se.load.weight : 1024

Load weight of this scheduling entity.

se.avg.load_sum : 90280

 

se.avg.util_sum : 78255

 

se.avg.load_avg : 1

 

se.avg.util_avg : 1

 

se.avg.last_update_time : 39576884154813

 

policy : 0

Scheduling policy (0 = SCHED_NORMAL).

prio : 120

Priority (120 corresponds to nice = 0).

clock-delta : 77

 

mm->numa_scan_seq : 0

 

numa_pages_migrated : 0

 

numa_preferred_nid : -1

 

total_numa_faults : 0

current_node=0, numa_group_id=0

numa_faults node=0 task_private=0 task_shared=0 group_private=0 group_shared=0

root@ubuntu:/proc/2021#


 

12 /proc/schedstat

The file contents look roughly like this:

root@ubuntu:~# cat /proc/schedstat
version 15
timestamp 4306287549
cpu0 294 0 32095016 7431326 18644389 15676938 4057909153582 1913013647069 24650707
domain0 00000003 506071 498253 3249 2053788 7708 69 17 498236 77306 74256 1957 567809 1992 5 14 74242 3061058 2420308 398260 66678339 317320 157
375277 2045031 3 0 3 0 0 0 0 0 0 2967446 12479 0
cpu1 26 0 31222798 7544146 17691604 14903637 3922430803519 1780788651689 23674286
domain0 00000003 630913 620954 3786 2773786 10581 116 19 620935 65880 63153 1629 539738 1869 9 18 63135 2858694 2226461 392401 65179254 314437
169 366326 1860135 2 0 2 0 0 0 0 0 0 2787967 12721 0
root@ubuntu:~#

12.1 Related code

Kernel version: 3.10

File: kernel/sched/stats.c

Function: proc_schedstat_init()

static int show_schedstat(struct seq_file *seq, void *v)
{
    ......
        /* runqueue-specific stats */
        seq_printf(seq,
            "cpu%d %u 0 %u %u %u %u %llu %llu %lu",
            cpu, rq->yld_count,
            rq->sched_count, rq->sched_goidle,
            rq->ttwu_count, rq->ttwu_local,
            rq->rq_cpu_time,
            rq->rq_sched_info.run_delay, rq->rq_sched_info.pcount);

        seq_printf(seq, "\n");
    ......
}

 

12.2 CPU statistics

cpu<N> 1 2 3 4 5 6 7 8 9
First field is a sched_yield() statistic:

    1) # of times sched_yield() was called
Next three are schedule() statistics:
    2) This field is a legacy array expiration count field used in the O(1) scheduler. We kept it for ABI compatibility, but it is always set to zero.
    3) # of times schedule() was called
    4) # of times schedule() left the processor idle
Next two are try_to_wake_up() statistics:
    5) # of times try_to_wake_up() was called
    6) # of times try_to_wake_up() was called to wake up the local cpu
Next three are statistics describing scheduling latency:
    7) sum of all time spent running by tasks on this processor (in jiffies)
    8) sum of all time spent waiting to run by tasks on this processor (in jiffies)
    9) # of timeslices run on this cpu

                                       Documentation/scheduler/sched-stats.txt
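As an example of reading these fields, a small awk sketch that reports, per CPU, how many try_to_wake_up() calls were local (fields 5 and 6 above; in the raw line they are shifted by one because of the leading cpuN label):

awk '/^cpu[0-9]/ {printf "%s: ttwu=%d local=%d (%.1f%% local)\n", $1, $6, $7, ($6 ? 100*$7/$6 : 0)}' /proc/schedstat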

12.3 Domain statistics

One of these is produced per domain for each cpu described. (Note that if CONFIG_SMP is not defined, *no* domains are utilized
and these lines will not appear in the output.)
domain<N> <cpumask> 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36
The first field is a bit mask indicating what cpus this domain operates over.
The next 24 are a variety of load_balance() statistics grouped into types of idleness (idle, busy, and newly idle):
1) # of times in this domain load_balance() was called when the cpu was idle
2) # of times in this domain load_balance() checked but found the load did not require balancing when the cpu was idle
3) # of times in this domain load_balance() tried to move one or more tasks and failed, when the cpu was idle
4) sum of imbalances discovered (if any) with each call to load_balance() in this domain when the cpu was idle
5) # of times in this domain pull_task() was called when the cpu was idle
6) # of times in this domain pull_task() was called even though the target task was cache-hot when idle
7) # of times in this domain load_balance() was called but did not find a busier queue while the cpu was idle
8) # of times in this domain a busier queue was found while the cpu was idle but no busier group was found
9) # of times in this domain load_balance() was called when the cpu was busy
10) # of times in this domain load_balance() checked but found the load did not require balancing when busy
11) # of times in this domain load_balance() tried to move one or more tasks and failed, when the cpu was busy
12) sum of imbalances discovered (if any) with each call to load_balance() in this domain when the cpu was busy
13) # of times in this domain pull_task() was called when busy
14) # of times in this domain pull_task() was called even though the target task was cache-hot when busy
15) # of times in this domain load_balance() was called but did not find a busier queue while the cpu was busy
16) # of times in this domain a busier queue was found while the cpu was busy but no busier group was found
17) # of times in this domain load_balance() was called when the cpu was just becoming idle
18) # of times in this domain load_balance() checked but found the load did not require balancing when the cpu was just becoming idle
19) # of times in this domain load_balance() tried to move one or more tasks and failed, when the cpu was just becoming idle
20) sum of imbalances discovered (if any) with each call to load_balance() in this domain when the cpu was just becoming idle
21) # of times in this domain pull_task() was called when newly idle
22) # of times in this domain pull_task() was called even though the target task was cache-hot when just becoming idle
23) # of times in this domain load_balance() was called but did not find a busier queue while the cpu was just becoming idle
24) # of times in this domain a busier queue was found while the cpu was just becoming idle but no busier group was found
Next three are active_load_balance() statistics:
25) # of times active_load_balance() was called
26) # of times active_load_balance() tried to move a task and failed
27) # of times active_load_balance() successfully moved a task
Next three are sched_balance_exec() statistics:
28) sbe_cnt is not used
29) sbe_balanced is not used
30) sbe_pushed is not used
Next three are sched_balance_fork() statistics:
31) sbf_cnt is not used
32) sbf_balanced is not used
33) sbf_pushed is not used
Next three are try_to_wake_up() statistics:
34) # of times in this domain try_to_wake_up() awoke a task that last ran on a different cpu in this domain
35) # of times in this domain try_to_wake_up() moved a task to the waking cpu because it was cache-cold on its own cpu anyway
36) # of times in this domain try_to_wake_up() started passive balancing

                                              Documentation/scheduler/sched-stats.txt

13 /proc/stat: statistics since boot (can be used together with vmstat -s)

The file contents look roughly like this:

root@ubuntu:~# cat /proc/stat
cpu 442421 2003 213768 8811167 51048 0 6149 0 0 0
cpu0 223583 797 109482 4400190 27173 0 2488 0 0 0
cpu1 218837 1206 104286 4410976 23875 0 3661 0 0 0
intr 24432579 58 19165 0 0 58710 0 2 0 1 0 0 0 361780 0 161175 48319 20846 44 69 81341 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
ctxt 76391445
btime 1600912429
processes 15764
procs_running 3
procs_blocked 0
softirq 7738294 1 3023708 9890 189653 185388 0 18189 2319347 0 1992118
root@ubuntu:~#

13.1 Related code

Kernel version: 3.10

File: fs/proc/stat.c

Function: show_stat()

 

13.2 Per-CPU statistics

The very first "cpu" line aggregates the numbers in all of the other "cpuN" lines. These numbers identify the amount of time the CPU has spent performing different kinds of work. Time units are in USER_HZ (typically hundredths of a second, i.e. jiffies). The meanings of the columns are as follows, from left to right:
- user:         normal processes executing in user mode
- nice:         niced processes executing in user mode
- system:    processes executing in kernel mode
- idle:          twiddling thumbs
- iowait:      waiting for I/O to complete
- irq:           servicing interrupts
- softirq:     servicing softirqs
- steal:       involuntary wait
- guest:      running a normal guest
- guest_nice: running a niced guest

                                      Documentation/filesystems/proc.txt

user: cumulative user-mode CPU time since boot (in jiffies), excluding processes with negative nice
nice: cumulative user-mode CPU time of negative-nice processes since boot (in jiffies)
system: cumulative kernel-mode time since boot (in jiffies)
idle: cumulative idle time since boot, excluding disk I/O wait (in jiffies)
iowait: cumulative disk I/O wait time since boot (in jiffies)
irq: cumulative hard-interrupt time since boot (in jiffies)
softirq: cumulative soft-interrupt time since boot (in jiffies)

                                     https://www.jianshu.com/p/0ec1ea49f4a3
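Because all of these columns are monotonically increasing counters, CPU utilization is computed from the difference between two samples; a minimal sketch using the aggregate first line (read consumes only the first line of /proc/stat):

read -r _ u n s idle io irq sirq _ < /proc/stat
sleep 1
read -r _ u2 n2 s2 idle2 io2 irq2 sirq2 _ < /proc/stat
busy=$(( (u2 + n2 + s2 + irq2 + sirq2) - (u + n + s + irq + sirq) ))
total=$(( busy + (idle2 + io2) - (idle + io) ))
echo "cpu busy over the last second: $(( 100 * busy / total ))%"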

13.3 Interrupt and process statistics

intr: total number of interrupts serviced since boot
ctxt: number of context switches since boot
btime: boot time, in seconds since the Epoch (1970-01-01 00:00 UTC); changes on every boot
processes: number of processes created since boot; if this grows unusually fast over a short time, something may be wrong with the system
procs_running: number of processes in the runnable state
procs_blocked: number of processes blocked waiting for I/O to complete

https://blog.csdn.net/houzhizhen/article/details/79474427?utm_medium=distribute.pc_relevant_t0.none-task-blog-BlogCommendFromMachineLearnPai2-1.channel_param&depth_1-utm_source=distribute.pc_relevant_t0.none-task-blog-BlogCommendFromMachineLearnPai2-1.channel_param

14 /proc/sched_debug: detailed scheduling information

14.1 Related code

Kernel version: 3.10

File: kernel/sched/debug.c

Function: sched_debug_show()

14.2 System time statistics in sched_debug (ktime / sched_clk / jiffies, etc.)

ktime                         : 802247.997240
sched_clk                     : 802389.677699
cpu_clk                       : 802389.678170
jiffies                       : 4295092857
sched_clock_stable()          : 1

14.3 Key process-scheduling parameters in sched_debug

sysctl_sched
  .sysctl_sched_latency                    	: 12.000000
  .sysctl_sched_min_granularity            	: 1.500000
  .sysctl_sched_wakeup_granularity         	: 2.000000
  .sysctl_sched_child_runs_first           	: 0
  .sysctl_sched_features                   	: 44859
  .sysctl_sched_tunable_scaling            	: 1 (logaritmic)

14.4 Per-CPU scheduling information in sched_debug

Below, the blocks for cpu#0 and cpu#1:

cpu#0, 2195.099 MHz

.nr_running : 0

.load : 0

.nr_switches : 327111

.nr_load_updates : 51134

.nr_uninterruptible : -9

.next_balance : 4295.092857

.curr->pid : 0

.clock : 802387.039552

.clock_task : 802387.039552

.cpu_load[0] : 103

.cpu_load[1] : 64

.cpu_load[2] : 36

.cpu_load[3] : 19

.cpu_load[4] : 11

.yld_count : 0

.sched_count : 335080

.sched_goidle : 68160

.avg_idle : 1000000

.max_idle_balance_cost : 500000

.ttwu_count : 174919

.ttwu_local : 115433

cpu#1, 2195.099 MHz

.nr_running : 1

.load : 1004

.nr_switches : 343955

.nr_load_updates : 55582

.nr_uninterruptible : 9

.next_balance : 4295.092874

.curr->pid : 4371

.clock : 802396.426147

.clock_task : 802396.426147

.cpu_load[0] : 189

.cpu_load[1] : 117

.cpu_load[2] : 65

.cpu_load[3] : 35

.cpu_load[4] : 18

.yld_count : 1

.sched_count : 344076

.sched_goidle : 67939

.avg_idle : 1000000

.max_idle_balance_cost : 500000

.ttwu_count : 191547

.ttwu_local : 129434

14.5 Fair scheduling class (CFS) information in sched_debug

cfs_rq[0]:/
  .exec_clock                    : 89143.902147
  .MIN_vruntime                  : 0.000001
  .min_vruntime                  : 90492.627487
  .max_vruntime                  : 0.000001
  .spread                        : 0.000000
  .spread0                       : 0.000000
  .nr_spread_over                : 7
  .nr_running                    : 0
  .load                          : 0
  .load_avg                      : 125
  .runnable_load_avg          	 : 11
  .util_avg                      : 96
  .removed_load_avg          	 : 0
  .removed_util_avg            	 : 0
  .tg_load_avg_contrib        	 : 125
  .tg_load_avg                   : 332
  .throttled                     : 0
  .throttle_count                : 0

14.6 Fair scheduling class (CFS) task-group information in sched_debug

cfs_rq[0]:/autogroup-xx
  .exec_clock                    	: 285.906045
  .MIN_vruntime                  	: 0.000001
  .min_vruntime                  	: 361.259130
  .max_vruntime                  	: 0.000001
  .spread                        	: 0.000000
  .spread0                       	: -90131.368357
  .nr_spread_over                	: 0
  .nr_running                    	: 0
  .load                          	: 0
  .load_avg                      	: 0
  .runnable_load_avg             	: 0
  .util_avg                      	: 0
  .removed_load_avg              	: 0
  .removed_util_avg              	: 0
  .tg_load_avg_contrib           	: 0
  .tg_load_avg                   	: 0
  .throttled                     	: 0
  .throttle_count                	: 0
  .se->exec_start                	: 802148.483830
  .se->vruntime                  	: 90486.119523
  .se->sum_exec_runtime          	: 286.081525
  .se->statistics.wait_start     	: 0.000000
  .se->statistics.sleep_start    	: 0.000000
  .se->statistics.block_start    	: 0.000000
  .se->statistics.sleep_max      	: 0.000000
  .se->statistics.block_max      	: 0.000000
  .se->statistics.exec_max       	: 3.831219
  .se->statistics.slice_max      	: 7.226789
  .se->statistics.wait_max       	: 10.210677
  .se->statistics.wait_sum       	: 116.826292
  .se->statistics.wait_count     	: 707
  .se->load.weight               	: 2
  .se->avg.load_avg              	: 0
  .se->avg.util_avg              	: 0

14.7 Real-time task information in sched_debug

rt_rq[0]:
  .rt_nr_running                 	: 0
  .rt_throttled                  	: 0
  .rt_time                       	: 0.000000
  .rt_runtime                   	: 950.000000

14.8 Deadline task information in sched_debug

dl_rq[0]:
  .dl_nr_running                 : 0

14.9 Details of all runnable tasks on each CPU in sched_debug

Below, the details of the runnable tasks on CPU 0:

runnable tasks:
            task   PID         tree-key  switches  prio     wait-time             sum-exec        sum-sleep
----------------------------------------------------------------------------------------------------------
     ksoftirqd/0     3     90485.992151      1775   120       266.043127       110.966470    801296.894683 0 0 /
     kworker/0:0     4     65185.528152        28   120         1.306286         0.332624    562255.501693 0 0 /
    kworker/0:0H     5      4338.641597         6   100         0.155425         0.141135      7672.095926 0 0 /
          rcu_bh     8       165.273690         2   120         0.081261         0.002227         0.002447 0 0 /
     migration/0     9         0.000000       786     0         0.002513        45.365081         0.002590 0 0 /
      watchdog/0    10         0.012255       205     0         5.316437        11.912340         0.003262 0 0 /
   fsnotify_mark    35     79455.085442        78   120         4.551369         0.961024    741217.412209 0 0 /
       scsi_eh_0    63      2178.651436         7   120        21.055407         2.198312       549.411236 0 0 /
   ipv6_addrconf    73      2127.408759         2   100         2.062244         0.008961         0.002921 0 0 /
     kworker/0:2    74     65401.668160      1291   120        86.803284       534.084240    560335.520914 0 0 /
         deferwq    87      2184.619134         2   100         0.020571         0.019738         0.049237 0 0 /
          bioset    89      2184.620937         2   100         0.620084         0.003987         0.003938 0 0 /
          bioset    90      2188.653609         2   100         0.164756         0.033579         0.004576 0 0 /
     jbd2/sda1-8   188     89536.093850       477   120        22.359626       127.702531    791738.643016 0 0 /
 ext4-rsv-conver   189      3895.134181         2   100         0.000000         0.032571         0.038954 0 0 /
    kworker/0:1H   240     90485.977125       157   100         8.737684         8.533078    793983.918518 0 0 /
  lttng-sessiond   979        12.141718         5   120         4.988424         0.519009         2.421136 0 0 /autogroup-89
  lttng-sessiond   993        13.852937         1   120         2.348588         0.053102         0.000000 0 0 /autogroup-89
    avahi-daemon  1298        29.914075       228   120        52.138361        51.662322    732905.502924 0 0 /autogroup-124
        krfcommd  1417      9384.019997         2   110         2.685290         0.046557         0.067995 0 0 /
        rsyslogd  1426         0.015253         2   120         0.136469         2.365443         0.000000 0 0 /autogroup-132
......

15 /proc/uptime

root@ubuntu:~# cat /proc/uptime 
11865.94 21024.12
root@ubuntu:~#

The first column is the time since the system booted, in seconds (call it num1).
The second column is the system idle time, in seconds (call it num2).

Note: many people know that the second number is the idle time, but not everyone knows that on an SMP system the idle time can be several times the uptime. This is because the idle time is summed over all logical CPUs (including hyperthreads).

System idle rate (%) = num2 / (num1 * N), where N is the number of CPUs in the SMP system.

                                                                       https://www.cnblogs.com/frydsh/p/3887357.html
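The formula translates directly into a shell sketch (nproc supplies N, the number of logical CPUs):

read -r up idle < /proc/uptime
n=$(nproc)
# system idle rate (%) = idle / (uptime * N)
awk -v up="$up" -v idle="$idle" -v n="$n" 'BEGIN { printf "idle: %.1f%%\n", 100 * idle / (up * n) }'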

16 /proc/acpi/

《精通Linux设备驱动程序开发》P408

 

 
