The Linux Kernel OOM Killer Mechanism

https://blog.csdn.net/s_lisheng/article/details/82192613


A program had been running for a while when one of its processes died. Normally a process does not just die on its own; a quick analysis suggested that memory usage had grown too large during some period, the system ran short of memory, and the Linux OOM killer mechanism was triggered and killed the running process.

I. The Linux kernel OOM killer mechanism
The Linux kernel has a mechanism called the OOM killer (Out Of Memory killer). It watches for processes that use too much memory, especially ones whose memory usage grows very quickly, and kills them to keep the system from running out of memory altogether. The way the kernel detects memory exhaustion, selects a victim and kills it can be seen in the kernel source file linux/mm/oom_kill.c: when the system runs out of memory, out_of_memory() is triggered, which calls select_bad_process() to pick a "bad" process to kill. How does Linux decide which process is "bad"? It calls oom_badness(), and the heuristic is simple and down-to-earth: the baddest process is the one using the most memory.
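
To see which tasks the kernel currently rates as the "baddest", you can rank them by oom_score. A minimal sketch (the loop over /proc and the output format are just one way to do it):

for d in /proc/[0-9]*; do
    printf "%s %s %s\n" "$(cat $d/oom_score 2>/dev/null)" "${d#/proc/}" "$(cat $d/comm 2>/dev/null)"
done | sort -rn | head    # highest score first: the likeliest OOM-killer victims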

How do you keep a process from being killed by the OOM mechanism? 1. The OOM-related files are /proc/<pid>/oom_adj and /proc/<pid>/oom_score. The former is a weight from -16 to 15 (default 0); setting it to -17 means the process is never killed, and otherwise the larger the value, the more likely the process is to be killed. The latter is the score the kernel computes, and it is this score that decides which processes get killed. 2. Use the overcommit parameter (the third of the swap-avoidance methods mentioned above): since an allocation that cannot be satisfied simply returns an error, memory can never actually be exhausted, and OOM never comes into play. 3. As for why Linux does it this way, who knows! Android does the same thing anyway!
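
For point 2 above, a minimal sketch of the strict-overcommit setup (vm.overcommit_memory=2 makes oversized allocations fail immediately instead of triggering the OOM killer later; the ratio of 80 is only an example value):

sysctl vm.overcommit_memory=2    # strict accounting: commit limit = swap + overcommit_ratio% of RAM
sysctl vm.overcommit_ratio=80    # example value only; tune for the workload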

Author: 最难不过二叉树
Link: https://www.zhihu.com/question/21972130/answer/1588422763
Source: Zhihu
Copyright belongs to the author. For commercial reprints, please contact the author for authorization; for non-commercial reprints, please credit the source.

Why not just fall back to virtual memory when physical memory runs out? System administrators usually do configure swap: when memory runs short, pages that are not currently needed are swapped out to disk to free up space, which is the "virtual memory" you have in mind. But swap is given some fixed size, so memory plus swap can still both be exhausted. Why can't malloc simply return an error when a program asks for memory? Consider this scenario: process A allocates memory like crazy until the system has no usable memory left, and then process B needs some memory; B's allocations are bound to fail, it keeps hitting errors, and the one that dies is the innocent B. If it is the OS itself that needs memory and sees the allocation fail, things are far worse: rather than corrupt the logic that follows, the OS prefers to fail fast and shut down. So an OS-level OOM killer is actually quite useful, and the system gives users a way to shield a process from it: write to /proc/<pid>/oom_score_adj directly and set it to -1000. This used to be controlled through /proc/<pid>/oom_adj, but newer kernels have replaced the old oom_adj with oom_score_adj; see: https://github.com/tinganho/linux-kernel/blob/master/Documentation/feature-removal-schedule.txt#L171
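
A minimal sketch of the protection described above (2592 is just a placeholder PID; lowering the value requires root or CAP_SYS_RESOURCE):

echo -1000 > /proc/2592/oom_score_adj    # OOM_SCORE_ADJ_MIN: the OOM killer will never pick this task
cat /proc/2592/oom_score_adj             # verify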

https://www.zhihu.com/question/21972130

https://sysctl-explorer.net/vm/oom_kill_allocating_task/ (a switch that controls the oom_killer):

cat /proc/sys/vm/oom_kill_allocating_task
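
To actually flip the switch, something like the following should work (1 makes the kernel kill the task whose allocation triggered the OOM instead of scanning for the highest-scoring task):

sysctl vm.oom_kill_allocating_task=1
echo "vm.oom_kill_allocating_task=1" >> /etc/sysctl.conf    # persist across reboots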

##########################################
##########################################
郭建 (Guo Jian): Linux Memory Management Parameter Configuration: OOM (Out of Memory)
https://www.sohu.com/a/238012686_467784

I. Foreword

This is the second article on Linux virtual memory tuning parameters, and it covers the OOM-related ones. To make those parameters easier to understand, chapter two briefly explains what OOM is; if the term holds no mystery for you, you can go straight to chapter three, which describes the individual parameters. Besides describing the parameters, we also quote some of the relevant kernel code; the code in this article comes from the 4.0 kernel and, to keep the article compact, every snippet is a trimmed-down version that you can read alongside the full source if you are interested. As usual, the last chapter lists the references, all of which come from the kernel's Documentation directory; that directory contains a great number of documents, every one of them worth savoring.

II. What is OOM?

OOM is short for out of memory. The Linux kernel has plenty of memory-management tricks (reclaiming from caches, swapping out, and so on) to satisfy the virtual-memory demands of user space, but when the system is configured unreasonably and a small horse is asked to pull a big cart, the kernel will run very slowly and at some point will fail to allocate a page frame because memory is exhausted. The first responder to this situation should be the system administrator, who needs to add memory to the system; the kernel itself, however, cannot simply panic when it faces OOM, and it reacts according to the OOM parameters described below.

III. OOM parameters

1. panic_on_oom

When the kernel hits an OOM condition, it has two choices:

(1) Produce a kernel panic (that is, die on the spot).

(2) Face life bravely: pick one or a few of the most "suitable" processes, start the OOM killer, kill the chosen processes, free their memory, and let the system live on.

panic_on_oom controls how the system reacts when it hits OOM. When the parameter is 0, it takes the "face life bravely" path and starts the OOM killer. When it is 2, it forces a kernel panic in every case. For any other value, the reaction depends on the specific situation: some situations warrant a panic, others start the OOM killer. In the kernel code, enum oom_constraint is the type that further describes the kind of OOM that occurred; a system can run into OOM under all sorts of circumstances, and the kernel distinguishes the following cases.

For UMA systems, oom_constraint is always CONSTRAINT_NONE, meaning the system hit OOM with no particular constraint attached: don't overthink it, memory is simply exhausted. On NUMA systems, additional constraints may have pushed the system into an OOM state even though, overall, there is still plenty of memory. These constraints include:

(1) CONSTRAINT_CPUSET. cpusets are a kernel mechanism for assigning a set of CPUs and memory nodes to a particular group of processes. If OOM occurs under a cpuset, it only means that the memory nodes this process is allowed to allocate from are in trouble; the system has many memory nodes, and other nodes may still have plenty of free memory.

(2) CONSTRAINT_MEMORY_POLICY. A memory policy is the NUMA mechanism that controls from which memory nodes allocations are satisfied. User-space (NUMA-aware) programs can use the memory-policy API to set a policy for the whole system, for a specific process, or for a specific VMA of a specific process. An OOM may therefore be caused by the constraints of a memory policy, and panicking the whole system in that case hardly seems appropriate.

(3) CONSTRAINT_MEMCG. MEMCG stands for memory control group. Cgroups are too big a topic to cover here; the memory subsystem of cgroups is the controller that governs memory usage, which in plain words means confining the memory consumption of a group of processes within a limit. When the group exceeds its limit it triggers an OOM, and an OOM of this kind is of type CONSTRAINT_MEMCG.

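As a rough illustration of the CONSTRAINT_MEMCG case, the following sketch provokes a memcg-local OOM; it assumes a cgroup-v1 memory controller mounted at /sys/fs/cgroup/memory, and the group name and the 64M limit are arbitrary:

mkdir /sys/fs/cgroup/memory/oomdemo                              # as root
echo 64M > /sys/fs/cgroup/memory/oomdemo/memory.limit_in_bytes
echo $$ > /sys/fs/cgroup/memory/oomdemo/cgroup.procs             # move this shell into the group
# Any child of this shell that uses more than 64M now hits a CONSTRAINT_MEMCG OOM,
# even though the rest of the system may still have plenty of free memory.
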
OK, with the basics covered, let's look at the kernel code. The kernel variable sysctl_panic_on_oom corresponds to /proc/sys/vm/panic_on_oom, and the main decision logic, whether to panic or to start the OOM killer, lives in mm/oom_kill.c.
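
From user space the knob is just a sysctl; a quick sketch of the three settings:

cat /proc/sys/vm/panic_on_oom    # current value
# pick one of the following:
sysctl vm.panic_on_oom=0         # default: start the OOM killer
sysctl vm.panic_on_oom=1         # panic only on an unconstrained (CONSTRAINT_NONE) OOM
sysctl vm.panic_on_oom=2         # always panic, even for cpuset/mempolicy/memcg-constrained OOMs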

2. oom_kill_allocating_task

When the system has decided to start the OOM killer and kill some processes, another question comes up: which one should be killed, which process is the "right" one? The system has the following options:

(1) kill whoever triggered the OOM

(2) kill whoever is the "worst"

The oom_kill_allocating_task parameter controls this choice: when it is 0 the system picks option (2), otherwise option (1). The corresponding code can be found in the __out_of_memory function.

Of course a task cannot simply be killed on the spot. The kernel still checks whether it is a user-space process (kernel threads cannot be killed), whether it is an unkillable task (the init process, for example, must not be killed), and whether user space has blocked the kill by setting the task's oom_score_adj. If everything checks out, oom_kill_process is called to kill the current (allocating) task.
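
A rough user-space check of these conditions for a given task (2592 is a placeholder PID, and the checks only approximate what the kernel does):

pid=2592
cat /proc/$pid/oom_score_adj                       # -1000 means user space has vetoed killing it
readlink /proc/$pid/exe >/dev/null 2>&1 || echo "no user-space image: kernel thread, not killable"
[ "$pid" -eq 1 ] && echo "PID 1 (init) is never selected"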

3. oom_dump_tasks

When the system hits an OOM, whether it panics or starts the OOM killer, as a system administrator you want to preserve clues to find the root cause of the OOM, for example by dumping memory-related information about all user-space processes in the system: process identification, the process's total virtual memory, the physical memory it actually uses (also called RSS, Resident Set Size, which includes not only the memory used by the program itself but also memory occupied by shared libraries), the process's page-table usage, and so on. With this information in hand, it is much easier to understand what was really going on when the OOM happened.

When this parameter is 0, none of the per-process memory information described above is printed. On large systems with thousands of processes, printing the memory details of every task one by one can itself cause performance problems (remember that the system is already out of memory at that point). When the parameter is non-zero, dump_tasks is called to print the memory state of every task in the system in the following three situations:

(1) the OOM leads to a kernel panic

(2) no suitable "bad" process could be found

(3) a suitable process was found and is about to be killed

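A sketch of turning the dump on and finding it afterwards ("invoked oom-killer" is the marker the kernel prints at the start of an OOM report):

sysctl vm.oom_dump_tasks=1
dmesg | grep -A 40 "invoked oom-killer"    # the per-task memory table follows the report header
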
4. oom_adj, oom_score_adj and oom_score

Strictly speaking these parameters are per-process, which is why they live under /proc/xxx/ (where xxx is the process ID). Suppose we have chosen to kill a process when OOM occurs; a natural question then surfaces: which one exactly? The kernel's algorithm is very simple: give every task a score (oom_score; note that this file is read-only) and pick the one with the highest score. How is the score computed? The work is done by the kernel's oom_badness function, and the key points are the following:

(1) A task's score (oom_score) has two components. One part is the system's score, based mainly on the task's memory usage; the other is the user's score, namely oom_score_adj, and the task's final score combines the two. If the user sets the task's oom_score_adj to OOM_SCORE_ADJ_MIN (-1000), that effectively forbids the OOM killer from killing the process.

(2) Returning 0 here tells the OOM killer that this is a "good process", do not kill it. As we will see below, the lowest score actually computed is 1.

(3) As said above, the system's score looks at physical memory consumption, which has three parts: RSS, memory held on the swap file or swap device, and memory used by the page tables.

(4) A root process enjoys a 3% memory-usage privilege, so that share is subtracted here.

(5) The user can tune oom_score through oom_score_adj, whose range is -1000 to 1000. A value of 0 means no adjustment, a negative value applies a discount to the real score, and a positive value penalizes the task by raising its oom_score. In practice, the adjustment is computed against the amount of memory allocatable for this allocation (if there is no allocation constraint, that is all usable memory in the system; on a system using cpusets, it is the actual quota of the cpuset in question). The oom_badness function takes a parameter totalpages, which is exactly this upper bound on allocatable memory. The final score (points) is then adjusted by oom_score_adj: for example, an oom_score_adj of -500 gives the task a 50% discount measured against totalpages, that is, half of the allocatable upper bound is subtracted from the task's actual memory consumption.

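A back-of-the-envelope version of that scaling as a shell sketch (it assumes 4 KiB pages and no cpuset/memcg constraint, so totalpages is roughly MemTotal plus SwapTotal):

totalpages=$(( $(awk '/^(MemTotal|SwapTotal):/ {kb+=$2} END {print kb}' /proc/meminfo) / 4 ))
adj=-500                              # example oom_score_adj value
echo $(( adj * totalpages / 1000 ))   # pages subtracted from the task's points: about half of totalpages
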
With oom_score_adj and oom_score understood, the dust should have settled. oom_adj is an older interface with a similar role to oom_score_adj; it is kept for compatibility, and when you operate on it the kernel actually converts the value into an oom_score_adj. Interested readers can dig into that on their own; we will not go into the details here.
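
A quick sketch of that compatibility behaviour, using the shell's own PID purely for illustration (lowering the value requires root):

echo -15 > /proc/$$/oom_adj    # write through the legacy interface (the kernel may log that oom_adj is deprecated)
cat /proc/$$/oom_score_adj     # the value has been converted onto the oom_score_adj scale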

IV. References

1、Documentation/vm/numa_memory_policy.txt

2、Documentation/sysctl/vm.txt

3、Documentation/cgroup/cpusets.txt

4、Documentation/cgroup/memory.txt

5、Documentation/filesystems/proc.txt

How to Configure the Linux Out-of-Memory Killer

https://www.oracle.com/technical-resources/articles/it-infrastructure/dev-oom-killer.html

This article describes the Linux out-of-memory (OOM) killer and how to find out why it killed a particular process. It also provides methods for configuring the OOM killer to better suit the needs of many different environments.

About the OOM Killer
When a server that’s supporting a database or an application server goes down, it’s often a race to get critical services back up and running, especially if it is an important production system. When attempting to determine the root cause after the initial triage, it’s often a mystery as to why the application or database suddenly stopped functioning. In certain situations, the root cause of the issue can be traced to the system running low on memory and killing an important process in order to remain operational.

The Linux kernel allocates memory upon the demand of the applications running on the system. Because many applications allocate their memory up front and often don’t utilize the memory allocated, the kernel was designed with the ability to over-commit memory to make memory usage more efficient. This over-commit model allows the kernel to allocate more memory than it actually has physically available. If a process actually utilizes the memory it was allocated, the kernel then provides these resources to the application. When too many applications start utilizing the memory they were allocated, the over-commit model sometimes becomes problematic and the kernel must start killing processes in order to stay operational. The mechanism the kernel uses to recover memory on the system is referred to as the out-of-memory killer or OOM killer for short.
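
To see how far a given system is over-committed right now, the kernel's accounting can be read directly; CommitLimit and Committed_AS are the relevant /proc/meminfo fields, and the vm.overcommit_* sysctls control the policy:

grep -E 'CommitLimit|Committed_AS' /proc/meminfo
sysctl vm.overcommit_memory vm.overcommit_ratio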

Finding Out Why a Process Was Killed
When troubleshooting an issue where an application has been killed by the OOM killer, there are several clues that might shed light on how and why the process was killed. In the following example, we are going to take a look at our syslog to see whether we can locate the source of our problem. The oracle process was killed by the OOM killer because of an out-of-memory condition. The capital K in Killed indicates that the process was killed with a -9 signal, and this is usually a good sign that the OOM killer might be the culprit.

grep -i kill /var/log/messages*
host kernel: Out of Memory: Killed process 2592 (oracle).
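
On systemd-based systems, the same evidence can be pulled from the journal or the kernel ring buffer; a sketch:

journalctl -k | grep -i "out of memory"
dmesg | grep -i "oom-killer"
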
We can also examine the status of low and high memory usage on a system. It’s important to note that these values are real time and change depending on the system workload; therefore, these should be watched frequently before memory pressure occurs. Looking at these values after a process was killed won’t be very insightful and, thus, can’t really help in investigating OOM issues.

[root@test-sys1 ~]# free -lm
             total       used       free     shared    buffers     cached
Mem:           498         93        405          0         15         32
Low:           498         93        405
High:            0          0          0
-/+ buffers/cache:          44        453
Swap:         1023          0       1023
On this test virtual machine, we have 498 MB of low memory, of which 405 MB is free. The system has no swap usage. The -l switch shows high and low memory statistics, and the -m switch puts the output in megabytes to make it easier to read.

[root@test-sys1 ~]# egrep 'High|Low' /proc/meminfo
HighTotal: 0 kB
HighFree: 0 kB
LowTotal: 510444 kB
LowFree: 414768 kB
The same data can be obtained by examining /proc/meminfo and looking specifically at the high and low values. However, with this method, we don’t get swap information, and the output is in kilobytes.

Low memory is memory to which the kernel has direct physical access. High memory is memory to which the kernel does not have a direct mapping and, thus, must be accessed via a virtual address. On older 32-bit systems, you will see both low memory and high memory due to the way memory is mapped to virtual addresses. On 64-bit platforms, this split is not needed, and all system memory is shown as low memory.

While looking at /proc/meminfo and using the free command are useful for knowing what our memory usage is “right now”, there are occasions when we want to look at memory usage over a longer duration. The vmstat command is quite useful for this.

In the example in Listing 1, we are using the vmstat command to look at our resources every 45 seconds, 10 times. The -S M switch displays the memory columns in megabytes to make them easier to read. As you can see, something is consuming our free memory, but we are not yet swapping in this example.

[root@localhost ~]# vmstat -SM 45 10
procs -----------memory---------- ---swap-- -----io---- --system-- ------cpu------
r b swpd free buff cache si so bi bo in cs us sy id wa st
1 0 0 221 125 42 0 0 0 0 70 4 0 0 100 0 0
2 0 0 192 133 43 0 0 192 78 432 1809 1 15 81 2 0
2 1 0 85 161 43 0 0 624 418 1456 8407 7 73 0 21 0
0 0 0 65 168 43 0 0 158 237 648 5655 3 28 65 4 0
3 0 0 64 168 43 0 0 0 2 1115 13178 9 69 22 0 0
7 0 0 60 168 43 0 0 0 5 1319 15509 13 87 0 0 0
4 0 0 60 168 43 0 0 0 1 1387 15613 14 86 0 0 0
7 0 0 61 168 43 0 0 0 0 1375 15574 14 86 0 0 0
2 0 0 64 168 43 0 0 0 0 1355 15722 13 87 0 0 0
0 0 0 71 168 43 0 0 0 6 215 1895 1 8 91 0 0
Listing 1

The output of vmstat can be redirected to a file using the following command. We can even adjust the duration and the number of times in order to monitor longer. While the command is running, we can look at the output file at any time to see the results.

In the following example, we are looking at memory every 120 seconds, 1000 times. The & at the end of the line lets us run this in the background and regain our terminal.

vmstat -SM 120 1000 > memoryusage.out &
For reference, Listing 2 shows a section from the vmstat man page that provides additional information about the output the command provides. This is the memory-related information only; the command provides information about both disk I/O and CPU usage as well.

Memory
swpd: the amount of virtual memory used.
free: the amount of idle memory.
buff: the amount of memory used as buffers.
cache: the amount of memory used as cache.
inact: the amount of inactive memory. (-a option)
active: the amount of active memory. (-a option)

Swap
si: Amount of memory swapped in from disk (/s).
so: Amount of memory swapped to disk (/s).
Listing 2

There are a number of other tools available for monitoring memory and system performance for investigating issues of this nature. Tools such as sar (System Activity Reporter) and dtrace (Dynamic Tracing) are quite useful for collecting specific data about system performance over time. For even more visibility, the dtrace stability and data stability probes even have a trigger for OOM conditions that will fire if the kernel kills a process due to an OOM condition. More information about dtrace and sar is included in the “See Also” section of this article.
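
For example, if the sysstat package is installed, sar can sample memory utilization at a fixed interval; a sketch that takes 60-second samples 30 times:

sar -r 60 30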

There are several things that might cause an OOM event other than the system running out of RAM and available swap space due to the workload. The kernel might not be able to utilize swap space optimally due to the type of workload on the system. Applications that utilize mlock() or HugePages have memory that can’t be swapped to disk when the system starts to run low on physical memory. Kernel data structures can also take up too much space exhausting memory on the system and causing an OOM situation. Many NUMA architecture–based systems can experience OOM conditions because of one node running out of memory triggering an OOM in the kernel while plenty of memory is left in the remaining nodes. More information about OOM conditions on machines that have the NUMA architecture can be found in the “See Also” section of this article.
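
A couple of quick checks for the unswappable memory mentioned above (Mlocked and the HugePages counters are standard /proc/meminfo fields):

grep -E 'Mlocked|HugePages_Total|HugePages_Free|Hugepagesize' /proc/meminfo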

Configuring the OOM Killer
The OOM killer on Linux has several configuration options that allow developers some choice as to the behavior the system will exhibit when it is faced with an out-of-memory condition. These settings and choices vary depending on the environment and applications that the system has configured on it.

Note: It’s suggested that testing and tuning be performed in a development environment before making changes on important production systems.

In some environments, when a system runs a single critical task, rebooting when a system runs into an OOM condition might be a viable option to return the system back to operational status quickly without administrator intervention. While not an optimal approach, the logic behind this is that if our application is unable to operate due to being killed by the OOM killer, then a reboot of the system will restore the application if it starts with the system at boot time. If the application is manually started by an administrator, this option is not beneficial.

The following settings will cause the system to panic and reboot in an out-of-memory condition. The sysctl commands will set this in real time, and appending the settings to sysctl.conf will allow these settings to survive reboots. The X for kernel.panic is the number of seconds before the system should be rebooted. This setting should be adjusted to meet the needs of your environment.

sysctl vm.panic_on_oom=1
sysctl kernel.panic=X
echo "vm.panic_on_oom=1" >> /etc/sysctl.conf
echo "kernel.panic=X" >> /etc/sysctl.conf
We can also tune the way that the OOM killer handles OOM conditions with certain processes. Take, for example, our oracle process 2592 that was killed earlier. If we want to make our oracle process less likely to be killed by the OOM killer, we can do the following.

echo -15 > /proc/2592/oom_adj
We can make the OOM killer more likely to kill our oracle process by doing the following.

echo 10 > /proc/2592/oom_adj
If we want to exclude our oracle process from the OOM killer entirely, we can do the following. It is important to note that this might cause unexpected behavior depending on the resources and configuration of the system. If the kernel is unable to kill a process using a large amount of memory, it will move on to other available processes. Some of those processes might be important operating system processes, which ultimately might cause the system to go down.

echo -17 > /proc/2592/oom_adj
Valid values for oom_adj range from -16 to +15, and a setting of -17 exempts a process entirely from the OOM killer. The higher the number, the more likely the process is to be selected for termination if the system encounters an OOM condition. The contents of /proc/2592/oom_score can also be viewed to determine how likely a process is to be killed by the OOM killer. A score of 0 indicates that the process is exempt from the OOM killer. The higher the OOM score, the more likely a process is to be killed in an OOM condition.
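
For example, to check the current score of the process used above:

cat /proc/2592/oom_score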

The OOM killer can be completely disabled with the following command. This is not recommended for production environments, because if an out-of-memory condition does present itself, there could be unexpected behavior depending on the available system resources and configuration. This unexpected behavior could be anything from a kernel panic to a hang depending on the resources available to the kernel at the time of the OOM condition.

sysctl vm.overcommit_memory=2
echo "vm.overcommit_memory=2" >> /etc/sysctl.conf
For some environments, these configuration options are not optimal and further tuning and adjustments might be needed. Configuring HugePages for your kernel can assist with OOM issues depending on the needs of the applications running on the system.

See Also
Here are some additional resources about HugePages, dtrace, sar, and OOM for NUMA architectures:

HugePages information on My Oracle Support (requires login): HugePages on Linux: What It Is… and What It Is Not… [ID 361323.1]

dtrace

sar

NUMA architecture and OOM

About the Author
Robert Chase is a member of the Oracle Linux product management team. He has been involved with Linux and open source software since 1996. He has worked with systems as small as embedded devices and with large supercomputer-class hardware.
