(Repost) Out of Memory: Killed process

If the error above appears in /var/log/messages, it means some process was killed automatically because the system ran out of memory. The kernel kills processes like this mainly to protect the system as a whole and keep the entire machine from hanging.
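
A quick way to confirm this is to search the log for the message (the exact wording varies slightly between kernel versions):

  # grep -i "out of memory" /var/log/messages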

The reposted article below describes in detail why this problem happens and how to avoid it.

http://www.redhat.com/archives/redhat-list/2007-August/msg00060.html
Since this problem seems to popup on different lists, this message has
been cross-posted to the general Red Hat discussion list, the RHEL3
(Taroon) list and the RHEL4 (Nahant) list.  My apologies for not having
the time to post this summary sooner.

I would still be banging my head against this problem were it not for
the generous assistance of Tom Sightler and Brian Long.

In general, the out of memory killer (oom-killer) begins killing
processes, even on servers with large amounts (6Gb+) of RAM.  In many
cases people report plenty of "free" RAM and are perplexed as to why the
oom-killer is whacking processes.  Indications that this has happened
appear in /var/log/messages:
  Out of Memory: Killed process [PID] [process name].

In my case I was upgrading various VMware servers from RHEL3 / VMware
GSX to RHEL4 / VMware Server.  One of the virtual machines on a server
with 16Gb of RAM kept getting whacked by the oom-killer.  Needless to
say, this was quite frustrating.

As it turns out, the problem was low memory exhaustion.  Quoting Tom:
"The kernel uses low memory to track allocations of all memory thus a
system with 16GB of memory will use significantly more low memory than a
system with 4GB, perhaps as much as 4 times.  This extra pressure
happens from the moment you turn the system on before you do anything at
all because the kernel structures have to be sized for the potential of
tracking allocations in four times as much memory."
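
A rough back-of-the-envelope calculation makes the scale clear (assuming roughly 48 bytes of struct page bookkeeping per 4 KB page; the exact size varies by kernel version and configuration):

  16 GB / 4 KB per page        = 4,194,304 pages
  4,194,304 pages * ~48 bytes  ≈ 192 MB of low memory

With only around 800 MB of low memory available under the standard 32-bit split (see the meminfo output below), a sizeable chunk is consumed by page tracking alone before any application starts.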

You can check the status of low & high memory a couple of ways:

# egrep 'High|Low' /proc/meminfo
HighTotal:     5111780 kB
HighFree:         1172 kB
LowTotal:       795688 kB
LowFree:         16788 kB

# free -lm
             total       used       free     shared    buffers     cached
Mem:          5769       5751         17          0          8       5267
Low:           777        760         16          0          0          0
High:         4991       4990          1          0          0          0
-/+ buffers/cache:        475       5293
Swap:         4773          0       4773
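
To watch low memory over time rather than taking a single snapshot, something as simple as the following helps spot the trend before the oom-killer fires:

  # watch -n 5 "egrep 'High|Low' /proc/meminfo"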

When low memory is exhausted, it doesn't matter how much high memory is
available, the oom-killer will begin whacking processes to keep the
server alive.

There are a couple of solutions to this problem:

If possible, upgrade to 64-bit Linux.  This is the best solution because
*all* memory becomes low memory.  If you run out of low memory in this
case, then you're *really* out of memory. ;-)
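
You can quickly confirm which architecture a machine is running; x86_64 indicates a 64-bit kernel, while i686 or i386 indicates 32-bit:

  # uname -m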

If limited to 32-bit Linux, the best solution is to run the hugemem
kernel.  This kernel splits low/high memory differently, and in most
cases should provide enough low memory to map high memory.  In most
cases this is an easy fix - simply install the hugemem kernel RPM &
reboot.
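
For example (a sketch assuming RHEL 3/4 with up2date available; use your distribution's package tool as appropriate):

  # up2date -i kernel-hugemem
  # reboot
  # uname -r    (the running kernel version should now end in "hugemem")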

If running the 32-bit hugemem kernel isn't an option either, you can try
setting /proc/sys/vm/lower_zone_protection to a value of 250 or more.
This will cause the kernel to try to be more aggressive in defending the
low zone from allocating memory that could potentially be allocated in
the high memory zone.  As far as I know, this option isn't available
until the 2.6.x kernel. Some experimentation to find the best setting
for your environment will probably be necessary.  You can check & set
this value on the fly via:
  # cat /proc/sys/vm/lower_zone_protection
  # echo "250" > /proc/sys/vm/lower_zone_protection

To set this option on boot, add the following to /etc/sysctl.conf:
  vm.lower_zone_protection = 250
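
After editing /etc/sysctl.conf, you can apply the file and verify the value immediately rather than waiting for a reboot:

  # sysctl -p
  # cat /proc/sys/vm/lower_zone_protection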

As a last-ditch effort, you can disable the oom-killer.  This option can
cause the server to hang, so use it with extreme caution (and at your
own risk)!
Check status of oom-killer:
  # cat /proc/sys/vm/oom-kill

Turn oom-killer off/on:
  # echo "0" > /proc/sys/vm/oom-kill
  # echo "1" > /proc/sys/vm/oom-kill

To make this change take effect at boot time, add the following
to /etc/sysctl.conf:
  vm.oom-kill = 0

For processes that would have been killed, but weren't because the oom-
killer is disabled, you'll see the following message
in /var/log/messages:
  "Would have oom-killed but /proc/sys/vm/oom-kill is disabled"

Sorry for being so long-winded.  I hope this helps others who have
struggled with this problem.
 
 
To summarize:
1. This problem generally occurs on 32-bit systems. On 64-bit systems all memory is treated as low memory, so if you hit it there, you really are out of memory.
2. You can disable the oom-killer to avoid the error, but that is even more dangerous.
3. On 32-bit systems, you can tune the kernel (e.g. lower_zone_protection) to defend low memory more aggressively and avoid the error; strictly speaking this does not eliminate the problem, it only raises the threshold at which it occurs.

From the ITPUB blog: http://blog.itpub.net/25016/viewspace-1004687/. Please credit the source when reposting.
