Java per-process memory: Java process memory far exceeds the specified limits

I've researched most of the methods available for finding out how much memory a Java process is really using.

So far, I can say the total memory allocated should be made up of one or more of the following:

Heap Memory (supposedly controlled by my -XX:MaxHeapSize=4096m)

Permanent Memory (supposedly controlled by my -XX:MaxPermSize=1024m)

Reserved Code Cache (supposedly controlled by my -XX:ReservedCodeCacheSize=256m)

Number of threads * thread stack size (supposedly controlled by my -XX:ThreadStackSize=1024)

But the results are very different from what Linux reports, no matter which of the available methods I use to measure the memory consumption of a Java process.

In my case it is a Tomcat instance running on an Ubuntu 11.10 x86_64 machine, JVM 1.6_u26 64-bit, and ps -ALcf | grep org.apache.catalina.startup.Bootstrap | wc -l tells me I have 145 threads or processes running, all linked to the same root process (Tomcat).
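As a cross-check of that thread count (my addition, not part of the original question), the JVM can report its own live thread count through ThreadMXBean; the stack-size constant below is just an assumption mirroring the -XX:ThreadStackSize=1024 flag:

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadMXBean;

    public class ThreadStackEstimate {
        // Assumed per-thread stack size in KB, mirroring -XX:ThreadStackSize=1024.
        private static final long STACK_KB = 1024;

        public static void main(String[] args) {
            ThreadMXBean threads = ManagementFactory.getThreadMXBean();
            int count = threads.getThreadCount();
            System.out.println("Live threads: " + count);
            System.out.println("Estimated stack memory: " + (count * STACK_KB / 1024) + " MB");
        }
    }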

All of that summed up should give me a maximum total memory of

(4096 MB) + (1024 MB) + (256 MB) + 145 * (1024 KB) = 4096 + 1024 + 256 + 145 = 5521 MB (each 1024 KB stack being 1 MB).

What jmap -heap PID tells me, what ManagementFactory.memoryMXBean.(heapMemoryUsage + nonHeapMemoryUsage).getCommitted() tells me, and the theoretical value above are all on a par.
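For reference, here is a minimal, self-contained sketch of that committed-memory check via MemoryMXBean (the class name and output format are mine, not from the original question):

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;

    public class CommittedMemory {
        public static void main(String[] args) {
            // The JVM's own view of committed memory: heap plus non-heap.
            MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
            long heap = memory.getHeapMemoryUsage().getCommitted();
            long nonHeap = memory.getNonHeapMemoryUsage().getCommitted();
            System.out.println("Heap committed:     " + heap / (1024 * 1024) + " MB");
            System.out.println("Non-heap committed: " + nonHeap / (1024 * 1024) + " MB");
            System.out.println("Total committed:    " + (heap + nonHeap) / (1024 * 1024) + " MB");
        }
    }

Note that committed memory only covers the areas the JVM itself tracks; native allocations such as thread stacks, direct buffers, and JIT data structures do not show up here, which is part of the gap being discussed.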

Now to the Linux side: top and nmon both tell me the resident memory allocated by this process is 5.8 GB, roughly 5939.2 MB. But I also know this is only part of the memory, the part resident in RAM. VIRT in top and Size in nmon (both of which are supposed to represent the same thing) tell me the process is 7530 MB (or precisely 7710952 KB according to nmon).

This is TOO different from the expected maximum: 2009 MB above it, and according to jmap and jstat the heap memory allocation hasn't even reached its peak (2048 MB OldSpace + 1534 MB Eden + Survivors).

top also tells me the code segment (CODE) is 36 KB (fair, for the initial Catalina starter), and the data segment (DATA) is 7.3 GB (representing the rest).

This Tomcat instance is the only one running on this machine, and it has been seeing some instability: it needs restarting every three days or so, because the machine has 7647544 KB of RAM available and no swap (for performance reasons). I did the math for the limits and, expecting the process to respect them, saw that there was a pretty good safety margin left for all the other services running on the machine (none of which should matter apart from ssh and top itself): 7468 - 5521 = 1947 MB. That is almost too much for a "safety margin".

So, I want to understand where all that memory is being used and why the limits aren't being obeyed. If any information is lacking, I'll be happy to provide it.

Solution

Here is a very detailed article on how the JVM allocates and manages memory. It isn't as simple as what you expected based on the assumptions in your question, and it is well worth a thorough read.

ThreadStackSize in many implementations has minimum limits that vary by operating system and sometimes by JVM version; the thread stack setting is ignored if you set the limit below the native minimum for the JVM or the OS (on *nix, ulimit sometimes has to be set instead). Other command-line options work the same way, silently defaulting to higher values when the supplied values are too small. Don't assume that all the values you pass in represent what is actually used.
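One way to verify what the running JVM actually uses (my addition, not from the original answer, and HotSpot-specific) is to ask the com.sun.management diagnostic bean for the effective flag values:

    import java.lang.management.ManagementFactory;
    import javax.management.MBeanServer;
    import com.sun.management.HotSpotDiagnosticMXBean;

    public class CheckVmFlags {
        public static void main(String[] args) throws Exception {
            // Ask HotSpot which flag values are actually in effect,
            // instead of trusting the command line blindly.
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            HotSpotDiagnosticMXBean hotspot = ManagementFactory.newPlatformMXBeanProxy(
                    server, "com.sun.management:type=HotSpotDiagnostic", HotSpotDiagnosticMXBean.class);
            for (String flag : new String[] {"ThreadStackSize", "MaxHeapSize", "ReservedCodeCacheSize"}) {
                System.out.println(flag + " = " + hotspot.getVMOption(flag).getValue());
            }
        }
    }

ThreadStackSize is reported in KB; if the value differs from what you passed on the command line, the JVM has silently adjusted it.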

The classloaders (and Tomcat has more than one) eat up lots of memory that isn't easy to account for. The JIT also eats up a lot of memory, trading space for time, which is a good trade-off most of the time.
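To get a rough feel for that side of memory use (again my addition, not from the original answer), the class-loading count and per-pool committed sizes the JVM exposes can be dumped like this; pool names such as "Code Cache" and "PS Perm Gen" are HotSpot-specific:

    import java.lang.management.ClassLoadingMXBean;
    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryPoolMXBean;

    public class NonHeapPeek {
        public static void main(String[] args) {
            // How many classes all the classloaders have loaded so far.
            ClassLoadingMXBean classes = ManagementFactory.getClassLoadingMXBean();
            System.out.println("Loaded classes: " + classes.getLoadedClassCount());

            // Committed size of each memory pool, heap and non-heap alike.
            for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                System.out.println(pool.getName() + ": "
                        + pool.getUsage().getCommitted() / (1024 * 1024) + " MB committed");
            }
        }
    }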

The numbers you cite are pretty close to what I would expect.
