Java process RSS in top far larger than jcmd report: Java process memory usage (jcmd vs pmap)

I have a java application running on Java 8 inside a docker container. The process starts a Jetty 9 server and a web application is being deployed. The following JVM options are passed: -Xms768m -Xmx768m.
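Native Memory Tracking is also enabled, since jcmd <pid> VM.native_memory only works when the JVM is started with it; the startup line therefore looks roughly like this (the exact option order here is illustrative):

$ java -server -Xms768m -Xmx768m -XX:NativeMemoryTracking=summary ...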

Recently I noticed that the process consumes a lot of memory:

$ ps aux 1
USER PID %CPU %MEM     VSZ     RSS TTY STAT START TIME COMMAND
app    1  0.1 48.9 5268992 2989492 ?   Ssl  Sep23  4:47 java -server ...

$ pmap -x 1
Address           Kbytes     RSS   Dirty Mode Mapping
...
total kB         5280504 2994384 2980776

$ jcmd 1 VM.native_memory summary
1:

Native Memory Tracking:

Total: reserved=1378791KB, committed=1049931KB
-                 Java Heap (reserved=786432KB, committed=786432KB)
                            (mmap: reserved=786432KB, committed=786432KB)

-                     Class (reserved=220113KB, committed=101073KB)
                            (classes #17246)
                            (malloc=7121KB #25927)
                            (mmap: reserved=212992KB, committed=93952KB)

-                    Thread (reserved=47684KB, committed=47684KB)
                            (thread #47)
                            (stack: reserved=47288KB, committed=47288KB)
                            (malloc=150KB #236)
                            (arena=246KB #92)

-                      Code (reserved=257980KB, committed=48160KB)
                            (malloc=8380KB #11150)
                            (mmap: reserved=249600KB, committed=39780KB)

-                        GC (reserved=34513KB, committed=34513KB)
                            (malloc=5777KB #280)
                            (mmap: reserved=28736KB, committed=28736KB)

-                  Compiler (reserved=276KB, committed=276KB)
                            (malloc=146KB #398)
                            (arena=131KB #3)

-                  Internal (reserved=8247KB, committed=8247KB)
                            (malloc=8215KB #20172)
                            (mmap: reserved=32KB, committed=32KB)

-                    Symbol (reserved=19338KB, committed=19338KB)
                            (malloc=16805KB #184025)
                            (arena=2533KB #1)

-    Native Memory Tracking (reserved=4019KB, committed=4019KB)
                            (malloc=186KB #2933)
                            (tracking overhead=3833KB)

-               Arena Chunk (reserved=187KB, committed=187KB)
                            (malloc=187KB)

As you can see, there is a huge difference between the RSS (2.8 GB) and what is actually shown by the VM native memory statistics (1.0 GB committed, 1.3 GB reserved).

Why is there such a huge difference? I understand that RSS also includes memory mapped for shared libraries, but after analyzing the verbose pmap output I realized that it is not a shared-library issue; the memory is consumed by what pmap labels as [ anon ] mappings. Why has the JVM allocated so many anonymous memory blocks?
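As a rough way to quantify this (assuming the procps pmap whose -x columns are Address, Kbytes, RSS, Dirty, Mode, Mapping, as in the output above, so RSS is the third field), the anonymous resident memory can be summed like this:

$ pmap -x 1 | awk '/anon/ { sum += $3 } END { print sum " KB RSS in [ anon ] mappings" }'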

While searching I found the following topic:

Why does a JVM report more committed memory than the linux process resident set size?

However, the case described there is different, because the RSS shown there is lower than what the JVM stats report. I have the opposite situation and can't figure out the reason.

Solution

I was facing a similar issue with one of our Apache Spark jobs, where we submitted our application as a fat jar. After analyzing thread dumps we figured out that Hibernate was the culprit: we loaded the Hibernate classes on startup of the application, and that path used java.util.zip.Inflater.inflateBytes to read the Hibernate class files, which was overshooting our native resident memory usage by almost 1.5 GB. Here is a bug raised against Hibernate for this issue:

https://hibernate.atlassian.net/browse/HHH-10938?attachmentOrder=desc

The patch suggested in the comments worked for us. Hope this helps.
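For background on why this shows up as native rather than heap memory: java.util.zip.Inflater (and Deflater) wrap native zlib buffers that are only released when end() is called, or eventually via finalization, so heavy use during jar/class scanning can pile up anonymous native memory that NMT does not attribute to the Java heap. The sketch below is not the Hibernate code, just a minimal, self-contained illustration of that mechanism; the class name and sample payload are made up for the example.

import java.nio.charset.StandardCharsets;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

// Minimal illustration: Inflater/Deflater hold native (zlib) state outside the Java heap.
// Forgetting end() lets that native memory linger until GC/finalization catches up.
public class InflaterNativeMemoryDemo {

    public static void main(String[] args) throws DataFormatException {
        byte[] compressed = compress("hello native memory".getBytes(StandardCharsets.UTF_8));

        Inflater inflater = new Inflater();
        try {
            inflater.setInput(compressed);
            byte[] out = new byte[64];
            int n = inflater.inflate(out);
            System.out.println(new String(out, 0, n, StandardCharsets.UTF_8));
        } finally {
            // Releases the native zlib buffers immediately instead of relying on finalization.
            inflater.end();
        }
    }

    private static byte[] compress(byte[] data) {
        Deflater deflater = new Deflater();
        try {
            deflater.setInput(data);
            deflater.finish();
            byte[] buf = new byte[128];
            int n = deflater.deflate(buf);
            byte[] result = new byte[n];
            System.arraycopy(buf, 0, result, 0, n);
            return result;
        } finally {
            deflater.end();
        }
    }
}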
