Hi,
We are running several 64-bit JVMs on Solaris 8. All VMs are configured with
-Xms1536m -Xmx3072m -XX:MaxPermSize=512M.
Recently we ran into memory issues on these machines and noticed that the VMs are 3.5GB to 7GB in size as reported by prstat.
These sizes imply a per-VM memory overhead of up to 100%.
We checked the memory use of these VMs with pmap and identified three large chunks of memory, two of which we could relate to our settings. However, the overhead remains unexplained to us.
pmap output of a VM at the lower end of the configured memory range (1.5GB):
...
0000000100114000 1324048K read/write/exec [ heap ] (1)
...
FFFFFFFE93800000 1635576K read/write/exec [ anon ] (2)
FFFFFFFF53800000 460744K read/write/exec [ anon ] (3)
...
total 3677264K
pmap output of a VM at the upper end of the configured memory range (3.0GB):
...
0000000100114000 3141568K read/write/exec [ heap ] (1)
...
FFFFFFFE93800000 3145680K read/write/exec [ anon ] (2)
FFFFFFFF53800000 457032K read/write/exec [ anon ] (3)
...
total 7008952K
We think that (2) corresponds to the Java object heap as configured via -Xms and -Xmx, and (3) is the size of the PermGen. The big chunk at (1), however, seems to be pure JVM overhead. We are aware that the VM needs some memory for shared libraries, threads and so on. According to our calculations, however, that would only explain ~500MB of overhead (~100MB of standard VM allocations + ~700 threads with a 512K stack each, i.e. ~350MB), not 3GB.
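Our back-of-the-envelope calculation can be written out as follows (a sketch only; the thread count and stack size are our own estimates, not measured values, and all figures are MB):

```shell
# Expected vs. observed footprint for the 3.0GB VM above.
calc=$(awk 'BEGIN {
  heap     = 3072              # -Xmx
  permgen  = 512               # -XX:MaxPermSize
  stacks   = 700 * 512 / 1024  # ~700 threads x 512K stack = 350 MB (estimate)
  misc     = 100               # shared libs, VM structures (estimate)
  expected = heap + permgen + stacks + misc
  observed = 7008952 / 1024    # pmap total of the 3.0GB VM
  printf "expected: %d MB, observed: %d MB, unexplained: %d MB\n",
         expected, observed, observed - expected
}')
echo "$calc"
```

That leaves roughly 2.8GB unaccounted for, which is essentially the chunk at (1).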
Is there something we are missing here? Is there a direct connection between the configured Java Object Heap and the VM overhead? Can the overhead be reduced or calculated in a predictable manner?
Greetings