Few Big Fat JVMs vs. Lots of Small JVMs
616199, Mar 24 2009 — edited Apr 2 2009

I have recently been getting into debates about the merits (or otherwise) of deploying Coherence on either fewer big JVMs or a larger number of smaller JVMs. For example, would it be better to run Coherence on three JVMs with 12GB heaps or nine JVMs with 4GB heaps? This would be on 64-bit JRockit, if that makes a difference. It has also been suggested that we could run even larger heaps than 12GB on 64-bit JVMs without a problem. If our cluster is going to be spread across multiple physical machines, do we run one JVM per machine with a big heap, or multiple JVMs per machine with smaller heaps?
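For what it's worth, a back-of-envelope sketch suggests the two layouts hold the same amount of data, so the choice really does come down to other factors. This is purely illustrative: it assumes Coherence's default of one backup copy per partition, and the common rule of thumb of budgeting roughly a third of each heap for cache data to leave GC headroom (both assumptions, not measurements from our cluster).

```java
// Rough capacity sketch for "few big JVMs" vs "many small JVMs".
// Assumptions (not from our environment): one backup copy per partition,
// and ~1/3 of each heap usable for cache data.
public class ClusterSizing {

    // Returns the approximate primary (unique) data capacity in GB.
    static double usableCacheGb(int jvms, double heapGb) {
        double dataBudget = jvms * heapGb / 3.0; // ~1/3 of total heap for cache data
        return dataBudget / 2.0;                 // halve it: primary + one backup
    }

    public static void main(String[] args) {
        System.out.printf("3 x 12GB -> ~%.0f GB primary data%n", usableCacheGb(3, 12));
        System.out.printf("9 x  4GB -> ~%.0f GB primary data%n", usableCacheGb(9, 4));
        // Same total heap (36GB) either way, so the same ~6GB of primary data.
    }
}
```

Since raw capacity is a wash, the interesting differences are GC behaviour, recovery time, and load spreading.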
Now, I have my own opinions on this, based on previous experience of large heaps being bad for GC. In some recent testing with Java 6, though, we were able to use much bigger heaps than before without problems. This has led people to believe that very big heaps are now fine.
I know that better support for large "off heap" caches is coming soon in Coherence, so maybe this is a bit academic. But even with large cache support, should we favour fewer big JVMs? What factors should be considered besides storage?
Presumably, if you are doing more processing in your cache servers (lots of queries, entry processors, etc.), then spreading this load across more JVMs is better. If you are doing predominantly puts and gets, then a few big JVMs would be OK.
If you have lots of Extend clients then you may need more servers (or, more likely, we would run more separate storage-disabled Extend proxies).
Presumably, if you lose a node, then recovery time for a big heap is slower than for a small one. But is it more likely that you would lose a whole machine rather than a single node?
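My rough reasoning on the recovery point: when a storage node dies, the survivors promote the backup copies of its partitions and then create fresh backups, so on the order of the lost node's data has to cross the network again. A crude sketch of that, with entirely made-up illustrative numbers (the per-node data sizes and the 1 Gbit/s link are assumptions, not measurements, and this ignores serialization and partition-transfer overheads):

```java
// Crude node-loss recovery estimate: lost data must be re-replicated
// across the network. All figures are illustrative assumptions.
public class RecoveryEstimate {

    // dataGb: cache data held by the lost JVM; gbitPerSec: usable network bandwidth.
    static double recoverySeconds(double dataGb, double gbitPerSec) {
        return dataGb * 8.0 / gbitPerSec; // GB -> gigabits, divided by link speed
    }

    public static void main(String[] args) {
        // Assume ~4GB vs ~12GB of cache data per node on a 1 Gbit/s network.
        System.out.printf("small node lost: ~%.0f s of transfer%n", recoverySeconds(4, 1));
        System.out.printf("large node lost: ~%.0f s of transfer%n", recoverySeconds(12, 1));
    }
}
```

By this naive measure the big node takes three times as long to recover, which is why I lean towards smaller nodes unless whole-machine failure dominates anyway.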
I am assuming there is never a "one size fits all" answer to this question, but are there a few up-to-date guidelines?
If anyone has any opinions, I would be glad to hear them.
JK