Integration


Sparse hashmaps after extensive use

748856 — Feb 3 2010, edited Feb 9 2010
Hi,

I've been using Coherence Grid Edition to hold data about customer positions, which are updated frequently (several times per second per customer). We run an explicit eviction service rather than relying on Coherence's built-in expiry, because our eviction criteria are fairly complex.

In general, the use case is:

A market is opened, customers become active in it, and we may hold up to 50k positions for a given market. The market is then closed, and after a while the position data is moved off to disk in a separate, cooler Coherence cache. This means we run an EntryProcessor along with a filter to remove a large number of entries at once.
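For context, the Coherence idiom for this step is cache.invokeAll(someFilter, new ConditionalRemove(...)). Below is a minimal plain-Java sketch of the same filter-plus-bulk-remove pattern, not our actual code; the Position record and the close-time cutoff are hypothetical, for illustration only:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class BulkEvictDemo {
    // Hypothetical position value; the real cache values carry much more state.
    record Position(String customer, long marketCloseMillis) {}

    // Remove every entry whose market closed before the cutoff, mirroring
    // invokeAll(filter, ConditionalRemove) against a Coherence NamedCache.
    static int evictClosedBefore(Map<String, Position> cache, long cutoff) {
        int before = cache.size();
        cache.entrySet().removeIf(e -> e.getValue().marketCloseMillis() < cutoff);
        return before - cache.size();
    }

    public static void main(String[] args) {
        Map<String, Position> cache = new ConcurrentHashMap<>();
        cache.put("cust-1", new Position("cust-1", 100));
        cache.put("cust-2", new Position("cust-2", 200));
        cache.put("cust-3", new Position("cust-3", 300));
        int removed = evictClosedBefore(cache, 250);
        System.out.println(removed + " removed, " + cache.size() + " remain");
    }
}
```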

What we find is that heap usage stays high, even after a long quiet period when virtually all positions have been moved to disk. On dumping and analysing the heap, I see large trees of com.tangosol.util.SegmentedHashMap instances whose buckets often contain only one item and are otherwise empty (bucket size appears to be 17 in my case).
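To make the symptom concrete, here is a toy model of what the heap dump suggests. This is not Coherence's actual SegmentedHashMap implementation; it only assumes, as many hash maps do, that bucket storage allocated during growth is not released when entries are removed:

```java
public class SparseBucketsDemo {
    // Toy model: a table of fixed-size bucket arrays (17 slots, matching the
    // bucket size observed in the heap dump). A bucket array is allocated on
    // first insert into a table slot and is never freed on remove.
    static final int BUCKET_SIZE = 17;

    static class ToyMap {
        Object[][] table = new Object[1024][];
        int allocatedBuckets = 0;

        void put(int key, Object value) {
            int slot = key % table.length;
            if (table[slot] == null) {
                table[slot] = new Object[BUCKET_SIZE];
                allocatedBuckets++;
            }
            table[slot][0] = value; // toy simplification: one entry per bucket
        }

        void remove(int key) {
            int slot = key % table.length;
            if (table[slot] != null) {
                table[slot][0] = null; // entry cleared, bucket array retained
            }
        }
    }

    public static void main(String[] args) {
        ToyMap map = new ToyMap();
        for (int i = 0; i < 1000; i++) map.put(i, "pos-" + i);
        for (int i = 0; i < 1000; i++) map.remove(i);
        // Every entry is gone, yet all 17-slot bucket arrays remain reachable,
        // so the heap stays high after the mass eviction.
        System.out.println("allocated buckets after mass removal: " + map.allocatedBuckets);
    }
}
```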

So my question is: how can I force the Tangosol hashmap internals to compact their storage trees after heavy insert/remove activity, and why is this a problem at all? Surely I am not the only person regularly moving large numbers of entries to and from a cache. I'm not even sure how to reduce the bucket size, and even then I'm worried about the performance cost of such a change.

Cheers,
David
Comments
Post Details
Locked on Mar 9 2010
Added on Feb 3 2010
11 comments
1,637 views