
lease-granularity and what it really means for atomicity, concurrency

755396, Feb 22 2010 (edited Mar 1 2010)
Hi,

My goals are to avoid silently rejected updates and to preserve the atomicity of EntryProcessors (EPs) across a cluster. These goals should be met regardless of whether, for a particular key, (1) multiple threads on multiple storage-enabled nodes invoke EPs, or (2) multiple clients connected to multiple Extend proxies invoke EPs. Hope that makes sense?

I dropped a breakpoint into my custom EP and started up several cluster nodes (VMs) in my debugger. Somewhat unexpectedly, the breakpoint was hit by two threads simultaneously on the same node. If I step through, both threads apparently succeed in setting the entry value, which was previously unset. Each EP instance returns a different value, and the second value is the one retained by the cache.
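For reference, my custom EP is structurally similar to the sketch below. The class name, field, and set-if-absent logic are illustrative rather than my actual code, but the shape is the same: set the value if the entry is unset, and return whatever value ends up in the entry.

    import java.io.Serializable;

    import com.tangosol.util.InvocableMap;
    import com.tangosol.util.processor.AbstractProcessor;

    // Illustrative set-if-absent processor (not my real code).
    public class SetIfAbsentProcessor extends AbstractProcessor implements Serializable {
        private final Object m_value;

        public SetIfAbsentProcessor(Object value) {
            m_value = value;
        }

        public Object process(InvocableMap.Entry entry) {
            if (!entry.isPresent()) {
                // I expected at most one invocation per key to take this branch
                entry.setValue(m_value);
                return m_value;
            }
            // The entry was already set; return the existing value
            return entry.getValue();
        }
    }

My expectation was that only one of the two invocations per key would observe !isPresent(), yet in the debugger both appeared to.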

I'm using Coherence 3.5.3 on Java 1.6.0_17. I am not using TransactionMaps, nor do I wish to. The cluster has several storage-disabled nodes that serve as Extend proxies, and several storage-enabled nodes. The cache is replicated, with lease-granularity=member on all nodes.
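For completeness, the relevant fragment of my cache configuration looks roughly like the following. The scheme name is made up, and I have omitted the cache mappings and service details:

    <caching-schemes>
      <replicated-scheme>
        <scheme-name>my-replicated-scheme</scheme-name>
        <!-- The setting in question; the documented default is "thread" -->
        <lease-granularity>member</lease-granularity>
        <backing-map-scheme>
          <local-scheme/>
        </backing-map-scheme>
      </replicated-scheme>
    </caching-schemes>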

I found forum thread 689665, which explains how my chosen lease-granularity can result in a lock being lost. Another forum thread explains how a cached object's lease can result in cache updates being rejected.

If I set lease-granularity=thread, I am subsequently unable to reproduce the "issue". Based on the forum posts and the User Guide, it makes sense to me that lease-granularity=member essentially enables coarse-grained locking that restricts access to a key to a single node within the cluster, whereas lease-granularity=thread restricts access to a key to a single thread within the cluster. In the former case, threads on the owning node may steal locks from one another.

Some questions, then:

First of all, have I understood the semantics of lease-granularity correctly?

Why are both threads' invocations of my custom EP apparently successful when lease-granularity=member?

Should an Extend proxy node use a cache configuration that is distinct from the storage-node cache configurations? That is, should the proxy node use lease-granularity=member, whereas storage-enabled nodes use lease-granularity=thread? In this configuration, are Extend clients guaranteed to retain a lock across a conversation, regardless of other client connections competing for the same lock? (A sketch of the kind of conversation I mean is below.)
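For context, the kind of Extend-client conversation I have in mind is roughly the following sketch. The cache name is made up, and transform() is a placeholder for the real per-entry logic:

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    public class LockConversation {
        public static void main(String[] args) {
            NamedCache cache = CacheFactory.getCache("my-cache");
            if (cache.lock("key", -1)) {       // wait indefinitely for the lock
                try {
                    Object current = cache.get("key");
                    cache.put("key", transform(current));
                } finally {
                    cache.unlock("key");       // released by the same lock owner
                }
            }
        }

        // Placeholder for the real per-entry logic
        private static Object transform(Object current) {
            return current == null ? "initial" : current;
        }
    }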

Any advice appreciated.

Thanks in advance,

Simon

Edited by: user8850969 on 22-Feb-2010 05:46