
Coherence SIG: Database backed Coherence cache

aragozin

May 31, 2012

Transcript

  1. Power of read-write-backing-map
     • Fetching data as needed
     • Separation of concerns
     • Gracefully handling concurrency
     • Write-behind – removing the DB from the critical path
     • Database operation bundling
  2. … and challenges
     • DB operations are an order of magnitude slower
       – less deterministic response times
       – Coherence thread pool issues
     • How to verify persistence with write-behind? Data are written to the DB in random order.
     • read-write-backing-map and expiry
  3. BinaryEntryStore, did you know?
     BinaryEntryStore is an alternative to the CacheLoader / CacheStore interfaces; it works with BinaryEntry instead of deserialized objects (a skeleton sketch follows below).
     • You can access the binary key and value
       – skip deserialization if the binary form is enough
     • You can access the previous version of the value
       – distinguish inserts from updates
       – find which fields have changed
     • You still cannot set the entry TTL in a cache loader
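     A minimal BinaryEntryStore skeleton, assuming a hypothetical MyDao helper for the actual database work (not part of Coherence); it only illustrates where the binary key, current value and previous value are available.

     import java.util.Set;

     import com.tangosol.net.cache.BinaryEntryStore;
     import com.tangosol.util.Binary;
     import com.tangosol.util.BinaryEntry;

     public class DbBinaryEntryStore implements BinaryEntryStore {

         public void load(BinaryEntry entry) {
             // MyDao is a hypothetical data access helper working with serialized keys/values
             Binary value = MyDao.selectBinary(entry.getBinaryKey());
             if (value != null) {
                 entry.updateBinaryValue(value); // no deserialization on the cache side
             }
         }

         public void loadAll(Set entries) {
             // a real implementation would batch this into a single DB round-trip
             for (Object o : entries) {
                 load((BinaryEntry) o);
             }
         }

         public void store(BinaryEntry entry) {
             // the previous binary value distinguishes inserts (null) from updates
             if (entry.getOriginalBinaryValue() == null) {
                 MyDao.insert(entry.getBinaryKey(), entry.getBinaryValue());
             } else {
                 MyDao.update(entry.getBinaryKey(), entry.getBinaryValue());
             }
         }

         public void storeAll(Set entries) {
             // a real implementation would bundle these into one batched DB statement
             for (Object o : entries) {
                 store((BinaryEntry) o);
             }
         }

         public void erase(BinaryEntry entry) {
             MyDao.delete(entry.getBinaryKey());
         }

         public void eraseAll(Set entries) {
             for (Object o : entries) {
                 erase((BinaryEntry) o);
             }
         }
     }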
  4. When is storeAll(…) called?
     • cache.getAll(…)
       – loadAll(…) will be called with partition granularity (since Coherence 3.7)
     • cache.putAll(…)
       – a write-behind scheme will use storeAll(…)
       – a write-through scheme will use store(…) (and this can be really slow)
  5. When is storeAll(…) called?
     • cache.invokeAll(…) / aggregate(…)
       – calling getValue() on an entry will invoke load(…) (if the entry is not cached yet)
       – calling setValue() on an entry will invoke put(…) (a synchronous store(…) in the write-through case)
       – you can check entry.isPresent() to avoid a needless read-through (see the sketch below)
       – Coherence will never use bulk cache store operations for aggregators and entry processors
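     A minimal sketch of the isPresent() check inside an entry processor (the class name is illustrative): it returns values for entries that are already cached and skips the rest, so absent keys do not trigger one read-through per key.

     import com.tangosol.util.InvocableMap;
     import com.tangosol.util.processor.AbstractProcessor;

     // Returns the cached value, or null without touching the cache loader.
     public class GetIfPresentProcessor extends AbstractProcessor {

         public Object process(InvocableMap.Entry entry) {
             if (!entry.isPresent()) {
                 return null; // skip: getValue() here would trigger load(...)
             }
             return entry.getValue();
         }
     }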
  6. Warming-up aggregator

     public static void preloadValuesViaReadThrough(Set<BinaryEntry> entries) {
         CacheMap backingMap = null;
         Set<Object> keys = new HashSet<Object>();
         for (BinaryEntry entry : entries) {
             if (backingMap == null) {
                 backingMap = (CacheMap) entry.getBackingMapContext().getBackingMap();
             }
             if (!entry.isPresent()) {
                 keys.add(entry.getBinaryKey());
             }
         }
         backingMap.getAll(keys);
     }

     The code above forces all entries of the working set to be preloaded with a single bulk loadAll(…). Call it before processing the entries.
  7. Why is load(…) called on write?
     Case:
     • An entry processor is called on a set of entries that are not in the cache and assigns values to them.
     Question:
     • Why is read-through triggered?
     Answer:
     • BinaryEntry.setValue(Object) returns the old value.
     • Use BinaryEntry.setValue(Object, boolean) instead (see the sketch below).
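     A minimal sketch of such a "blind write" processor (the class name is mine, and the value is assumed to be serializable): the two-argument setValue(…) does not return the previous value, so nothing is read through from the database.

     import com.tangosol.util.InvocableMap;
     import com.tangosol.util.processor.AbstractProcessor;

     public class BlindPutProcessor extends AbstractProcessor {

         private final Object value; // must be serializable / POF-friendly

         public BlindPutProcessor(Object value) {
             this.value = value;
         }

         public Object process(InvocableMap.Entry entry) {
             // second argument is fSynthetic: false means a regular update,
             // so write-through / write-behind still see the change
             entry.setValue(value, false);
             return null;
         }
     }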
  8. Bulk put with write-through
     You can use the same trick for updates.
     1. Pack your values into an entry processor.
     2. In the entry processor, obtain a backing map reference.
     3. Call putAll(…) on the backing map.
     Be careful!
     • You should only put keys belonging to the partition the entry processor was called for.
     • The backing map accepts serialized (binary) objects.
     (A sketch follows below.)
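     A sketch of that trick, with hypothetical class and field names; it assumes the payload map only contains keys owned by the partition the processor runs against, and converts keys and values to their internal (binary) form before calling putAll(…) on the backing map.

     import java.util.HashMap;
     import java.util.Map;

     import com.tangosol.util.BinaryEntry;
     import com.tangosol.util.InvocableMap;
     import com.tangosol.util.processor.AbstractProcessor;

     public class BulkPutProcessor extends AbstractProcessor {

         // keys/values in object form; they MUST all belong to a single partition
         private final Map payload;

         public BulkPutProcessor(Map payload) {
             this.payload = payload;
         }

         public Object process(InvocableMap.Entry e) {
             BinaryEntry entry = (BinaryEntry) e;
             Map binaryBatch = new HashMap();
             for (Object o : payload.entrySet()) {
                 Map.Entry kv = (Map.Entry) o;
                 // the backing map works with serialized keys and values
                 Object binKey   = entry.getContext().getKeyToInternalConverter().convert(kv.getKey());
                 Object binValue = entry.getContext().getValueToInternalConverter().convert(kv.getValue());
                 binaryBatch.put(binKey, binValue);
             }
             // one bulk putAll(...) on the backing map instead of per-entry setValue(...)
             entry.getBackingMapContext().getBackingMap().putAll(binaryBatch);
             return null;
         }
     }

     The processor itself still has to be invoked against a key (or keys) owned by the target partition.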
  9. Using operation bundling
     [Diagram: three worker threads handling Req 1–3; with operation bundling their individual store operations wait briefly and are combined by the read-write backing map into a single storeAll() call on the cache store, instead of each thread calling the cache store on its own.]
  10. Using operation bundling
      storeAll(…) with N keys can only be called if:
      • you have at least N concurrent operations
      • you have at least N threads in the worker pool

      <cachestore-scheme>
        <operation-bundling>
          <bundle-config>
            <operation-name>store</operation-name>
            <delay-millis>5</delay-millis>
            <thread-threshold>4</thread-threshold>
          </bundle-config>
        </operation-bundling>
      </cachestore-scheme>
  11. Checking the STORE decoration
      • Configure the cache as write-behind
      • Put data
      • Wait until the STORE decoration becomes TRUE (actually it will switch from FALSE to null) – see the usage sketch below

      public class StoreFlagExtractor extends AbstractExtractor implements PortableObject {
          // ...
          private Object extractInternal(Binary binValue, BinaryEntry entry) {
              if (ExternalizableHelper.isDecorated(binValue)) {
                  Binary store = ExternalizableHelper.getDecoration(binValue, ExternalizableHelper.DECO_STORE);
                  if (store != null) {
                      Object st = ExternalizableHelper.fromBinary(store, entry.getSerializer());
                      return st;
                  }
              }
              return Boolean.TRUE;
          }
      }
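      A possible way to use the extractor (cache name, class name and the 100 ms polling interval are illustrative): keep querying for entries whose DECO_STORE decoration is still FALSE until none are left, which means the write-behind queue has been flushed to the database.

      import com.tangosol.net.NamedCache;
      import com.tangosol.util.filter.EqualsFilter;

      public class WriteBehindCheck {

          // Blocks until no entry is still decorated with DECO_STORE = FALSE.
          public static void waitUntilStored(NamedCache cache) throws InterruptedException {
              EqualsFilter notStoredYet = new EqualsFilter(new StoreFlagExtractor(), Boolean.FALSE);
              while (!cache.keySet(notStoredYet).isEmpty()) {
                  Thread.sleep(100);
              }
          }
      }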
  12. How it works?
      • Distributed cache service – distribution and backup of data
      • read-write backing-map – coordination
      • Internal map – storing cache data, expiry
      • Cache store – interacting with persistent storage
      • Miss cache – caching cache loader misses
  13. How it works? (read-through) #1 – The cache service receives a get(…) request.
  14. How it works? #2 – The cache service invokes get(…) on the backing map; a partition transaction is opened.
  15. How it works? #3 – The backing map checks the internal map (and the miss cache, if present); the key is not found.
  16. How it works? #4 – The backing map invokes load(…) on the cache loader.
  17. How it works? #5 – The cache loader retrieves the value from the external data source.
  18. How it works? #6 – The value is loaded.
  19. How it works? #7 – The backing map updates the internal map.
  20. How it works? #8 – The internal map is observable, so the cache service receives an event about the new entry in the internal map.
  21. How it works? #9 – The call to the backing map returns; the cache service is ready to commit the partition transaction.
  22. How it works? #10 – The partition transaction is committed; the new value is sent to the backup node.
  23. How it works? #11 – The response to the get(…) request is sent back once the backup has confirmed the update.
  24. How it works? (write-behind) #1 – The cache service receives a put(…) request.
  25. How it works? #2 – The cache service invokes put(…) on the backing map; a partition transaction is opened.
  26. How it works? #3 – The value is immediately stored in the internal map and put on the write-behind queue.
  27. How it works? #4 – The cache service receives the map event; the backing map has decorated the value with DECO_STORE=false to mark it as not yet stored.
  28. How it works? #5 – The call to the backing map returns.
  29. How it works? #6 – The partition transaction is committed; the backup receives the value decorated with DECO_STORE=false.
  30. How it works? #7 – The cache service sends the response back as soon as the backup has confirmed.
  31. How it works? #8 – Eventually the cache store is called to persist the value; this happens on a separate thread.
  32. How it works? #9 – The value is stored in external storage by the cache store.
  33. How it works? #10 – Once the call to the cache store has returned successfully, the backing map removes the DECO_STORE decoration from the value in the internal map, and the cache service receives a map event.
  34. How it works? #11 – The map event was received by the cache service outside of the service thread; it is put on the OOB event queue and processed eventually. The update to the backup is sent once the event has been processed.
  35. Requests and jobs
      Problem
      • A single API call may produce hundreds of jobs for worker threads across the cluster (limited by the partition count).
      • Write-through and read-through jobs can be time-consuming.
      • While all threads are busy with time-consuming jobs, the cache is unresponsive.
  36. Requests and jobs
      Workarounds
      • Huge thread pools
      • Request throttling
        – by member (one network request at a time)
        – by partition (one job at a time)
      • Priorities
        – applicable only to entry processors and aggregators (see the sketch below)
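      For the "priorities" workaround, a minimal sketch (the class name is illustrative) of an entry processor base class that also implements Coherence's PriorityTask interface, which is how scheduling priorities and execution timeouts are attached to entry processors and aggregators.

      import com.tangosol.net.PriorityTask;
      import com.tangosol.util.processor.AbstractProcessor;

      public abstract class PrioritizedProcessor extends AbstractProcessor implements PriorityTask {

          public int getSchedulingPriority() {
              // SCHEDULE_STANDARD / SCHEDULE_FIRST / SCHEDULE_IMMEDIATE
              return PriorityTask.SCHEDULE_STANDARD;
          }

          public long getExecutionTimeoutMillis() {
              return PriorityTask.TIMEOUT_DEFAULT;
          }

          public long getRequestTimeoutMillis() {
              return PriorityTask.TIMEOUT_DEFAULT;
          }

          public void runCanceled(boolean fAbandoned) {
              // nothing to clean up in this sketch
          }
      }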
  37. “Canary” keys
      • Canary keys are special keys (one per partition) ignored by all cache operations.
      • A canary key is inserted once the “recovery” procedure has verified that the partition's data is complete.
      • If a partition has not been loaded yet, or has been lost due to a disaster, its canary key will be missing (see the check sketch below).
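      A sketch of the completeness check, assuming a hypothetical CanaryKey class whose instances are routed to the right partitions (e.g. by a custom partitioning strategy) and are ignored by the cache loader:

      import java.util.HashSet;
      import java.util.Set;

      import com.tangosol.net.NamedCache;
      import com.tangosol.net.PartitionedService;

      public class CanaryCheck {

          // Returns true if every partition holds its canary key,
          // i.e. the data set in the cache is complete.
          public static boolean isComplete(NamedCache cache) {
              PartitionedService service = (PartitionedService) cache.getCacheService();
              int partitionCount = service.getPartitionCount();

              Set canaryKeys = new HashSet();
              for (int p = 0; p < partitionCount; p++) {
                  canaryKeys.add(new CanaryKey(p)); // hypothetical key class, one per partition
              }
              // a missing canary means the partition was never loaded or has been lost
              return cache.getAll(canaryKeys).size() == partitionCount;
          }
      }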
  38. Recovery procedure
      • Store the object's hash code in the database.
      • Using the hash, you can query the database for all keys belonging to a partition.
      • Knowing all the keys, you can use read-through to pull the data back into the cache.
      • The cache is writable during recovery!
        – Coherence's internal concurrency control ensures consistency.
      (A per-partition sketch follows below.)
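      A per-partition recovery sketch; MyDao.loadKeysForPartition(…) and CanaryKey are hypothetical, standing in for a query that uses the stored hash codes to select the keys of one partition:

      import java.util.Set;

      import com.tangosol.net.NamedCache;

      public class PartitionRecovery {

          public static void recover(NamedCache cache, int partition) {
              // hypothetical DAO call: select keys whose stored hash maps to this partition
              Set keys = MyDao.loadKeysForPartition(partition);

              // read-through pulls the values back into the cache; the cache stays
              // writable, Coherence's own concurrency control keeps it consistent
              cache.getAll(keys);

              // mark the partition as complete (the canary value itself is arbitrary)
              cache.put(new CanaryKey(partition), Boolean.TRUE);
          }
      }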
  39. “Unbreakable cache” = read/write-through + canary keys + recovery
      • Key-based operations rely on read-through.
      • Filter-based operations check the “canary” keys (and activate recovery if needed).
      • Preloading = recovery.
      • The cache is writable at all times.
  40. Checking “canary” keys
      • Option 1: check “canary” keys, then perform the query
      • Option 2: perform the query, then check “canary” keys
  41. Checking “canary” keys
      • Option 1: check “canary” keys, then perform the query
      • Option 2: perform the query, then check “canary” keys
      • The right way: check the “canaries” inside the query!
  42. “Unbreakable cache”
      Motivation
      • An incomplete data set would invalidate hundreds of hours of number crunching.
      • 100% complete data or an exception.
      • A persistent DB is a requirement anyway.
      Summary
      • Transparent recovery (+ preloading for free).
      • Always writable (i.e. feeds are not waiting for recovery).
      • Graceful degradation of service under “disastrous conditions”.