
Extreme Performance and Scalability with Near Caches

The aim of near caching is to provide a bridge between fast, in-memory, local caching and remote, massively scalable Data Grids, in such a way that the most recently or most frequently accessed data is quickly available while clients remain able to transparently and seamlessly access the remote Data Grid when needed. With near-zero-latency access to local data and the scalability of a remote Data Grid behind it, it's no wonder this is one of the most requested Infinispan patterns. In this talk, Galder will offer a detailed view of the pattern along with best practices for deploying it in your own environment. The talk will finish with a demo showing near caching in action!
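The pattern the abstract describes — serve hot keys from a local map, fall through to the remote grid on a miss, and drop local entries when the grid reports a change — can be sketched in a few lines of plain Java. This is a minimal illustration, not the Infinispan Hot Rod API; `remoteLookup` is a hypothetical stand-in for something like `remoteCache::get`.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Minimal near-cache sketch: a small local map in front of a remote store.
// remoteLookup is a hypothetical stand-in for a Hot Rod RemoteCache lookup.
public class NearCache<K, V> {
    private final Map<K, V> local = new ConcurrentHashMap<>();
    private final Function<K, V> remoteLookup;

    public NearCache(Function<K, V> remoteLookup) {
        this.remoteLookup = remoteLookup;
    }

    // Serve hot keys locally; fall back to the remote grid on a miss.
    public V get(K key) {
        return local.computeIfAbsent(key, remoteLookup);
    }

    // Called when the remote grid reports a change for this key,
    // so the next get() re-fetches fresh data.
    public void invalidate(K key) {
        local.remove(key);
    }
}
```

Repeated reads of the same key hit the remote grid only once until the entry is invalidated.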

Galder Zamarreño

October 31, 2011


Transcript

  1. Galder Zamarreño • R&D Engineer, Red Hat Inc. • Infinispan developer • 5+ years exp. with distributed data systems • Twitter: @galderz • Blog: zamarreno.com
  2. Agenda • Introduction to Infinispan • Near caches • Clustered near caches • Near caches with JMS • Demo
  3. Perfectly suited for it! Highly concurrent thanks to MVCC, has built-in eviction, pluggable persistence, etc.
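Slide 3's mention of built-in eviction refers to the local cache bounding its size by discarding cold entries. A minimal least-recently-used sketch using `LinkedHashMap` illustrates the idea — this is illustrative only, not Infinispan's actual (more sophisticated, concurrency-aware) eviction implementation.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU eviction sketch: when the map exceeds maxEntries,
// the least-recently-accessed entry is discarded automatically.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public LruCache(int maxEntries) {
        super(16, 0.75f, true); // true = iterate in access order
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries; // evict the least-recently-used entry
    }
}
```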
  4. Why a separate data tier? You can manage and tune the data tier independently, it helps build stateless application tiers, and it is easily scalable!
  5. JMS to the rescue! The Hot Rod protocol isn't there yet, but JMS can easily fill the gap!
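Slide 5's idea is that, where the Hot Rod protocol cannot yet push invalidations to clients, a JMS topic can carry them instead: a writer publishes the modified key after updating the grid, and every subscribed node drops its stale local copy. The sketch below shows that flow with an in-process stand-in for a JMS `Topic` plus `MessageListener` (real code would use a JMS provider and broker; the class and method names here are hypothetical).

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// In-process stand-in for a JMS topic carrying cache invalidation
// messages; subscribers play the role of JMS MessageListeners.
public class InvalidationTopic {
    public interface Listener {
        void onInvalidate(String key);
    }

    private final List<Listener> subscribers = new CopyOnWriteArrayList<>();

    public void subscribe(Listener listener) {
        subscribers.add(listener);
    }

    // A writer publishes the modified key after updating the grid;
    // each node's near cache drops its now-stale entry.
    public void publish(String key) {
        for (Listener listener : subscribers) {
            listener.onInvalidate(key);
        }
    }
}
```

Each node would subscribe its near cache's invalidation callback to the topic at startup.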