The saying goes that there are only two hard things in Computer Science: cache invalidation and naming things. Well, it turns out the first one is actually solved. Join us for this session to learn how to keep read views of your data in distributed caches close to your users, always kept in sync with your primary data stores via change data capture. You will learn how to:
* Implement a low-latency data pipeline for cache updates based on Debezium, Apache Kafka, and Infinispan
* Create denormalized views of your data using Kafka Streams and make them accessible via plain key look-ups from a nearby cache cluster
* Propagate updates between cache clusters using cross-site replication
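The denormalization step in the second bullet can be sketched as a pure join function, independent of the actual Kafka Streams API. A minimal illustration, assuming hypothetical `Order` and `Customer` record types (these names are ours, not from the talk's demo): each order event is enriched with the matching customer data so the cache can serve the combined view with a single key look-up.

```java
import java.util.Map;

public class DenormalizeSketch {
    // Illustrative source-table records; in the real pipeline these would
    // arrive as Debezium change events on separate Kafka topics.
    record Customer(int id, String name, String email) {}
    record Order(int id, int customerId, String item) {}
    // The denormalized view that gets written to the cache, keyed by order id.
    record OrderView(int orderId, String item, String customerName, String customerEmail) {}

    // The per-event join a Streams topology would perform, written as a pure function.
    static OrderView enrich(Order order, Map<Integer, Customer> customersById) {
        Customer c = customersById.get(order.customerId());
        return new OrderView(order.id(), order.item(), c.name(), c.email());
    }

    public static void main(String[] args) {
        Map<Integer, Customer> customers =
                Map.of(1, new Customer(1, "Ada", "ada@example.com"));
        OrderView view = enrich(new Order(42, 1, "keyboard"), customers);
        System.out.println(view.orderId() + " -> " + view.customerName());
    }
}
```

In a real topology this join would be a foreign-key table join with the result streamed into the cache cluster; the sketch only shows the shape of the derived view.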
We'll also touch on some advanced concepts, such as detecting and rejecting writes to the system of record that are derived from outdated cached state, and show in a demo how all the pieces come together, connected, of course, via Apache Kafka.
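One common way to detect writes derived from outdated cached state is an optimistic version check: the client submits, alongside its write, the version it read from the cache, and the system of record rejects the write if its current version has moved on. This is a minimal in-memory sketch of that idea (the version-token approach and all names here are our illustration, not necessarily the mechanism shown in the session):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class StaleWriteGuard {
    // Current version of each record in the system of record
    // (an in-memory stand-in for the primary database).
    private final Map<String, Long> versions = new ConcurrentHashMap<>();

    // The write carries the version the client read from the cache.
    // If the primary store has advanced since that cached state was
    // produced, the write is derived from stale data and is rejected.
    public synchronized boolean tryWrite(String key, long versionReadFromCache) {
        long current = versions.getOrDefault(key, 0L);
        if (versionReadFromCache != current) {
            return false; // outdated cached state -> reject
        }
        versions.put(key, current + 1); // accept and bump the version
        return true;
    }
}
```

A client that read version 0 from the cache can write once; a second write still claiming version 0 is rejected until the client re-reads the refreshed view.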