Presented at the University of Cambridge Computer Laboratory on 12 November 2014 (http://www.talks.cam.ac.uk/talk/index/54973) and at the Imperial College London Large-Scale Distributed Systems Group on 13 November 2014 (http://lsds.doc.ic.ac.uk/node/194).
Stream processing is an old idea, but it is currently being rediscovered in industry under pressure from increasing data volumes (throughput), increasingly diverse data sources (complexity), and increasing impatience (latency).
Apache Samza and Apache Kafka, two open source projects that originated at LinkedIn, are being successfully used at scale in production. Kafka is a fault-tolerant message broker, and Samza provides a scalable processing model on top of it. They take an interesting “back to basics” approach that questions many assumptions from the last few decades of data management practice.
In particular, their design is informed by the experience of operating large-scale systems under heavy load, and the challenges that arise in a large organisation with hundreds or even thousands of software engineers. This talk will introduce the architecture of Samza and Kafka, and explain some of the reasoning behind their underlying design decisions.