Gone are the days of point-to-point batch feeds in analytical systems. Nowadays a single source system will provide data to multiple consumers, many of which require low-latency feeds. The need for data discovery to rapidly realise the business value in that data blurs the line between what is production data and what is not.
In this presentation we will see how Apache Kafka, implemented as part of Oracle's Big Data Architecture on the Big Data Appliance, can underpin a modern stream-based data platform. Batch or streaming, live or replayed, Kafka provides the data your platform needs. We will demonstrate primary feeds from Oracle GoldenGate into the data reservoir on HDFS, as well as ad hoc population of discovery lab environments such as Elasticsearch.