Modern data pipelines have come a long way from traditional publisher-subscriber messaging and asynchronous execution.
Today, tools like Kafka serve as an organization's data backbone, processing terabytes of data daily across real-time microservices and batch jobs backed by different data stores and tools.
In this talk we will discuss the key differences between Kafka and traditional queues, and how data pipelines have transformed the backend architecture of many big data companies, providing better resiliency through concepts like back pressure, distributed logs, and stream processing.
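To preview the queue-versus-log distinction the talk covers, here is a minimal sketch in plain Python (no Kafka client; the group names and `poll` helper are illustrative, not a real API): a traditional queue removes a message once it is consumed, while a Kafka-style log is append-only and lets each consumer group track its own offset, so the same records can be replayed independently.

```python
from collections import deque

# Traditional queue: each message is delivered once, then it is gone.
queue = deque(["m1", "m2", "m3"])
consumed_by_a = queue.popleft()  # "m1" is removed; no other consumer sees it
consumed_by_b = queue.popleft()  # "m2"

# Kafka-style distributed log: an append-only sequence of records.
# Each consumer group keeps its own offset (cursor) into the log,
# so independent groups read the same records at their own pace.
log = ["m1", "m2", "m3"]
offsets = {"analytics": 0, "billing": 0}  # hypothetical consumer groups

def poll(group):
    """Return the next record for a consumer group and advance its offset."""
    record = log[offsets[group]]
    offsets[group] += 1
    return record

print(poll("analytics"))  # m1
print(poll("billing"))    # m1 -- same record, read again via an independent offset
```

The replayability shown here is also what enables back pressure handling: a slow consumer group simply falls behind in its offset rather than forcing the producer to stop or drop messages.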