Kafka's model:
- Implemented as persistent and durable append-only log(s)
- Dumb broker, smart consumer (contrary to standard message queues) - see the toy sketch below
- Highly performant and scalable (if it's good enough for LinkedIn, it's gonna be just fine for you)
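One way to picture the "dumb broker, smart consumer" split: the broker's job reduces to appending and serving reads by position, while each consumer remembers its own offset. The toy sketch below (plain Java, no Kafka involved, all names made up) only illustrates that division of labour, not how Kafka is actually implemented.

```java
import java.util.ArrayList;
import java.util.List;

// Toy append-only log: the "broker" only appends and serves reads by offset.
class AppendOnlyLog {
    private final List<String> entries = new ArrayList<>();

    long append(String message) {      // returns the message's offset
        entries.add(message);
        return entries.size() - 1;
    }

    String read(long offset) {         // reads never remove anything
        return entries.get((int) offset);
    }

    long endOffset() {
        return entries.size();
    }
}

public class DumbBrokerSmartConsumer {
    public static void main(String[] args) {
        AppendOnlyLog log = new AppendOnlyLog();
        log.append("first");
        log.append("second");

        // The consumer, not the broker, keeps track of where it is in the log.
        long position = 0;
        while (position < log.endOffset()) {
            System.out.println("offset=" + position + " value=" + log.read(position));
            position++;
        }
    }
}
```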
Messages are:
- Not removed from the log once consumed - retention is controlled via a retention policy instead
- Stored in topics that can be partitioned
- Identified by "offsets" in a given topic/partition (see the consumer sketch below)
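As a rough sketch of what "identified by offsets" looks like in practice, the snippet below uses the Java kafka-clients consumer to print the (topic, partition, offset) coordinates of whatever it reads. The broker address localhost:9092, the group id offset-peek, and the topic events are placeholders.

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class OffsetPeek {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder broker address
        props.put("group.id", "offset-peek");                // placeholder consumer group
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("events"));           // placeholder topic name
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> record : records) {
                // Every record is addressed by (topic, partition, offset).
                System.out.printf("%s-%d@%d: %s%n",
                        record.topic(), record.partition(), record.offset(), record.value());
            }
        }
    }
}
```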
Partitions:
- Consumed in parallel (at most one consumer from a consumer group per partition)
- More partitions → more throughput
- More partitions → potentially more problems as well :( (increased unavailability, latency)
- Ordering of the messages is preserved only in a given partition of a topic - critical if you care about causality (see the keyed-producer sketch below)
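To keep causally related messages in order, the usual approach is to give them the same key: the default partitioner hashes the key, so records sharing a key land in the same partition and are read back in the order they were appended. A minimal producer sketch, with the topic orders and the key customer-42 as made-up examples:

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

public class KeyedProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder broker address
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Records with the same key are hashed to the same partition,
            // so consumers see them in the order they were produced.
            producer.send(new ProducerRecord<>("orders", "customer-42", "order created"));
            producer.send(new ProducerRecord<>("orders", "customer-42", "order paid"));
            producer.send(new ProducerRecord<>("orders", "customer-42", "order shipped"));
            producer.flush();
        }
    }
}
```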
Kafka and RabbitMQ are built for a different purpose (one is not better than another):
- RabbitMQ is a smart broker/dumb consumer type of system
- No long-term, replayable storage in Rabbit - messages are gone once they are consumed and acknowledged
Kafka, on the other hand:
- Ability to replay events (even indefinitely, durability is for real) - see the replay sketch below
- Log compaction (the log keeps at least the latest value for each key)
- Heart of the event-driven ecosystems (Kafka Streams, Apache Samza/Spark/Flink, oh my!)
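Replaying is just a matter of moving the consumer's position back, since the log itself keeps the data. A minimal sketch with the Java client, again with placeholder broker and topic names: seekToBeginning() rewinds the assigned partition to the earliest offset still retained, and everything from there on is re-read.

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class ReplayFromStart {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder broker address
        props.put("group.id", "replay-demo");                // placeholder consumer group
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Manually assign partition 0 of the topic and rewind to the earliest
            // retained offset - everything still in the log gets re-read.
            TopicPartition tp = new TopicPartition("events", 0);
            consumer.assign(List.of(tp));
            consumer.seekToBeginning(List.of(tp));

            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("replayed %d: %s%n", record.offset(), record.value());
            }
        }
    }
}
```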