Distributed Streaming with Apache Kafka in Kubernetes
Distributed streaming made easy in the cloud

Aykut M. Bulgu - @systemcraftsman
Middleware Consultant - Red Hat
What is Apache Kafka?

- A publish/subscribe messaging system
- A data streaming platform
- A distributed, horizontally scalable, fault-tolerant commit log
What is Apache Kafka?

- Developed at LinkedIn back in 2010, open sourced in 2011
- Distributed by design
- High throughput
- Designed to be fast, scalable, durable and highly available
- Data partitioning (sharding)
- Ability to handle a huge number of consumers
Traditional Messaging

[Diagram: a producer sends messages 1, 2, 3 through a queue to a consumer]

- Reference-count-based message retention model
  - When a message is consumed, it is deleted from the broker
- "Smart broker, dumb client"
  - The broker knows about all consumers
  - Can perform per-consumer filtering
Apache Kafka

[Diagram: a producer appends messages 1, 2, 3 to a Kafka topic; a consumer reads them at its own position]

- Time-based message retention model by default
  - Messages are retained according to topic configuration (time or capacity)
  - Also "compacted topics" - like a "last-value topic"
- "Dumb broker, smart client"
  - The client maintains its own position in the message stream
  - The message stream can be replayed
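The offset-based replay and log-compaction semantics above can be sketched in a few lines of Python. This is an illustrative in-memory model of one topic partition, not the real Kafka client API:

```python
# Illustrative in-memory model of a Kafka topic partition (not the Kafka API).
# Records are appended to a log and addressed by offset; each consumer tracks
# its own position, so the same stream can be replayed from any offset.

class TopicPartition:
    def __init__(self):
        self.log = []  # append-only list of (key, value) records

    def append(self, key, value):
        self.log.append((key, value))
        return len(self.log) - 1  # the new record's offset

    def read_from(self, offset):
        # The consumer chooses where to start; the broker does not track it.
        return self.log[offset:]

    def compacted(self):
        # Log compaction keeps only the latest value per key ("last value").
        latest = {}
        for key, value in self.log:
            latest[key] = value
        return latest

tp = TopicPartition()
tp.append("user-1", "created")
tp.append("user-2", "created")
tp.append("user-1", "updated")

print(tp.read_from(1))  # replay the stream from offset 1
print(tp.compacted())   # {'user-1': 'updated', 'user-2': 'created'}
```

Note how nothing is deleted on read: the "dumb broker" just keeps the log, and replay is simply reading again from an older offset.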
Kafka Use Cases

- Messaging: Replacement for a traditional message broker. High scale, high throughput, built-in partitioning, replication, and fault tolerance. Some limitations compared to traditional brokers (filtering, standard protocols, JMS, ...)
- Website Activity Tracking: Rebuild the user activity tracking pipeline as a set of real-time publish-subscribe feeds. Activity is published to central topics, with one topic per activity type
- Metrics: Aggregation of statistics from distributed applications to produce centralized feeds of operational data
- Log Aggregation: Abstracts away the details of files and gives event data as a stream of messages. Offers good performance and stronger durability guarantees due to replication
- Stream Processing: Enables continuous, real-time applications built to react to, process, or transform streams
- Data Integration: Captures streams of events or data changes and feeds them to other data systems (see the Debezium project)
Kubernetes

Kubernetes is an open-source system for automating the deployment, operation, and scaling of containerized applications across multiple hosts.
Kubernetes

- Comes from Google's experience with its internal "Borg" project
- Abstracts the underlying hardware in terms of "nodes"
- On the nodes, a set of different "resources" can be deployed and handled
- Containerized applications are deployed, using and sharing these "resources"
Kubernetes

- Scheduling: Decide where to deploy containers
- Lifecycle and health: Keep containers running despite failures
- Discovery: Find other containers on the network
- Monitoring: Visibility into running containers
- Security: Control who can do what
- Scaling: Scale containers up and down
- Persistence: Survive data beyond the container lifecycle
- Aggregation: Compose apps from multiple containers
OpenShift

An open-source enterprise Kubernetes platform, based on Docker and Kubernetes, for building, distributing and running containers at scale.
Challenges: Accessing Kafka isn't so simple

A Kafka cluster requires:
- A stable broker identity and a stable network address
- A way for brokers to discover each other and communicate
- Durable state on brokers and storage recovery
- Brokers that are directly accessible from clients

It runs alongside a ZooKeeper ensemble, which requires:
- Each node to have the configuration of the others
- Nodes that are able to communicate with each other
How Kubernetes Can Help

Kubernetes provides:
- StatefulSets for stable identity and networking
  - Together with headless Services for internal discovery
- Services for accessing the cluster
- Secrets and ConfigMaps for handling configuration
- PersistentVolumes and PersistentVolumeClaims for durable storage

Kubernetes primitives help, but it is still hard to deploy and manage Kafka on Kubernetes...
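A hand-rolled deployment along these lines might combine a headless Service with a StatefulSet, roughly as below. This is a simplified sketch only: the names, image, port and storage sizes are illustrative, and a real Kafka broker needs far more configuration (which is exactly what the operator automates):

```yaml
# Simplified sketch: stable per-broker identity via a StatefulSet plus a
# headless Service. Names, image and sizes are illustrative assumptions.
apiVersion: v1
kind: Service
metadata:
  name: kafka-headless
spec:
  clusterIP: None            # headless: gives each broker a stable DNS name
  selector:
    app: kafka
  ports:
    - name: kafka
      port: 9092
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kafka
spec:
  serviceName: kafka-headless
  replicas: 3
  selector:
    matchLabels:
      app: kafka
  template:
    metadata:
      labels:
        app: kafka
    spec:
      containers:
        - name: kafka
          image: my-kafka-image:latest   # hypothetical image name
          ports:
            - containerPort: 9092
  volumeClaimTemplates:       # durable storage, one PVC per broker
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 100Gi
```

The StatefulSet gives each broker a stable name (kafka-0, kafka-1, ...) and a PersistentVolumeClaim that survives pod restarts; the headless Service makes those names resolvable for inter-broker discovery.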
Operator Framework

- An application used to create, configure and manage other, more complex applications
- Contains domain-specific knowledge
- An operator works based on input from Custom Resource Definitions (CRDs)
  - The user describes the desired state
  - The controller applies this state to the application
  - It watches both the *desired* state and the *actual* state and makes forward progress to reconcile them: observe, analyze, act
- See OperatorHub.io for available operators
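The observe/analyze/act reconciliation loop can be sketched as a toy model. This is not a real Kubernetes controller; here both states are plain dictionaries and the "act" step is a pair of callbacks:

```python
# Toy model of an operator's reconcile loop (observe, analyze, act).
# A real controller watches the Kubernetes API server; here the desired and
# actual states are plain dicts mapping resource names to specs.

def reconcile(desired, actual, create, delete):
    """Move `actual` toward `desired` by creating/updating and deleting."""
    for name, spec in desired.items():
        if actual.get(name) != spec:     # analyze: drift detected
            create(name, spec)           # act: create or update the resource
    for name in list(actual):
        if name not in desired:
            delete(name)                 # act: remove undesired resources

desired = {"my-topic": {"partitions": 3}}
actual = {"old-topic": {"partitions": 1}}

reconcile(
    desired,
    actual,
    create=lambda n, s: actual.__setitem__(n, s),
    delete=lambda n: actual.__delitem__(n),
)
print(actual)  # {'my-topic': {'partitions': 3}}
```

A real operator runs this loop continuously, so the actual state converges back to the desired state even after failures or manual changes.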
Strimzi - The Open-Source Apache Kafka Operator

- Open-source project licensed under Apache License 2.0
- Focuses on running Apache Kafka on Kubernetes and OpenShift:
  - Container images for Apache Kafka and Apache ZooKeeper
  - Operators for managing and configuring Kafka clusters, topics or users
- Provides a Kubernetes-native experience for running Kafka on Kubernetes and OpenShift
  - Kafka clusters, topics and users as Kubernetes custom resources
Red Hat AMQ Streams - Apache Kafka for the Enterprise

Part of the Red Hat AMQ suite.

- AMQ Streams on OCP
  - Running Apache Kafka on OpenShift Container Platform
  - Based on the Strimzi project
- AMQ Streams on RHEL
  - Running Apache Kafka on "bare metal"
Cluster Operator

- Responsible for deploying and managing clusters
  - Kafka, Kafka Connect, ZooKeeper
- Also deploys the other operators
  - Topic Operator, User Operator
- The only component the user has to install on their own
- Uses CRDs as blueprints for the clusters it deploys and manages
  - CRDs act as extensions to the Kubernetes API
  - Custom resources can be used similarly to native resources:
    oc get kafkas or kubectl get kafkas
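A Kafka custom resource that the Cluster Operator consumes might look roughly like this (a minimal sketch; the cluster name, sizes and listener settings are illustrative, and the exact apiVersion depends on your Strimzi release, so check the Strimzi documentation):

```yaml
# Minimal sketch of a Strimzi Kafka custom resource.
# apiVersion, name, replica counts and storage sizes are illustrative.
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3                  # broker count
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    storage:
      type: persistent-claim     # durable storage per broker
      size: 100Gi
  zookeeper:
    replicas: 3
    storage:
      type: persistent-claim
      size: 10Gi
  entityOperator:                # asks the Cluster Operator to also deploy
    topicOperator: {}            # the Topic Operator ...
    userOperator: {}             # ... and the User Operator
```

Applying this resource (`kubectl apply -f kafka.yaml`) is all the user does; the Cluster Operator turns it into StatefulSets, Services, ConfigMaps and Secrets.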
Cluster Operator

Installation
- Runs as a Deployment inside Kubernetes
- Configuration options are passed as environment variables
- Should always run as a single replica

Installation requirements
- Service account
- RBAC resources
- CRD definitions
Topic Operator

- Manages Kafka topics
  - Bi-directional synchronization and 3-way diff
- Using CRDs, users can just do:
  oc get kafkatopics or kubectl get kafkatopics

Installation
- One Topic Operator per Kafka cluster
- Users are expected to install the Topic Operator through the Cluster Operator
- Standalone installation is available and supported
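A KafkaTopic custom resource might look roughly like this (a minimal sketch; the names and settings are illustrative, and the apiVersion depends on your Strimzi release):

```yaml
# Minimal sketch of a Strimzi KafkaTopic custom resource.
# Topic name, cluster name and config values are illustrative.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster   # ties the topic to a Kafka cluster CR
spec:
  partitions: 3
  replicas: 3
  config:
    retention.ms: 604800000          # retain messages for 7 days
```

The `strimzi.io/cluster` label is how the per-cluster Topic Operator finds the topics it is responsible for; the bi-directional sync means topics created directly in Kafka also appear as KafkaTopic resources.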
User Operator

- Manages authentication and authorization
- Using CRDs, users can just do:
  oc get kafkausers or kubectl get kafkausers

Installation
- One User Operator per Kafka cluster
- Users are expected to install the User Operator through the Cluster Operator
- Standalone installation is available and supported
User Operator

Authentication
- Currently supports TLS client authentication and SASL SCRAM-SHA-512
- The KafkaUser custom resource requests TLS client authentication
- The User Operator issues a TLS certificate and stores it in a Secret

Authorization
- Currently supports Kafka's built-in SimpleAclAuthorizer
- The KafkaUser custom resource lists the desired ACL rights
- The User Operator updates them in ZooKeeper
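A KafkaUser custom resource combining TLS authentication with simple ACL authorization might look roughly like this (a minimal sketch; the user, cluster and topic names are illustrative, and the apiVersion depends on your Strimzi release):

```yaml
# Minimal sketch of a Strimzi KafkaUser custom resource.
# User, cluster and topic names are illustrative.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: tls                   # User Operator issues a client certificate
  authorization:
    type: simple                # Kafka's built-in ACL authorizer
    acls:
      - resource:
          type: topic
          name: my-topic
        operation: Read         # allow this user to consume from my-topic
      - resource:
          type: topic
          name: my-topic
        operation: Describe
```

After reconciliation, the client certificate and key for `my-user` land in a Secret of the same name, ready to be mounted into the consuming application.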
Features

Tolerations, memory and CPU resources, high availability, mirroring, affinity, authentication, authorization, storage, encryption, scale up / scale down, JVM configuration, logging, metrics, off-cluster access, healthchecks, Source2Image, configuration