Slide 1

Slide 1 text

@arafkarsh arafkarsh Architecting & Building Apps a tech presentorial Combination of presentation & tutorial ARAF KARSH HAMID Co-Founder / CTO MetaMagic Global Inc., NJ, USA @arafkarsh arafkarsh AI / ML Generative AI LLMs, RAG 6+ Years Microservices Blockchain 8 Years Cloud Computing 8 Years Network & Security 8 Years Distributed Computing 1 Microservices Architecture Series Kinesis Data Streams Kinesis Firehose Kinesis Data Analytics Apache Flink Part 3 of 15 To Build Cloud Native Apps Composable Enterprise Architecture

Slide 2

Slide 2 text

@arafkarsh arafkarsh 2 Source: https://arafkarsh.medium.com/embracing-cloud-native-a-roadmap-to-innovation-a6b06fe3a9fb Cloud-Native Architecture

Slide 3

Slide 3 text

@arafkarsh arafkarsh 3 Source: https://arafkarsh.medium.com/embracing-cloud-native-a-roadmap-to-innovation-a6b06fe3a9fb

Slide 4

Slide 4 text

@arafkarsh arafkarsh 4 Slides are color coded based on the topic colors. AWS Kinesis Video Streams Data Streams 1 AWS Kinesis Data Firehose Data Analytics 2 Apache Flink Streams Table / SQL 3 Kinesis Case Studies 4

Slide 5

Slide 5 text

@arafkarsh arafkarsh Agile Scrum (4-6 Weeks) Developer Journey Monolithic Domain Driven Design Event Sourcing and CQRS Waterfall Optional Design Patterns Continuous Integration (CI) 6/12 Months Enterprise Service Bus Relational Database [SQL] / NoSQL Development QA / QC Ops 5 Microservices Domain Driven Design Event Sourcing and CQRS Scrum / Kanban (1-5 Days) Mandatory Design Patterns Infrastructure Design Patterns CI DevOps Event Streaming / Replicated Logs SQL NoSQL CD Container Orchestrator Service Mesh

Slide 6

Slide 6 text

@arafkarsh arafkarsh Application Modernization – 3 Transformations 6 Monolithic SOA Microservice Physical Server Virtual Machine Cloud Waterfall Agile DevOps Source: IBM: Application Modernization > https://www.youtube.com/watch?v=RJ3UQSxwGFY Architecture Infrastructure Delivery

Slide 7

Slide 7 text

@arafkarsh arafkarsh Application Modernization – 3 Transformations 7 Monolithic SOA Microservice Physical Server Virtual Machine Cloud Waterfall Agile DevOps Source: IBM: Application Modernization > https://www.youtube.com/watch?v=RJ3UQSxwGFY Architecture Infrastructure Delivery Modernization 1 2 3

Slide 8

Slide 8 text

@arafkarsh arafkarsh Microservices Principles…. 8 Components via Services Organized around Business Capabilities Products NOT Projects Smart Endpoints & Dumb Pipes Decentralized Governance & Data Management Infrastructure Automation Design for Failure Evolutionary Design

Slide 9

Slide 9 text

@arafkarsh arafkarsh AWS Kinesis • Data Streams • Video Streams 9 1 Example Source: https://github.com/MetaArivu/Kinesis-Quickstart

Slide 10

Slide 10 text

@arafkarsh arafkarsh AWS Kinesis - Purpose 10 1. Collect 2. Process 3. Analyze Realtime 4. Streaming Data Ingest Realtime Data 1. Video 2. Audio 3. Application Logs 4. Website Click Streams IoT Telemetry Data 1. Analytics 2. Machine Learning

Slide 11

Slide 11 text

@arafkarsh arafkarsh AWS Kinesis 11 Kinesis Video Streams helps you to securely stream video from systems to AWS for processing such as Analytics, Machine Learning and others. Kinesis Data Streams is a highly Scalable, Durable, & Realtime data streaming service that can capture Gigabytes of data per second from different data sources. Kinesis Data Firehose is used to Extract, Transform, and Load (ETL) data streams into AWS stores like S3, Redshift, OpenSearch etc. for near Realtime data analytics. Kinesis Data Analytics is used to process the real-time streams in SQL, Java, or Python.

Slide 12

Slide 12 text

@arafkarsh arafkarsh Streaming Data 12 • Continuously generated Data to be processed sequentially or incrementally • Data is sent record by record, from thousands of Data Sources, or over sliding time windows Use Cases Gaming Stock Market Real Estate Transport Applications

Slide 13

Slide 13 text

@arafkarsh arafkarsh Kinesis Video Streams 13 Devices Processing • AWS Rekognition • AWS SageMaker • TensorFlow • HLS Playback • Custom Video Processing • Automatically scales the infrastructure needed for streaming video data from devices • Stream video from connected devices to AWS for Analytics, Machine Learning, Playback etc. • Stores, Encrypts and Indexes video data and provides access to the data using APIs HLS – HTTP Live Streaming INPUT Kinesis Video Stream

Slide 14

Slide 14 text

@arafkarsh arafkarsh Kinesis Data Streams 14 Applications Processing • Kinesis Data Analytics • Spark • AWS EC2 • AWS Lambda • Kinesis Data Streams is a Highly Scalable and Durable Real-time streaming service • Stream Data from connected devices to AWS for Analytics, Machine Learning, etc. INPUT Kinesis Data Stream

Slide 15

Slide 15 text

@arafkarsh arafkarsh Kinesis Data Streams: Example 15 Applications • Raw Events are coming from Cart Checkout • Using the Lambda, the Raw Event is Enriched and sent to another Stream for further processing Event Producer Kinesis Data Stream Raw Events Kinesis Data Stream Enriched Events Enrich the Checkout Event IN OUT Example Source: https://github.com/MetaArivu/Kinesis-Quickstart

Slide 16

Slide 16 text

@arafkarsh arafkarsh Kinesis Data Firehose 16 Store Data • AWS S3 • AWS Redshift • AWS Elasticsearch • Splunk • Kinesis Data Firehose is used to store the streaming data into Data Stores, Lakes etc. • Firehose is used to Capture, Transform and Load Data into S3, Redshift etc. Kinesis Data Stream Kinesis Data Firehose Data Transformation using Lambda

Slide 17

Slide 17 text

@arafkarsh arafkarsh Kinesis Data Analytics 17 • Kinesis Data Analytics is used to analyze the streaming Data • Reduces the complexity in building and deploying Analytics Applications • Provides built-in Functions to Filter, Aggregate and Transform Streaming Data • Serverless Architecture • Under the hood it's Apache Flink (v1.13) INPUT Kinesis Data Stream Kinesis Data Analytics OUTPUT Kinesis Data Stream

Slide 18

Slide 18 text

@arafkarsh arafkarsh AWS Kinesis – Summary 18 Kinesis Video Streams helps you to securely stream video from systems to AWS for processing such as Analytics, Machine Learning and others. Kinesis Data Streams is a highly Scalable, Durable, & Realtime data streaming service that can capture Gigabytes of data per second from different data sources. Kinesis Data Firehose is used to Extract, Transform, and Load (ETL) data streams into AWS stores like S3, Redshift, OpenSearch etc. for near Realtime data analytics. Kinesis Data Analytics is used to process the real-time streams in SQL, Java, or Python.

Slide 19

Slide 19 text

@arafkarsh arafkarsh Kinesis Data Streams Producers Consumers 19 Example Source: https://github.com/MetaArivu/Kinesis-Quickstart

Slide 20

Slide 20 text

@arafkarsh arafkarsh How it works 20 Source: https://aws.amazon.com/kinesis/data-streams/

Slide 21

Slide 21 text

@arafkarsh arafkarsh Architecture 21 Source: https://docs.aws.amazon.com/streams/latest/dev/key-concepts.html

Slide 22

Slide 22 text

@arafkarsh arafkarsh Kinesis Data Streams 22 Data Record: the atomic unit of data stored in a Kinesis Data Stream. Data Stream: a collection of Data Records streamed and stored in multiple shards. [Diagram: a Data Stream composed of Shards 1..n, each holding Data Records] Source: https://docs.aws.amazon.com/streams/latest/dev/key-concepts.html The Producer puts the Data Records into the Shards and the Consumer retrieves the data from the Shards.

Slide 23

Slide 23 text

@arafkarsh arafkarsh Kinesis Data Streams: Shards 23 • A shard is a uniquely identified sequence of data records in a stream. • A stream is composed of one or more shards, each of which provides a fixed unit of capacity. • Each shard can support up to 5 transactions per second for reads, up to a maximum total data read rate of 2 MB per second and up to 1,000 records per second for writes, up to a maximum total data write rate of 1 MB per second (including partition keys). • The data capacity of your stream is a function of the number of shards that you specify for the stream. [Diagram: a Data Stream composed of Shards 1..n of Data Records] Source: https://docs.aws.amazon.com/streams/latest/dev/key-concepts.html • The total capacity of the stream is the sum of the capacities of its shards.

Slide 24

Slide 24 text

@arafkarsh arafkarsh Kinesis Data Streams: Partition Keys 24 Source: https://docs.aws.amazon.com/streams/latest/dev/key-concepts.html Partition Key Data BLOB • A partition key is used to group data by shard within a stream. • Kinesis Data Streams segregates the data records belonging to a stream into multiple shards. • It uses the partition key that is associated with each data record to determine which shard a given data record belongs to. • Partition keys are Unicode strings, with a maximum length limit of 256 characters for each key. • An MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards using the hash key ranges of the shards. • When an application puts data into a stream, it must specify a partition key.

Slide 25

Slide 25 text

@arafkarsh arafkarsh Kinesis Data Streams: Sequence Number 25 • Each data record has a sequence number that is unique per partition-key within its shard. • Kinesis Data Streams assigns the sequence number after you write to the stream with client.putRecords or client.putRecord. • Sequence numbers for the same partition key generally increase over time. • The longer the time period between write requests, the larger the sequence numbers become. Source: https://docs.aws.amazon.com/streams/latest/dev/key-concepts.html
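As a hedged illustration (not from the slides or the linked quickstart repo), a minimal AWS SDK for Java v2 sketch of putting a record with a partition key and reading back the shard ID and sequence number that Kinesis assigns; the stream name order-events and the region are assumptions:

```java
import software.amazon.awssdk.core.SdkBytes;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.kinesis.KinesisClient;
import software.amazon.awssdk.services.kinesis.model.PutRecordRequest;
import software.amazon.awssdk.services.kinesis.model.PutRecordResponse;

public class PutRecordExample {
    public static void main(String[] args) {
        // Assumed stream name and region -- replace with your own.
        try (KinesisClient kinesis = KinesisClient.builder()
                .region(Region.US_EAST_1)
                .build()) {

            PutRecordRequest request = PutRecordRequest.builder()
                    .streamName("order-events")
                    .partitionKey("customer-42")   // routes the record to a shard via the MD5 hash
                    .data(SdkBytes.fromUtf8String("{\"event\":\"CHECKOUT\",\"amount\":120.50}"))
                    .build();

            PutRecordResponse response = kinesis.putRecord(request);

            // Kinesis assigns the sequence number only after the write succeeds.
            System.out.println("Shard: " + response.shardId());
            System.out.println("Sequence number: " + response.sequenceNumber());
        }
    }
}
```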

Slide 26

Slide 26 text

@arafkarsh arafkarsh Kinesis Data Stream Lambda Config 26 Example Source: https://github.com/MetaArivu/Kinesis-Quickstart You can Control the Stream • Batch Size • Batch Window in Seconds • Max Retry

Slide 27

Slide 27 text

@arafkarsh arafkarsh Kinesis Data Stream Lambda 27 Example Source: https://github.com/MetaArivu/Kinesis-Quickstart

Slide 28

Slide 28 text

@arafkarsh arafkarsh Multi Consumer Fan out 28 Source: https://docs.aws.amazon.com/streams/latest/dev/enhanced-consumers.html

Slide 29

Slide 29 text

@arafkarsh arafkarsh Data Stream – On Demand Scaling On-Demand • Automatically provisions the infrastructure • Max 200 MiB per Second OR • Max 200K Records per Second 29

Slide 30

Slide 30 text

@arafkarsh arafkarsh Data Stream – Retention 1 Day 30

Slide 31

Slide 31 text

@arafkarsh arafkarsh Data Stream – Retention 365 Days Retention Days • Min 1 Day • Max 365 Days 31

Slide 32

Slide 32 text

@arafkarsh arafkarsh Data Stream - Monitoring 32

Slide 33

Slide 33 text

@arafkarsh arafkarsh Security 33 • Data is automatically encrypted before it's stored in the Shard. • Encryption is done using an AWS KMS Customer Master Key Server-Side Encryption

Slide 34

Slide 34 text

@arafkarsh arafkarsh Kinesis Video Streams • Realtime using WebRTC • Batch Mode 34

Slide 35

Slide 35 text

@arafkarsh arafkarsh Kinesis Video Streams 35 Video Producer Library 1. Java 2. Android 3. C++ 4. C

Slide 36

Slide 36 text

@arafkarsh arafkarsh Kinesis Video Stream 36 Source: https://docs.aws.amazon.com/kinesisvideostreams/latest/dg/how-it-works.html Producer can be any video-generating device, such as a • security camera, • a body-worn camera, • a smartphone camera, or a • Dashboard camera. • A producer can also send non-video data, such as audio feeds, images, or RADAR data. A single producer can generate one or more video streams. For example, a video camera can push video data to one Kinesis video stream and audio data to another. Kinesis Video Streams Producer libraries • Install and configure on your devices. • Securely connect and reliably stream video in different ways, • including in real time, after buffering it for a few seconds, • or as after-the-fact media uploads.

Slide 37

Slide 37 text

@arafkarsh arafkarsh Kinesis Video Stream End Points 37 Examples: Sending Data to Kinesis Video Streams • Example: Kinesis Video Streams Producer SDK GStreamer Plugin: Shows how to build the Kinesis Video Streams Producer SDK to use as a GStreamer destination. • Run the GStreamer Element in a Docker Container: Shows how to use a pre-built Docker image for sending RTSP video from an IP camera to Kinesis Video Streams. • Example: Streaming from an RTSP Source: Shows how to build your own Docker image and send RTSP video from an IP camera to Kinesis Video Streams. • Example: Sending Data to Kinesis Video Streams Using the PutMedia API: Shows how to use the Java Producer Library to send data that is already in the Matroska (MKV) container format to Kinesis Video Streams using the PutMedia API. GStreamer is a popular media framework used by a multitude of cameras and video sources to create custom media pipelines by combining modular plugins. • RTSP Camera on Ubuntu • USB Camera on Ubuntu • Camera on Raspberry Pi Source: https://docs.aws.amazon.com/kinesisvideostreams/latest/dg/examples-gstreamer-plugin.html

Slide 38

Slide 38 text

@arafkarsh arafkarsh Kinesis Video Stream 38 Kinesis video stream • Transport live video data, optionally store it • Data available for consumption both in real time and on a batch or ad hoc basis. • A Kinesis video stream has only one producer publishing data into it. The stream can carry • audio, • video, and • similar time-encoded data streams, such as • depth sensing feeds, • RADAR feeds, and more. Source: https://docs.aws.amazon.com/kinesisvideostreams/latest/dg/how-it-works.html Kinesis Video Stream Consumer (App) • Gets data, such as fragments and frames, from a Kinesis video stream • To view, process, or analyse it. Kinesis Video Stream Parser Library • To reliably get media from Kinesis video streams in a low-latency manner. • It parses the frame boundaries in the media so that applications can focus on processing and analysing the frames themselves.

Slide 39

Slide 39 text

@arafkarsh arafkarsh Kinesis Video Stream Parser Library 39 • StreamingMkvReader: This class reads specified MKV elements from a video stream. • FragmentMetadataVisitor: This class retrieves metadata for fragments (media elements) and tracks (individual data streams containing media information, such as audio or subtitles). • OutputSegmentMerger: This class merges consecutive fragments or chunks in a video stream. • KinesisVideoExample: This is a sample application that shows how to use the Kinesis Video Stream Parser Library. The library also includes tests that show how the tools are used.

Slide 40

Slide 40 text

@arafkarsh arafkarsh Kinesis Data Firehose 40 2 Example Source: https://github.com/MetaArivu/Kinesis-Quickstart

Slide 41

Slide 41 text

@arafkarsh arafkarsh Kinesis Data Firehose 41 Store Data • AWS S3 • AWS Redshift • AWS Elasticsearch • Splunk • Kinesis Data Firehose is used to Store the Streaming data into Data Stores, Lakes etc. • Firehose is used to Capture, Transform & Load Data into S3, Redshift etc. Kinesis Data Stream Kinesis Data Firehose Data Transformation using Lambda

Slide 42

Slide 42 text

@arafkarsh arafkarsh Kinesis Data Firehose – Transformation Lambda 42 recordId • The record ID is passed from Kinesis Data Firehose to Lambda during the invocation. • The transformed record must contain the same record ID. • Any mismatch between the ID of the original record and the ID of the transformed record is treated as a data transformation failure. result The status of the data transformation of the record. The possible values are: • Ok (the record was transformed successfully), • Dropped (the record was dropped intentionally by your processing logic), and • ProcessingFailed (the record could not be transformed). If a record has a status of Ok or Dropped, Kinesis Data Firehose considers it successfully processed. Otherwise, Kinesis Data Firehose considers it unsuccessfully processed. data The transformed data payload, after base64-encoding. Source: https://docs.aws.amazon.com/firehose/latest/dev/data-transformation.html
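To make the recordId / result / data contract concrete, here is a minimal, hedged Java sketch of a transformation handler; the FirehoseEvent, FirehoseRecord and TransformedRecord classes are simplified stand-ins for the JSON payload Kinesis Data Firehose exchanges with Lambda, not classes from an AWS library:

```java
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.Base64;
import java.util.List;

// Simplified stand-ins for the Firehose <-> Lambda JSON contract (assumption, not AWS SDK classes).
class FirehoseRecord { String recordId; String data; }            // data is base64-encoded
class FirehoseEvent  { List<FirehoseRecord> records; }
class TransformedRecord { String recordId; String result; String data; }

public class FirehoseTransformer {

    public List<TransformedRecord> handle(FirehoseEvent event) {
        List<TransformedRecord> out = new ArrayList<>();
        for (FirehoseRecord record : event.records) {
            TransformedRecord t = new TransformedRecord();
            t.recordId = record.recordId;                          // must echo the incoming record ID
            try {
                String json = new String(Base64.getDecoder().decode(record.data), StandardCharsets.UTF_8);
                String enriched = json.trim() + "\n";              // trivial "transformation": append a newline
                t.data = Base64.getEncoder()
                               .encodeToString(enriched.getBytes(StandardCharsets.UTF_8));
                t.result = "Ok";                                   // Ok | Dropped | ProcessingFailed
            } catch (IllegalArgumentException e) {
                t.data = record.data;
                t.result = "ProcessingFailed";                     // counted as unsuccessfully processed
            }
            out.add(t);
        }
        return out;
    }
}
```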

Slide 43

Slide 43 text

@arafkarsh arafkarsh Kinesis Firehose Lambda Config 43 Example Source: https://github.com/MetaArivu/Kinesis-Quickstart You can Control the Stream • Batch Size • Batch Window in Seconds • Max Retry

Slide 44

Slide 44 text

@arafkarsh arafkarsh Kinesis Firehose Lambda 44 Example Source: https://github.com/MetaArivu/Kinesis-Quickstart

Slide 45

Slide 45 text

@arafkarsh arafkarsh Kinesis Data Firehose – S3 45

Slide 46

Slide 46 text

@arafkarsh arafkarsh Kinesis Data Firehose – S3 46

Slide 47

Slide 47 text

@arafkarsh arafkarsh Kinesis Data Firehose – Direct Input 47

Slide 48

Slide 48 text

@arafkarsh arafkarsh Kinesis Data Analytics • K 48 Example Source: https://github.com/MetaArivu/Kinesis-Quickstart

Slide 49

Slide 49 text

@arafkarsh arafkarsh Kinesis Data Analytics 49 • Kinesis Data Analytics is used to Analyze the Streaming Data • Reduces the complexity in building and deploying Analytics Applications • Provides built-in Functions to Filter, Aggregate & Transform Streaming Data • Serverless Architecture • Under the hood it's Apache Flink (v1.13) – December 2021 INPUT Kinesis Data Stream Kinesis Data Analytics OUTPUT Kinesis Data Stream

Slide 50

Slide 50 text

@arafkarsh arafkarsh Kinesis Data Analytics – Architecture (Flink) 50 AWS Cloud Kinesis Data Analytics Elastic Kubernetes Service Job Manager Task Manager Task Manager Task Manager S3 Bucket Auto Scaling Zookeeper Cloud Watch Cloud Watch Logs Flink Web UI

Slide 51

Slide 51 text

@arafkarsh arafkarsh Kinesis Data Analytics 51

Slide 52

Slide 52 text

@arafkarsh arafkarsh Kinesis Data Analytics 52

Slide 53

Slide 53 text

@arafkarsh arafkarsh Apache Flink Open-Source Stream Processing Framework 53 3

Slide 54

Slide 54 text

@arafkarsh arafkarsh Apache Flink 54 Ease of Programming Stateful Processing High Performance Strong Data Integrity Flexible APIs for Programming Low Latency & Horizontally Scalable Stores Application States Exactly Once Processing & Consistent State Is an Open-Source Stream Processing Framework

Slide 55

Slide 55 text

@arafkarsh arafkarsh What is Apache Flink 55 Stateful Computations over Data Streams Batch Processing Process Static & historic Data Data Stream Processing Realtime Results from Data Streams Event Driven Applications Data Driven Actions and Services Instead of Spark + Hadoop

Slide 56

Slide 56 text

@arafkarsh arafkarsh Use Case: Periodic ETL vs Streaming CTL 56 Traditional Periodic ETL • External Tool Periodically triggers ETL Batch Job Batch Processing Process Static & historic Data Data Stream Processing Realtime Results from Data Streams Continuous Streaming Data Pipeline • Ingestion with Low Latency • No Artificial Boundaries Streaming App Ingest Append Real Time Events Event Logs Batch Process Module Read Write Transactional Data Extract, Transform, Load Capture, Transform, Load State Source: GoTo: Intro to Stateful Stream Processing – Robert Metzger

Slide 57

Slide 57 text

@arafkarsh arafkarsh Use Case: Data Analytics 57 • Great for Ad-Hoc Queries • Queries changes faster than data Batch Analytics Stream Analytics Ingest K-V Data Store Real Time Events Batch Analytics Read Write Recorded Events • High Performance Low Latency Result • Data Changes faster than Queries Analytics App State State Update Source: GoTo: Intro to Stateful Stream Processing – Robert Metzger

Slide 58

Slide 58 text

@arafkarsh arafkarsh Use Case: Event Driven Application 58 • Compute & Data Tier Architecture • React to Process Events • State is stored in (Remote) Database Traditional Application Design Event Driven Application • High Performance Low Latency Result • Data Changes faster than Queries Application Read Write Events Trigger Action Ingest Real Time Events Application State Append Periodically write asynchronous checkpoints in Remote Database Event Logs Event Logs Trigger Action Source: GoTo: Intro to Stateful Stream Processing – Robert Metzger

Slide 59

Slide 59 text

@arafkarsh arafkarsh Apache Flink Use Case Features 59 • Business, Operational, Technical App Metrics • User Experience Metrics Real-time Analytics • Transform, Filter, Aggregate Streaming Data • IoT and Application Log Analysis Streaming ETL Applications • Trigger Conditions and External Notifications • Detecting Patterns / Anomaly Stateful Event Processing

Slide 60

Slide 60 text

@arafkarsh arafkarsh Apache Flink Architecture • Architecture • Anatomy of the Flink Cluster • Tasks, Slots & Operator Chains • Anatomy of a Flink Program • Flink API & Operators 60

Slide 61

Slide 61 text

@arafkarsh arafkarsh Apache Flink Architecture 61

Slide 62

Slide 62 text

@arafkarsh arafkarsh Deployment Model 62 Source: https://nightlies.apache.org/flink/flink-docs-release-1.13/docs/deployment/overview/ The Job Manager distributes the work onto the Task Managers, where the actual operators such as 1. sources, 2. transformations and 3. sinks are running. Job Manager is the name of the central work coordination component of Flink. Task Managers are the services actually performing the work of a Flink job.

Slide 63

Slide 63 text

@arafkarsh arafkarsh Anatomy of the Flink Cluster 63 Source: https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/concepts/flink-architecture/ Job Manager: Resource Manager It is responsible for resource de-/allocation and provisioning in a Flink cluster — it manages task slots, which are the unit of resource scheduling in a Flink cluster. Dispatcher It provides a REST interface to submit Flink applications for execution and starts a new Job Master for each submitted job. Job Master It is responsible for managing the execution of a single JobGraph. Multiple jobs can run simultaneously in a Flink cluster, each having its own Job Master.

Slide 64

Slide 64 text

@arafkarsh arafkarsh Job Manager HA 64 Source: https://nightlies.apache.org/flink/flink-docs-release-1.13/docs/deployment/ha/overview/ Flink ships with two high availability service implementations: • ZooKeeper: ZooKeeper HA services can be used with every Flink cluster deployment. They require a running ZooKeeper quorum. • Kubernetes: Kubernetes HA services only work when running on Kubernetes. Flink’s high availability services encapsulate the required services to make everything work: • Leader election: Selecting a single leader out of a pool of n candidates • Service discovery: Retrieving the address of the current leader • State persistence: Persisting state which is required for the successor to resume the job execution (Job Graphs, user code jars, completed checkpoints)

Slide 65

Slide 65 text

@arafkarsh arafkarsh Deployment Modes 65 Source: https://nightlies.apache.org/flink/flink-docs-release-1.13/docs/deployment/overview/

Slide 66

Slide 66 text

@arafkarsh arafkarsh Task & Operator Chains 66 Source: https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/concepts/flink-architecture/ • For distributed execution, Flink chains operator subtasks together into tasks • Each task is executed by one thread. • Chaining operators together into tasks is a useful optimization: • it reduces the overhead of thread-to-thread handover and buffering, • and increases overall throughput while decreasing latency. T1 T2 T3 T4 T5

Slide 67

Slide 67 text

@arafkarsh arafkarsh Task Slots 67 Source: https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/concepts/flink-architecture/ • Each worker (Task Manager) is a JVM process and may execute one or more subtasks in separate threads. • To control how many tasks a Task Manager accepts, it has so-called task slots (at least one). • Memory is divided equally across the slots. • No CPU isolation across task slots. • Having multiple slots means more subtasks share the same JVM. • Tasks in the same JVM share TCP connections (via multiplexing) and heartbeat messages. • They may also share data sets and data structures, thus reducing the per-task overhead. • Flink allows subtasks to share slots even if they are subtasks of different tasks, so long as they are from the same job.

Slide 68

Slide 68 text

@arafkarsh arafkarsh Anatomy of a Flink Program 68 1. Obtain an execution environment, 2. Load/create the initial data, 3. Specify transformations on this data, 4. Specify where to put the results of your computations, 5. Trigger the program execution. Execution is triggered either on your local machine or by submitting the program for execution on a cluster. Source Transform Transform Sink 1 2 3 5 4 Each program consists of the same basic parts: Source: https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/dev/datastream/overview/#anatomy-of-a-flink-program
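A minimal, hedged word-count sketch mapping the five parts onto the Flink DataStream API (Flink 1.14-era classes); the socket host and port are assumptions:

```java
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.util.Collector;

public class AnatomyExample {
    public static void main(String[] args) throws Exception {
        // 1. Obtain an execution environment
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // 2. Load / create the initial data (assumed: text lines from a local socket)
        DataStream<String> lines = env.socketTextStream("localhost", 9999);

        // 3. Specify transformations on this data
        DataStream<Tuple2<String, Integer>> counts = lines
                .flatMap((String line, Collector<Tuple2<String, Integer>> out) -> {
                    for (String word : line.toLowerCase().split("\\W+")) {
                        if (!word.isEmpty()) {
                            out.collect(Tuple2.of(word, 1));
                        }
                    }
                })
                .returns(Types.TUPLE(Types.STRING, Types.INT))   // needed because lambdas lose generic type info
                .keyBy(tuple -> tuple.f0)
                .sum(1);

        // 4. Specify where to put the results (sink)
        counts.print();

        // 5. Trigger the program execution
        env.execute("Anatomy of a Flink Program");
    }
}
```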

Slide 69

Slide 69 text

@arafkarsh arafkarsh External Components 69 Feature Description Implementation 1 High Availability Service Provider Flink's Job Manager can be run in high availability mode which allows Flink to recover from Job Manager faults. In order to failover faster, multiple standby Job Managers can be started to act as backups. • Zookeeper • Kubernetes HA 2 File Storage and Persistency For checkpointing (recovery mechanism for streaming jobs) Flink relies on external file storage systems See FileSystems page. 3 Resource Provider Flink can be deployed through different Resource Provider Frameworks, such as Kubernetes, YARN or Mesos. • Kubernetes • YARN • Mesos 4 Metrics Storage Flink components report internal metrics and Flink jobs can report additional, job specific metrics as well. See Metrics Reporter page. 5 Application-level data sources and sinks While application-level data sources and sinks are not technically part of the deployment of Flink cluster components, they should be considered when planning a new Flink production deployment. Colocating frequently used data with Flink can have significant performance benefits For example: • Apache Kafka • Amazon S3 • Amazon Kinesis • Elastic Search See Connectors page. Source: https://nightlies.apache.org/flink/flink-docs-release-1.13/docs/deployment/overview/

Slide 70

Slide 70 text

@arafkarsh arafkarsh Flink Scale 70

Slide 71

Slide 71 text

@arafkarsh arafkarsh Flink API 71 Source: https://nightlies.apache.org/flink/flink-docs-release-1.13/docs/concepts/overview/

Slide 72

Slide 72 text

@arafkarsh arafkarsh Apache Flink DataStream API • Data Source • Operators • Data Sink • Generating Watermarks 72

Slide 73

Slide 73 text

@arafkarsh arafkarsh DataStream 73 • A DataStream is similar to a regular Java Collection in terms of usage but is quite different in some key ways. • They are immutable, meaning that once they are created you cannot add or remove elements. • You also cannot simply inspect the elements inside; you can only work on them using the DataStream API operations, which are also called transformations. Source: https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/dev/datastream/overview/ Reading from Socket

Slide 74

Slide 74 text

@arafkarsh arafkarsh Data Sources 74 Source: https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/dev/datastream/overview/ File-based: • readTextFile(path) - Reads text files, i.e. files that respect the TextInputFormat specification, line-by-line and returns them as Strings. • readFile(fileInputFormat, path) - Reads (once) files as dictated by the specified file input format. • readFile(fileInputFormat, path, watchType, interval, pathFilter, typeInfo) - This is the method called internally by the two previous ones. It reads files in the path based on the given fileInputFormat. Depending on the provided watchType, this source may periodically monitor (every interval ms) the path for new data (FileProcessingMode.PROCESS_CONTINUOUSLY), or process once the data currently in the path and exit (FileProcessingMode.PROCESS_ONCE). Using the pathFilter, the user can further exclude files from being processed.

Slide 75

Slide 75 text

@arafkarsh arafkarsh Data Sources 75 Socket-based: • socketTextStream - Reads from a socket. Elements can be separated by a delimiter. Collection-based: • fromCollection(Collection) - Creates a data stream from the Java java.util.Collection. All elements in the collection must be of the same type. • fromCollection(Iterator, Class) - Creates a data stream from an iterator. The class specifies the data type of the elements returned by the iterator. • fromElements(T ...) - Creates a data stream from the given sequence of objects. All objects must be of the same type. • fromParallelCollection(SplittableIterator, Class) - Creates a data stream from an iterator, in parallel. The class specifies the data type of the elements returned by the iterator. • generateSequence(from, to) - Generates the sequence of numbers in the given interval, in parallel. Custom: • addSource - Attach a new source function. For example, to read from Apache Kafka you can use addSource(new FlinkKafkaConsumer<>(...)). See connectors for more details. Source: https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/dev/datastream/overview/

Slide 76

Slide 76 text

@arafkarsh arafkarsh Data Sources: Custom Connectors 76 1. Apache Kafka (source/sink) 2. Apache Cassandra (sink) 3. Amazon Kinesis Streams (source/sink) 4. Elasticsearch (sink) 5. FileSystem (Hadoop included) - Streaming only sink (sink) 6. FileSystem (Hadoop included) - Streaming and Batch sink (sink) 7. FileSystem (Hadoop included) - Batch source (https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/connectors/datastream/formats/) (source) 8. RabbitMQ (source/sink) 9. Google PubSub (source/sink) 10. Hybrid Source (source) 11. Apache NiFi (source/sink) 12. Apache Pulsar (source) 13. Twitter Streaming API (source) 14. JDBC (sink) Source: https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/connectors/datastream/overview/ Bundled Connectors 1. Apache ActiveMQ (source/sink) 2. Apache Flume (sink) 3. Redis (sink) 4. Akka (sink) 5. Netty (source) Apache Bahir
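Because this deck pairs Flink with Kinesis, a hedged sketch of attaching the bundled Kinesis connector as a custom source via addSource (the flink-connector-kinesis dependency is assumed; stream name, region and starting position are placeholders):

```java
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer;
import org.apache.flink.streaming.connectors.kinesis.config.ConsumerConfigConstants;

public class KinesisSourceExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Connector configuration -- region and starting position are assumptions.
        Properties consumerConfig = new Properties();
        consumerConfig.setProperty(ConsumerConfigConstants.AWS_REGION, "us-east-1");
        consumerConfig.setProperty(ConsumerConfigConstants.STREAM_INITIAL_POSITION, "LATEST");

        // addSource() attaches the custom source function to the job graph.
        DataStream<String> events = env.addSource(
                new FlinkKinesisConsumer<>("order-events", new SimpleStringSchema(), consumerConfig));

        events.print();
        env.execute("Read from Kinesis Data Streams");
    }
}
```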

Slide 77

Slide 77 text

@arafkarsh arafkarsh Data Sink 77 Source: https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/dev/datastream/overview/ • writeAsText() / TextOutputFormat - Writes elements line-wise as Strings. The Strings are obtained by calling the toString() method of each element. • writeAsCsv(...) / CsvOutputFormat - Writes tuples as comma-separated value files. Row and field delimiters are configurable. The value for each field comes from the toString() method of the objects. • print() / printToErr() - Prints the toString() value of each element on the standard out / standard error stream. Optionally, a prefix (msg) can be provided which is prepended to the output. This can help to distinguish between different calls to print. If the parallelism is greater than 1, the output will also be prepended with the identifier of the task which produced the output. • writeUsingOutputFormat() / FileOutputFormat - Method and base class for custom file outputs. Supports custom object-to-bytes conversion. • writeToSocket - Writes elements to a socket according to a SerializationSchema • addSink - Invokes a custom sink function. Flink comes bundled with connectors to other systems (such as Apache Kafka) that are implemented as sink functions. Data sinks consume DataStreams and forward them to files, sockets, external systems, or print them

Slide 78

Slide 78 text

@arafkarsh arafkarsh Execution Mode – Batch / Streaming 78 Source: https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/dev/datastream/execution_mode/ The execution mode can be configured via the execution.runtime-mode setting. There are three possible values: 1. STREAMING: The classic DataStream execution mode (default) 2. BATCH: Batch-style execution on the DataStream API 3. AUTOMATIC: Let the system decide based on the boundedness of the sources • The BATCH execution mode can only be used for Jobs/Flink Programs that are bounded. • Boundedness is a property of a data source that tells us whether all the input coming from that source is known before execution or whether new data will show up, potentially indefinitely. • A job, in turn, is bounded if all its sources are bounded, and unbounded otherwise. • STREAMING execution mode, on the other hand, can be used for both bounded and unbounded jobs. • As a rule of thumb, you should be using BATCH execution mode when your program is bounded because this will be more efficient.
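A short, hedged sketch of setting the runtime mode programmatically (the docs also allow setting execution.runtime-mode on the command line); the bounded file source and its path are assumptions:

```java
import org.apache.flink.api.common.RuntimeExecutionMode;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ExecutionModeExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // BATCH is only valid because the file source below is bounded;
        // AUTOMATIC would let Flink decide from the boundedness of the sources.
        env.setRuntimeMode(RuntimeExecutionMode.BATCH);

        DataStream<String> lines = env.readTextFile("/tmp/input.txt");  // assumed path
        lines.print();

        env.execute("Bounded job in BATCH mode");
    }
}
```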

Slide 79

Slide 79 text

@arafkarsh arafkarsh Stream Processing: Operators 79 Map Takes one element and produces one element. A map function that doubles the values of the input stream: Source: https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/dev/datastream/operators/overview/ Flat Map Takes one element and produces zero, one, or more elements. A flatmap function that splits sentences to words: Filter Evaluates a Boolean function for each element and retains those for which the function returns true. Key By Logically partitions a stream into disjoint partitions. All records with the same key are assigned to the same partition.
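A hedged sketch chaining these basic operators (plus a keyed rolling reduce from the next slide) over a synthetic stream of (user, amount) tuples:

```java
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class BasicOperatorsExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Synthetic input: (user, purchase amount)
        DataStream<Tuple2<String, Integer>> purchases = env.fromElements(
                Tuple2.of("alice", 20), Tuple2.of("bob", 5),
                Tuple2.of("alice", 35), Tuple2.of("bob", 50));

        DataStream<Tuple2<String, Integer>> totals = purchases
                .filter(p -> p.f1 >= 10)                            // Filter: keep purchases of 10 or more
                .map(p -> Tuple2.of(p.f0, p.f1 * 2))                // Map: one element in, one element out
                .returns(Types.TUPLE(Types.STRING, Types.INT))      // restore generic type info lost by the lambda
                .keyBy(p -> p.f0)                                   // KeyBy: partition the stream by user
                .reduce((a, b) -> Tuple2.of(a.f0, a.f1 + b.f1));    // Reduce: rolling sum per key

        totals.print();
        env.execute("Basic DataStream operators");
    }
}
```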

Slide 80

Slide 80 text

@arafkarsh arafkarsh Stream Processing: Operators 80 Source: https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/dev/datastream/operators/overview/ Reduce A “rolling” reduce on a keyed data stream. Combines the current element with the last reduced value and emits the new value. Union Union of two or more data streams creating a new stream containing all the elements from all the streams. Join Join two data streams on a given key and a common window. Join Interval Join two elements e1 and e2 of two keyed streams with a common key over a given time interval, so that e1.timestamp + lowerBound <= e2.timestamp <= e1.timestamp + upperBound

Slide 81

Slide 81 text

@arafkarsh arafkarsh Stream Processing: Operators 81 Window All Windows can be defined on regular Data Streams. Windows group all the stream events according to some characteristic. Window Apply Applies a general function to the window as a whole. Below is a function that manually sums the elements of a window. Window Reduce Applies a functional reduce function to the window and returns the reduced value. Source: https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/dev/datastream/operators/overview/ Window Windows can be defined on already partitioned Keyed Streams. Windows group the data in each key according to some characteristic.

Slide 82

Slide 82 text

@arafkarsh arafkarsh Watermarks 82 1. Watermarks are provided by the Data Source of the Application 2. They are part of the Stream and carry a timestamp 3. A Watermark asserts that all earlier events have probably arrived • Watermark w9 asserts that all the events with time < w9 have arrived. • Watermark w15 asserts that all the events with time < w15 have arrived. [Diagram: event stream with event timestamps, watermarks w5, w9, w15, w21, and late events]

Slide 83

Slide 83 text

@arafkarsh arafkarsh Goal: Count Events in 10-Second Windows 83 [Diagram: events from the stream assigned by event time to the 0–10s, 11–20s, and 21–30s windows; watermarks fire event-time timers that emit results R1 and R2, with late events arriving after the watermark]

Slide 84

Slide 84 text

@arafkarsh arafkarsh Allowed Lateness 84 • Once a window is fired, its state is freed & all the late events are dropped. • You can avoid dropping the late events by configuring the max time to wait for the late events. • With sufficient lateness allowed, Events [4] and [13] are added to their respective windows and the result is updated (R2) stream.window().allowedLateness()

Slide 85

Slide 85 text

@arafkarsh arafkarsh Timers 85 Explicit o TimerService timerService = context.timerService(); o timerService.registerEventTimeTimer(event.timestamp); // Time In Millis o timerService.registerProcessingTimeTimer(event.timestamp); // Time In Millis Implicit o stream.window(TumblingEventTimeWindows.of(Time.seconds(7))) o stream.window(TumblingProcessingTimeWindows.of(Time.seconds(7))) o SELECT user, SUM(amount) o FROM Orders o GROUP BY TUMBLE(rowtime, INTERVAL ‘1’ HOUR), user Source: Streaming Concepts & Introduction – Feb 1, 2021: https://www.youtube.com/watch?v=QVDJFZVHZ3c

Slide 86

Slide 86 text

@arafkarsh arafkarsh Watermarks – In Order Events 86 Watermarks: • To measure progress in event time. • They flow as part of the data stream and carry a timestamp t. • A Watermark(t) declares that event time has reached time t in that stream, • meaning that there should be no more elements from the stream with a timestamp t' <= t (i.e. events with timestamps older than or equal to the watermark). Source: https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/concepts/time/

Slide 87

Slide 87 text

@arafkarsh arafkarsh Watermarks – Out of Order Events 87 Source: https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/concepts/time/ • A watermark is a declaration that by that point in the stream, all events up to a certain timestamp should have arrived. • Once a watermark reaches an operator, the operator can advance its internal event time clock to the value of the watermark.

Slide 88

Slide 88 text

@arafkarsh arafkarsh Watermarks – in Parallel Streams 88 Source: https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/concepts/time/ • Watermarks are generated at, or directly after, source functions. • Each parallel subtask of a source function usually generates its watermarks independently. • These watermarks define the event time at that particular parallel source.

Slide 89

Slide 89 text

@arafkarsh arafkarsh Generating Watermarks 89 In order to work with event time, Flink needs to know the events timestamps, meaning each element in the stream needs to have its event timestamp assigned. This is usually done by accessing/extracting the timestamp from some field in the element by using a Timestamp Assigner. Source: https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/dev/datastream/event-time/generating_watermarks/ Specifying a Timestamp Assigner is optional, and, in most cases, you don’t actually want to specify one. For example, when using Kafka or Kinesis you would get timestamps directly from the Kafka/Kinesis records. Idle Input Source If one of the input splits/partitions/shards does not carry events for a while this means that the Watermark Generator also does not get any new information on which to base a watermark. To deal with this, you can use a Watermark Strategy that will detect idleness and mark an input as idle.
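A hedged sketch of attaching a bounded-out-of-orderness WatermarkStrategy with a timestamp assigner and an idleness timeout; the ClickEvent POJO, its timestamp field, and the 5-second / 1-minute bounds are assumptions:

```java
import java.time.Duration;

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class WatermarkStrategyExample {

    // Assumed event type carrying its own event-time timestamp (epoch millis).
    public static class ClickEvent {
        public String userId;
        public long timestamp;
        public ClickEvent() {}
        public ClickEvent(String userId, long timestamp) {
            this.userId = userId;
            this.timestamp = timestamp;
        }
    }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        DataStream<ClickEvent> clicks = env.fromElements(
                new ClickEvent("alice", 1_000L),
                new ClickEvent("bob", 3_000L),
                new ClickEvent("alice", 2_000L));   // deliberately out of order

        DataStream<ClickEvent> withTimestamps = clicks.assignTimestampsAndWatermarks(
                WatermarkStrategy
                        // tolerate events up to 5 seconds out of order
                        .<ClickEvent>forBoundedOutOfOrderness(Duration.ofSeconds(5))
                        // extract the event-time timestamp from the element
                        .withTimestampAssigner((event, recordTimestamp) -> event.timestamp)
                        // mark a partition/shard idle if it sends nothing for 1 minute
                        .withIdleness(Duration.ofMinutes(1)));

        withTimestamps.print();
        env.execute("Generating watermarks");
    }
}
```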

Slide 90

Slide 90 text

@arafkarsh arafkarsh Watermark Strategies 90 Source: https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/dev/datastream/event-time/generating_watermarks/ There are two places in Flink applications where a Watermark Strategy can be used: 1. directly on sources (RECOMMENDED), and 2. after non-source operations. The first option is preferable, because • it allows sources to exploit knowledge about shards/partitions/splits in the watermarking logic. • Sources can usually then track watermarks at a finer level and • the overall watermark produced by a source will be more accurate. The second option (setting a Watermark Strategy after arbitrary operations) should only be used if you cannot set a strategy directly on the source. After non-source operation

Slide 91

Slide 91 text

@arafkarsh arafkarsh Periodic Watermark Generator 91 Source: https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/dev/datastream/event-time/generating_watermarks/ A periodic generator observes stream events and generates watermarks periodically (possibly depending on the stream elements, or purely based on processing time).
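A hedged sketch of such a periodic generator for elements carrying an event-time timestamp (the TimedEvent type and the 3-second out-of-orderness bound are assumptions); it would be plugged in via WatermarkStrategy.forGenerator(...):

```java
import org.apache.flink.api.common.eventtime.Watermark;
import org.apache.flink.api.common.eventtime.WatermarkGenerator;
import org.apache.flink.api.common.eventtime.WatermarkOutput;

// Assumed event type: any element exposing an event-time timestamp in epoch millis.
class TimedEvent {
    long timestamp;
}

public class BoundedOutOfOrderGenerator implements WatermarkGenerator<TimedEvent> {

    private static final long MAX_OUT_OF_ORDERNESS = 3_000L;   // 3 seconds (assumption)
    private long currentMaxTimestamp = Long.MIN_VALUE + MAX_OUT_OF_ORDERNESS + 1;

    @Override
    public void onEvent(TimedEvent event, long eventTimestamp, WatermarkOutput output) {
        // Track the highest timestamp seen so far; no watermark is emitted per event.
        currentMaxTimestamp = Math.max(currentMaxTimestamp, eventTimestamp);
    }

    @Override
    public void onPeriodicEmit(WatermarkOutput output) {
        // Called periodically by Flink (pipeline.auto-watermark-interval, 200 ms by default):
        // emit a watermark lagging the max timestamp by the allowed out-of-orderness.
        output.emitWatermark(new Watermark(currentMaxTimestamp - MAX_OUT_OF_ORDERNESS - 1));
    }
}
```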

Slide 92

Slide 92 text

@arafkarsh arafkarsh Punctuated Watermark Generator 92 Source: https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/dev/datastream/event-time/generating_watermarks/ A punctuated watermark generator will observe the stream of events and emit a watermark whenever it sees a special element that carries watermark information.

Slide 93

Slide 93 text

@arafkarsh arafkarsh Watermark Summary 93 • Flink Supports different Types of Time • Event Time • Processing Time • With Event Time • Events can be out of Order • Expect Deterministic Results • Event time Applications are Responsible for • Providing Watermarks • Deciding how to handle late events • Streaming Applications must trade off Completeness for Latency • Can wait longer to have more complete information before acting • Can wait less to reduce latency • Watermarks are the mechanism for managing this trade off Source: https://www.youtube.com/watch?v=QVDJFZVHZ3c

Slide 94

Slide 94 text

@arafkarsh arafkarsh Core Building Blocks • Event Time • Event Streams • State • Snapshots 94

Slide 95

Slide 95 text

@arafkarsh arafkarsh Flink Core Building Blocks 95 Event Streams Real-time & hindsight State Complex Business Logic Consistency with out-of- order data & Late data Event Time Snapshots Forking / versioning / Time Travel Source: Flink Forward 2021: https://www.youtube.com/watch?v=vLLn5PxF2Lw

Slide 96

Slide 96 text

@arafkarsh arafkarsh Flink API Architecture (v1.14) 96 Table / SQL API Source: Flink Forward 2021: https://www.youtube.com/watch?v=vLLn5PxF2Lw Relational Planner DataStream API Stateful Functions Internal Streams API Runtime

Slide 97

Slide 97 text

@arafkarsh arafkarsh 97 Consistency with out-of- order data & Late data Event Time

Slide 98

Slide 98 text

@arafkarsh arafkarsh Handling Time 98 Partition 2 Partition 1 Partition 3 Messaging Layer Kafka / Kinesis Data Streams Event Time Broker Time Source: https://nightlies.apache.org/flink/flink-docs-release-1.13/docs/concepts/time/ Event Producer Flink Data Source Flink Window Operator [ ] [ ] Processing Time Ingestion Time

Slide 99

Slide 99 text

@arafkarsh arafkarsh Handling Event Time 99 • Can Ensure Ordering of Event Time • Increases Latency for Ordered Event Time • Flink Reconstruct the order Event time: Event time is the time that each individual event occurred on its producing device. Processing time: Processing time refers to the system time of the machine that is executing the respective operation. Source: https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/concepts/time/

Slide 100

Slide 100 text

@arafkarsh arafkarsh Windows 100 • Windows are at the heart of processing infinite streams. • Windows split the stream into “buckets” of finite size, over which we can apply computations. • It is created as soon as the first element that should belong to this window arrives, and the • Window is completely removed when the time (event or processing time) passes its end timestamp plus the user-specified allowed lateness. • Flink guarantees removal only for time-based windows. • 2 Categories of Windows – Keyed keyBy(…) and non-Keyed Windows windowAll(…) Source: https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/dev/datastream/operators/windows/ Types of Windows 1. Time Windows 2. Count Windows

Slide 101

Slide 101 text

@arafkarsh arafkarsh Window (Sliding, Tumbling, Hopping, Session) 101 Source: https://docs.microsoft.com/en-us/stream-analytics-query/windowing-azure-stream-analytics Sliding Tumbling Hopping Session

Slide 102

Slide 102 text

@arafkarsh arafkarsh Window – Tumbling 102 Tumbling windows have a fixed size and do not overlap. • Without offsets hourly tumbling windows are aligned with epoch, that is you will get windows such as • 1:00:00.000 - 1:59:59.999, 2:00:00.000 - 2:59:59.999 and so on. • With an offset of 15 minutes you would, for example, get 1:15:00.000 - 2:14:59.999. • An important use case for offsets is to adjust windows to time zones other than UTC-0. Source: https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/dev/datastream/operators/windows/

Slide 103

Slide 103 text

@arafkarsh arafkarsh Window – Sliding 103 Source: https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/dev/datastream/operators/windows/ You could have windows of size 10 minutes that slide by 5 minutes. With this, every 5 minutes you get a window that contains the events that arrived during the last 10 minutes.

Slide 104

Slide 104 text

@arafkarsh arafkarsh Window – Session 104 • The session window assigner groups elements by sessions of activity. • Session windows do not overlap and do not have a fixed start and end time. Source: https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/dev/datastream/operators/windows/
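A hedged sketch contrasting the tumbling, sliding, and session assigners over the same keyed (user, value) stream; the processing-time variants, socket source, and window sizes are assumptions so the example runs without a timestamp assigner:

```java
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.assigners.ProcessingTimeSessionWindows;
import org.apache.flink.streaming.api.windowing.assigners.SlidingProcessingTimeWindows;
import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

public class WindowAssignersExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        DataStream<Tuple2<String, Integer>> events = env
                .socketTextStream("localhost", 9999)                 // assumed source: "user value" lines
                .map(line -> {
                    String[] parts = line.split("\\s+");
                    return Tuple2.of(parts[0], Integer.parseInt(parts[1]));
                })
                .returns(Types.TUPLE(Types.STRING, Types.INT));

        // Tumbling: fixed, non-overlapping 10-second buckets
        events.keyBy(e -> e.f0)
              .window(TumblingProcessingTimeWindows.of(Time.seconds(10)))
              .sum(1)
              .print("tumbling");

        // Sliding: 10-second windows evaluated every 5 seconds (overlapping)
        events.keyBy(e -> e.f0)
              .window(SlidingProcessingTimeWindows.of(Time.seconds(10), Time.seconds(5)))
              .sum(1)
              .print("sliding");

        // Session: a window closes after a 30-second gap of inactivity per key
        events.keyBy(e -> e.f0)
              .window(ProcessingTimeSessionWindows.withGap(Time.seconds(30)))
              .sum(1)
              .print("session");

        env.execute("Window assigners");
    }
}
```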

Slide 105

Slide 105 text

@arafkarsh arafkarsh Window – Global 105 • This windowing scheme is only useful if you also specify a custom trigger. • Otherwise, no computation will be performed, as the global window does not have a natural end at which we could process the aggregated elements. Source: https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/dev/datastream/operators/windows/

Slide 106

Slide 106 text

@arafkarsh arafkarsh Window Functions 106 Source: https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/dev/datastream/operators/windows/ Reduce Function Aggregate Function Process Window Function Process Window Function with Incremental Aggregation • Window functions are used to specify the computation that needs to happen on the window. • This is done when a Window is ready for Processing. • Triggers are used to determine when the Window is ready for Computation. The window function can be one of Reduce Function, Aggregate Function, or Process Window Function. The Reduce Function and Aggregate Function can be executed more efficiently because Flink can incrementally aggregate the elements for each window as they arrive. A Process Window Function gets an Iterable for all the elements contained in a window and additional meta information about the window to which the elements belong.

Slide 107

Slide 107 text

@arafkarsh arafkarsh Reduce Function 107 A Reduce Function specifies how two elements from the input are combined to produce an output element of the same type. Source: https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/dev/datastream/operators/windows/

Slide 108

Slide 108 text

@arafkarsh arafkarsh Aggregate Function 108 Source: https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/dev/datastream/operators/windows/ An Aggregate Function is a generalized version of a Reduce Function that has three types: 1. an input type (IN), 2. accumulator type (ACC), 3. and an output type (OUT). The input type is the type of elements in the input stream and the Aggregate Function has a method for adding one input element to an accumulator. The interface also has methods for 1. creating an initial accumulator, 2. for merging two accumulators into one accumulator and for 3. extracting an output (of type OUT) from an accumulator. Same as with Reduce Function, Flink will incrementally aggregate input elements of a window as they arrive.
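A hedged sketch of an AggregateFunction computing a per-key average, making the three types IN, ACC and OUT explicit; it would be attached as keyedStream.window(...).aggregate(new AverageAggregate()):

```java
import org.apache.flink.api.common.functions.AggregateFunction;
import org.apache.flink.api.java.tuple.Tuple2;

/**
 * IN  = Tuple2<String, Long>  (key, value)
 * ACC = Tuple2<Long, Long>    (running sum, count)
 * OUT = Double                (average)
 */
public class AverageAggregate
        implements AggregateFunction<Tuple2<String, Long>, Tuple2<Long, Long>, Double> {

    @Override
    public Tuple2<Long, Long> createAccumulator() {
        return Tuple2.of(0L, 0L);                          // initial (sum, count)
    }

    @Override
    public Tuple2<Long, Long> add(Tuple2<String, Long> value, Tuple2<Long, Long> acc) {
        return Tuple2.of(acc.f0 + value.f1, acc.f1 + 1L);  // incrementally fold one element in
    }

    @Override
    public Double getResult(Tuple2<Long, Long> acc) {
        return acc.f1 == 0 ? 0.0 : ((double) acc.f0) / acc.f1;
    }

    @Override
    public Tuple2<Long, Long> merge(Tuple2<Long, Long> a, Tuple2<Long, Long> b) {
        return Tuple2.of(a.f0 + b.f0, a.f1 + b.f1);        // needed e.g. when session windows merge
    }
}
```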

Slide 109

Slide 109 text

@arafkarsh arafkarsh Process Function 109 Source: https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/dev/datastream/operators/windows/ A Process Window Function gets an Iterable containing all the elements of the window, and a Context object with access to time and state information, which enables it to provide more flexibility than other window functions.
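A hedged sketch of a ProcessWindowFunction that counts the elements in each keyed window and emits the window's end timestamp from the Context; it would be attached with keyedStream.window(...).process(new CountPerWindow()):

```java
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.functions.windowing.ProcessWindowFunction;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;
import org.apache.flink.util.Collector;

/** Counts elements per key and window, using the Context for window metadata. */
public class CountPerWindow
        extends ProcessWindowFunction<Tuple2<String, Long>, String, String, TimeWindow> {

    @Override
    public void process(String key,
                        Context context,
                        Iterable<Tuple2<String, Long>> elements,
                        Collector<String> out) {
        long count = 0;
        for (Tuple2<String, Long> ignored : elements) {   // the full window contents are buffered
            count++;
        }
        out.collect("key=" + key
                + " windowEnd=" + context.window().getEnd()
                + " count=" + count);
    }
}
```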

Slide 110

Slide 110 text

@arafkarsh arafkarsh Process Function with Incremental Aggregation 110 A Process Window Function can be combined with either a Reduce Function, or an Aggregate Function to incrementally aggregate elements as they arrive in the window. When the window is closed, the Process Window Function will be provided with the aggregated result. Source: https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/dev/datastream/operators/windows/

Slide 111

Slide 111 text

@arafkarsh arafkarsh Trigger 111 • A Trigger determines when a window (as formed by the window assigner) is ready to be processed by the window function. • It comes with a default Trigger. Source: https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/dev/datastream/operators/windows/ 1. The onElement() method is called for each element that is added to a window. 2. The onEventTime() method is called when a registered event-time timer fires. 3. The onProcessingTime() method is called when a registered processing-time timer fires. 4. The onMerge() method is relevant for stateful triggers and merges the states of two triggers when their corresponding windows merge, e.g. when using session windows. 5. The clear() method performs any action needed upon removal of the corresponding window.

Slide 112

Slide 112 text

@arafkarsh arafkarsh Evictor 112 Flink’s windowing model allows specifying an optional Evictor in addition to the Window Assigner and the Trigger. This can be done using the evictor(...) method (shown in the beginning of this document). The evictor has the ability to remove elements from a window after the trigger fires and before and/or after the window function is applied. Flink comes with three pre-implemented evictors. These are: • Count Evictor: keeps up to a user-specified number of elements from the window and discards the remaining ones from the beginning of the window buffer. • Delta Evictor: takes a Delta Function and a threshold, computes the delta between the last element in the window buffer and each of the remaining ones, and removes the ones with a delta greater or equal to the threshold. • Time Evictor: takes as argument an interval in milliseconds and for a given window, it finds the maximum timestamp max_ts among its elements and removes all the elements with timestamps smaller than max_ts - interval. • By default, all the pre-implemented evictors apply their logic before the window function Source: https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/dev/datastream/operators/windows/

Slide 113

Slide 113 text

@arafkarsh arafkarsh Handling Late Events 113 Source: https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/dev/datastream/operators/windows/#allowed-lateness • By default, the allowed lateness is set to 0. • That is, elements that arrive behind the watermark will be dropped.

Slide 114

Slide 114 text

@arafkarsh arafkarsh Late Events – Side Out 114 Source: https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/dev/datastream/operators/windows/ Using Flink’s side output feature you can get a stream of the data that was discarded as late.
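A hedged sketch combining allowed lateness with the side output for data that is still too late; the event type, window size, and lateness bound are assumptions, and timestamps/watermarks are assumed to be assigned upstream:

```java
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.util.OutputTag;

public class LateDataExample {

    public static SingleOutputStreamOperator<Tuple2<String, Integer>> countWithLateHandling(
            DataStream<Tuple2<String, Integer>> input) {

        // OutputTag must be created as an anonymous subclass so the element type is captured.
        final OutputTag<Tuple2<String, Integer>> lateTag =
                new OutputTag<Tuple2<String, Integer>>("late-events") {};

        SingleOutputStreamOperator<Tuple2<String, Integer>> result = input
                .keyBy(e -> e.f0)
                .window(TumblingEventTimeWindows.of(Time.seconds(10)))
                .allowedLateness(Time.seconds(30))          // keep window state 30 s past the watermark
                .sideOutputLateData(lateTag)                // events later than that go to the side output
                .sum(1);

        // The discarded-late-data stream can be logged, stored, or reprocessed separately.
        DataStream<Tuple2<String, Integer>> lateStream = result.getSideOutput(lateTag);
        lateStream.print("late");

        return result;
    }
}
```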

Slide 115

Slide 115 text

@arafkarsh arafkarsh 115 Event Streams Real-time & hindsight State Complex Business Logic

Slide 116

Slide 116 text

@arafkarsh arafkarsh Streams & Batch Processing 116 • Processes “unbounded” (stream) and “bounded” (batch) data • Processes recorded (offline) and live (real-time) data • Batch is just a special case of streaming data Event Log Bounded Stream Bounded Stream now Unbounded Stream Unbounded Stream Start of the Stream Past Future Source: Flink Forward 2021: https://www.youtube.com/watch?v=vLLn5PxF2Lw

Slide 117

Slide 117 text

@arafkarsh arafkarsh Stateful Event & Stream Processing 117 Source Transform Transform Sink Source Transform Window Sink Streaming Data Flow

Slide 118

Slide 118 text

@arafkarsh arafkarsh Stateful Event & Stream Processing 118 Source Filter & Transform Window State Read & Write Sink 1 2 keyBy(R1, R3, R5) keyBy(R2, R4, R6) Scalable Local State Scalable Local State keyBy() keyBy() High Performance In Memory Computing & Parallelize the Tasks Raw Events Raw Events New Aggregated Event External Storage For Snapshots

Slide 119

Slide 119 text

@arafkarsh arafkarsh 119 Snapshots Forking / versioning / Time Travel

Slide 120

Slide 120 text

@arafkarsh arafkarsh Storage for States 120 Processor State External External Storage Processor State Snapshots Internal Storage Internal • Independent from Processing • Low Performance due to remote Storage • Hard to get ”Exactly-Once” guarantees • Highly consistent distributed Snapshotting • Faster access with Local Storage • Stream processor needs to handle scaling and storage

Slide 121

Slide 121 text

@arafkarsh arafkarsh Checkpoint Barrier 121 Source Filter & Transform Window State Read & Write Sink 1 2 keyBy(R1, R3, R5) keyBy(R2, R4, R6) keyBy() keyBy() Checkpoint Barrier Stream partition Record offset Local State Local State Fault Tolerant Storage HDFS, S3, NFS… Snapshots Trigger Checkpoint via RPC from Job Manager Aggregate, Count etc., Aggregate, Count etc., Record offset Aggregate, Count etc., Message Shards / Partitions

Slide 122

Slide 122 text

@arafkarsh arafkarsh Snapshot Alignment 122 Source Filter & Transform Window State Read & Write 1 2 keyBy() keyBy() Checkpoint Barrier Stream partition Record offset Local State Local State Fault Tolerant Storage HDFS, S3, NFS… Snapshots Trigger Checkpoint via RPC from Job Manager Aggregate, Count etc., Aggregate, Count etc., Record offset Aggregate, Count etc., Message Shards / Partitions

Slide 123

Slide 123 text

@arafkarsh arafkarsh Snapshot Alignment 123 Source Filter & Transform Window State Read & Write 1 2 keyBy() keyBy() Checkpoint Barrier Stream partition Record offset Local State Local State Fault Tolerant Storage HDFS, S3, NFS… Snapshots Trigger Checkpoint via RPC from Job Manager Aggregate, Count etc., Aggregate, Count etc., Record offset Aggregate, Count etc., Message Shards / Partitions

Slide 124

Slide 124 text

@arafkarsh arafkarsh Snapshots & Fault Tolerance 124 Source Filter & Transform Window State Read & Write Sink 1 2 keyBy(R1, R3, R5) keyBy(R2, R4, R6) Local Storage Local Storage keyBy() keyBy() Reload State Reset Positions in Input Stream Rolling back Computation Re-Processing Fault Tolerant Storage HDFS, S3, NFS… Snapshots

Slide 125

Slide 125 text

@arafkarsh arafkarsh Configure Checkpoints – Local Storage 125 Processor State Snapshots HashMap State Backend • Stores the state in Memory (HashMap) • Faster access with Memory Storage • Subject to Garbage Collection Processor State Snapshots RocksDB State Backend • Stores the state in Local RocksDB • Limited only by Local Disk Size • Slower than Memory Storage (10x Slower) • Serializes on Write and Deserializes on Read RocksDB Key Value Storage • Jobs with large state, long windows, large key/value states. • All high-availability setups • Jobs with large state, long windows, large key/value states. • All high-availability setups
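A hedged sketch of configuring checkpointing and a state backend in code (in Kinesis Data Analytics this is managed for you); HashMapStateBackend and EmbeddedRocksDBStateBackend are the Flink 1.13+ class names, and the interval and S3 path are assumptions:

```java
import org.apache.flink.contrib.streaming.state.EmbeddedRocksDBStateBackend;
import org.apache.flink.runtime.state.hashmap.HashMapStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointConfigExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Take a checkpoint every 60 seconds (interval is an assumption).
        env.enableCheckpointing(60_000L);

        // Option 1: keep working state on the JVM heap (fast, bounded by memory / GC).
        env.setStateBackend(new HashMapStateBackend());

        // Option 2: keep working state in local RocksDB (large state, slower per access).
        // env.setStateBackend(new EmbeddedRocksDBStateBackend());

        // Durable checkpoint storage outside the TaskManagers (bucket name is a placeholder).
        env.getCheckpointConfig().setCheckpointStorage("s3://my-flink-checkpoints/app");

        // Trivial pipeline so the job graph is not empty.
        env.fromElements(1, 2, 3).print();
        env.execute("Checkpointing configuration");
    }
}
```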

Slide 126

Slide 126 text

@arafkarsh arafkarsh Integration & Comparisons • Integration • Comparison with Spark 126

Slide 127

Slide 127 text

@arafkarsh arafkarsh Integrations 127 • Event Logs • Kafka, AWS Kinesis, Pulsar • File Systems • HDFS, NFS, S3, MapR FS… • Databases • JDBC, Hcatalog • Encodings • Avro, JSON, CSV, Parquet, ORC • Key Value Stores • Redis, Cassandra, Elastic Search

Slide 128

Slide 128 text

@arafkarsh arafkarsh Apache Flink Vs Apache Spark 128 Features Flink Spark 1 Developed in Java Scala 2 Streaming Model Windowing & Checkpoints Micro batching 3 Real Time Processing Real time Processing Near Real time 4 Models Data Stream / Table SQL RDD 5 Performance High Medium 6 Supported Languages Java, Scala, Python, SQL Java, Scala, Python, R, SQL 7 SQL Analytics Yes Yes 8 Runs on Hadoop, Mesos, Kubernetes, AWS Kinesis, …. Hadoop, Mesos, Kubernetes AWS EMR 9 Machine Learning Yes - FlinkML Yes FlinkML: https://nightlies.apache.org/flink/flink-docs-release-1.2/dev/libs/ml/index.html

Slide 129

Slide 129 text

@arafkarsh arafkarsh Flink Summary 129 1. Distributed and Fault Tolerant 2. Stateful, No DB Needed 3. Horizontally Scalable 4. Parallel Execution, No Concurrency Issues

Slide 130

Slide 130 text

@arafkarsh arafkarsh Case Studies 1. HP Ink Cartridge Manufacturing Process 2. Infor: Compliance Violation (Banking) 3. Biogen: Centralized Log Management 4. Viber: Massive Data Handling - 300K Msgs / Second 5. AWS: IoT Data using Firehose and Data Analytics 6. Nordstrom: Ledger with Multi Data Views 130 4

Slide 131

Slide 131 text

@arafkarsh arafkarsh HP: Ink Cartridge Manufacturing Process • From the Factory, Data comes into Kinesis • Using Lambdas, Data is stored in DynamoDB (Sequential Ops) • Firehose stores Raw Data in S3 • Enriched Data is stored in Aurora, Elasticsearch and S3 • Glue is used for Batch Processing 131 Source: https://www.youtube.com/watch?v=KM5ONS2fnG0

Slide 132

Slide 132 text

@arafkarsh arafkarsh Infor: Compliance Violation Realtime / Batch • Security & Tx Data is sent to Kinesis Data Stream • Services in Fargate pick up the data from KDS and send it to Aurora & S3 • Scheduler (5) invokes the service to start EMR processing. • EMR fetches data from Aurora & S3 and sends data to Event Bridge • Event Bridge (10) sends data to SQS • Service in Fargate picks up the data from SQS and sends out email. 132 Source: https://www.youtube.com/watch?v=0gNMEyei-co

Slide 133

Slide 133 text

@arafkarsh arafkarsh Biogen: Centralized Log Management • Application, network and VPC logs are sent to Kinesis Firehose • Firehose (4) sends the data to Lambda • Lambda (5) enriches / normalizes the data and stores it in S3 • Lambda (7) picks up the data from S3 and stores it in Elasticsearch • Kibana is used for data visualization 133 Source: https://www.youtube.com/watch?v=m8xtR3-ZQs8
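The log-shipping side of a pipeline like this might look like the sketch below: putting a single log record onto a Firehose delivery stream with the AWS SDK v2 for Java. The "central-logs" delivery stream name and the log payload are made up for illustration.

```java
import software.amazon.awssdk.core.SdkBytes;
import software.amazon.awssdk.services.firehose.FirehoseClient;
import software.amazon.awssdk.services.firehose.model.PutRecordRequest;
import software.amazon.awssdk.services.firehose.model.Record;

public class LogShipper {
  public static void main(String[] args) {
    FirehoseClient firehose = FirehoseClient.create();

    // Example log line; in practice this would come from the application or a log agent
    String logLine = "{\"level\":\"ERROR\",\"service\":\"vpc\",\"msg\":\"connection refused\"}\n";

    // Firehose buffers records and delivers them in batches to the configured destination (S3 here)
    firehose.putRecord(PutRecordRequest.builder()
        .deliveryStreamName("central-logs")   // hypothetical delivery stream name
        .record(Record.builder()
            .data(SdkBytes.fromUtf8String(logLine))
            .build())
        .build());
  }
}
```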

Slide 134

Slide 134 text

@arafkarsh arafkarsh Viber: Massive Data Lakes 300K Msgs / Second • Events from the Viber BE are batched and sent to Kinesis • Using the KCL in Apache Storm, events are picked up from Kinesis and stored in S3 via Firehose • Aggregated data is sent to another Kinesis stream, and a Lambda sends events back to the Viber BE based on rules 134 Source: https://www.youtube.com/watch?v=7i1tj59pvYw EMR – Elastic Map Reduce
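A hedged sketch of the batching step: sending a batch of events to a Kinesis Data Stream with a single PutRecords call (AWS SDK v2 for Java). The "viber-events" stream name and the partition-key choice are illustrative, not Viber's actual code; PutRecords accepts at most 500 records per call, so larger batches must be split.

```java
import java.util.List;
import java.util.stream.Collectors;
import software.amazon.awssdk.core.SdkBytes;
import software.amazon.awssdk.services.kinesis.KinesisClient;
import software.amazon.awssdk.services.kinesis.model.PutRecordsRequest;
import software.amazon.awssdk.services.kinesis.model.PutRecordsRequestEntry;

public class EventBatcher {

  private final KinesisClient kinesis = KinesisClient.create();

  public void sendBatch(List<String> events) {
    // The partition key decides which shard each record lands on
    List<PutRecordsRequestEntry> entries = events.stream()
        .map(e -> PutRecordsRequestEntry.builder()
            .partitionKey(String.valueOf(e.hashCode()))
            .data(SdkBytes.fromUtf8String(e))
            .build())
        .collect(Collectors.toList());

    // One API call per batch keeps the per-record overhead low at high throughput
    kinesis.putRecords(PutRecordsRequest.builder()
        .streamName("viber-events")   // hypothetical stream name
        .records(entries)
        .build());
  }
}
```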

Slide 135

Slide 135 text

@arafkarsh arafkarsh Nordstrom: Ledger with Multi Data Views • Customer data is stored in a Kinesis Data Stream as raw data (the Ledger) • Firehose (4) stores the raw data in an S3 bucket • Lambdas (5.1-5.3) transform and store the data in different databases, in different formats, for various read use cases 135 Source: https://www.youtube.com/watch?v=O7PTtm_3Os4

Slide 136

Slide 136 text

@arafkarsh arafkarsh AWS: IoT Data – Firehose – Analytics – DynamoDB • MQTT-based data arrives from IoT devices • Firehose stores the data in S3 • Kinesis Data Analytics reads the data from Firehose, analyzes it, and sends the results to another Firehose delivery stream that stores them in S3 • Using Lambda, the data is enriched and stored in DynamoDB • A web-based app lets users read the data from DynamoDB 136 Source: https://www.youtube.com/watch?v=uWUAcc68MWI

Slide 137

Slide 137 text

@arafkarsh arafkarsh 137 Design Patterns are solutions to general problems that software developers face during software development. Design Patterns

Slide 138

Slide 138 text

@arafkarsh arafkarsh 138 Thank you DREAM EMPOWER AUTOMATE MOTIVATE India: +91.999.545.8627 https://arafkarsh.medium.com/ https://speakerdeck.com/arafkarsh https://www.linkedin.com/in/arafkarsh/ https://www.youtube.com/user/arafkarsh/playlists http://www.slideshare.net/arafkarsh http://www.arafkarsh.com/ @arafkarsh arafkarsh LinkedIn arafkarsh.com Medium.com Speakerdeck.com

Slide 139

Slide 139 text

@arafkarsh arafkarsh 139 Slides: https://speakerdeck.com/arafkarsh Blogs https://arafkarsh.medium.com/ Web: https://arafkarsh.com/ Source: https://github.com/arafkarsh

Slide 140

Slide 140 text

@arafkarsh arafkarsh 140 Slides: https://speakerdeck.com/arafkarsh

Slide 141

Slide 141 text

@arafkarsh arafkarsh References 141 1. July 15, 2015 – Agile is Dead : GoTo 2015 By Dave Thomas 2. Apr 7, 2016 - Agile Project Management with Kanban | Eric Brechner | Talks at Google 3. Sep 27, 2017 - Scrum vs Kanban - Two Agile Teams Go Head-to-Head 4. Feb 17, 2019 - Lean vs Agile vs Design Thinking 5. Dec 17, 2020 - Scrum vs Kanban | Differences & Similarities Between Scrum & Kanban 6. Feb 24, 2021 - Agile Methodology Tutorial for Beginners | Jira Tutorial | Agile Methodology Explained. Agile Methodologies

Slide 142

Slide 142 text

@arafkarsh arafkarsh References 142 1. VMware: What is Cloud Architecture? 2. Red Hat: What is Cloud Architecture? 3. Cloud Computing Architecture 4. Cloud Adoption Essentials 5. Google: Hybrid and Multi Cloud 6. IBM: Hybrid Cloud Architecture Intro 7. IBM: Hybrid Cloud Architecture: Part 1 8. IBM: Hybrid Cloud Architecture: Part 2 9. Cloud Computing Basics: IaaS, PaaS, SaaS 1. IBM: IaaS Explained 2. IBM: PaaS Explained 3. IBM: SaaS Explained 4. IBM: FaaS Explained 5. IBM: What is Hypervisor? Cloud Architecture

Slide 143

Slide 143 text

@arafkarsh arafkarsh References 143 Microservices 1. Microservices Definition by Martin Fowler 2. When to use Microservices By Martin Fowler 3. GoTo: Sep 3, 2020: When to use Microservices By Martin Fowler 4. GoTo: Feb 26, 2020: Monolith Decomposition Pattern 5. ThoughtWorks: Microservices in a Nutshell 6. Microservices Prerequisites 7. What do you mean by Event Driven? 8. Understanding Event Driven Design Patterns for Microservices

Slide 144

Slide 144 text

@arafkarsh arafkarsh References – Microservices – Videos 144 1. Martin Fowler – Micro Services : https://www.youtube.com/watch?v=2yko4TbC8cI&feature=youtu.be&t=15m53s 2. GOTO 2016 – Microservices at NetFlix Scale: Principles, Tradeoffs & Lessons Learned. By R Meshenberg 3. Mastering Chaos – A NetFlix Guide to Microservices. By Josh Evans 4. GOTO 2015 – Challenges Implementing Micro Services By Fred George 5. GOTO 2016 – From Monolith to Microservices at Zalando. By Rodrigue Scaefer 6. GOTO 2015 – Microservices @ Spotify. By Kevin Goldsmith 7. Modelling Microservices @ Spotify : https://www.youtube.com/watch?v=7XDA044tl8k 8. GOTO 2015 – DDD & Microservices: At last, Some Boundaries By Eric Evans 9. GOTO 2016 – What I wish I had known before Scaling Uber to 1000 Services. By Matt Ranney 10. DDD Europe – Tackling Complexity in the Heart of Software By Eric Evans, April 11, 2016 11. AWS re:Invent 2016 – From Monolithic to Microservices: Evolving Architecture Patterns. By Emerson L, Gilt D. Chiles 12. AWS 2017 – An overview of designing Microservices based Applications on AWS. By Peter Dalbhanjan 13. GOTO Jun, 2017 – Effective Microservices in a Data Centric World. By Randy Shoup. 14. GOTO July, 2017 – The Seven (more) Deadly Sins of Microservices. By Daniel Bryant 15. Sept, 2017 – Airbnb, From Monolith to Microservices: How to scale your Architecture. By Melanie Cubula 16. GOTO Sept, 2017 – Rethinking Microservices with Stateful Streams. By Ben Stopford. 17. GOTO 2017 – Microservices without Servers. By Glynn Bird.

Slide 145

Slide 145 text

@arafkarsh arafkarsh References 145 Domain Driven Design 1. Oct 27, 2012 What I have learned about DDD Since the book. By Eric Evans 2. Mar 19, 2013 Domain Driven Design By Eric Evans 3. Jun 02, 2015 Applied DDD in Java EE 7 and Open Source World 4. Aug 23, 2016 Domain Driven Design the Good Parts By Jimmy Bogard 5. Sep 22, 2016 GOTO 2015 – DDD & REST Domain Driven API’s for the Web. By Oliver Gierke 6. Jan 24, 2017 Spring Developer – Developing Micro Services with Aggregates. By Chris Richardson 7. May 17. 2017 DEVOXX – The Art of Discovering Bounded Contexts. By Nick Tune 8. Dec 21, 2019 What is DDD - Eric Evans - DDD Europe 2019. By Eric Evans 9. Oct 2, 2020 - Bounded Contexts - Eric Evans - DDD Europe 2020. By. Eric Evans 10. Oct 2, 2020 - DDD By Example - Paul Rayner - DDD Europe 2020. By Paul Rayner

Slide 146

Slide 146 text

@arafkarsh arafkarsh References 146 Event Sourcing and CQRS 1. IBM: Event Driven Architecture – Mar 21, 2021 2. Martin Fowler: Event Driven Architecture – GOTO 2017 3. Greg Young: A Decade of DDD, Event Sourcing & CQRS – April 11, 2016 4. Nov 13, 2014 GOTO 2014 – Event Sourcing. By Greg Young 5. Mar 22, 2016 Building Micro Services with Event Sourcing and CQRS 6. Apr 15, 2016 YOW! Nights – Event Sourcing. By Martin Fowler 7. May 08, 2017 When Micro Services Meet Event Sourcing. By Vinicius Gomes

Slide 147

Slide 147 text

@arafkarsh arafkarsh References 147 Kafka 1. Understanding Kafka 2. Understanding RabbitMQ 3. IBM: Apache Kafka – Sept 18, 2020 4. Confluent: Apache Kafka Fundamentals – April 25, 2020 5. Confluent: How Kafka Works – Aug 25, 2020 6. Confluent: How to integrate Kafka into your environment – Aug 25, 2020 7. Kafka Streams – Sept 4, 2021 8. Kafka: Processing Streaming Data with KSQL – Jul 16, 2018 9. Kafka: Processing Streaming Data with KSQL – Nov 28, 2019

Slide 148

Slide 148 text

@arafkarsh arafkarsh References 148 Databases: Big Data / Cloud Databases 1. Google: How to Choose the right database? 2. AWS: Choosing the right Database 3. IBM: NoSQL Vs. SQL 4. A Guide to NoSQL Databases 5. How does NoSQL Databases Work? 6. What is Better? SQL or NoSQL? 7. What is DBaaS? 8. NoSQL Concepts 9. Key Value Databases 10. Document Databases 11. Jun 29, 2012 – Google I/O 2012 - SQL vs NoSQL: Battle of the Backends 12. Feb 19, 2013 - Introduction to NoSQL • Martin Fowler • GOTO 2012 13. Jul 25, 2018 - SQL vs NoSQL or MySQL vs MongoDB 14. Oct 30, 2020 - Column vs Row Oriented Databases Explained 15. Dec 9, 2020 - How do NoSQL databases work? Simply Explained! 1. Graph Databases 2. Column Databases 3. Row Vs. Column Oriented Databases 4. Database Indexing Explained 5. MongoDB Indexing 6. AWS: DynamoDB Global Indexing 7. AWS: DynamoDB Local Indexing 8. Google Cloud Spanner 9. AWS: DynamoDB Design Patterns 10. Cloud Provider Database Comparisons 11. CockroachDB: When to use a Cloud DB?

Slide 149

Slide 149 text

@arafkarsh arafkarsh References 149 Docker / Kubernetes / Istio 1. IBM: Virtual Machines and Containers 2. IBM: What is a Hypervisor? 3. IBM: Docker Vs. Kubernetes 4. IBM: Containerization Explained 5. IBM: Kubernetes Explained 6. IBM: Kubernetes Ingress in 5 Minutes 7. Microsoft: How Service Mesh works in Kubernetes 8. IBM: Istio Service Mesh Explained 9. IBM: Kubernetes and OpenShift 10. IBM: Kubernetes Operators 11. 10 Consideration for Kubernetes Deployments Istio – Metrics 1. Istio – Metrics 2. Monitoring Istio Mesh with Grafana 3. Visualize your Istio Service Mesh 4. Security and Monitoring with Istio 5. Observing Services using Prometheus, Grafana, Kiali 6. Istio Cookbook: Kiali Recipe 7. Kubernetes: Open Telemetry 8. Open Telemetry 9. How Prometheus works 10. IBM: Observability vs. Monitoring

Slide 150

Slide 150 text

@arafkarsh arafkarsh References 150 1. Feb 6, 2020 – An introduction to TDD 2. Aug 14, 2019 – Component Software Testing 3. May 30, 2020 – What is Component Testing? 4. Apr 23, 2013 – Component Test By Martin Fowler 5. Jan 12, 2011 – Contract Testing By Martin Fowler 6. Jan 16, 2018 – Integration Testing By Martin Fowler 7. Testing Strategies in Microservices Architecture 8. Practical Test Pyramid By Ham Vocke Testing – TDD / BDD

Slide 151

Slide 151 text

@arafkarsh arafkarsh 151 1. Simoorg : LinkedIn’s own failure inducer framework. It was designed to be easy to extend and most of the important components are pluggable. 2. Pumba : A chaos testing and network emulation tool for Docker. 3. Chaos Lemur : Self-hostable application to randomly destroy virtual machines in a BOSH-managed environment, as an aid to resilience testing of high-availability systems. 4. Chaos Lambda : Randomly terminate AWS ASG instances during business hours. 5. Blockade : Docker-based utility for testing network failures and partitions in distributed applications. 6. Chaos-http-proxy : Introduces failures into HTTP requests via a proxy server. 7. Monkey-ops : Monkey-Ops is a simple service implemented in Go, which is deployed into an OpenShift V3.X and generates some chaos within it. Monkey-Ops seeks some OpenShift components like Pods or Deployment Configs and randomly terminates them. 8. Chaos Dingo : Chaos Dingo currently supports performing operations on Azure VMs and VMSS deployed to an Azure Resource Manager-based resource group. 9. Tugbot : Testing in Production (TiP) framework for Docker. Testing tools

Slide 152

Slide 152 text

@arafkarsh arafkarsh References 152 CI / CD 1. What is Continuous Integration? 2. What is Continuous Delivery? 3. CI / CD Pipeline 4. What is CI / CD Pipeline? 5. CI / CD Explained 6. CI / CD Pipeline using Java Example Part 1 7. CI / CD Pipeline using Ansible Part 2 8. Declarative Pipeline vs Scripted Pipeline 9. Complete Jenkins Pipeline Tutorial 10. Common Pipeline Mistakes 11. CI / CD for a Docker Application

Slide 153

Slide 153 text

@arafkarsh arafkarsh References 153 DevOps 1. IBM: What is DevOps? 2. IBM: Cloud Native DevOps Explained 3. IBM: Application Transformation 4. IBM: Virtualization Explained 5. What is DevOps? Easy Way 6. DevOps?! How to become a DevOps Engineer??? 7. Amazon: https://www.youtube.com/watch?v=mBU3AJ3j1rg 8. NetFlix: https://www.youtube.com/watch?v=UTKIT6STSVM 9. DevOps and SRE: https://www.youtube.com/watch?v=uTEL8Ff1Zvk 10. SLI, SLO, SLA : https://www.youtube.com/watch?v=tEylFyxbDLE 11. DevOps and SRE : Risks and Budgets : https://www.youtube.com/watch?v=y2ILKr8kCJU 12. SRE @ Google: https://www.youtube.com/watch?v=d2wn_E1jxn4

Slide 154

Slide 154 text

@arafkarsh arafkarsh References 154 1. Lewis, James, and Martin Fowler. “Microservices: A Definition of This New Architectural Term”, March 25, 2014. 2. Miller, Matt. “Innovate or Die: The Rise of Microservices”. The Wall Street Journal, October 5, 2015. 3. Newman, Sam. Building Microservices. O’Reilly Media, 2015. 4. Alagarasan, Vijay. “Seven Microservices Anti-patterns”, August 24, 2015. 5. Cockcroft, Adrian. “State of the Art in Microservices”, December 4, 2014. 6. Fowler, Martin. “Microservice Prerequisites”, August 28, 2014. 7. Fowler, Martin. “Microservice Tradeoffs”, July 1, 2015. 8. Humble, Jez. “Four Principles of Low-Risk Software Release”, February 16, 2012. 9. Zuul Edge Server, Ketan Gote, May 22, 2017 10. Ribbon, Hystrix using Spring Feign, Ketan Gote, May 22, 2017 11. Eureka Server with Spring Cloud, Ketan Gote, May 22, 2017 12. Apache Kafka, A Distributed Streaming Platform, Ketan Gote, May 20, 2017 13. Functional Reactive Programming, Araf Karsh Hamid, August 7, 2016 14. Enterprise Software Architectures, Araf Karsh Hamid, July 30, 2016 15. Docker and Linux Containers, Araf Karsh Hamid, April 28, 2015

Slide 155

Slide 155 text

@arafkarsh arafkarsh References 155 16. MSDN – Microsoft https://msdn.microsoft.com/en-us/library/dn568103.aspx 17. Martin Fowler : CQRS – http://martinfowler.com/bliki/CQRS.html 18. Udi Dahan : CQRS – http://www.udidahan.com/2009/12/09/clarified-cqrs/ 19. Greg Young : CQRS - https://www.youtube.com/watch?v=JHGkaShoyNs 20. Bertrand Meyer – CQS - http://en.wikipedia.org/wiki/Bertrand_Meyer 21. CQS : http://en.wikipedia.org/wiki/Command–query_separation 22. CAP Theorem : http://en.wikipedia.org/wiki/CAP_theorem 23. CAP Theorem : http://www.julianbrowne.com/article/viewer/brewers-cap-theorem 24. CAP 12 years how the rules have changed 25. EBay Scalability Best Practices : http://www.infoq.com/articles/ebay-scalability-best-practices 26. Pat Helland (Amazon) : Life beyond distributed transactions 27. Stanford University: Rx https://www.youtube.com/watch?v=y9xudo3C1Cw 28. Princeton University: SAGAS (1987) Hector Garcia Molina / Kenneth Salem 29. Rx Observable : https://dzone.com/articles/using-rx-java-observable