
Timing is Everything: Understanding Event-Time Processing in Flink SQL

Sharon Xie
March 21, 2024

In the stream processing context, event-time processing means that events are processed based on when they occurred, rather than when they are observed by the system (processing time). Apache Flink has a powerful framework for event-time processing, which plays a pivotal role in ensuring temporal order and result accuracy.

In this talk, we will introduce Flink's event-time semantics and demonstrate how watermarks, the mechanism for handling late-arriving events, are generated, propagated, and triggered in Flink SQL. We will explore operators such as windows and joins that are often used with event-time processing, and how different configurations can impact processing speed, cost, and correctness.

Join us for this exploration where event-time theory meets practical SQL implementation, providing you with the tools to make informed decisions and optimal trade-offs.

Transcript

  1. Timing is Everything: Understanding Event-Time Processing in Flink SQL. Sharon Xie, Flink Babysitter, Founding Engineer @ Decodable
  2. What is Apache Flink? Stateful computations over data streams. • Highly scalable • Exactly-once processing semantics • Event-time semantics and watermarks • Layered APIs: Streaming SQL (easy to use) ↔ DataStream (expressive)
  3. Time in Flink. Event time: the time at which the event happened. Processing time: the time at which the event is observed by Flink.
  4. Event time vs. processing time: • Event time is earlier than processing time • The lag is arbitrary • Events can arrive out of order
  5. Challenges: How do you know when all of the events have been received for a particular window?
  6. Watermark: • Measures the progress of event time • Tracks the maximum event time seen • Indicates the completeness of event time
  7. Define Watermark:
     CREATE TABLE sensors (
       id BIGINT,
       `value` INTEGER,
       _time TIMESTAMP(3),
       WATERMARK FOR _time AS _time - INTERVAL '3' MINUTE
     ) WITH (
       'scan.watermark.emit.strategy' = 'on-event',
       ...
     );
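     While developing, the current watermark can be observed with Flink SQL's built-in CURRENT_WATERMARK function. A minimal sketch against the sensors table above (it returns NULL until the first watermark has been emitted):
       SELECT id, _time, CURRENT_WATERMARK(_time) AS wm
       FROM sensors;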
  8. Quiz: There is a window that ends at 1:05. When can the window close? Answer: once the watermark passes 1:05, which with the 3-minute watermark delay defined above means after an event with an event time of 1:08 or later arrives.
  9. Idle source/partition: • If a partition is idle (no events), the watermark will not advance • No result will be produced • Solutions: ◦ Configure a source idle timeout, e.g. set table.exec.source.idle-timeout = 1m (see the sketch below) ◦ Balance the partitions
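     A minimal sketch of configuring the idle timeout from the SQL client ('1 min' is one accepted duration spelling; pick a value that fits your traffic):
       SET 'table.exec.source.idle-timeout' = '1 min';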
  10. Implications: • Trade-off between correctness and latency • Latency: the results of a window are only seen after the window closes • Correctness: late-arriving events are discarded after the window is closed
  11. But… can I have both? • Yes! Flink can process and emit “updates” (a changelog) • No watermark is needed • The downstream system must support “updates” • It’s costly: global state must be stored (see the sketch below)
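     As an illustration, a plain (non-windowed) group aggregation over the sensors table emits such a changelog: each incoming event may update a previously emitted row, so no watermark is needed, but per-key state is kept indefinitely. A sketch:
       SELECT id, COUNT(*) AS event_cnt, AVG(`value`) AS avg_value
       FROM sensors
       GROUP BY id;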
  12. Quick Summary: • Timely responses & analytics are based on event time • Flink uses watermarks to account for out-of-order events • Watermarks allow a trade-off between accuracy and latency
  13. Flink SQL (Window TVF): • TVF: Table-Valued Function • Returns a new relation with all columns of the original stream plus three additional columns: window_start, window_end, window_time (example below)
  14. Window Types - Cumulative: • Similar to a tumble window, but with early firing at the defined interval • Defined by a max window size and a window step (example below) Ref: https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/dev/table/sql/queries/window-tvf/
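     A sketch of the CUMULATE syntax, again over the sensors table, with an illustrative 10-minute step and 1-hour max window size:
       SELECT window_start, window_end, COUNT(*) AS event_cnt
       FROM TABLE(
         CUMULATE(TABLE sensors, DESCRIPTOR(_time), INTERVAL '10' MINUTES, INTERVAL '1' HOUR))
       GROUP BY window_start, window_end;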
  15. Window Types - Session 😃 Supported in Flink 1.19: • A new window is started when the gap between consecutive event times exceeds the session gap (example below) Ref: https://nightlies.apache.org/flink/flink-docs-release-1.19/docs/dev/table/sql/queries/window-tvf/#session
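     A sketch of the 1.19 SESSION TVF, assuming the sensors table, a per-id partition, and an illustrative 5-minute gap:
       SELECT window_start, window_end, id, COUNT(*) AS event_cnt
       FROM TABLE(
         SESSION(TABLE sensors PARTITION BY id, DESCRIPTOR(_time), INTERVAL '5' MINUTES))
       GROUP BY window_start, window_end, id;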
  16. Window Join: • A window join adds the dimension of time into the join criteria themselves • Use case: computing click-through events (sketch below)
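     A sketch of the click-through use case. The impressions and clicks tables and their columns are hypothetical; both inputs are windowed with the same spec and the join must include window_start and window_end:
       SELECT L.ad_id, L.impression_time, R.click_time
       FROM (
         SELECT * FROM TABLE(
           TUMBLE(TABLE impressions, DESCRIPTOR(impression_time), INTERVAL '10' MINUTES))
       ) L
       JOIN (
         SELECT * FROM TABLE(
           TUMBLE(TABLE clicks, DESCRIPTOR(click_time), INTERVAL '10' MINUTES))
       ) R
       ON L.window_start = R.window_start
         AND L.window_end = R.window_end
         AND L.ad_id = R.ad_id;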
  17. Temporal Join: • Enriches a stream with the value of the joined record as of the event time • Example: continuously computing the price for each order based on the exchange rate in effect when the order was placed (sketch below)
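     A sketch of the exchange-rate example. The orders and currency_rates tables are hypothetical; for an event-time temporal join, currency_rates needs a primary key and a watermark so it can act as a versioned table:
       SELECT o.order_id,
              o.price * r.conversion_rate AS converted_price,
              o.order_time
       FROM orders AS o
       LEFT JOIN currency_rates FOR SYSTEM_TIME AS OF o.order_time AS r
         ON o.currency = r.currency;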
  18. Summary: • Event time is essential for timely responses and analytics • Watermarks and windowing are the key concepts • Flink SQL simplifies event-time processing