
SQUER Solutions
November 04, 2024

Building Event-Driven Systems at Scale with Azure Cosmos DB

The most prominent shift we've observed in the field of distributed systems over the last decade is probably the transition from traditionally synchronous, tightly integrated systems to message-driven architectures. This change is a logical response to the high demands for scalability, elasticity, and resilience within today's software systems. However overwhelming the array of message-based options may seem, Azure Cosmos DB serves as an incredibly solid backbone for most of them, whether used as a transactional outbox or as a fully-fledged event store in the context of event sourcing. By diving into real-world experiences, we will discover how Azure Cosmos DB can guarantee nearly unlimited scalability if you follow a few basic principles and patterns to model your data efficiently. While Azure Cosmos DB should not be considered yet another silver bullet, this talk will show you why you should keep it as a prominent tool in your toolkit when building event-driven systems at scale.


Transcript

  1. > whoami — Shahab Ganji, Lead Coding Architect, (proudly) a Code Artisan. Main focus: software architecture, software transformation, .NET and C# enthusiast, embracing change. Trivia: telling dad jokes. @shahab-ganji @shahabganji
  2. What is an Event-Driven Architecture? (INTRODUCTION) — It has three main components; software components execute in response to events; it uses events to communicate; it promotes loose coupling.
  3. Types of Events (WHAT IS AN EVENT-DRIVEN ARCHITECTURE?) — Entity event: an entity is a unique thing and is keyed on the unique id of that thing; the event describes the properties and state of the entity at a given point in time. Keyed event: contains a key, but does not represent an entity; the key is used for partitioning the stream of events to guarantee data locality within a single partition of an event stream. Unkeyed event: describes an event as a singular statement of a fact.
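The three event shapes above can be sketched as a small type hierarchy. This is a hypothetical illustration in plain Python (the class names and fields are mine, not part of any Cosmos DB SDK): an unkeyed event is just a fact, a keyed event adds a partitioning key, and an entity event is keyed on the entity's id and carries its state.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class UnkeyedEvent:
    """A singular statement of a fact, with no key."""
    fact: str

@dataclass(frozen=True)
class KeyedEvent(UnkeyedEvent):
    """Adds a key used only for partitioning / data locality."""
    key: str

@dataclass(frozen=True)
class EntityEvent(KeyedEvent):
    """Keyed on the entity's unique id; describes its state at a point in time."""
    state: dict

e = EntityEvent(fact="customer-registered", key="customer-42",
                state={"email": "jane@example.com"})
print(e.key)  # customer-42
```

The inheritance mirrors the slide's containment: every entity event is keyed, and every keyed event is still a statement of fact.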
  4. Related Patterns (WHAT IS AN EVENT-DRIVEN ARCHITECTURE?) — Event Sourcing: captures every change to the state; provides a full audit trail; easier handling of complex transactions; replay what has happened in the system. CQRS: separate read and write models; enables optimized performance and scalability. https://lostechies.com/jimmybogard/2012/08/22/busting-some-cqrs-myths
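The core mechanic of event sourcing can be shown in a few lines. This is a minimal, illustrative sketch (plain Python, not Cosmos DB specific): current state is never stored directly; it is rebuilt by replaying the append-only event list, which doubles as the audit trail.

```python
events = []  # the append-only event store

def append(event_type: str, amount: int) -> None:
    events.append({"type": event_type, "amount": amount})

def replay(event_stream) -> int:
    """Fold the events into the current state (here: an account balance)."""
    balance = 0
    for e in event_stream:
        if e["type"] == "deposited":
            balance += e["amount"]
        elif e["type"] == "withdrawn":
            balance -= e["amount"]
    return balance

append("deposited", 100)
append("withdrawn", 30)
append("deposited", 5)
print(replay(events))  # 75
```

A CQRS read model would simply be another fold over the same stream, shaped for queries instead of for the write side.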
  5. Event Streams (WHAT IS AN EVENT-DRIVEN ARCHITECTURE?) — [Diagram: an event stream with a snapshot, forwarded to EH and consumed by an Email Service and an Inventory Service]
  6. Azure Cosmos DB — a schema-free, NoSQL cloud solution: globally distributed, horizontally scalable, provisioned throughput, multi-model database.
  7. 3 Dimensions of Scaling (SCALABILITY) — The Scale Cube (https://microservices.io/articles/scalecube.html): 1) Y-axis, functional decomposition: scale by splitting different things (database per application). 2) X-axis, horizontal duplication: scale by cloning (replication). 3) Z-axis, data partitioning: scale by splitting similar things (sharding).
  8. Sharding (SCALABILITY) — [Diagram: applications talk to a single logical database that is backed by a cluster of databases; the nodes hold different data]
  9. Sharding at the Database Level (SCALABILITY) — [Diagram: one logical database spread across multiple physical instances]
  10. Sharding at the Database Level (SCALABILITY) — [Diagram: the application reads and writes against the logical database; each physical instance serves reads and writes for its own shard]
  11. Advantages (SCALABILITY) — Each server deals with a subset of the data; improves transaction scalability; fault isolation; better cache utilization; reduces memory and I/O usage.
  12. Disadvantages (SCALABILITY) — Increased application complexity; you have to design a partitioning scheme; re-partitioning; improper traffic distribution; performance issues with cross-partition queries.
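The cross-partition cost is easy to make concrete. Below is a minimal, hypothetical routing sketch (plain Python, no Cosmos DB SDK; the shard count and helper names are mine): documents are routed to a shard by hashing the partition key, so a single-key lookup touches exactly one shard, while a query without a key must fan out to every shard.

```python
import hashlib

SHARDS = [[] for _ in range(4)]  # four pretend physical shards

def shard_for(partition_key: str) -> int:
    # Hash the partition key to pick a shard deterministically.
    digest = hashlib.sha256(partition_key.encode()).hexdigest()
    return int(digest, 16) % len(SHARDS)

def write(doc: dict) -> None:
    SHARDS[shard_for(doc["pk"])].append(doc)

def query_by_key(pk: str) -> list:
    # Single-partition query: touches exactly one shard.
    return [d for d in SHARDS[shard_for(pk)] if d["pk"] == pk]

def query_all(predicate) -> list:
    # Cross-partition query: must fan out to every shard.
    return [d for shard in SHARDS for d in shard if predicate(d)]

write({"pk": "order-1", "total": 10})
write({"pk": "order-2", "total": 99})
print(len(query_by_key("order-1")))              # 1
print(len(query_all(lambda d: d["total"] > 5)))  # 2
```

If one key receives most of the traffic, its shard becomes hot while the others idle, which is exactly the "improper traffic distribution" problem on the slide.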
  13. Containers, Partitions, Request Units (AZURE COSMOS DB) — An item can be up to 2 MB; a logical partition can grow to 20 GB; a physical partition stores up to 50 GB and serves up to 10K RUs; a container spans multiple physical partitions, each hosting many logical partitions.
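A small arithmetic sketch of those limits (numbers taken from the slide; the helper name is mine): the number of physical partitions a container needs is driven by both the provisioned throughput (10,000 RU/s per physical partition) and the data volume (50 GB per physical partition), whichever binds first.

```python
import math

RU_PER_PHYSICAL = 10_000   # max request units per physical partition
GB_PER_PHYSICAL = 50       # max storage per physical partition

def min_physical_partitions(provisioned_rus: int, data_gb: float) -> int:
    """Lower bound on physical partitions implied by the slide's limits."""
    by_throughput = math.ceil(provisioned_rus / RU_PER_PHYSICAL)
    by_storage = math.ceil(data_gb / GB_PER_PHYSICAL)
    return max(by_throughput, by_storage)

print(min_physical_partitions(30_000, 40))   # 3 — throughput-bound
print(min_physical_partitions(5_000, 120))   # 3 — storage-bound
```

Note that throughput is split evenly across physical partitions, so a hot logical partition can only ever use its own partition's share.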
  14. Transaction Scope (AZURE COSMOS DB) — [Diagram: a transactional batch is scoped to a single logical partition, e.g. /partition-key: 123]
  15. Transaction Scope (AZURE COSMOS DB) — [Diagram, next build: the transactional batch still targets only the logical partition /partition-key: 123; other logical partitions are outside its scope]
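In Cosmos DB, a transactional batch is scoped to one logical partition: every operation must share the batch's partition key, and the batch commits atomically. A toy in-memory sketch of that constraint (plain Python, not the SDK; class and field names are mine):

```python
class TransactionalBatch:
    """Toy model: all operations must share one partition key, and the
    batch applies atomically — nothing is visible until execute()."""

    def __init__(self, partition_key: str):
        self.partition_key = partition_key
        self.ops = []

    def create_item(self, item: dict) -> "TransactionalBatch":
        if item["pk"] != self.partition_key:
            raise ValueError("item is outside the batch's logical partition")
        self.ops.append(item)
        return self

    def execute(self, store: list) -> None:
        store.extend(self.ops)  # applied all at once in this toy

store = []
batch = TransactionalBatch("123")
batch.create_item({"pk": "123", "id": "a"}).create_item({"pk": "123", "id": "b"})
batch.execute(store)
print(len(store))  # 2

try:
    batch.create_item({"pk": "456", "id": "c"})  # different logical partition
except ValueError:
    print("rejected")
```

This is why the partition key design on the earlier slides matters so much: it fixes the largest unit of atomicity you will ever get.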
  16. Dig Deeper! (AZURE COSMOS DB) — [Diagram: within a region (West Europe), data lives in a replica set of one leader and several followers]
  17. CAP Theorem (AZURE COSMOS DB) — Consistency, Availability, Partition Tolerance.
  18. Globally Distributed (AZURE COSMOS DB) — [Diagram: the West Europe replica set (leader, followers, forwarder) replicates to South Central US (leader, forwarder, followers)]
  19. Consistency Levels: Strong (AZURE COSMOS DB)
  20. Consistency Levels: Bounded Staleness (AZURE COSMOS DB)
  21. Consistency Levels: Session (AZURE COSMOS DB)
  22. Consistency Levels: Consistent Prefix (AZURE COSMOS DB)
  23. Consistency Levels: Eventual (AZURE COSMOS DB)
  24. Change Feed (AZURE COSMOS DB) — [Diagram: one logical database with two physical partitions; the change feed surfaces each item's changes in order within a logical partition]
  25. Change Feed (AZURE COSMOS DB) —

     FeedIterator<Customer> iteratorForPartitionKey =
         _container.GetChangeFeedIterator<Customer>(
             ChangeFeedStartFrom.Beginning(
                 FeedRange.FromPartitionKey(new PartitionKey("stream-id"))),
             ChangeFeedMode.LatestVersion);
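The iterator above is typically drained in a pull loop. Since the .NET call cannot run outside a real Cosmos account, here is a toy Python sketch of the same contract (names and page size are mine): read a page of changes, checkpoint a continuation token, and resume from it later without re-reading.

```python
def read_change_feed(feed: list, continuation: int, page_size: int = 2):
    """Toy change-feed pull model: return the next page of changes plus
    a continuation token to checkpoint and resume from."""
    page = feed[continuation:continuation + page_size]
    return page, continuation + len(page)

# Changes for one logical partition, in order of occurrence.
feed = ["created:1", "updated:1", "created:2", "updated:2", "updated:1"]

checkpoint = 0
page, checkpoint = read_change_feed(feed, checkpoint)
print(page)        # ['created:1', 'updated:1']

# New changes keep arriving; resuming from the checkpoint never re-reads.
page, checkpoint = read_change_feed(feed, checkpoint)
print(checkpoint)  # 4
```

This pull-plus-checkpoint shape is what makes the change feed a natural transport for projections, outbox forwarding, and downstream services like the email and inventory services on the earlier slide.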
  26. Summary (BUILDING EVENT-DRIVEN ARCHITECTURES WITH AZURE COSMOS DB) — EDA: high volume of events; real-time processing; scalability is a primary concern; requires immediate reaction. CQRS: separate read and write models. Event Sourcing: audit log; tracking state changes is critical. Azure Cosmos DB: global distribution; guaranteed performance and SLAs; change feed; automatic indexing; multi-model and multi-API support.
  27. Q&A