O’Reilly Software Architecture conference in NYC • It’s a fundamental change in the recommended way to build complex applications, to make them more scalable, resilient & extensible • This talk will cover: ◦ Why is it important ◦ Some event-driven patterns ◦ Links to notable talks from the conference
do things • Procedural code usually implements the full flow of some task, typically involving the invocation of a few other functions & doing something with their results. • Similarly, when writing services, we tend to create "God" services that implement a full flow, usually involving sync calls to other services.
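To make the "God" service shape concrete, here's a minimal sketch (all function names are illustrative, not from the talk): one service that knows about, and synchronously calls, every other service in the flow.

```python
# Hypothetical "God" order service: it implements the full flow,
# calling each downstream service in turn, so every change to the
# flow lands in this one place.
def charge_payment(order):
    # stand-in for a sync call to a payment service
    return {**order, "paid": True}

def reserve_inventory(order):
    # stand-in for a sync call to an inventory service
    return {**order, "reserved": True}

def notify_customer(order):
    # stand-in for a sync call to a notification service
    return {**order, "notified": True}

def place_order(order):
    # the whole flow lives here: call, wait, pass the result along
    order = charge_payment(order)
    order = reserve_inventory(order)
    order = notify_customer(order)
    return order
```

Note how adding or changing any downstream service forces a change to `place_order` itself.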
service isn't responding or times out, we need to retry the calls with exponential backoff, or handle back-pressure Other service changes Every change in the invocation details of the other service requires us to change our service This creates coupling between the services, and increases the complexity & cost of change New service is added When our service needs to call other services, any new service added to the flow requires us to change our service Think a few years from now, when this code is legacy & someone needs to extend it Note that by services we mean Microservices, which are supposed to be very independent & decoupled!
things • Unlike procedural code, functional programming is based on a different paradigm, in which simple functions handle events & generate new events • When applying this paradigm to services, we switch from "God" services to simple independent services interacting with each other using events
need to wait for other services or retry calls Services are decoupled - they don’t need to know about other services & aren’t impacted by them Other service changes Every change in the invocation details of the other service requires no change to our service Services are isolated from implementation changes of other services New service is added You can add more services without any change to your service Think a few years from now, when this code is legacy & someone needs to extend it
something that happened Services producing/sensing an event should publish it, & any logic that needs to be triggered to handle the event runs in services that subscribe/listen to the event asynchronously The idea is to apply this to ANY interaction between services or applications - no direct calls, only events
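The publish/subscribe idea can be sketched with an in-memory event bus (event names and handlers below are illustrative): the producer only announces a fact, and any number of independent subscribers react to it.

```python
from collections import defaultdict

# Minimal in-memory event bus: event type -> list of handlers
subscribers = defaultdict(list)

def subscribe(event_type, handler):
    subscribers[event_type].append(handler)

def publish(event_type, payload):
    # the publisher does not know who (if anyone) is listening
    for handler in subscribers[event_type]:
        handler(payload)

# Two independent services react to the same event; the producer
# never calls them directly.
log = []
subscribe("OrderPlaced", lambda e: log.append(f"bill {e['id']}"))
subscribe("OrderPlaced", lambda e: log.append(f"ship {e['id']}"))
publish("OrderPlaced", {"id": 42})
```

Adding a third subscriber requires no change to the publisher, which is the decoupling the slides describe.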
message bus or stream-processing transport (RabbitMQ, Kafka, AWS Kinesis) Outside the firewall, you can have an event delivery mechanism based on: - HTTP (poll for events) - Atom (AtomPub) - WebSockets (connect to a stream of events) - Webhooks (event listeners register their HTTP endpoint, like GitHub & Twilio do)
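The webhook variant can be sketched as follows; here `deliver` is a stand-in for an HTTP POST to the registered endpoint (the URL and function names are illustrative):

```python
# Webhook sketch: listeners register an HTTP endpoint, and the
# producer notifies every registered endpoint for each event.
webhooks = []

def register_webhook(url):
    webhooks.append(url)

def deliver(url, event):
    # a real system would POST the event JSON to `url`;
    # here we just record what would be sent where
    return (url, event)

def broadcast(event):
    return [deliver(url, event) for url in webhooks]

register_webhook("https://example.com/hooks/orders")
deliveries = broadcast({"type": "OrderPlaced", "id": 7})
```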
data Without events, services query other services for data, & have to deal with outages, latency, cache invalidation, etc. When using events, some services generate events (usually command handlers), & other services listen to these events & maintain a materialized view of the data This pattern also solves the problem of joining data from multiple services (each having its own DB)
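A minimal sketch of the materialized-view idea (event shapes and handler names are illustrative): a read-side service folds events from two upstream services into its own local view, instead of querying and joining across their databases at read time.

```python
# Local materialized view maintained by a read-side service:
# order_id -> combined record built from two event streams
view = {}

def on_order_placed(e):
    # event from the ordering service
    view[e["order_id"]] = {"status": "placed", "customer": e["customer"]}

def on_payment_received(e):
    # event from the payment service updates the same record,
    # effectively "joining" data from two services
    view[e["order_id"]]["status"] = "paid"

on_order_placed({"order_id": 1, "customer": "ada"})
on_payment_received({"order_id": 1})
```

Reads against `view` now need no cross-service calls at all.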
interactions that try to ensure some conditions are met A common example is when you need to implement a “transaction” across services - either the whole flow succeeds or everything rolls back Usually, this means that you’ll manage a state & update it across services until all conditions are met
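This kind of cross-service "transaction" is often called the Saga pattern. A minimal sketch (the step/compensation structure is illustrative, not from the talk): run each step's service call in order, and if one fails, run compensating actions for the steps that already succeeded.

```python
def run_saga(steps, compensations):
    """Run steps in order; each step is a call to some service that
    returns True on success. On failure, undo completed steps in
    reverse order with their compensating actions."""
    done = []
    for step, undo in zip(steps, compensations):
        if step():
            done.append(undo)
        else:
            for undo in reversed(done):
                undo()  # roll back a previously completed step
            return "rolled back"
    return "committed"
```

The flow's state (which steps completed) is exactly the state the slide says you'd manage across services.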
us build simple, small services, each handling a single function/concern This fits the serverless model exactly: simple functions running in the cloud, triggered by events & generating further events Common example: an AWS Lambda function triggered whenever an input file is added to an S3 bucket
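A sketch of what such a function looks like, in the AWS Lambda handler style; the event below is a simplified stand-in for an S3 "object created" notification, and the processing logic is illustrative.

```python
def handler(event, context=None):
    """Lambda-style handler: invoked once per event, with no flow
    orchestration of its own."""
    # pull out the key of each uploaded object from the notification
    keys = [r["s3"]["object"]["key"] for r in event["Records"]]
    # ...a real function would process each uploaded file here,
    # possibly emitting further events...
    return keys

# simplified shape of an S3 event notification (illustrative)
sample_event = {"Records": [{"s3": {"object": {"key": "uploads/in.csv"}}}]}
```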
degree of freedom to change, adjust & adapt Using a message bus for event delivery results in a centralized immutable stream of facts, & decentralized services that process them This fits well with platforms like Apache Kafka & AWS Kinesis, which provide: - Message durability - Scalable, highly-available handling of events - Ability to divert/duplicate events - Schemas for versioning
a stream of changes, so when adopting an event-driven architecture, you can also adapt the business logic & data model to be based on streams of immutable changes instead of mutable data - AKA “event-sourcing”; it’s like the difference between data in a DB & data in a journal/op log This model makes things simpler, more scalable & more powerful, but it is a big & radical change
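The journal/op-log analogy can be sketched directly (event types and amounts are illustrative): state is never stored as a mutable value, only derived by replaying an immutable log of changes.

```python
# Event-sourcing sketch: the log of facts is the source of truth
events = [
    {"type": "Deposited", "amount": 100},
    {"type": "Withdrawn", "amount": 30},
    {"type": "Deposited", "amount": 5},
]

def balance(log):
    """Derive current state by replaying the whole event log."""
    total = 0
    for e in log:
        if e["type"] == "Deposited":
            total += e["amount"]
        elif e["type"] == "Withdrawn":
            total -= e["amount"]
    return total
```

Because the log is append-only, you can always recompute, audit, or re-derive a different view of the same history.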
pose a problem when you have a flow that requires stateful orchestration To overcome this, you can use Workflow Engines, which model flows as state machines & manage the state of each flow Every service has access to the current flow state, & can make decisions based on it
the state-machine could define retry logic, or fallback actions to deal with the outage Workflow engines allow you to define flows in a high-level manner, so you can evolve & manage complex flows without changing multiple services They also allow you to monitor flows in production
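A tiny sketch of a workflow state machine with a retry rule (the state shape and transition function are illustrative): the engine holds the flow state, services report outcomes, and the machine decides whether to advance, retry, or fall back.

```python
def advance(state, outcome, max_retries=3):
    """One transition of a workflow state machine.
    state = {"step": int, "retries": int}; outcome is what the
    service handling the current step reported back."""
    if outcome == "ok":
        # step succeeded: move on and reset the retry counter
        return {"step": state["step"] + 1, "retries": 0}
    if state["retries"] < max_retries:
        # transient failure: retry the same step
        return {"step": state["step"], "retries": state["retries"] + 1}
    # retries exhausted: take the fallback path
    return {"step": "failed", "retries": state["retries"]}
```

The retry/fallback policy lives in one place (the machine), not scattered across the participating services.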
on its head (Cornelia Davis, Pivotal)
Events on the outside, inside & core (Chris Richardson, Eventuate)
Complex event flows in distributed systems (Bernd Ruecker, Camunda)