
Don't Block on It - Migrate your codebase before it's too late!

Talk given at Reactive NY on April 27, 2016.

Let’s face it - many of us regularly encounter (or may still be producing) code that performs blocking operations and doesn’t play nicely with new “reactive” libraries or fit well in the world of microservices. Most shops are either starting with existing code, integrating with legacy components, or even migrating Java code to Scala, for example. In this talk, we’ll go over various “mundane” techniques to introduce incremental changes that can make your software “non-blocking” and “event driven”. While Scala Futures and Akka actors will take center stage, we’ll also cover topics such as functional error handling, monad transformers, thread pools, the circuit breaker pattern, non-blocking HTTP request handling, and briefly touch upon Akka Streams. These building blocks eventually lead to more advanced topics such as location transparency, microservices, and fully reactive APIs, but in my experience it’s necessary for teams to master the basics first.

Sujan Kapadia

April 27, 2016


Transcript

  1. Motivation: ‣ Scala provides amazing constructs for building non-blocking

    software ‣ Composability, immutability, and laziness make complex operations easier to express (usually)
  2. Motivation: BUT ‣ Still deal with legacy code, need to

    migrate Java to Scala, or work with blocking libraries ‣ Still building and exposing blocking services ‣ Feel warm and fuzzy dealing with Threads
  3. Motivation: BUT ‣ Still deal with legacy code, need to

    migrate Java to Scala, or work with blocking libraries ‣ Still building and exposing blocking services ‣ Feel warm and fuzzy dealing with Threads Most of us are not this guy!
  4. We’ll cover: ‣Futures ‣Functional error handling ‣Monad transformers ‣Non-blocking:

    Play, Spray, Slick ‣Caching, Circuit Breakers ‣Thread Pools ‣Isolation & Location Transparency ‣Message Passing Patterns & Akka ‣Akka Streams
  5. We’ll cover: ‣Futures ‣Functional error handling ‣Monad transformers ‣Non-blocking:

    Play, Spray, Slick ‣Caching, Circuit Breakers ‣Thread Pools ‣Isolation & Location Transparency ‣Message Passing Patterns & Akka ‣Akka Streams Fundamental concepts you should understand before delving into Reactive
  6. The painfully obvious: Blocking Operations (File I/O, Database persistence, HTTP request/response, Long running computation). By invoking a blocking operation, you: ‣ Explicitly encode a stall ‣ Hold onto resources that could be used for other work
  7. The painfully obvious: Blocking Operations (File I/O, Database persistence, HTTP request/response, Long running computation). By invoking a blocking operation, you: ‣ Explicitly encode a stall ‣ Hold onto resources that could be used for other work YOU’RE JUST BEING GREEDY
  8. James Roper, Tech Lead on Play and Lagom, in his

    Philly ETE talk: With all blocking operations, the entire system needs to be up, which essentially leads to a monolith.
  9. Future: The Power of Monads in an Asynchronous Context Why

    are monads powerful? ‣ Enrich / wrap computations in a context and then combine them ‣ Different operations, when threaded together, still exhibit consistent semantics / behavior ‣ A Future allows you to lift a computation into an asynchronous context (examples: State, Future, List, Free interpreters)
  10. Future: The Power of Monads in an Asynchronous Context def

    expensiveComputation(x: Int): String = ??? val ec = scala.concurrent.ExecutionContext.Implicits.global Future { expensiveComputation(100) } (ec)
  11. Future: The Power of Monads in an Asynchronous Context def

    expensiveComputation(x: Int): String = ??? val ec = scala.concurrent.ExecutionContext.Implicits.global Future { expensiveComputation(100) } (ec)
  12. Future: The Power of Monads in an Asynchronous Context def

    expensiveComputation(x: Int): String = ??? val ec = scala.concurrent.ExecutionContext.Implicits.global Future { expensiveComputation(100) } (ec) By Name Argument, BUT Future may execute immediately if thread available to do work
  13. Future: The Power of Monads in an Asynchronous Context def

    expensiveComputation(x: Int): String = ??? val ec = scala.concurrent.ExecutionContext.Implicits.global Future { expensiveComputation(100) } (ec) Future { expensiveComputation(100) } (ec) Future { expensiveComputation(100) } (ec) Future { expensiveComputation(100) } (ec)
  14. Future: The Power of Monads in an Asynchronous Context def

    expensiveComputation(x: Int): String = ??? val ec = scala.concurrent.ExecutionContext.Implicits.global Future { expensiveComputation(100) } (ec) Future { expensiveComputation(100) } (ec) Future { expensiveComputation(100) } (ec) Future { expensiveComputation(100) } (ec) STARVATION!!!
  15. Future: The Power of Monads in an Asynchronous Context: blocking

    def expensiveComputation(x: Int): String = ??? val ec = scala.concurrent.ExecutionContext.Implicits.global Future(blocking(expensiveComputation(100)))(ec) Indicates the operation to be performed will block, allowing a new Thread to be created if needed. Thread pool expanded
  16. Future: The Power of Monads in an Asynchronous Context: blocking

    def expensiveComputation(x: Int): String = ??? val ec = scala.concurrent.ExecutionContext.Implicits.global Future(blocking(expensiveComputation(100)))(ec) If you’re interacting with legacy APIs or libraries you cannot change, use Future and blocking, and consider a separate ExecutionContext
  17. Future: Don’t use the global ExecutionContext
 ExecutionContext.fromExecutorService(Executors.newFixedThreadPool(8))
 ExecutionContext.fromExecutorService(Executors.newCachedThreadPool())
 ExecutionContext.fromExecutorService(Executors.newWorkStealingPool())
    Build a custom ExecutionContext and pass this directly (or via implicit scope) where it’s needed. You can also provide external configuration.
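    A minimal sketch of the idea, assuming the Typesafe Config library for the external configuration (the config key, pool size, and legacyLookup are illustrative, not from the deck):

    import java.util.concurrent.Executors
    import scala.concurrent.{ExecutionContext, Future, blocking}
    import com.typesafe.config.ConfigFactory

    // Size a dedicated pool for blocking work from external configuration
    val poolSize = ConfigFactory.load().getInt("blocking-io.pool-size")
    implicit val blockingEc: ExecutionContext =
      ExecutionContext.fromExecutorService(Executors.newFixedThreadPool(poolSize))

    // Keep a legacy blocking call off the default dispatcher
    def legacyLookup(id: String): String = ??? // e.g. a blocking JDBC or HTTP client call
    def lookupAsync(id: String): Future[String] = Future(blocking(legacyLookup(id)))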
  18. Response Conversion: map def retrieveCustomerData(id: String): Future[CustomerData] = ???
 retrieveCustomerData("someId")

    map { data =>
 Customer(data.id, data.firstName, data.lastName, !data.accounts.exists(_.balance < 0))
 }
  19. Response Serialization: map retrieveCustomerData("someId") map { data =>
 Customer(data.id, data.firstName,

    data.lastName, !data.accounts.exists(_.balance < 0))
 } map { Json.toJson(_) }
  20. Response Extraction: map def retrieveCustomerAccount(id: String): Future[CustomerAccountLookupResponse] = ???
 retrieveCustomerAccount("someId")

    map { 
 msg => LocationInfo(msg.billingSystemId, msg.servingLocation) 
 } Dealing with bloated SOAP APIs for example…
  21. Response Manipulation: map def retrieveCustomerData(id: String): Future[CustomerData] = ???
 retrieveCustomerData("someId")

    map { data =>
 Customer(data.id, data.firstName, data.lastName, !data.accounts.exists(_.balance < 0))
 } retrieveCustomerData("someId") map { data =>
 Customer(data.id, data.firstName, data.lastName, !data.accounts.exists(_.balance < 0))
 } map { Json.toJson(_) } def retrieveCustomerAccount(id: String): Future[CustomerAccountLookupResponse] = ???
 retrieveCustomerAccount("someId") map { 
 msg => LocationInfo(msg.billingSystemId, msg.servingLocation) 
 } Conversion Serialization Extraction
  22. Memory Hierarchy / Transparent Caching: fallbackTo def fromCache(id: String): Future[Customer]

    = ???
 def fromLocalDisk(id: String): Future[Customer] = ???
 def fromServer(id: String): Future[Customer] = ???
 
 def retrieveCustomer(id: String): Future[Customer] = {
 fromCache(id) fallbackTo fromLocalDisk(id) fallbackTo fromServer(id)
 } If this fails, try the next one…
  23. Memory Hierarchy / Transparent Caching: fallbackTo We could take this

    even further… def fromLocalCache(id: String): Future[Customer] = ???
 def fromLocalDisk(id: String): Future[Customer] = ???
 def fromHazelcast(id: String): Future[Customer] = ???
 def fromServer(id: String): Future[Customer] = ???
 
 def retrieveCustomer(id: String): Future[Customer] = {
 fromLocalCache(id) fallbackTo 
 fromLocalDisk(id) fallbackTo 
 fromHazelcast(id) fallbackTo 
 fromServer(id)
 } Asynchrony encapsulates the fact that multiple operations are being attempted
  24. Memory Hierarchy / Transparent Caching: fallbackTo We could take this

    even further… def retrieveCustomer(id: String): Future[Customer] = {
 fromLocalCache(id) fallbackTo
 fromLocalDisk(id) fallbackTo
 fromHazelcast(id) fallbackTo
 fromServer(id) andThen {
 case Success(c) => // load various caches
 }
 } Asynchrony encapsulates the fact that multiple operations are being attempted
  25. Do work in parallel: flatMap def getSatelliteLinkStatus: Future[CommStatus]
 def getRadarControlStatus:

    Future[RadarControlStatus]
 def discoverAlertService: Future[AlertServiceURL]
 
 val satelliteLinkStatusF = getSatelliteLinkStatus
 val radarControlInterfaceStatusF = getRadarControlStatus
 val alertServiceF = discoverAlertService
 
 val linkSummaryF: Future[LinkSummary] = for {
 commStatus <- satelliteLinkStatusF
 radarControlStatus <- radarControlInterfaceStatusF
 alertServiceURL <- alertServiceF
 } yield { LinkSummary(commStatus, radarControlStatus, alertServiceURL) } Building up status dependent on multiple disparate components
  26. Do work in parallel: flatMap def getSatelliteLinkStatus: Future[CommStatus]
 def getRadarControlStatus:

    Future[RadarControlStatus]
 def discoverAlertService: Future[AlertServiceURL]
 
 val satelliteLinkStatusF = getSatelliteLinkStatus
 val radarControlInterfaceStatusF = getRadarControlStatus
 val alertServiceF = discoverAlertService
 
 val linkSummaryF: Future[LinkSummary] = for {
 commStatus <- satelliteLinkStatusF
 radarControlStatus <- radarControlInterfaceStatusF
 alertServiceURL <- alertServiceF
 } yield { LinkSummary(commStatus, radarControlStatus, alertServiceURL) } Futures invoked before for comprehension!
  27. Do work in parallel: flatMap Splitting up work / divide

    & conquer def searchQuadrant(i: Int, query: Query): Future[SearchResult]
 
 val searchQuadrant1F = searchQuadrant(1, query)
 val searchQuadrant2F = searchQuadrant(2, query)
 val searchQuadrant3F = searchQuadrant(3, query)
 val searchQuadrant4F = searchQuadrant(4, query)
 
 val results: Future[Seq[SearchResult]] = for {
 resultQ1 <- searchQuadrant1F
 resultQ2 <- searchQuadrant2F
 resultQ3 <- searchQuadrant3F
 resultQ4 <- searchQuadrant4F
 } yield {
 Seq(resultQ1, resultQ2, resultQ3, resultQ4)
 }
  28. Do work in sequence: flatMap for {
 locInfo <- retrieveCustomerAccountDescription(id)

    map { 
 msg => LocationInfo(msg.billingSystemId, msg.servingLocation) 
 }
 billingSystemInfo <- retrieveBillingSystemInfo(locInfo)
 updatedAccounts <- settleAccounts(id, billingSystemInfo)
 updatedCustomer <- saveCustomerData(customer.copy(accounts = updatedAccounts))
 _ <- indexCustomer(updatedCustomer)
 } yield updatedCustomer Each Future returning method is invoked inside the for comprehension! The next future is not evaluated until the previous Future completes successfully.
  29. Collections: traverse, sequence Rule Branch Step def executeStep(step: Step): Future[StepResult]

    = ???
 def executeSteps(steps: Seq[Step]): Seq[Future[StepResult]] = {
 steps map { step => executeStep(step) }
 }
 
 val executedBranches: Seq[Seq[Future[StepResult]]] = rule.branches map { branch =>
 executeSteps(branch.steps)
 }
  30. Collections: traverse, sequence Can help you deal with complex, nested

    structures Rule Branch Step def executeStep(step: Step): Future[StepResult] = ???
 def executeSteps(steps: Seq[Step]): Future[Seq[StepResult]] = {
 Future.traverse(steps) { step => executeStep(step) }
 }
 
 val executedBranches: Future[Seq[Seq[StepResult]]] = Future.traverse(rule.branches) { branch =>
 executeSteps(branch.steps)
 }
  31. Collections: traverse, sequence Can help you deal with complex, nested

    structures Rule Branch Step def executeStep(step: Step): Future[StepResult] = ???
 def executeSteps(steps: Seq[Step]): Future[Seq[StepResult]] = {
 Future.traverse(steps) { step => executeStep(step) }
 }
 
 val executedBranches: Future[Seq[Seq[StepResult]]] = Future.traverse(rule.branches) { branch =>
 executeSteps(branch.steps)
 } Seq[Future[T]] -> Future[Seq[T]]
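    A companion sketch for sequence, reusing the Step, StepResult, and executeStep names from the slide (any ExecutionContext in scope will do):

    import scala.concurrent.{ExecutionContext, Future}

    // sequence flips an already-started Seq[Future[T]] into a single Future[Seq[T]]
    def executeAll(steps: Seq[Step])(implicit ec: ExecutionContext): Future[Seq[StepResult]] = {
      val inFlight: Seq[Future[StepResult]] = steps.map(executeStep) // futures already running
      Future.sequence(inFlight)
    }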
  32. Motivation ‣ We want to retain the power of monadic

    composition AND handle errors “within context”. ‣ We want to use the type system to express that an error can occur. ‣ Throwing exceptions leads to an imperative API that is difficult to compose
  33. Exceptions case class EntityNotFoundException(message: String, cause: Option[Throwable] = None) extends

    Exception(message, cause.orNull) @throws(classOf[EntityNotFoundException])
 def retrieveCustomerData(id: String): Future[CustomerData]
  34. Exceptions ‣ As we all know, Scala has no checked

    exceptions, thus the annotation. ‣ This exception never has to be handled, the compiler won’t complain ‣ To handle at each layer, you must catch and throw. ‣ We have to deal with an exception OR a failed future A caller never has to explicitly deal with the error case, which leads to fragile code
  35. Disjunctions: Cats Xor def retrieveCustomerData(id: String): Future[Err Xor CustomerData]
 def

    retrieveCustomer(id: String): Future[Err Xor Customer] = {
 retrieveCustomerData(id) map { dataOrErr =>
 dataOrErr map { data => Customer(data.id, …)
 }
 }
 } Right-biased: dataOrErr map { data => Customer(…) } Left: Error Right: Success value Xor.right[Err, Customer](Customer(…)) Xor.left[Err, Customer](EntityNotFoundErr(custId)) Left conversion: dataOrErr leftMap { err => … }
  36. Disjunctions: Cats Xor Error is explicitly part of the contract

    - it can be transformed, enriched, etc. def retrieveCustomerData(id: String): Future[Err Xor CustomerData]
 def retrieveCustomer(id: String): Future[Err Xor Customer] = {
 retrieveCustomerData(id) map { dataOrErr =>
 dataOrErr map { data => Customer(data.id, …)
 }
 }
 }
  37. Disjunctions: Converting failed Futures Don’t fret! You can wrap 3rd

    party APIs too! def retrieveCustomer(id: String): Future[Err Xor Customer] = {
 val promise = Promise[Err Xor Customer]()
 val custF = Future.failed[Customer](NotFoundException)
 custF onComplete {
 case Success(c) => promise.success(Xor.right(c))
 case Failure(t) => promise.success(Xor.left(Err("WTF", t)))
 } promise.future
 }
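    The same conversion can also be sketched without an explicit Promise, using map and recover (thirdPartyLookup stands in for the legacy Future-returning call; Err is assumed to take a message and a cause, as on the slide above):

    import scala.concurrent.{ExecutionContext, Future}
    import cats.data.Xor

    def thirdPartyLookup(id: String): Future[Customer] = ??? // legacy API that fails the Future

    def retrieveCustomer(id: String)(implicit ec: ExecutionContext): Future[Err Xor Customer] =
      thirdPartyLookup(id)
        .map(c => Xor.right[Err, Customer](c))                                   // success becomes Right
        .recover { case t => Xor.left[Err, Customer](Err("lookup failed", t)) }  // failure becomes Left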
  38. Monad Transformers for {
 bsi <- retrieveBillingSystemInfo(locationInfo)
 updatedAccounts <- settleAccounts(id,

    bsi)
 updatedCustomer <- saveCustomerData(customer.copy(accounts = updatedAccounts))
 _ <- indexCustomer(updatedCustomer)
 } yield updatedCustomer ‣ Remember the example below?
  39. Monad Transformers for {
 bsi <- retrieveBillingSystemInfo(locationInfo)
 updatedAccounts <- settleAccounts(id,

    bsi)
 updatedCustomer <- saveCustomerData(customer.copy(accounts = updatedAccounts))
 _ <- indexCustomer(updatedCustomer)
 } yield updatedCustomer ‣ Remember the example below? ‣ What if all the methods returned Future[Err Xor T] instead?
  40. Monad Transformers for {
 bsi <- retrieveBillingSystemInfo(locationInfo)
 updatedAccounts <- settleAccounts(id,

    bsi)
 updatedCustomer <- saveCustomerData(customer.copy(accounts = updatedAccounts))
 _ <- indexCustomer(updatedCustomer)
 } yield updatedCustomer ‣ Remember the example below? ‣ What if all the methods returned Future[Err Xor T] instead? ‣ The left hand side would be Err Xor T, a pain to deal with
  41. Monad Transformers: Cats XorT val result = (for {
 bsi

    <- XorT(retrieveBillingSystemInfo(locationInfo))
 updatedAccounts <- XorT(settleAccounts(id, bsi))
 updatedCustomer <- XorT(saveCustomerData(customer.copy(accounts = updatedAccounts)))
 _ <- XorT(indexCustomer(updatedCustomer))
 } yield updatedCustomer).value ‣ Using XorT, you can thread through the wrapped monad, in this case Xor
  42. ‣ Using XorT, you can thread through the wrapped monad,

    in this case Xor ‣ Therefore you can deal with the right value of Xor, as if Future wasn’t there val result = (for {
 bsi <- XorT(retrieveBillingSystemInfo(locationInfo))
 updatedAccounts <- XorT(settleAccounts(id, bsi))
 updatedCustomer <- XorT(saveCustomerData(customer.copy(accounts = updatedAccounts)))
 _ <- XorT(indexCustomer(updatedCustomer))
 } yield updatedCustomer).value
  43. ‣ Using XorT, you can thread through the wrapped monad,

    in this case Xor ‣ Therefore you can deal with the right value of Xor, as if Future wasn’t there ‣ And you retain both the semantics / constraints of Future and Xor val result = (for {
 bsi <- XorT(retrieveBillingSystemInfo(locationInfo))
 updatedAccounts <- XorT(settleAccounts(id, bsi))
 updatedCustomer <- XorT(saveCustomerData(customer.copy(accounts = updatedAccounts)))
 _ <- XorT(indexCustomer(updatedCustomer))
 } yield updatedCustomer).value
  44. val wrapped: XorT[Future, Err, Int] = XorT(Future(Xor.right[Err, Int](1)))
 val unwrapped:

    Future[Err Xor Int] = wrapped.value Monad Transformers: Cats XorT
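    A caller can then fold over the unwrapped value once the Future completes, e.g. (a sketch, given an ExecutionContext in scope):

    unwrapped.map {
      case Xor.Right(n)  => s"got $n"
      case Xor.Left(err) => s"failed: $err"
    }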
  45. What does all this buy us? ‣ Composable, non-blocking APIs

    ‣ Ability to wrap or interact with legacy APIs and integrate with other services ‣ Possibly more efficient utilization of resources ‣ Ability to adopt other parts of the Scala ecosystem / tech stack….
  46. What does all this buy us? ‣ Composable, non-blocking APIs

    ‣ Ability to wrap or interact with legacy APIs and integrate with other services ‣ Possibly more efficient utilization of resources ‣ Ability to adopt other parts of the Scala ecosystem / tech stack….
  47. What does all this buy us? ‣ Composable, non-blocking APIs

    ‣ Ability to wrap or interact with legacy APIs and integrate with other services ‣ Possibly more efficient utilization of resources ‣ Ability to adopt other parts of the Scala ecosystem / tech stack….
  48. Play: Non-blocking HTTP (Netty) final def async(block: => Future[Result]): Action[AnyContent]

    ActionBuilder def view(name: String, version: Option[Int]) = Authenticated.async { implicit req =>
 ruleService.load(name, version).map {
 case Xor.Right(rule) => Ok(views.html.rule.view(ruleForm.fill(rule)))
 case Xor.Left(error) => error.toStatus
 }
 }
  49. Play: Non-blocking HTTP (Netty) final def async(block: => Future[Result]): Action[AnyContent]

    ActionBuilder def view(name: String, version: Option[Int]) = Authenticated.async { implicit req =>
 ruleService.load(name, version).map {
 case Xor.Right(rule) => Ok(views.html.rule.view(ruleForm.fill(rule)))
 case Xor.Left(error) => error.toStatus
 }
 } Play Actions expect an asynchronous operation! Play controller method
  50. Spray: Non-blocking HTTP (Akka) type Route = RequestContext ⇒ Unit

    Spray Routes are essentially composable functions that can be manipulated by “Directives” object OnCompleteFutureMagnet {
 implicit def apply[T](future: ⇒ Future[T])(implicit ec: ExecutionContext) =
 new OnCompleteFutureMagnet[T] {
 def happly(f: (Try[T] :: HNil) ⇒ Route): Route = ctx ⇒
 future.onComplete { t ⇒
 try f(t :: HNil)(ctx)
 catch { case NonFatal(error) ⇒ ctx.failWith(error) } }
 }
 }
  51. Spray: Non-blocking HTTP (Akka) type Route = RequestContext ⇒ Unit

    Spray Routes are essentially composable functions that can be manipulated by “Directives” get {
 authenticate(isEmployee) {
 member =>
 complete(
 for {
 response <- memberApiServices.getMember(ApiGetMemberRequest(memberId))
 } yield response.memberDetails
 )
 }
 } Spray Route composition
  52. Spray: Non-blocking HTTP (Akka) If using a single Spray Actor,

    requests are handled one at a time! It is imperative that blocking operations do NOT occur here. You can create a router (a pool of actors) for internal load balancing and bulkheading
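    A rough sketch of that internal load balancing with an Akka router pool (the ServiceActor and the pool size are illustrative):

    import akka.actor.{Actor, ActorSystem, Props}
    import akka.routing.RoundRobinPool

    class ServiceActor extends Actor {
      def receive = { case req => sender() ! s"handled: $req" } // stand-in request handling
    }

    val system = ActorSystem("spray-service")
    // Several identical service actors behind a round-robin router: a slow handler no longer
    // stalls every request, and the pool acts as a bulkhead
    val serviceRouter = system.actorOf(RoundRobinPool(5).props(Props[ServiceActor]), "service-router")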
  53. Slick 3.1: DBIOAction ‣ Wraps a database operation (select, insert,

    update, etc.) ‣ Does not execute immediately (is deferred).
  54. Slick 3.1: DBIOAction ‣ Wraps a database operation (select, insert,

    update, etc.) ‣ Does not execute immediately (is deferred). ‣ Can be composed together with other DBIOAction instances, and then run within a single transaction to return a Future
  55. Slick 3.1: DBIOAction ‣ Wraps a database operation (select, insert,

    update, etc.) ‣ Does not execute immediately (is deferred). ‣ Can be composed together with other DBIOAction instances, and then run within a single transaction to return a Future ‣ You can lift immediate values, exceptions, or even a Future
  56. Slick 3.1: DBIOAction ‣ Wraps a database operation (select, insert,

    update, etc.) ‣ Does not execute immediately (is deferred). ‣ Can be composed together with other DBIOAction instances, and then run within a single transaction to return a Future ‣ You can lift immediate values, exceptions, or even a Future Sounds like a Future, doesn’t it? It’s a monad!
  57. Slick 3.1: DBIOAction def insertAction(parameter: Parameter): DBIOAction[Parameter, NoStream, Effect.Write] =

    {
 for {
 id <- parameters.insert(ParameterBase.fromEntity(parameter))
 _ <- parameterTags ++= parameter.tags.map(id -> _)
 } yield parameter.copy(id = id)
 }
  58. Slick 3.1: DBIOAction def updateAction(parameter: Parameter): DBIOAction[Int, NoStream, Effect.Write] =

    { 
 parameters.filter(_.id === parameter.id) .update(ParameterBase.fromEntity(parameter))
 }
  59. Slick 3.1: DBIOAction def listAction(query: DBIOAction[Seq[(ParameterBase, String, String)], NoStream, Effect.Read])

    = {
 for {
 params <- query
 ids = params.map(_._1.id)
 tags <- parameterTags.filter(_.id inSet ids).result
 } yield
 params.map(toEntity(_, tags))
 }
  60. Slick 3.1: DBIOAction def insertAction(parameter: Parameter): DBIOAction[Parameter, NoStream, Effect.Write] =

    {
 for {
 id <- parameters.insert(ParameterBase.fromEntity(parameter))
 _ <- parameterTags ++= parameter.tags.map(id -> _)
 } yield parameter.copy(id = id)
 }
 
 def updateAction(parameter: Parameter): DBIOAction[Int, NoStream, Effect.Write] = {
 parameters.filter(_.id === parameter.id).update(ParameterBase.fromEntity(parameter))
 }
 
 def listAction(query: DBIOAction[Seq[(ParameterBase, String, String)], NoStream, Effect.Read]) = {
 for {
 params <- query
 ids = params.map(_._1.id)
 tags <- parameterTags.filter(_.id inSet ids).result
 } yield
 params.map(toEntity(_, tags))
 } Lift methods into DBIOAction for composability and control of transaction boundaries
  61. Slick 3.1: DBIOAction def insert(newEntity: U, revision: Option[Long] = None):

    DBIOAction[U, NoStream, Effect.All] = {
 for {
 _ <- validateEntity(newEntity).validate 
 _ <- postValidateEntity(newEntity).validate
 newEntityWithId <- entityRepository.insertAction(newEntity)
 loadedEntity <- loadAction(newEntityWithId.id)
 _ <- indexEntity(loadedEntity, revision).toDBIOAction
 } yield newEntityWithId
 } implicit class FutureHelpers[A](t: Future[A]){
 def toDBIOAction = DBIOAction.from(t)
 } implicit class DBIOActionHelpers[R](q: DBIOAction[R, NoStream, _]){
 def execute: Future[R] = db.run(q.transactionally)
 }
  62. Slick 3.1: DBIOAction def insert(newEntity: U, revision: Option[Long] = None):

    DBIOAction[U, NoStream, Effect.All] = {
 for {
 _ <- validateEntity(newEntity).validate 
 _ <- postValidateEntity(newEntity).validate
 newEntityWithId <- entityRepository.insertAction(newEntity)
 loadedEntity <- loadAction(newEntityWithId.id)
 _ <- indexEntity(loadedEntity, revision).toDBIOAction
 } yield newEntityWithId
 } implicit class FutureHelpers[A](t: Future[A]){
 def toDBIOAction = DBIOAction.from(t)
 } implicit class DBIOActionHelpers[R](q: DBIOAction[R, NoStream, _]){
 def execute: Future[R] = db.run(q.transactionally)
 } Build non-blocking services all the way, from HTTP down to the repository layer. Turtles all the way!
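    Usage then stays tiny at the service edge, e.g. (a sketch, assuming a Parameter value named param and the helpers above in scope):

    val saved: scala.concurrent.Future[Parameter] = insertAction(param).execute // runs in one transaction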
  63. Circuit Breakers ‣ Even in a non-blocking world, there are

    ways an operation can effectively become blocking! ‣ If a service that is not responding or failing is repeatedly invoked, it can cause the system to back up. ‣ This can essentially block the system and use up more resources
  64. Circuit Breakers ‣ Even in a non-blocking world, there are

    ways an operation can effectively become blocking! ‣ If a service that is not responding or failing is repeatedly invoked, it can cause the system to back up. ‣ This can essentially block the system and use up more resources Blocking is not just about time or threads. You must consider memory, network, and the overall system. Anything the system needs to progress can cause it to block.
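    Akka ships a CircuitBreaker that wraps any Future-returning call; a minimal sketch (the thresholds and callDownstream are illustrative):

    import akka.actor.ActorSystem
    import akka.pattern.CircuitBreaker
    import scala.concurrent.Future
    import scala.concurrent.duration._

    val system = ActorSystem("guarded")
    import system.dispatcher

    val breaker = new CircuitBreaker(
      system.scheduler,
      maxFailures  = 5,           // open the circuit after 5 consecutive failures
      callTimeout  = 2.seconds,   // a slow call counts as a failure
      resetTimeout = 30.seconds)  // then allow a single test call through (half-open)

    def callDownstream(id: String): Future[String] = ??? // flaky remote service
    def guardedCall(id: String): Future[String] =
      breaker.withCircuitBreaker(callDownstream(id))      // fails fast while the circuit is open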
  65. Location Transparency ‣ Asynchrony cleanly encapsulates non-blocking and blocking operations.

    ‣ With proper safeguards in place such as circuit breakers, isolation via separate thread pools, resource management, etc., it shouldn’t matter to the caller how the underlying operation is executed (the system still cares).
  66. Location Transparency ‣ Asynchrony cleanly encapsulates non-blocking and blocking operations.

    ‣ With proper safeguards in place such as circuit breakers, isolation via separate thread pools, resource management, etc., it shouldn’t matter to the caller how the underlying operation is executed (the system still cares). “Function as a server” pattern! Microservices!!
  67. This is all nice, but… ‣ It still feels pretty

    low level. ‣ What if we wanted to decouple operations even more? ‣ What if we weren’t as concerned with how something was executed?
  68. Serialized and Immutable Serial processing and immutable messages = design

    at a higher level. Immutable messages One at a time (serial) Actor Mailbox / Dispatcher Thread pool
  69. Serialized and Immutable Serial processing and immutable messages = design

    at a higher level. Immutable messages One at a time (serial) Actor Mailbox / Dispatcher Thread pool Shouldn’t normally need to deal with threads, wait, notify, synchronization, locks, conditions, queues :)
  70. Fire and Forget A can produce, filter, enrich, log, store,

    etc. messages before sending to B Tell a message Actor B Actor A
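    A minimal fire-and-forget sketch (the actors and message types are illustrative):

    import akka.actor.{Actor, ActorRef, ActorSystem, Props}

    final case class RawEvent(payload: String)
    final case class EnrichedEvent(payload: String, source: String)

    class ActorB extends Actor {
      def receive = { case e: EnrichedEvent => println(s"stored: $e") }
    }
    class ActorA(b: ActorRef) extends Actor {
      // enrich and tell onward; no reply is expected
      def receive = { case RawEvent(p) => b ! EnrichedEvent(p, source = "actorA") }
    }

    val system = ActorSystem("fire-and-forget")
    val b = system.actorOf(Props[ActorB], "b")
    val a = system.actorOf(Props(new ActorA(b)), "a")
    a ! RawEvent("hello")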
  71. Scatter Gather Coordinator can spawn children actors and split up

    work, route work specifically to children based on business logic, etc. Coordinator
  72. Scatter Gather Cont’d Source can split up work and children

    can send results to an aggregator (update a state machine different from source logic, etc.) Source Aggregator
  73. Scatter Gather Cont’d Aggregator can coordinate Source (change data being

    produced or where it’s coming from) Source Aggregator
  74. Load Balance / Divide & Conquer Transaction Processors process transactions

    emitted by Transaction Source and then store results in database. Transaction Source Async DB Service Async DB Service Async DB Service Transaction Processors
  75. Finite State Machines Via become / unbecome or for more

    expressivity and control, Akka FSM Init Heartbeat Command Disconnected Connected Wait for command
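    A minimal become / unbecome sketch of the connection state machine above (the message types are illustrative):

    import akka.actor.Actor

    case object Connect
    case object Disconnect
    final case class Command(name: String)

    class LinkActor extends Actor {
      def receive = disconnected

      def disconnected: Receive = {
        case Connect => context.become(connected, discardOld = false) // push the new behaviour
      }
      def connected: Receive = {
        case Command(name) => println(s"executing $name")
        case Disconnect    => context.unbecome()                      // pop back to disconnected
      }
    }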
  76. Akka Actors ‣ With threads, not only do we have

    to think about how to solve the problem, but also how it’s going to be executed ‣ Think in terms of the domain! We may be modeling external devices, agents, users ‣ Express workflows, protocols ‣ Take advantage of supervision for recovering from failures ‣ Connect to legacy services or components via Promises, non-blocking APIs, etc. Focus on the actor hierarchy and topology!
  77. Akka Streams: Basic Structure Source Outlet Inlet Outlet Flow Inlet

    Sink SourceShape[+T] FlowShape[-I, +O] SinkShape[-T]
  78. Akka Streams: Basic Structure Source Publisher Subscriber Flow Sink SourceShape[+T]

    FlowShape[-I, +O] SinkShape[-T] demand next This is just a reusable “blueprint”! It does not execute until you want it to.
  79. Akka Streams: Basic Structure Source Publisher Subscriber Flow Sink SourceShape[+T]

    FlowShape[-I, +O] SinkShape[-T] demand next ActorMaterializer Actor
  80. Akka Streams: Basic Structure Source Publisher Subscriber Flow Sink SourceShape[+T]

    FlowShape[-I, +O] SinkShape[-T] demand next ActorMaterializer Actor Fused
  81. Akka Streams: Basic Structure Source Publisher Subscriber Flow Sink SourceShape[+T]

    FlowShape[-I, +O] SinkShape[-T] demand next Sink SinkShape[-T] Broadcast Junction
  82. Akka Streams: Simple Backpressure Example implicit val system = ActorSystem("streamy")


    implicit val materializer = ActorMaterializer()
 
 val source: Source[Int, NotUsed] = Source(1 to 100)
 val factorials: Source[BigInt, NotUsed] = source.scan(BigInt(1)) ((acc, next) => acc * next) def lineSink(filename: String): Sink[String, Future[IOResult]] = {
 Flow[String]
 .alsoTo(Sink.foreach(s => println(s"$filename: $s")))
 .map(s => ByteString(s + "\n"))
 .toMat(FileIO.toFile(new File(filename)))(Keep.right)
 }
  83. Akka Streams: Fast Producer, Fast Consumers val sink1 = lineSink("factorial1.txt")


    val sink2 = lineSink("factorial2.txt") val g = RunnableGraph.fromGraph(GraphDSL.create() { implicit b =>
 import GraphDSL.Implicits._
 
 val bcast = b.add(Broadcast[String](2))
 factorials.map(_.toString) ~> bcast.in
 bcast.out(0) ~> sink1
 bcast.out(1) ~> sink2
 ClosedShape
 }) Both sinks process at essentially the same rate.
  84. Akka Streams: Fast Producer, Fast & Slow Consumers val sink1

    = lineSink("factorial1.txt")
 val slowSink2 = Flow[String].via(Flow[String].throttle(1, 1.second, 1, ThrottleMode.shaping)).toMat(sink2)(Keep.right) val g = RunnableGraph.fromGraph(GraphDSL.create() { implicit b =>
 import GraphDSL.Implicits._
 
 val bcast = b.add(Broadcast[String](2))
 factorials.map(_.toString) ~> bcast.in
 bcast.out(0) ~> sink1
 bcast.out(1) ~> slowSink2
 ClosedShape
 }) Backpressure is signalled up to the original source, factorials.
  85. Akka Streams: Fast Producer, Fast & Slow Consumers with Buffering

    val sink1 = lineSink("factorial1.txt")
 val bufferedSink2 = Flow[String].buffer(50, OverflowStrategy.dropNew).via(Flow[String].throttle(1, 1.second, 1, ThrottleMode.shaping)).toMat(sink2)(Keep.right) val g = RunnableGraph.fromGraph(GraphDSL.create() { implicit b =>
 import GraphDSL.Implicits._
 
 val bcast = b.add(Broadcast[String](2))
 factorials.map(_.toString) ~> bcast.in
 bcast.out(0) ~> sink1
 bcast.out(1) ~> bufferedSink2
 ClosedShape
 }) Backpressure is handled at the slow consumer via buffering. After buffer is full, new values are dropped.
  86. Akka Streams: Integration ‣ Akka HTTP, Websockets ‣ Slick ‣

    Kafka ‣ Build topologies to implement certain enterprise integration patterns ‣ Existing non-blocking services (Future based) ‣ Existing Actors
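    For example, an existing Future-based service drops straight into a stream via mapAsync, which caps the number of in-flight calls and preserves backpressure (the names are illustrative):

    import akka.actor.ActorSystem
    import akka.stream.ActorMaterializer
    import akka.stream.scaladsl.{Sink, Source}
    import scala.concurrent.Future

    implicit val system = ActorSystem("integration")
    implicit val materializer = ActorMaterializer()

    def lookupCustomer(id: String): Future[String] = ??? // existing non-blocking service

    val done = Source(List("a1", "b2", "c3"))
      .mapAsync(parallelism = 4)(lookupCustomer)  // at most 4 lookups in flight
      .runWith(Sink.foreach(println))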
  87. TAKEAWAYS!!! ‣ Futures + Thread Pools + Monads = Non-blocking

    services ‣ Functional error handling can lead to cleaner, easier to compose APIs ‣ Asynchrony is liberating. It can encapsulate complex non-blocking and blocking operations.
  88. TAKEAWAYS!!! ‣ Consider all resources when designing: A system can

    block on I/O, network, memory, or other systems ‣ Akka and Akka Streams allow you to think at a higher level - focus on the workflow and topology ‣ Separating definition from execution is the ultimate in non-blocking. Execution can be optimized based on conditions.
  89. TAKEAWAYS!!! ‣ Futures + Thread Pools + Monads = Non-blocking

    services ‣ Functional error handling can lead to cleaner, easier to compose APIs ‣ Asynchrony is liberating. It can encapsulate complex non-blocking and blocking operations. ‣ Consider all resources when designing: A system can block on I/O, network, memory, or other systems ‣ Akka and Akka Streams allow you to think at a higher level - focus on the workflow and topology ‣ Separating definition from execution is the ultimate in non-blocking. Execution can be optimized based on conditions. The Scala / Lightbend ecosystem seriously lowers the barrier to entry to building non-blocking, reactive systems
  90. Me Me Me!! ‣ Engineer at Chariot Solutions since 2011.

    ‣ Doing Scala for the last 2.5 years in various spaces: Cable telecom, E-commerce, and Pharma ‣ Chariot Solutions is hiring! If you’re based in the Philly or NYC areas, drop me a line! ‣ http://chariotsolutions.com/who-we-are/careers/ ‣ http://chariotsolutions.com/blog/ ‣ http://chariotsolutions.com/podcasts/ ‣ http://chariotsolutions.com/screencasts/ ‣ Hope to see you at ScalaDays!