When Node and APIs become BFFs

API development is a common challenge for many teams and companies. In a world of microservices, multiple backends, and ever more demanding frontends, building the right set of APIs becomes increasingly important.

How can a single API support a diverse set of clients, different data access patterns, and multiple SLAs?

Backend for Frontend (BFF) is a pattern that has emerged to solve some of these challenges and help software engineers build better-focused APIs that cater to different user experiences. This is achieved by building one backend for every frontend client.
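
To make the pattern concrete, here is a minimal sketch of two BFF endpoints sharing the same downstream services; the service names and payloads are invented for illustration and the network calls are stubbed out:

```javascript
// Hypothetical downstream services, stubbed with resolved promises
const fetchFeed = () => Promise.resolve({ items: ['applet1', 'applet2'] })
const fetchRecommendations = () => Promise.resolve({ picks: ['applet3'] })

// The mobile BFF bundles feed + recommendations into one payload,
// saving the app an extra round trip
async function mobileHome() {
  const [feed, recs] = await Promise.all([fetchFeed(), fetchRecommendations()])
  return { feed: feed.items, recommendations: recs.picks }
}

// The web BFF exposes only what the website needs
async function webHome() {
  const feed = await fetchFeed()
  return { feed: feed.items }
}
```

Each client gets an endpoint shaped around its own screen, instead of one generic API trying to serve everyone.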

In this talk, I present some of the experiences and challenges my team and I ran into while building API services for all frontends at IFTTT.


Ivayr Farah Netto

November 13, 2017

Transcript

  1. When Node and APIs become BFFs @nettofarah

  2. Netto Farah @nettofarah Eng. Manager at

  3. This is a story about

  4. ☁ monorail v0 -> inception

  5. but as we grew…

  6. IFTTT ❤ • Millions of users • Billions of API calls every day • Website, iOS app, Android app

  7. that architecture became challenging

  8. ☁ monorail challenges with v0

  9. we knew we needed to make some changes

  10. v1 -> monorail + friends ☁ monorail Feeds Recommendations Geolocation

  11. None
  12. What makes this approach so challenging?

  13. reality monorail Feeds Recommendations Geolocation ⛈

  14. repeated concerns

  15. We looked at what other people were doing…

  16. http://microservices.io/patterns/apigateway.html https://nginx.com/blog/microservices-api-gateways-part-1-why-an-api-gateway/

  17. Feed Recommendations Geolocation MonoRail ☁ COORDINATION TRACING AUTH … v2 -> API Gateway

  18. API Gateway

  19. but API Gateways have their limitations too

  20. different access patterns

  21. multiple use cases

  22. ambiguity

  23. solved with documentation or conventions

  24. [/,0,1..2] => ✅ 4

  25. 4 [5] => ask me later…

  26. backends for frontends

  27. BFFs

  28. Feed Recommendations Geolocation MonoRail ☁ v2 -> BFFs ☁ ☁

  29. None
  30. But there’s still a fair amount of repetition

  31. buffalo

  32. Feed Recommendations Geolocation MonoRail ☁ v2.1 -> Buffalo ☁ ☁

  33. But this is JSKongress

  34. ❤ + = =

  35. simple concurrency primitives

  36. building reliable BFFs with node

  37. loading async resources

  38. // get some async resource
      get(url).then(result => {
        // render json
        return res.json(result)
      })
  39. const promises = Promise.all([
        fetch(feedURL),
        fetch(locationURL)
      ])
      promises.then(result => {
        return res.json({ feed: result[0], location: result[1] })
      })
  40. Service latency and network volatility

  41. let’s take a look at our feed service

  42. 99pct ~ 1500ms avg ~ 200ms

  43. const promises = Promise.race([
        fetch(feedURL),
        fetch(feedURL),
        fetch(feedURL),
        fetch(feedURL)
      ])
      promises.then(feed => {
        return res.json(feed)
      })
  44. None
  45. timeouts

  46. const pTimeout = require('p-timeout')
      // 99pct was ~1500ms
      pTimeout(
        get(feedURL),
        1600
      ).then(onSuccess, onError)

      function onError(err) {
        console.error(err) //=> [TimeoutError: Promise timed out]
        return res.error(err)
      }
  47. failures Service

  48. retries

  49. const retry = require('p-retry')
      // retry 2 times before giving up
      retry(loadFeed, { retries: 2 })

      // everything is still a promise
      function loadFeed() {
        return get(feedURL).then(feedRes => {
          return feedRes.json()
        })
      }
  50. failures + latency Service

  51. timeouts + retries?

  52. 99pct ~ 1500ms avg ~ 200ms median ~ 150ms

  53. most responses are fast! ~ 200ms; slow responses are actually pretty rare

  54. ~3 x 250ms ~= 750ms
      1 x 1500ms ~= 1500ms

  55. None
  56. // avg response time is ~200ms
      const feedTimeout = 250
      pTimeout(
        loadFeed(),
        feedTimeout
      ).then(onSuccess).catch(onError)
  57. // avg response time is ~200ms
      const feedTimeout = 250
      const retries = 2
      pRetry(
        () => pTimeout(loadFeed(), feedTimeout),
        { retries }
      ).then(onSuccess).catch(onError)
  58. // avg response time is ~200ms
      const feedTimeout = 250
      const totalTimeout = 700
      const retries = 2
      pTimeout(
        pRetry(
          () => pTimeout(loadFeed(), feedTimeout),
          { retries }
        ),
        totalTimeout
      ).then(onSuccess).catch(onError)
  59. homework • conditional retries • exponential backoffs • circuit breakers
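
As a sketch of the "exponential backoff" homework item, here is one way to write it with plain promises; p-retry supports the same idea out of the box (its `factor` and `minTimeout` options), so this is only to show the mechanics:

```javascript
// Plain-promise exponential backoff: double the wait after each failure
const sleep = ms => new Promise(resolve => setTimeout(resolve, ms))

async function retryWithBackoff(fn, { retries = 2, baseDelay = 100 } = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn()
    } catch (err) {
      if (attempt >= retries) throw err
      // wait baseDelay, then 2x, 4x, ... between attempts
      await sleep(baseDelay * 2 ** attempt)
    }
  }
}
```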

  60. using promises to build well behaved BFFs

  61. let’s take a look at our analytics service

  62. response time throughput

  63. // I have 100 queries to run
      const queries = […]
      const pLimit = require('p-limit')
      // Only 10 promises at a time
      const only10 = pLimit(10)
      const promises = queries.map(q => {
        return only10(() => fetchAnalyticsQuery(q))
      })
      Promise.all(promises).then(onSuccess)
  64. const sleep = require('then-sleep')
      // Or something a bit simpler (inside an async function)
      while (condition) {
        await runAnalyticsQuery(...)
        await sleep(250)
      }
  65. other cool ideas • reducing bursts with jittering: then-sleep, p-defer • priority queues: p-queue • circuit breaking: opossum
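
The jittering idea from this slide can be sketched with nothing but a `sleep` helper (then-sleep packages the same one-liner): delay each piece of work by a random amount so a burst of queries doesn't hit the service all at once. The helper name is invented here:

```javascript
// Full jitter: wait anywhere from 0 to maxJitterMs before starting the work
const sleep = ms => new Promise(resolve => setTimeout(resolve, ms))

async function withJitter(fn, maxJitterMs = 250) {
  await sleep(Math.random() * maxJitterMs)
  return fn()
}
```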
  66. Promises (and async/await) are simple concurrency primitives with a relatively low barrier to entry
  67. but they also have limitations • cancellations • error handling can be tricky • stack traces aren’t perfect
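
On the cancellation limitation: promises have no built-in cancel. A common workaround, sketched below, is to race the work against a rejection token; note this only stops the caller from waiting, while the underlying work keeps running, which is exactly the limitation the slide points at:

```javascript
// Wrap a promise so the caller can stop waiting on it.
// The wrapped work itself is NOT stopped; promises can't do that natively.
function cancellable(promise) {
  let cancel
  const token = new Promise((resolve, reject) => {
    cancel = () => reject(new Error('cancelled'))
  })
  return { promise: Promise.race([promise, token]), cancel }
}
```
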
  68. Even more stuff! • param validation: https://github.com/nettofarah/property-validator • testing: https://github.com/nettofarah/axios-vcr
  69. for a longer list of promise awesomeness https://github.com/sindresorhus/promise-fun

  70. for more sophisticated async programming primitives, check out rx.js

  71. start with the simplest solution you can think of and then grow it from there

  72. @nettofarah nettofarah@gmail.com