
Kubernetes meets Finagle for Resilient Microservices


Finagle is an open-source, high-volume RPC client library that handles millions of QPS at companies like Twitter, Pinterest, and SoundCloud. In this talk, we demonstrate how Finagle can be applied to Kubernetes applications via linkerd, an open-source, standalone Finagle proxy. By deploying linkerd as a sidecar container or with DaemonSets, we show how polyglot multi-service applications running in Kubernetes can be “wrapped” in Finagle’s operational model, adding connection pooling, load balancing, failure detection, and failover to existing applications with minimal code change. We demonstrate how linkerd communicates with the Kubernetes API and how the resulting systems perform under load and adverse conditions.

Oliver Gould

March 10, 2016


Transcript

  1. oliver gould • cto @ buoyant • open-source microservice infrastructure • previously, tech lead @ twitter: observability, traffic • core contributor: finagle • creator: linkerd • loves: kubernetes, dogs • @olix0r • [email protected]
  2. overview: 1. why microservices? 2. finagle: the once and future layer 5 3. resilient rpc 4. introducing linkerd 5. demo 6. questions! answers?
  3. “Resilience is an imperative: our software runs on the truly dismal computers we call datacenters. Besides being heinously complex… they are unreliable and prone to operator error.” Marius Eriksen (@marius), RPC Redux
  4. resilience in microservices: software you didn’t write, hardware you can’t touch, and a network you can’t configure break in new and surprising ways, and your customers shouldn’t notice.
  5. the datacenter stack: [1] physical, [2] link, [3] network, [4] transport (kubernetes; calico, …; aws, azure, digitalocean, gce, …) • [5] session, [6] presentation (rpc: http/2, mux, …; json, protobuf, thrift, …) • [7] application (your code: languages, libraries)
  6. programming finagle:

     // proxy requests on 8080 to the users service
     // with a timeout of 1 second
     val users = Http.newClient("/s/users")
     Http.serve(":8080", Service.mk[Request, Response] { req =>
       users(req).within(1.second).handle {
         case _: TimeoutException => Response(Status.BadGateway)
       }
     })
  7. operating finagle: service discovery • circuit breaking • backpressure • timeouts • retries • tracing • metrics • keep-alive • multiplexing • load balancing • per-request routing • service-level objectives
  8. “‘It’s slow’ is the hardest problem you’ll ever debug.” Jeff Hodges (@jmhodges), Notes on Distributed Systems for Young Bloods
  9. l5: load balance requests. lb algorithms: • round-robin • fewest connections • queue depth • exponentially-weighted moving average (ewma) • aperture
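The ewma strategy from this slide can be sketched as follows. This is a hypothetical illustration, not linkerd's actual balancer: each endpoint keeps an exponentially-weighted moving average of observed latency, and the balancer prefers the endpoint with the lowest average, so slow endpoints naturally shed traffic.

```scala
// Hypothetical sketch of EWMA-based load balancing (not linkerd's
// implementation). Each node tracks an exponentially-weighted moving
// average of request latency; the balancer picks the lowest average.
case class Node(name: String, var ewma: Double = -1.0) {
  // alpha weights recent observations; a higher alpha reacts faster
  def observe(latencyMs: Double, alpha: Double = 0.3): Unit =
    ewma =
      if (ewma < 0) latencyMs // first observation seeds the average
      else alpha * latencyMs + (1 - alpha) * ewma
}

object EwmaBalancer {
  // prefer the node with the lowest smoothed latency
  def pick(nodes: Seq[Node]): Node = nodes.minBy(_.ewma)
}
```

For example, a node that has recently served requests in 50 ms is chosen over one whose smoothed latency sits near 73 ms, even if the slower node's most recent sample was fast.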
  10. layer 5 routing • applications are configured against a logical name: /s/users • requests are bound to concrete names: /k8s/prod/http/users • delegations express routing by rewriting: /s => /k8s/prod/http, /s/l5d-docs => /$/inet/linkerd.io/443
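The delegation step on this slide can be sketched as prefix rewriting. This is a hypothetical illustration only; real dtabs additionally support recursion, alternation, and most-specific-rule precedence:

```scala
// Hypothetical sketch of dtab-style delegation (not linkerd's
// implementation): a rule like "/s => /k8s/prod/http" rewrites any
// path under the /s prefix by substituting the target prefix.
def delegate(path: String, dtab: Seq[(String, String)]): String =
  dtab.collectFirst {
    case (prefix, target) if path == prefix || path.startsWith(prefix + "/") =>
      target + path.stripPrefix(prefix)
  }.getOrElse(path) // no matching rule: the path is left unchanged
```

With the slide's rules, `delegate("/s/users", Seq("/s" -> "/k8s/prod/http"))` yields `/k8s/prod/http/users`, binding the logical name to a concrete one.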
  11. per-request routing:

      GET / HTTP/1.1
      Host: mysite.com
      Dtab-local: /s/users => /s/users-v2

      GET / HTTP/1.1
      Host: mysite.com
      Dtab-local: /s/slorbs => /s/debugproxy/s/slorbs
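The Dtab-local header above layers extra delegation rules on top of the base dtab for a single request. A hypothetical sketch of that precedence (a real dtab would also re-resolve the rewritten name recursively):

```scala
// Hypothetical sketch (not linkerd's implementation): rules supplied
// per-request via a Dtab-local header are consulted before the base
// rules, so "/s/users => /s/users-v2" diverts only this request.
def rewrite(path: String, rules: Seq[(String, String)]): Option[String] =
  rules.collectFirst {
    case (prefix, target) if path == prefix || path.startsWith(prefix + "/") =>
      target + path.stripPrefix(prefix)
  }

def route(path: String,
          base: Seq[(String, String)],
          local: Seq[(String, String)]): String =
  // per-request rules win; otherwise fall back to the base dtab
  rewrite(path, local).orElse(rewrite(path, base)).getOrElse(path)
```

Requests without the header still bind through the base dtab, which is what makes header-scoped overrides safe for staging and debugging traffic.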
  12. make layer 5 great again: transport layer security • service discovery • backpressure • timeouts • retries • stats • tracing • routing • multiplexing • load balancing • circuit breaking • service-level objectives
  13. l5d.yaml:

      namers:
      - kind: io.l5d.experimental.k8s
        authTokenFile: …/serviceaccount/token

      routers:
      - protocol: http
        label: incoming
        servers:
        - port: 8080
          ip: 0.0.0.0
        baseDtab: |
          /http/1.1 => /$/inet/127.1/8888;
      - protocol: http
        label: outgoing
        servers:
        - port: 4140
        baseDtab: |
          /srv => /io.l5d.k8s/default/http;
          /method => /$/io.buoyant.http.anyMethodPfx/srv;
          /http/1.1 => /method;

      svc.yaml.sh:

      kind: Service
      apiVersion: v1
      metadata:
        namespace: default
        name: $SERVICENAME
      spec:
        selector:
          app: $SERVICENAME
        type: LoadBalancer
        ports:
        - name: http
          port: 8080
          targetPort: 8080
  14. linkerd roadmap • use k8s 3rdparty for routing state (kubernetes#18835) • DaemonSet deployments? • tighter grpc support (netty#3667) • cluster-wide routing control • service-level objectives • application-level circuit breaking • more configurable everything