Linkerd SIG - Deep Dive, KubeCon EU 2018

Slides from the Linkerd Special Interest Group Deep Dive at KubeCon+CloudNativeCon EU 2018 in Copenhagen, Denmark

George Miranda

May 04, 2018

Transcript

  1. Introductions
     George Miranda - Buoyant Community (@gmiranda23)
     In this SIG:
     • Dario Simonetti - Head of Core Engineering, Attest
     • Chris Taylor - Software Engineer, SoundCloud
     • Zack Angelo - Platform Engineering Director, BigCommerce
     • Thomas Rampelberg - Software Engineer, Buoyant
     • William Morgan - CEO and co-founder, Buoyant
  2. Talks to watch + Demos to try
     • Evolving Systems Design: From Unreliable RPC to Resilience with Linkerd - Edward Wilde, Form3
     • From Eval to Prod: How a Service Mesh Helped Us Build Production Cloud-Native Services - Israel Sotomayor, Moltin
     • Anatomy of a Production Kubernetes Outage (Keynote) - Oliver Beattie, Monzo
     • How to Get a Service Mesh Into Prod Without Getting Fired - William Morgan, Buoyant
     • Observability and the Depths of Debugging Cloud-Native Applications using Linkerd & Conduit - Franziska von der Goltz, Buoyant
     • Linkerd SIG (Intro) - Andrew Seigner, Buoyant
     Hands-on demos (Buoyant booth): http://bit.ly/linkerd & http://bit.ly/conduit-demo
  3. Increase efficiency with linkerd and gRPC
     Dario Simonetti - askattest.com - @AskAttest
     KubeCon + CloudNativeCon Europe 2018, Copenhagen - 04/05/2018
  4. Hello
     • Head of Core Engineering at Attest
     • Focus on tech and team efficiency
     • Linkerd enthusiast and minor contributor
     Dario Simonetti - https://dario.tech - @dariosatoshi
  5. The starting point
     [Diagram: Maker API, Taker API, Service A, Service B, Service C, and two 3rd-party services]
  6. The problems
     1. Services are either bloated or not doing things right
        • Circuit breakers, retry logic, connection pools, ...
        • In both client and server code
     2. AWS load balancers (both ELB and ALB) do not support HTTP/2 origins
        → It is not possible to have a load balancer in front of a gRPC service
  7. Linkerd to the rescue
     • Focuses on doing one thing well: the service mesh data plane
     • Open to integrating with other tools that do their thing well:
       • Namers (service discovery): ZooKeeper, Consul, Kubernetes, Marathon, ...
       • Telemeters: Prometheus, StatsD, TraceLog, Zipkin, ...
       • Control planes: namerd, Istio
     • Supports HTTP/1.1, HTTP/2, gRPC, Thrift, ThriftMux, Mux
     • Awesome community
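     A minimal sketch of how those pieces wire together in a linkerd 1.x config, assuming the Kubernetes namer and the Prometheus telemeter purely as examples (ports and the dtab are illustrative, not from the talk):

       admin:
         port: 9990

       namers:
       - kind: io.l5d.k8s           # service discovery via the Kubernetes API
         host: localhost
         port: 8001

       telemetry:
       - kind: io.l5d.prometheus    # metrics exposed for Prometheus to scrape

       routers:
       - protocol: h2               # HTTP/2 carries gRPC
         label: grpc
         dtab: |
           /svc => /#/io.l5d.k8s/default/grpc;   # illustrative routing rule
         servers:
         - port: 4143
           ip: 0.0.0.0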
  8. New setup
     • Service discovery
     • Circuit breaking
     • Retries and deadlines
     • Connection pooling
     • Distributed tracing
     • Client-side load balancing
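     Several of these behaviors are tunable per router. A hedged sketch of the relevant knobs, using parameter names as documented for linkerd 1.x with illustrative values:

       routers:
       - protocol: h2
         service:
           retries:                    # retry budget: only a bounded share of
             budget:                   # requests may be retried
               minRetriesPerSec: 5
               percentCanRetry: 0.2
               ttlSecs: 15
           totalTimeoutMs: 2000        # overall deadline for a request
         client:
           failureAccrual:             # circuit breaking: mark a host dead after
             kind: io.l5d.consecutiveFailures
             failures: 5               # this many consecutive failures, then
             backoff:                  # probe it again after a backoff
               kind: constant
               ms: 10000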
  9. Conclusion
     • We can speak to gRPC services!
     • Lower latency
     • Higher resiliency
     • Lower costs
     • Easier troubleshooting
     But more importantly: we can focus on delivering value for the business, and have fun doing so.
  10. Thanks for listening!
      linkerd on AWS ECS: https://medium.com/attest-engineering/937f201f847a
      (We are hiring in London.)
  11. Search at SoundCloud, or: How I Learned to Stop Worrying and Love the Load-Balancer
      Chris Taylor - @ccmtaylor
  12. Learning dtabs: the & operator doesn't work as expected.
      /svc/es => /es/cluster1 & /es/cluster2;
      This rule doesn't load-balance across all instances of cluster1 and cluster2 together; it first splits traffic between the two clusters and then balances within each.
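      A hedged sketch of the distinction, with hypothetical paths (weights use dtab's "N * path" syntax; 1 & 1 means an even split):

        # Split traffic between two clusters, then load-balance within each:
        /svc/es => 1 * /es/cluster1 & 1 * /es/cluster2;

        # Contrast with |, which is a fallback: use cluster1 if it resolves,
        # otherwise fall back to cluster2:
        /svc/es => /es/cluster1 | /es/cluster2;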
  13. @zackangelo
      ➔ Home to 60,000+ online stores
      ➔ $17B worth of products sold
      ➔ $200MM in VC funding
      (Also, we're hiring!)
  14. @zackangelo
      These all require you to remember which host or variable has what service. What we really wanted was to just create a client and go.
  15. @zackangelo
      Step 1: Consistent Service Names
      - Client code must have service endpoint names built in
      - Ensures that all clients in all languages use the same name
      - We use gRPC
      Generated client → HTTP/2 request: :method = POST, :path = Bar/BazOperation
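      A hedged sketch of how that name gets baked in, treating the slide's Bar/BazOperation as a hypothetical protobuf definition (gRPC clients generated from it in any language put the service and method name on the wire as the HTTP/2 :path):

        // bar.proto (hypothetical, matching the slide's example)
        syntax = "proto3";

        service Bar {
          // Generated clients invoke this as an HTTP/2 request with
          // :method = POST and :path = /Bar/BazOperation
          rpc BazOperation (BazRequest) returns (BazResponse);
        }

        message BazRequest  { string id = 1; }
        message BazResponse { string result = 1; }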
  16. @zackangelo
      Step 2: Augment the Service Discovery Catalog
      - Consul stores IP:Port pairs for containers, but not gRPC services
      - Express gRPC service -> container associations as tags
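      One hedged way this can look: a Consul service registration whose tags list the gRPC services a container serves (the "grpc-" tag naming scheme below is hypothetical, not necessarily what BigCommerce uses):

        {
          "service": {
            "name": "billing-container",
            "address": "10.0.1.17",
            "port": 8443,
            "tags": [
              "grpc-Bar",
              "grpc-billing.Invoices"
            ]
          }
        }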
  17. @zackangelo
      Step 3: Sell your soul to the service mesh
      - Linkerd receives all service traffic and figures out what to do with it
      - Use the io.l5d.header.path identifier to route on the gRPC service name and method
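      In config terms, that identifier is set per router. A minimal sketch (label, port, and the segments value are illustrative):

        routers:
        - protocol: h2
          label: grpc-mesh
          identifier:
            kind: io.l5d.header.path
            segments: 1        # name requests by the first :path segment,
                               # i.e. the gRPC service name
          servers:
          - port: 4143
            ip: 0.0.0.0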
  18. @zackangelo
      Linkerd receives the traffic... it can parse the gRPC service name... but how does it convert that to an IP and port?
  19. @zackangelo
      Step 4: Teach Linkerd to convert a gRPC name to an IP:Port
      - Linkerd's routing rules are expressed using "dtabs" (delegation tables)
      - We use dtabs to tell linkerd how to convert a gRPC service name to a Consul service name
      We wrote a simple service:
      1. Watch for Consul changes
      2. Collect gRPC tags
      3. Write the dtab
      4. Repeat
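      A hedged sketch of what such a generated dtab could look like, assuming linkerd's Consul namer (io.l5d.consul) is configured and reusing the hypothetical tag scheme from the earlier sketch (datacenter and service names are made up):

        # Map each gRPC service name (taken from the :path identifier)
        # to the Consul service that advertises it:
        /svc/Bar              => /#/io.l5d.consul/dc1/billing-container;
        /svc/billing.Invoices => /#/io.l5d.consul/dc1/billing-container;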
  20. @zackangelo
      Step 5: Service Discovery Nirvana
      ➔ No host:port environment variables
      ➔ No DNS names or SRV records
      ➔ Just create your client and send a request to linkerd; it'll figure out where it needs to go
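      For example, a Go client only needs to dial its local linkerd; everything else in this sketch (the generated pb package, the Bar service, and the 4143 port) is assumed for illustration:

        package main

        import (
            "context"
            "log"
            "time"

            "google.golang.org/grpc"

            pb "example.com/gen/bar" // hypothetical generated stubs for the Bar service
        )

        func main() {
            // Dial the local linkerd, not any particular backend host.
            conn, err := grpc.Dial("localhost:4143", grpc.WithInsecure())
            if err != nil {
                log.Fatal(err)
            }
            defer conn.Close()

            ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
            defer cancel()

            // linkerd reads the :path (/Bar/BazOperation), resolves it through
            // the dtab, and load-balances across the right containers.
            client := pb.NewBarClient(conn)
            if _, err := client.BazOperation(ctx, &pb.BazRequest{Id: "123"}); err != nil {
                log.Fatal(err)
            }
        }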
  21. First Steps
      - Getting Started - https://linkerd.io/getting-started/k8s/
      - See how it works - https://github.com/linkerd/linkerd-examples/tree/master/add-steps/k8s
  22. Getting (and giving) Help
      - Discourse - https://discourse.linkerd.io/
      - Slack - https://slack.linkerd.io/
      - Twitter - https://twitter.com/linkerd
  23. Architecture
      - Life of a request - https://linkerd.io/advanced/routing/
      - Finagle - https://twitter.github.io/finagle/guide/index.html
  24. Plugins
      - What kinds of plugins are there?
        - Identifiers
        - Namers
        - Interpreters
        - Routers (protocols)
      - Write an example plugin - https://linkerd.io/advanced/plugin/
  25. Contribution
      1. Look through the issues - https://github.com/linkerd/linkerd/issues
      2. Check out the contribution guidelines - https://github.com/linkerd/linkerd/blob/master/CONTRIBUTING.md
      3. Create an issue!
      4. Fork the repo
      5. Create a PR (label it WIP for some early feedback)