Navigating the service mesh ecosystem (webinar)

Slides from the Buoyant & Red Hat joint webinar

George Miranda

March 07, 2018

Transcript

  1. Service mesh architecture
     • Application infrastructure
     • Lives outside of the application
     • Application requests/messages flow through proxies
     • Applications may or may not be explicitly aware of these proxies
     • The proxies make up the “data plane” of a service mesh (see the sketch below)
     • The ability to understand what’s happening between our apps... HUGE!!
     • The “control plane” is used to observe/configure/manage the behavior of the data plane

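To make the data-plane idea concrete, here is a minimal sketch (not from the slides) of a sidecar-style proxy in Go: it forwards every inbound request to a co-located application and records the kind of top-line signal a control plane would aggregate. The ports, addresses, and names are illustrative assumptions.

```go
// sidecar.go: a toy "data plane" proxy. It sits next to an application,
// forwards every inbound HTTP request to it, and records top-line metrics.
// Ports and names are illustrative assumptions, not from any real mesh.
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"sync/atomic"
	"time"
)

var requests int64 // top-line counter a control plane could scrape

func main() {
	app, err := url.Parse("http://127.0.0.1:8080") // the co-located app (assumed port)
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(app)

	handler := func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()
		atomic.AddInt64(&requests, 1)
		proxy.ServeHTTP(w, r) // the app never has to know the proxy exists
		log.Printf("%s %s took %v (total requests: %d)",
			r.Method, r.URL.Path, time.Since(start), atomic.LoadInt64(&requests))
	}

	// Inbound traffic enters the mesh here instead of hitting the app directly.
	log.Fatal(http.ListenAndServe(":4140", http.HandlerFunc(handler)))
}
```
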
  2. A service mesh differs from:
     • CORBA/RMI/DCOM/etc.
     • Message-Oriented Middleware
     • Enterprise Service Bus (ESB)
     • Enterprise Application Integration (EAI)
     • API Gateways
     • Resilience libraries

  3. Linkerd
     • Built on Twitter’s Finagle library (Scala)
     • First released in Feb 2016 (2 years ago)
     • ~50 companies running it in production
     • Multi-platform (Docker, K8s, DC/OS, Amazon ECS, and more)
     • Built-in service discovery, latency-aware load balancing (sketched below), retries, circuit breaking, protocol upgrades (TLS by default), top-line metrics, distributed tracing, and per-request routing (useful for CI/CD, testing, and more)
     • Support for H2, gRPC, HTTP/1.x, and all TCP traffic
     • Handles tens of thousands of requests per second, per instance
     • http://linkerd.io

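Linkerd's latency-aware balancing comes from Finagle; purely as a conceptual illustration (not Linkerd's actual code), here is a Go sketch of the "power of two choices" strategy: draw two replicas at random and route to the one with the better recent latency. All names and values are assumptions.

```go
// p2c.go: conceptual sketch of latency-aware "power of two choices"
// load balancing, the style of strategy Linkerd uses via Finagle.
// All names and values here are illustrative.
package main

import (
	"fmt"
	"math/rand"
)

// replica tracks an exponentially weighted moving average (EWMA) of latency.
type replica struct {
	addr        string
	ewmaLatency float64 // milliseconds
}

// observe folds a new latency sample into the replica's moving average.
func (r *replica) observe(latencyMs float64) {
	const alpha = 0.3 // smoothing factor (assumed)
	r.ewmaLatency = alpha*latencyMs + (1-alpha)*r.ewmaLatency
}

// pick draws two replicas at random (possibly the same one, in this toy
// version) and routes to the one with the lower observed latency.
func pick(replicas []*replica) *replica {
	a := replicas[rand.Intn(len(replicas))]
	b := replicas[rand.Intn(len(replicas))]
	if a.ewmaLatency <= b.ewmaLatency {
		return a
	}
	return b
}

func main() {
	pool := []*replica{
		{addr: "10.0.0.1:8080", ewmaLatency: 12},
		{addr: "10.0.0.2:8080", ewmaLatency: 45},
		{addr: "10.0.0.3:8080", ewmaLatency: 20},
	}
	pool[1].observe(30) // a fast sample pulls the slow replica's average down
	for i := 0; i < 5; i++ {
		fmt.Println("routing to", pick(pool).addr)
	}
}
```
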
  4. Envoy
     • https://www.envoyproxy.io
     • A service proxy for modern services architectures
     • Open sourced by Matt Klein (@mattklein123) and Lyft in October 2016
     • C++, highly performant, non-blocking architecture
     • Low tail latencies at scale/under load (P99)
     • L3/L4 filter at its core, with many L7 filters out of the box
     • HTTP/2 and gRPC support (upstream and downstream)
     • API-driven, dynamic configuration (see the polling sketch below)
     • Amenable to both shared-proxy and sidecar-proxy deployment models
     • A foundation for more advanced application proxies

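Envoy's real dynamic-configuration mechanism is its xDS APIs; the Go sketch below only illustrates the general idea of API-driven configuration, i.e. a proxy that pulls its upstream list from a management server at runtime instead of reading a static file. The endpoint URL, JSON shape, and interval are invented for illustration.

```go
// dynconfig.go: conceptual sketch of API-driven, dynamic proxy configuration.
// Envoy's actual mechanism is its xDS APIs; this only shows the idea of
// refreshing upstream cluster membership from a management server at runtime.
package main

import (
	"encoding/json"
	"log"
	"net/http"
	"sync"
	"time"
)

var (
	mu        sync.RWMutex
	upstreams []string // current backend addresses, swapped in atomically
)

// refresh fetches the latest upstream list from the management server.
func refresh(configURL string) {
	resp, err := http.Get(configURL)
	if err != nil {
		log.Printf("config fetch failed, keeping last good config: %v", err)
		return
	}
	defer resp.Body.Close()

	var latest []string
	if err := json.NewDecoder(resp.Body).Decode(&latest); err != nil {
		log.Printf("bad config payload: %v", err)
		return
	}
	mu.Lock()
	upstreams = latest
	mu.Unlock()
	log.Printf("applied %d upstreams without a restart", len(latest))
}

func main() {
	const configURL = "http://management-server.local/v1/upstreams" // assumed
	for {
		refresh(configURL)
		time.Sleep(10 * time.Second) // poll interval (assumed)
	}
}
```
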
  5. Istio
     • http://istio.io
     • Launched May 2017, bootstrapped by Lyft, IBM, and Google
     • Provides a control plane for service proxies (Envoy by default)
     • Brings clustering control and observability
     • Fine-grained routing (see the traffic-split sketch below)
     • mTLS/RBAC/security
     • Resiliency
     • Policy control
     • Observability

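In Istio, fine-grained routing is declared through the control plane and enforced by the Envoy proxies; purely to illustrate the underlying mechanic, here is a Go sketch of weight-based traffic splitting between two service versions (e.g. a 90/10 canary). The weights and service names are assumptions, not Istio's API.

```go
// split.go: conceptual sketch of weighted traffic splitting, the mechanic
// behind fine-grained routing rules such as a 90/10 canary. In Istio this
// is declared via the control plane; nothing here is Istio's actual API.
package main

import (
	"fmt"
	"math/rand"
)

// route holds a destination version and the share of traffic it receives.
type route struct {
	version string
	weight  int // out of 100
}

// choose picks a destination in proportion to the configured weights.
func choose(routes []route) string {
	n := rand.Intn(100)
	for _, r := range routes {
		if n < r.weight {
			return r.version
		}
		n -= r.weight
	}
	return routes[len(routes)-1].version // weights assumed to sum to 100
}

func main() {
	canary := []route{
		{version: "reviews-v1", weight: 90}, // assumed service names
		{version: "reviews-v2", weight: 10},
	}
	counts := map[string]int{}
	for i := 0; i < 1000; i++ {
		counts[choose(canary)]++
	}
	fmt.Println(counts) // roughly 900 v1 / 100 v2
}
```
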
  6. Conduit
     • Released December 2017 (currently at 0.3.0)
     • Data plane written in Rust; control plane written in Go
     • Sub-1ms p99 latency, ~1MB footprint, designed for sidecar deployments
     • A “minimum viable service mesh”, or a zero-config philosophy
     • Supports gRPC, H2, HTTP/1.x, and all TCP traffic out of the box
     • Performance-based load balancing
     • Prometheus metrics export & automatic TLS (0.4.0)
     • Controllable timeouts/deadlines (see the deadline sketch below), OpenTracing, & key rotation (0.5.0)
     • Rich ingress routing, auth policy, auto alerts & composable resiliency (0.6.0)
     • Late April 2018 ETA for 0.6
     • https://conduit.io/roadmap/

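Controllable timeouts/deadlines are a roadmap item; as a general illustration of the pattern (not Conduit's implementation), here is how a per-request deadline looks in plain Go, using context cancellation. The URL and timeout value are assumptions.

```go
// deadline.go: general illustration of a controllable per-request deadline,
// the kind of behavior the Conduit roadmap describes. This is plain Go
// context plumbing, not Conduit's implementation.
package main

import (
	"context"
	"log"
	"net/http"
	"time"
)

func main() {
	// Budget the whole call at 300ms (assumed value); once the deadline
	// passes, the request is canceled and the caller gets an error instead
	// of hanging on a slow downstream.
	ctx, cancel := context.WithTimeout(context.Background(), 300*time.Millisecond)
	defer cancel()

	req, err := http.NewRequest("GET", "http://reviews.local/api", nil) // assumed URL
	if err != nil {
		log.Fatal(err)
	}
	resp, err := http.DefaultClient.Do(req.WithContext(ctx))
	if err != nil {
		log.Printf("request failed (deadline exceeded?): %v", err)
		return
	}
	defer resp.Body.Close()
	log.Println("status:", resp.Status)
}
```
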
  7. Questions to ask yourself
     • Am I ready for a service mesh?
     • What problems am I having today?
     • What platforms do I need to support?
     • What level of observability do my services have today?
     • Which service mesh capabilities do I already have? How will that play out in my environment when I introduce a service mesh?
     • What does the division of responsibility look like across my teams?
     • Centralized or decentralized functionality?
     • What are my support expectations and team needs?