
What's so good about Envoy?

Casey Wylie
February 09, 2023


Why Envoy? In this presentation I go into detail about what makes this cloud-native, high-performance edge, middle, and service proxy unique, and what makes it tick. As a long-time nginx user I was skeptical about all the hype around Envoy and was curious whether Envoy was a replacement for nginx; it turns out that it is not. After working with Envoy through a service mesh for the past few years, and then working in Envoy every day for a long period, I have realized that Envoy is its own special beast.
We will answer the question of what is so good about Envoy and cover:
- Envoy static/dynamic configuration
- Envoy performance
- Envoy's three types of threads and the role they play in Envoy's performance
- Envoy Thread Local Storage
- Envoy architecture and the Envoy objects (Listener, Filter Chains, Filter, Cluster, Endpoints)
- A demonstration using Envoy as a front proxy deployed locally in a virtual network using Docker Compose, where we will look at how to:
- Use Envoy for load balancing
- Look up server stats
- Look up server info

Transcript

  1. What is Envoy: a cloud-native, high-performance edge, middle, and service proxy. Envoy is hosted by the Cloud Native Computing Foundation (CNCF) and is written in C++.
  2. Static/Dynamic Configuration: an extremely flexible configuration style makes Envoy very extensible. Envoy discovers its various dynamic resources via the filesystem or by querying one or more management servers.
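As an illustration of the management-server path, a bootstrap file can point Envoy's listener discovery (LDS) and cluster discovery (CDS) at a gRPC server. This is only a sketch, not from the talk: the cluster name `xds_cluster` and the `xds-server:18000` address are assumptions.

```yaml
# Hypothetical bootstrap sketch: fetch listeners (LDS) and clusters (CDS)
# dynamically from a gRPC management server.
dynamic_resources:
  lds_config:
    api_config_source:
      api_type: GRPC
      grpc_services:
        - envoy_grpc: { cluster_name: xds_cluster }
  cds_config:
    api_config_source:
      api_type: GRPC
      grpc_services:
        - envoy_grpc: { cluster_name: xds_cluster }
static_resources:
  clusters:
    - name: xds_cluster
      connect_timeout: 0.25s
      type: STRICT_DNS
      http2_protocol_options: {}   # xDS over gRPC requires HTTP/2
      hosts: [{ socket_address: { address: xds-server, port_value: 18000 }}]
```

The only statically defined resource here is the cluster Envoy needs in order to reach the management server; everything else arrives over the xDS APIs.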
  3. Performance: Envoy is written in C++ and performance obsessed. Envoy is architected, starting from its threading model, to be as non-blocking as possible. Chart of requests per second over HTTP by load balancer. Source: https://www.loggly.com/blog/benchmarking-5-popular-load-balancers-nginx-haproxy-envoy-traefik-and-alb/
  4. Threading Model: Envoy maps connections to threads and uses a Thread Local Storage (TLS) system internally to make its code extremely parallel and high performing.
  5. Thread Types: Main
     • Owns server startup and shutdown
     • Owns xDS API handling (DNS, health checking, general cluster management)
     • Runtime
     • Stat flushing
     • Admin
     • General process management (signals, hot restart, etc.)
     Everything that happens on this thread is asynchronous and non-blocking. This thread coordinates all critical process functionality that does not require a large amount of CPU to accomplish. This allows the majority of management code to be written as if it were single threaded.
  6. Thread Types: Worker
     • Envoy spawns a worker thread for every hardware thread in the system.
     • Each worker thread runs a non-blocking event loop responsible for listening on every listener and accepting new connections.
     This also allows the majority of connection handling code to be written as if it were single threaded. Source: https://blog.envoyproxy.io/envoy-threading-model-a8d44b922310
  7. Thread Types: File Flusher
     • Every file that Envoy writes has an independent blocking flush thread. This is due to the fact that writing to filesystem-cached files can sometimes block, even when using O_NONBLOCK.
     • When a worker thread needs to write a file, the data is actually moved into an in-memory buffer, where it is eventually flushed via the file flush thread. This is one area of the code in which technically all workers can block on the same lock while trying to fill the memory buffer.
     This is primarily used for access logs.
  8. Connection Handling: Once a connection is accepted on a worker, it never leaves that worker. All further handling of the connection is processed entirely within that worker thread, including forwarding behavior. Because Envoy keeps each connection within a single worker thread, almost all code can be written without locks and as if it were single threaded. All connection pools in Envoy are per worker thread: if there are four workers, there will be four HTTP/2 connections per upstream host. From a memory and connection pool efficiency standpoint, it is important to tune the --concurrency option; having more workers than needed will waste memory, create more idle connections, and lead to a lower connection pool hit rate. BEST PRACTICE: run Envoy with low concurrency so that its performance roughly matches the services sitting beside/behind Envoy.
  9. Understanding when Envoy blocks: In reality, almost nothing is ever completely non-blocking. Understanding when Envoy does block can help answer questions that arise out of complex or non-routine use cases. Envoy blocks:
     • When access logs are being written
     • On the lock to the central "stat store" during thread-local stat handling
     • When the main thread posts to worker threads
     • When Envoy logs itself to standard error
  10. Thread Local Storage (TLS): Complex processing can be done on the main thread and then be made available to each worker thread in a highly concurrent manner.
  11. Thread Local Storage: A common pattern is that the main thread does some work and then needs to update each worker thread with the result of that work, without the worker thread needing to acquire a lock.
  12. Thread Local Storage: Envoy's Thread Local Storage system works as follows. Code running on the main thread can allocate a process-wide TLS slot. The main thread can set arbitrary data into a slot; when it does, the data is posted to each worker as a normal event-loop event. Worker threads can read from their Thread Local Storage slot and will retrieve whatever thread-local data is available.
  13. Envoy Architecture: a request flows from inbound to outbound through the Envoy objects: Listener → Filter Chains → Filter (TCP Proxy, HTTP Connection Manager) → Clusters → Endpoints (Static Cluster, Dynamic Cluster).
  14. Listeners: The listener is the network configuration, such as the IP address and ports that Envoy listens on for requests.

      listeners:
        - name: listener_0
          address:
            socket_address: { address: 0.0.0.0, port_value: 10000 }
  15. Filter Chains and Filters: Each listener has a set of filters. Filters define how to process requests; they define virtual_hosts, domains, and routes.

      filter_chains:
        - filters:
            - name: envoy.http_connection_manager
              config:
                stat_prefix: ingress_http
                route_config:
                  name: local_route
                  virtual_hosts:
                    - name: local_service
                      domains: ["*"]
                      routes:
                        - match: { prefix: "/" }
                          route: { host_rewrite: www.google.com, cluster: service_google }
                http_filters:
                  - name: envoy.router
  16. Clusters: After a request matches a filter, the request is passed to the cluster. The cluster defines the host, HTTP/HTTPS, and the load balancing policy.

      clusters:
        - name: service_google
          connect_timeout: 0.25s
          type: LOGICAL_DNS
          dns_lookup_family: V4_ONLY
          lb_policy: ROUND_ROBIN
          hosts: [{ socket_address: { address: google.com, port_value: 443 }}]
          tls_context: { sni: www.google.com }
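Load balancing across several upstream hosts works the same way: list multiple endpoints and let the cluster's lb_policy distribute requests. A minimal sketch, assuming two upstream services named `service1` and `service2` on port 8000 (the names echo the demo; the port and v3 `load_assignment` shape are assumptions):

```yaml
# Hypothetical sketch: ROUND_ROBIN load balancing across two endpoints.
clusters:
  - name: local_services
    connect_timeout: 0.25s
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: local_services
      endpoints:
        - lb_endpoints:
            - endpoint:
                address:
                  socket_address: { address: service1, port_value: 8000 }
            - endpoint:
                address:
                  socket_address: { address: service2, port_value: 8000 }
```

With ROUND_ROBIN, each worker thread cycles through the healthy endpoints in order for the connections it owns.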
  17. Admin: The admin section defines where the admin endpoint listens and where its access logs live.

      admin:
        access_log_path: /tmp/admin_access.log
        address:
          socket_address: { address: 0.0.0.0, port_value: 9901 }
  18. Front Proxy Demo
     • 2 services [service1, service2]
     • Services colocated with a running service proxy
     • 3 containers will be deployed inside a docker-compose virtual network called envoymesh
     • All incoming requests are routed via the front Envoy, acting as a reverse proxy sitting on the edge of the envoymesh network
     • All traffic routed by the front Envoy is routed to the service Envoys
     Source: https://www.envoyproxy.io/docs/envoy/latest/start/sandboxes/front_proxy
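The topology above can be sketched as a Compose file: one edge Envoy plus the two services on a shared envoymesh network. This is only an illustrative sketch, not the sandbox's actual file; the image tag, build contexts, config path, and ports are assumptions.

```yaml
# Hypothetical docker-compose sketch of the front-proxy demo topology.
services:
  front-envoy:
    image: envoyproxy/envoy:v1.25-latest     # assumed tag
    volumes:
      - ./front-envoy.yaml:/etc/envoy/envoy.yaml
    networks: [envoymesh]
    ports:
      - "8080:8080"   # proxied HTTP traffic enters here
      - "9901:9901"   # front Envoy admin interface
  service1:
    build: ./service   # assumed: app container with a colocated service Envoy
    networks: [envoymesh]
  service2:
    build: ./service
    networks: [envoymesh]
networks:
  envoymesh: {}
```

Only the front Envoy publishes ports; the services are reachable solely through the envoymesh network, so all external traffic must pass through the edge proxy.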
  19. Demo: During this demo, I am going to show you how to access routes via HTTP and HTTPS, how to use load balancing, and how to access the Envoy admin interface to get information about Envoy. Source: https://www.envoyproxy.io/docs/envoy/latest/start/sandboxes/front_proxy