
High Performance microservices with HTTP/2 and gRPC

Julien Viet

May 12, 2017

Transcript

  1. High performance microservices
    with HTTP/2 and gRPC


  2. Julien Viet
    Open source developer for 15+ years
    Current @vertx_project lead
    Principal software engineer
    Marseille Java User Group Leader
    https://www.julienviet.com/
    http://github.com/vietj
    @julienviet
    https://www.mixcloud.com/cooperdbi/


  3. (image-only slide)

  4. (image-only slide)

  5. [Chart] Page load time (ms) as bandwidth increases from 1 Mbps to 10 Mbps
    [Chart] Page load time (ms) as latency decreases from 200 ms to 20 ms


  6. Web performance state of the art


  7. Persistent connections


  8. Multiple connections


  9. Domain sharding


  10. Content Delivery Networks


  11. Caching


  12. Compression and minification


  13. Still a bottleneck!
    [Diagram] GET → OK, GET → OK: each request waits for the previous response on the connection


  14. HTTP/2
    A better TCP transport for the same HTTP requests and
    responses
    Same HTTP semantics
    RFC 7540: Hypertext Transfer Protocol Version 2 (HTTP/2)
    RFC 7541: HPACK: Header Compression for HTTP/2


  15. HTTP/2
    H2
    TLS/SSL
    Mandatory for browsers
    ALPN extension to negotiate HTTP/1 or HTTP/2
    H2C
    Clear text
    Via HTTP/1 upgrade or directly
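
    In Vert.x terms (shown later in this deck), the h2 / h2c choice is a server
    option; a minimal sketch, assuming Vert.x 3.x and a JVM where ALPN is
    available (port and certificate paths are placeholders):

    import io.vertx.core.Vertx;
    import io.vertx.core.http.HttpServerOptions;
    import io.vertx.core.net.PemKeyCertOptions;

    public class H2Server {
      public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();

        // h2: TLS + ALPN so clients can negotiate HTTP/2 or fall back to HTTP/1.1
        HttpServerOptions options = new HttpServerOptions()
            .setSsl(true)
            .setUseAlpn(true)
            .setPemKeyCertOptions(new PemKeyCertOptions()
                .setKeyPath("server-key.pem")      // placeholder
                .setCertPath("server-cert.pem"));  // placeholder

        vertx.createHttpServer(options)
            .requestHandler(req -> req.response().end("Hello over " + req.version()))
            .listen(8443);

        // h2c: simply leave SSL off; clear-text clients connect directly
        // or via an HTTP/1.1 upgrade.
      }
    }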


  16. HTTP/2 framed protocol
    Defines a set of frames encoded in binary on a single
    connection
    Settings
    Headers
    Data
    Flow control
    Push, Priority, Ping
    Reset


  17. Settings frames
    First frame exchanged by client and server
    Max concurrency
    Max frame size
    etc…
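
    With Vert.x these knobs map onto an Http2Settings object sent in the initial
    SETTINGS frame; a small sketch, assuming Vert.x 3.x (the values are purely
    illustrative):

    import io.vertx.core.http.Http2Settings;
    import io.vertx.core.http.HttpServerOptions;

    // SETTINGS advertised by the server on every new connection
    HttpServerOptions options = new HttpServerOptions()
        .setInitialSettings(new Http2Settings()
            .setMaxConcurrentStreams(100)    // max concurrent streams per connection
            .setMaxFrameSize(16384)          // max frame payload size, in bytes
            .setInitialWindowSize(65535));   // per-stream flow control window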


  18. Request headers
    GET /index.html HTTP/1.1\r\n
    Host: www.free.fr\r\n
    User-Agent: Mozilla/5.0\r\n
    Accept-Encoding: text/html\r\n
    Accept-Language: en-US\r\n
    \r\n
    00003501250000000300
    0000000f824488355217
    caf3a69a3f874189f1e3
    c2f2d852af2d9f7a88d0
    7f66a281b0dae0508749
    7ca589d34d1f51842d4b
    70dd
    length type flags stream_id
    2x smaller
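
    Decoding the first 9 bytes of the dump against the frame header layout above:
    00 00 35 (length = 53 bytes of HPACK-encoded headers), 01 (type = HEADERS),
    25 (flags = END_STREAM | END_HEADERS | PRIORITY), 00 00 00 03 (stream_id = 3).
    Nine bytes of frame header plus 53 bytes of payload is 62 bytes, against roughly
    124 bytes for the HTTP/1.1 text, hence "2x smaller".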


  19. Subsequent request headers
    GET /products.html HTTP/1.1\r\n
    Host: www.free.fr\r\n
    User-Agent: Mozilla/5.0\r\n
    Accept-Encoding: text/html\r\n
    Accept-Language: en-US\r\n
    \r\n
    00001701250000000500
    0000000f82448aaec3c9
    691285e74d347f87c2c1
    c0bf
    4x smaller


  20. Response headers + data
    HTTP/1.1 200 OK\r\n
    Content-Type: text/html\r\n
    Cache-Control: max-age=86400\r\n
    Content-Length: 37\r\n
    \r\n
    <html><body>Hello World</body></html>

    0000250001000000033c
    68746d6c3e3c626f6479
    3e48656c6c6f20576f72
    6c643c2f626f64793e3c
    2f68746d6c3e
    00001f01240000000300
    0000000f885f87497ca5
    89d34d1f588aa47e561c
    c581e71a003f5c023337
    length type flags stream_id


  21. DEMO


  22. HTTP/2 concurrency works!


  23. Scalable microservices with HTTP/2


  24. Target architectures
    [Diagram] Services sitting behind a gateway, and services calling backend services


  25. High performance microservices … what is performance?
    Performance is hard to define, but in this talk we will look at how the
    HTTP/2 protocol can help services scale


  26. Context
    0.1 ms < ping < 1 ms
    1 Gbps < bandwidth < 10 Gbps
    1 ms < service time < 100 ms
    100 B < body size < 10 kB


  27. HTTP/1 vs HTTP/2
    [Diagram] Benchmark setup: a client calls a server, the server calls a backend
    with a 20 ms think time; the server-to-backend link uses HTTP/1 vs HTTP/2


  28. [Chart] Performed vs planned throughput (req/sec): HTTP/1 with 8 connections


  29. HTTP/1.1
    [Sequence diagram] The client sends a GET to the server, the server forwards a
    request to the backend, waits ~20 ms for the response, then returns 200 OK


  30. [Chart] Performed vs planned throughput (req/sec): HTTP/1 with 8 connections
    vs HTTP/2 with 1 connection and max_concurrent_streams = 20


  31. HTTP/2 multiplexing
    [Sequence diagram] Several GET requests are multiplexed on a single connection,
    so the 20 ms backend think times overlap instead of queueing


  32. Capacity planning
    Configure MAX_CONCURRENT_STREAMS
    Backend connection sizing
    Avoid thread pools
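
    On the client side of a Vert.x service, that sizing maps onto a few options;
    a sketch assuming Vert.x 3.x, with example values that would come out of your
    own capacity planning:

    import io.vertx.core.http.HttpClientOptions;
    import io.vertx.core.http.HttpVersion;

    HttpClientOptions options = new HttpClientOptions()
        .setProtocolVersion(HttpVersion.HTTP_2)
        .setHttp2MaxPoolSize(4)            // connections kept to the backend
        .setHttp2MultiplexingLimit(20);    // streams per connection, at most the
                                           // backend's MAX_CONCURRENT_STREAMS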


  33. Thread pool concurrency


  34. [Chart] Performed vs planned throughput (req/sec): HTTP/2 with a thread pool


  35. Eclipse Vert.x is a toolkit for
    building reactive and polyglot
    applications for the JVM


  36. Vert.x
    Latest and greatest Vert.x 3.4.1
    Scala and Kotlin support
    RxJava improvements
    MQTT server
    Kafka client
    gRPC support
    Web client
    Infinispan cluster manager
    and much more!


  37. Toolkit
    Embeddable
    Composable
    Modular
    Minimum dependencies
    Classloading / Injection free
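
    "Embeddable" really means there is no container to start: a plain main method
    is enough; a minimal sketch (class name and port are illustrative):

    import io.vertx.core.Vertx;

    public class Main {
      public static void main(String[] args) {
        // No container, no classloader tricks, no injection framework
        Vertx vertx = Vertx.vertx();
        vertx.createHttpServer()
            .requestHandler(req -> req.response().end("embedded"))
            .listen(8080);
      }
    }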


  38. (image-only slide)

  39. Reactive
    Non blocking
    Event driven
    Distributed
    Rxified APIs
    Reactive-streams


  40. HTTP/2 with Vert.x
    Client / Server
    h2 / h2c
    HTTP Request / response API
    HTTP/2 specific features for extensions
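
    The request/response API is the one already used for HTTP/1; a sketch of an h2c
    server and an HTTP/2 client talking to it, assuming Vert.x 3.x (the port is a
    placeholder):

    import io.vertx.core.Vertx;
    import io.vertx.core.http.HttpClientOptions;
    import io.vertx.core.http.HttpVersion;

    public class H2cExample {
      public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();

        // Server: without TLS it accepts h2c (direct or via HTTP/1.1 upgrade) and HTTP/1.1
        vertx.createHttpServer()
            .requestHandler(req ->
                // Same request/response API whatever the negotiated version
                req.response().end("served over " + req.version()))
            .listen(8080, ar -> {
              // Client: speak HTTP/2 directly, skipping the HTTP/1.1 upgrade
              HttpClientOptions options = new HttpClientOptions()
                  .setProtocolVersion(HttpVersion.HTTP_2)
                  .setHttp2ClearTextUpgrade(false);
              vertx.createHttpClient(options)
                  .getNow(8080, "localhost", "/", resp ->
                      resp.bodyHandler(body -> System.out.println(body)));
            });
      }
    }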


  41. Event driven
    NIO selectors
    disk operations
    timers
    messages
    database
    etc…


  42. Reactor pattern with Event Loop
    Single thread


  43. Events at scale
    [Diagram] Many concurrent event sources (sockets, files, timers, database rows,
    messages) dispatched to their handlers on the event loop:
    buffer -> {…}   timerID -> {…}   asyncFile -> {…}   rows -> {…}   message -> {…}


  44. Event Loop benefits
    Easier to scale
    Mechanical sympathy
    Simple concurrency model
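
    A sketch of what this buys you, assuming Vert.x 3.x: inside a verticle every
    handler below runs on the same event loop thread, so a plain field is safe to
    share, as long as no handler blocks.

    import io.vertx.core.AbstractVerticle;
    import io.vertx.core.Vertx;

    public class EventLoopVerticle extends AbstractVerticle {

      private int ticks;   // plain field: only touched by this verticle's event loop

      @Override
      public void start() {
        vertx.setPeriodic(1000, timerId -> ticks++);        // timer event

        vertx.createHttpServer()
            .requestHandler(req ->                          // network event, same thread
                req.response().end("ticks=" + ticks))
            .listen(8080);
      }

      public static void main(String[] args) {
        Vertx.vertx().deployVerticle(new EventLoopVerticle());
      }
    }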


  45. Event Loop concurrency


  46. [Chart] Performed vs planned throughput (req/sec): HTTP/2 with a thread pool
    vs HTTP/2 non-blocking


  47. Going multicore


  48. Multi-reactor pattern
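
    In Vert.x the multi-reactor falls out of deploying several instances of the same
    verticle, one event loop each; a sketch reusing the hypothetical EventLoopVerticle
    from above:

    import io.vertx.core.DeploymentOptions;
    import io.vertx.core.Vertx;

    public class MultiReactor {
      public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();

        // One instance per core, each pinned to its own event loop; instances
        // listening on the same port have incoming connections shared between them
        Vertx.vertx().deployVerticle("com.example.EventLoopVerticle",   // hypothetical class
            new DeploymentOptions().setInstances(cores));
      }
    }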


  49. [Chart] Performed vs planned throughput (req/sec): HTTP/2 blocking vs
    HTTP/2 non-blocking with 1 core vs HTTP/2 non-blocking with 2 cores


  50. Scalable microservices with gRPC


  51. RPC on top of HTTP/2
    Protobuf 3
    Bidirectional streaming
    vertx-grpc / vertx-grpc-compiler-java
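
    A sketch of the server side with vertx-grpc, assuming Vert.x 3.x, a hypothetical
    helloworld.proto defining a Greeter service with a SayHello RPC, and the Vert.x
    flavour of the generated stubs (class names depend on your proto and compiler setup):

    import io.grpc.examples.helloworld.GreeterGrpc;   // generated, hypothetical
    import io.grpc.examples.helloworld.HelloReply;
    import io.grpc.examples.helloworld.HelloRequest;
    import io.vertx.core.Future;
    import io.vertx.core.Vertx;
    import io.vertx.grpc.VertxServerBuilder;

    public class GrpcServer {
      public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();

        VertxServerBuilder.forAddress(vertx, "localhost", 8080)
            .addService(new GreeterGrpc.GreeterVertxImplBase() {
              @Override
              public void sayHello(HelloRequest request, Future<HelloReply> future) {
                // Runs on the Vert.x event loop, no extra thread pool involved
                future.complete(HelloReply.newBuilder()
                    .setMessage("Hello " + request.getName())
                    .build());
              }
            })
            .build()
            .start(ar -> System.out.println("gRPC server started: " + ar.succeeded()));
      }
    }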


  52. Vert.x integration
    Event loop integration
    SSL/TLS integration


  53. DEMO


  54. REACTIVE ECOSYSTEM


  55. Building Reactive Microservices in Java
    https://developers.redhat.com/promotions/building-reactive-microservices-in-java/


  56. TL;DR
    Unleash concurrency with HTTP/2 and gRPC
    Non-blocking is a key factor for high concurrency
    Vert.x is a great fit for HTTP/2
    Reactive ecosystem
    Easy to scale


  57. Thank you
    Modern app development with Eclipse Vert.x and RxJava at
    11:20 / Room 139
    Q&A
    Come grab your Vert.x sticker!
