
Better performance with HTTP/2

Julien Viet
November 09, 2016

Devoxx BE 2017 presentation.

Transcript

  1. @julienviet @vertx_project #Devoxx #vertx /me • Open source developer for 15 years • Current @vertx_project lead • Software engineer at Red Hat • Marseille JUG leader • http://github.com/vietj • @julienviet
  2. @julienviet @vertx_project #Devoxx #vertx Latency versus Bandwidth impact [Two charts: Page Load Time as bandwidth increases (1 Mbps to 10 Mbps) and Page Load Time as latency decreases (200 ms down to 20 ms)]
  3. @julienviet @vertx_project #Devoxx #vertx Current solutions • Multiple connections • CDN • Concatenation / spriting • Compression / minification • Caching • Sharding
  4. @julienviet @vertx_project #Devoxx #vertx HTTP/2 • Evolution of SPDY • Same semantics as HTTP/1 • Changes the format on the wire • Uses a single connection • HTTPS mandatory for browsers • Does not affect WebSockets
  5. [Diagram: the same headers repeated across many requests are compressed]
  6. @julienviet @vertx_project #Devoxx #vertx Benchmark [Chart: throughput ratio (0 to 1) vs aimed throughput (200 to 800 req/sec)] • Constant throughput benchmark • Pace requests at different rates • Log the ratio: requests performed / requests planned
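The pacing arithmetic behind this benchmark can be sketched in plain Java (a minimal illustration; names here are hypothetical, not from the actual benchmark code at vietj/http2-bench):

```java
public class PacedBenchmark {

    // Requests that should have been sent after elapsedMillis
    // when aiming at targetReqPerSec.
    static long plannedRequests(long targetReqPerSec, long elapsedMillis) {
        return targetReqPerSec * elapsedMillis / 1000;
    }

    // The metric the charts plot: requests performed / requests planned.
    static double throughputRatio(long performed, long planned) {
        return (double) performed / planned;
    }

    public static void main(String[] args) {
        // Aim at 800 req/sec for 5 seconds: 4000 requests planned.
        long planned = plannedRequests(800, 5_000);
        // Suppose the server only kept up with 3000 of them.
        System.out.println(planned);                         // 4000
        System.out.println(throughputRatio(3_000, planned)); // 0.75
    }
}
```

A ratio of 1.0 means the client kept the planned pace; the charts that follow show where each protocol setup starts dropping below it.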
  7. @julienviet @vertx_project #Devoxx #vertx #reactive [Chart: throughput ratio (0 to 1) vs aimed throughput (200 to 800 req/sec) — HTTP/1, 8 pipelined connections]
  8. @julienviet @vertx_project #Devoxx #vertx #reactive [Chart: throughput ratio (0 to 1) vs aimed throughput (200 to 1200 req/sec) — HTTP/1 with 8 connections vs HTTP/2 with 1 connection, max_concurrent_streams 20]
  9. @julienviet @vertx_project #Devoxx #vertx #reactive [Chart: throughput ratio (0 to 1) vs aimed throughput (1000 to 5000 req/sec) — HTTP/2, 1 connection, max_concurrent_streams 400]
  10. @julienviet @vertx_project #Devoxx #vertx http://vertx.io Vert.x is a toolkit for building reactive and polyglot applications for the JVM
  11. public class Server extends AbstractVerticle { public void start() { vertx.createHttpServer() .requestHandler(request -> { request.response() .putHeader("content-type", "text/plain") .end("Hello from Vert.x"); }) .listen(8080); } }
  12. shared class Server() extends Verticle() { start() => vertx.createHttpServer() .requestHandler((req) => req.response() .putHeader("content-type", "text/plain") .end("Hello from Vert.x!")) .listen(8080); }
  13. class Server extends ScalaVerticle { override def start(): Unit = { vertx .createHttpServer() .requestHandler(_.response() .putHeader("content-type", "text/plain") .end("Hello from Vert.x")) .listen(8080) } }
  14. class Server : AbstractVerticle() { override fun start() { vertx.createHttpServer() .requestHandler() { req -> req.response() .putHeader("content-type", "text/plain") .end("Hello from Vert.x") } .listen(8080) } }
  15. @julienviet @vertx_project #Devoxx #vertx Blocking user service interface UserService { User loadUser(String userName) throws NotFoundException; void close(); } User user = service.loadUser("julien"); System.out.println(user.getName()); service.close(); System.out.println("done");
  16. @julienviet @vertx_project #Devoxx #vertx Asyncifying the service interface UserService { void loadUser(String userName, Handler<AsyncResult<User>> handler); void close(Handler<Void> handler); } @FunctionalInterface interface Handler<E> { /** Something has happened, so handle it. */ void handle(E event); }
  17. @julienviet @vertx_project #Devoxx #vertx Asyncifying the service userService.loadUser("julien", (Handler<AsyncResult<User>> event) -> { if (event.succeeded()) { User user = event.result(); System.out.println(user.getName()); } else { event.cause().printStackTrace(); } userService.close(v -> { System.out.println("done"); }); });
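This callback style can be tried end-to-end with a self-contained sketch in plain Java (the in-memory `UserService` and simplified `AsyncResult` below are stand-ins for illustration, not the Vert.x classes):

```java
import java.util.Map;
import java.util.function.Consumer;

public class AsyncDemo {

    // Simplified stand-in for Vert.x's AsyncResult: either a result or a cause.
    static class AsyncResult<T> {
        final T result; final Throwable cause;
        AsyncResult(T result, Throwable cause) { this.result = result; this.cause = cause; }
        boolean succeeded() { return cause == null; }
        T result() { return result; }
        Throwable cause() { return cause; }
    }

    // Asynchronous service: results are delivered to a handler, never returned.
    static class UserService {
        final Map<String, String> users = Map.of("julien", "Julien Viet");
        void loadUser(String userName, Consumer<AsyncResult<String>> handler) {
            String name = users.get(userName);
            handler.accept(name != null
                ? new AsyncResult<>(name, null)
                : new AsyncResult<>(null, new Exception("not found: " + userName)));
        }
        void close(Runnable handler) { handler.run(); }
    }

    public static void main(String[] args) {
        UserService service = new UserService();
        service.loadUser("julien", event -> {
            if (event.succeeded()) {
                System.out.println(event.result());
            } else {
                event.cause().printStackTrace();
            }
            service.close(() -> System.out.println("done"));
        });
    }
}
```

The caller never blocks waiting for a return value: success, failure, and completion all arrive as events, which is what makes the style composable with an event loop.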
  18. @julienviet @vertx_project #Devoxx #vertx What can events be? • NIO selectors • disk operations • timers • messages • database • etc…
  19. @julienviet @vertx_project #Devoxx #vertx Events at scale [Diagram: many event sources feeding handlers] buffer -> {…} timerID -> {…} asyncFile -> {…} rows -> {…} message -> {…}
  20. @julienviet @vertx_project #Devoxx #vertx Event Loop benefits • Easier to scale • Mechanical sympathy • Simple concurrency model
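The single-threaded event-loop model behind these benefits can be sketched as a plain queue of tasks (a toy model for illustration, not Vert.x's Netty-based loop):

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class EventLoop {

    private final Queue<Runnable> tasks = new ArrayDeque<>();

    // All handlers run on the same thread, so no locks are needed
    // inside them: this is the "simple concurrency model" benefit.
    public void post(Runnable task) { tasks.add(task); }

    // Drain the queue; a handler may post further tasks, the way an
    // I/O completion or timer re-arms itself.
    public void run() {
        Runnable task;
        while ((task = tasks.poll()) != null) {
            task.run();
        }
    }

    public static void main(String[] args) {
        EventLoop loop = new EventLoop();
        StringBuilder log = new StringBuilder();
        loop.post(() -> {
            log.append("request;");
            // Simulate an async completion scheduled from inside a handler.
            loop.post(() -> log.append("response;"));
        });
        loop.run();
        System.out.println(log);
    }
}
```

Because handlers must return quickly for the queue to keep draining, blocking inside one stalls every other event — which is why the non-blocking variants win in the benchmarks later in the deck.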
  21. @julienviet @vertx_project #Devoxx #vertx HTTP/2 with Vert.x • Server and client • h2 with Jetty ALPN or native OpenSSL / BoringSSL • h2c • HTTP request / response API • HTTP/2 specific features for extensions like gRPC
  22. @julienviet @vertx_project #Devoxx #vertx Non blocking server public static void main(String[] args) { HttpServerOptions options = new HttpServerOptions() .setUseAlpn(true) .setSsl(true) .setKeyStoreOptions(new JksOptions() .setPath("keystore.jks") .setPassword("the-password")); Vertx vertx = Vertx.vertx(); HttpServer server = vertx.createHttpServer(options); … }
  23. @julienviet @vertx_project #Devoxx #vertx Non blocking server public static void main(String[] args) { … HttpServer server = vertx.createHttpServer(options); server.requestHandler(req -> { req.response() .putHeader("Content-Type", "text/plain") .end("Hello World"); }); server.listen(8080); }
  24. @julienviet @vertx_project #Devoxx #vertx Non blocking client public static void main(String[] args) { HttpClientOptions options = new HttpClientOptions() .setProtocolVersion(HttpVersion.HTTP_2) .setUseAlpn(true) .setSsl(true) .setTrustStoreOptions(new JksOptions() .setPath("truststore.jks") .setPassword("the-password")); HttpClient client = vertx.createHttpClient(options); … }
  25. @julienviet @vertx_project #Devoxx #vertx Non blocking client public static void main(String[] args) { … HttpClient client = vertx.createHttpClient(options); client.getNow("http://backend", resp -> { int status = resp.statusCode(); resp.bodyHandler(body -> { System.out.println(body.length()); }); }); }
  26. @julienviet @vertx_project #Devoxx #vertx Non blocking proxy server.requestHandler(req -> { HttpServerResponse resp = req.response(); client.getNow("http://backend", clientResp -> { int code = clientResp.statusCode(); resp.setStatusCode(code); clientResp.bodyHandler(body -> { resp.end(body); }); }); });
  27. @julienviet @vertx_project #Devoxx #vertx #reactive [Chart: throughput ratio (0 to 1) vs aimed throughput (1000 to 5000 req/sec) — HTTP/2 blocking vs HTTP/2 non blocking]
  28. @julienviet @vertx_project #Devoxx #vertx #reactive [Chart: throughput ratio (0 to 1) vs aimed throughput (1000 to 13000 req/sec) — HTTP/2 blocking vs HTTP/2 non blocking with 1 core vs 2 cores]
  29. @julienviet @vertx_project #Devoxx #vertx TL;DR • Unleash concurrency with HTTP/2 • Keep the good old HTTP semantics • Non blocking is a key factor for high concurrency • Vert.x is a great fit for HTTP/2 • reactive ecosystem • easy to scale
  30. @julienviet @vertx_project #Devoxx #vertx Thank you • Q&A • Come grab your Vert.x sticker! • Links • http://vertx.io • https://github.com/vietj/http2-bench • http://www.belshe.com/2010/05/24/more-bandwidth-doesnt-matter-much/ • https://www.infoq.com/presentations/latency-pitfalls • https://hpbn.co