sync or async

We live in exciting times. People say servlets - as we know them - are dead. The future is bright and asynchronous. Bright and event-looped. Bright and reactive.
People judge: if it's not async / reactive / Node.js'ed, it's a dead end. But the world of sync and async is not black and white. It's shady, swampy and full of surprises. In this talk, we will explore some of these shades of grey.

In this talk, we will give an opinionated answer to whether "reactive all the things" is the way to go, at least in the context of web applications. We will explore the spectrum between traditionally synchronous servlets and a fully blown async, reactive approach. With the support of numbers, measurements and experiments, we will investigate the simple, traditional approach to web requests and see where it works and where it fails, based on some typical web application use cases. With its suboptimal resource utilization as a baseline, we will look at other approaches and evaluate them from different perspectives: performance (an obvious one) but also readability, familiarity and general common sense.

Jakub Marchwicki

September 19, 2019

Transcript

  1. Jakub Marchwicki <@kubem>: from sync to async, the swampy grounds of handling HTTP requests
  2. What is The Point: a traffic which justifies the Netflix; a reactive architecture
  3. What is The Point: a traffic which justifies the Netflix; a reactive architecture; a way of solving business goals
  4. https://dzone.com/articles/spring-boot-20-webflux-reactive-performance-test: (...) better performance than synchronous code on high-concurrency scenarios

        public Flux<User> getUsersAsync() {
            return Flux
                .fromIterable(userList)
                .delaySubscription(Duration.ofMillis(delay));
        }
  5. Synchronous, multithreaded, blocking: each HTTP request is handled exclusively by a single thread (end-to-end); the number of threads (and therefore the number of concurrent connections) is limited by the pool size; the processing time of each request determines the throughput.
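    The throughput ceiling follows directly from this model: with a fixed pool and end-to-end blocking, maximum throughput is pool size divided by per-request latency. A minimal sketch of the arithmetic, using the talk's own figures (200 threads, ~450 ms):

    ```java
    public class SyncThroughput {
        // With a fixed thread pool and blocking request handling, the
        // long-term throughput ceiling is: threads / per-request latency.
        public static double maxRequestsPerSecond(int threads, double latencySeconds) {
            return threads / latencySeconds;
        }

        public static void main(String[] args) {
            // 200 threads at ~450 ms per request -> ~444 requests/second
            System.out.println(maxRequestsPerSecond(200, 0.45));
        }
    }
    ```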
  6. ~450 ms/request with up to 200 threads; ~444 requests / second on a long-term average
  7. ~450 ms/request with up to 200 threads; ~444 requests / second on a long-term average. Little's Law relates the number of items in the queue, the average time spent in the queue and the arrival rate. https://www.process.st/littles-law/
  8. L = λ * W. Customers arrive at a rate of 10 per hour and stay an average of 0.5 hour, so the average number of customers in the store at any time is 5. If customers arrive at a rate of 20 per hour and still stay an average of 0.5 hour, the average number of customers in the store at any time is 10.
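    The store example is pure arithmetic; a minimal sketch of Little's Law with the slide's numbers (no framework assumed):

    ```java
    public class LittlesLaw {
        // L = lambda * W: the average number of items in the system equals
        // the arrival rate times the average time spent in the system.
        public static double avgInSystem(double arrivalRatePerHour, double avgStayHours) {
            return arrivalRatePerHour * avgStayHours;
        }

        public static void main(String[] args) {
            System.out.println(avgInSystem(10, 0.5)); // store example: 5 customers
            System.out.println(avgInSystem(20, 0.5)); // doubled arrival rate: 10 customers
        }
    }
    ```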
  9.
        @GetMapping("/loans/{loanId}")
        public LoanInformation loanDetails(@PathVariable("loanId") UUID loanId) {
            var riskInformation = riskClient.getRiskAssessmentDetails(loanId);
            var loanDetails = loansClient.getLoanDetails(loanId);
            var member = membersClient.getMemberDetails(loanDetails.getMemberId());
            return LoanInformation
                .fromLoanDetails(loanDetails)
                .requestId(UUID.randomUUID())
                .member(member)
                .riskAssessment(riskInformation)
                .build();
        }
  10. Upon startup, Tomcat will create threads based on the value set for minSpareThreads (10) and increase that number based on demand, up to the number of maxThreads (200). If the maximum number of threads is reached, and all threads are busy, incoming requests are placed in a queue (acceptCount - 100) to wait for the next available thread. The server will only continue to accept a certain number of concurrent connections (as determined by maxConnections - 200). https://www.datadoghq.com/blog/tomcat-architecture-and-performance/
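    As a hedged sketch: in a Spring Boot application these same Tomcat knobs are exposed as properties (names as of Spring Boot 2.x; the values below mirror the slide's figures, not recommendations, and `max-connections` in particular has a much higher Tomcat default):

    ```properties
    # maximum size of the request-handling thread pool (slide: maxThreads = 200)
    server.tomcat.max-threads=200
    # threads kept alive when idle (slide: minSpareThreads = 10)
    server.tomcat.min-spare-threads=10
    # queue length when all threads are busy (slide: acceptCount = 100)
    server.tomcat.accept-count=100
    # hard cap on concurrent connections the server will accept
    server.tomcat.max-connections=200
    ```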
  11. Undertow uses XNIO as the default connector. XNIO (...) default configuration (...) is I/O threads initialized to the number of your logical threads and the worker threads equal to 8 * CPU cores. So on a typical 4-core Intel CPU with hyper-threading you will end up with 8 I/O threads and 64 worker threads. https://jmnarloch.wordpress.com/2016/04/26/spring-boot-tuning-your-undertow-application-for-throughput/
  12. Undertow uses XNIO as the default connector. XNIO (...) default configuration (...) is I/O threads initialized to the number of your logical threads and the worker threads equal to 8 * CPU cores. So on a typical 4-core Intel CPU with hyper-threading you will end up with 8 I/O threads and 64 worker threads. https://jmnarloch.wordpress.com/2016/04/26/spring-boot-tuning-your-undertow-application-for-throughput/

        var ioThreads = Math.max(Runtime.getRuntime().availableProcessors(), 2);
        var workerThreads = ioThreads * 8;

    availableProcessors() is the number of processors available to the JVM. Number of logical cores: Core i7 with HyperThreading: 8; Q6700: 4; docker --cpus=1 on a quad core: 8 (1 from JDK 10 on); docker --cpuset-cpus=0,1 on a quad core: 2. Compare it to 100 or 1000, the defaults at Tomcat or Jetty.
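    The slide's sizing rule, as a runnable sketch; the formula is the XNIO default quoted above, but the printed values depend on the machine and any container CPU limits:

    ```java
    public class UndertowDefaults {
        // XNIO-style defaults: ioThreads = max(availableProcessors, 2),
        // workerThreads = ioThreads * 8
        public static int ioThreads() {
            return Math.max(Runtime.getRuntime().availableProcessors(), 2);
        }

        public static int workerThreads() {
            return ioThreads() * 8;
        }

        public static void main(String[] args) {
            System.out.println("io threads:     " + ioThreads());
            System.out.println("worker threads: " + workerThreads());
        }
    }
    ```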
  13. TAKEAWAY: Full stack developer doesn't mean the same technology on frontend and backend. Seniority comes from understanding the layers beyond the code you craft.
  14.
        @GetMapping("/loans/{loanId}")
        public LoanInformation loanDetails(@PathVariable("loanId") UUID loanId) {
            // on-demand computation, takes time (~600 ms)
            var riskInformation = riskClient.getRiskAssessmentDetails(loanId);
            // these two are relatively fast, direct lookups (~150 - 300 ms)
            var loanDetails = loansClient.getLoanDetails(loanId);
            var member = membersClient.getMemberDetails(loanDetails.getMemberId());
            return LoanInformation
                .fromLoanDetails(loanDetails)
                .requestId(UUID.randomUUID())
                .member(member)
                .riskAssessment(riskInformation)
                .build();
        }
  15. Operations are done by choosing a worker thread from a thread pool. The I/O thread is returned to the pool to run other requests, and processes the upstream response asynchronously too. The worker thread notifies the request thread when its work is complete. To offset the risks of backend latency, throttling mechanisms and circuit breakers help keep the blocking systems stable and resilient.
  16.
        @GetMapping("/loans/{loanId}")
        public CompletableFuture<LoanInformation> loanDetails(@PathVariable("loanId") UUID loanId) {
            return supplyAsync(() -> loansClient.getLoanDetails(loanId), executor)
                .thenApply(l -> {
                    Member memberDetails = membersClient.getMemberDetails(l.getMemberId());
                    return Tuple.of(l, memberDetails);
                })
                .thenCombine(
                    supplyAsync(() -> riskClient.getRiskAssessmentDetails(loanId), executor),
                    (loanDetailsMember, riskInformation) -> LoanInformation
                        .fromLoanDetails(loanDetailsMember.getLeft())
                        .requestId(UUID.randomUUID())
                        .member(loanDetailsMember.getRight())
                        .riskAssessment(riskInformation)
                        .build());
        }
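    A self-contained sketch of the same composition, with the three clients stubbed out as plain string functions (the names and return values are illustrative, not the talk's actual services): loan then member run sequentially, while the risk assessment runs in parallel and is merged in with thenCombine.

    ```java
    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import static java.util.concurrent.CompletableFuture.supplyAsync;

    public class AsyncComposition {
        // hypothetical stand-ins for the loans, members and risk clients
        static String getLoanDetails(String id)     { return "loan:" + id; }
        static String getMemberDetails(String loan) { return "member-of:" + loan; }
        static String getRiskAssessment(String id)  { return "risk:" + id; }

        public static String loanInformation(String loanId, ExecutorService executor) {
            // loan -> member is a sequential dependency; risk runs in parallel
            CompletableFuture<String> loanAndMember =
                supplyAsync(() -> getLoanDetails(loanId), executor)
                    .thenApply(loan -> loan + "|" + getMemberDetails(loan));
            return loanAndMember
                .thenCombine(supplyAsync(() -> getRiskAssessment(loanId), executor),
                             (lm, risk) -> lm + "|" + risk)
                .join();
        }

        public static void main(String[] args) {
            ExecutorService executor = Executors.newFixedThreadPool(4);
            System.out.println(loanInformation("42", executor));
            executor.shutdown();
        }
    }
    ```

    With this shape the total latency is roughly max(loan + member, risk) instead of the sum of all three calls.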
  17. [diagram: a requests thread pool handing work to a service thread pool; lookup member (~300 ms), search members (~600 ms), lookup loan (~150 ms)]
  18. [same diagram, with the size of the service thread pool marked as an open question]
  19. TAKEAWAY: Thread pool tuning is tied to what the application needs. Understand the nature of the traffic.
  20.
        @GetMapping("/loans/{loanId}")
        public Observable<LoanInformation> loanDetails(@PathVariable("loanId") UUID loanId) {
            Single<LoanDetails> loanDetails = loansClient.getLoanDetailsSingle(loanId).cache();
            Single<Member> member = loanDetails
                .flatMap(l -> membersClient.getMemberDetailsSingle(l.getMemberId()));
            Single<RiskInformation> riskInformation = riskClient.getRiskAssessmentDetailsSingle(loanId);
            return Single.zip(
                loanDetails, member, riskInformation,
                (l, m, r) -> LoanInformation
                    .fromLoanDetails(l)
                    .requestId(UUID.randomUUID())
                    .member(m)
                    .riskAssessment(r)
                    .build()
            ).toObservable();
        }
  21. The reactive system promise: responsive (handle requests in a reasonable time); resilient (stay responsive in the face of failures); elastic (scale up and down, handle the load with minimal resources); message driven (interactions using asynchronous message passing).
  22. Reactive programming, in technical terms: handling huge volumes of data in a multi-user environment. Efficiency gains: data stays on the same CPU, use of CPU-level caches, fewer context switches. A 25% increase in throughput corresponding with a 25% reduction in CPU utilization.
  23. Reactive programming, in technical terms: handling huge volumes of data in a multi-user environment. Efficiency gains: data stays on the same CPU, use of CPU-level caches, fewer context switches. A 25% increase in throughput corresponding with a 25% reduction in CPU utilization. MOAR TRAFFIC!
  24. The reactive promise, but... Blocking systems are easy to grok and debug: a thread is always doing a single operation. The event loop's stack trace is meaningless when trying to follow a request. Unhandled exceptions create dangling resources (swallowed exceptions).
  25.
        final List<String> results = getQueries().stream() // there are 6 db queries
            .map(query -> db.apply(query))
            .sorted(naturalOrder())
            .collect(Collectors.toList());

        final List<String> results = Observable.from(getQueries()) // there are 6 db queries
            .flatMap(query -> Async.start(() -> db.apply(query), scheduler))
            .toSortedList()
            .toBlocking()
            .single();

        final List<String> results = new ArrayList<>();
        for (Query q : getQueries()) {
            String result = db.apply(q);
            results.add(result);
        }
        results.sort(naturalOrder());
  26.
        [EL Warning]: 2009-08-29 12:53:13.718--Exception [EclipseLink-4002] (Eclipse Persistence Services - 1.1.2.v20090612-r4475): org.eclipse.persistence.exceptions.DatabaseException
        Internal Exception: java.sql.BatchUpdateException: The statement was aborted because it would have caused a duplicate key value in a unique or primary key constraint or unique index identified by 'SQL090829125312890' defined on 'REMINDISSUE'. Error Code: 20000
        Exception in thread "main" javax.persistence.RollbackException: Exception [EclipseLink-4002] (Eclipse Persistence Services - 1.1.2.v20090612-r4475): org.eclipse.persistence.exceptions.DatabaseException
        Internal Exception: java.sql.BatchUpdateException: The statement was aborted because it would have caused a duplicate key value in a unique or primary key constraint or unique index identified by 'SQL090829125312890' defined on 'REMINDISSUE'. Error Code: 20000
            at org.eclipse.persistence.internal.jpa.transaction.EntityTransactionImpl.commitIntern
            at org.eclipse.persistence.internal.jpa.transaction.EntityTransactionImpl.commit(Entit
            at com.example.dao.jpa.JpaDAO.commit(JpaDAO.java:99)
            at com.example.dao.jpa.JpaDAO.persist(JpaDAO.java:41)
            at com.example.dao.jpa.JpaDAO.persist(JpaDAO.java:1)
            at com.example.test.TestDAO.main(TestDAO.java:44)
  27.
        public static void main(String[] args) {
            Observable.empty()
                .observeOn(Schedulers.io())
                .toBlocking()
                .first();
        }

        Exception in thread "main" java.util.NoSuchElementException: Sequence contains no el
            at rx.internal.operators.OperatorSingle$ParentSubscriber.onCompleted(OperatorSin
            at rx.internal.operators.OperatorTake$1.onCompleted(OperatorTake.java:53)
            at rx.internal.operators.OperatorObserveOn$ObserveOnSubscriber.pollQueue(Operato
            at rx.internal.operators.OperatorObserveOn$ObserveOnSubscriber$1.call(OperatorOb
            at rx.internal.schedulers.ScheduledAction.run(ScheduledAction.java:55)
            at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
            at java.util.concurrent.FutureTask.run(FutureTask.java:266)
            at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$2
            at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Sche
            at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:114
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:61
            at java.lang.Thread.run(Thread.java:745)
  28. TAKEAWAY: Both WebMVC / Servlet / synchronous and WebFlux / RxJava / reactive have a reason to exist.
  29. TAKEAWAY: Know your clients: users and their flows. Know your expectations: what you are optimizing for. Know your domain. Know your opportunity costs.