The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, timing, and pricing of any features or functionality described for Oracle’s products may change and remains at the sole discretion of Oracle Corporation.
Platform threads are expensive:
• ~2 MB of memory consumed per thread
• ~1 ms to start a thread
• Context switch ~100 ns (varies by OS)
Even on an “infinite machine”, you would need several minutes just to start millions of them…
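The per-thread numbers above can be turned into a quick back-of-envelope calculation for one million platform threads. The figures (~2 MB stack, ~1 ms creation time) are the rough estimates from the slide, not measurements:

```java
// Back-of-envelope arithmetic: what one million platform threads would
// cost, using the rough per-thread estimates from the slide.
public class ThreadCostEstimate {
    public static void main(String[] args) {
        long threadCount = 1_000_000L;
        long stackMb = 2;       // ~2 MB of memory per platform thread
        long startMillis = 1;   // ~1 ms to start a platform thread

        long totalMemoryGb = threadCount * stackMb / 1024;          // ~1953 GB of stacks
        long totalStartupMin = threadCount * startMillis / 60_000;  // ~16 minutes to start

        System.out.println("memory: ~" + totalMemoryGb + " GB");
        System.out.println("startup: ~" + totalStartupMin + " minutes");
    }
}
```

Roughly two terabytes of stack memory and a quarter of an hour of startup time — which is why a thread-per-request model cannot scale to millions of platform threads.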
Threads: Virtual Threads
• Not green threads (the two are often confused, but they differ)
• Virtual threads are mounted on carrier threads
• When a virtual thread blocks, it is unmounted from its carrier, making room for others
• Once the virtual thread is unblocked, the scheduler mounts it again on a carrier thread
• A ForkJoinPool is used for the carrier threads
• Virtual threads have all the same properties as the threads we are used to
• Implemented with continuations (internal Continuation.yield(); no external API)
There can be millions of virtual threads:
• Always daemon threads
• Always belong to the thread group “VirtualThreads”
• Have no permissions with a SecurityManager (which is deprecated anyway)
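A minimal sketch of the points above, using the standard virtual-thread API (requires Java 21+). It starts ten thousand virtual threads — prohibitively expensive with platform threads — and shows that virtual threads are always daemon threads:

```java
// Minimal virtual-thread sketch (Java 21+): mass creation plus the
// "always daemon" property from the slide.
import java.util.concurrent.atomic.AtomicInteger;

public class VirtualThreadsDemo {
    public static void main(String[] args) throws InterruptedException {
        AtomicInteger counter = new AtomicInteger();

        // Start 10,000 virtual threads; each increments the counter.
        Thread[] threads = new Thread[10_000];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = Thread.ofVirtual().start(counter::incrementAndGet);
        }
        for (Thread t : threads) {
            t.join();
        }
        System.out.println("count: " + counter.get()); // 10000

        // Virtual threads are always daemon threads.
        Thread vt = Thread.ofVirtual().unstarted(() -> {});
        System.out.println("daemon: " + vt.isDaemon()); // true
    }
}
```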
Helidon SE
• Follows the functional style
• Reactive and non-blocking
• Very fast
• Tiny footprint
• Transparent, no “magic”

```java
Routing routing = Routing.builder()
        .get("/hello", (req, res) -> res.send("Hello World"))
        .build();
WebServer.create(routing)
        .start();
```

Helidon MP
• Declarative style (Jakarta EE, Spring Boot)
• Based on MicroProfile plus some Jakarta EE components
• Fast
• Small footprint
• Uses annotations and dependency injection

```java
@Path("hello")
public class HelloWorld {
    @GET
    public String hello() {
        return "Hello World";
    }
}
```

Choose your way!
Helidon has its own set of reactive operators that have no dependencies outside of the Helidon ecosystem. These operators can be used with java.util.concurrent.Flow-based reactive streams. A stream-processing operator chain can be easily constructed with io.helidon.common.reactive.Multi, or io.helidon.common.reactive.Single for streams with a single value.

```xml
<dependency>
    <groupId>io.helidon.common</groupId>
    <artifactId>helidon-common-reactive</artifactId>
</dependency>
```
The Flow API consists of four basic interfaces:
• Publisher: publishes the stream of data items to the registered subscribers.
• Subscriber: subscribes to a Publisher to receive callbacks.
• Subscription: the link between a publisher and a subscriber.
• Processor: sits between Publisher and Subscriber, transforming one stream into another.
The model is a combination of the Iterator and Observer patterns.
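The four interfaces can be demonstrated with nothing but the JDK, using java.util.concurrent.SubmissionPublisher as the Publisher. This is a minimal sketch, not Helidon code; the subscriber requests items one at a time to show back-pressure:

```java
// Minimal java.util.concurrent.Flow sketch: a SubmissionPublisher feeds
// a hand-written Subscriber that requests one item at a time.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class FlowDemo {
    static List<Integer> collect(int count) throws InterruptedException {
        List<Integer> received = new ArrayList<>();
        CountDownLatch done = new CountDownLatch(1);

        Flow.Subscriber<Integer> subscriber = new Flow.Subscriber<>() {
            private Flow.Subscription subscription;

            @Override public void onSubscribe(Flow.Subscription s) {
                subscription = s;
                s.request(1);                // back-pressure: ask for one item
            }
            @Override public void onNext(Integer item) {
                received.add(item);
                subscription.request(1);     // ask for the next item
            }
            @Override public void onError(Throwable t) { done.countDown(); }
            @Override public void onComplete() { done.countDown(); }
        };

        try (SubmissionPublisher<Integer> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(subscriber);
            for (int i = 1; i <= count; i++) {
                publisher.submit(i);
            }
        } // close() signals onComplete to the subscriber

        done.await();
        return received;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(collect(3)); // [1, 2, 3]
    }
}
```

Helidon's Multi and Single implement these same Flow interfaces, so they interoperate with any Flow-based publisher or subscriber.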
Reactive Programming
• Managing subscribers is a pitfall
• Steep learning curve
• Hard to get right™
  ◦ Exception handling
  ◦ Troubleshooting
  ◦ No useful stack traces
  ◦ Running more than one task in parallel is tough
• Using blocking code requires executor services
• “Callback Hell”
Components:
• MicroProfile: Metrics, Health Check, Tracing, Fault Tolerance, JWT Auth, REST Client, Open API, GraphQL, LRA
• MicroProfile standalone: Reactive Streams Operators, Reactive Messaging
• Jakarta EE: RESTful Web Services, JSON Processing, JSON Binding, CDI, Persistence, Transactions, WebSocket, Annotations
• Helidon specific: CORS, gRPC Server & Client
• Integrations
Helidon SE
+ Reactive
+ Great performance
+ High concurrency
– Scaffolding
– Hard to debug
– Hard to maintain
– Hard to learn

Helidon MP
+ Standard
+ Easy to write
+ Easy to maintain
+ Easy to debug
+ Easy to learn
– Blocking
– Average performance
– Limited by the number of threads
Helidon Nima
• The world’s first framework based on virtual threads!
• Scalability of asynchronous programming models with the simplicity of synchronous code
• Built from the ground up in tight collaboration with the Java team
• Contains the Nima web server plus additional libraries (observability, testing, etc.)
• Performance comparable to Netty
• Will be at the heart of the next major Helidon release (planned for 2023)
• The Alpha-6 version is available in Maven Central
• HTTP/1.1 protocol with full pipelining support
• HTTP/2 protocol
• gRPC protocol
• WebSocket protocol
• Unit and integration testing support
• TLS and mTLS support (ALPN for HTTP/2)
• Extensible
  ◦ Other protocols (even non-HTTP)
  ◦ HTTP-based protocols (upgrade from HTTP/1.1, HTTP/2)
• Access log support
• CORS support
• Static content
• OpenTelemetry tracing
• Observability endpoints
  ◦ Configuration information – /observe/config
  ◦ Application information – /observe/info
  ◦ Health checks – /observe/health
• More coming…
• “Obstructing” a thread is not acceptable in either blocking or reactive frameworks
• Obstruction: long-term, full utilization of a thread, requiring use of yield
• Reactive: designed to handle short non-blocking tasks; obstruction degrades performance heavily
• Blocking: an obstructed task consumes a pinned thread of the fork-join pool; under concurrent load this degrades server performance
• Solution: use a custom executor service
  ◦ Reactive: complete a CompletableFuture when processing is done
  ◦ Blocking: block until processing is done
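The "custom executor service" solution can be sketched as follows. The dedicated pool, its sizing, and the stand-in computation are illustrative assumptions; the point is that the obstructing work runs on its own pool in both the reactive and the blocking variant:

```java
// Sketch: offloading long-running (obstructing) work to a dedicated
// executor so it does not occupy the server's worker / fork-join pool.
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ObstructionDemo {
    // Dedicated pool for CPU-heavy work, sized to the machine.
    static final ExecutorService HEAVY_POOL =
            Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

    // Stand-in for an obstructing computation.
    static long heavyComputation(long n) {
        long sum = 0;
        for (long i = 1; i <= n; i++) sum += i;
        return sum;
    }

    // Reactive style: complete a CompletableFuture when processing is done.
    static CompletableFuture<Long> computeAsync(long n) {
        return CompletableFuture.supplyAsync(() -> heavyComputation(n), HEAVY_POOL);
    }

    // Blocking style: submit to the dedicated pool and block until done,
    // keeping the obstruction off the caller's pool.
    static long computeBlocking(long n) throws Exception {
        return HEAVY_POOL.submit(() -> heavyComputation(n)).get();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(computeAsync(1_000).join());  // 500500
        System.out.println(computeBlocking(1_000));      // 500500
        HEAVY_POOL.shutdown();
    }
}
```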
• To reuse or not (byte buffers / byte arrays)
  ◦ Due to the (huge) number of virtual threads, using a single component to cache buffers for reuse is not efficient. We achieved higher throughput by discarding buffers (letting the GC do its work) than by reusing them. Results were similar for native and heap byte buffers.
• Asynchronous writes
  ◦ By default, we write to sockets asynchronously. On Linux, this provides higher performance (up to 3x) when HTTP/1.1 pipelining is used. Without pipelining there is no additional advantage, so async writes are configurable and can be disabled.
• Blocking or non-blocking sockets / socket channels
  ◦ After a lot of testing and validation with the Java team, we found that the best performance is achieved with blocking sockets: we use a ServerSocket in blocking mode to listen for connections, and take the “old school” approach of accepting a socket and starting a new thread to process it (the thread is, of course, virtual).
• Learn to code blocking!
  ◦ We are often used to asynchronous coding; forget about it, just block!
  ◦ Code is easier to read, cleaner, and easier to troubleshoot.
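The "old school" accept loop described above can be sketched in a few lines (Java 21+). This is not Nima's implementation — the echo protocol and self-test client are illustrative assumptions — but it shows the pattern: a blocking ServerSocket, and one virtual thread per accepted connection using plain blocking reads and writes:

```java
// Sketch of a blocking accept loop with a virtual thread per connection.
// Includes a tiny self-test client so the example is runnable end to end.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class BlockingEchoServer {
    static String selfTest() throws Exception {
        try (ServerSocket server = new ServerSocket(0)) { // ephemeral port
            // Accept loop: blocking accept, then a new virtual thread per socket.
            Thread.ofVirtual().start(() -> {
                try {
                    while (true) {
                        Socket socket = server.accept(); // blocks
                        Thread.ofVirtual().start(() -> handle(socket));
                    }
                } catch (Exception e) {
                    // server socket closed; stop accepting
                }
            });

            // Self-test client: send one line, read the echoed reply.
            try (Socket client = new Socket("localhost", server.getLocalPort());
                 PrintWriter out = new PrintWriter(client.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(client.getInputStream()))) {
                out.println("hello");
                return in.readLine();
            }
        }
    }

    // Per-connection handler: plain blocking reads and writes are fine,
    // because blocking only unmounts the virtual thread from its carrier.
    static void handle(Socket socket) {
        try (socket;
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()));
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {
            String line;
            while ((line = in.readLine()) != null) {
                out.println("echo: " + line);
            }
        } catch (Exception ignored) {
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(selfTest()); // echo: hello
    }
}
```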
• HTTP/2 is hard
  ◦ Nevertheless, we found a way to provide unified HTTP routing (regardless of version), with support for version-specific routes
  ◦ Connection/stream interaction is the only complex threading scenario that can cause race conditions
• gRPC
  ◦ We no longer need to use Netty for gRPC, so we can now serve gRPC on the same port as other protocols!
When a virtual thread cannot unmount from its carrier:
• OS-level limitations (some file-system operations)
• JDK limitations (Object.wait())
These operations cannot unmount from the carrier thread. The JVM compensates by temporarily increasing the number of threads in the ForkJoinPool.
When a virtual thread is pinned to its carrier:
• synchronized blocks
• Native methods and foreign functions
These operations may hinder application scalability, because the scheduler does not compensate by expanding its parallelism. Guard such code with ReentrantLock and other constructs from java.util.concurrent instead of synchronized.
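The recommended refactoring can be sketched with a simple guarded counter (an illustrative example, not from the slides). A synchronized block would pin the virtual thread to its carrier if anything inside it blocked; a ReentrantLock lets the thread unmount instead:

```java
// Sketch: a synchronized critical section rewritten with ReentrantLock,
// so blocking inside it does not pin a virtual thread to its carrier.
import java.util.concurrent.locks.ReentrantLock;

public class LockDemo {
    private final ReentrantLock lock = new ReentrantLock();
    private int counter;

    // Before: synchronized (this) { counter++; }
    // After: java.util.concurrent locking, which virtual threads
    // can park on without pinning their carrier.
    void increment() {
        lock.lock();
        try {
            counter++;
        } finally {
            lock.unlock();
        }
    }

    int get() {
        lock.lock();
        try {
            return counter;
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        LockDemo demo = new LockDemo();
        Thread[] threads = new Thread[100];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = Thread.ofVirtual().start(demo::increment);
        }
        for (Thread t : threads) t.join();
        System.out.println(demo.get()); // 100
    }
}
```

The lock()/try/finally/unlock() shape is the standard idiom; forgetting the finally block is the classic bug when converting from synchronized.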
• “Unlearn” asynchronous programming
• Everything is easier to write, easier to read, easier to maintain, easier to debug
• We will need to wait for a few libraries to catch up (connection pools, messaging, etc.)
• Refactor synchronized blocks to java.util.concurrent locking
• Think about thread locals (and the upcoming ScopedValues)
• Virtual threads are the best improvement in Java (EVER… so far)