• instrumented version of a j.u.c.ExecutorService • pluggable control mechanism to grow or shrink the pool • provides stats, e.g. QUEUE_LATENCY, TASK_ARRIVAL_RATE, etc. • task queue & j.u.c.RejectedExecutionException
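The bullets above can be sketched with nothing but the JDK. This is an illustrative sketch, not Aleph's (dirigiste-based) implementation: a bounded ThreadPoolExecutor whose tasks are wrapped to record a QUEUE_LATENCY-style stat, and which throws j.u.c.RejectedExecutionException when saturated.

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.*;

// Sketch of an "instrumented" executor: bounded queue, per-task queue
// latency, rejection when full. Names here are made up for illustration.
public class InstrumentedExecutor {
    public static final LongAdder queueNanos = new LongAdder(); // total time tasks waited in the queue
    public static final AtomicInteger completed = new AtomicInteger();
    public static final AtomicInteger rejected = new AtomicInteger();

    public static ExecutorService create(int threads, int queueCapacity) {
        return new ThreadPoolExecutor(threads, threads, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(queueCapacity),    // bounded task queue
                new ThreadPoolExecutor.AbortPolicy());      // throw instead of blocking the caller
    }

    // Wrap each task so we can derive a QUEUE_LATENCY-style stat.
    public static Runnable timed(Runnable task) {
        long enqueuedAt = System.nanoTime();
        return () -> {
            queueNanos.add(System.nanoTime() - enqueuedAt);
            task.run();
            completed.incrementAndGet();
        };
    }

    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = create(1, 1);   // 1 worker, 1 queue slot
        for (int i = 0; i < 4; i++) {
            try {
                pool.execute(timed(() -> {
                    try { Thread.sleep(100); } catch (InterruptedException ignored) {}
                }));
            } catch (RejectedExecutionException e) {
                rejected.incrementAndGet();    // worker busy + queue full
            }
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println("completed=" + completed + " rejected=" + rejected
                + " avgQueueMicros=" + queueNanos.sum() / Math.max(1, completed.get()) / 1000);
    }
}
```

With one worker and one queue slot, four back-to-back submissions yield two completions and two rejections.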
non-blocking IO • communication design unification with Pipeline, Channel & Handler • a lot of ready-to-use handlers and codecs • ChannelFuture, ChannelPromise • *ByteBuf, zero-copy, smart allocations, leak detector
"low-level" • aleph.netty defines a lot of bridges • helpers to deal with ByteBufs • ChannelFuture → manifold's deferred • Channel represented as manifold's stream • a few macros to define ChannelHandlers • a lot more!
handler (provided by the user) on a given executor • (or inlined!) • catches j.u.c.RejectedExecutionException and passes to rejected-handler • (by default answering with 503) • sends response when ready
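The rejection path above can be sketched with plain j.u.c (helper names here are invented; Aleph's actual default replies with a 503 response map): run the user handler on the supplied executor and map j.u.c.RejectedExecutionException to a 503.

```java
import java.util.concurrent.*;

// Sketch: run the handler on a given executor; a rejected submission
// becomes a 503 Service Unavailable, any other failure a 500.
public class RejectedTo503 {
    public static int respond(ExecutorService executor, Callable<Integer> handler) {
        try {
            return executor.submit(handler).get();   // handler runs on the given executor
        } catch (RejectedExecutionException e) {
            return 503;                              // default rejected-handler behavior
        } catch (InterruptedException | ExecutionException e) {
            return 500;
        }
    }

    public static void main(String[] args) throws Exception {
        // 1 busy worker, no queue → the next submit is rejected.
        ExecutorService pool = new ThreadPoolExecutor(1, 1, 0L, TimeUnit.MILLISECONDS,
                new SynchronousQueue<>(), new ThreadPoolExecutor.AbortPolicy());
        CountDownLatch release = new CountDownLatch(1);
        pool.execute(() -> { try { release.await(); } catch (InterruptedException ignored) {} });
        System.out.println(respond(pool, () -> 200));   // pool saturated → 503
        release.countDown();
        pool.shutdown();
    }
}
```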
is set here • meaning you can mess this up a bit by setting the header manually • Aleph server detects keep-alive "status" here and here • and uses here to send response • Aleph server adds the header automatically • ... still !
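The keep-alive "status" the server has to detect follows the standard HTTP/1.x rules (RFC 7230 §6.3). This is a sketch of that decision logic, not Aleph's actual code:

```java
// Keep-alive decision per RFC 7230 §6.3:
// HTTP/1.1 is persistent by default; HTTP/1.0 must opt in;
// "Connection: close" always wins.
public class KeepAlive {
    public static boolean keepAlive(String version, String connectionHeader) {
        String conn = connectionHeader == null ? "" : connectionHeader.toLowerCase();
        if (conn.contains("close")) return false;        // explicit opt-out
        if ("HTTP/1.1".equals(version)) return true;     // persistent by default
        return conn.contains("keep-alive");              // HTTP/1.0 must opt in
    }

    public static void main(String[] args) {
        System.out.println(keepAlive("HTTP/1.1", null));          // true
        System.out.println(keepAlive("HTTP/1.1", "close"));       // false
        System.out.println(keepAlive("HTTP/1.0", "keep-alive"));  // true
    }
}
```

Setting the Connection header manually flows through exactly this kind of check, which is how you can "mess this up a bit".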
here ! • updates Pipeline instance with a few new handlers, most notably: • HttpClientCodec with appropriate settings • "main" handler with Aleph's client logic • pipeline-transform option might be useful to rebuild Pipeline when necessary
• raw-client-handler returns body as manifold's stream of Netty's ByteBuf • client-handler converts body to InputStream of bytes (additional copying but less friction) • both implementations are kinda tricky • most of the complexity: buffering all the way down & chunks
• "jumps" to the executor specified (or default-response-executor) • this might throw j.u.c.RejectedExecutionException • aleph.http/request is responsible for cleaning up after response is ready and on timeouts • also responsible for "top-level" middlewares: redirects & cookies
connection-pool) • waits for the connection to be realized (either ready/reused or connecting) • "sends" the request applying connection function • chains on response and waits for :aleph/complete • disposes the connection from the pool when not keep-alive and on error
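The acquire/release flow above can be sketched with a JDK-only analogue (names are illustrative; Aleph's real pool is dirigiste-based and async): wait for a free slot, fail fast on timeout, release the slot when done or on error.

```java
import java.util.concurrent.*;

// Toy connection pool: a Semaphore guards the slots; acquiring with a
// deadline models Aleph's pool timeout, release() models disposal/return.
public class SimpleConnPool {
    private final Semaphore slots;
    public SimpleConnPool(int size) { slots = new Semaphore(size); }

    public void acquire(long timeoutMs) throws TimeoutException, InterruptedException {
        if (!slots.tryAcquire(timeoutMs, TimeUnit.MILLISECONDS))
            throw new TimeoutException("pool acquire timed out");  // ~ PoolTimeout
    }
    public void release() { slots.release(); }  // return (or dispose) the connection

    public static void main(String[] args) throws Exception {
        SimpleConnPool pool = new SimpleConnPool(1);
        pool.acquire(100);                      // first acquire succeeds
        try {
            pool.acquire(50);                   // pool exhausted → timeout
        } catch (TimeoutException e) {
            System.out.println("pool timeout");
        }
        pool.release();                         // e.g. non-keep-alive response done
        pool.acquire(50);                       // slot freed → succeeds again
        System.out.println("ok");
    }
}
```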
PoolTimeout, ConnectionTimeout, RequestTimeout, ReadTimeout • never perform async operations w/o timeout • flexible error handling, easier to debug (reasoning is different) • you need this when implementing proxies or deciding on retries
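A JDK sketch of the "no async operation without a timeout" rule; Aleph goes further and raises a distinct exception type per stage (pool, connect, request, read) so callers can tell which stage failed.

```java
import java.util.concurrent.*;

// Bound every async wait: orTimeout (Java 9+) completes the future
// exceptionally with TimeoutException if no result arrives in time.
public class BoundedWait {
    public static String await(CompletableFuture<String> response, long ms) {
        try {
            return response.orTimeout(ms, TimeUnit.MILLISECONDS).join();
        } catch (CompletionException e) {
            if (e.getCause() instanceof TimeoutException) return "timeout"; // ~ ReadTimeout
            throw e;
        }
    }

    public static void main(String[] args) {
        CompletableFuture<String> never = new CompletableFuture<>();  // response never arrives
        System.out.println(await(never, 50));                          // prints "timeout"
        System.out.println(await(CompletableFuture.completedFuture("200 OK"), 50));
    }
}
```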
be "persistent" forever • not always the best option ! • idle-timeout option is available both for the client and the server since • when set, just updates the Pipeline builder • heavy lifting is done by Netty's IdleStateHandler • catching IdleStateEvent to close the connection
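What Netty's IdleStateHandler does for the idle-timeout option can be sketched with the JDK alone (an analogue, not Netty's code): record the last read/write, and let a timer "close" the connection once it has been quiet longer than the idle window.

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.*;

// Idle watchdog: any traffic resets the clock; a periodic check flags the
// connection closed once the idle window is exceeded (~ IdleStateEvent).
public class IdleWatchdog {
    final AtomicLong lastActivity = new AtomicLong(System.nanoTime());
    final AtomicBoolean closed = new AtomicBoolean(false);

    void onTraffic() { lastActivity.set(System.nanoTime()); }  // read/write resets the clock

    void start(ScheduledExecutorService timer, long idleMs) {
        timer.scheduleAtFixedRate(() -> {
            if (System.nanoTime() - lastActivity.get() > TimeUnit.MILLISECONDS.toNanos(idleMs))
                closed.set(true);              // ~ catching IdleStateEvent → close the channel
        }, idleMs, idleMs, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
        IdleWatchdog w = new IdleWatchdog();
        w.start(timer, 50);
        Thread.sleep(300);                     // no traffic for 300 ms
        System.out.println("closed=" + w.closed.get());
        timer.shutdownNow();
    }
}
```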
waiting on responses • "allowed" with HTTP/1.1, not used widely (e.g. not used in modern browsers) • might dramatically reduce the number of TCP/IP packets • Aleph • supports pipelining on the server • does not support pipelining on the client
to trace what's going on with your connections • at least state changes: opened, closed, acquired, released • easiest way: inject a ChannelHandler that listens to all events and logs them • to catch acquire and release you need to wrap flow/instrumented-pool
) • clj-http uses org.apache.http.entity.mime.MultipartEntityBuilder • Aleph implements "from scratch" on the client • supported Content-Transfer-Encodings • no support for the server • yada's implementation with manifold's stream
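The "from scratch" approach boils down to assembling the multipart/form-data wire format by hand (RFC 2388/7578); this sketch shows one text part, not Aleph's actual encoder:

```java
// Minimal multipart/form-data body: boundary line, part headers, blank
// line, part body, closing boundary. The request's Content-Type would be
// "multipart/form-data; boundary=XyZ".
public class MultipartBody {
    public static String encode(String boundary, String name, String value) {
        return "--" + boundary + "\r\n"
             + "Content-Disposition: form-data; name=\"" + name + "\"\r\n"
             + "\r\n"                                   // blank line: headers → part body
             + value + "\r\n"
             + "--" + boundary + "--\r\n";              // closing boundary
    }

    public static void main(String[] args) {
        System.out.print(encode("XyZ", "file", "hello"));
    }
}
```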
transmit body • server replies with status code 100 Continue or 417 Expectation Failed • client sends body • potentially, less pressure on the network when sending large requests • rarely used in practice
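On the wire the exchange looks roughly like this (illustrative trace, host and sizes made up):

```http
PUT /upload HTTP/1.1
Host: example.com
Content-Length: 10485760
Expect: 100-continue

HTTP/1.1 100 Continue        ← server agrees; only now does the client send the 10 MB body
                               (a 417 Expectation Failed here would abort before any body bytes)

HTTP/1.1 200 OK              ← final response once the body is received
```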
when :body is a seq, iterator or stream • if the Content-Length header is not set explicitly • detecting client disconnect is still kinda tough • think about buffering and throttling in advance, this talk might help
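Streaming a body without a known Content-Length falls back to HTTP/1.1 chunked Transfer-Encoding; the encoding itself is simple enough to sketch directly:

```java
import java.nio.charset.StandardCharsets;

// Chunked Transfer-Encoding: each chunk is a hex length line, the bytes,
// then CRLF; a zero-length chunk terminates the body.
public class ChunkedEncoding {
    public static String chunk(String data) {
        int len = data.getBytes(StandardCharsets.UTF_8).length;
        return Integer.toHexString(len) + "\r\n" + data + "\r\n";
    }
    public static final String LAST_CHUNK = "0\r\n\r\n";

    public static void main(String[] args) {
        // "hello" → "5\r\nhello\r\n", ", world!" → "8\r\n, world!\r\n"
        System.out.print(chunk("hello") + chunk(", world!") + LAST_CHUNK);
    }
}
```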
forces send-file-body to use send-chunked-file instead of send-file-region • why? send-file-region uses zero-copy file transfer with io.netty.channel.DefaultFileRegion • does not support user-space modifications, e.g. compression !
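The zero-copy path is visible in the JDK too: FileChannel.transferTo hands the copy to the kernel (sendfile on Linux), which is what DefaultFileRegion builds on — and exactly why the bytes can't be transformed (e.g. compressed) in user space on the way out.

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.*;

// Zero-copy file transfer: the bytes go channel-to-channel inside the
// kernel and never pass through a user-space buffer we could modify.
public class ZeroCopy {
    public static long copy(Path src, Path dst) throws IOException {
        try (FileChannel in = FileChannel.open(src, StandardOpenOption.READ);
             FileChannel out = FileChannel.open(dst, StandardOpenOption.CREATE,
                                                StandardOpenOption.WRITE)) {
            return in.transferTo(0, in.size(), out);   // kernel-side copy
        }
    }

    public static void main(String[] args) throws IOException {
        Path src = Files.createTempFile("zc", ".txt");
        Files.write(src, "hello zero-copy".getBytes(StandardCharsets.UTF_8));
        Path dst = Files.createTempFile("zc", ".out");
        System.out.println(copy(src, dst) + " bytes transferred");
    }
}
```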
Protocol" • handshaking using HTTP Upgrade header (compatibility) • Aleph uses manifold's SplicedStream to represent duplex channel • supports Text and Binary frames, replies to Ping frames • a lot of corner cases in the protocol (duplex communication is hard)
mind the difference with aleph.http/websocket-connection ! • http.client/websocket-connection builds a Channel with netty/create-client • websocket-client-handler creates a duplex stream and a handler
to http.server/initialize-websocket-handler • initialize-websocket-handler builds and runs handshaker • .websocket? mark is set to modify response sending behavior • Pipeline is rebuilt appropriately • 2 streams spliced into one, as for the client
events is "almost RFC" • client sends CloseFrame before closing the connection • on receiving CloseFrame saves status & reason • server sends CloseFrame w/o closing the connection • as it will be done by Netty • Netty behavior is "more RFC-ish"
extension since • fine-grained Ping/Pong support is still an open question • to add the ability to send http/websocket-ping manually, and to wait for Pong • helpful for heartbeats, online presence detection etc • pipeline-transform might be used to extend both server and client
infrastructure • used pretty heavily even for internal networks (yeah, service mesh !) • long story, available in Aleph since • implementation is not compatible with clj-http API, works on the connection-pool level only • heavy lifting is done by io.netty/netty-handler-proxy
should be added to the Pipeline earlier • passes to different engines, like OpenSSL, BoringSSL or even JDK • "Don't use the JDK for ALPN! But if you absolutely have to, here's how you do it... :)", grpc-java
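At the JDK level, ALPN configuration looks like this (Java 9+; the protocol list "h2", "http/1.1" is just an illustration). Netty can instead delegate the whole TLS engine to OpenSSL/BoringSSL via netty-tcnative, which is what the grpc-java quote is about:

```java
import javax.net.ssl.SSLParameters;

// ALPN with the JDK: the client advertises its application protocols in
// preference order; the TLS engine negotiates one during the handshake.
public class AlpnConfig {
    public static SSLParameters alpn() {
        SSLParameters params = new SSLParameters();
        params.setApplicationProtocols(new String[] { "h2", "http/1.1" });
        return params;
    }

    public static void main(String[] args) {
        System.out.println(String.join(",", alpn().getApplicationProtocols()));
    }
}
```

These SSLParameters would then be applied to an SSLEngine before the handshake starts, which is why the TLS handler has to sit early in the Pipeline.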