Slide 1

Slide 1 text

Rust 101 for Web
Raphael Amorim

Slide 2

Slide 2 text

DISCLAIMER 1.0 Don’t panic.

Slide 3

Slide 3 text

DISCLAIMER 1.1 Our goal is to kickstart with Web, since it is impossible to cover Rust basics in a few hours.

Slide 4

Slide 4 text

Agenda
- Lifetimes & Async
- Tokio
- Exercises & Coffee Break (40min)
- Tower, Hyper and Axum
- Exercises & Coffee Break (50min)
- AWS SDK & OpenTelemetry
- Lambdas

Slide 5

Slide 5 text

LIFETIMES

Slide 6

Slide 6 text

No content

Slide 7

Slide 7 text

struct Z<'a, 'b> { a: &'a i32, b: &'b i32, } let z: Z;

Slide 8

Slide 8 text

let z: Z; let a = 1; { let b = 2; z = Z{ a: &a, b: &b }; } println!("{} {}", z.a, z.b);

Slide 9

Slide 9 text

error[E0597]: `b` does not live long enough

Slide 10

Slide 10 text

error[E0597]: `b` does not live long enough

Slide 11

Slide 11 text

fn append_str<'a>(x: &'a str, y: &str) -> &'a str { x } fn main() { println!("{}", append_str("viaplay", "group")); }

Slide 12

Slide 12 text

fn append_str<'a>(x: &'a str, y: &str) -> &'a str { y } fn main() { println!("{}", append_str("viaplay", "group")); }

Slide 13

Slide 13 text

error[E0621]: explicit lifetime required in the type of `y`

Slide 14

Slide 14 text

error[E0621]: explicit lifetime required in the type of `y`

Slide 15

Slide 15 text

let x: &'static str = "Hello, world."; println!("{}", x);

Slide 16

Slide 16 text

let z: Z; { let a = 1; let b = 2; z = Z{ a: &a, b: &b }; } println!("{} {}", z.a, z.b);

Slide 17

Slide 17 text

let a_but_lives_long_enough = 1; let za = { let b = 2; let z = Z{ a: &a_but_lives_long_enough, b: &b }; z.a }; println!("{}", za);

Slide 18

Slide 18 text

There are other ways to fix it as well...
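One other fix, sketched below (not from the slides, a hedged illustration): store owned values instead of references, so the struct no longer needs lifetime parameters at all.

```rust
// Owned variant of Z: the i32s are copied into the struct,
// so there are no borrows to outlive.
struct Z {
    a: i32,
    b: i32,
}

fn make_z() -> Z {
    let a = 1;
    let b = 2; // `a` and `b` are copied into `z`; their scope no longer matters
    Z { a, b }
}

fn main() {
    let z = make_z();
    println!("{} {}", z.a, z.b);
}
```

For small `Copy` types like `i32`, owning the data is usually simpler than threading lifetimes through the struct.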

Slide 19

Slide 19 text

No content

Slide 20

Slide 20 text

Async

Slide 21

Slide 21 text

Why?

Slide 22

Slide 22 text

Concurrent programming is less mature and "standardized" than regular, sequential programming.

Slide 23

Slide 23 text

Other concurrency models: OS threads don't require any changes to the programming model, which makes it very easy to express concurrency. However, synchronizing between threads can be difficult, and the performance overhead is large. Thread pools can mitigate some of these costs, but not enough to support massive IO-bound workloads.

Slide 24

Slide 24 text

Other concurrency models: OS threads don't require any changes to the programming model, which makes it very easy to express concurrency. However, synchronizing between threads can be difficult, and the performance overhead is large. Thread pools can mitigate some of these costs, but not enough to support massive IO-bound workloads.

Slide 25

Slide 25 text

Other concurrency models: OS threads don't require any changes to the programming model, which makes it very easy to express concurrency. However, synchronizing between threads can be difficult, and the performance overhead is large. Thread pools can mitigate some of these costs, but not enough to support massive IO-bound workloads.

Slide 26

Slide 26 text

Other concurrency models: Event-driven programming, in conjunction with callbacks, can be very performant, but tends to result in a verbose, "non-linear" control flow. Data flow and error propagation are often hard to follow.

Slide 27

Slide 27 text

Other concurrency models: Event-driven programming, in conjunction with callbacks, can be very performant, but tends to result in a verbose, "non-linear" control flow. Data flow and error propagation are often hard to follow.

Slide 28

Slide 28 text

Other concurrency models: Coroutines, like threads, don't require changes to the programming model, which makes them easy to use. Like async, they can also support a large number of tasks. However, they abstract away low-level details that are important for systems programming and custom runtime implementors.

Slide 29

Slide 29 text

Other concurrency models: Coroutines, like threads, don't require changes to the programming model, which makes them easy to use. Like async, they can also support a large number of tasks. However, they abstract away low-level details that are important for systems programming and custom runtime implementors.

Slide 30

Slide 30 text

Other concurrency models: The actor model divides all concurrent computation into units called actors, which communicate through fallible message passing, much like in distributed systems. The actor model can be efficiently implemented, but it leaves many practical issues unanswered, such as flow control and retry logic.

Slide 31

Slide 31 text

Other concurrency models: The actor model divides all concurrent computation into units called actors, which communicate through fallible message passing, much like in distributed systems. The actor model can be efficiently implemented, but it leaves many practical issues unanswered, such as flow control and retry logic.

Slide 32

Slide 32 text

Asynchronous programming allows highly performant implementations that are suitable for low-level languages like Rust, while providing most of the ergonomic benefits of threads and coroutines.

Slide 33

Slide 33 text

Async in Rust Futures are inert in Rust and make progress only when polled. Dropping a future stops it from making further progress.

Slide 34

Slide 34 text

Async in Rust Async is zero-cost in Rust, which means that you only pay for what you use.

Slide 35

Slide 35 text

Async in Rust Specifically, you can use async without heap allocations and dynamic dispatch, which is great for performance! This also lets you use async in constrained environments, such as embedded systems.

Slide 36

Slide 36 text

Async in Rust No built-in runtime is provided by Rust. Instead, runtimes are provided by community-maintained crates. (We will see one example later: Tokio.)

Slide 37

Slide 37 text

Async in Rust Both single- and multithreaded runtimes are available in Rust, which have different strengths and weaknesses.

Slide 38

Slide 38 text

Async vs threads in Rust The primary alternative to async in Rust is using OS threads, either directly through std::thread or indirectly through a thread pool.

Slide 39

Slide 39 text

Async vs threads in Rust OS threads are suitable for a small number of tasks, since threads come with CPU and memory overhead. Spawning and switching between threads is quite expensive as even idle threads consume system resources.

Slide 40

Slide 40 text

Async vs threads in Rust A thread pool library can help mitigate some of these costs, but not all. However, threads let you reuse existing synchronous code without significant code changes; no particular programming model is required. In some operating systems, you can also change the priority of a thread, which is useful for drivers and other latency-sensitive applications.

Slide 41

Slide 41 text

Async vs threads in Rust Async provides significantly reduced CPU and memory overhead, especially for workloads with a large number of IO-bound tasks, such as servers and databases. All else equal, you can have orders of magnitude more tasks than OS threads, because an async runtime uses a small number of (expensive) threads to handle a large number of (cheap) tasks.

Slide 42

Slide 42 text

Async vs threads in Rust However, async Rust results in larger binary blobs due to the state machines generated from async functions and since each executable bundles an async runtime.

One last note: asynchronous programming is not better than threads, but different. If you don't need async for performance reasons, threads can often be the simpler alternative.

Slide 43

Slide 43 text

Example: Concurrent downloading

Slide 44

Slide 44 text

No content
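The slide's code is missing from the transcript; as a hedged sketch, a thread-per-download version might look like the following, where `download_page` is a hypothetical stand-in for a real blocking HTTP call.

```rust
use std::thread;

// Hypothetical stand-in for a real HTTP GET; a real version would
// call a blocking HTTP client here.
fn download_page(url: &str) -> String {
    format!("<contents of {}>", url)
}

fn get_two_sites() -> (String, String) {
    // One OS thread per download: simple to write, but each spawn
    // pays the full thread-creation and stack cost.
    let a = thread::spawn(|| download_page("https://example.com/a"));
    let b = thread::spawn(|| download_page("https://example.com/b"));
    (a.join().unwrap(), b.join().unwrap())
}

fn main() {
    let (a, b) = get_two_sites();
    println!("{a}\n{b}");
}
```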

Slide 45

Slide 45 text

However, downloading a web page is a small task; creating a thread for such a small amount of work is quite wasteful. For a larger application, it can easily become a bottleneck.

Slide 46

Slide 46 text

No content

Slide 47

Slide 47 text

Here, no extra threads are created. Additionally, all function calls are statically dispatched, and there are no heap allocations!

Slide 48

Slide 48 text

The State of Asynchronous Rust

Slide 49

Slide 49 text

• Outstanding runtime performance for typical concurrent workloads.
• More frequent interaction with advanced language features, such as lifetimes and pinning.
• Some compatibility constraints, both between sync and async code, and between different async runtimes.
• Higher maintenance burden, due to the ongoing evolution of async runtimes and language support.

Slide 50

Slide 50 text

• Outstanding runtime performance for typical concurrent workloads.
• More frequent interaction with advanced language features, such as lifetimes and pinning.
• Some compatibility constraints, both between sync and async code, and between different async runtimes.
• Higher maintenance burden, due to the ongoing evolution of async runtimes and language support.

Slide 51

Slide 51 text

• Outstanding runtime performance for typical concurrent workloads.
• More frequent interaction with advanced language features, such as lifetimes and pinning.
• Some compatibility constraints, both between sync and async code, and between different async runtimes.
• Higher maintenance burden, due to the ongoing evolution of async runtimes and language support.

Slide 52

Slide 52 text

• Outstanding runtime performance for typical concurrent workloads.
• More frequent interaction with advanced language features, such as lifetimes and pinning.
• Some compatibility constraints, both between sync and async code, and between different async runtimes.
• Higher maintenance burden, due to the ongoing evolution of async runtimes and language support.

Slide 53

Slide 53 text

Language and library support

Slide 54

Slide 54 text

Futures

Slide 55

Slide 55 text

async transforms a block of code into a state machine that implements a trait called Future.

Slide 56

Slide 56 text

trait SimpleFuture {
    type Output;
    fn poll(&mut self, wake: fn()) -> Poll<Self::Output>;
}

enum Poll<T> {
    Ready(T),
    Pending,
}
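The simplified trait on the slide can be implemented by hand. A sketch (illustrative, not from the slides) of a toy countdown future, which also demonstrates the earlier point that futures are inert and make progress only when polled:

```rust
// The simplified trait from the slide (the real std::future::Future
// uses Pin and a Context/Waker instead of a bare fn()).
trait SimpleFuture {
    type Output;
    fn poll(&mut self, wake: fn()) -> Poll<Self::Output>;
}

enum Poll<T> {
    Ready(T),
    Pending,
}

// A toy future: Pending until it has been polled `remaining` + 1 times.
struct Countdown {
    remaining: u32,
}

impl SimpleFuture for Countdown {
    type Output = &'static str;
    fn poll(&mut self, wake: fn()) -> Poll<Self::Output> {
        if self.remaining == 0 {
            Poll::Ready("done")
        } else {
            self.remaining -= 1;
            wake(); // ask the executor to poll us again
            Poll::Pending
        }
    }
}

fn main() {
    let mut fut = Countdown { remaining: 2 };
    // A trivial "executor": poll in a loop until Ready. If we never
    // polled, the future would never run at all.
    loop {
        match fut.poll(|| {}) {
            Poll::Ready(msg) => {
                println!("{}", msg);
                break;
            }
            Poll::Pending => {}
        }
    }
}
```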

Slide 57

Slide 57 text

[dependencies]
futures = "0.3"

Slide 58

Slide 58 text

use futures::executor::block_on;

async fn hello_world() {
    println!("hello, world!");
}

fn main() {
    let future = hello_world(); // Nothing is printed
    block_on(future); // `future` is run and "hello, world!" is printed
}

Slide 59

Slide 59 text

fn main() { let song = block_on(learn_song()); block_on(sing_song(song)); block_on(dance()); }

Slide 60

Slide 60 text

No No No No No No No No No No No No No No No No No

Slide 61

Slide 61 text

.await

Slide 62

Slide 62 text

Inside an async fn, you can use .await to wait for the completion of another type that implements the Future trait, such as the output of another async fn. Unlike block_on, .await doesn't block the current thread, but instead asynchronously waits for the future to complete, allowing other tasks to run if the future is currently unable to make progress.

Slide 63

Slide 63 text

async fn learn_and_sing() {
    // Wait until the song has been learned before singing it.
    // We use `.await` here rather than `block_on` to prevent blocking the
    // thread, which makes it possible to `dance` at the same time.
    let song = learn_song().await;
    sing_song(song).await;
}

async fn async_main() {
    let f1 = learn_and_sing();
    let f2 = dance();

    // `join!` is like `.await` but can wait for multiple futures concurrently.
    // If we're temporarily blocked in the `learn_and_sing` future, the `dance`
    // future will take over the current thread. If `dance` becomes blocked,
    // `learn_and_sing` can take back over. If both futures are blocked, then
    // `async_main` is blocked and will yield to the executor.
    futures::join!(f1, f2);
}

fn main() {
    block_on(async_main());
}

Slide 64

Slide 64 text

In this example, learning the song must happen before singing the song, but both learning and singing can happen at the same time as dancing. If we used block_on(learn_song()) rather than learn_song().await in learn_and_sing, the thread wouldn't be able to do anything else while learn_song was running.

Slide 65

Slide 65 text

This would make it impossible to dance at the same time. By .await-ing the learn_song future, we allow other tasks to take over the current thread if learn_song is blocked. This makes it possible to run multiple futures to completion concurrently on the same thread.

Slide 66

Slide 66 text

Async / .await

Slide 67

Slide 67 text

// `foo()` returns a type that implements `Future`.
// `foo().await` will result in a value of type `u8`.
async fn foo() -> u8 { 5 }

Slide 68

Slide 68 text

Lifetimes

Slide 69

Slide 69 text

async fn foo(x: &u8) -> u8 { *x }

Slide 70

Slide 70 text

fn foo_expanded<'a>(x: &'a u8) -> impl Future<Output = u8> + 'a { async move { *x } }

Slide 71

Slide 71 text

The returned future is bounded by 'a: it must be .awaited while its non-'static arguments are still valid.

Slide 72

Slide 72 text

fn bad() -> impl Future<Output = u8> {
    let x = 5;
    // ERROR: `x` does not live long enough
    borrow_x(&x)
}

Slide 73

Slide 73 text

fn good() -> impl Future<Output = u8> {
    async {
        let x = 5;
        borrow_x(&x).await
    }
}

Slide 74

Slide 74 text

rust-lang.github.io/async-book

Slide 75

Slide 75 text

Tokio

Slide 76

Slide 76 text

Tokio is an event-driven, non-blocking I/O platform for writing asynchronous applications with the Rust programming language.

Slide 77

Slide 77 text

No content

Slide 78

Slide 78 text

- Synchronization primitives, channels and timeouts, sleeps, and intervals. - APIs for performing asynchronous I/O, including TCP and UDP sockets, filesystem operations, and process and signal management. - A runtime for executing asynchronous code, including a task scheduler, an I/O driver backed by the operating system’s event queue (epoll, kqueue, IOCP, etc…), and a high performance timer.

Slide 79

Slide 79 text

- Synchronization primitives, channels and timeouts, sleeps, and intervals.
- APIs for performing asynchronous I/O, including TCP and UDP sockets, filesystem operations, and process and signal management.
- A runtime for executing asynchronous code, including a task scheduler, an I/O driver backed by the operating system's event queue (epoll, kqueue, IOCP, etc.), and a high performance timer.

Slide 80

Slide 80 text

- Synchronization primitives, channels and timeouts, sleeps, and intervals.
- APIs for performing asynchronous I/O, including TCP and UDP sockets, filesystem operations, and process and signal management.
- A runtime for executing asynchronous code, including a task scheduler, an I/O driver backed by the operating system's event queue (epoll, kqueue, IOCP, etc.), and a high performance timer.

Slide 81

Slide 81 text

Hands-on!

Slide 82

Slide 82 text

Hands-on: https://tokio.rs/tokio/tutorial (Exercises & Coffee Break, 40min)

Slide 83

Slide 83 text

Tracing Tower Hyper Axum

Slide 84

Slide 84 text

Hyper
 
 An HTTP client and server library supporting both the HTTP 1 and 2 protocols.

Slide 85

Slide 85 text

Tower
 
 Modular components for building reliable clients and servers. Includes retry, load-balancing, filtering, request-limiting facilities, and more.

Slide 86

Slide 86 text

Tracing
 
 Unified insight into the application and libraries. Provides structured, event-based, data collection and logging.

Slide 87

Slide 87 text

Mio
 
Minimal portable API on top of the operating system's evented I/O API.

Slide 88

Slide 88 text

Axum
 
 Ergonomic and modular web framework built with Tokio, Tower, and Hyper.

Slide 89

Slide 89 text

Axum
 
 Ergonomic and modular web framework built with Tokio, Tower, and Hyper.

Slide 90

Slide 90 text

use axum::{response::Html, routing::get, Router}; use std::net::SocketAddr;

Slide 91

Slide 91 text

#[tokio::main]
async fn main() {
    let app = Router::new().route("/", get(handler));

    // run it
    let addr = SocketAddr::from(([127, 0, 0, 1], 3000));
    println!("listening on {}", addr);
    axum::Server::bind(&addr)
        .serve(app.into_make_service())
        .await
        .unwrap();
}

async fn handler() -> Html<&'static str> {
    Html("<h1>Hello, World!</h1>")
}

Slide 92

Slide 92 text

https://github.com/raphamorim/axum-service-checklist

Slide 93

Slide 93 text

OpenTelemetry

Slide 94

Slide 94 text

use axum_tracing_opentelemetry::opentelemetry_tracing_layer;

fn init_tracing() {
    use axum_tracing_opentelemetry::{
        make_resource,
        otlp,
        // stdio,
    };
}

Slide 95

Slide 95 text

let otel_layer = tracing_opentelemetry::layer() .with_tracer(otel_tracer); let subscriber = tracing_subscriber::registry() .with(otel_layer); tracing::subscriber::set_global_default(subscriber) .unwrap();

Slide 96

Slide 96 text

Router::new()
    // request processed inside span
    .route("/", get(health))
    // opentelemetry_tracing_layer sets up `TraceLayer`,
    // which is provided by tower-http, so you have
    // to add that as a dependency.
    .layer(opentelemetry_tracing_layer())
    .route("/health", get(health))

Slide 97

Slide 97 text

No content

Slide 98

Slide 98 text

No content

Slide 99

Slide 99 text

No content

Slide 100

Slide 100 text

No content

Slide 101

Slide 101 text

AWS SDK

Slide 102

Slide 102 text

[dependencies]
aws-config = "0.48.0"
aws-sdk-dynamodb = "0.18.0"
tokio = { version = "1", features = ["full"] }

Slide 103

Slide 103 text

use aws_sdk_dynamodb as dynamodb;

#[tokio::main]
async fn main() -> Result<(), dynamodb::Error> {
    let config = aws_config::load_from_env().await;
    // aws_config::from_conf(config_params);
    let client = dynamodb::Client::new(&config);
    // ... use `client` here ...
    Ok(())
}

Slide 104

Slide 104 text

use aws_sdk_dynamodb as dynamodb;

#[tokio::main]
async fn main() -> Result<(), dynamodb::Error> {
    let config = aws_config::load_from_env().await;
    // aws_config::from_conf(config_params);
    let client = dynamodb::Client::new(&config);
    // ... use `client` here ...
    Ok(())
}

Slide 105

Slide 105 text

Bonus: WebAssembly

Slide 106

Slide 106 text

No content

Slide 107

Slide 107 text

https://github.com/raphamorim/LR35902

Slide 108

Slide 108 text

That’s it, folks.