Rust 101 for Web

This talk and workshop were presented in Stockholm, Sweden. The goal of the presentation was to introduce Rust to web engineers, covering Tokio, async, lifetimes, the AWS SDK, OpenTelemetry, and more.

Raphael Amorim

September 01, 2022

Transcript

  1. Rust 101 for Web Raphael Amorim

  2. DISCLAIMER 1.0 Don’t panic.

  3. DISCLAIMER 1.1 Our goal is to kickstart with the Web, since

    it is impossible to cover Rust basics in a few hours.
  4. Agenda - Lifetimes & Async - Tokio - Exercises &

    Coffee Break (40min) - Tower, Hyper and Axum - Exercises & Coffee Break (50min) - AWS SDK & Open Telemetry - Lambdas
  5. LIFETIMES

  6. None
  7. struct Z<'a, 'b> {
         a: &'a i32,
         b: &'b i32,
     }
     let z: Z;
  8. let z: Z;
     let a = 1;
     {
         let b = 2;
         z = Z { a: &a, b: &b };
     }
     println!("{} {}", z.a, z.b);
  9. error[E0597]: `b` does not live long enough

  10. error[E0597]: `b` does not live long enough

  11. fn append_str<'a>(x: &'a str, y: &str) -> &'a str {
          x
      }

      fn main() {
          println!("{}", append_str("viaplay", "group"));
      }
  12. fn append_str<'a>(x: &'a str, y: &str) -> &'a str {
          y
      }

      fn main() {
          println!("{}", append_str("viaplay", "group"));
      }
  13. error[E0621]: explicit lifetime required in the type of `y`

  14. error[E0621]: explicit lifetime required in the type of `y`

  15. let x: &'static str = "Hello, world."; println!("{}", x);

  16. let z: Z;
      {
          let a = 1;
          let b = 2;
          z = Z { a: &a, b: &b };
      }
      println!("{} {}", z.a, z.b);
  17. let a_but_lives_long_enough = 1;
      let za = {
          let b = 2;
          z = Z { a: &a_but_lives_long_enough, b: &b };
          z.a
      };
      println!("{}", za);
  18. There are other ways to fix it as well...
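
    One such alternative (a sketch, not shown in the deck): store owned values in the struct instead of references, so there are no lifetime parameters to satisfy. Since i32 is Copy, the inner scope is no longer a problem:

        // Owning the values removes the lifetime parameters entirely.
        struct ZOwned {
            a: i32,
            b: i32,
        }

        fn main() {
            let z: ZOwned;
            let a = 1;
            {
                let b = 2;
                z = ZOwned { a, b }; // copies `b`'s value; nothing is borrowed
            }
            println!("{} {}", z.a, z.b);
        }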

  19. None
  20. Async

  21. Why?

  22. Concurrent programming is less mature and "standardized" than regular, sequential

    programming.
  23. OS threads don't require any changes to the programming model,

    which makes it very easy to express concurrency. However, synchronizing between threads can be difficult, and the performance overhead is large. Thread pools can mitigate some of these costs, but not enough to support massive IO-bound workloads. other concurrency models
  24. OS threads don't require any changes to the programming model,

    which makes it very easy to express concurrency. However, synchronizing between threads can be difficult, and the performance overhead is large. Thread pools can mitigate some of these costs, but not enough to support massive IO-bound workloads. other concurrency models
  25. OS threads don't require any changes to the programming model,

    which makes it very easy to express concurrency. However, synchronizing between threads can be difficult, and the performance overhead is large. Thread pools can mitigate some of these costs, but not enough to support massive IO-bound workloads. other concurrency models
  26. Event-driven programming, in conjunction with callbacks, can be very performant,

    but tends to result in a verbose, "non-linear" control flow. Data flow and error propagation is often hard to follow. other concurrency models
  27. other concurrency models Event-driven programming, in conjunction with callbacks, can

    be very performant, but tends to result in a verbose, "non-linear" control flow. Data flow and error propagation is often hard to follow.
  28. Coroutines, like threads, don't require changes to the programming model,

    which makes them easy to use. Like async, they can also support a large number of tasks. However, they abstract away low-level details that are important for systems programming and custom runtime implementors. other concurrency models
  29. Coroutines, like threads, don't require changes to the programming model,

    which makes them easy to use. Like async, they can also support a large number of tasks. However, they abstract away low-level details that are important for systems programming and custom runtime implementors. other concurrency models
  30. The actor model divides all concurrent computation into units called

    actors, which communicate through fallible message passing, much like in distributed systems. The actor model can be efficiently implemented, but it leaves many practical issues unanswered, such as flow control and retry logic. other concurrency models
  31. The actor model divides all concurrent computation into units called

    actors, which communicate through fallible message passing, much like in distributed systems. The actor model can be efficiently implemented, but it leaves many practical issues unanswered, such as flow control and retry logic. other concurrency models
  32. Asynchronous programming allows highly performant implementations that are suitable for

    low-level languages like Rust, while providing most of the ergonomic benefits of threads and coroutines.
  33. Async in Rust Futures are inert in Rust and make

    progress only when polled. Dropping a future stops it from making further progress.
  34. Async in Rust Async is zero-cost in Rust, which means

    that you only pay for what you use.
  35. Async in Rust Specifically, you can use async without heap

    allocations and dynamic dispatch, which is great for performance! This also lets you use async in constrained environments, such as embedded systems.
  36. Async in Rust No built-in runtime is provided by Rust.

    Instead, runtimes are provided by community-maintained crates. (We will see one example later: Tokio.)
  37. Async in Rust Both single- and multithreaded runtimes are available

    in Rust, which have different strengths and weaknesses.
  38. Async vs threads in Rust The primary alternative to async

    in Rust is using OS threads. Either directly through std::thread or indirectly through a thread pool.
  39. Async vs threads in Rust OS threads are suitable for

    a small number of tasks, since threads come with CPU and memory overhead. Spawning and switching between threads is quite expensive as even idle threads consume system resources.
  40. Async vs threads in Rust A thread pool library can

    help mitigate some of these costs, but not all. However, threads let you reuse existing synchronous code without significant code changes—no particular programming model is required. In some operating systems, you can also change the priority of a thread, which is useful for drivers and other latency sensitive applications.
  41. Async vs threads in Rust Async provides significantly reduced CPU

    and memory overhead, especially for workloads with a large amount of IO-bound tasks, such as servers and databases. All else equal, you can have orders of magnitude more tasks than OS threads, because an async runtime uses a small amount of (expensive) threads to handle a large amount of (cheap) tasks.
  42. Async vs threads in Rust However, async Rust results in

    larger binary blobs due to the state machines generated from async functions and since each executable bundles an async runtime.

    On a last note, asynchronous programming is not better than threads, but different. If you don't need async for performance reasons, threads can often be the simpler alternative.
  43. Example: Concurrent downloading

  44. None
  45. However, downloading a web page is a small task; creating

    a thread for such a small amount of work is quite wasteful. For a larger application, it can easily become a bottleneck.
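
    The code on the preceding image slide is not transcribed; it is most likely the thread-based downloading example from the async book, which would look roughly like this (download here is a hypothetical blocking helper, not something defined in the deck):

        use std::thread;

        fn get_two_sites() {
            // Spawn one OS thread per page.
            let thread_one = thread::spawn(|| download("https://www.foo.com"));
            let thread_two = thread::spawn(|| download("https://www.bar.com"));

            // Wait for both threads to finish.
            thread_one.join().expect("thread one panicked");
            thread_two.join().expect("thread two panicked");
        }
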
  46. None
  47. Here, no extra threads are created. Additionally, all function calls

    are statically dispatched, and there are no heap allocations!
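
    Again the slide itself is an image; the async version it describes, following the async book and assuming a hypothetical download_async that returns a future, would be along these lines:

        async fn get_two_sites_async() {
            // Create two futures; they do nothing until polled.
            let future_one = download_async("https://www.foo.com");
            let future_two = download_async("https://www.bar.com");

            // Run both to completion concurrently on the current task,
            // without spawning any extra threads.
            futures::join!(future_one, future_two);
        }
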
  48. The State of Asynchronous in Rust

  49. • Outstanding runtime performance for typical concurrent workloads. • More

    frequent interaction with advanced language features, such as lifetimes and pinning. • Some compatibility constraints, both between sync and async code, and between different async runtimes. • Higher maintenance burden, due to the ongoing evolution of async runtimes and language support.
  50. • Outstanding runtime performance for typical concurrent workloads. • More

    frequent interaction with advanced language features, such as lifetimes and pinning. • Some compatibility constraints, both between sync and async code, and between different async runtimes. • Higher maintenance burden, due to the ongoing evolution of async runtimes and language support.
  51. • Outstanding runtime performance for typical concurrent workloads. • More

    frequent interaction with advanced language features, such as lifetimes and pinning. • Some compatibility constraints, both between sync and async code, and between different async runtimes. • Higher maintenance burden, due to the ongoing evolution of async runtimes and language support.
  52. • Outstanding runtime performance for typical concurrent workloads. • More

    frequent interaction with advanced language features, such as lifetimes and pinning. • Some compatibility constraints, both between sync and async code, and between different async runtimes. • Higher maintenance burden, due to the ongoing evolution of async runtimes and language support.
  53. Language and library support

  54. Futures

  55. async transforms a block of code into a state machine

    that implements a trait called Future.
  56. trait SimpleFuture {
          type Output;
          fn poll(&mut self, wake: fn()) -> Poll<Self::Output>;
      }

      enum Poll<T> {
          Ready(T),
          Pending,
      }
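
    As a small illustration (not from the deck), a future that is ready immediately could implement the simplified trait above like this:

        // Uses the `SimpleFuture` and `Poll` definitions from the slide above.
        struct AlwaysReady(i32);

        impl SimpleFuture for AlwaysReady {
            type Output = i32;

            fn poll(&mut self, _wake: fn()) -> Poll<Self::Output> {
                // Nothing to wait for: report completion on the first poll.
                Poll::Ready(self.0)
            }
        }
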
  57. [dependencies] futures = "0.3"

  58. use futures::executor::block_on;

      async fn hello_world() {
          println!("hello, world!");
      }

      fn main() {
          let future = hello_world(); // Nothing is printed
          block_on(future); // `future` is run and "hello, world!" is printed
      }
  59. fn main() { let song = block_on(learn_song()); block_on(sing_song(song)); block_on(dance()); }

  60. No No No No No No No No No No

    No No No No No No No
  61. .await

  62. Inside an async fn, you can use .await to wait

    for the completion of another type that implements the Future trait, such as the output of another async fn. Unlike block_on, .await doesn't block the current thread, but instead asynchronously waits for the future to complete, allowing other tasks to run if the future is currently unable to make progress.
  63. async fn learn_and_sing() {
          // Wait until the song has been learned before singing it.
          // We use `.await` here rather than `block_on` to prevent blocking the
          // thread, which makes it possible to `dance` at the same time.
          let song = learn_song().await;
          sing_song(song).await;
      }

      async fn async_main() {
          let f1 = learn_and_sing();
          let f2 = dance();

          // `join!` is like `.await` but can wait for multiple futures concurrently.
          // If we're temporarily blocked in the `learn_and_sing` future, the `dance`
          // future will take over the current thread. If `dance` becomes blocked,
          // `learn_and_sing` can take back over. If both futures are blocked, then
          // `async_main` is blocked and will yield to the executor.
          futures::join!(f1, f2);
      }

      fn main() {
          block_on(async_main());
      }
  64. In this example, learning the song must happen before singing

    the song, but both learning and singing can happen at the same time as dancing. If we used block_on(learn_song()) rather than learn_song().await in learn_and_sing, the thread wouldn't be able to do anything else while learn_song was running.
  65. This would make it impossible to dance at the same

    time. By .await-ing the learn_song future, we allow other tasks to take over the current thread if learn_song is blocked. This makes it possible to run multiple futures to completion concurrently on the same thread.
  66. Async / .await

  67. // `foo()` returns a type that implements `Future<Output = u8>`.
      // `foo().await` will result in a value of type `u8`.
      async fn foo() -> u8 { 5 }
  68. Lifetimes

  69. async fn foo(x: &u8) -> u8 { *x }

  70. fn foo_expanded<'a>(x: &'a u8) -> impl Future<Output = u8> + 'a {
          async move { *x }
      }
  71. 'a: the future must be .awaited while its non-'static arguments are still valid

  72. fn bad() -> impl Future<Output = u8> {
          let x = 5;
          borrow_x(&x) // ERROR: `x` does not live long enough
      }
  73. fn good() -> impl Future<Output = u8> {
          async {
              let x = 5;
              borrow_x(&x).await
          }
      }
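
    borrow_x is not defined on the slides; a definition consistent with the signatures used here (and with the async book this example comes from) would simply be:

        // Hypothetical helper assumed by the two snippets above.
        async fn borrow_x(x: &u8) -> u8 { *x }
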
  74. rust-lang.github.io/async-book

  75. Tokio

  76. Tokio is an event-driven, non-blocking I/O platform for writing asynchronous

    applications with the Rust programming language.
  77. None
  78. - Synchronization primitives, channels and timeouts, sleeps, and intervals. -

    APIs for performing asynchronous I/O, including TCP and UDP sockets, filesystem operations, and process and signal management. - A runtime for executing asynchronous code, including a task scheduler, an I/O driver backed by the operating system’s event queue (epoll, kqueue, IOCP, etc…), and a high performance timer.
  79. - Synchronization primitives, channels and timeouts, sleeps, and intervals. -

    APIs for performing asynchronous I/O, including TCP and UDP sockets, filesystem operations, and process and signal management. - A runtime for executing asynchronous code, including a task scheduler, an I/O driver backed by the operating system’s event queue (epoll, kqueue, IOCP, etc…), and a high performance timer.
  80. - Synchronization primitives, channels and timeouts, sleeps, and intervals. -

    APIs for performing asynchronous I/O, including TCP and UDP sockets, filesystem operations, and process and signal management. - A runtime for executing asynchronous code, including a task scheduler, an I/O driver backed by the operating system’s event queue (epoll, kqueue, IOCP, etc…), and a high performance timer.
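
    A minimal sketch (not from the deck) tying those pieces together: #[tokio::main] starts the runtime, tokio::spawn hands a task to the scheduler, and tokio::time::sleep uses the runtime's timer:

        use tokio::time::{sleep, Duration};

        #[tokio::main]
        async fn main() {
            // Schedule a task on the Tokio scheduler.
            let task = tokio::spawn(async {
                // Non-blocking sleep driven by Tokio's timer (not std::thread::sleep).
                sleep(Duration::from_millis(100)).await;
                "done"
            });

            println!("{}", task.await.unwrap());
        }
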
  81. Hands-on!

  82. https://tokio.rs/tokio/tutorial Coffee Break & Exercises (40min)

  83. Tracing Tower Hyper Axum

  84. Hyper
 
 An HTTP client and server library supporting both

    the HTTP 1 and 2 protocols.
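
    A minimal client sketch (not from the deck), assuming hyper 0.14 with the "full" feature, which was current when this talk was given:

        use hyper::{Client, Uri};

        #[tokio::main]
        async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
            // Plain-HTTP client; HTTPS needs an extra connector such as hyper-tls.
            let client = Client::new();
            let uri: Uri = "http://httpbin.org/ip".parse()?;

            let resp = client.get(uri).await?;
            println!("status: {}", resp.status());
            Ok(())
        }
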
  85. Tower
 
 Modular components for building reliable clients and servers.

    Includes retry, load-balancing, filtering, request-limiting facilities, and more.
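
    A sketch of what those components look like in code (not from the deck), assuming tower with the "util", "timeout" and "limit" features enabled:

        use std::time::Duration;
        use tower::{service_fn, Service, ServiceBuilder, ServiceExt};

        #[tokio::main]
        async fn main() {
            // Wrap a simple service with timeout and rate-limiting middleware.
            let mut svc = ServiceBuilder::new()
                .timeout(Duration::from_secs(5))
                .rate_limit(10, Duration::from_secs(1))
                .service(service_fn(|name: String| async move {
                    Ok::<_, std::convert::Infallible>(format!("hello, {name}"))
                }));

            let reply = svc.ready().await.unwrap().call(String::from("tower")).await.unwrap();
            println!("{reply}");
        }
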
  86. Tracing
 
 Unified insight into the application and libraries. Provides

    structured, event-based data collection and logging.
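
    A small sketch of structured logging with tracing (not from the deck); the fmt subscriber prints to stdout, and the OpenTelemetry slides later swap in an exporter instead:

        use tracing::{info, instrument};

        #[instrument] // records a span with the function's arguments as fields
        fn handle(user_id: u64) {
            info!(user_id, "handling request"); // structured, event-based data
        }

        fn main() {
            tracing_subscriber::fmt::init();
            handle(42);
        }
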
  87. Mio
 
 Minimal portable API on top of the operating-system's

    evented I/O API.
  88. Axum
 
 Ergonomic and modular web framework built with Tokio,

    Tower, and Hyper.
  89. Axum
 
 Ergonomic and modular web framework built with Tokio,

    Tower, and Hyper.
  90. use axum::{response::Html, routing::get, Router}; use std::net::SocketAddr;

  91. #[tokio::main]
      async fn main() {
          let app = Router::new().route("/", get(handler));

          // run it
          let addr = SocketAddr::from(([127, 0, 0, 1], 3000));
          println!("listening on {}", addr);
          axum::Server::bind(&addr)
              .serve(app.into_make_service())
              .await
              .unwrap();
      }

      async fn handler() -> Html<&'static str> {
          Html("<h1>Hello, World!</h1>")
      }
  92. https://github.com/raphamorim/axum-service-checklist

  93. OpenTelemetry

  94. use axum_tracing_opentelemetry::opentelemetry_tracing_layer;

      fn init_tracing() {
          use axum_tracing_opentelemetry::{
              make_resource,
              otlp,
              //stdio,
          };
      }
  95. // `.with` needs `use tracing_subscriber::layer::SubscriberExt;` in scope
      let otel_layer = tracing_opentelemetry::layer()
          .with_tracer(otel_tracer);

      let subscriber = tracing_subscriber::registry()
          .with(otel_layer);

      tracing::subscriber::set_global_default(subscriber)
          .unwrap();
  96. Router::new()
          // request processed inside span
          .route("/", get(health))
          // opentelemetry_tracing_layer sets up `TraceLayer`,
          // which is provided by tower-http, so you have
          // to add that as a dependency.
          .layer(opentelemetry_tracing_layer())
          .route("/health", get(health))
  97. None
  98. None
  99. None
  100. None
  101. AWS SDK

  102. [dependencies]
       aws-config = "0.48.0"
       aws-sdk-dynamodb = "0.18.0"
       tokio = { version = "1", features = ["full"] }
  103. use aws_sdk_dynamodb as dynamodb;

       #[tokio::main]
       async fn main() -> Result<(), dynamodb::Error> {
           let config = aws_config::load_from_env().await;
           // aws_config::from_conf(config_params);
           let client = dynamodb::Client::new(&config);
  104. use aws_sdk_dynamodb as dynamodb;

       #[tokio::main]
       async fn main() -> Result<(), dynamodb::Error> {
           let config = aws_config::load_from_env().await;
           // aws_config::from_conf(config_params);
           let client = dynamodb::Client::new(&config);
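
    The snippet above stops right after constructing the client; a hedged continuation (not from the deck) making a single call through the generated fluent builder could look like this, assuming credentials and a region are available in the environment:

        use aws_sdk_dynamodb as dynamodb;

        #[tokio::main]
        async fn main() -> Result<(), dynamodb::Error> {
            let config = aws_config::load_from_env().await;
            let client = dynamodb::Client::new(&config);

            // List the tables visible to these credentials in this region.
            let resp = client.list_tables().send().await?;
            println!("tables: {:?}", resp.table_names());

            Ok(())
        }
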
  105. Bonus: WebAssembly

  106. None
  107. https://github.com/raphamorim/LR35902

  108. That’s it folks.