
Rust 101 for Web

This talk and workshop were presented in Stockholm, Sweden. The goal of the presentation was to introduce Rust to web engineers, covering Tokio, async, lifetimes, the AWS SDK, OpenTelemetry, and more.

Raphael Amorim

September 01, 2022


Transcript

  1. for Web
    Raphael Amorim
    Rust 101


  2. DISCLAIMER 1.0
    Don’t panic.


  3. DISCLAIMER 1.1
    Our goal is to kickstart with the Web,
    since it is impossible to cover
    Rust basics in a few hours.

  4. Agenda
    - Lifetimes & Async


    - Tokio


    - Exercises & Coffee Break (40min)


    - Tower, Hyper and Axum


    - Exercises & Coffee Break (50min)


    - AWS SDK & Open Telemetry


    - Lambdas


  5. struct Z<'a, 'b> {
    a: &'a i32,
    b: &'b i32,
    }
    let z: Z;


  6. let z: Z;
    let a = 1;
    {
    let b = 2;
    z = Z{ a: &a, b: &b };
    }
    println!("{} {}", z.a, z.b);


  7. error[E0597]:
    `b` does not live long enough



  9. fn append_str<'a>(x: &'a str, y: &str) -> &'a str {
        x
    }
    fn main() {
        println!("{}", append_str("viaplay", "group"));
    }

  10. fn append_str<'a>(x: &'a str, y: &str) -> &'a str {
        y
    }
    fn main() {
        println!("{}", append_str("viaplay", "group"));
    }

  11. error[E0621]: explicit lifetime required in the type of `y`



  13. let x: &'static str = "Hello, world.";
    println!("{}", x);


  14. let z: Z;
    {
    let a = 1;
    let b = 2;
    z = Z{ a: &a, b: &b };
    }
    println!("{} {}", z.a, z.b);


  15. let a_but_lives_long_enough = 1;
    let z: Z;
    let za = {
        let b = 2;
        z = Z { a: &a_but_lives_long_enough, b: &b };
        z.a
    };
    println!("{}", za);

  16. There are other ways to fix it as well...
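One of those other ways, for instance, is to avoid borrows entirely. A minimal sketch, assuming it is acceptable to store owned values: since `i32` is `Copy`, the struct can hold its own copies and no lifetimes are involved (`ZOwned` is a made-up name, not from the slides):

```rust
// Owning the values instead of borrowing them removes the
// lifetime problem entirely: `i32` is `Copy`, so each field
// holds its own copy and no reference can outlive its source.
struct ZOwned {
    a: i32,
    b: i32,
}

fn main() {
    let z_owned;
    {
        let a = 1;
        let b = 2;
        z_owned = ZOwned { a, b }; // copies, no borrows
    } // `a` and `b` go out of scope here, but `z_owned` is unaffected
    println!("{} {}", z_owned.a, z_owned.b);
}
```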

  17. Concurrent programming is
    less mature and
    "standardized" than regular,
    sequential programming.


  18. OS threads don't require any changes to the
    programming model, which makes it very easy
    to express concurrency. However, synchronizing
    between threads can be difficult, and the
    performance overhead is large. Thread pools can
    mitigate some of these costs, but not enough to
    support massive IO-bound workloads.
    other concurrency models



  21. Event-driven programming, in
    conjunction with callbacks, can be very
    performant, but tends to result in a
    verbose, "non-linear" control flow.

    Data flow and error propagation is
    often hard to follow.
    other concurrency models



  23. Coroutines, like threads, don't require changes
    to the programming model, which makes them
    easy to use. Like async, they can also support a
    large number of tasks.

    However, they abstract away low-level details
    that are important for systems programming
    and custom runtime implementors.
    other concurrency models



  25. The actor model divides all concurrent
    computation into units called actors, which
    communicate through fallible message passing,
    much like in distributed systems.

    The actor model can be efficiently implemented,
    but it leaves many practical issues unanswered,
    such as flow control and retry logic.
    other concurrency models



  27. Asynchronous programming allows highly
    performant implementations that are
    suitable for low-level languages like Rust,
    while providing most of the ergonomic
    benefits of threads and coroutines.


  28. Async in Rust
    Futures are inert in Rust and make
    progress only when polled.

    Dropping a future stops it from making
    further progress.


  29. Async in Rust
    Async is zero-cost in Rust,
    which means that you only pay
    for what you use.


  30. Async in Rust
    Specifically, you can use async without heap
    allocations and dynamic dispatch, which is great for
    performance!

    This also lets you use async in constrained
    environments, such as embedded systems.


  31. Async in Rust
    No built-in runtime is provided by Rust.

    Instead, runtimes are provided by
    community-maintained crates.

    (We will see one example later: Tokio)

  32. Async in Rust
    Both single- and multithreaded
    runtimes are available in Rust,
    which have different strengths and
    weaknesses.


  33. Async vs threads in Rust
    The primary alternative to async in Rust is
    using OS threads.

    Either directly through std::thread or
    indirectly through a thread pool.

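The direct `std::thread` route looks like this — a minimal, standard-library-only sketch (the closures and values are invented for illustration):

```rust
use std::thread;

fn main() {
    // Each `spawn` creates a real OS thread; the closure's return
    // value is handed back through the `JoinHandle`.
    let handle_a = thread::spawn(|| 21 + 21);
    let handle_b = thread::spawn(|| "done".to_string());

    // `join` blocks the calling thread until the child finishes.
    let sum = handle_a.join().unwrap();
    let msg = handle_b.join().unwrap();
    println!("{} {}", sum, msg); // prints "42 done"
}
```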

  34. Async vs threads in Rust
    OS threads are suitable for a small
    number of tasks, since threads come
    with CPU and memory overhead.

    Spawning and switching between
    threads is quite expensive as even idle
    threads consume system resources.


  35. Async vs threads in Rust
    A thread pool library can help mitigate some of
    these costs, but not all. However, threads let
    you reuse existing synchronous code without
    significant code changes; no particular
    programming model is required. In some
    operating systems, you can also change the
    priority of a thread, which is useful for drivers
    and other latency-sensitive applications.

  36. Async vs threads in Rust
    Async provides significantly reduced CPU and
    memory overhead, especially for workloads with a
    large amount of IO-bound tasks, such as servers and
    databases.

    All else equal, you can have orders of magnitude more
    tasks than OS threads, because an async runtime uses
    a small amount of (expensive) threads to handle a large
    amount of (cheap) tasks.


  37. Async vs threads in Rust
    However, async Rust results in larger binaries due
    to the state machines generated from async functions,
    and because each executable bundles an async runtime.

    As a final note, asynchronous programming is not
    better than threads, just different. If you don't need
    async for performance reasons, threads can often be
    the simpler alternative.

  38. Example: Concurrent
    downloading


  39. However, downloading a web page is a
    small task; creating a thread for such a
    small amount of work is quite wasteful.

    For a larger application, it can easily
    become a bottleneck.

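The thread-per-download version the slide is criticizing looks roughly like this. `download` is a hypothetical stand-in that fabricates a body instead of doing real HTTP, so the sketch stays self-contained; the URLs are placeholders:

```rust
use std::thread;

// Hypothetical stand-in for a real HTTP fetch.
fn download(url: &str) -> String {
    format!("<html>{}</html>", url)
}

fn get_two_sites() -> (String, String) {
    // One OS thread per page: fine for two downloads,
    // wasteful once you have thousands of them.
    let a = thread::spawn(|| download("https://example.com/a"));
    let b = thread::spawn(|| download("https://example.com/b"));
    (a.join().unwrap(), b.join().unwrap())
}

fn main() {
    let (a, b) = get_two_sites();
    println!("{} bytes fetched", a.len() + b.len());
}
```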

  40. Here, no extra threads are created.

    Additionally, all function calls are
    statically dispatched, and there are
    no heap allocations!


  41. The State of


    Asynchronous in Rust


  42. • Outstanding runtime performance for typical
    concurrent workloads.

    • More frequent interaction with advanced language
    features, such as lifetimes and pinning.

    • Some compatibility constraints, both between sync
    and async code, and between different async
    runtimes.

    • Higher maintenance burden, due to the ongoing
    evolution of async runtimes and language support.



  46. Language and library
    support


  47. async transforms a block of code into a state
    machine that implements a trait called Future.

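That transformation can be observed with only the standard library: an `async` block evaluates to an anonymous value implementing `std::future::Future`, which does nothing until polled. The busy-polling loop below is a toy stand-in for a real executor, not how one should be written:

```rust
use std::future::Future;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A waker that does nothing -- sufficient for a future
// that completes on its first poll.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn main() {
    // The `async` block is compiled into a state machine that
    // implements `Future<Output = i32>`.
    let mut fut = Box::pin(async { 1 + 1 });
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);

    // A real executor parks until woken; busy-polling is enough
    // here because this future is ready on the first poll.
    loop {
        if let Poll::Ready(v) = fut.as_mut().poll(&mut cx) {
            println!("result: {}", v);
            break;
        }
    }
}
```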

  48. trait SimpleFuture {
        type Output;
        fn poll(&mut self, wake: fn()) -> Poll<Self::Output>;
    }

    enum Poll<T> {
        Ready(T),
        Pending,
    }
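A tiny implementor makes the polling contract concrete. The trait and `Poll` enum are repeated from the slide so the sketch compiles on its own; `ReadyNow` is a made-up example type:

```rust
// Simplified trait from the slide, repeated for self-containment.
trait SimpleFuture {
    type Output;
    fn poll(&mut self, wake: fn()) -> Poll<Self::Output>;
}

enum Poll<T> {
    Ready(T),
    Pending,
}

// A future that is ready on its very first poll.
struct ReadyNow(i32);

impl SimpleFuture for ReadyNow {
    type Output = i32;
    fn poll(&mut self, _wake: fn()) -> Poll<i32> {
        // Nothing to wait on, so we never return `Pending`
        // and never need to call `wake`.
        Poll::Ready(self.0)
    }
}

fn main() {
    fn do_nothing() {}
    let mut fut = ReadyNow(7);
    match fut.poll(do_nothing) {
        Poll::Ready(v) => println!("ready: {}", v),
        Poll::Pending => println!("pending"),
    }
}
```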

  49. [dependencies]
    futures = "0.3"


  50. use futures::executor::block_on;

    async fn hello_world() {
        println!("hello, world!");
    }

    fn main() {
        let future = hello_world(); // Nothing is printed
        block_on(future); // `future` is run and "hello, world!" is printed
    }

  51. fn main() {
        let song = block_on(learn_song());
        block_on(sing_song(song));
        block_on(dance());
    }

  52. Nej ("no" in Swedish, repeated across the whole slide)

  53. Inside an async fn, you can use .await to wait for
    the completion of another type that implements the
    Future trait, such as the output of another async
    fn.
    Unlike block_on, .await doesn't block the current
    thread, but instead asynchronously waits for the
    future to complete, allowing other tasks to run if
    the future is currently unable to make progress.


  54. async fn learn_and_sing() {
        // Wait until the song has been learned before singing it.
        // We use `.await` here rather than `block_on` to prevent blocking the
        // thread, which makes it possible to `dance` at the same time.
        let song = learn_song().await;
        sing_song(song).await;
    }

    async fn async_main() {
        let f1 = learn_and_sing();
        let f2 = dance();

        // `join!` is like `.await` but can wait for multiple futures concurrently.
        // If we're temporarily blocked in the `learn_and_sing` future, the `dance`
        // future will take over the current thread. If `dance` becomes blocked,
        // `learn_and_sing` can take back over. If both futures are blocked, then
        // `async_main` is blocked and will yield to the executor.
        futures::join!(f1, f2);
    }

    fn main() {
        block_on(async_main());
    }

  55. In this example, learning the song must
    happen before singing the song, but both
    learning and singing can happen at the same
    time as dancing. If we used
    block_on(learn_song()) rather than
    learn_song().await in learn_and_sing, the
    thread wouldn't be able to do anything else
    while learn_song was running.


  56. This would make it impossible to dance at
    the same time. By .await-ing the learn_song
    future, we allow other tasks to take over
    the current thread if learn_song is
    blocked.
    This makes it possible to run multiple
    futures to completion concurrently on the
    same thread.


  57. Async / .await


  58. // `foo()` returns a type that implements `Future<Output = u8>`.
    // `foo().await` will result in a value of type `u8`.
    async fn foo() -> u8 { 5 }

  59. async fn foo(x: &u8) -> u8 { *x }

  60. fn foo_expanded<'a>(x: &'a u8) -> impl Future<Output = u8> + 'a {
        async move { *x }
    }

  61. 'a
    the returned future must be .awaited while its
    non-'static arguments are still valid

  62. fn bad() -> impl Future<Output = u8> {
        let x = 5;
        // ERROR: `x` does not live long enough
        borrow_x(&x)
    }

  63. fn good() -> impl Future<Output = u8> {
        async {
            let x = 5;
            borrow_x(&x).await
        }
    }

  64. rust-lang.github.io/
    async-book


  65. Tokio is an event-driven, non-blocking I/O
    platform for writing asynchronous
    applications with the Rust programming
    language.


  66. - Synchronization primitives, channels and timeouts, sleeps,
    and intervals.
    - APIs for performing asynchronous I/O, including TCP and
    UDP sockets, filesystem operations, and process and signal
    management.
    - A runtime for executing asynchronous code, including a
    task scheduler, an I/O driver backed by the operating
    system’s event queue (epoll, kqueue, IOCP, etc…), and a high
    performance timer.



  69. https://tokio.rs/tokio/tutorial
    Coffee Break & Exercises (40min)

  70. Tracing Tower Hyper Axum



  71. Hyper


    An HTTP client and server library
    supporting both the HTTP 1 and 2 protocols.


  72. Tower


    Modular components for building reliable clients
    and servers. Includes retry, load-balancing,
    filtering, request-limiting facilities, and more.


  73. Tracing

    Unified insight into the application and
    libraries. Provides structured, event-based
    data collection and logging.

  74. Mio

    Minimal portable API on top of the
    operating system's evented I/O API.

  75. Axum


    Ergonomic and modular web framework built
    with Tokio, Tower, and Hyper.



  77. use axum::{response::Html, routing::get, Router};
    use std::net::SocketAddr;

  78. #[tokio::main]
    async fn main() {
        let app = Router::new().route("/", get(handler));

        // run it
        let addr = SocketAddr::from(([127, 0, 0, 1], 3000));
        println!("listening on {}", addr);
        axum::Server::bind(&addr)
            .serve(app.into_make_service())
            .await
            .unwrap();
    }

    async fn handler() -> Html<&'static str> {
        Html("Hello, World!")
    }

  79. https://
    github.com/
    raphamorim/
    axum-service-checklist


  80. OpenTelemetry

  81. use axum_tracing_opentelemetry::opentelemetry_tracing_layer;
    fn init_tracing() {
    use axum_tracing_opentelemetry::{
    make_resource,
    otlp,
    //stdio,
    };
    }


  82. let otel_layer = tracing_opentelemetry::layer()
    .with_tracer(otel_tracer);
    let subscriber = tracing_subscriber::registry()
    .with(otel_layer);
    tracing::subscriber::set_global_default(subscriber)
    .unwrap();


  83. Router::new()
        // request processed inside span
        .route("/", get(health))
        // `opentelemetry_tracing_layer` sets up `TraceLayer`,
        // which is provided by tower-http, so you have
        // to add that as a dependency.
        .layer(opentelemetry_tracing_layer())
        .route("/health", get(health))

  84. [dependencies]
    aws-config = "0.48.0"
    aws-sdk-dynamodb = "0.18.0"
    tokio = { version = "1", features = ["full"] }


  85. use aws_sdk_dynamodb as dynamodb;
    #[tokio::main]
    async fn main() -> Result<(), dynamodb::Error> {
    let config = aws_config::load_from_env().await;
    // aws_config::from_conf(config_params);
    let client = dynamodb::Client::new(&config);



  87. Bonus: WebAssembly


  88. https://
    github.com/
    raphamorim/
    LR35902


  89. That’s it folks.
