
Rust 101 for Web

This talk and workshop were presented in Stockholm, Sweden. The goal of the presentation was to introduce Rust to web engineers, covering lifetimes, async, Tokio, the AWS SDK, OpenTelemetry, and more.

Raphael Amorim

September 01, 2022

Transcript

  1. Rust 101 for Web
     Raphael Amorim

  2. DISCLAIMER 1.0
    Don’t panic.


  3. DISCLAIMER 1.1
     Our goal is to kickstart with the web side, since it is
     impossible to cover the Rust basics in just a few hours.

  4. Agenda
     - Lifetimes & Async
     - Tokio
     - Exercises & Coffee Break (40min)
     - Tower, Hyper and Axum
     - Exercises & Coffee Break (50min)
     - AWS SDK & OpenTelemetry
     - Lambdas

  5. LIFETIMES


  6. (image-only slide)

  7. struct Z<'a, 'b> {
         a: &'a i32,
         b: &'b i32,
     }
     let z: Z;

  8. let z: Z;
     let a = 1;
     {
         let b = 2;
         z = Z { a: &a, b: &b };
     }
     println!("{} {}", z.a, z.b);

  9. error[E0597]:
    `b` does not live long enough


  10. error[E0597]:
    `b` does not live long enough


  11. fn append_str<'a>(x: &'a str, y: &str) -> &'a str {
          x
      }
      fn main() {
          println!("{}", append_str("viaplay", "group"));
      }

  12. fn append_str<'a>(x: &'a str, y: &str) -> &'a str {
          y
      }
      fn main() {
          println!("{}", append_str("viaplay", "group"));
      }

  13. error[E0621]: explicit lifetime required in the type of `y`


  14. error[E0621]: explicit lifetime required in the type of `y`


  15. let x: &'static str = "Hello, world.";
    println!("{}", x);


  16. let z: Z;
      {
          let a = 1;
          let b = 2;
          z = Z { a: &a, b: &b };
      }
      println!("{} {}", z.a, z.b);

  17. let a_but_lives_long_enough = 1;
      let za = {
          let b = 2;
          z = Z { a: &a_but_lives_long_enough, b: &b };
          z.a
      };
      println!("{}", za);

  18. There are other ways to fix it as well...
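
     For instance, here is a minimal sketch (not from the original deck) of one
     alternative: store owned values in the struct instead of references, so no
     lifetime parameters are needed at all.

     struct Z {
         a: i32,
         b: i32,
     }

     fn main() {
         let z;
         {
             let b = 2;
             z = Z { a: 1, b }; // `b` is copied into the struct, so nothing dangles
         }
         println!("{} {}", z.a, z.b);
     }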


  19. (image-only slide)

  20. Async


  21. Why?


  22. Concurrent programming is
    less mature and
    "standardized" than regular,
    sequential programming.


  23. OS threads don't require any changes to the
    programming model, which makes it very easy
    to express concurrency. However, synchronizing
    between threads can be difficult, and the
    performance overhead is large. Thread pools can
    mitigate some of these costs, but not enough to
    support massive IO-bound workloads.
    other concurrency models


  24. OS threads don't require any changes to the
    programming model, which makes it very easy
    to express concurrency. However, synchronizing
    between threads can be difficult, and the
    performance overhead is large. Thread pools can
    mitigate some of these costs, but not enough to
    support massive IO-bound workloads.
    other concurrency models


  25. OS threads don't require any changes to the
    programming model, which makes it very easy
    to express concurrency. However, synchronizing
    between threads can be difficult, and the
    performance overhead is large. Thread pools can
    mitigate some of these costs, but not enough to
    support massive IO-bound workloads.
    other concurrency models


  26. Event-driven programming, in
    conjunction with callbacks, can be very
    performant, but tends to result in a
    verbose, "non-linear" control flow.

    Data flow and error propagation is
    often hard to follow.
    other concurrency models


  27. other concurrency models
    Event-driven programming, in
    conjunction with callbacks, can be very
    performant, but tends to result in a
    verbose, "non-linear" control flow.

    Data flow and error propagation is
    often hard to follow.


  28. Coroutines, like threads, don't require changes
    to the programming model, which makes them
    easy to use. Like async, they can also support a
    large number of tasks.

    However, they abstract away low-level details
    that are important for systems programming
    and custom runtime implementors.
    other concurrency models


  29. Coroutines, like threads, don't require changes
    to the programming model, which makes them
    easy to use. Like async, they can also support a
    large number of tasks.

    However, they abstract away low-level details
    that are important for systems programming
    and custom runtime implementors.
    other concurrency models


  30. The actor model divides all concurrent
    computation into units called actors, which
    communicate through fallible message passing,
    much like in distributed systems.

    The actor model can be efficiently implemented,
    but it leaves many practical issues unanswered,
    such as flow control and retry logic.
    other concurrency models


  31. The actor model divides all concurrent
    computation into units called actors, which
    communicate through fallible message passing,
    much like in distributed systems.

    The actor model can be efficiently implemented,
    but it leaves many practical issues unanswered,
    such as flow control and retry logic.
    other concurrency models


  32. Asynchronous programming allows highly
    performant implementations that are
    suitable for low-level languages like Rust,
    while providing most of the ergonomic
    benefits of threads and coroutines.


  33. Async in Rust
    Futures are inert in Rust and make
    progress only when polled.

    Dropping a future stops it from making
    further progress.


  34. Async in Rust
    Async is zero-cost in Rust,
    which means that you only pay
    for what you use.


  35. Async in Rust
    Specifically, you can use async without heap
    allocations and dynamic dispatch, which is great for
    performance!

    This also lets you use async in constrained
    environments, such as embedded systems.


  36. Async in Rust
      No built-in runtime is provided by Rust.
      Instead, runtimes are provided by community-maintained crates.
      (We will see one example later, which is Tokio.)

  37. Async in Rust
    Both single- and multithreaded
    runtimes are available in Rust,
    which have different strengths and
    weaknesses.


  38. Async vs threads in Rust
    The primary alternative to async in Rust is
    using OS threads.

    Either directly through std::thread or
    indirectly through a thread pool.


  39. Async vs threads in Rust
    OS threads are suitable for a small
    number of tasks, since threads come
    with CPU and memory overhead.

    Spawning and switching between
    threads is quite expensive as even idle
    threads consume system resources.


  40. Async vs threads in Rust
    A thread pool library can help mitigate some of
    these costs, but not all. However, threads let
    you reuse existing synchronous code without
    significant code changes—no particular
    programming model is required. In some
    operating systems, you can also change the
    priority of a thread, which is useful for drivers
    and other latency sensitive applications.


  41. Async vs threads in Rust
    Async provides significantly reduced CPU and
    memory overhead, especially for workloads with a
    large amount of IO-bound tasks, such as servers and
    databases.

    All else equal, you can have orders of magnitude more
    tasks than OS threads, because an async runtime uses
    a small amount of (expensive) threads to handle a large
    amount of (cheap) tasks.


  42. Async vs threads in Rust
    However, async Rust results in larger binary blobs due
    to the state machines generated from async functions
    and since each executable bundles an async runtime. 


    On a last note, asynchronous programming is not
    better than threads, but different. If you don't need
    async for performance reasons, threads can often be
    the simpler alternative.


  43. Example: Concurrent
    downloading


  44. (image-only slide)
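
     Based on the surrounding slides, the image presumably shows a thread-per-download
     version in the spirit of the async book; a minimal sketch, where `download` is a
     hypothetical blocking helper:

     fn get_two_sites() {
         // Spawn two threads to do the work.
         let thread_one = std::thread::spawn(|| download("https://www.foo.com"));
         let thread_two = std::thread::spawn(|| download("https://www.bar.com"));

         // Wait for both threads to complete.
         thread_one.join().expect("thread one panicked");
         thread_two.join().expect("thread two panicked");
     }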

  45. However, downloading a web page is a
    small task; creating a thread for such a
    small amount of work is quite wasteful.

    For a larger application, it can easily
    become a bottleneck.


  46. (image-only slide)
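
     The async counterpart (again a sketch in the spirit of the async book, with
     `download_async` as a hypothetical async helper):

     async fn get_two_sites_async() {
         // Create two futures; nothing runs until they are polled.
         let future_one = download_async("https://www.foo.com");
         let future_two = download_async("https://www.bar.com");

         // Run both futures to completion concurrently on the current task.
         futures::join!(future_one, future_two);
     }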

  47. Here, no extra threads are created.

    Additionally, all function calls are
    statically dispatched, and there are
    no heap allocations!


  48. The State of
      Asynchronous Rust

  49. • Outstanding runtime performance for typical
    concurrent workloads.

    • More frequent interaction with advanced language
    features, such as lifetimes and pinning.

    • Some compatibility constraints, both between sync
    and async code, and between different async
    runtimes.

    • Higher maintenance burden, due to the ongoing
    evolution of async runtimes and language support.


  50. • Outstanding runtime performance for typical
    concurrent workloads.

    • More frequent interaction with advanced language
    features, such as lifetimes and pinning.

    • Some compatibility constraints, both between sync
    and async code, and between different async
    runtimes.

    • Higher maintenance burden, due to the ongoing
    evolution of async runtimes and language support.


  51. • Outstanding runtime performance for typical
    concurrent workloads.

    • More frequent interaction with advanced language
    features, such as lifetimes and pinning.

    • Some compatibility constraints, both between sync
    and async code, and between different async
    runtimes.

    • Higher maintenance burden, due to the ongoing
    evolution of async runtimes and language support.


  52. • Outstanding runtime performance for typical
    concurrent workloads.

    • More frequent interaction with advanced language
    features, such as lifetimes and pinning.

    • Some compatibility constraints, both between sync
    and async code, and between different async
    runtimes.

    • Higher maintenance burden, due to the ongoing
    evolution of async runtimes and language support.


  53. Language and library
    support


  54. Futures


  55. async transforms a block of code into a state
    machine that implements a trait called Future.


  56. trait SimpleFuture {
          type Output;
          fn poll(&mut self, wake: fn()) -> Poll<Self::Output>;
      }

      enum Poll<T> {
          Ready(T),
          Pending,
      }
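
      As an illustration (not on the slide), a future over this simplified trait
      that completes immediately could look like the following sketch:

      // Hypothetical example: a SimpleFuture that is ready right away.
      struct Ready(u8);

      impl SimpleFuture for Ready {
          type Output = u8;

          fn poll(&mut self, _wake: fn()) -> Poll<Self::Output> {
              Poll::Ready(self.0) // never returns Pending, so `wake` is unused
          }
      }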

  57. [dependencies]
    futures = "0.3"


  58. use futures::executor::block_on;

      async fn hello_world() {
          println!("hello, world!");
      }

      fn main() {
          let future = hello_world(); // Nothing is printed
          block_on(future); // `future` is run and "hello, world!" is printed
      }

  59. fn main() {
          let song = block_on(learn_song());
          block_on(sing_song(song));
          block_on(dance());
      }

  60. Nej ("No" in Swedish, repeated across the whole slide)

  61. .await


  62. Inside an async fn, you can use .await to wait for
    the completion of another type that implements the
    Future trait, such as the output of another async
    fn.
    Unlike block_on, .await doesn't block the current
    thread, but instead asynchronously waits for the
    future to complete, allowing other tasks to run if
    the future is currently unable to make progress.


  63. async fn learn_and_sing() {
          // Wait until the song has been learned before singing it.
          // We use `.await` here rather than `block_on` to prevent blocking the
          // thread, which makes it possible to `dance` at the same time.
          let song = learn_song().await;
          sing_song(song).await;
      }

      async fn async_main() {
          let f1 = learn_and_sing();
          let f2 = dance();

          // `join!` is like `.await` but can wait for multiple futures concurrently.
          // If we're temporarily blocked in the `learn_and_sing` future, the `dance`
          // future will take over the current thread. If `dance` becomes blocked,
          // `learn_and_sing` can take back over. If both futures are blocked, then
          // `async_main` is blocked and will yield to the executor.
          futures::join!(f1, f2);
      }

      fn main() {
          block_on(async_main());
      }

  64. In this example, learning the song must
    happen before singing the song, but both
    learning and singing can happen at the same
    time as dancing. If we used
    block_on(learn_song()) rather than
    learn_song().await in learn_and_sing, the
    thread wouldn't be able to do anything else
    while learn_song was running.


  65. This would make it impossible to dance at
    the same time. By .await-ing the learn_song
    future, we allow other tasks to take over
    the current thread if learn_song is
    blocked.
    This makes it possible to run multiple
    futures to completion concurrently on the
    same thread.


  66. Async / .await


  67. // `foo()` returns a type that implements `Future<Output = u8>`.
      // `foo().await` will result in a value of type `u8`.
      async fn foo() -> u8 { 5 }

  68. Lifetimes


  69. async fn foo(x: &u8) -> u8 { *x }

  70. fn foo_expanded<'a>(x: &'a u8) -> impl Future<Output = u8> + 'a {
          async move { *x }
      }

  71. 'a
      non-'static arguments that
      are still valid

  72. fn bad() -> impl Future<Output = u8> {
          let x = 5;
          // ERROR: `x` does not live long enough
          borrow_x(&x)
      }

  73. fn good() -> impl Future<Output = u8> {
          async {
              let x = 5;
              borrow_x(&x).await
          }
      }

  74. rust-lang.github.io/async-book

  75. Tokio


  76. Tokio is an event-driven, non-blocking I/O
    platform for writing asynchronous
    applications with the Rust programming
    language.


  77. (image-only slide)

  78. - Synchronization primitives, channels and timeouts, sleeps,
    and intervals.
    - APIs for performing asynchronous I/O, including TCP and
    UDP sockets, filesystem operations, and process and signal
    management.
    - A runtime for executing asynchronous code, including a
    task scheduler, an I/O driver backed by the operating
    system’s event queue (epoll, kqueue, IOCP, etc…), and a high
    performance timer.


  79. - Synchronization primitives, channels and timeouts, sleeps,
    and intervals.
    - APIs for performing asynchronous I/O, including TCP and
    UDP sockets, filesystem operations, and process and signal
    management.
    - A runtime for executing asynchronous code, including a
    task scheduler, an I/O driver backed by the operating
    system’s event queue (epoll, kqueue, IOCP, etc…), and a high
    performance timer.


  80. - Synchronization primitives, channels and timeouts, sleeps,
    and intervals.
    - APIs for performing asynchronous I/O, including TCP and
    UDP sockets, filesystem operations, and process and signal
    management.
    - A runtime for executing asynchronous code, including a
    task scheduler, an I/O driver backed by the operating
    system’s event queue (epoll, kqueue, IOCP, etc…), and a high
    performance timer.

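     As a quick illustration of those pieces working together (not from the deck),
     a minimal TCP echo server, assuming tokio = { version = "1", features = ["full"] }:

     use tokio::io::{AsyncReadExt, AsyncWriteExt};
     use tokio::net::TcpListener;

     #[tokio::main]
     async fn main() -> std::io::Result<()> {
         // The runtime's I/O driver watches this listener via the OS event queue.
         let listener = TcpListener::bind("127.0.0.1:8080").await?;
         loop {
             let (mut socket, _) = listener.accept().await?;
             // Each accepted connection becomes a cheap task on the scheduler.
             tokio::spawn(async move {
                 let mut buf = [0u8; 1024];
                 while let Ok(n) = socket.read(&mut buf).await {
                     if n == 0 {
                         break;
                     }
                     let _ = socket.write_all(&buf[..n]).await;
                 }
             });
         }
     }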

  81. Hands-on!


  82. https://tokio.rs/tokio/tutorial
      Exercises & Coffee Break (40min)

  83. Tracing Tower Hyper Axum



  84. Hyper


    An HTTP client and server library
    supporting both the HTTP 1 and 2 protocols.

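     For example, a minimal client sketch (not from the deck), assuming hyper 0.14
     with the "client", "http1", and "tcp" features plus a Tokio runtime:

     use hyper::Client;

     #[tokio::main]
     async fn main() -> Result<(), Box<dyn std::error::Error>> {
         // Plain-HTTP GET; HTTPS would need an extra connector such as hyper-tls.
         let client = Client::new();
         let resp = client.get("http://httpbin.org/ip".parse()?).await?;
         println!("status: {}", resp.status());

         let body = hyper::body::to_bytes(resp.into_body()).await?;
         println!("{}", String::from_utf8_lossy(&body));
         Ok(())
     }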

  85. Tower


    Modular components for building reliable clients
    and servers. Includes retry, load-balancing,
    filtering, request-limiting facilities, and more.

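     A small sketch (not from the deck) of the composition idea, assuming tower 0.4
     with the "util" and "timeout" features and a Tokio runtime; the inner service
     here is just an async function:

     use std::{convert::Infallible, time::Duration};
     use tower::{Service, ServiceBuilder, ServiceExt};

     #[tokio::main]
     async fn main() {
         // Wrap a simple function-service with a timeout middleware layer.
         let mut svc = ServiceBuilder::new()
             .timeout(Duration::from_secs(10))
             .service_fn(|name: String| async move { Ok::<_, Infallible>(format!("hello, {name}")) });

         // Wait until the service is ready, then call it like any other Service.
         let resp = svc.ready().await.unwrap().call("tower".to_string()).await.unwrap();
         println!("{resp}"); // prints "hello, tower"
     }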

  86. Tracing


    Unified insight into the application and
    libraries. Provides structured, event-based, data
    collection and logging.

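     For instance (not from the deck), with the tracing and tracing-subscriber crates:

     use tracing::{info, instrument};

     #[instrument] // records a span named after the function, with `user_id` as a field
     async fn handle_request(user_id: u64) {
         info!(user_id, "handling request"); // structured event attached to the current span
     }

     #[tokio::main]
     async fn main() {
         // Log structured events to stdout; exporters such as OpenTelemetry can be layered in.
         tracing_subscriber::fmt::init();
         handle_request(42).await;
     }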

  87. Mio


    Minimal portable API on top of the
    operating-system's evented I/O API.


  88. Axum


    Ergonomic and modular web framework built
    with Tokio, Tower, and Hyper.


  89. Axum


    Ergonomic and modular web framework built
    with Tokio, Tower, and Hyper.


  90. use axum::{response::Html, routing::get, Router};
      use std::net::SocketAddr;

  91. #[tokio::main]
      async fn main() {
          let app = Router::new().route("/", get(handler));

          // run it
          let addr = SocketAddr::from(([127, 0, 0, 1], 3000));
          println!("listening on {}", addr);
          axum::Server::bind(&addr)
              .serve(app.into_make_service())
              .await
              .unwrap();
      }

      async fn handler() -> Html<&'static str> {
          Html("Hello, World!")
      }

  92. https://github.com/raphamorim/axum-service-checklist

  93. OpenTelemetry

  94. use axum_tracing_opentelemetry::opentelemetry_tracing_layer;

      fn init_tracing() {
          use axum_tracing_opentelemetry::{
              make_resource,
              otlp,
              //stdio,
          };
      }

  95. let otel_layer = tracing_opentelemetry::layer()
          .with_tracer(otel_tracer);
      let subscriber = tracing_subscriber::registry()
          .with(otel_layer);
      tracing::subscriber::set_global_default(subscriber)
          .unwrap();

  96. Router::new()
          // request processed inside span
          .route("/", get(health))
          // opentelemetry_tracing_layer sets up `TraceLayer`,
          // which is provided by tower-http, so you have
          // to add that as a dependency.
          .layer(opentelemetry_tracing_layer())
          .route("/health", get(health))
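      A rough sketch (not from the deck) of how the snippets above could be wired
      together; the `health` handler is hypothetical and exact helper names depend
      on the axum-tracing-opentelemetry version in use:

      #[tokio::main]
      async fn main() {
          init_tracing(); // install the OTLP tracing subscriber (slides 94-95)

          let app = Router::new()
              .route("/", get(health))
              .layer(opentelemetry_tracing_layer()) // one span per HTTP request (slide 96)
              .route("/health", get(health)); // routes added after the layer are not traced

          let addr = SocketAddr::from(([127, 0, 0, 1], 3000));
          axum::Server::bind(&addr)
              .serve(app.into_make_service())
              .await
              .unwrap();
      }

      // Hypothetical handler used by the routes above.
      async fn health() -> &'static str {
          "ok"
      }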

  97. (image-only slide)

  98. (image-only slide)

  99. (image-only slide)

  100. (image-only slide)

  101. AWS SDK

  102. [dependencies]
    aws-config = "0.48.0"
    aws-sdk-dynamodb = "0.18.0"
    tokio = { version = "1", features = ["full"] }


  103. use aws_sdk_dynamodb as dynamodb;

       #[tokio::main]
       async fn main() -> Result<(), dynamodb::Error> {
           let config = aws_config::load_from_env().await;
           // aws_config::from_conf(config_params);
           let client = dynamodb::Client::new(&config);

  104. use aws_sdk_dynamodb as dynamodb;

       #[tokio::main]
       async fn main() -> Result<(), dynamodb::Error> {
           let config = aws_config::load_from_env().await;
           // aws_config::from_conf(config_params);
           let client = dynamodb::Client::new(&config);
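       To make that end to end (not part of the deck), a minimal complete program
       might look like the sketch below, using the crate versions from slide 102;
       `list_tables` is just an arbitrary first call to try:

       use aws_sdk_dynamodb as dynamodb;

       #[tokio::main]
       async fn main() -> Result<(), dynamodb::Error> {
           // Credentials and region are picked up from the environment / AWS profile.
           let config = aws_config::load_from_env().await;
           let client = dynamodb::Client::new(&config);

           // List the DynamoDB tables visible to these credentials.
           let resp = client.list_tables().send().await?;
           println!("tables: {:?}", resp.table_names());
           Ok(())
       }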

  105. Bonus: WebAssembly


  106. (image-only slide)

  107. https://github.com/raphamorim/LR35902

  108. That’s it folks.
