Note that we need to pass in an Events collection to mio's Poll::poll method, but since there is only one top-level future to run, we don't really care which event happened; we only care that something happened and that it most likely means that data is ready (remember: we always have to account for false wakeups anyway).

That's all the changes we need to make to the runtime module for now. The last thing we need to do is register interest for read events after we've written the request to the server in our http module. Let's open http.rs and make some changes.

http.rs

First of all, let's adjust our dependencies so that we pull in everything we need:

ch08/a-runtime/src/http.rs

use crate::{future::PollState, runtime, Future};
use mio::{Interest, Token};
use std::io::{ErrorKind, Read, Write};

We need to add a dependency on our runtime module as well as a few types from mio.

We only need to make one more change in this file, and that's in our Future::poll implementation, so let's go ahead and locate that. The implementation is exactly the same as before, with one important change that I've highlighted for you:

ch08/a-runtime/src/http.rs

impl Future for HttpGetFuture {
    type Output = String;

    fn poll(&mut self) -> PollState<Self::Output> {
        if self.stream.is_none() {
            println!("FIRST POLL-START OPERATION");
            self.write_request();
            runtime::registry()
                .register(self.stream.as_mut().unwrap(), Token(0), Interest::READABLE)
                .unwrap();
        }

        let mut buff = vec![0u8; 4096];
        loop {
            match self.stream.as_mut().unwrap().read(&mut buff) {
                Ok(0) => {
                    let s = String::from_utf8_lossy(&self.buffer);
                    break PollState::Ready(s.to_string());
                }
                Ok(n) => {
                    self.buffer.extend(&buff[0..n]);
                    continue;
                }
                Err(e) if e.kind() == ErrorKind::WouldBlock => {
                    break PollState::NotReady;
                }
                Err(e) => panic!("{e:?}"),
            }
        }
    }
}

On the first poll, after we've written the request, we register interest in READABLE events on this TcpStream. We also removed the line:

return PollState::NotReady;

By removing this line, we'll poll TcpStream immediately, which makes sense since we don't really want to return control to our scheduler if we get the response immediately. You wouldn't go wrong either way here since we registered our TcpStream as an event source with our reactor and would get a wakeup in any case.

These changes were the last piece we needed to get our example back up and running. If you remember the version from Chapter 7, we got the following output:

Program starting
FIRST POLL-START OPERATION
Schedule other tasks
Schedule other tasks
Schedule other tasks
Schedule other tasks
Schedule other tasks
Schedule other tasks
Schedule other tasks
HTTP/1.1 200 OK
content-length: 11
connection: close
content-type: text/plain; charset=utf-8
date: Thu, 16 Nov xxxx xx:xx:xx GMT

HelloWorld1
FIRST POLL-START OPERATION
Schedule other tasks
Schedule other tasks
Schedule other tasks
Schedule other tasks
Schedule other tasks
HTTP/1.1 200 OK
content-length: 11
connection: close
content-type: text/plain; charset=utf-8
date: Thu, 16 Nov xxxx xx:xx:xx GMT

HelloWorld2

In our new and improved version, we get the following output if we run it with cargo run:

Program starting
FIRST POLL-START OPERATION
Schedule other tasks
HTTP/1.1 200 OK
content-length: 11
connection: close
content-type: text/plain; charset=utf-8
date: Thu, 16 Nov xxxx xx:xx:xx GMT

HelloAsyncAwait
FIRST POLL-START OPERATION
Schedule other tasks
HTTP/1.1 200 OK
content-length: 11
connection: close
content-type: text/plain; charset=utf-8
date: Thu, 16 Nov xxxx xx:xx:xx GMT
HelloAsyncAwait

Note

If you run the example on Windows, you'll see that you get two Schedule other tasks messages after each other. The reason for that is that Windows emits an extra event when the TcpStream is dropped on the server end. This doesn't happen on Linux. Filtering out these events is quite simple, but we won't focus on doing that in our example since it's more of an optimization that we don't really need for our example to work.
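The book doesn't show the filtering code, but as a rough illustration only (not part of our runtime), mio 0.8's event accessors could be used to skip such notifications; exactly which flags Windows sets may vary, so treat this as a sketch:

for event in events.iter() {
    // Skip notifications that only report the peer closing its end
    // of the stream without any data left to read.
    if event.is_read_closed() && !event.is_readable() {
        continue;
    }
    // ... handle the event as usual
}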
The thing to make a note of here is how many times we printed Schedule other tasks. We print this message every time we poll and get NotReady. In the first version, we printed this every 100 ms, but that's just because we had to add a delay on each sleep so as not to get overwhelmed with printouts. Without it, our CPU would work at 100% on polling the future. If we add a delay, we also add latency, even if we make the delay much shorter than 100 ms, since we won't be able to respond to events immediately. Our new design makes sure that we respond to events as soon as they're ready, and we do no unnecessary work.

So, by making these minor changes, we have already created a much better and more scalable version than we had before. This version is fully single-threaded, which keeps things simple and avoids the complexity and overhead of synchronization. When you use Tokio's current-thread scheduler, you get a scheduler that is based on the same idea as we showed here.

However, there are also some drawbacks to our current implementation, and the most noticeable one is that it requires a very tight integration between the reactor part and the executor part of the runtime, centered on Poll.

We want to yield to the OS scheduler when there is no work to do and have the OS wake us up when an event has happened so that we can progress. In our current design, this is done through blocking on Poll::poll. Consequently, both the executor (scheduler) and the reactor must know about Poll. The downside is, then, that if you've created an executor that suits a specific use case perfectly and want to allow users to use a different reactor that doesn't rely on Poll, you can't.

More importantly, you might want to run multiple different reactors that wake up the executor for different reasons. You might find that there is something that mio doesn't support, so you create a different reactor for those tasks. How are they supposed to wake up the executor when it's blocking on mio::Poll::poll(...)?

To give you a few examples, you could use a separate reactor for handling timers (for example, when you want a task to sleep for a given time), or you might want to implement a thread pool for handling CPU-intensive or blocking tasks as a reactor that wakes up the corresponding future when the task is ready.

To solve these problems, we need loose coupling between the reactor and executor parts of the runtime by having a way to wake up the executor that's not tightly coupled to a single reactor implementation. Let's look at how we can solve this problem by creating a better runtime design.

Creating a proper runtime

So, if we visualize the degree of dependency between the different parts of our runtime, our current design could be described this way:

Figure 8.5 - Tight coupling between reactor and executor

If we want loose coupling between the reactor and executor, we need an interface we can use to signal the executor that it should wake up when an event that allows a future to progress has occurred. It's no coincidence that this type is called Waker (https://doc.rust-lang.org/stable/std/task/struct.Waker.html) in Rust's standard library. If we change our visualization to reflect this, it will look something like this:
Figure 8.6 - A loosely coupled reactor and executor

It's no coincidence that we land on the same design as what we have in Rust today. It's a minimal design from Rust's point of view, but it allows for a wide variety of runtime designs without imposing too many restrictions going forward.

Note

Even though the design is pretty minimal today from a language perspective, there are plans to stabilize more async-related traits and interfaces in the future. Rust has a working group tasked with including widely used traits and interfaces in the standard library, which you can find more information about here: https://rust-lang.github.io/wg-async/welcome.html. You can also get an overview of the items they work on and track their progress here: https://github.com/orgs/rust-lang/projects/28/views/1. Maybe you even want to get involved (https://rust-lang.github.io/wg-async/welcome.html#-getting-involved) in making async Rust better for everyone after reading this book?

If we change our system diagram to reflect the changes we need to make to our runtime going forward, it will look like this:
Figure 8.7 - Executor and reactor: final design

We have two parts that have no direct dependency on each other. We have an Executor that schedules tasks and passes on a Waker when polling a future, which will eventually be caught and stored by the Reactor. When the Reactor receives a notification that an event is ready, it locates the Waker associated with that task and calls Waker::wake on it. This enables us to:

- Run several OS threads that each have their own executor, but share the same reactor
- Have multiple reactors that handle different kinds of leaf futures and make sure to wake up the correct executor when it can progress

So, now that we have an idea of what to do, it's time to start writing it in code.
Step 1 - Improving our runtime design by adding a Reactor and a Waker

In this step, we'll make the following changes:

1. Change the project structure so that it reflects our new design.
2. Find a way for the executor to sleep and wake up that does not rely directly on Poll, and create a Waker based on this that allows us to wake up the executor and identify which task is ready to progress.
3. Change the trait definition for Future so that poll takes a &Waker as an argument.

Tip

You'll find this example in the ch08/b-reactor-executor folder. If you follow along by writing the examples from the book, I suggest that you create a new project called b-reactor-executor for this example by following these steps:

1. Create a new folder called b-reactor-executor.
2. Enter the newly created folder and write cargo init.
3. Copy everything in the src folder in the previous example, a-runtime, into the src folder of the new project.
4. Copy the dependencies section of the Cargo.toml file into the Cargo.toml file in the new project.

Let's start by making some changes to our project structure to set it up so that we can build on it going forward. The first thing we do is divide our runtime module into two submodules, reactor and executor:

1. Create a new subfolder in the src folder called runtime.
2. Create two new files in the runtime folder called reactor.rs and executor.rs.
3. Just below the imports in runtime.rs, declare the two new modules by adding these lines:

mod executor;
mod reactor;

You should now have a folder structure that looks like this:

src
|-- runtime
|   |-- executor.rs
|   |-- reactor.rs
|-- future.rs
|-- http.rs
|-- main.rs
|-- runtime.rs

To set everything up, we start by deleting everything in runtime.rs and replacing it with the following lines of code:

ch08/b-reactor-executor/src/runtime.rs

pub use executor::{spawn, Executor, Waker};
pub use reactor::reactor;

mod executor;
mod reactor;

pub fn init() -> Executor {
    reactor::start();
    Executor::new()
}

The new content of runtime.rs first declares two submodules called executor and reactor. We then declare one function called init that starts our Reactor and creates a new Executor that it returns to the caller.

The next point on our list is to find a way for our Executor to sleep and wake up when needed without relying on Poll.

Creating a Waker

So, we need to find a different way for our executor to sleep and get woken up that doesn't rely directly on Poll. It turns out that this is quite easy. The standard library gives us what we need to get something working.

By calling std::thread::current(), we can get a Thread object. This object is a handle to the current thread, and it gives us access to a few methods, one of which is unpark. The standard library also gives us a method called std::thread::park(), which simply asks the OS scheduler to park our thread until we ask for it to get unparked later on.

It turns out that if we combine these, we have a way to both park and unpark the executor, which is exactly what we need.
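To see these two in isolation, here's a tiny standalone program, separate from our runtime, where one thread unparks another:

use std::thread;

fn main() {
    // A handle to the main thread that we can move to another thread.
    let main_thread = thread::current();

    thread::spawn(move || {
        // Wake the main thread. An unpark that happens before park is
        // remembered as a token, so this isn't racy even if it runs first.
        main_thread.unpark();
    });

    // Yields to the OS scheduler until we're unparked (park may also
    // return spuriously, so real code re-checks its condition).
    thread::park();
    println!("Main thread was unparked");
}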
Let's create a Waker type based on this. In our example, we'll define the Waker inside the executor module since that's where we create this exact type of Waker, but you could argue that it belongs to the future module since it's a part of the Future trait.

Important note

Our Waker relies on calling park/unpark on the Thread type from the standard library. This is OK for our example since it's easy to understand, but given that any part of the code (including any libraries you use) can get a handle to the same thread by calling std::thread::current() and call park/unpark on it, it's not a robust solution. If unrelated parts of the code call park/unpark on the same thread, we can miss wakeups or end up in deadlocks. Most production libraries create their own Parker type or rely on something such as crossbeam::sync::Parker (https://docs.rs/crossbeam/latest/crossbeam/sync/struct.Parker.html) instead.
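For comparison, here is a minimal sketch (not part of our runtime) of what using crossbeam's Parker looks like; the wakeup token is private to the Parker/Unparker pair, so unrelated code can't interfere:

use crossbeam::sync::Parker; // from the crossbeam crate

fn main() {
    let parker = Parker::new();
    // The Unparker can be cloned and sent to other threads.
    let unparker = parker.unparker().clone();

    std::thread::spawn(move || {
        unparker.unpark(); // hand the parker its wakeup token
    });

    // Blocks until unpark is called on our paired Unparker.
    parker.park();
    println!("Woken by our own parker");
}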
We won't implement Waker as a trait since passing trait objects around would significantly increase the complexity of our example, and it's not in line with the current design of Future and Waker in Rust either.

Open the executor.rs file located inside the runtime folder, and let's add all the imports we're going to need right from the start:

ch08/b-reactor-executor/src/runtime/executor.rs

use crate::future::{Future, PollState};
use std::{
    cell::{Cell, RefCell},
    collections::HashMap,
    sync::{Arc, Mutex},
    thread::{self, Thread},
};

The next thing we add is our Waker:

ch08/b-reactor-executor/src/runtime/executor.rs

#[derive(Clone)]
pub struct Waker {
    thread: Thread,
    id: usize,
    ready_queue: Arc<Mutex<Vec<usize>>>,
}

The Waker will hold three things for us:

- thread - A handle to the Thread object we mentioned earlier.
- id - A usize that identifies which task this Waker is associated with.
- ready_queue - A reference, shareable between threads, to a Vec<usize>, where each usize represents the ID of a task that's in the ready queue. We share this object with the executor so that we can push the task ID associated with the Waker onto that queue when it's ready.

The implementation of our Waker will be quite simple:

ch08/b-reactor-executor/src/runtime/executor.rs

impl Waker {
    pub fn wake(&self) {
        self.ready_queue
            .lock()
            .map(|mut q| q.push(self.id))
            .unwrap();
        self.thread.unpark();
    }
}

When Waker::wake is called, we first take a lock on the Mutex that protects the ready queue we share with the executor. We then push the id value that identifies the task this Waker is associated with onto the ready queue. After that's done, we call unpark on the executor thread and wake it up. It will now find the task associated with this Waker in the ready queue and call poll on it.

It's worth mentioning that many designs take a shared reference (for example, an Arc<...>) to the future/task itself and push that onto the queue. By doing so, they skip the level of indirection that we get here by representing the task as a usize instead of passing in a reference to it. However, I personally think this way of doing it is easier to understand and reason about, and the end result will be the same.

How does this Waker compare to the one in the standard library?

The Waker we create here will take the same role as the Waker type from the standard library. The biggest difference is that the std::task::Waker type is wrapped in a Context struct and requires us to jump through a few hoops when we create it ourselves. Don't worry, we'll do all this at the end of this book, but neither of these differences is important for understanding the role it plays, so that's why we stick to our own simplified version of asynchronous Rust for now.
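Just to give you a taste of what's coming (we'll cover this properly at the end of the book), here is a minimal sketch of creating a std::task::Waker via the standard library's Wake trait; MyWaker is a made-up placeholder:

use std::sync::Arc;
use std::task::{Context, Wake, Waker};

struct MyWaker; // hypothetical placeholder type

impl Wake for MyWaker {
    fn wake(self: Arc<Self>) {
        println!("wake called");
    }
}

fn main() {
    // An Arc-wrapped type implementing Wake converts into a std Waker...
    let waker = Waker::from(Arc::new(MyWaker));
    // ...which is then wrapped in a Context before being passed to poll.
    let cx = Context::from_waker(&waker);
    cx.waker().wake_by_ref(); // prints "wake called"
}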
The last thing we need to do is change the definition of the Future trait so that it takes &Waker as an argument.

Changing the Future definition

Since our Future definition is in the future.rs file, we start by opening that file. The first thing we need to change is to pull in the Waker so that we can use it. At the top of the file, add the following code:

ch08/b-reactor-executor/src/future.rs

use crate::runtime::Waker;

The next thing we do is change our Future trait so that it takes &Waker as an argument:

ch08/b-reactor-executor/src/future.rs

pub trait Future {
    type Output;
    fn poll(&mut self, waker: &Waker) -> PollState<Self::Output>;
}

At this point, you have a choice. We won't be using the join_all function or the JoinAll<F: Future> struct going forward. If you don't want to keep them, you can just delete everything related to join_all, and that's all you need to do in future.rs. If you want to keep them for further experimentation, you need to change the Future implementation for JoinAll so that it accepts a waker: &Waker argument, and remember to pass the Waker when polling the joined futures in match fut.poll(waker).

The remaining things to do in step 1 are some minor changes where we implement the Future trait. Let's start in http.rs. The first thing we do is adjust our dependencies a little to reflect the changes we made to our runtime module, and we add a dependency on our new Waker. Replace the dependencies section at the top of the file with this:

ch08/b-reactor-executor/src/http.rs

use crate::{
    future::PollState,
    runtime::{self, reactor, Waker},
    Future,
};
use mio::Interest;
use std::io::{ErrorKind, Read, Write};
The compiler will complain about not finding the reactor yet, but we'll get to that shortly.

Next, we have to navigate to the impl Future for HttpGetFuture block, where we need to change the poll method so that it accepts a &Waker argument:

ch08/b-reactor-executor/src/http.rs

impl Future for HttpGetFuture {
    type Output = String;
    fn poll(&mut self, waker: &Waker) -> PollState<Self::Output> {
        ...

The last file we need to change is main.rs. Since corofy doesn't know about Waker types, we need to change a few lines in the coroutines it generated for us in main.rs.

First of all, we have to add a dependency on our new Waker, so add this at the start of the file:

ch08/b-reactor-executor/src/main.rs

use runtime::Waker;

In the impl Future for Coroutine block, change the following three lines of code that I've highlighted:

ch08/b-reactor-executor/src/main.rs

fn poll(&mut self, waker: &Waker)
match f1.poll(waker)
match f2.poll(waker)

And that's all we need to do in step 1. We'll get back to fixing the errors in this file as the last step we do; for now, we just focus on everything concerning the Waker.

The next step will be to create a proper Executor.

Step 2 - Implementing a proper Executor

In this step, we'll create an executor that will:

- Hold many top-level futures and switch between them
- Enable us to spawn new top-level futures from anywhere in our asynchronous program
- Hand out Waker types so that tasks can sleep when there is nothing to do and wake up when one of the top-level futures can progress
- Enable us to run several executors by having each run on its own dedicated OS thread

Note

It's worth mentioning that our executor won't be fully multithreaded in the sense that tasks/futures can't be sent from one thread to another, and the different Executor instances will not know of each other. Therefore, executors can't steal work from each other (no work-stealing), and we can't rely on executors picking tasks from a global task queue. The reason is that the Executor design would be much more complex if we went down that route, not only because of the added logic but also because we would have to add constraints, such as requiring everything to be Send + Sync.

Some of the complexity in asynchronous Rust today can be attributed to the fact that many runtimes in Rust are multithreaded by default, which makes asynchronous Rust deviate more from "normal" Rust than it actually needs to.

It's worth mentioning that since most production runtimes in Rust are multithreaded by default, most of them also have a work-stealing executor. This is similar to the last version of our bartender example in Chapter 1, where we achieved slightly increased efficiency by letting the bartenders "steal" tasks that are in progress from each other.

However, this example should still give you an idea of how we can leverage all the cores on a machine to run asynchronous tasks, giving us both concurrency and parallelism, even though it will have limited capabilities.

Let's start by opening up executor.rs located in the runtime subfolder. This file should already contain our Waker and the dependencies we need, so let's start by adding the following lines of code just below our dependencies:

ch08/b-reactor-executor/src/runtime/executor.rs

type Task = Box<dyn Future<Output = String>>;

thread_local! {
    static CURRENT_EXEC: ExecutorCore = ExecutorCore::default();
}

The first line is a type alias; it simply lets us create an alias called Task that refers to the type Box<dyn Future<Output = String>>. This will help keep our code a little bit cleaner. The next line might be new to some readers. We define a thread-local static variable by using the thread_local! macro.
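If the macro is new to you, here is a small standalone example, separate from our executor and using a made-up COUNTER variable, of how thread-local state behaves:

use std::cell::Cell;

thread_local! {
    // Each thread that touches COUNTER gets its own independent copy.
    static COUNTER: Cell<usize> = Cell::new(0);
}

fn main() {
    COUNTER.with(|c| c.set(c.get() + 1));
    std::thread::spawn(|| {
        // This thread sees a fresh instance, so it prints 0.
        COUNTER.with(|c| println!("spawned thread: {}", c.get()));
    })
    .join()
    .unwrap();
    // The main thread still sees its own value: 1.
    COUNTER.with(|c| println!("main thread: {}", c.get()));
}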
The thread_local! macro lets us define a static variable that's unique to the thread it's first called from. This means that all threads we create will have their own instance, and it's impossible for one thread to access another thread's CURRENT_EXEC variable. We call the variable CURRENT_EXEC since it holds the Executor that's currently running on this thread.

The next lines we add to this file are the definition of ExecutorCore:

ch08/b-reactor-executor/src/runtime/executor.rs

#[derive(Default)]
struct ExecutorCore {
    tasks: RefCell<HashMap<usize, Task>>,
    ready_queue: Arc<Mutex<Vec<usize>>>,
    next_id: Cell<usize>,
}

ExecutorCore holds all the state for our Executor:

- tasks - This is a HashMap with a usize as the key and a Task (remember the alias we created previously) as data. It will hold all the top-level futures associated with the executor on this thread and allow us to give each an id property to identify them. We can't simply mutate a static variable, so we need internal mutability here. Since this will only be callable from one thread, a RefCell will do, as there is no need for synchronization.
- ready_queue - This is a simple Vec<usize> that stores the IDs of tasks that should be polled by the executor. If we refer back to Figure 8.7, you'll see how this fits into the design we outlined there. As mentioned earlier, we could store something such as an Arc<dyn Future<...>> here instead, but that adds quite a bit of complexity to our example. The only downside with the current design is that instead of getting a reference to the task directly, we have to look it up in our tasks collection, which takes time. An Arc<...> (shared reference) to this collection will be given to each Waker that this executor creates. Since the Waker can (and will) be sent to a different thread and signal that a specific task is ready by adding the task's ID to ready_queue, we need to wrap it in an Arc<Mutex<...>>.
- next_id - This is a counter that gives out the next available ID, which means that it should never hand out the same ID twice for this executor instance. We'll use it to give each top-level future a unique ID. Since the executor instance will only be accessible on the thread it was created on, a simple Cell will suffice in giving us the internal mutability we need.

ExecutorCore derives the Default trait since there is no special initial state we need here, and it keeps the code short and concise.
The next function is an important one. The spawn function allows us to register new top-level futures with our executor from anywhere in our program:

ch08/b-reactor-executor/src/runtime/executor.rs

pub fn spawn<F>(future: F)
where
    F: Future<Output = String> + 'static,
{
    CURRENT_EXEC.with(|e| {
        let id = e.next_id.get();
        e.tasks.borrow_mut().insert(id, Box::new(future));
        e.ready_queue.lock().map(|mut q| q.push(id)).unwrap();
        e.next_id.set(id + 1);
    });
}

The spawn function does a few things:

- It gets the next available ID.
- It assigns the ID to the future it receives and stores it in a HashMap.
- It adds the ID that represents this task to ready_queue so that it's polled at least once (remember that Future traits in Rust don't do anything unless they're polled at least once).
- It increases the ID counter by one.

The unfamiliar syntax of accessing CURRENT_EXEC by calling with and passing in a closure is just a consequence of how thread-local statics are implemented in Rust. You'll also notice that we must use a few special methods because we use RefCell and Cell for internal mutability for tasks and next_id, but there is really nothing inherently complex about this except it being a bit unfamiliar.

A quick note about static lifetimes

When a 'static lifetime is used as a trait bound as we do here, it doesn't actually mean that the lifetime of the Future trait we pass in must be static (meaning it would have to live until the end of the program). It means that it must be able to last until the end of the program or, put another way, that the lifetime can't be constrained in any way. Most often, when you encounter something that requires a 'static bound, it simply means that you'll have to give ownership over the thing you pass in. If you pass in any references, they need to have a 'static lifetime. It's less difficult to satisfy this constraint than you might expect.
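A small illustration of what the bound accepts and rejects; takes_static is a made-up helper just for this demonstration:

// Accepts anything that could, in principle, live forever.
fn takes_static<T: 'static>(_val: T) {}

fn main() {
    let owned = String::from("owned data");
    takes_static(owned); // OK: we give away ownership, no borrowed data

    takes_static(42u32); // OK: contains no references at all

    let literal: &'static str = "literals live for the whole program";
    takes_static(literal); // OK: the reference itself is 'static

    let local = String::from("short-lived");
    // takes_static(&local); // error: `local` does not live long enough
    drop(local);
}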
The final part of step 2 will be to define and implement the Executor struct itself.

The Executor struct is very simple, and there is only one line of code to add:

ch08/b-reactor-executor/src/runtime/executor.rs

pub struct Executor;

Since all the state we need for our example is held in ExecutorCore, which is a static thread-local variable, our Executor struct doesn't need any state. This also means that we don't strictly need a struct at all, but to keep the API somewhat familiar, we do it anyway.

Most of the executor implementation is a handful of simple helper methods that culminate in a block_on function, which is where the interesting parts really happen. Since these helper methods are short and easy to understand, I'll present them all here and just briefly go over what they do:

Note

We open the impl Executor block here but will not close it until we've finished implementing the block_on function.

ch08/b-reactor-executor/src/runtime/executor.rs

impl Executor {
    pub fn new() -> Self {
        Self {}
    }

    fn pop_ready(&self) -> Option<usize> {
        CURRENT_EXEC.with(|q| q.ready_queue.lock().map(|mut q| q.pop()).unwrap())
    }

    fn get_future(&self, id: usize) -> Option<Task> {
        CURRENT_EXEC.with(|q| q.tasks.borrow_mut().remove(&id))
    }

    fn get_waker(&self, id: usize) -> Waker {
        Waker {
            id,
            thread: thread::current(),
            ready_queue: CURRENT_EXEC.with(|q| q.ready_queue.clone()),
        }
    }
    fn insert_task(&self, id: usize, task: Task) {
        CURRENT_EXEC.with(|q| q.tasks.borrow_mut().insert(id, task));
    }

    fn task_count(&self) -> usize {
        CURRENT_EXEC.with(|q| q.tasks.borrow().len())
    }

So, we have six methods here:

- new - Creates a new Executor instance. For simplicity, we have no initialization here, and everything is done lazily by design in the thread_local! macro.
- pop_ready - This function takes a lock on ready_queue and pops off an ID that's ready from the back of the Vec. Calling pop here means that we also remove the item from the collection. As a side note, since a Waker pushes its ID onto the back of ready_queue and we pop off from the back as well, we essentially get a Last In, First Out (LIFO) queue. Using something such as VecDeque from the standard library would easily allow us to choose the order in which we remove items from the queue if we wished to change that behavior.
- get_future - This function takes the ID of a top-level future as an argument, removes the future from the tasks collection, and returns it (if the task is found). This means that if the task returns NotReady (signaling that we're not done with it), we need to remember to add it back to the collection again.
- get_waker - This function creates a new Waker instance.
- insert_task - This function takes an id property and a Task property and inserts them into our tasks collection.
- task_count - This function simply returns a count of how many tasks we have in the queue.
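To make the VecDeque remark concrete, a hypothetical FIFO variant of pop_ready could look roughly like this; ready_queue would then be an Arc<Mutex<VecDeque<usize>>>, and Waker::wake would use push_back:

use std::collections::VecDeque;
use std::sync::{Arc, Mutex};

// Hypothetical FIFO ready queue: wakers push_back, the executor pops the front.
fn pop_ready_fifo(queue: &Arc<Mutex<VecDeque<usize>>>) -> Option<usize> {
    // pop_front returns the task that was woken first (FIFO),
    // where Vec::pop returned the most recently woken one (LIFO).
    queue.lock().map(|mut q| q.pop_front()).unwrap()
}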
The final part of the Executor implementation is the block_on function. This is also where we close the impl Executor block:

ch08/b-reactor-executor/src/runtime/executor.rs

    pub fn block_on<F>(&mut self, future: F)
    where
        F: Future<Output = String> + 'static,
    {
        spawn(future);
        loop {
            while let Some(id) = self.pop_ready() {
                let mut future = match self.get_future(id) {
                    Some(f) => f,
                    // guard against false wakeups
                    None => continue,
                };
                let waker = self.get_waker(id);
                match future.poll(&waker) {
                    PollState::NotReady => self.insert_task(id, future),
                    PollState::Ready(_) => continue,
                }
            }
            let task_count = self.task_count();
            let name = thread::current().name().unwrap_or_default().to_string();
            if task_count > 0 {
                println!("{name}: {task_count} pending tasks. Sleep until notified.");
                thread::park();
            } else {
                println!("{name}: All tasks are finished");
                break;
            }
        }
    }
}

block_on will be the entry point to our Executor. Often, you will pass in one top-level future first, and when the top-level future progresses, it will spawn new top-level futures onto our executor. Each new future can, of course, spawn new futures onto the Executor too, and that's basically how an asynchronous program works. In many ways, you can view this first top-level future in the same way you view the main function in a normal Rust program.

spawn is similar to thread::spawn, with the exception that the tasks stay on the same OS thread in this example. This means the tasks won't be able to run in parallel, which in turn allows us to avoid any need for synchronization between tasks to avoid data races.

Let's go through the function step by step:

1. The first thing we do is spawn the future we received onto ourselves. There are many ways this could be implemented, but this is the easiest way to do it.
2. Then, we have a loop that will run as long as our asynchronous program is running.
3. Every time we loop, we create an inner while let Some(...) loop that runs as long as there are tasks in ready_queue.
4. If there is a task in ready_queue, we take ownership of the Future object by removing it from the collection. We guard against false wakeups by just continuing if there is no future there anymore (meaning that we're done with it but still got a wakeup). This will, for example, happen on Windows since we get a READABLE event when the connection closes, and even though we could filter those events out, mio doesn't guarantee that false wakeups won't happen, so we have to handle that possibility anyway.
5. Next, we create a new Waker instance that we can pass into Future::poll(). Remember that this Waker instance holds the id property that identifies this specific Future trait and a handle to the thread we're currently running on.
6. The next step is to call Future::poll.
7. If we get NotReady in return, we insert the task back into our tasks collection. I want to emphasize that when a Future trait returns NotReady, we know it will arrange for Waker::wake to be called at a later point in time. It's not the executor's responsibility to track the readiness of this future.
8. If the Future trait returns Ready, we simply continue to the next item in the ready queue. Since we took ownership over the Future trait, it will be dropped before we enter the next iteration of the while let loop.
9. Now that we've polled all the tasks in our ready queue, the first thing we do is get a task count to see how many tasks we have left.
10. We also get the name of the current thread for logging purposes (it has nothing to do with how our executor works).
11. If the task count is larger than 0, we print a message to the terminal and call thread::park(). Parking the thread yields control to the OS scheduler, and our Executor does nothing until it's woken up again.
12. If the task count is 0, we're done with our asynchronous program and exit the main loop.

That's pretty much all there is to it. By this point, we've covered all our goals for step 2, so we can continue to the final step and implement a Reactor for our runtime that will wake up our executor when something happens.

Step 3 - Implementing a proper Reactor

The final part of our example is the Reactor. Our Reactor will:

- Efficiently wait for and handle events that our runtime is interested in
- Store a collection of Waker types and make sure to wake the correct Waker when it gets a notification on a source it's tracking
Runtimes, Wakers, and the Reactor-Executor Pattern 200 Provide the necessary mechanisms for leaf futures such as Http Get Future, to register and deregister interests in events Provide a way for leaf futures to store the last received Waker When we're done with this step, we should have everything we need for our runtime, so let's get to it. Start by opening the reactor. rs file. The first thing we do is add the dependencies we need: ch08/b-reactor-executor/src/runtime/reactor. rs use crate::runtime::Waker; use mio::{net::Tcp Stream, Events, Interest, Poll, Registry, Token}; use std::{ collections::Hash Map, sync::{ atomic::{Atomic Usize, Ordering}, Arc, Mutex, Once Lock, }, thread, }; After we've added our dependencies, we create a type alias called Wakers that aliases the type for our wakers collection: ch08/b-reactor-executor/src/runtime/reactor. rs type Wakers = Arc<Mutex<Hash Map<usize, Waker>>>; The next line will declare a static variable called REACTOR : ch08/b-reactor-executor/src/runtime/reactor. rs static REACTOR: Once Lock<Reactor> = Once Lock::new(); This variable will hold a Once Lock<Reactor>. In contrast to our CURRENT_EXEC static variable, this will be possible to access from different threads. Once Lock allows us to define a static variable that we can write to once so that we can initialize it when we start our Reactor. By doing so, we also make sure that there can only be a single instance of this specific reactor running in our program. The variable will be private to this module, so we create a public function allowing other parts of our program to access it:
The variable will be private to this module, so we create a public function allowing other parts of our program to access it:

ch08/b-reactor-executor/src/runtime/reactor.rs

pub fn reactor() -> &'static Reactor {
    REACTOR.get().expect("Called outside a runtime context")
}

The next thing we do is define our Reactor struct:

ch08/b-reactor-executor/src/runtime/reactor.rs

pub struct Reactor {
    wakers: Wakers,
    registry: Registry,
    next_id: AtomicUsize,
}

This is all the state our Reactor struct needs to hold:

- wakers - A HashMap of Waker objects, each identified by an integer
- registry - Holds a Registry instance so that we can interact with the event queue in mio
- next_id - Stores the next available ID so that we can track which event occurred and which Waker should be woken

The implementation of Reactor is actually quite simple. It's only four short methods for interacting with the Reactor instance, so I'll present them all here and give a brief explanation next:

ch08/b-reactor-executor/src/runtime/reactor.rs

impl Reactor {
    pub fn register(&self, stream: &mut TcpStream, interest: Interest, id: usize) {
        self.registry.register(stream, Token(id), interest).unwrap();
    }

    pub fn set_waker(&self, waker: &Waker, id: usize) {
        let _ = self
            .wakers
            .lock()
            .map(|mut w| w.insert(id, waker.clone()).is_none())
            .unwrap();
    }

    pub fn deregister(&self, stream: &mut TcpStream, id: usize) {
        self.wakers.lock().map(|mut w| w.remove(&id)).unwrap();
        self.registry.deregister(stream).unwrap();
    }

    pub fn next_id(&self) -> usize {
        self.next_id.fetch_add(1, Ordering::Relaxed)
    }
}

Let's briefly explain what these four methods do:

- register - This method is a thin wrapper around Registry::register, which we know from Chapter 4. The one thing to make a note of here is that we pass in an id property so that we can identify which event has occurred when we receive a notification later on.
- set_waker - This method adds a Waker to our HashMap using the provided id property as a key to identify it. If there is a Waker there already, we replace it and drop the old one. An important point to remember is that we should always store the most recent Waker, so this function can be called multiple times, even though there is already a Waker associated with the TcpStream.
- deregister - This function does two things. First, it removes the Waker from our wakers collection. Then, it deregisters the TcpStream from our Poll instance. I want to remind you at this point that while we only work with TcpStream in our examples, this could, in theory, be done with anything that implements mio's Source trait, so the same thought process is valid in a much broader context than what we deal with here.
- next_id - This simply gets the current next_id value and increments the counter atomically. We don't care about any happens-before relationships here; we only care about not handing out the same value twice, so Ordering::Relaxed will suffice. Memory ordering in atomic operations is a complex topic that we won't be able to dive into in this book, but if you want to know more about the different memory orderings in Rust and what they mean, the official documentation is the right place to start: https://doc.rust-lang.org/stable/std/sync/atomic/enum.Ordering.html.
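To illustrate why Relaxed is enough here, consider this small standalone example: the fetch_add itself is always atomic, so every caller gets a unique value even without stronger ordering guarantees:

use std::sync::atomic::{AtomicUsize, Ordering};
use std::thread;

static NEXT_ID: AtomicUsize = AtomicUsize::new(1);

fn main() {
    let handles: Vec<_> = (0..4)
        .map(|_| thread::spawn(|| NEXT_ID.fetch_add(1, Ordering::Relaxed)))
        .collect();

    // The read-modify-write is atomic, so the four IDs are guaranteed to
    // be unique; Relaxed only gives up ordering relative to other memory.
    for handle in handles {
        println!("got id: {}", handle.join().unwrap());
    }
}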
Now that our Reactor is set up, we only have two short functions left. The first one is event_loop, which holds the logic for our event loop that waits for and reacts to new events:

ch08/b-reactor-executor/src/runtime/reactor.rs

fn event_loop(mut poll: Poll, wakers: Wakers) {
    let mut events = Events::with_capacity(100);
    loop {
        poll.poll(&mut events, None).unwrap();
        for e in events.iter() {
            let Token(id) = e.token();
            let wakers = wakers.lock().unwrap();
            if let Some(waker) = wakers.get(&id) {
                waker.wake();
            }
        }
    }
}

This function takes a Poll instance and a Wakers collection as arguments. Let's go through it step by step:

- The first thing we do is create an events collection. This should be familiar since we did the exact same thing in Chapter 4.
- The next thing we do is create a loop that, in our case, will continue to loop for eternity. This makes our example short and simple, but it has the downside that we have no way of shutting our event loop down once it's started. Fixing that is not especially difficult, but since it won't be necessary for our example, we don't cover it here.
- Inside the loop, we call Poll::poll with a timeout of None, which means it will never time out and will block until it receives an event notification.
- When the call returns, we loop through every event we receive.
- If we receive an event, it means that something we registered interest in happened, so we get the id we passed in when we first registered an interest in events on this TcpStream.
- Lastly, we try to get the associated Waker and call Waker::wake on it. We guard ourselves against the fact that the Waker may have been removed from our collection already, in which case we do nothing.

It's worth noting that we could filter events here if we wanted to. mio provides some methods on the Event object to check several things about the event it reported. For our use in this example, we don't need to filter events.

Finally, the last function is the second public function in this module and the one that initializes and starts the runtime:

ch08/b-reactor-executor/src/runtime/reactor.rs

pub fn start() {
    use thread::spawn;
    let wakers = Arc::new(Mutex::new(HashMap::new()));
    let poll = Poll::new().unwrap();
    let registry = poll.registry().try_clone().unwrap();
    let next_id = AtomicUsize::new(1);
    let reactor = Reactor {
        wakers: wakers.clone(),
        registry,
        next_id,
    };

    REACTOR.set(reactor).ok().expect("Reactor already running");
    spawn(move || event_loop(poll, wakers));
}

The start function should be fairly easy to understand. The first thing we do is create our Wakers collection and our Poll instance. From the Poll instance, we get an owned version of Registry. We initialize next_id to 1 (for debugging purposes, I wanted to initialize it to a different start value than our Executor) and create our Reactor object.

Then, we set the static variable we named REACTOR by giving it our Reactor instance.

The last thing is probably the most important one to pay attention to. We spawn a new OS thread and start our event_loop function on it. This also means that we pass our Poll instance on to the event loop thread for good.

Now, the best practice would be to store the JoinHandle returned from spawn so that we can join the thread later on, but our thread has no way to shut down the event loop anyway, so joining it later makes little sense, and we simply discard the handle.

I don't know if you agree with me, but the logic here is not that complex when we break it down into smaller pieces. Since we know how epoll and mio work already, the rest is pretty easy to understand.

Now, we're not done yet. We still have some small changes to make to our HttpGetFuture leaf future since it doesn't register with the reactor at the moment. Let's fix that.

Start by opening the http.rs file. Since we already added the correct imports when we opened the file to adapt everything to the new Future interface, there are only a few places we need to change so that this leaf future integrates nicely with our reactor.

The first thing we do is give HttpGetFuture an identity. It's the source of events we want to track with our Reactor, so we want it to have the same ID until we're done with it:

ch08/b-reactor-executor/src/http.rs

struct HttpGetFuture {
    stream: Option<mio::net::TcpStream>,
    buffer: Vec<u8>,
    path: String,
    id: usize,
}

We also need to retrieve a new ID from the reactor when the future is created:

ch08/b-reactor-executor/src/http.rs

impl HttpGetFuture {
    fn new(path: String) -> Self {
        let id = reactor().next_id();
        Self {
            stream: None,
            buffer: vec![],
            path,
            id,
        }
    }

Next, we have to locate the poll implementation for HttpGetFuture. The first thing we need to do is make sure that we register interest with our Poll instance and register the Waker we receive with the Reactor the first time the future gets polled. Since we don't register directly with Registry anymore, we remove that line of code and add these new lines instead:

ch08/b-reactor-executor/src/http.rs

if self.stream.is_none() {
    println!("FIRST POLL-START OPERATION");
    self.write_request();
    let stream = self.stream.as_mut().unwrap();
    runtime::reactor().register(stream, Interest::READABLE, self.id);
    runtime::reactor().set_waker(waker, self.id);
}

Lastly, we need to make some minor changes to how we handle the different conditions when reading from the TcpStream:

ch08/b-reactor-executor/src/http.rs

match self.stream.as_mut().unwrap().read(&mut buff) {
    Ok(0) => {
        let s = String::from_utf8_lossy(&self.buffer);
        runtime::reactor().deregister(self.stream.as_mut().unwrap(), self.id);
        break PollState::Ready(s.to_string());
    }
    Ok(n) => {
        self.buffer.extend(&buff[0..n]);
        continue;
    }
    Err(e) if e.kind() == ErrorKind::WouldBlock => {
        runtime::reactor().set_waker(waker, self.id);
        break PollState::NotReady;
    }
    Err(e) => panic!("{e:?}"),
}

The first change is to deregister the stream from our Poll instance when we're done.

The second change is a little more subtle. If you read the documentation for Future::poll in Rust (https://doc.rust-lang.org/stable/std/future/trait.Future.html#tymethod.poll) carefully, you'll see that it's expected that the Waker from the most recent call should be scheduled to wake up. That means that every time we get a WouldBlock error, we need to make sure we store the most recent Waker. The reason is that the future could have moved to a different executor in between calls, and we need to wake up the correct one (it won't be possible to move futures like those in our example, but let's play by the same rules).

And that's it! Congratulations! You've now created a fully working runtime based on the reactor-executor pattern. Well done!

Now, it's time to test it and run a few experiments with it.

Let's go back to main.rs and change the main function so that we get our program running correctly with our new runtime. First of all, let's remove the dependency on the Runtime struct and make sure our imports look like this:

ch08/b-reactor-executor/src/main.rs

mod future;
mod http;
mod runtime;
use future::{Future, PollState};
use runtime::Waker;

Next, we need to make sure that we initialize our runtime and pass our future in to executor.block_on. Our main function should look like this:

ch08/b-reactor-executor/src/main.rs

fn main() {
    let mut executor = runtime::init();
    executor.block_on(async_main());
}

And finally, let's try it out by running cargo run. You should get the following output:

Program starting
FIRST POLL-START OPERATION
main: 1 pending tasks. Sleep until notified.
HTTP/1.1 200 OK
content-length: 15
connection: close
content-type: text/plain; charset=utf-8
date: Thu, xx xxx xxxx 15:38:08 GMT

HelloAsyncAwait
FIRST POLL-START OPERATION
main: 1 pending tasks. Sleep until notified.
HTTP/1.1 200 OK
content-length: 15
connection: close
content-type: text/plain; charset=utf-8
date: Thu, xx xxx xxxx 15:38:08 GMT

HelloAsyncAwait
main: All tasks are finished

Great - it's working just as expected! However, we're not really using any of the new capabilities of our runtime yet, so before we leave this chapter, let's have some fun and see what it can do.
Experimenting with our new runtime

If you remember from Chapter 7, we implemented a join_all function to get our futures running concurrently. In libraries such as Tokio, you'll find a join_all function too, as well as the slightly more versatile FuturesUnordered API that allows you to join a set of predefined futures and run them concurrently. These are convenient methods to have, but they do force you to know which futures you want to run concurrently in advance.

If the futures you run using join_all want to spawn new futures that run concurrently with their "parent" future, there is no way to do that using only these methods. However, our newly created spawn functionality does exactly this. Let's put it to the test!

An example using concurrency

Note

The exact same version of this program can be found in the ch08/c-runtime-executor folder.

Let's try a new program that looks like this:

fn main() {
    let mut executor = runtime::init();
    executor.block_on(async_main());
}

coroutine fn request(i: usize) {
    let path = format!("/{}/HelloWorld{i}", i * 1000);
    let txt = Http::get(&path).wait;
    println!("{txt}");
}

coroutine fn async_main() {
    println!("Program starting");
    for i in 0..5 {
        let future = request(i);
        runtime::spawn(future);
    }
}

This is pretty much the same example we used to show how join_all works in Chapter 7, only this time, we spawn them as top-level futures instead.
To run this example, follow these steps:

1. Replace everything below the imports in main.rs with the preceding code.
2. Run corofy ./src/main.rs.
3. Copy everything from main_corofied.rs to main.rs and delete main_corofied.rs.
4. Fix the fact that corofy doesn't know we changed our futures to take waker: &Waker as an argument. The easiest way is to simply run cargo check and let the compiler guide you to the places we need to change.

Now, you can run the example and see that the tasks run concurrently, just as they did using join_all in Chapter 7. If you measured the time it takes to run the tasks, you'd find that it all takes around 4 seconds, which makes sense if you consider that you just spawned 5 futures and ran them concurrently. The longest wait time for a single future was 4 seconds.

Now, let's finish off this chapter with another interesting example.

Running multiple futures concurrently and in parallel

This time, we spawn multiple threads and give each thread its own Executor so that we can run the previous example simultaneously in parallel, using the same Reactor for all Executor instances. We'll also make a small adjustment to the printout so that we don't get overwhelmed with data. Our new program will look like this:

mod future;
mod http;
mod runtime;

use crate::http::Http;
use future::{Future, PollState};
use runtime::{Executor, Waker};
use std::thread::Builder;

fn main() {
    let mut executor = runtime::init();
    let mut handles = vec![];
    for i in 1..12 {
        let name = format!("exec-{i}");
        let h = Builder::new().name(name).spawn(move || {
            let mut executor = Executor::new();
            executor.block_on(async_main());
        }).unwrap();
        handles.push(h);
    }
    executor.block_on(async_main());
    handles.into_iter().for_each(|h| h.join().unwrap());
}

coroutine fn request(i: usize) {
    let path = format!("/{}/HelloWorld{i}", i * 1000);
    let txt = Http::get(&path).wait;
    let txt = txt.lines().last().unwrap_or_default();
    println!("{txt}");
}

coroutine fn async_main() {
    println!("Program starting");
    for i in 0..5 {
        let future = request(i);
        runtime::spawn(future);
    }
}

The machine I'm currently running this on has 12 cores, so when I create 11 new threads to run the same asynchronous tasks, I'll use all the cores on my machine. As you'll notice, we also give each thread a unique name that we'll use when logging so that it's easier to track what happens behind the scenes.

Note

While I use 12 cores, you should use the number of cores on your machine. If we increase this number too much, our OS will not be able to give us more cores to run our program in parallel on, and will instead start pausing/resuming the threads we create, which adds no value to us since we handle the concurrency aspect ourselves in an async runtime.

You'll have to follow the same steps as we did in the last example:

1. Replace the code that's currently in main.rs with the preceding code.
2. Run corofy ./src/main.rs.
3. Copy everything from main_corofied.rs to main.rs and delete main_corofied.rs.
4. Fix the fact that corofy doesn't know we changed our futures to take waker: &Waker as an argument. The easiest way is to simply run cargo check and let the compiler guide you to the places we need to change.
Now, if you run the program, you'll see that it still only takes around 4 seconds to run, but this time we made 60 GET requests instead of 5. This time, we ran our futures both concurrently and in parallel.

At this point, you can continue experimenting with shorter delays or more requests and see how many concurrent tasks you can have before the system breaks down. Pretty quickly, printouts to stdout will become a bottleneck, but you can disable those. Create a blocking version using OS threads, as sketched below, and see how many threads you can run concurrently before the system breaks down compared to this version.
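Such a blocking version isn't shown in the book; the following is just one possible sketch using std's blocking TcpStream, assuming delayserver listens on 127.0.0.1:8080 (adjust the address and port to your setup):

use std::io::{Read, Write};
use std::net::TcpStream;
use std::thread;

fn main() {
    // One OS thread per request instead of one task per request.
    let handles: Vec<_> = (0..5)
        .map(|i| {
            thread::spawn(move || {
                let mut stream = TcpStream::connect("127.0.0.1:8080").unwrap();
                let request = format!(
                    "GET /{}/HelloWorld{i} HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n",
                    i * 1000
                );
                stream.write_all(request.as_bytes()).unwrap();
                let mut response = String::new();
                stream.read_to_string(&mut response).unwrap();
                println!("{}", response.lines().last().unwrap_or_default());
            })
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }
}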
Only imagination sets the limit, but do take the time to have some fun with what you've created before we continue with the next chapter. The only thing to be careful about is testing the concurrency limit of your system by sending these kinds of requests to a random server you don't control yourself, since you can potentially overwhelm it and cause problems for others.

Summary

So, what a ride! As I said in the introduction to this chapter, this is one of the biggest chapters in this book, but even though you might not realize it, you've already got a better grasp of how asynchronous Rust works than most people do. Great work!

In this chapter, you learned a lot about runtimes and why Rust designed the Future trait and the Waker the way it did. You also learned about reactors and executors, Waker types, Future traits, and different ways of achieving concurrency through the join_all function and spawning new top-level futures on the executor.

By now, you also have an idea of how we can achieve both concurrency and parallelism by combining our own runtime with OS threads.

Now, we've created our own async universe consisting of coroutine/wait, our own Future trait, our own Waker definition, and our own runtime. I've made sure that we don't stray away from the core ideas behind asynchronous programming in Rust, so everything is directly applicable to async/await, Future traits, Waker types, and runtimes in day-to-day programming.

By now, we're in the final stretch of this book. The last chapter will finally convert our example to use the real Future trait, Waker, async/await, and so on instead of our own versions of them. In that chapter, we'll also reserve some space to talk about the state of asynchronous Rust today, including some of the most popular runtimes. But before we get that far, there is one more topic I want to cover: pinning.

One of the topics that seems hardest to understand and most different from all other languages is the concept of pinning. When writing asynchronous Rust, you will at some point have to deal with the fact that Future traits in Rust must be pinned before they're polled.

So, the next chapter will explain pinning in Rust in a practical way so that you understand why we need it, what it does, and how to do it.

However, you absolutely deserve a break after this chapter, so take some fresh air, sleep, clear your mind, and grab some coffee before we enter the last parts of this book.
9
Coroutines, Self-Referential Structs, and Pinning

In this chapter, we'll start by improving our coroutines by adding the ability to store variables across state changes. We'll see how this leads to our coroutines needing to take references to themselves, and the issues that arise as a result of that.

The reason for dedicating a whole chapter to this topic is that it's an integral part of getting async/await to work in Rust, and also a topic that is somewhat difficult to get a good understanding of. The reason for this is that the whole concept of pinning is foreign to many developers, and just like the Rust ownership system, it takes some time to build a good, working mental model of it.

Fortunately, the concept of pinning is not that difficult to understand, but how it's implemented in the language and how it interacts with Rust's type system is abstract and hard to grasp.

While we won't cover absolutely everything about pinning in this chapter, we'll try to get a good and sound understanding of it. The major goal here is to feel confident with the topic and understand why we need it, what it does, and how to use it.

As mentioned previously, this chapter is not only about pinning in Rust, so the first thing we'll do is make some important improvements where we left off by improving the final example in Chapter 8. Then, we'll explain what self-referential structs are and how they're connected to futures before we explain how pinning can solve our problems.

This chapter will cover the following main topics:

- Improving our example 1 - variables
- Improving our example 2 - references
- Improving our example 3 - this is... not... good...
- Discovering self-referential structs
Pinning in Rust
Improving our example 4 - pinning to the rescue

Technical requirements
The examples in this chapter build on the code from the previous chapter, so the requirements are the same. The examples are all cross-platform and work on all platforms that Rust (https://doc.rust-lang.org/stable/rustc/platform-support.html) and mio (https://github.com/tokio-rs/mio#platforms) support.

The only thing you need is Rust installed and this book's GitHub repository downloaded locally. All the code in this chapter can be found in the ch09 folder.

To follow the examples step by step, you'll also need corofy installed on your machine. If you didn't install it in Chapter 7, install it now by going into the ch07/corofy folder in the repository and running the following:

cargo install --force --path .

We'll also use delayserver in this example, so you need to open a separate terminal, enter the delayserver folder at the root of the repository, and write cargo run so that it's ready and available for the examples going forward. Remember to change the port number in the code if you have to change the port delayserver listens on.

Improving our example 1 - variables
So, let's recap what we have at this point by continuing where we left off in the previous chapter. We have the following:
A Future trait
A coroutine implementation using coroutine/wait syntax and a preprocessor
A reactor based on mio::Poll
An executor that allows us to spawn as many top-level tasks as we want and schedules the ones that are ready to run
An HTTP client that only makes HTTP GET requests to our local delayserver instance

It's not that bad - we might argue that our HTTP client is a little limited, but that's not the focus of this book, so we can live with that. Our coroutine implementation, however, is severely limited. Let's take a look at how we can make our coroutines slightly more useful.

The biggest downside of our current implementation is that nothing - and I mean nothing - can live across wait points. It makes sense to tackle this problem first.
Let's start by setting up our example. We'll use the "library" code from the d-multiple-threads example in Chapter 8 (our last version of the example), but we'll change the main.rs file by adding a shorter and simpler example. Let's set up the base example that we'll iterate on and improve in this chapter.

Setting up the base example
Note
You can find this example in this book's GitHub repository under ch09/a-coroutines-variables.

Perform the following steps:
1. Create a folder called a-coroutines-variables.
2. Enter the folder and run cargo init.
3. Delete the default main.rs file and copy everything from the ch08/d-multiple-threads/src folder into the a-coroutines-variables/src folder.
4. Open Cargo.toml and add the dependency on mio to the dependencies section:

mio = { version = "0.8", features = ["net", "os-poll"] }

You should now have a folder structure that looks like this:

src
|-- runtime
|   |-- executor.rs
|   |-- reactor.rs
|-- future.rs
|-- http.rs
|-- main.rs
|-- runtime.rs

We'll use corofy one last time to generate our boilerplate state machine for us. Copy the following into main.rs:

ch09/a-coroutines-variables/src/main.rs

mod future;
mod http;
mod runtime;
use crate::http::Http;
use future::{Future, PollState};
use runtime::Waker;

fn main() {
    let mut executor = runtime::init();
    executor.block_on(async_main());
}

coroutine fn async_main() {
    println!("Program starting");
    let txt = Http::get("/600/HelloAsyncAwait").wait;
    println!("{txt}");
    let txt = Http::get("/400/HelloAsyncAwait").wait;
    println!("{txt}");
}

This time, let's take a shortcut and write our corofied file directly back to main.rs, since we've compared the files side by side enough times at this point. Assuming you're in the base folder, a-coroutines-variables, write the following:

corofy ./src/main.rs ./src/main.rs

The last step is to fix the fact that corofy doesn't know about Waker. You can let the compiler guide you to where you need to make changes by writing cargo check, but to help you along the way, here are the three minor changes to make (the line numbers are the ones reported when you've written the code exactly as we have so far):

64: fn poll(&mut self, waker: &Waker)
82: match f1.poll(waker)
102: match f2.poll(waker)

Now, check that everything is working as expected by writing cargo run. You should see the following output (abbreviated to save a little space):

Program starting
FIRST POLL - START OPERATION
main: 1 pending tasks. Sleep until notified.
HTTP/1.1 200 OK
[==== ABBREVIATED ====]
HelloAsyncAwait
main: All tasks are finished
Note
Remember that we need delayserver running in a terminal window so that we get a response to our HTTP GET requests. See the Technical requirements section for more information.

Now that we've got the boilerplate out of the way, it's time to start making the improvements we talked about.

Improving our base example
We want to improve our state machine so that it allows us to hold variables across wait points. To do that, we need to store them somewhere and restore the variables that are needed when we enter each state in our state machine.

Tip
Pretend that these rewrites are done by corofy (or the compiler). Even though corofy can't do these rewrites, it's possible to automate this process as well.

Our coroutine/wait program looks like this:

coroutine fn async_main() {
    println!("Program starting");
    let txt = Http::get("/600/HelloAsyncAwait").wait;
    println!("{txt}");
    let txt = Http::get("/400/HelloAsyncAwait").wait;
    println!("{txt}");
}

We want to change it so that it looks like this:

coroutine fn async_main() {
    let mut counter = 0;
    println!("Program starting");
    let txt = http::Http::get("/600/HelloAsyncAwait").wait;
    println!("{txt}");
    counter += 1;
    let txt = http::Http::get("/400/HelloAsyncAwait").wait;
    println!("{txt}");
    counter += 1;
    println!("Received {} responses.", counter);
}
In this version, we simply create a counter variable at the top of our async_main function and increase the counter for each response we receive from the server. At the end, we print out how many responses we received.

Note
For brevity, I won't present the entire code base going forward; instead, I will only present the relevant additions and changes. Remember that you can always refer to the same example in this book's GitHub repository.

The way we implement this is to add a new field called stack to our Coroutine0 struct:

ch09/a-coroutines-variables/src/main.rs

struct Coroutine0 {
    stack: Stack0,
    state: State0,
}

The stack field holds a Stack0 struct that we also need to define:

ch09/a-coroutines-variables/src/main.rs

#[derive(Default)]
struct Stack0 {
    counter: Option<usize>,
}

This struct will only hold one field since we only have one variable. The field is of the Option<usize> type. We also derive the Default trait for this struct so that we can initialize it easily.
Note
Futures created by async/await in Rust store this data in a slightly more efficient manner. In our example, we store every variable in a separate struct, since I think it's easier to reason about, but it also means that the more variables we need to store, the more space our coroutine will need. The space will grow linearly with the number of distinct variables that need to be stored/restored between state changes. This could be a lot of data. For example, if we have 100 state changes that each need one distinct i64-sized variable to be stored for the next state, that would require a struct that takes up 100 * 8 bytes = 800 bytes in memory.

Rust optimizes this by implementing coroutines as enums, where each state only holds the data it needs to restore in the next state. This way, the size of a coroutine is not dependent on the total number of variables; it's only dependent on the size of the largest state that needs to be saved/restored. In the preceding example, the size would be reduced to 8 bytes, since the largest space any single state change needs is enough to hold one i64-sized variable. The same space is reused over and over.

The fact that this design allows for this optimization is significant, and it's an advantage that stackless coroutines have over stackful coroutines when it comes to memory efficiency. The sketch below illustrates the idea.
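To make that concrete, here is a minimal sketch of my own (not code from the book's repository) of the enum-based layout described in the note. Each variant carries only what the next state needs, so the whole coroutine is no larger than its biggest variant plus the discriminant:

enum TinyCoroutine {
    Start,                  // nothing to carry over yet
    Wait1 { counter: i64 }, // only `counter` survives this wait point
    Wait2 { counter: i64 },
    Resolved,               // nothing left to store
}

fn main() {
    // Prints 16 on a typical 64-bit target: 8 bytes for the i64 in the
    // largest variant plus space for the discriminant - not the sum of
    // all variables across all states.
    println!("{}", std::mem::size_of::<TinyCoroutine>());
}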
The next thing we need to change is the new method on Coroutine0:

ch09/a-coroutines-variables/src/main.rs

impl Coroutine0 {
    fn new() -> Self {
        Self {
            state: State0::Start,
            stack: Stack0::default(),
        }
    }
}

The default value for stack is not relevant to us since we'll overwrite it anyway.

The next few steps are the ones of most interest to us. In the Future implementation for Coroutine0, we'll pretend that corofy added the following code to initialize, store, and restore the stack variables for us. Let's take a look at what happens on the first call to poll now:

ch09/a-coroutines-variables/src/main.rs

State0::Start => {
    // initialize stack (hoist variables)
    self.stack.counter = Some(0);
    // ---- Code you actually wrote ----
    println!("Program starting");
    // ---------------------------------
    let fut1 = Box::new(http::Http::get("/600/HelloAsyncAwait"));
    self.state = State0::Wait1(fut1);
    // save stack
}

Okay, so there are some important changes here. Let's go through them:

The first thing we do when we're in the Start state is add a segment at the top where we initialize our stack. One of the things we do is hoist all variable declarations for the relevant code section (in this case, before the first wait point) to the top of the function. In our example, we also initialize the variables to their initial value, which in this case is 0.

We also added a comment stating that we should save the stack, but since all that happens before the first wait point is the initialization of counter, there is nothing to store here.

Let's take a look at what happens after the first wait point:

ch09/a-coroutines-variables/src/main.rs

State0::Wait1(ref mut f1) => {
    match f1.poll(waker) {
        PollState::Ready(txt) => {
            // Restore stack
            let mut counter = self.stack.counter.take().unwrap();
            // ---- Code you actually wrote ----
            println!("{txt}");
            counter += 1;
            // ---------------------------------
            let fut2 = Box::new(http::Http::get("/400/HelloAsyncAwait"));
            self.state = State0::Wait2(fut2);
            // save stack
            self.stack.counter = Some(counter);
        }
        PollState::NotReady => break PollState::NotReady,
    }
}
Hmm, this is interesting. Let's go through the changes we need to make. The first thing we do is restore the stack by taking ownership of the counter (take() replaces the value currently stored in self.stack.counter with None) and writing it to a variable with the same name that we used in the code segment (counter). Taking ownership and placing the value back later is not an issue in this case, and it mimics the code we wrote in our coroutine/wait example.

The next change is simply the segment that takes all the code after the first wait point and pastes it in. In this case, the only change is that the counter variable is increased by 1.

Lastly, we save the stack state back so that we hold onto its updated state between the wait points.

Note
In Chapter 5, we saw how we needed to store/restore the register state in our fibers. Since Chapter 5 showed an example of a stackful coroutine implementation, we didn't have to care about stack state at all, as all the needed state was stored in the stacks we created. Since our coroutines are stackless, we don't store the entire call stack for each coroutine, but we do need to store/restore the parts of the stack that will be used across wait points. Stackless coroutines still need to save some information from the stack, as we've done here.

When we enter the State0::Wait2 state, we start the same way:

ch09/a-coroutines-variables/src/main.rs

State0::Wait2(ref mut f2) => {
    match f2.poll(waker) {
        PollState::Ready(txt) => {
            // Restore stack
            let mut counter = self.stack.counter.take().unwrap();
            // ---- Code you actually wrote ----
            println!("{txt}");
            counter += 1;
            println!("Received {} responses.", counter);
            // ---------------------------------
            self.state = State0::Resolved;
            // Save stack (all variables set to None already)
            break PollState::Ready(String::new());
        }
        PollState::NotReady => break PollState::NotReady,
    }
}

Since there are no more wait points in our program, the rest of the code goes into this segment, and since we're done with counter at this point, we can simply drop it by letting it go out of scope. If our variable held onto any resources, they would be released here as well.

With that, we've given our coroutines the power of saving variables across wait points. Let's try to run it by writing cargo run. You should see the following output (I've removed the parts of the output that remain unchanged):

...
HelloAsyncAwait
Received 2 responses.
main: All tasks are finished

Okay, so our program works and does what's expected. Great! Now, let's take a look at an example that needs to store references across wait points, since that's an important aspect of having our coroutine/wait functions behave like "normal" functions.

Improving our example 2 - references
Let's set everything up for our next version of this example:
Create a new folder called b-coroutines-references and copy everything from a-coroutines-variables over to it
You can change the name of the project so that it corresponds with the folder by changing the name attribute in the package section in Cargo.toml, but it's not something you need to do for the example to work

Note
You can find this example in this book's GitHub repository in the ch09/b-coroutines-references folder.

This time, we'll learn how to store references to variables in our coroutines by using the following coroutine/wait example program:

use std::fmt::Write;

coroutine fn async_main() {
    let mut buffer = String::from("\nBUFFER:\n----\n");
    let writer = &mut buffer;
    println!("Program starting");
    let txt = http::Http::get("/600/HelloAsyncAwait").wait;
    writeln!(writer, "{txt}").unwrap();
    let txt = http::Http::get("/400/HelloAsyncAwait").wait;
    writeln!(writer, "{txt}").unwrap();
    println!("{}", buffer);
}

So, in this example, we create a buffer variable of the String type that we initialize with some text, and we take a &mut reference to it that we store in a writer variable. Every time we receive a response, we write the response to the buffer through the &mut reference we hold in writer, before we print the buffer to the terminal at the end of the program.

Let's take a look at what we need to do to get this working.

The first thing we do is pull in the fmt::Write trait so that we can write to our buffer using the writeln! macro. Add this to the top of main.rs:

ch09/b-coroutines-references/src/main.rs

use std::fmt::Write;

Next, we need to change our Stack0 struct so that it represents what we must store across wait points in our updated example:

ch09/b-coroutines-references/src/main.rs

#[derive(Default)]
struct Stack0 {
    buffer: Option<String>,
    writer: Option<*mut String>,
}

An important thing to note here is that writer can't be Option<&mut String>, since we know it will be referencing the buffer field in the same struct. A struct where a field holds a reference into the struct itself is called a self-referential struct, and there is no way to represent that in Rust, since the lifetime of the self-reference is impossible to express. The solution is to cast the &mut reference to a pointer instead and ensure that we manage the lifetimes correctly ourselves.
The only other thing we need to change is the Future::poll implementation:

ch09/b-coroutines-references/src/main.rs

State0::Start => {
    // initialize stack (hoist variables)
    self.stack.buffer = Some(String::from("\nBUFFER:\n----\n"));
    self.stack.writer = Some(self.stack.buffer.as_mut().unwrap());
    // ---- Code you actually wrote ----
    println!("Program starting");
    // ---------------------------------
    let fut1 = Box::new(http::Http::get("/600/HelloAsyncAwait"));
    self.state = State0::Wait1(fut1);
    // save stack
}

Okay, so this looks a bit odd. The first line we change is pretty straightforward: we initialize our buffer variable to a new String type, just like we did at the top of our coroutine/wait program. The next line, however, looks a bit dangerous: we cast the &mut reference to our buffer to a *mut pointer.

Important
Yes, I know we could have avoided this by taking a fresh reference to buffer everywhere we need it instead of storing it in its own variable, but that's only because our example is very simple. Imagine that we use a library that needs to borrow data that's local to the async function, and we somehow have to manage the lifetimes manually like we do here, but in a much more complex scenario.

The self.stack.buffer.as_mut().unwrap() expression returns a &mut reference to the buffer field. Since self.stack.writer is of the Option<*mut String> type, the reference will be coerced to a pointer (meaning that Rust does this cast implicitly by inferring it from the context).

Note
We take *mut String here since we deliberately don't want a string slice (&str), which is often what we get (and want) when using a reference to a String type in Rust.
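To see that coercion in isolation, here is a standalone sketch of my own (not code from the book's repository). Assigning a &mut String where a *mut String is expected casts the reference to a raw pointer without any unsafe code; only dereferencing it later requires unsafe:

fn main() {
    let mut buffer = String::from("hello");
    // Safe: the &mut reference is implicitly coerced to *mut String.
    let writer: Option<*mut String> = Some(&mut buffer);
    // Unsafe: dereferencing the raw pointer is on us to get right.
    unsafe { (*writer.unwrap()).push_str(" world") };
    println!("{buffer}");
}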
Let's take a look at what happens after the first wait point:

ch09/b-coroutines-references/src/main.rs

State0::Wait1(ref mut f1) => {
    match f1.poll(waker) {
        PollState::Ready(txt) => {
            // Restore stack
            let writer = unsafe { &mut *self.stack.writer.take().unwrap() };
            // ---- Code you actually wrote ----
            writeln!(writer, "{txt}").unwrap();
            // ---------------------------------
            let fut2 = Box::new(http::Http::get("/400/HelloAsyncAwait"));
            self.state = State0::Wait2(fut2);
            // save stack
            self.stack.writer = Some(writer);
        }
        PollState::NotReady => break PollState::NotReady,
    }
}

The first change we make is to how we restore our stack. We need to restore our writer variable so that it holds a &mut String that points to our buffer. To do this, we have to write some unsafe code that dereferences our pointer and lets us take a &mut reference to our buffer.

Note
Casting a reference to a pointer is safe. The unsafe part is dereferencing the pointer.

Next, we add the line of code that writes the response. We can keep this the same as how we wrote it in our coroutine/wait function.

Lastly, we save the stack state back since we need both variables to live across the wait point.

Note
We don't have to take ownership of the pointer stored in the writer field to use it, since we could simply copy it, but to be somewhat consistent, we take ownership of it, just like we did in the first example. It also makes sense because, if there is no need to store the pointer for the next wait point, we can simply let it go out of scope by not storing it back.
The last part is when we've reached Wait2 and our future returns PollState::Ready:

State0::Wait2(ref mut f2) => {
    match f2.poll(waker) {
        PollState::Ready(txt) => {
            // Restore stack
            let buffer = self.stack.buffer.as_ref().take().unwrap();
            let writer = unsafe { &mut *self.stack.writer.take().unwrap() };
            // ---- Code you actually wrote ----
            writeln!(writer, "{txt}").unwrap();
            println!("{}", buffer);
            // ---------------------------------
            self.state = State0::Resolved;
            // Save stack / free resources
            let _ = self.stack.buffer.take();
            break PollState::Ready(String::new());
        }
        PollState::NotReady => break PollState::NotReady,
    }
}

In this segment, we restore both variables, since we write the last response through our writer variable and then print everything that's stored in our buffer to the terminal.

I want to point out that the println!("{}", buffer); line takes a reference in the original coroutine/wait example, even though it might look like we pass in an owned String. Therefore, it makes sense that we restore buffer as a &String type, and not the owned version. Transferring ownership would also invalidate the pointer in our writer variable.

The last thing we do is drop the data we don't need anymore. Our self.stack.writer field is already set to None, since we took ownership of its contents when we restored the stack at the start, but we need to take ownership of the String that self.stack.buffer holds as well, so that it gets dropped at the end of this scope too. If we didn't do that, we would hold onto the memory allocated for our String until the entire coroutine is dropped (which could be much later).

Now, we've made all our changes. If the rewrites we did previously were implemented in corofy, our coroutine/wait implementation could, in theory, support much more complex use cases.
Let's take a look at what happens when we run our program by writing cargo run:

Program starting
FIRST POLL - START OPERATION
main: 1 pending tasks. Sleep until notified.
FIRST POLL - START OPERATION
main: 1 pending tasks. Sleep until notified.

BUFFER:
----
HTTP/1.1 200 OK
content-length: 15
connection: close
content-type: text/plain; charset=utf-8
date: Thu, 30 Nov 2023 22:48:11 GMT

HelloAsyncAwait
HTTP/1.1 200 OK
content-length: 15
connection: close
content-type: text/plain; charset=utf-8
date: Thu, 30 Nov 2023 22:48:11 GMT

HelloAsyncAwait
main: All tasks are finished

Phew, great. All that dangerous unsafe turned out to work just fine, didn't it? Good job. Let's make one small improvement before we finish.

Improving our example 3 - this is... not... good...
Pretend you haven't read this section title and enjoy the fact that our previous example compiled and showed the correct result. I think our coroutine implementation is so good now that we can look at some optimizations instead. There is one optimization in our executor in particular that I want to make immediately.

Before we get ahead of ourselves, let's set everything up:
Create a new folder called c-coroutines-problem and copy everything from b-coroutines-references over to it
You can change the name of the project so that it corresponds with the folder by changing the name attribute in the package section in Cargo.toml, but it's not something you need to do for the example to work
Tip
This example is located in this book's GitHub repository in the ch09/c-coroutines-problem folder.

With that, everything has been set up. Back to the optimization.

You see, new insights into the workload our runtime will handle in real life indicate that most futures will return Ready on the first poll. So, in theory, we can just poll the future we receive in block_on once, and it will resolve immediately most of the time. Let's navigate to src/runtime/executor.rs and take a look at how we can take advantage of this by adding a few lines of code.

If you navigate to our Executor::block_on function, you'll see that the first thing we do is spawn the future before we poll it. Spawning the future means that we allocate space for it on the heap and store the pointer to its location in a HashMap variable. Since the future will most likely return Ready on the first poll, this is unnecessary work that could be avoided. Let's add this little optimization at the start of the block_on function:

pub fn block_on<F>(&mut self, future: F)
where
    F: Future<Output = String> + 'static,
{
    // ===== OPTIMIZATION, ASSUME READY
    let waker = self.get_waker(usize::MAX);
    let mut future = future;
    match future.poll(&waker) {
        PollState::NotReady => (),
        PollState::Ready(_) => return,
    }
    // ===== END
    spawn(future);
    loop {
        ...

Now, we simply poll the future immediately, and if it resolves on the first poll, we return since we're all done. This way, we only spawn the future if it's something we need to wait on.

Yes, this assumes we never reach usize::MAX for our IDs, but let's pretend this is only a proof of concept. Our Waker will be discarded and replaced by a new one if the future is spawned and polled again anyway, so that shouldn't be a problem.
Let's try to run our program and see what we get:

Program starting
FIRST POLL - START OPERATION
main: 1 pending tasks. Sleep until notified.
FIRST POLL - START OPERATION
main: 1 pending tasks. Sleep until notified.
/400/HelloAsyn
free(): double free detected in tcache 2
Aborted

Wait, what?!? That doesn't sound good! Okay, that's probably a kernel bug in Linux, so let's try it on Windows instead:

...
error: process didn't exit successfully: `target\release\c-coroutines-problem.exe` (exit code: 0xc0000374, STATUS_HEAP_CORRUPTION)

That sounds even worse!! What happened here?

Let's take a closer look at exactly what happened with our async system when we made our small optimization.

Discovering self-referential structs
What happened is that we created a self-referential struct, initialized it so that it took a pointer to itself, and then moved it. Let's take a closer look:

1. First, we received a future object as an argument to block_on. This is not a problem, since the future isn't self-referential yet, so we can move it around wherever we want without issues (this is also why moving futures before they're polled is perfectly fine using proper async/await).
2. Then, we polled the future once. The optimization we made introduced one essential change: the future was located on the stack (inside the stack frame of our block_on function) when we polled it the first time.
3. When we polled the future the first time, we initialized the variables to their initial state. Our writer variable took a pointer to our buffer variable (stored as a part of our coroutine) and made it self-referential at that point.
4. The first time we polled the future, it returned NotReady.
5. Since it returned NotReady, we spawned the future, which moved it into the tasks collection of the HashMap<usize, Box<dyn Future<Output = String>>> type in our Executor. The future is now placed in a Box, which moves it to the heap.
6. The next time we poll the future, we restore the stack by dereferencing the pointer we hold in our writer variable. However, there's a big problem: the pointer now points to the old location on the stack where the future was located at the first poll.
7. That can't end well, and it doesn't in our case.

You've now seen firsthand the problem with self-referential structs, how this applies to futures, and why we need something that prevents this from happening.

A self-referential struct is a struct that takes a reference to self and stores it in a field. Now, the term reference here is a little imprecise, since there is no way to take a reference to self in Rust and store that reference in self. To do this in safe Rust, you have to cast the reference to a pointer (remember that references are just pointers with a special meaning in the programming language).

Note
When we create visualizations in this chapter, we'll disregard padding, even though we know structs will likely have some padding between fields, as we discussed in Chapter 4.

When this value is moved to another location in memory, the pointer is not updated and still points to the "old" location. A move from one location on the stack to another looks something like this:

Figure 9.1 - Moving a self-referential struct

In the preceding figure, we can see the memory addresses to the left, with a representation of the stack next to them. Since the pointer was not updated when the value was moved, it now points to the old location, which can cause serious problems.
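The same effect can be reproduced in a few lines. The sketch below is my own (not code from the book's repository) and uses a raw pointer deliberately, since safe Rust won't let us express this:

struct SelfRef {
    value: usize,
    ptr_to_value: *const usize, // meant to point at `value` above
}

fn main() {
    let mut a = SelfRef { value: 42, ptr_to_value: std::ptr::null() };
    a.ptr_to_value = &a.value; // now self-referential

    let b = a; // move: `b.value` lives at a new address...
    // ...but the stored pointer was not updated and still holds the old
    // address, so the two lines below typically print different
    // addresses (most reliably in a debug build):
    println!("stored pointer:     {:p}", b.ptr_to_value);
    println!("address of b.value: {:p}", &b.value);
}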
Note
It can be very hard to detect these issues, and creating simple examples where a move like this causes serious problems is surprisingly difficult. The reason is that even though we move everything, the old values are not zeroed out or overwritten immediately. Often, they're still there, so dereferencing the stale pointer would probably produce the correct value. The problem only arises when you change the value of x in the new location and expect y to point to it. Dereferencing y still produces a valid value in this case, but it's the wrong value. Optimized builds often optimize away needless moves, which can make these bugs even harder to detect, since most of the program will seem to work just fine even though it contains a serious bug.

What is a move?
A move in Rust is one of those concepts that's unfamiliar to many programmers coming from C#, JavaScript, and similar garbage-collected languages, and different from what C and C++ programmers are used to. The definition of a move in Rust is closely related to its ownership system: moving means transferring ownership.

In Rust, a move is the default way of passing values around, and it happens every time you change ownership of an object. If the object you move only consists of copy types (types that implement the Copy trait), this is as simple as copying the data over to a new location on the stack. For non-copy types, a move will copy all the copy types it contains, just like in the first example, but it will also copy the pointers to resources such as heap allocations. The moved-from object is left inaccessible to us (for example, if you try to use the moved-from object, compilation will fail and let you know that the object has moved), so there is only one owner of each allocation at any point in time. In contrast to cloning, a move does not recreate any resources or make clones of them.

One more important thing is that the compiler makes sure that drop is never called on the moved-from object, so the only thing that can free the resources is the new object that took ownership of everything.

Figure 9.2 provides a simplified visual overview of the difference between move, clone, and copy (we've excluded any internal padding of the struct in this visualization). Here, we assume that we have a struct that holds two fields: a copy type, a, which is an i64, and a non-copy type, b, which is a Vec<u8>:
Figure 9.2 - Move, clone, and copy

A move will, in many ways, be like a deep copy of everything in our struct that's located on the stack. This is problematic when you have a pointer that points to self, as we have with self-referential structs, since self will start at a new memory address after the move, but the pointer to self won't be adjusted to reflect that change. The short sketch below makes the difference between moving and cloning concrete.

Most of the time when programming Rust, you probably won't think much about moves, since they're a part of the language you never explicitly invoke, but it's important to know what a move is and what it does.
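Here is a small sketch of my own (not code from the book's repository) using the struct from Figure 9.2. Moving copies only the stack part of the struct - the i64 and the Vec's pointer/length/capacity triple - while cloning also duplicates the heap allocation:

struct Data {
    a: i64,
    b: Vec<u8>,
}

fn main() {
    let x = Data { a: 1, b: vec![1, 2, 3] };
    let heap_ptr = x.b.as_ptr();

    let y = x; // move: ownership transfers, `x` is inaccessible from here on
    // The heap allocation was NOT recreated - `y.b` points to the very
    // same buffer that `x.b` did:
    assert_eq!(heap_ptr, y.b.as_ptr());

    // Clone: a new heap buffer with the same contents.
    let z = Data { a: y.a, b: y.b.clone() };
    assert_ne!(y.b.as_ptr(), z.b.as_ptr());
    assert_eq!(y.b, z.b);
    println!("move shares the allocation; clone duplicates it");
}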
Now that we've got a good understanding of what the problem is, let's take a closer look at how Rust solves it by using its type system to prevent us from moving structs that rely on a stable place in memory to function correctly.

Pinning in Rust
The following diagram shows a slightly more complex self-referential struct so that we have something visual to help us understand:

Figure 9.3 - Moving a self-referential struct with three fields

At a very high level, pinning makes it possible to rely on data having a stable memory address by disallowing any operation that might move it:

Figure 9.4 - Moving a pinned struct

The concept of pinning is pretty simple. The complex part is how it's implemented in the language and how it's used.
Pinning in theory
Pinning is a part of Rust's standard library and consists of two parts: the type, Pin, and the marker trait, Unpin. Pinning is only a language construct. There is no special kind of location or memory that values are moved to so they get pinned, and there is no syscall to ask the operating system to ensure a value stays in the same place in memory. Pinning is purely a part of the type system, designed to prevent us from being able to move a value.

Pin does not remove the need for unsafe - it just gives the user of unsafe a guarantee that the value has a stable location in memory, so long as the user who pinned the value only uses safe Rust. This allows us to write self-referential types that are safe. It makes sure that all operations that can lead to problems must use unsafe. Back to our coroutine example: if we were to move the struct, we'd have to write unsafe Rust. That is how Rust upholds its safety guarantee. If you somehow know that the future you created never takes a self-reference, you could choose to move it using unsafe, but the blame now falls on you if you get it wrong.

Before we dive deeper into pinning, we need to define several terms that we'll need going forward.

Definitions
Here are the definitions we must understand:
Pin<T> is the type this is all about. You'll find it in Rust's standard library under the std::pin module. Pin wraps types that implement the Deref trait, which in practical terms means that it wraps references and smart pointers.
Unpin is a marker trait. If a type implements Unpin, pinning has no effect on that type. You read that right - no effect. The type will still be wrapped in Pin, but you can simply take it out again. The impressive thing is that almost everything implements Unpin by default, and if you want to manually mark a type as !Unpin, you have to add a marker type called PhantomPinned to your type. Having a type, T, implement !Unpin is the only way for something such as Pin<&mut T> to have any effect. Pinning a type that's !Unpin guarantees that the value remains at the same location in memory until it gets dropped, so long as you stay in safe Rust.
Pin projections are helper methods on a type that's pinned. The syntax often looks a little unusual, since they're only valid on pinned instances of self. For example, they often look like fn foo(self: Pin<&mut Self>).
Structural pinning is connected to pin projections in the sense that, if you have Pin<&mut T> where T has one field, a, that can be moved freely and one that can't be moved, b, you can do the following:
- Write a pin projection for a with the fn a(self: Pin<&mut Self>) -> &A signature. In this case, we say that pinning is not structural.
- Write a projection for b that looks like fn b(self: Pin<&mut Self>) -> Pin<&mut B>, in which case we say that pinning is structural for b, since it's pinned when the struct, T, is pinned.

With the most important definitions out of the way, let's look at the two ways we can pin a value. But first, the short sketch below shows what the Unpin rule means in practice.
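This is a tiny demonstration of my own (not code from the book's repository) of what it means that pinning has no effect on Unpin types: for a type like i32, the completely safe Pin::new and Pin::get_mut APIs let us pin the value and take it right back out again:

use std::pin::Pin;

fn main() {
    let mut x = 5i32; // i32 is Unpin
    // Safe to construct: Pin::new is only available when the target is Unpin.
    let pinned: Pin<&mut i32> = Pin::new(&mut x);
    // ...and safe to escape: get_mut hands the &mut back out again.
    let plain: &mut i32 = pinned.get_mut();
    *plain += 1;
    assert_eq!(x, 6);
}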
Pinning to the heap
Note
The small code snippets we'll present here can be found in this book's GitHub repository in the ch09/d-pin folder. The different examples are implemented as different methods that you comment/uncomment in the main function.

Let's write a small example to illustrate the different ways of pinning a value:

ch09/d-pin/src/main.rs

use std::{marker::PhantomPinned, pin::Pin};

#[derive(Default)]
struct MaybeSelfRef {
    a: usize,
    b: Option<*mut usize>,
    _pin: PhantomPinned,
}

So, we want to be able to create an instance using MaybeSelfRef::default() that we can move around as we wish, but that we can at some point initialize to a state where it references itself; moving it after that would cause problems.

This is very much like futures that are not self-referential until they're polled, as we saw in our previous example. Let's write the impl block for MaybeSelfRef and take a look at the code:

ch09/d-pin/src/main.rs

impl MaybeSelfRef {
    fn init(self: Pin<&mut Self>) {
        unsafe {
            let Self { a, b, .. } = self.get_unchecked_mut();
            *b = Some(a);
        }
    }

    fn b(self: Pin<&mut Self>) -> Option<&mut usize> {
        unsafe { self.get_unchecked_mut().b.map(|b| &mut *b) }
    }
}

As you can see, MaybeSelfRef will only be self-referential after we call init on it. We also define one more method, b, that casts the pointer stored in b to Option<&mut usize>, which is a mutable reference to a.

One thing to note is that both our functions require unsafe. Without Pin, the only method requiring unsafe would be b, since we dereference a pointer there. Acquiring a mutable reference to a pinned value always requires unsafe, since there is nothing preventing us from moving the pinned value at that point.

Pinning to the heap is usually done by pinning a Box. There is even a convenient method on Box that gives us Pin<Box<...>>. Let's look at a short example:

ch09/d-pin/src/main.rs

fn main() {
    let mut x = Box::pin(MaybeSelfRef::default());
    x.as_mut().init();
    println!("{}", x.as_ref().a);
    *x.as_mut().b().unwrap() = 2;
    println!("{}", x.as_ref().a);
}
Here, we pin MaybeSelfRef to the heap and initialize it. We print out the value of a, then mutate the data through the self-reference in b, setting its value to 2. If we look at the output, we'll see that everything looks as expected:

    Finished dev [unoptimized + debuginfo] target(s) in 0.56s
     Running `target\debug\x-pin-experiments.exe`
0
2

The pinned value can never move, and as users of MaybeSelfRef, we didn't have to write any unsafe code. Rust can guarantee that we never (in safe Rust) get a mutable reference to MaybeSelfRef, since Box took ownership of it.

Heap pinning being safe is not so surprising, since, in contrast to the stack, a heap allocation is stable throughout the program, regardless of where it was created.

Important
This is the preferred way to pin values in Rust. Stack pinning is for those cases where you don't have a heap to work with or can't accept the cost of the extra allocation.

Let's take a look at stack pinning while we're at it.

Pinning to the stack
Pinning to the stack can be somewhat difficult. In Chapter 5, we saw how the stack works, and we know that it grows and shrinks as values are pushed and popped. So, if we're going to pin to the stack, we have to pin the value somewhere "high" on the stack. This means that if we pin a value to the stack inside a function call, we can't return from that function and expect the value to still be pinned there. That would be impossible.

Pinning to the stack is hard, since we pin by taking &mut T, and we have to guarantee that we won't move T until it's dropped. If we're not careful, this is easy to get wrong. Rust can't help us here, so it's up to us to uphold that guarantee. This is why stack pinning is unsafe. Let's look at the same example using stack pinning:

ch09/d-pin/src/main.rs

fn stack_pinning_manual() {
    let mut x = MaybeSelfRef::default();
    let mut x = unsafe { Pin::new_unchecked(&mut x) };
    x.as_mut().init();
    println!("{}", x.as_ref().a);
    *x.as_mut().b().unwrap() = 2;
    println!("{}", x.as_ref().a);
}

The noticeable difference here is that it's unsafe to pin to the stack, so now we need unsafe both as users of MaybeSelfRef and as implementors. If we run the example with cargo run, the output is the same as in our first example:

    Finished dev [unoptimized + debuginfo] target(s) in 0.58s
     Running `target\debug\x-pin-experiments.exe`
0
2

The reason stack pinning requires unsafe is that it's rather easy to accidentally break the guarantees that Pin is supposed to provide. Let's take a look at this example:

ch09/d-pin/src/main.rs

use std::mem::swap;

fn stack_pinning_manual_problem() {
    let mut x = MaybeSelfRef::default();
    let mut y = MaybeSelfRef::default();
    {
        let mut x = unsafe { Pin::new_unchecked(&mut x) };
        x.as_mut().init();
        *x.as_mut().b().unwrap() = 2;
    }
    swap(&mut x, &mut y);
    println!("
x: {{
  +----->a: {:p},
  |      b: {:?},
  |  }}
  |
  |  y: {{
  |      a: {:p},
  +-----|b: {:?},
     }}",
        &x.a, x.b, &y.a, y.b,
    );
}
In this example, we create two instances of MaybeSelfRef called x and y. Then, we create a scope where we pin x and set the value of x.a to 2 by dereferencing the self-reference in b, as we did previously. Now, when we exit the scope, x isn't pinned anymore, which means we can take a mutable reference to it without needing unsafe. Since this is safe Rust and we should be able to do what we want, we swap x and y.

The output prints the pointer address of the a field of both structs and the value of the pointer stored in b. When we look at the output, we should see the problem immediately:

    Finished dev [unoptimized + debuginfo] target(s) in 0.58s
     Running `target\debug\x-pin-experiments.exe`

x: {
  +----->a: 0xe45fcff558,
  |      b: None,
  |  }
  |
  |  y: {
  |      a: 0xe45fcff570,
  +-----|b: Some(0xe45fcff558),
     }

Although the pointer values will differ from run to run, it's pretty evident that y doesn't hold a pointer to self anymore. Right now, it points somewhere into x. This is very bad and will cause exactly the kind of memory safety issues Rust is supposed to prevent.

Note
For this reason, the standard library has a pin! macro that helps us with safe stack pinning. The macro uses unsafe under the hood but makes it impossible for us to reach the pinned value again through the original binding.

Now that we've seen all the pitfalls of stack pinning, my clear recommendation is to avoid it unless you need to use it. If you have to use it, then use the pin! macro so that you avoid the issues we've described here.
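For illustration, here is a minimal sketch of my own (it is not the repository's stack_pinning_macro function) showing the pin! macro with a !Unpin type. The macro pins through a temporary that we can never reach again, so the swap trick from the example above simply won't compile:

use std::marker::PhantomPinned;
use std::pin::pin;

struct NotUnpin {
    data: usize,
    _pin: PhantomPinned, // opts this type out of Unpin
}

fn main() {
    // `pinned` is Pin<&mut NotUnpin>; there is no other binding left
    // that could be used to move the value in safe Rust.
    let pinned = pin!(NotUnpin { data: 42, _pin: PhantomPinned });
    println!("{}", pinned.data);
}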
Tip
In this book's GitHub repository, you'll find a function called stack_pinning_macro() in the ch09/d-pin/src/main.rs file. This function shows the preceding example, but using Rust's pin! macro.

Pin projections and structural pinning
Before we leave the topic of pinning, we'll quickly explain what pin projections and structural pinning are. Both sound complex, but they're very simple in practice. The following diagram shows how the terms are connected:

Figure 9.5 - Pin projection and structural pinning

Structural pinning means that if a struct is pinned, so is the field. We expose this through pin projections, as we'll see in the following code example. If we continue with our example and create a struct called Foo that holds both MaybeSelfRef (field a) and a String type (field b), we could write two projections that return a pinned version of a and a regular mutable reference to b:

ch09/d-pin/src/main.rs

#[derive(Default)]
struct Foo {
    a: MaybeSelfRef,
    b: String,
}
impl Foo {
    fn a(self: Pin<&mut Self>) -> Pin<&mut MaybeSelfRef> {
        unsafe { self.map_unchecked_mut(|s| &mut s.a) }
    }

    fn b(self: Pin<&mut Self>) -> &mut String {
        unsafe { &mut self.get_unchecked_mut().b }
    }
}

Note that these methods will only be callable when Foo is pinned. You won't be able to call either of these methods on a regular instance of Foo.

Pin projections do have a few subtleties that you should be aware of, but they're explained in quite some detail in the official documentation (https://doc.rust-lang.org/stable/std/pin/index.html), so I'll refer you there for more information about the precautions you must take when writing projections.

Note
Since pin projections can be a bit error-prone to write yourself, there is a popular crate for making pin projections called pin_project (https://docs.rs/pin-project/latest/pin_project/). If you ever end up having to write pin projections, it's worth checking out. A short sketch of what that looks like follows below.
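This is a hedged sketch of my own showing what the same kind of projections might look like with the pin_project crate (assuming pin-project = "1" is added to Cargo.toml; the stand-in type NotUnpin plays the role of MaybeSelfRef). The crate generates the unsafe projection code for us:

use pin_project::pin_project;
use std::marker::PhantomPinned;
use std::pin::Pin;

struct NotUnpin {
    _pin: PhantomPinned,
}

#[pin_project]
struct Foo {
    #[pin]
    a: NotUnpin, // structurally pinned: projected as Pin<&mut NotUnpin>
    b: String,   // not structurally pinned: projected as &mut String
}

impl Foo {
    fn demo(self: Pin<&mut Self>) {
        // The generated project() method hands out the projections
        // without any hand-written unsafe code.
        let this = self.project();
        let _a: Pin<&mut NotUnpin> = this.a;
        let _b: &mut String = this.b;
    }
}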
With that, we've pretty much covered all the advanced topics in async Rust. However, before we go on to our last chapter, let's see how pinning will prevent us from making the big mistake we made in the last iteration of our coroutine example.

Improving our example 4 - pinning to the rescue
Fortunately, the changes we need to make are small, but before we continue, let's create a new folder and copy everything we had in our previous example over to it:
Copy the entire c-coroutines-problem folder and name the new copy e-coroutines-pin
Open Cargo.toml and change the name of the package to e-coroutines-pin

Tip
You'll find the example code we'll go through here in this book's GitHub repository under the ch09/e-coroutines-pin folder.

Now that we have a new folder set up, let's start making the necessary changes. The logical place to start is our Future definition in future.rs.

future.rs
The first thing we'll do is pull in Pin from the standard library at the very top:

ch09/e-coroutines-pin/src/future.rs

use std::pin::Pin;

The only other change we need to make is to the definition of poll in our Future trait:

fn poll(self: Pin<&mut Self>, waker: &Waker) -> PollState<Self::Output>;

That's pretty much it. However, the implications of this change are noticeable pretty much everywhere poll is called, so we need to fix those call sites as well. Let's start with http.rs.

http.rs
The first thing we need to do is pull in Pin from the standard library. The start of the file should look like this:

ch09/e-coroutines-pin/src/http.rs

use crate::{future::PollState, runtime::{self, reactor, Waker}, Future};
use mio::Interest;
use std::{io::{ErrorKind, Read, Write}, pin::Pin};

The only other place we need to make changes is in the Future implementation for HttpGetFuture, so let's locate that. We'll start by changing the arguments of poll:

ch09/e-coroutines-pin/src/http.rs

fn poll(mut self: Pin<&mut Self>, waker: &Waker) -> PollState<Self::Output>
Since self is now Pin<&mut Self>, there are several small changes we need to make to keep the borrow checker happy. Let's start from the top:

ch09/e-coroutines-pin/src/http.rs

let id = self.id;
if self.stream.is_none() {
    println!("FIRST POLL - START OPERATION");
    self.write_request();
    let stream = (&mut self).stream.as_mut().unwrap();
    runtime::reactor().register(stream, Interest::READABLE, id);
    runtime::reactor().set_waker(waker, self.id);
}

The reason for assigning id to a variable at the top is that the borrow checker gives us some minor trouble when we try to pass in both &mut self and &self as arguments to the register/deregister functions, so we just assign id to a variable at the top and everyone is happy.

There are only two more lines to change, and those are where we create a String type from our internal buffer and deregister interest with the reactor:

ch09/e-coroutines-pin/src/http.rs

let s = String::from_utf8_lossy(&self.buffer).to_string();
runtime::reactor().deregister(self.stream.as_mut().unwrap(), id);
break PollState::Ready(s);

Important
Notice that this future is Unpin. There is nothing that makes it unsafe to move HttpGetFuture around, and this is indeed the case for most futures like this one. Only the ones created by async/await are self-referential by design. That means there is no need for any unsafe here.

Next, let's move on to main.rs, since there are some important changes we need to make there.
main.rs
Let's start from the top and make sure we have the correct imports:

ch09/e-coroutines-pin/src/main.rs

mod future;
mod http;
mod runtime;
use future::{Future, PollState};
use runtime::Waker;
use std::{fmt::Write, marker::PhantomPinned, pin::Pin};

This time, we need both the PhantomPinned marker and Pin.

The next thing we need to change is our State0 enum. The futures we hold between states are now pinned:

ch09/e-coroutines-pin/src/main.rs

Wait1(Pin<Box<dyn Future<Output = String>>>),
Wait2(Pin<Box<dyn Future<Output = String>>>),

Next up is an important change. We need to make our coroutines !Unpin so that they can't be moved once they have been pinned. We do this by adding the PhantomPinned marker type as a field on our Coroutine0 struct:

ch09/e-coroutines-pin/src/main.rs

struct Coroutine0 {
    stack: Stack0,
    state: State0,
    _pin: PhantomPinned,
}

We also need to add the PhantomPinned marker to our new function:

ch09/e-coroutines-pin/src/main.rs

impl Coroutine0 {
    fn new() -> Self {
        Self {
            state: State0::Start,
            stack: Stack0::default(),
            _pin: PhantomPinned,
        }
    }
}

The last thing we need to change is the poll method. Let's start with the function signature:

ch09/e-coroutines-pin/src/main.rs

fn poll(self: Pin<&mut Self>, waker: &Waker) -> PollState<Self::Output>

The easiest way I found to change our code was to simply define a new variable called this at the very top of the function, which replaces self everywhere in the function body. I won't go through every line since the change is so trivial; after the first line, it's a simple search and replace of self with this:

ch09/e-coroutines-pin/src/main.rs

let this = unsafe { self.get_unchecked_mut() };
loop {
    match this.state {
        State0::Start => {
            // initialize stack (hoist declarations - no stack yet)
            this.stack.buffer = Some(String::from("\nBUFFER:\n----\n"));
            this.stack.writer = Some(this.stack.buffer.as_mut().unwrap());
            // ---- Code you actually wrote ----
            println!("Program starting");
            ...

The important line here is let this = unsafe { self.get_unchecked_mut() };. We have to use unsafe here, since the pinned value is !Unpin because of the marker we added. Getting a mutable reference to the pinned value is unsafe, since there is no way for Rust to guarantee that we won't move it. The nice thing about this is that if we encounter any such problems later, we know we can search for the places where we used unsafe, and the problem must be there.
The next thing we need to change is to have the futures we store in our wait states pinned. We do this by calling Box::pin instead of Box::new:

ch09/e-coroutines-pin/src/main.rs

let fut1 = Box::pin(http::Http::get("/600/HelloAsyncAwait"));
let fut2 = Box::pin(http::Http::get("/400/HelloAsyncAwait"));

The last places in main.rs where we need to make changes are the locations where we poll our child futures, since we now have to go through the Pin type to get a mutable reference:

ch09/e-coroutines-pin/src/main.rs

match f1.as_mut().poll(waker)
match f2.as_mut().poll(waker)

Note that we don't need any unsafe here, even though the boxed futures are !Unpin: calling as_mut on a Pin<Box<...>> safely hands us the Pin<&mut ...> that poll expects.

The last place we need to change a few lines of code is in executor.rs, so let's head over there as our last stop.

executor.rs
The first thing we must do is make sure our dependencies are correct. The only change we're making here is adding Pin from the standard library:

ch09/e-coroutines-pin/src/runtime/executor.rs

...
    thread::{self, Thread},
    pin::Pin,
};

The next line we'll change is our Task type alias, so that it now refers to Pin<Box<...>>:

type Task = Pin<Box<dyn Future<Output = String>>>;

The last line we'll change for now is in our spawn function. We have to pin the futures to the heap:

e.tasks.borrow_mut().insert(id, Box::pin(future));

If we try to run our example now, it won't even compile, and we get the following error:

error[E0599]: no method named `poll` found for struct `Pin<Box<dyn future::Future<Output = String>>>` in the current scope
  --> src\runtime\executor.rs:89:30
The compiler won't even let us poll the future anymore without pinning it first, since poll is now only callable on Pin<&mut Self> and not on &mut self. So, we have to decide whether we pin the value to the stack or the heap before we even try to poll it. In our case, the whole executor works by heap-allocating futures, so that's the only thing that makes sense. Let's remove our optimization entirely and change one line of code to make our executor work again:

ch09/e-coroutines-pin/src/runtime/executor.rs

match future.as_mut().poll(&waker) {

If you run the program again by writing cargo run, you should get the expected output back and no longer have to worry about the coroutine/wait-generated futures being moved (the output has been abbreviated slightly):

    Finished dev [unoptimized + debuginfo] target(s) in 0.02s
     Running `target\debug\e-coroutines-pin.exe`
Program starting
FIRST POLL - START OPERATION
main: 1 pending tasks. Sleep until notified.
FIRST POLL - START OPERATION
main: 1 pending tasks. Sleep until notified.

BUFFER:
----
HTTP/1.1 200 OK
content-length: 15
[=== ABBREVIATED ===]
date: Sun, 03 Dec 2023 23:18:12 GMT

HelloAsyncAwait
main: All tasks are finished

You now have self-referential coroutines that can safely store both data and references across wait points. Congratulations!

Even though making these changes took up quite a few pages, the changes themselves were pretty trivial for the most part. Most of them were due to Pin having a different API than the plain references we used before. The good thing is that this sets us up nicely for migrating our whole runtime over to futures created by async/await, instead of our own futures created by coroutine/wait, with very few changes.
Summary
What a ride, huh? If you've made it to the end of this chapter, you've done a fantastic job, and I have good news for you: you pretty much know everything about how Rust's futures work and what makes them special already. All the complicated topics are covered. In the next, and last, chapter, we'll switch over from our hand-made coroutines to proper async/await. That will seem like a breeze compared to what you've gone through so far.

Before we continue, let's stop for a moment and look at what we've learned in this chapter.

First, we expanded our coroutine implementation so that we could store variables across wait points. This is pretty important if our coroutine/wait syntax is going to rival regular synchronous code in readability and ergonomics. After that, we learned how to store and restore variables that hold references, which is just as important as being able to store data.

Next, we saw firsthand something that we'll never see in Rust unless we implement an asynchronous system, as we did in this chapter (which is quite the task just to prove a single point). We saw how moving coroutines that hold self-references caused serious memory safety issues, and exactly why we need something to prevent them.

That brought us to pinning and self-referential structs, and if you didn't know about these things already, you do now. In addition, you should at least know what a pin projection is and what we mean by structural pinning.

Then, we looked at the differences between pinning a value to the stack and pinning a value to the heap. You even saw how easy it was to break the Pin guarantee when pinning something to the stack, and why you should be very careful when doing just that. You also know about some tools that are widely used to make both pin projections and stack pinning safer and easier to use.

Next, we got firsthand experience of how we could use pinning to prevent the issues we had with our coroutine implementation.

If we take a look at what we've built so far, that's pretty impressive as well. We have the following:
A coroutine implementation we've created ourselves
Coroutine/wait syntax and a preprocessor that helps us with the boilerplate for our coroutines
Coroutines that can safely store both data and references across wait points
An efficient runtime that stores, schedules, and polls the tasks to completion
- The ability to spawn new tasks onto the runtime so that one task can spawn hundreds of new tasks that will run concurrently
- A reactor that uses epoll/kqueue/IOCP under the hood to efficiently wait for and respond to new events reported by the operating system

I think this is pretty cool. We're not quite done with this book yet. In the next chapter, you'll see how we can have our runtime run futures created by async/await instead of our own coroutine implementation with just a few changes. This enables us to leverage all the advantages of async Rust. We'll also take some time to discuss the state of async Rust today, the different runtimes you'll encounter, and what we might expect in the future. All the heavy lifting is done now. Well done!
10
Creating Your Own Runtime

In the last few chapters, we covered a lot of aspects that are relevant to asynchronous programming in Rust, but we did that by implementing alternative and simpler abstractions than what we have in Rust today. This last chapter will focus on bridging that gap by changing our runtime so that it works with Rust futures and async/await instead of our own futures and coroutine/wait. Since we've pretty much covered everything there is to know about coroutines, state machines, futures, wakers, runtimes, and pinning, adapting what we have now will be a relatively easy task.

When we get everything working, we'll do some experiments with our runtime to showcase and discuss some of the aspects that make asynchronous Rust somewhat difficult for newcomers today. We'll also take some time to discuss what we might expect in the future with asynchronous Rust before we summarize what we've done and learned in this book.

We'll cover the following main topics:
- Creating our own runtime with futures and async/await
- Experimenting with our runtime
- Challenges with asynchronous Rust
- The future of asynchronous Rust

Technical requirements

The examples in this chapter will build on the code from the last chapter, so the requirements are the same. The example is cross-platform and will work on all platforms that Rust (https://doc.rust-lang.org/beta/rustc/platform-support.html#tier-1-with-host-tools) and mio (https://github.com/tokio-rs/mio#platforms) support.
The only thing you need is Rust installed and the book's repository downloaded locally. All the code in this chapter can be found in the ch10 folder.

We'll use delayserver in this example as well, so you need to open a separate terminal, enter the delayserver folder at the root of the repository, and type cargo run so it's ready and available for the examples going forward. Remember to change the ports in the code if for some reason you have to change the port delayserver listens on.

Creating our own runtime with futures and async/await

Okay, so we're in the home stretch; the last thing we'll do is change our runtime so it uses the Rust Future trait, Waker, and async/await. This will be a relatively easy task for us now that we've covered the most complex aspects of asynchronous programming in Rust by building everything up ourselves. We have even gone into quite some detail on the design decisions that Rust had to make along the way.

The asynchronous programming model Rust has today is the result of an evolutionary process. Rust started out in its early stages with green threads, but this was before it reached version 1.0. At the point of reaching version 1.0, Rust didn't have the notion of futures or asynchronous operations in its standard library at all. This space was explored on the side in the futures-rs crate (https://github.com/rust-lang/futures-rs), which still serves as a nursery for async abstractions today. However, it didn't take long before Rust settled around a version of the Future trait similar to what we have today, often referred to as futures 0.1. Supporting coroutines created by async/await was already in the works at that point, but it took a few years before the design reached its final stage and entered the stable version of the standard library. So, many of the choices we had to make with our async implementation are real choices that Rust had to make along the way.

However, it all brings us to this point, so let's get to it and start adapting our runtime so it works with Rust futures. Before we get to the example, let's cover the things that are different from our current implementation:

- The Future trait Rust uses is slightly different from what we have now. The biggest difference is that it takes something called Context instead of Waker. The other difference is that it returns an enum called Poll instead of PollState.
- Context is a wrapper around Rust's Waker type. Its only purpose is to future-proof the API so it can hold additional data later on without having to change anything related to Waker.
- The Poll enum has one of two states, Ready(T) or Pending. This is slightly different from what we have now with our PollState enum, but the two states mean the same as Ready(T)/NotReady in our current implementation.
- Wakers in Rust are slightly more complex to create than what we're used to with our current Waker. We'll go through how and why later in the chapter.
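For reference, here are the definitions we'll be targeting, mirrored (lightly abridged) from the standard library's std::future::Future and std::task::Poll:

use std::pin::Pin;
use std::task::Context;

pub trait Future {
    type Output;
    // Pin<&mut Self> instead of &mut self, and a Context instead of a
    // bare Waker. Calling cx.waker() hands back a &Waker when we need one.
    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output>;
}

pub enum Poll<T> {
    Ready(T), // corresponds to our Ready(T)
    Pending,  // corresponds to our NotReady
}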
Other than the differences outlined above, everything else can stay pretty much as is. For the most part, we're renaming and refactoring this time. Now that we've got an idea of what we need to do, it's time to set everything up so we can get our new example up and running.

Note
Even though we create a runtime to run futures properly in Rust, we still try to keep things simple by avoiding error handling and not focusing on making our runtime more flexible. Improving our runtime is certainly possible, and while it can be a bit tricky at times to use the type system correctly and please the borrow checker, that has relatively little to do with async Rust and more to do with Rust being Rust.

Setting up our example

Tip
You'll find this example in the book's repository in the ch10/a-rust-futures folder.

We'll continue where we left off in the last chapter, so let's copy everything we had over to a new project:
1. Create a new folder called a-rust-futures.
2. Copy everything from the example in the previous chapter. If you followed the naming I suggested, it will be stored in the e-coroutines-pin folder.
3. You should now have a folder containing a copy of our previous example, so the last thing to do is change the project name in Cargo.toml to a-rust-futures.

Okay, so let's start with the program we want to run. Open main.rs.

main.rs

We'll go back to the simplest version of our program and get it running before we try anything more complex. Open main.rs and replace all the code in that file with this:

ch10/a-rust-futures/src/main.rs

mod http;
mod runtime;
use crate::http::Http;
fn main() {
    let mut executor = runtime::init();
    executor.block_on(async_main());
}

async fn async_main() {
    println!("Program starting");
    let txt = Http::get("/600/HelloAsyncAwait").await;
    println!("{txt}");
    let txt = Http::get("/400/HelloAsyncAwait").await;
    println!("{txt}");
}

No need for corofy or anything special this time. The compiler will rewrite this for us.

Note
Notice that we've removed the declaration of the future module. That's because we simply don't need it anymore. The only exception is if you want to retain and use the join_all function we created to join multiple futures together. You can either try to rewrite that yourself or take a look in the repository and locate the ch10/a-rust-futures-bonus/src/future.rs file, where you'll find the same version of our example, only that version retains the future module with a join_all function that works with Rust futures.

future.rs

You can delete this file altogether as we don't need our own Future trait anymore. Let's move right along to http.rs and see what we need to change there.

http.rs

The first thing we need to change is our dependencies. We'll no longer rely on our own Future, Waker, and PollState; instead, we'll depend on Future, Context, and Poll from the standard library. Our dependencies should look like this now:

ch10/a-rust-futures/src/http.rs

use crate::runtime::{self, reactor};
use mio::Interest;
use std::{
    future::Future,
    io::{ErrorKind, Read, Write},
    pin::Pin,
    task::{Context, Poll},
};

We have to do some minor refactoring in the poll implementation for HttpGetFuture. First, we need to change the signature of the poll function so it complies with the Future trait from the standard library:

ch10/a-rust-futures/src/http.rs

fn poll(mut self: Pin<&mut Self>, cx: &mut Context) -> Poll<Self::Output>

Since we named the new argument cx, we have to change what we pass in to set_waker accordingly:

ch10/a-rust-futures/src/http.rs

runtime::reactor().set_waker(cx, self.id);

Next, we need to change our future implementation so it returns Poll instead of PollState wherever we return from the function (I've only presented the relevant part of the function body here):

ch10/a-rust-futures/src/http.rs

loop {
    match self.stream.as_mut().unwrap().read(&mut buff) {
        Ok(0) => {
            let s = String::from_utf8_lossy(&self.buffer).to_string();
            runtime::reactor().deregister(self.stream.as_mut().unwrap(), id);
            break Poll::Ready(s);
        }
        Ok(n) => {
            self.buffer.extend(&buff[0..n]);
            continue;
        }
        Err(e) if e.kind() == ErrorKind::WouldBlock => {
            // always store the last given Waker
            runtime::reactor().set_waker(cx, self.id);
            break Poll::Pending;
        }
        Err(e) => panic!("{e:?}"),
    }
}

That's it for this file. Not bad, huh? Let's take a look at what we need to change in our executor and open executor.rs.

executor.rs

The first thing we need to change in executor.rs is our dependencies. This time, we only rely on types from the standard library, and our dependencies section should now look like this:

ch10/a-rust-futures/src/runtime/executor.rs

use std::{
    cell::{Cell, RefCell},
    collections::HashMap,
    future::Future,
    pin::Pin,
    sync::{Arc, Mutex},
    task::{Poll, Context, Wake, Waker},
    thread::{self, Thread},
};

Our coroutines will no longer be limited to only outputting String, so we can safely use a more sensible Output type for our top-level futures:

ch10/a-rust-futures/src/runtime/executor.rs

type Task = Pin<Box<dyn Future<Output = ()>>>;
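Here is a small sketch of what this Task alias buys us: any future with a unit output, including an async block, can be type-erased into the same boxed, pinned task type. The into_task helper is just for illustration and not part of the example code:

use std::{future::Future, pin::Pin};

type Task = Pin<Box<dyn Future<Output = ()>>>;

// Erase the concrete future type so the executor can store any task
// in one homogeneous collection.
fn into_task(f: impl Future<Output = ()> + 'static) -> Task {
    Box::pin(f)
}

fn main() {
    let _task: Task = into_task(async {
        println!("hello from a boxed, pinned task");
    });
}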
The next thing we'll dive straight into is the Waker, since the changes we make here will result in several other changes to this file.

Creating a waker in Rust can be quite a complex task since Rust wants to give us maximum flexibility in how we choose to implement wakers. The reason for this is twofold:

- Wakers must work just as well on a server as they do on a microcontroller
- A waker must be a zero-cost abstraction

Realizing that most programmers never need to create their own wakers, the cost of the lack of ergonomics was deemed acceptable.

Until quite recently, the only way to construct a waker in Rust was to create something very similar to a trait object without it being a trait object. To do so, you had to go through quite a complex process of constructing a v-table (a set of function pointers), combining that with a pointer to the data that the waker stored, and creating a RawWaker.

Fortunately, we don't actually have to go through this process anymore, as Rust now has the Wake trait. The Wake trait works if the waker type we create is placed in an Arc. Wrapping our waker in an Arc results in a heap allocation, but for most Waker implementations on the kind of systems we're talking about in this book, that's perfectly fine and what most production runtimes do. This simplifies things for us quite a bit.

Info
This is an example of Rust adopting what turned out to be best practice in the ecosystem. For a long time, a popular way to construct wakers was by implementing a trait called ArcWake provided by the futures crate (https://github.com/rust-lang/futures-rs). The futures crate is not a part of the language, but it's in the rust-lang repository and can be viewed much like a toolbox and nursery for abstractions that might end up in the language at some point in the future.

To avoid confusion by having multiple things with the same name, let's rename our concrete waker type to MyWaker:

ch10/a-rust-futures/src/runtime/executor.rs

#[derive(Clone)]
pub struct MyWaker {
    thread: Thread,
    id: usize,
    ready_queue: Arc<Mutex<Vec<usize>>>,
}
We can keep the implementation of wake pretty much the same, but we put it in an implementation of the Wake trait instead of just having a wake function on MyWaker:

ch10/a-rust-futures/src/runtime/executor.rs

impl Wake for MyWaker {
    fn wake(self: Arc<Self>) {
        self.ready_queue
            .lock()
            .map(|mut q| q.push(self.id))
            .unwrap();
        self.thread.unpark();
    }
}

You'll notice that the wake function takes a self: Arc<Self> argument, much like we saw when working with the Pin type. Writing the function signature this way means that wake is only callable on MyWaker instances that are wrapped in an Arc.

Since our waker has changed slightly, there are a few places where we need to make some minor corrections. The first is in the get_waker function:

ch10/a-rust-futures/src/runtime/executor.rs

fn get_waker(&self, id: usize) -> Arc<MyWaker> {
    Arc::new(MyWaker {
        id,
        thread: thread::current(),
        ready_queue: CURRENT_EXEC.with(|q| q.ready_queue.clone()),
    })
}

So, not a big change here. The only difference is that we heap-allocate the waker by placing it in an Arc. The next place we need to make a change is in the block_on function. First, we need to change its signature so that it matches our new definition of a top-level future:

ch10/a-rust-futures/src/runtime/executor.rs

pub fn block_on<F>(&mut self, future: F)
where
    F: Future<Output = ()> + 'static,
{
The next step is to change how we create a waker and wrap it in a Context struct in the block_on function:

ch10/a-rust-futures/src/runtime/executor.rs

...
        // guard against false wakeups
        None => continue,
    };
    let waker: Waker = self.get_waker(id).into();
    let mut cx = Context::from_waker(&waker);
    match future.as_mut().poll(&mut cx) {
...

This change is a little bit complex, so we'll go through it step by step:
1. First, we get an Arc<MyWaker> by calling the get_waker function just like we did before.
2. We convert it into a plain Waker by specifying the type we expect with let waker: Waker and calling into() on the Arc<MyWaker>. Since anything that implements Wake and is wrapped in an Arc can act as a Waker, this converts it into the Waker type that's defined in the standard library, which is just what we need.
3. Since Future::poll expects a Context and not a Waker, we create a new Context struct that holds a reference to the waker we just created.

The last place we need to make a change is the signature of our spawn function, so that it takes the new definition of top-level futures as well:

ch10/a-rust-futures/src/runtime/executor.rs

pub fn spawn<F>(future: F)
where
    F: Future<Output = ()> + 'static,

That was the last thing we needed to change in our executor, and we're almost done. The last change we need to make to our runtime is in the reactor, so let's go ahead and open reactor.rs.

reactor.rs

The first thing we do is make sure our dependencies are correct. We have to remove the dependency on our old Waker implementation and instead pull in these types from the standard library. The dependencies section should look like this:
ch10/a-rust-futures/src/runtime/reactor.rs

use mio::{net::TcpStream, Events, Interest, Poll, Registry, Token};
use std::{
    collections::HashMap,
    sync::{
        atomic::{AtomicUsize, Ordering},
        Arc, Mutex, OnceLock,
    },
    thread,
    task::{Context, Waker},
};

There are two minor changes we need to make. The first one is that our set_waker function now accepts a Context, from which it gets the Waker object:

ch10/a-rust-futures/src/runtime/reactor.rs

pub fn set_waker(&self, cx: &Context, id: usize) {
    let _ = self
        .wakers
        .lock()
        .map(|mut w| w.insert(id, cx.waker().clone()).is_none())
        .unwrap();
}

The last change is that we need to call a slightly different method when calling wake in the event_loop function:

ch10/a-rust-futures/src/runtime/reactor.rs

if let Some(waker) = wakers.get(&id) {
    waker.wake_by_ref();
}

Since calling wake now consumes self, we call the version that takes &self instead, since we want to hold on to that waker for later.

That's it. Our runtime can now run and take advantage of the full power of asynchronous Rust. Let's try it out by typing cargo run in the terminal.
We should get the same output as we've seen before:

Program starting
FIRST POLL-START OPERATION
main: 1 pending tasks. Sleep until notified.
HTTP/1.1 200 OK
content-length: 15
[==== ABBREVIATED ====]
HelloAsyncAwait
main: All tasks are finished

That's pretty neat, isn't it? So, now we have created our own async runtime that uses Rust's Future, Waker, Context, and async/await.

Now that we can pride ourselves on being runtime implementors, it's time to do some experiments. I'll choose a few that will also teach us a thing or two about runtimes and futures in Rust. We're not done learning just yet.

Experimenting with our runtime

Note
You'll find this example in the book's repository in the ch10/b-rust-futures-experiments folder. The different experiments are implemented as different versions of the async_main function, numbered chronologically. I'll indicate which function in the repository example each snippet corresponds to in the heading of the code snippet.

Before we start experimenting, let's copy everything we have now to a new folder:
1. Create a new folder called b-rust-futures-experiments.
2. Copy everything from the a-rust-futures folder to the new folder.
3. Open Cargo.toml and change the name attribute to b-rust-futures-experiments.

The first experiment will be to exchange our very limited HTTP client with a proper one. The easiest way to do that is to simply pick a production-quality HTTP client library that supports async Rust and use that instead. So, when trying to find a suitable replacement for our HTTP client, we check the list of the most popular high-level HTTP client libraries and find reqwest at the top. That might work for our purposes, so let's try that first.
The first thing we do is add reqwest as a dependency in Cargo.toml by typing the following:

cargo add reqwest@0.11

Next, let's change our async_main function so we use reqwest instead of our own HTTP client:

ch10/b-rust-futures-experiments/src/main.rs (async_main2)

async fn async_main() {
    println!("Program starting");
    let url = "http://127.0.0.1:8080/600/HelloAsyncAwait1";
    let res = reqwest::get(url).await.unwrap();
    let txt = res.text().await.unwrap();
    println!("{txt}");
    let url = "http://127.0.0.1:8080/400/HelloAsyncAwait2";
    let res = reqwest::get(url).await.unwrap();
    let txt = res.text().await.unwrap();
    println!("{txt}");
}

Besides using the reqwest API, I also changed the message we send. Most HTTP clients don't return the raw HTTP response to us and usually only provide a convenient way to get the body of the response, which up until now was similar for both our requests. That should be all we need to change, so let's try to run our program by writing cargo run:

Running `target\debug\a-rust-futures.exe`
Program starting
thread 'main' panicked at C:\Users\cf\.cargo\registry\src\index.crates.io-6f17d22bba15001f\tokio-1.35.0\src\net\tcp\stream.rs:160:18:
there is no reactor running, must be called from the context of a Tokio 1.x runtime

Okay, so the error tells us that there is no reactor running and that it must be called from the context of a Tokio 1.x runtime. Well, we know there is a reactor running, just not the one reqwest expects, so let's see how we can fix this.

We obviously need to add Tokio to our program, and since Tokio is heavily feature-gated (meaning that it has very few features enabled by default), we'll make it easy on ourselves and enable all of them:

cargo add tokio@1 --features full

According to the documentation, we need to start a Tokio runtime and explicitly enter it to enable the reactor. The enter function returns an EnterGuard that we can hold on to for as long as we need the reactor up and running.
Adding this to the top of our async_main function should work:

ch10/b-rust-futures-experiments/src/main.rs (async_main2)

use tokio::runtime::Runtime;

async fn async_main() {
    let rt = Runtime::new().unwrap();
    let _guard = rt.enter();
    println!("Program starting");
    let url = "http://127.0.0.1:8080/600/HelloAsyncAwait1";
    ...

Note
Calling Runtime::new creates a multithreaded Tokio runtime, but Tokio also has a single-threaded runtime that you can create by using the runtime builder like this: Builder::new_current_thread().enable_all().build().unwrap(). If you do that, you end up with a peculiar problem: a deadlock. The reason for that is interesting and one that you should know about. Tokio's single-threaded runtime uses only the thread it's called on for both the executor and the reactor. This is very similar to what we did in the first version of our runtime in Chapter 8, where we used the Poll instance to park our executor directly. When both the reactor and the executor execute on the same thread, they must share the same mechanism for parking and waiting for new events, which means there will be a tight coupling between them. When handling an event, the reactor has to wake up first to call Waker::wake, but the executor is the last one to park the thread. If the executor parked itself by calling thread::park (like ours does), the reactor is parked as well and will never wake up, since they're running on the same thread. The only way for this to work is for the executor to park on something shared with the reactor (like we did with Poll). Since we're not tightly integrated with Tokio, all we get is a deadlock.

Now, if we try to run our program once more, we get the following output:

Program starting
main: 1 pending tasks. Sleep until notified.
main: 1 pending tasks. Sleep until notified.
main: 1 pending tasks. Sleep until notified.
HelloAsyncAwait1
main: 1 pending tasks. Sleep until notified.
main: 1 pending tasks. Sleep until notified.
main: 1 pending tasks. Sleep until notified.
HelloAsyncAwait2
main: All tasks are finished
Okay, so now everything works as expected. The only difference is that we get woken up a few extra times, but the program finishes and produces the expected result. Before we discuss what we just witnessed, let's do one more experiment.

Isahc is an HTTP client library that promises to be executor agnostic, meaning that it doesn't rely on any specific executor. Let's put that to the test. First, we add a dependency on isahc by typing the following:

cargo add isahc@1.7

Then, we rewrite our main function so it looks like this:

ch10/b-rust-futures-experiments/src/main.rs (async_main3)

use isahc::prelude::*;

async fn async_main() {
    println!("Program starting");
    let url = "http://127.0.0.1:8080/600/HelloAsyncAwait1";
    let mut res = isahc::get_async(url).await.unwrap();
    let txt = res.text().await.unwrap();
    println!("{txt}");
    let url = "http://127.0.0.1:8080/400/HelloAsyncAwait2";
    let mut res = isahc::get_async(url).await.unwrap();
    let txt = res.text().await.unwrap();
    println!("{txt}");
}

Now, if we run our program by writing cargo run, we get the following output:

Program starting
main: 1 pending tasks. Sleep until notified.
main: 1 pending tasks. Sleep until notified.
main: 1 pending tasks. Sleep until notified.
HelloAsyncAwait1
main: 1 pending tasks. Sleep until notified.
main: 1 pending tasks. Sleep until notified.
main: 1 pending tasks. Sleep until notified.
HelloAsyncAwait2
main: All tasks are finished

So, we get the expected output without having to jump through any hoops.
Why does all this have to be so unintuitive? The answer brings us to the topic of common challenges that we all face when programming with async Rust, so let's cover some of the most noticeable ones and explain why they exist, so we can figure out how best to deal with them.

Challenges with asynchronous Rust

So, while we've seen with our own eyes that the executor and reactor can be loosely coupled, which in turn means that you could in theory mix and match reactors and executors, the question is: why do we encounter so much friction when trying to do just that? Most programmers who have used async Rust have experienced problems caused by incompatible async libraries, and we saw an example of the kind of error message you get earlier. To understand this, we have to dive a little bit deeper into the existing async runtimes in Rust, specifically those we typically use for desktop and server applications.

Explicit versus implicit reactor instantiation

Info
The type of future we'll talk about going forward is leaf futures, the kind that actually represents an I/O operation (for example, HttpGetFuture).

When you create a runtime in Rust, you also need to create non-blocking equivalents of the primitives in the Rust standard library. Mutexes, channels, timers, TcpStreams, and so on all need an async equivalent. Most of these can be implemented as different kinds of reactors, but the question that then comes up is: how is that reactor started?

In both our own runtime and in Tokio, the reactor is started as part of the runtime initialization. We have a runtime::init() function that calls reactor::start(), and Tokio has the Runtime::new() and Runtime::enter() functions. If we try to create a leaf future (the only one we created ourselves is HttpGetFuture) without the reactor started, both our runtime and Tokio will panic. The reactor has to be instantiated explicitly.

Isahc, on the other hand, brings its own kind of reactor. Isahc is built on libcurl, a highly portable C library for multiprotocol file transfer. The thing that's relevant for us, however, is that libcurl accepts a callback that is called when an operation is ready. So, Isahc passes the waker it receives to this callback and makes sure that Waker::wake is called when the callback is executed. This is a bit oversimplified, but it's essentially what happens.
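Here is a minimal sketch of that callback-to-waker pattern, with a plain OS thread standing in for libcurl's machinery. The names are made up for illustration; Isahc's real internals look quite different:

use std::{
    future::Future,
    pin::Pin,
    sync::{Arc, Mutex},
    task::{Context, Poll, Waker},
    thread,
    time::Duration,
};

// Shared between the future and the "callback": a done flag + the last waker.
struct Shared {
    done: bool,
    waker: Option<Waker>,
}

struct CallbackDriven {
    shared: Arc<Mutex<Shared>>,
}

impl CallbackDriven {
    fn start() -> Self {
        let shared = Arc::new(Mutex::new(Shared { done: false, waker: None }));
        let cb = shared.clone();
        // Stand-in for libcurl: perform the "operation", then fire the callback.
        thread::spawn(move || {
            thread::sleep(Duration::from_millis(100));
            let mut s = cb.lock().unwrap();
            s.done = true;
            if let Some(waker) = s.waker.take() {
                waker.wake(); // the "reactor" lives in this callback
            }
        });
        CallbackDriven { shared }
    }
}

impl Future for CallbackDriven {
    type Output = ();
    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        let mut s = self.shared.lock().unwrap();
        if s.done {
            Poll::Ready(())
        } else {
            // Store the most recent waker before returning Pending. Since the
            // callback takes the same lock, a wakeup can never be lost.
            s.waker = Some(cx.waker().clone());
            Poll::Pending
        }
    }
}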
In practice, that means that Isahc brings its own reactor, since it comes with the machinery to store wakers and call wake on them when an operation is ready. The reactor is started implicitly.

Incidentally, this is also one of the major differences between async_std and Tokio. Tokio requires explicit instantiation, while async_std relies on implicit instantiation. I'm not going into so much detail on this just for fun; while this seems like a minor difference, it has a rather big impact on how intuitive asynchronous programming in Rust is.

This problem mostly arises when you start programming using a different runtime than Tokio and then have to use a library that internally relies on a Tokio reactor being present. Since you can't have two Tokio instances running on the same thread, the library can't implicitly start a Tokio reactor. Instead, what often happens is that you try to use that library and get an error like we did in the preceding example. Now, you have to solve this by starting a Tokio reactor yourself, using some kind of compatibility wrapper created by someone else, or seeing whether the runtime you use has a built-in mechanism for running futures that rely on a Tokio reactor being present. For most people who don't know about reactors, executors, and different kinds of leaf futures, this can be quite unintuitive and cause quite a bit of frustration.

Note
The problem we describe here is quite common, and it's not helped by the fact that async libraries rarely explain this well or even try to be explicit about what kind of runtime they use. Some libraries might only mention that they're built on top of Tokio somewhere in the README file, and some might simply state that they're built on top of Hyper, for example, assuming that you know that Hyper is built on top of Tokio (at least by default). But now, you know that you should check this to avoid any surprises, and if you encounter this issue, you know exactly what the problem is.

Ergonomics versus efficiency and flexibility

Rust is good at being both ergonomic and efficient, and that almost makes it easy to forget that when Rust is faced with the choice between being efficient or ergonomic, it will choose to be efficient. Many of the most popular crates in the ecosystem echo these values, and that includes async runtimes. Some tasks can be more efficient if they're tightly integrated with the executor, and therefore, if you use them in your library, you will be dependent on that specific runtime. Let's take timers as an example (task notifications, where Task A notifies Task B that it can continue, are another example with some of the same trade-offs).
Tasks
We've used the terms tasks and futures without making the difference explicitly clear, so let's clear that up here. We first covered tasks in Chapter 1, and they still retain the same general meaning, but when talking about runtimes in Rust, they have a more specific definition. A task is a top-level future, the one that we spawn onto our executor. The executor schedules between different tasks. Tasks in a runtime in many ways represent the same abstraction that threads do in an OS. Every task is a future in Rust, but not every future is a task by this definition.

You can think of thread::sleep as a timer, and we often need something like this in an asynchronous context, so our asynchronous runtime will need a sleep equivalent that tells the executor to park the task for a specified duration. We could implement this as a reactor and have a separate OS thread sleep for the specified duration and then wake the correct Waker (a sketch of this approach follows below). That would be simple and executor agnostic, since the executor is oblivious to what happens and only concerns itself with scheduling the task when Waker::wake is called. However, it's also not optimally efficient for all workloads (even if we used the same thread for all timers).

Another, and more common, way to solve this is to delegate this task to the executor. In our runtime, this could be done by having the executor store an ordered list of instants and their corresponding Wakers, which it uses to determine whether any timers have expired before it calls thread::park. If none have expired, we can calculate the duration until the next timer expires and use something such as thread::park_timeout to make sure that we at least wake up in time to handle that timer. The algorithms used to store the timers can be heavily optimized, and you avoid the need for one extra thread just for timers, along with the overhead of synchronization between threads just to signal that a timer has expired. In a multithreaded runtime, there might even be contention when multiple executors frequently add timers to the same reactor.

Some timers are implemented reactor-style as separate libraries, and for many tasks, that will suffice. The important point here is that by using the defaults, you end up being tied to one specific runtime, and you have to make careful considerations if you want to avoid your library being tightly coupled to a specific runtime.
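Here is a minimal sketch of the executor-agnostic, thread-per-timer approach described above, written against the standard library's Future trait. One OS thread per timer sleeps for the duration and then wakes the stored Waker:

use std::{
    future::Future,
    pin::Pin,
    sync::{Arc, Mutex},
    task::{Context, Poll, Waker},
    thread,
    time::{Duration, Instant},
};

struct Sleep {
    until: Instant,
    waker: Arc<Mutex<Option<Waker>>>,
    timer_started: bool,
}

impl Sleep {
    fn new(dur: Duration) -> Self {
        Sleep {
            until: Instant::now() + dur,
            waker: Arc::new(Mutex::new(None)),
            timer_started: false,
        }
    }
}

impl Future for Sleep {
    type Output = ();
    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        // Store the latest waker first, then check the deadline, so a timer
        // thread firing in between can never leave us sleeping forever.
        *self.waker.lock().unwrap() = Some(cx.waker().clone());
        if Instant::now() >= self.until {
            return Poll::Ready(());
        }
        if !self.timer_started {
            self.timer_started = true;
            let waker = self.waker.clone();
            let until = self.until;
            thread::spawn(move || {
                let now = Instant::now();
                if until > now {
                    thread::sleep(until - now);
                }
                if let Some(w) = waker.lock().unwrap().take() {
                    w.wake();
                }
            });
        }
        Poll::Pending
    }
}

A Sleep like this works on any executor, at the cost of one thread per timer; the executor-integrated design described in the prose avoids that overhead.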
Common traits that everyone agrees about

The last topic that causes friction in async Rust is the lack of universally agreed-upon traits and interfaces for typical async operations. I want to preface this segment by pointing out that this is one area that's improving day by day, and there is a nursery for the traits and abstractions of asynchronous Rust in the futures-rs crate (https://github.com/rust-lang/futures-rs). However, since it's still early days for async Rust, it's something worth mentioning in a book like this.

Let's take spawning as an example. When you write a high-level async library in Rust, such as a web server, you'll likely want to be able to spawn new tasks (top-level futures). For example, each connection to the server will most likely be a new task that you want to spawn onto the executor. Now, spawning is specific to each executor, and Rust doesn't have a trait that defines how to spawn a task. There is a trait suggested for spawning in the futures-rs crate, but creating a spawn trait that is both zero-cost and flexible enough to support all kinds of runtimes turns out to be very difficult.

There are ways around this. The popular HTTP library Hyper (https://hyper.rs/), for example, uses a trait to represent the executor and internally uses that to spawn new tasks. This makes it possible for users to implement this trait for a different executor and hand it back to Hyper. By implementing this trait for a different executor, Hyper will use a different spawner than its default option (which is the one in Tokio's executor). Here is an example of how this is used for async_std with Hyper: https://github.com/async-rs/async-std-hyper.

However, since there is no universal way of making this work, most libraries that rely on executor-specific functionality do one of two things:
1. Choose a runtime and stick with it.
2. Implement two versions of the library, supporting different popular runtimes that users choose between by enabling the correct features.

Async drop

Async drop, or async destructors, is an aspect of async Rust that's somewhat unresolved at the time of writing this book. Rust uses a pattern called RAII, which means that when a type is created, so are its resources, and when a type is dropped, its resources are freed as well. The compiler automatically inserts a call to drop on objects when they go out of scope. If we take our runtime as an example, resources are dropped in a blocking manner. This is normally not a big problem, since a drop likely won't block the executor for long, but that isn't always the case. If we have a drop implementation that takes a long time to finish (for example, if the drop needs to perform I/O or makes a blocking call to the OS kernel, which is perfectly legal and sometimes even unavoidable in Rust), it can potentially block the executor. So, an async drop would somehow be able to yield to the scheduler in such cases, and this is not possible at the moment. This isn't a rough edge of async Rust you're likely to encounter as a user of async libraries, but it's worth knowing about, since right now, the only way to make sure it doesn't cause issues is to be careful about what you put in the drop implementation for types that are used in an async context (see the sketch below).
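To make this concrete, here's a sketch of the kind of Drop implementation to be careful with in an async context (the type is hypothetical):

use std::path::PathBuf;

struct TempFile {
    path: PathBuf,
}

impl Drop for TempFile {
    fn drop(&mut self) {
        // A blocking call into the OS: perfectly legal, idiomatic RAII in
        // synchronous code, but if this runs inside a task, the executor
        // thread stalls here, and drop has no way to yield to the scheduler.
        let _ = std::fs::remove_file(&self.path);
    }
}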
So, while this is not an extensive list of everything that causes friction in async Rust, these are some of the points I find most noticeable and worth knowing about. Before we round off this chapter, let's spend a little time talking about what we should expect in the future when it comes to asynchronous programming in Rust.

The future of asynchronous Rust

Some of the things that make async Rust different from other languages are unavoidable. Asynchronous Rust is very efficient, has low latency, and is backed by a very strong type system due to how the language is designed and its core values. However, much of the perceived complexity today has more to do with the ecosystem and the kind of issues that result from a lot of programmers having to agree on the best way to solve different problems without any formal structure. The ecosystem becomes fragmented for a while, and together with the fact that asynchronous programming is a difficult topic for a lot of programmers, this adds to the cognitive load associated with asynchronous Rust.

All the issues and pain points I've mentioned in this chapter are constantly getting better. Some points that would have been on this list a few years ago are not even worth mentioning today. More and more common traits and abstractions will end up in the standard library, making async Rust more ergonomic, since everything that uses them will "just work." As different experiments and designs gain more traction than others, they become the de facto standard, and even though you will still have a lot of choices when programming asynchronous Rust, there will be certain paths that cause a minimal amount of friction for those who want something that "just works."

With enough knowledge about asynchronous Rust and asynchronous programming in general, the issues I've mentioned here are, after all, relatively minor, and since you know more about asynchronous Rust than most programmers, I have a hard time imagining that any of them will cause you a lot of trouble. That doesn't mean they're not worth knowing about, since chances are your fellow programmers will struggle with some of these issues at some point.

Summary

So, in this chapter, we did two things. First, we made some rather minor changes to our runtime so it works as an actual runtime for Rust futures. We tested the runtime using two external HTTP client libraries to learn a thing or two about reactors, runtimes, and async libraries in Rust. The next thing we did was discuss some of the things that make asynchronous Rust difficult for many programmers coming from other languages. In the end, we also talked about what to expect going forward.
Depending on how you've followed along and how much you've experimented with the examples we created along the way, it's up to you what project to take on yourself if you want to learn more. There is an important aspect of learning that only happens when you experiment on your own. Pick everything apart, see what breaks and how to fix it, and improve the simple runtime we created to learn new things. There are enough interesting projects to pick from, but here are some suggestions:

- Change out the parker implementation, where we used thread::park, with a proper parker (a minimal sketch follows this list). You can choose one from a library or create a parker yourself (I added a small bonus at the end of the ch10 folder called parker-bonus where you get a simple parker implementation).
- Implement a simple delayserver using the runtime you've created yourself. To do this, you have to be able to write some raw HTTP responses and create a simple server. If you went through the free introductory book The Rust Programming Language, you created a simple server in one of the last chapters (https://doc.rust-lang.org/book/ch20-02-multithreaded.html), which gives you the basics you need. You also need to create a timer as we discussed above, or use an existing crate for async timers.
- You can create a "proper" multithreaded runtime and explore the possibilities that come with having a global task queue, or, as an alternative, implement a work-stealing scheduler that can steal tasks from other executors' local queues when they're done with their own.

Only your imagination sets the limits on what you can do. The important thing to note is that there is a certain joy in doing something just because you can and just for fun, and I hope that you get some of the same enjoyment from this as I do.
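As a starting point for the first suggestion above, here is a minimal dedicated parker built from a mutex and a condvar (the parker-bonus version in the repository may look different; this is just one common shape). Unlike thread::park, nothing else in the process can consume or trigger its wakeups:

use std::sync::{Condvar, Mutex};

struct Parker {
    token: Mutex<bool>,
    cond: Condvar,
}

impl Parker {
    fn new() -> Self {
        Parker {
            token: Mutex::new(false),
            cond: Condvar::new(),
        }
    }

    // Block until someone calls unpark. An unpark that happens first is
    // remembered as a token, so the wakeup can't be lost.
    fn park(&self) {
        let mut token = self.token.lock().unwrap();
        while !*token {
            token = self.cond.wait(token).unwrap();
        }
        *token = false; // consume the token
    }

    fn unpark(&self) {
        *self.token.lock().unwrap() = true;
        self.cond.notify_one();
    }
}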
I'll end this chapter with a few words on how to make your life as an asynchronous programmer as easy as possible. The first thing is to realize that an async runtime is not just another library that you use. It's extremely invasive and impacts almost everything in your program. It's a layer that rewrites, schedules tasks, and reorders the program flow from what you're used to. My clear recommendation, if you're not specifically into learning about runtimes or have very specific needs, is to pick one runtime and stick with it for a while. Learn everything about it; not necessarily everything from the start, but as you need more and more functionality from it, you will learn everything eventually. This is almost like getting comfortable with everything in Rust's standard library.

What runtime you start with depends a bit on what crates you're using the most. Smol and async-std share a lot of implementation details and will behave similarly. Their big selling point is that their APIs strive to stay as close as possible to the standard library. Combined with the fact that their reactors are instantiated implicitly, this can result in a slightly more intuitive experience and a gentler learning curve. Both are production-quality runtimes and see a lot of use. Smol was originally created with the goal of having a code base that's easy for programmers to understand and learn from, which I think is true today as well.

With that said, the most popular alternative for users looking for a general-purpose runtime at the time of writing is Tokio (https://tokio.rs/). Tokio is one of the oldest async runtimes in Rust. It is actively developed and has a welcoming and active community, and the documentation is excellent. Being one of the most popular runtimes also means there is a good chance that you'll find a library that does exactly what you need with support for Tokio out of the box. Personally, I tend to reach for Tokio for the reasons mentioned, but you can't really go wrong with either of these runtimes unless you have very specific requirements.

Finally, let's not forget to mention the futures-rs crate (https://github.com/rust-lang/futures-rs). I mentioned this crate earlier, but it's really useful to know about, as it contains several traits, abstractions, and executors (https://docs.rs/futures/latest/futures/executor/index.html) for async Rust. It serves the purpose of an async toolbox that comes in handy in many situations.
Epilogue

So, you have reached the end. First of all, congratulations! You've come to the end of quite a journey!

We started by talking about concurrency and parallelism in Chapter 1. We even covered a bit of history, CPUs and OSs, hardware, and interrupts. In Chapter 2, we discussed how programming languages model asynchronous program flow. We introduced coroutines and how stackful and stackless coroutines differ. We discussed OS threads, fibers/green threads, and callbacks, along with their pros and cons. Then, in Chapter 3, we took a look at OS-backed event queues such as epoll, kqueue, and IOCP. We even took quite a deep dive into syscalls and cross-platform abstractions.

In Chapter 4, we hit some quite difficult terrain when implementing our own mio-like event queue using epoll. We even had to learn about the difference between edge-triggered and level-triggered events. If Chapter 4 was somewhat rough terrain, Chapter 5 was more like climbing Mount Everest. No one expects you to remember everything covered there, but you read through it and have a working example you can use to experiment with. We implemented our own fibers/green threads, and while doing so, we learned a little bit about processor architectures, ISAs, ABIs, and calling conventions. We even learned quite a bit about inline assembly in Rust. If you ever felt insecure about the stack versus heap difference, you surely understand it now that you've created stacks yourself and made the CPU jump to them.

In Chapter 6, we got a high-level introduction to asynchronous Rust before taking a deep dive from Chapter 7 onward, starting with creating our own coroutines and our own coroutine/wait syntax. In Chapter 8, we created the first versions of our own runtime while discussing basic runtime design. We also deep-dived into reactors, executors, and wakers. In Chapter 9, we improved our runtime and discovered the dangers of self-referential structs in Rust. We then took a thorough look at pinning in Rust and how it helped us solve the problems we ran into. Finally, in Chapter 10, we saw that by making some rather minor changes, our runtime became a fully functioning runtime for Rust futures. We rounded everything off by discussing some well-known challenges with asynchronous Rust and some expectations for the future.

The Rust community is very inclusive and welcoming, and we'd happily welcome you to engage and contribute if you find this topic interesting and want to learn more. One of the ways asynchronous Rust gets better is through contributions from people with all levels of experience. If you want to get involved, then the async working group (https://rust-lang.github.io/wg-async/welcome.html) is a good place to start. There is also a very active community centered around the Tokio project (https://github.com/tokio-rs/tokio/blob/master/CONTRIBUTING.md), and many, many more depending on what specific area you want to dive deeper into. Don't be afraid to join the different channels and ask questions.
Now that we're at the end, I want to thank you for reading all the way through. I wanted this book to feel like a journey we took together, not like a lecture. I wanted you to be the focus, not me. I hope I succeeded with that, and I genuinely hope that you learned something that you find useful and can take with you going forward. If you did, then I'm sincerely happy that my work was of value to you.

I wish you the best of luck with your asynchronous programming going forward. Until next time!

Carl Fredrik