Figure 4.1 - Edge-triggered versus level-triggered events

mio doesn't, at the time of writing, support EPOLLONESHOT and uses epoll in edge-triggered mode, which we will do as well in our example.

What about waiting on epoll_wait in multiple threads? As long as we only have one Poll instance, we avoid the problems and subtleties of having multiple threads calling epoll_wait on the same epoll instance. Using level-triggered events will wake up all threads that are waiting in the epoll_wait call, causing all of them to try to handle the event (this is often referred to as the thundering herd problem). epoll has another flag you can set, called EPOLLEXCLUSIVE, that solves this issue. Events that are set to be edge-triggered will, by default, only wake up one of the threads blocking in epoll_wait, which avoids the issue. Since we only use one Poll instance from a single thread, this will not be an issue for us.

I know and understand that this sounds very complex. The general concept of event queues is rather simple, but the details can get a bit complex. That said, epoll is one of the most complex APIs in my experience, since the API has clearly been evolving over time to adapt the original design to modern requirements, and there is really no easy way to use and understand it correctly without covering at least the topics we covered here. One word of comfort is that both kqueue and IOCP have APIs that are easier to understand. There is also the fact that Linux has a newer asynchronous I/O interface called io_uring that will become more and more common in the future.
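To make the difference concrete, the trigger mode is chosen purely by the bitmask we pass when registering interest in a file descriptor. Here is a minimal sketch; the constant values mirror the ones in <sys/epoll.h>, but treat the exact names and values as assumptions until we define them ourselves in the ffi module:

fn main() {
    // EPOLLIN means "notify me when the source is readable".
    const EPOLLIN: i32 = 0x1;
    // EPOLLET switches the registration to edge-triggered mode.
    const EPOLLET: i32 = 1 << 31;

    // Level-triggered (the default): we keep getting notified as long as
    // unread data remains in the buffer.
    let level_triggered = EPOLLIN;
    // Edge-triggered: we're only notified on the transition from "no data"
    // to "data", so we must fully drain the buffer when we're woken up.
    let edge_triggered = EPOLLIN | EPOLLET;
    println!("{level_triggered:#x} vs {edge_triggered:#x}");
}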
Now that we've covered the hard part of this chapter and gotten a high-level overview of how epoll works, it's time to implement our mio-inspired API in poll.rs.

The Poll module

If you haven't written or copied the code we presented in the Design and introduction to epoll section, it's time to do it now. We'll implement all the functions where we just had todo!() earlier.

We start by implementing the methods on our Poll struct. First up is opening the impl Poll block and implementing the new function:

ch04/a-epoll/src/poll.rs

impl Poll {
    pub fn new() -> Result<Self> {
        let res = unsafe { ffi::epoll_create(1) };
        if res < 0 {
            return Err(io::Error::last_os_error());
        }
        Ok(Self {
            registry: Registry { raw_fd: res },
        })
    }

Given the thorough introduction to epoll in The ffi module section, this should be pretty straightforward. We call ffi::epoll_create with an argument of 1 (remember, the argument is ignored but must have a non-zero value). If we get any errors, we ask the operating system to report the last error for our process and return that. If the call succeeds, we return a new Poll instance that simply wraps around our registry, which holds the epoll file descriptor.

Next up is our registry method, which simply hands out a reference to the inner Registry struct:

ch04/a-epoll/src/poll.rs

    pub fn registry(&self) -> &Registry {
        &self.registry
    }
The last method on Poll is the most interesting one. It's the poll function, which will park the current thread and tell the operating system to wake it up when an event has happened on a source we're tracking or the timeout has elapsed, whichever comes first. We also close the impl Poll block here:

ch04/a-epoll/src/poll.rs

    pub fn poll(&mut self, events: &mut Events, timeout: Option<i32>) -> Result<()> {
        let fd = self.registry.raw_fd;
        let timeout = timeout.unwrap_or(-1);
        let max_events = events.capacity() as i32;
        let res = unsafe { ffi::epoll_wait(fd, events.as_mut_ptr(), max_events, timeout) };
        if res < 0 {
            return Err(io::Error::last_os_error());
        };
        unsafe { events.set_len(res as usize) };
        Ok(())
    }
}

The first thing we do is get the raw file descriptor for the event queue and store it in the fd variable. Next is our timeout. If it's Some, we unwrap that value, and if it's None, we set it to -1, which is the value that tells the operating system that we want to block until an event occurs, even though that might never happen.

At the top of the file, we defined Events as a type alias for Vec<ffi::Event>, so the next thing we do is get the capacity of that Vec. It's important that we don't rely on Vec::len, since that reports how many items we have in the Vec. Vec::capacity reports the space we've allocated, and that's what we're after.

Next up is the call to ffi::epoll_wait. This call will return successfully if it has a value of 0 or larger, telling us how many events have occurred.

Note
We would get a value of 0 if a timeout elapses before an event has happened.

The last thing we do is make an unsafe call to events.set_len(res as usize). This function is unsafe since we could potentially set the length so that we would access memory that hasn't been initialized yet in safe Rust. We know from the guarantee the operating system gives us that the number of events it returns points to valid data in our Vec, so this is safe in our case.
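As a quick illustration of the timeout semantics, here is a hypothetical caller, not part of the chapter's code, that assumes the Poll, Events, and Result types from this module are in scope and waits for at most one second:

fn wait_one_second(poll: &mut Poll) -> Result<()> {
    let mut events = Vec::with_capacity(10);
    // Some(1000) means "wake me up after at most 1,000 ms have passed";
    // None would block until an event occurs, however long that takes.
    poll.poll(&mut events, Some(1000))?;
    if events.is_empty() {
        println!("timed out (or spurious wakeup)");
    }
    Ok(())
}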
Next up is our Registry struct. We will only implement one method, called register, and lastly, we'll implement the Drop trait for it, closing the epoll instance:

ch04/a-epoll/src/poll.rs

impl Registry {
    pub fn register(&self, source: &TcpStream, token: usize, interests: i32) -> Result<()> {
        let mut event = ffi::Event {
            events: interests as u32,
            epoll_data: token,
        };
        let op = ffi::EPOLL_CTL_ADD;
        let res = unsafe { ffi::epoll_ctl(self.raw_fd, op, source.as_raw_fd(), &mut event) };
        if res < 0 {
            return Err(io::Error::last_os_error());
        }
        Ok(())
    }
}

The register function takes a &TcpStream as a source, a token of type usize, and a bitmask named interests, which is of type i32.

Note
This is where mio does things differently. The source argument is specific to each platform. Instead of having the implementation of register on Registry, it's handled in a platform-specific way in the source argument it receives.

The first thing we do is create an ffi::Event object. The events field is simply set to the bitmask we received, named interests, and epoll_data is set to the value we passed in the token argument.
The operation we want to perform on the epoll queue is adding interest in events on a new file descriptor. Therefore, we set the op argument to the ffi::EPOLL_CTL_ADD constant value.

Next up is the call to ffi::epoll_ctl. We pass in the file descriptor of the epoll instance first, then the op argument to indicate what kind of operation we want to perform. The last two arguments are the file descriptor we want the queue to track and the Event object we created to indicate what kind of events we're interested in getting notifications for. The last part of the function body is simply the error handling, which should be familiar by now.

The last part of poll.rs is the Drop implementation for Registry:

ch04/a-epoll/src/poll.rs

impl Drop for Registry {
    fn drop(&mut self) {
        let res = unsafe { ffi::close(self.raw_fd) };
        if res < 0 {
            let err = io::Error::last_os_error();
            eprintln!("ERROR: {err:?}");
        }
    }
}

The Drop implementation simply calls ffi::close on the epoll file descriptor. Adding a panic to drop is rarely a good idea since drop can be called within a panic already, which will cause the process to simply abort. mio logs errors if they occur in its Drop implementation but doesn't handle them in any other way. For our simple example, we'll just print the error so we can see if anything goes wrong, since we don't implement any kind of logging here.

The last part is the code for running our example, and that leads us to main.rs.

The main program

Let's see how it all works in practice. Make sure that delayserver is up and running, because we'll need it for these examples to work. The goal is to send a set of requests to delayserver with varying delays and then use epoll to wait for the responses. Therefore, we'll only use epoll to track read events in this example. The program doesn't do much more than that for now.
The first thing we do is make sure our main.rs file is set up correctly:

ch04/a-epoll/src/main.rs

use std::{io::{self, Read, Result, Write}, net::TcpStream};
use ffi::Event;
use poll::Poll;

mod ffi;
mod poll;

We import a few types from our own crate and from the standard library, which we'll need going forward, as well as declaring our two modules.

We'll be working directly with TcpStreams in this example, and that means that we'll have to format the HTTP requests we make to our delayserver ourselves. The server accepts GET requests, so we create a small helper function to format a valid HTTP GET request for us:

ch04/a-epoll/src/main.rs

fn get_req(path: &str) -> String {
    format!(
        "GET {path} HTTP/1.1\r\n\
         Host: localhost\r\n\
         Connection: close\r\n\
         \r\n"
    )
}

The preceding code simply takes a path as an input argument and formats a valid GET request with it. The path is the part of the URL that comes after the scheme and host. In our case, the path would be /2000/hello-world in the URL http://localhost:8080/2000/hello-world.

Next up is our main function. It's divided into two parts:

- Setup and sending requests
- Wait and handle incoming events
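To make the format concrete, here is a small sketch (not part of the chapter's code) that assumes the get_req helper above is in scope and prints the exact text that would be written to the socket; the empty line at the end is what terminates the HTTP header section:

fn main() {
    let request = get_req("/2000/hello-world");
    // Prints:
    // GET /2000/hello-world HTTP/1.1
    // Host: localhost
    // Connection: close
    //
    print!("{request}");
}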
The first part of the main function looks like this:

fn main() -> Result<()> {
    let mut poll = Poll::new()?;
    let n_events = 5;
    let mut streams = vec![];
    let addr = "localhost:8080";

    for i in 0..n_events {
        let delay = (n_events - i) * 1000;
        let url_path = format!("/{delay}/request-{i}");
        let request = get_req(&url_path);
        let mut stream = std::net::TcpStream::connect(addr)?;
        stream.set_nonblocking(true)?;
        stream.write_all(request.as_bytes())?;
        poll.registry()
            .register(&stream, i, ffi::EPOLLIN | ffi::EPOLLET)?;
        streams.push(stream);
    }

The first thing we do is create a new Poll instance. We also specify the number of events we want to create and handle in our example. The next step is creating a variable to hold a collection of TcpStream objects (a Vec<TcpStream>). We also store the address of our local delayserver in a variable called addr.

The next part is where we create a set of requests that we issue to our delayserver, which will eventually respond to us. For each request, we expect a read event to happen sometime later on the TcpStream we sent the request on.

The first thing we do in the loop is set the delay time in milliseconds. Setting the delay to (n_events - i) * 1000 simply gives the first request we make the longest delay, so we should expect the responses to arrive in the reverse order from which the requests were sent.
Note
For simplicity, we use the index the event will have in the streams collection as its ID. This ID will be the same as the i variable in our loop. For example, in the first iteration, i will be 0; it will also be the first stream to be pushed to our streams collection, so its index will be 0 as well. We therefore use 0 as the identification for this stream/event throughout, since retrieving the TcpStream associated with this event will be as simple as indexing to that location in the streams collection.

The next line, format!("/{delay}/request-{i}"), formats the path for our GET request. We set the delay as described previously, and we also set a message where we store the identifier for this event, i, so we can track this event on the server side as well.

Next up is creating a TcpStream. You've probably noticed that TcpStream in Rust doesn't accept a &str but an argument that implements the ToSocketAddrs trait. This trait is already implemented for &str, so that's why we can simply write it like we do in this example.

Before TcpStream::connect actually opens a socket, it will try to parse the address we pass in as an IP address. If that fails, it will parse it as a domain name and a port number, and then ask the operating system to do a DNS lookup for that address, which it can then use to actually connect to our server. So, you see, there is potentially quite a bit going on when we make a simple connection. You probably remember that we discussed some of the nuances of DNS lookups earlier and the fact that such a call could either be very fast, since the operating system already has the information stored in memory, or block while waiting for a response from the DNS server. This is a potential downside of using TcpStream from the standard library if you want full control over the entire process.

TcpStream in Rust and Nagle's algorithm
Here is a little fact for you (I originally intended to call it a "fun fact," but realized that's stretching the concept of "fun" just a little too far!). In Rust's TcpStream, and, more importantly, most APIs that aim to mimic the standard library's TcpStream, such as mio or Tokio, the stream is created with the TCP_NODELAY flag set to false. In practice, this means that Nagle's algorithm is used, which can cause latency outliers and possibly reduced throughput on some workloads. Nagle's algorithm aims to reduce network congestion by pooling small network packets together. If you look at non-blocking I/O implementations in other languages, many, if not most, disable this algorithm by default. This is not the case in most Rust implementations and is worth being aware of. You can disable it by simply calling TcpStream::set_nodelay(true). If you try to create your own async library or rely on Tokio/mio, and observe lower throughput than expected or latency problems, it's worth checking whether this flag is set to true or not.
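As a concrete illustration (not part of the chapter's example program), this is how you could disable Nagle's algorithm on a standard-library TcpStream; the address is just the local delayserver used in this chapter:

use std::net::TcpStream;

fn main() -> std::io::Result<()> {
    let stream = TcpStream::connect("localhost:8080")?;
    // set_nodelay(true) sets TCP_NODELAY on the socket, disabling Nagle's algorithm.
    stream.set_nodelay(true)?;
    println!("TCP_NODELAY enabled: {}", stream.nodelay()?);
    Ok(())
}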
To continue with the code, the next step is setting the TcpStream to non-blocking by calling TcpStream::set_nonblocking(true). After that, we write our request to the server before we register interest in read events by setting the EPOLLIN flag bit in the interests bitmask. For each iteration, we push the stream to the end of our streams collection.

The next part of the main function is handling incoming events. Let's take a look at the last part of our main function:

    let mut handled_events = 0;
    while handled_events < n_events {
        let mut events = Vec::with_capacity(10);
        poll.poll(&mut events, None)?;
        if events.is_empty() {
            println!("TIMEOUT (OR SPURIOUS EVENT NOTIFICATION)");
            continue;
        }
        handled_events += handle_events(&events, &mut streams)?;
    }
    println!("FINISHED");
    Ok(())
}

The first thing we do is create a variable called handled_events to track how many events we have handled. Next is our event loop. We loop as long as the number of handled events is less than the number of events we expect. Once all events are handled, we exit the loop.

Inside the loop, we create a Vec<Event> with the capacity to store 10 events. It's important that we create this using Vec::with_capacity, since the operating system will assume that we pass it memory that we've allocated. We could choose any number of events here and it would work just fine, but setting too low a number would limit how many events the operating system could notify us about on each wakeup.

Next is our blocking call to Poll::poll. As you know, this will actually tell the operating system to park our thread and wake us up when an event has occurred. If we're woken up but there are no events in the list, it's either a timeout or a spurious event (which could happen, so we need a way to check whether a timeout has actually elapsed, if that's important to us). If that's the case, we simply call Poll::poll once more.
If there are events to be handled, we pass these on to the handle_events function together with a mutable reference to our streams collection.

The last part of main simply writes FINISHED to the console to let us know that we exited main at that point.

The last bit of code in this chapter is the handle_events function. This function takes two arguments, a slice of Event structs and a mutable slice of TcpStream objects. Let's take a look at the code before we explain it:

fn handle_events(events: &[Event], streams: &mut [TcpStream]) -> Result<usize> {
    let mut handled_events = 0;
    for event in events {
        let index = event.token();
        let mut data = vec![0u8; 4096];
        loop {
            match streams[index].read(&mut data) {
                Ok(n) if n == 0 => {
                    handled_events += 1;
                    break;
                }
                Ok(n) => {
                    let txt = String::from_utf8_lossy(&data[..n]);
                    println!("RECEIVED: {:?}", event);
                    println!("{txt}\n------\n");
                }
                // Not ready to read in a non-blocking manner. This could
                // happen even if the event was reported as ready
                Err(e) if e.kind() == io::ErrorKind::WouldBlock => break,
                Err(e) => return Err(e),
            }
        }
    }
    Ok(handled_events)
}

The first thing we do is create a variable, handled_events, to track how many events we consider handled on each wakeup. The next step is looping through the events we received.
In the loop, we retrieve the token that identifies which TcpStream we received an event for. As we explained earlier in this example, this token is the same as the index of that particular stream in the streams collection, so we can simply use it to index into our streams collection and retrieve the right TcpStream.

Before we start reading data, we create a buffer with a size of 4,096 bytes (you can, of course, allocate a larger or smaller buffer for this if you want to). We create a loop since we might need to call read multiple times to be sure that we've actually drained the buffer. Remember how important it is to fully drain the buffer when using epoll in edge-triggered mode.

We match on the result of calling TcpStream::read since we want to take different actions based on the result:

- If we get Ok(n) and the value is 0, we've drained the buffer; we consider the event handled and break out of the loop.
- If we get Ok(n) with a value larger than 0, we read the data into a String and print it out with some formatting. We do not break out of the loop yet, since we have to call read until 0 is returned (or an error) to be sure that we've drained the buffers fully.
- If we get Err and the error is of the io::ErrorKind::WouldBlock type, we simply break out of the loop. We don't consider the event handled yet, since WouldBlock indicates that the data transfer is not complete, but there is no data ready right now.
- If we get any other error, we simply return that error and consider it a failure.

Note
There is one more error condition you'd normally want to cover, and that is io::ErrorKind::Interrupted. Reading from a stream could be interrupted by a signal from the operating system. This should be expected and probably not considered a failure. The way to handle this is the same as what we do when we get an error of the WouldBlock type (a sketch of this is shown a little further below).

If the read operation is successful, we return the number of events handled.

Be careful with using TcpStream::read_to_end
You should be careful with using TcpStream::read_to_end or any other function that fully drains the buffer for you when using non-blocking sockets. If you get an error of the io::WouldBlock type, it will be reported as an error even though you had several successful reads before you got that error. You have no way of knowing how much data you read successfully other than observing any changes to the &mut Vec you passed in.
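Here is a small sketch, not part of the chapter's final code, of a read loop that also covers io::ErrorKind::Interrupted as described in the note above. Treating it like WouldBlock follows the note; some implementations instead retry the read immediately, since an interrupted call transfers no data:

use std::io::{self, Read};
use std::net::TcpStream;

// Returns Ok(true) if the stream was fully drained (the event is handled),
// Ok(false) if we should wait for another notification, and Err on real errors.
fn drain(stream: &mut TcpStream) -> io::Result<bool> {
    let mut data = vec![0u8; 4096];
    loop {
        match stream.read(&mut data) {
            Ok(0) => return Ok(true),
            Ok(n) => println!("read {n} bytes"),
            Err(e) if e.kind() == io::ErrorKind::WouldBlock => return Ok(false),
            Err(e) if e.kind() == io::ErrorKind::Interrupted => return Ok(false),
            Err(e) => return Err(e),
        }
    }
}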
Now, if we run our program, we should get the following output:

RECEIVED: Event { events: 1, epoll_data: 4 }
HTTP/1.1 200 OK
content-length: 9
connection: close
content-type: text/plain; charset=utf-8
date: Wed, 04 Oct 2023 15:29:09 GMT

request-4
------

RECEIVED: Event { events: 1, epoll_data: 3 }
HTTP/1.1 200 OK
content-length: 9
connection: close
content-type: text/plain; charset=utf-8
date: Wed, 04 Oct 2023 15:29:10 GMT

request-3
------

RECEIVED: Event { events: 1, epoll_data: 2 }
HTTP/1.1 200 OK
content-length: 9
connection: close
content-type: text/plain; charset=utf-8
date: Wed, 04 Oct 2023 15:29:11 GMT

request-2
------

RECEIVED: Event { events: 1, epoll_data: 1 }
HTTP/1.1 200 OK
content-length: 9
connection: close
content-type: text/plain; charset=utf-8
date: Wed, 04 Oct 2023 15:29:12 GMT

request-1
------

RECEIVED: Event { events: 1, epoll_data: 0 }
HTTP/1.1 200 OK
content-length: 9
connection: close
content-type: text/plain; charset=utf-8
date: Wed, 04 Oct 2023 15:29:13 GMT

request-0
------

FINISHED

As you see, the responses are sent in reverse order. You can easily confirm this by looking at the output in the terminal running the delayserver instance. The output should look like this:

#1 - 5000ms: request-0
#2 - 4000ms: request-1
#3 - 3000ms: request-2
#4 - 2000ms: request-3
#5 - 1000ms: request-4

The ordering might be different sometimes, as the server receives the requests almost simultaneously and can choose to handle them in a slightly different order.

Say we track events on the stream with ID 4:

1. In the request loop in main, we assigned the ID 4 to the last stream we created.
2. Socket 4 sends a request to delayserver, setting a delay of 1,000 ms and a message of request-4 so we can identify it on the server side.
3. We register socket 4 with the event queue, making sure to set the epoll_data field to 4 so we can identify which stream the event occurred on.
4. delayserver receives that request and delays the response for 1,000 ms before it sends an HTTP/1.1 200 OK response back, together with the message we originally sent.
5. epoll_wait wakes up, notifying us that an event is ready. In the epoll_data field of the Event struct, we get back the same data that we passed in when registering the event. This tells us that it was an event on stream 4 that occurred.
6. We then read data from stream 4 and print it out.

In this example, we've kept things at a very low level, even though we used the standard library to handle the intricacies of establishing a connection. Even though you've actually made a raw HTTP request to your own local server, you've set up an epoll instance to track events on a TcpStream, and you've used epoll and syscalls to handle incoming events. That's no small feat. Congratulations!
Before we leave this example, I wanted to point out how few changes we need to make to have our example use mio as the event loop instead of the one we created. In the repository under ch04/b-epoll-mio, you'll see an example where we do the exact same thing using mio instead. It only requires importing a few types from mio instead of our own modules and making five minor changes to our code! Not only have you replicated what mio does, but you pretty much know how to use mio to create an event loop as well!

Summary

The concepts of epoll, kqueue, and IOCP are pretty simple at a high level, but the devil is in the details. It's just not that easy to understand them and get them working correctly. Even programmers who work on these things will often specialize in one platform (epoll/kqueue or Windows). It's rare that one person will know all the intricacies of all platforms, and you could probably write a whole book about this subject alone.

If we summarize what you've learned and got firsthand experience with in this chapter, the list is quite impressive:

- You learned a lot about how mio is designed, enabling you to go to that repository and know what to look for and how to get started on that code base much more easily than before reading this chapter
- You learned a lot about making syscalls on Linux
- You created an epoll instance, registered events with it, and handled those events
- You learned quite a bit about how epoll is designed and its API
- You learned about edge-triggering and level-triggering, which are extremely low-level, but useful, concepts to understand outside the context of epoll as well
- You made a raw HTTP request
- You saw how non-blocking sockets behave and how error codes reported by the operating system can be a way of communicating certain conditions that you're expected to handle
- You learned that not all I/O is equally "blocking" by looking at DNS resolution and file I/O

That's pretty good for a single chapter, I think! If you dive deeper into the topics we covered here, you'll soon realize that there are gotchas and rabbit holes everywhere, especially if you expand this example to abstract over epoll, kqueue, and IOCP. You'll probably end up reading Linus Torvalds' emails on how edge-triggered mode was supposed to work on pipes before you know it.
At least you now have a good foundation for further exploration. You can expand on our simple example and create a proper event loop that handles connecting, writing, timeouts, and scheduling; you can dive deeper into kqueue and IOCP by looking at how mio solves that problem; or you can be happy that you don't have to deal with it directly again and appreciate the effort that went into libraries such as mio, polling, and libuv.

By this point, we've gained a lot of knowledge about the basic building blocks of asynchronous programming, so it's time to start exploring how different programming languages create abstractions over asynchronous operations and use these building blocks to give us, as programmers, efficient, expressive, and productive ways to write our asynchronous programs. First off is one of my favorite examples, where we'll look into how fibers (or green threads) work by implementing them ourselves.

You've earned a break now. Yeah, go on, the next chapter can wait. Get a cup of tea or coffee and reset so you can start the next chapter with a fresh mind. I promise it will be both fun and interesting.
5
Creating Our Own Fibers

In this chapter, we take a deep dive into a very popular way of handling concurrency. There is no better way of getting a fundamental understanding of the subject than doing it yourself. Fortunately, even though the topic is a little complex, we only need around 200 lines of code to get a fully working example in the end.

What makes the topic complex is that it requires quite a bit of fundamental understanding of how CPUs, operating systems, and assembly work. This complexity is also what makes this topic so interesting. If you explore and work through this example in detail, you will be rewarded with an eye-opening understanding of topics you might only have heard about or only have a rudimentary understanding of. You will also get the chance to get to know a few aspects of the Rust language that you haven't seen before, expanding your knowledge of both Rust and programming in general.

We start off by introducing a little background knowledge that we need before we start writing code. Once we have that in place, we'll start with some small examples that will allow us to show and discuss the most technical and difficult parts of our example in detail so we can introduce the topics gradually. Lastly, we'll build on the knowledge we've gained and create our main example, which is a working example of fibers implemented in Rust. As a bonus, you'll get two expanded versions of the example in the repository to inspire you to go on and change, adapt, and build upon what we've created to make it your own.

I'll list the main topics here so you can refer to them later on:

- How to use the repository alongside the book
- Background information
- An example we can build upon
- The stack
- Implementing our own fibers
- Final thoughts
Note
In this chapter, we'll use the terms "fibers" and "green threads" to refer to this exact implementation of stackful coroutines. The term "threads" in this chapter, which is used in the code we write, will refer to the green threads/fibers we implement in our example, not OS threads.

Technical requirements

To run the examples, you will need a computer with a CPU using the x86-64 instruction set. Most popular desktop, server, and laptop CPUs out there today use this instruction set, as do most modern CPUs from Intel and AMD (which are most CPU models from these manufacturers produced in the last 10-15 years).

One caveat is that the modern M-series Macs use the ARM ISA (instruction set), which won't be compatible with the examples we write here. However, older Intel-based Macs do, so you should be able to use a Mac to follow along if you don't have the latest version.

If you don't have a computer using this instruction set available, you have a few options to install Rust and run the examples:

- Mac users on M-series chips can use Rosetta (which ships with newer macOS versions) and get the examples working with just four simple steps. You'll find the instructions in the repository under ch05/How-to-MacOS-M.md.
- Use a virtual machine such as UTM: https://mac.getutm.app/
- Rent a remote server running Linux on x86-64 (some providers even have a free tier). I have experience with Linode's offering (https://www.linode.com/), but there are many more options out there.

To follow along with the examples in the book, you also need a Unix-based operating system. The example code will work natively on any Linux and BSD operating system (such as Ubuntu or macOS) as long as it's running on an x86-64 CPU. If you're on Windows, there is a version of the example in the repository that works natively on Windows too, but to follow along with the book, my clear recommendation is to set up Windows Subsystem for Linux (WSL) (https://learn.microsoft.com/en-us/windows/wsl/install), install Rust, and follow along using Rust on WSL. I personally use VS Code as my editor, as it makes it very easy to switch between using a Linux version on WSL and Windows: simply press Ctrl + Shift + P and search for "Reopen folder in WSL".

How to use the repository alongside the book

The recommended way to read this chapter is to have the repository open alongside the book. In the repository, you'll find three different folders that correspond to the examples we go through in this chapter:

- ch05/a-stack-swap
- ch05/b-show-stack
- ch05/c-fibers

In addition, you will get two more examples that I refer to in the book but that should be explored in the repository:

- ch05/d-fibers-closure: This is an extended version of the first example that might inspire you to do more complex things yourself. The example tries to mimic the API used in the Rust standard library using std::thread::spawn.
- ch05/e-fibers-windows: This is a version of the example that we go through in this book that works on both Unix-based systems and Windows. There is a quite detailed explanation in the README of the changes we make for the example to work on Windows. I consider this recommended reading if you want to dive deeper into the topic, but it's not important for understanding the main concepts we go through in this chapter.

Background information

We are going to interfere with and control the CPU directly. This is not very portable since there are many kinds of CPUs out there. While the overall implementation will be the same, there is a small but important part of the implementation that will be very specific to the CPU architecture we're programming for. Another aspect that limits the portability of our code is that operating systems have different ABIs that we need to adhere to, and those same pieces of code will have to change based on the different ABIs. Let's explain exactly what we mean here before we go further so we know we're on the same page.

Instruction sets, hardware architectures, and ABIs

Okay, before we start, we need to know the differences between an application binary interface (ABI), a CPU architecture, and an instruction set architecture (ISA). We need this to write our own stack and make the CPU jump over to it. Fortunately, while this might sound complex, we only need to know a few specific things for our example to run. The information presented here is useful in many more circumstances than just our example, so it's worthwhile to cover it in some detail.

An ISA describes an abstract model of a CPU that defines how the CPU is controlled by the software it runs. We often simply refer to this as the instruction set, and it defines what instructions the CPU can execute, what registers programmers can use, how the hardware manages memory, etc. Examples of ISAs are x86-64, x86, and the ARM ISA (used in Mac M-series chips).

ISAs are broadly classified into two subgroups, complex instruction set computers (CISC) and reduced instruction set computers (RISC), based on their complexity. CISC architectures offer a lot of different instructions that the hardware must know how to execute, resulting in some instructions that are very specialized and rarely used by programs. RISC architectures accept fewer instructions
but require some operations to be handled by software that could be directly handled by the hardware in a CISC architecture. The x86-64 instruction set we'll focus on is an example of a CISC architecture.

To add a little complexity (you know, it's not fun if it's too easy), there are different names that refer to the same ISA. For example, the x86-64 instruction set is also referred to as the AMD64 instruction set and the Intel 64 instruction set, so no matter which one you encounter, just know that they refer to the same thing. In this book, we'll simply call it the x86-64 instruction set.

Tip
To find the architecture on your current system, run one of the following commands in your terminal:
- On Linux and macOS: arch or uname -m
- On Windows PowerShell: $env:PROCESSOR_ARCHITECTURE
- On Windows Command Prompt: echo %PROCESSOR_ARCHITECTURE%

The instruction set just defines how a program can interface with the CPU. The concrete implementation of an ISA can vary between different manufacturers, and a specific implementation is referred to as a CPU architecture, such as Intel Core processors. However, in practice, these terms are often used interchangeably since they all perform the same functions from a programmer's perspective and there is seldom a need to target a specific implementation of an ISA.

The ISA specifies the minimum set of instructions the CPU must be able to execute. Over time, there have been extensions to this instruction set, such as Streaming SIMD Extensions (SSE), that add more instructions and registers that programmers can take advantage of. For the examples in this chapter, we will target the x86-64 ISA, a popular architecture used in most desktop computers and servers today.

So, we know that a processor architecture presents an interface that programmers can use. Operating system implementors use this infrastructure to create operating systems. Operating systems such as Windows and Linux define an ABI that specifies a set of rules that the programmer has to adhere to for their programs to work correctly on that platform. Examples of operating system ABIs are the System V ABI (Linux) and Win64 (Windows). The ABI specifies how the operating system expects a stack to be set up, how you should call a function, how you create a file that will load and run as a program, the name of the function that will be called once the program has loaded, etc.

A very important part of the ABI that operating systems must specify is their calling convention. The calling convention defines how the stack is used and how functions are called. Let's illustrate this with an example of how Linux and Windows handle arguments to a function on x86-64; for example, a function with a signature such as fn foo(a: i64, b: i64).
The x86-64 ISA defines 16 general-purpose registers. These are registers the CPU provides for programmers to use for whatever they see fit. Note that programmers here includes the ones who write the operating system, and they can place additional restrictions on which registers you can use for what when you create a program to run on their operating system. In our specific example, Windows and Unix-based systems have different requirements for where to place the arguments to a function:

- Linux specifies that a function that takes two arguments should receive the first argument in the rdi register and the second one in the rsi register
- Windows requires that the first two arguments be passed in the rcx and rdx registers

This is just one of many ways in which a program that is written for one platform won't work on another. Usually, these details are the concern of compiler developers, and the compiler will handle the different calling conventions when you compile for a specific platform.

So, to sum it up, CPUs implement an instruction set. The instruction set defines what instructions the CPU can execute and the infrastructure it should provide to programmers (such as registers). An operating system uses this infrastructure in different ways, and it provides additional rules that a programmer must obey to run their program correctly on that platform.

Most of the time, the only programmers who need to care about these details are the ones who write operating systems or compilers. However, when we write low-level code ourselves, we need to know about the ISA and the OS ABI to have our code work correctly. Since we need to write this kind of code to implement our own fibers/green threads, we must potentially write different code for each OS ABI/ISA combination that exists. That means one for Windows/x86-64, one for Windows/ARM, one for macOS/x86-64, one for macOS/ARM (M-series), etc. As you understand, this is also one major contributor to the complexity of using fibers/green threads for handling concurrency. It has a lot of advantages once it's correctly implemented for an ISA/OS ABI combination, but it requires a lot of work to get it right. For the purpose of the examples in this book, we will only focus on one such combination: the System V ABI for x86-64.

Note!
In the accompanying repository, you will find a version of the main example for this chapter for Windows x86-64. The changes we have to make for it to work on Windows are explained in the README.

The System V ABI for x86-64

As mentioned earlier, this CPU architecture features a set of 16 general-purpose 64-bit registers, 16 SSE registers that are 128 bits wide, and 8 floating-point registers that are 80 bits wide:
Figure 5.1 - x86-64 CPU registers

There are architectures that build upon this base and extend it, such as the Intel Advanced Vector Extensions (AVX), which provide an additional 16 registers that are 256 bits wide. Let's take a look at a page from the System V ABI specification:
Figure 5.2 - Register usage

Figure 5.1 shows an overview of the general-purpose registers in the x86-64 architecture. Of special interest to us right now are the registers marked as callee saved. These are the registers we need to keep track of our context across function calls. They include the next instruction to run, the base pointer, the stack pointer, and so on. While the registers themselves are defined by the ISA, the rules on what is considered callee saved are defined by the System V ABI. We'll get to know this in more detail later.

Note
Windows has a slightly different convention. On Windows, the registers XMM6:XMM15 are also callee saved and must be saved and restored if our functions use them. The code we write in this first example runs fine on Windows since we don't really adhere to any ABI yet and just focus on how we'll instruct the CPU to do what we want.
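As a small aside before we move on, Rust actually lets us request a specific calling convention explicitly, which makes the Linux/Windows difference described earlier easy to see side by side. The following sketch is illustrative only and not part of the chapter's example; both ABI strings are stable on x86-64 targets:

// Always uses the System V convention: the first two integer arguments
// are passed in rdi and rsi, regardless of the target's default ABI.
extern "sysv64" fn add_sysv(a: i64, b: i64) -> i64 {
    a + b
}

// Always uses the Windows x64 convention: the first two integer arguments
// are passed in rcx and rdx.
extern "win64" fn add_win(a: i64, b: i64) -> i64 {
    a + b
}

fn main() {
    // The compiler emits the correct argument passing for each call site.
    println!("{} {}", add_sysv(1, 2), add_win(3, 4));
}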
If we want to issue a very specific set of commands to the CPU directly, we need to write small pieces of code in assembly. Fortunately, we only need to know some very basic assembly instructions for our first mission. Specifically, we need to know how to move values to and from registers:

mov rax, rsp

A quick introduction to Assembly language

First and foremost, Assembly language isn't particularly portable, since it's the lowest level of human-readable instructions we can write to the CPU, and the instructions we write in assembly will vary from architecture to architecture. Since we will only write assembly targeting the x86-64 architecture going forward, we only need to learn a few instructions for this particular architecture.

Before we go too deep into the specifics, you need to know that there are two popular dialects used in assembly: the AT&T dialect and the Intel dialect. The Intel dialect is the default when writing inline assembly in Rust, but we can specify that we want to use the AT&T dialect instead if we want to. Rust has its own take on how to do inline assembly, which at first glance looks foreign to anyone used to inline assembly in C. It's well thought through, though, and I'll spend a bit of time explaining it in more detail as we go through the code, so both readers with experience of C-style inline assembly and readers who have no experience should be able to follow along.

Note
We will use the Intel dialect in our examples.

Assembly has strong backward compatibility guarantees. That's why you will see the same register addressed in different ways. Let's look at the rax register as an example:

rax # 64-bit register (8 bytes)
eax # 32 low bits of the "rax" register
ax  # 16 low bits of the "rax" register
ah  # 8 high bits of the "ax" part of the "rax" register
al  # 8 low bits of the "ax" part of the "rax" register

As you can see, this is basically like watching the history of CPUs evolve in front of us. Since most CPUs today are 64-bit, we will use the 64-bit versions in our code.

The word size in assembly also has historical reasons. It stems from the time when CPUs had 16-bit data buses, so a word is 16 bits. This is relevant because you will see many instructions suffixed with q (quad word) or l (long word). So, a movq would mean a move of 4 * 16 bits, which is 64 bits. A plain mov will use the size of the register you target on most modern assemblers. This is the one you will see used most in both the AT&T and the Intel dialects when writing inline assembly, and it's the one we will use in our code.
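To make the width of each name concrete, here is a quick pure-Rust illustration (not assembly, and not part of the chapter's code) of which bits of a 64-bit value the narrower register names refer to:

fn main() {
    // Pretend this value is sitting in rax.
    let rax: u64 = 0x1122_3344_5566_7788;
    let eax = rax as u32;                // low 32 bits: 0x55667788
    let ax = rax as u16;                 // low 16 bits: 0x7788
    let al = rax as u8;                  // low 8 bits:  0x88
    let ah = ((rax >> 8) & 0xff) as u8;  // bits 8..16:  0x77
    println!("{eax:#x} {ax:#x} {al:#x} {ah:#x}");
}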
One more thing to note is that the stack alignment on x86-64 is 16 bytes. Just remember this for later.

An example we can build upon

This is a short example where we will create our own stack and make our CPU return out of its current execution context and over to the stack we just created. We will build on these concepts in the following chapters.

Setting up our project

First, let's start a new project by creating a folder named a-stack-swap. Enter the new folder and run the following:

cargo init

Tip
You can also navigate to the folder called ch05/a-stack-swap in the accompanying repository and see the whole example there.

In our main.rs, we start by importing the asm! macro:

ch05/a-stack-swap/src/main.rs

use core::arch::asm;

Let's set a small stack size of only 48 bytes here so that we can print the stack and look at it before we switch contexts after we get the first example to work:

const SSIZE: isize = 48;

Note
There seems to be an issue on macOS when using such a small stack. The minimum stack size for this code to run is 624 bytes. The code works on the Rust Playground, at https://play.rust-lang.org, if you want to follow this exact example (however, you'll need to wait roughly 30 seconds for it to time out due to our loop at the end).

Then let's add a struct that represents our CPU state. We'll only focus on the register that stores the stack pointer for now, since that is all we need:

#[derive(Debug, Default)]
#[repr(C)]
struct ThreadContext {
    rsp: u64,
}
In later examples, we will use all the registers marked as callee saved in the specification document I linked to. These are the registers described in the System V x86-64 ABI that we'll need to save our context, but right now, we only need one register to make the CPU jump over to our stack.

Note that the struct needs to be #[repr(C)] because of how we access the data in our assembly. Rust doesn't have a stable language ABI, so there is no way for us to be sure that rsp will be represented in memory as the first 8 bytes. C has a stable language ABI, and that's exactly what this attribute tells the compiler to use. Granted, our struct only has one field right now, but we will add more later.

For this very simple example, we will define a function that just prints out a message and then loops forever:

fn hello() -> ! {
    println!("I LOVE WAKING UP ON A NEW STACK!");
    loop {}
}

Next up is our inline assembly, where we switch over to our own stack:

unsafe fn gt_switch(new: *const ThreadContext) {
    asm!(
        "mov rsp, [{0} + 0x00]",
        "ret",
        in(reg) new,
    );
}

At first glance, you might think that there is nothing special about this piece of code, but let's stop and consider what happens here for a moment.

If we refer back to Figure 5.1, we'll see that rsp is the register that stores the stack pointer, which the CPU uses to figure out the current location on the stack. Now, what we actually want to do if we want the CPU to swap to a different stack is to set the stack pointer register (rsp) to the top of our new stack and set the instruction pointer (rip) on the CPU to point to the address of hello.

The instruction pointer, or program counter as it's sometimes called on other architectures, points to the next instruction to run. If we could manipulate it directly, the CPU would fetch the instruction pointed to by the rip register and execute the first instruction we wrote in our hello function. The CPU would then push/pop data on the new stack using the address pointed to by the stack pointer and simply leave our old stack as it was. Now, this is where it gets a little difficult. On the x86-64 instruction set, there is no way for us to manipulate rip directly, so we have to use a little trick.
The first thing we do is set up the new stack and write the address of the function we want to run at a 16-byte offset from the top of the stack (the ABI dictates a 16-byte stack alignment, so the top of our stack frame must start at a 16-byte offset). We'll see how to create a contiguous piece of memory a little later, but it's a rather straightforward process.

Next, we pass the address of the first byte in which we stored this address on our newly created stack to the rsp register (the address we set in new.rsp will point to an address located on our own stack, which in turn holds the address that leads to the hello function). Got it?

The ret keyword transfers program control to what would normally be the return address located on top of the stack frame it's currently in. Since we placed the address of hello on our new stack and set the rsp register to point to our new stack, the CPU will think rsp now points to the return address of the function it's currently running, but instead, it's pointing to a location on our new stack.

When the CPU executes the ret instruction, it will pop the first value off the stack (which is conveniently the address of our hello function) and place that address in the rip register for us. On the next cycle, the CPU will fetch the instructions located at that function pointer and start executing them. Since rsp now points to our new stack, it will use that stack going forward.

Note
If you feel a little confused right now, that's very understandable. These details are hard to understand and get right, and it takes time to get comfortable with how it works. As we'll see later in this chapter, there is a little more data that we need to save and restore (right now, we don't have a way to resume the stack we just swapped from), but the technical details of how the stack swap happens are the same as described previously.

Before we explain how we set up the new stack, we'll use this opportunity to go line by line and explain how the inline assembly macro works.

An introduction to Rust inline assembly macro

We'll use the body of our gt_switch function as a starting point and go through everything step by step. If you haven't used inline assembly before, this might look foreign, but we'll use an extended version of this example later to switch contexts, so we need to understand what's going on.

unsafe is a keyword that indicates that Rust cannot enforce the safety guarantees in the function we write. Since we are manipulating the CPU directly, this is most definitely unsafe. The function will also take a pointer to an instance of our ThreadContext from which we will only read one field:

unsafe fn gt_switch(new: *const ThreadContext)
The next line is the asm! macro in the Rust standard library. It will check our syntax and provide an error message if it encounters something that doesn't look like valid Intel (by default) assembly syntax:

asm!(

The first thing the macro takes as input is the assembly template:

"mov rsp, [{0} + 0x00]",

This is a simple instruction that moves the value stored at a 0x00 offset (that means no offset at all in hex) from the memory location at {0} to the rsp register. Since the rsp register usually stores a pointer to the most recently pushed value on the stack, we effectively push the address of hello on top of the current stack so that the CPU will return to that address instead of resuming where it left off in the previous stack frame.

Note
Note that we don't need to write [{0} + 0x00] when we don't want an offset from the memory location. Writing mov rsp, [{0}] would be perfectly fine. However, I chose to introduce how we do an offset here, as we'll need it later on when we want to access more fields in our ThreadContext struct.

Note that the Intel syntax is a little backward. You might be tempted to think mov a, b means "move what's at a to b", but the Intel dialect usually dictates that the destination register comes first and the source second. To make this confusing, this is the opposite of what's typically the case with the AT&T syntax, where reading it as "move a to b" is correct. This is one of the fundamental differences between the two dialects, and it's useful to be aware of.

You will not see {0} used like this in normal assembly. This is part of the assembly template and is a placeholder for the value passed as the first parameter to the macro. You'll notice that this closely matches how string templates are formatted in Rust using println! or the like. The parameters are numbered in ascending order starting from 0. We only have one input parameter here, which corresponds to {0}.

You don't really have to index your parameters like this; writing {} in the correct order would suffice (as you would do using the println! macro). However, using an index improves readability, and I would strongly recommend doing it that way.

The [] basically means "get what's at this memory location"; you can think of it as the same as dereferencing a pointer.
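Here is a small, self-contained sketch (x86-64 only, and not part of the chapter's example) of the same offset addressing applied to a struct with two fields. With #[repr(C)], the second u64 field lives 8 bytes after the first, so [{ptr} + 0x08] loads it; this is exactly the pattern we'll use later when ThreadContext grows more fields:

use core::arch::asm;

#[repr(C)]
struct Two {
    a: u64,
    b: u64,
}

fn main() {
    let two = Two { a: 1, b: 42 };
    let ptr: *const Two = &two;
    let b_loaded: u64;
    unsafe {
        asm!(
            // Load the u64 stored 8 bytes past the address in {ptr} (the field `b`).
            "mov {out}, [{ptr} + 0x08]",
            ptr = in(reg) ptr,
            out = out(reg) b_loaded,
        );
    }
    println!("{b_loaded}"); // prints 42
}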
Let's try to sum up what we do here in words: "Move what's at the + 0x00 offset from the memory location that {compiler_chosen_general_purpose_register} points to, to the rsp register."

The next line is the ret keyword, which instructs the CPU to pop a memory location off the stack and then make an unconditional jump to that location. In effect, we have hijacked our CPU and made it return to our stack.

Next up, the first non-assembly argument to the asm! macro is our input parameter:

in(reg) new,

When we write in(reg), we let the compiler decide on a general-purpose register to store the value of new. out(reg) means that the register is an output, so if we write out(reg) new, we need new to be mut so we can write a value to it. You'll also find other versions such as inout and lateout.

Options

The last thing we need to introduce to get a minimal understanding of Rust's inline assembly for now is the options keyword. After the input and output parameters, you'll often see something like options(att_syntax), which specifies that the assembly is written with the AT&T syntax instead of the Intel syntax. Other options include pure, nostack, and several others. I'll refer you to the documentation to read about them, since they're explained in detail there:

https://doc.rust-lang.org/nightly/reference/inline-assembly.html#options

Inline assembly is quite complex, so we'll take this step by step and introduce more details on how it works along the way through our examples.

Running our example

The last bit we need is the main function to run our example. I'll present the whole function and we'll walk through it step by step:

fn main() {
    let mut ctx = ThreadContext::default();
    let mut stack = vec![0_u8; SSIZE as usize];

    unsafe {
        let stack_bottom = stack.as_mut_ptr().offset(SSIZE);
        let sb_aligned = (stack_bottom as usize & !15) as *mut u8;
        std::ptr::write(sb_aligned.offset(-16) as *mut u64, hello as u64);
        ctx.rsp = sb_aligned.offset(-16) as u64;
        gt_switch(&mut ctx);
    }
}

So, in this function, we're actually creating our new stack. hello is a pointer already (a function pointer), so we can cast it directly to a u64, since all pointers on 64-bit systems are, well, 64 bits. Then, we write this pointer to our new stack.

Note
We'll talk more about the stack in the next section, but one thing we need to know now is that the stack grows downwards. If our 48-byte stack starts at index 0 and ends at index 47, index 32 will be the first index of a 16-byte offset from the start/base of our stack. Make note that we write the pointer at an offset of 16 bytes from the base of our stack.

What does the line let sb_aligned = (stack_bottom as usize & !15) as *mut u8; do? When we ask for memory like we do when creating a Vec<u8>, there is no guarantee that the memory we get is 16-byte aligned. This line of code essentially rounds our memory address down to the nearest 16-byte-aligned address (a small demonstration of this rounding follows at the end of this section). If it's already 16-byte aligned, it does nothing. This way, we know that we end up at a 16-byte-aligned address if we simply subtract 16 from the base of our stack.

We cast the pointer to that location on our stack to a pointer to a u64 instead of a pointer to a u8 before we write the address of hello to it. We want to write to positions 32, 33, 34, 35, 36, 37, 38, and 39, which is the 8-byte space we need to store our u64. If we don't do this cast, we try to write a u64 only to position 32, which is not what we want.

When we run the example by writing cargo run in our terminal, we get:

Finished dev [unoptimized + debuginfo] target(s) in 0.58s
Running `target\debug\a-stack-swap`
I LOVE WAKING UP ON A NEW STACK!

Tip
Since we end the program in an endless loop, you'll have to exit by pressing Ctrl + C.

OK, so what happened? We didn't call the function hello at any point, but it still executed. What happened is that we actually made the CPU jump over to our own stack, and since it thinks it is returning from a function, it will read the address of hello and start executing the instructions it points to. We have taken the first step toward implementing a context switch.
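As promised, here is a quick illustration of the rounding step (not part of the example's code): & !15 clears the low four bits of an address, rounding it down to the nearest multiple of 16. The sample addresses below are arbitrary:

fn main() {
    for addr in [0x1000usize, 0x1001, 0x100f, 0x1010, 0x1017] {
        // !15 is ...11110000 in binary, so the AND clears the low 4 bits.
        println!("{addr:#x} -> {:#x}", addr & !15);
    }
}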
In the next sections, we will talk about the stack in a bit more detail before we implement our fibers. It will be easier now that we have covered so much of the basics.

The stack

A stack is nothing more than a piece of contiguous memory. This is important to know. A computer only has memory; it doesn't have a special stack memory and a heap memory; it's all part of the same memory. The difference is how this memory is accessed and used. The stack supports simple push/pop instructions on a contiguous part of memory, and that's what makes it fast to use. The heap memory is allocated by a memory allocator on demand and can be scattered around in different locations.

We'll not go through the differences between the stack and the heap here, since there are numerous articles explaining them in detail, including a chapter in The Rust Programming Language at https://doc.rust-lang.org/stable/book/ch04-01-what-is-ownership.html#the-stack-and-the-heap.

What does the stack look like?

Let's start with a simplified view of the stack. A 64-bit CPU will read 8 bytes at a time. Even though the natural way for us to see a stack is as a long line of u8, as shown in Figure 5.3, the CPU will treat it more like a long line of u64, since it won't be able to read less than 8 bytes when it does a load or a store.

Figure 5.3 - The stack
Creating Our Own Fibers 110 When we pass a pointer, we need to make sure we pass in a pointer to either address 0016, 0008, or 0000 in the example. The stack grows downwards, so we start at the top and work our way down. When we set the stack pointer in a 16-byte aligned stack, we need to make sure to put our stack pointer to an address that is a multiple of 16. In the example, the only address that satisfies this requirement is 0008 (remember the stack starts on the top). If we add the following lines of code to our example in the last chapter just before we do the switch in our main function, we can effectively print out our stack and have a look at it: ch05/b-show-stack for i in 0..SSIZE { println!("mem: {}, val: {}", sb_aligned. offset(-i as isize) as usize, *sb_aligned. offset(-i as isize)) } The output we get is as follows: mem: 2643866716720, val: 0 mem: 2643866716719, val: 0 mem: 2643866716718, val: 0 mem: 2643866716717, val: 0 mem: 2643866716716, val: 0 mem: 2643866716715, val: 0 mem: 2643866716714, val: 0 mem: 2643866716713, val: 0 mem: 2643866716712, val: 0 mem: 2643866716711, val: 0 mem: 2643866716710, val: 0 mem: 2643866716709, val: 127 mem: 2643866716708, val: 247 mem: 2643866716707, val: 172 mem: 2643866716706, val: 15 mem: 2643866716705, val: 29 mem: 2643866716704, val: 240 mem: 2643866716703, val: 0 mem: 2643866716702, val: 0 mem: 2643866716701, val: 0 mem: 2643866716700, val: 0 mem: 2643866716699, val: 0...
The stack 111 mem: 2643866716675, val: 0 mem: 2643866716674, val: 0 mem: 2643866716673, val: 0 I LOVE WAKING UP ON A NEW STACK! I've printed out the memory addresses as u64 here, so it's easier to parse if you're not very familiar with hex. The first thing to note is that this is just a contiguous piece of memory, starting at address 2643866716673 and ending at 2643866716720. The addresses 2643866716704 to 2643866716712 are of special interest to us. The first address is the address of our stack pointer, the value we write to the rsp register of the CPU. The range represents the values we wrote to the stack before we made the switch. Note The actual addresses (and the exact byte values at them) will be different every time you run the program. In other words, the values 240, 205, 252, 56, 67, 86, 0, 0 (taken from a different run, so they won't match the output above exactly) represent the pointer to our hello() function written as u8 values. Endianness An interesting side note here is that the order in which the CPU writes a u64 as a set of 8 u8 bytes depends on its endianness. In other words, a CPU can write our pointer address as 240, 205, 252, 56, 67, 86, 0, 0 if it's little-endian or 0, 0, 86, 67, 56, 252, 205, 240 if it's big-endian. Think of it like how Hebrew, Arabic, and Persian languages read and write from right to left, while Latin, Greek, and Indic languages read and write from left to right. It doesn't really matter as long as you know it in advance, and the results will be the same. The x86-64 architecture uses a little-endian format, so if you try to parse the data manually, you'll have to bear this in mind. As we write more complex functions, our extremely small 48-byte stack will soon run out of space. You see, as we run the functions we write in Rust, the CPU will now push and pop values on our new stack to execute our program, and it's left to the programmer to make sure they don't overflow the stack. This brings us to our next topic: stack sizes. Stack sizes We touched upon this topic earlier in Chapter 2, but now that we've created our own stack and made our CPU jump over to it, you might get a better sense of the issue. One of the advantages of creating our own green threads is that we can freely choose how much space we reserve for each stack.
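To put that point in perspective, the standard library lets you request a non-default stack size for an ordinary OS thread as well. The following is just a small comparison sketch using std::thread::Builder; it has nothing to do with our fiber implementation:

use std::thread;

fn main() {
    // Ask the OS for a thread with a 32 KiB stack instead of the platform
    // default (typically several megabytes).
    let handle = thread::Builder::new()
        .stack_size(32 * 1024)
        .spawn(|| println!("running on a small OS thread stack"))
        .unwrap();
    handle.join().unwrap();
}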
Creating Our Own Fibers 112 When you start a process in most modern operating systems, the standard stack size is normally 8 MB, but it can be configured differently. This is enough for most programs, but it's up to the programmer to make sure we don't use more than we have. This is the cause of the dreaded stack overflow that most of us have experienced. However, when we can control the stacks ourselves, we can choose the size we want. 8 MB for each task is way more than we need when running simple functions in a web server, for example, so by reducing the stack size, we can have millions of fibers/green threads running on a machine. We run out of memory a lot sooner using stacks provided by the operating system. Anyway, we need to consider how to handle the stack size, and most production systems such as Boost.Coroutine or the one you find in Go will use either segmented stacks or growable stacks. We will make this simple for ourselves and use a fixed stack size going forward. Implementing our own fibers Before we start, I want to make sure you understand that the code we write is quite unsafe and is not a "best practice" when writing Rust. I want to try to make this as safe as possible without introducing a lot of unnecessary complexity, but there is no way to avoid the fact that there will be a lot of unsafe code in this example. We will also prioritize focusing on how this works and explain it as simply as possible, which will be enough of a challenge in and of itself, so the focus on best practices and safety will have to take the back seat on this one. Let's start off by creating a whole new project called c-fibers and removing the code in main.rs so we start with a blank sheet. Note You will also find this example in the repository under the ch05/c-fibers folder. This example, as well as ch05/d-fibers-closure and ch05/e-fibers-windows, needs to be compiled using the nightly compiler since we use an unstable feature. You can do this in one of two ways: Override the default toolchain for the entire directory you're in by writing rustup override set nightly (I personally prefer this option). Tell cargo to use the nightly toolchain every time you compile or run the program using cargo +nightly run. We'll create a simple runtime with a very simple scheduler. Our fibers will save/restore their state so they can be stopped and resumed at any point during execution. Each fiber will represent a task that we want to progress concurrently, and we simply create a new fiber for each task we want to run.
Implementing our own fibers 113 We start off the example by enabling a specific feature we need, importing the asm macro, and defining a few constants: ch05/c-fibers/main. rs #![feature(naked_functions)] use std::arch::asm; const DEFAULT_STACK_SIZE: usize = 1024 * 1024 * 2; const MAX_THREADS: usize = 4; static mut RUNTIME: usize = 0; The feature we want to enable is called the naked_functions feature. Let's explain what a naked function is right away. Naked functions If you remember when we talked about the operating system ABI and calling conventions earlier, you probably remember that each architecture and OS have different requirements. This is especially important when creating new stack frames, which is what happens when you call a function. So, the compiler knows about what each architecture/OS requires and adjusts layout, and parameter placement on the stack and saves/restores certain registers to make sure we satisfy the ABI on the platform we're on. This happens both when we enter and exit a function and is often called a function prologue and epilogue. In Rust, we can enable this feature and mark a function as #[naked]. A naked function tells the compiler that we don't want it to create a function prologue and epilogue and that we want to take care of this ourselves. Since we do the trick where we return over to a new stack and want to resume the old one at a later point we don't want the compiler to think it manages the stack layout at these points. It worked in our first example since we never switched back to the original stack, but it won't work going forward. Our DEFAULT_STACK_SIZE is set to 2 MB, which is more than enough for our use. We also set MAX_THREADS to 4 since we don't need more for our example. The last static constant, RUNTIME, is a pointer to our runtime (yeah, I know, it's not pretty with a mutable global variable, but it's making it easier for us to focus on the important parts of the example later on). The next thing we do is set up some data structures to represent the data we'll be working with: pub struct Runtime { threads: Vec<Thread>, current: usize, }
Creating Our Own Fibers 114 #[derive(PartialEq, Eq, Debug)] enum State { Available, Running, Ready, } struct Thread { stack: Vec<u8>, ctx: ThreadContext, state: State, } #[derive(Debug, Default)] #[repr(C)] struct ThreadContext { rsp: u64, r15: u64, r14: u64, r13: u64, r12: u64, rbx: u64, rbp: u64, } Runtime is going to be our main entry point. We are basically going to create a very small runtime with a very simple scheduler and switch between our threads. The runtime holds an array of Thread structs and a current field to indicate which thread we are currently running. Thread holds data for a thread. The stack is similar to what we saw in our first example in earlier chapters. The ctx field is a context representing the data our CPU needs to resume where it left off on a stack, and the state field holds our thread state. State is an enum representing the states our threads can be in: Available means the thread is available and ready to be assigned a task if needed Running means the thread is running Ready means the thread is ready to move forward and resume execution ThreadContext holds data for the registers that the CPU needs to resume execution on a stack.
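Since the exact layout of ThreadContext matters once we start writing assembly against it, here is a small, standalone sanity check (not part of the example's code files) that prints the field offsets. It assumes a toolchain where std::mem::offset_of! is available (it was stabilized in Rust 1.77):

#[derive(Debug, Default)]
#[repr(C)]
struct ThreadContext {
    rsp: u64,
    r15: u64,
    r14: u64,
    r13: u64,
    r12: u64,
    rbx: u64,
    rbp: u64,
}

fn main() {
    use std::mem::offset_of;
    // With #[repr(C)], each u64 field sits 8 bytes after the previous one,
    // matching the 0x00, 0x08, ..., 0x30 offsets used in the assembly later.
    println!("rsp: 0x{:02x}", offset_of!(ThreadContext, rsp));
    println!("r15: 0x{:02x}", offset_of!(ThreadContext, r15));
    println!("rbp: 0x{:02x}", offset_of!(ThreadContext, rbp));
}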
Implementing our own fibers 115 Note The registers we save in our ThreadContext struct are the registers that are marked as callee saved in Figure 5.1. We need to save these since the ABI states that the callee (which will be our switch function from the perspective of the OS) needs to restore them before the caller is resumed. Next up is how we initialize the data for a newly created thread: impl Thread { fn new() -> Self { Thread { stack: vec![0_u8; DEFAULT_STACK_SIZE], ctx: ThreadContext::default(), state: State::Available, } } } This is pretty easy. A new thread starts in the Available state, indicating it is ready to be assigned a task. One thing I want to point out is that we allocate our stack here. That is not needed and is not an optimal use of our resources, since we allocate memory for threads we might not even need instead of allocating on first use. However, this lowers the complexity in the parts of our code that have a more important focus than allocating memory for our stack. Note Once a stack is allocated it must not move! No push() on the vector or any other methods that might trigger a reallocation. If the stack is reallocated, any pointers that we hold to it are invalidated. It's worth mentioning that Vec<T> has a method called into_boxed_slice(), which returns an owned slice, Box<[T]>. Slices can't grow, so if we store that instead, we can avoid the reallocation problem. There are several other ways to make this safer, but we'll not focus on those in this example. Implementing the runtime The first thing we need to do is to initialize a new runtime to a base state. The next code segments all belong to the impl Runtime block, and I'll make sure to let you know when the block ends since it can be hard to spot the closing bracket when we divide it up as much as we do here.
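Before we dive into the Runtime implementation, one quick aside on the boxed-slice idea from the note above. This is only a sketch with a hypothetical FixedStack type; the example in this chapter keeps using Vec<u8>:

const DEFAULT_STACK_SIZE: usize = 1024 * 1024 * 2;

struct FixedStack {
    // A Box<[u8]> can never grow, so the allocation can never be moved by a
    // later push(), and any raw pointers into it stay valid.
    buf: Box<[u8]>,
}

impl FixedStack {
    fn new() -> Self {
        Self {
            buf: vec![0_u8; DEFAULT_STACK_SIZE].into_boxed_slice(),
        }
    }
}

fn main() {
    let stack = FixedStack::new();
    println!("allocated a fixed {}-byte stack", stack.buf.len());
}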
Creating Our Own Fibers 116 The first thing we do is to implement a new function on our Runtime struct: impl Runtime { pub fn new() -> Self { let base_thread = Thread { stack: vec![0_u8; DEFAULT_STACK_SIZE], ctx: ThreadContext::default(), state: State::Running, }; let mut threads = vec![base_thread]; let mut available_threads: Vec<Thread> = (1..MAX_THREADS).map(|_| Thread::new()).collect(); threads.append(&mut available_threads); Runtime { threads, current: 0, } } When we instantiate our Runtime, we set up a base thread. This thread will be set to the Running state and will make sure we keep the runtime running until all tasks are finished. Then, we instantiate the rest of the threads and set the current thread (the base thread) to 0. The next thing we do is admittedly a little bit hacky since we do something that's usually a no-go in Rust. As I mentioned when we went through the constants, we want to access our runtime struct from anywhere in our code so that we can call yield on it at any point in our code. There are ways to do this safely, but the topic at hand is already complex, so even though we're juggling with knives here, I will do everything I can to keep everything that's not the main focal point of this example as simple as it can be. After we call init on the Runtime, we have to make sure we don't do anything that can invalidate the pointer we take to self once it's initialized. pub fn init(&self) { unsafe { let r_ptr: *const Runtime = self; RUNTIME = r_ptr as usize; } }
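As a quick illustration of the "ways to do this safely" mentioned above, a thread-local could own the runtime instead of a static mut pointer. This is only a sketch with a stubbed-out Runtime type; our example sticks with the raw pointer to keep the focus on the scheduling logic:

use std::cell::RefCell;

// Stub standing in for the real Runtime in this sketch.
struct Runtime {
    current: usize,
}

thread_local! {
    // Owned by the thread-local itself, so there is no raw pointer that can
    // dangle if the Runtime value moves or is dropped.
    static RUNTIME: RefCell<Option<Runtime>> = RefCell::new(None);
}

fn main() {
    RUNTIME.with(|rt| *rt.borrow_mut() = Some(Runtime { current: 0 }));
    RUNTIME.with(|rt| {
        if let Some(rt) = rt.borrow_mut().as_mut() {
            rt.current += 1;
            println!("current thread index: {}", rt.current);
        }
    });
}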
Implementing our own fibers 117 This is where we start running our runtime. It will continually call t_yield() until it returns false, which means that there is no more work to do and we can exit the process: pub fn run(&mut self)-> ! { while self. t_yield() {} std::process::exit(0); } Note yield is a reserved word in Rust, so we can't name our function that. If that was not the case, it would be my preferred name for it over the slightly more cryptic t_yield. This is the return function that we call when a thread is finished. return is another reserved keyword in Rust, so we name this t_return(). Make a note that the user of our threads does not call this; we set up our stack so this is called when the task is done: fn t_return(&mut self) { if self. current != 0 { self. threads[self. current]. state = State::Available; self. t_yield(); } } If the calling thread is the base_thread, we won't do anything. Our runtime will call t_yield for us on the base thread. If it's called from a spawned thread, we know it's finished since all threads will have a guard function on top of their stack (which we'll show further down), and the only place where this function is called is on our guard function. We set its state to Available, letting the runtime know it's ready to be assigned a new task, and then immediately call t_yield, which will schedule a new thread to be run. So, finally, we get to the heart of our runtime: the t_yield function. The first part of this function is our scheduler. We simply go through all the threads and see if any are in the Ready state, which indicates that it has a task it is ready to make progress. This could be a database call that has returned in a real-world application. If no thread is Ready, we're all done. This is an extremely simple scheduler using only a round-robin algorithm. A real scheduler might have a much more sophisticated way of deciding what task to run next. If we find a thread that's ready to be run, we change the state of the current thread from Running to Ready.
Creating Our Own Fibers 118 Let's present the function before we go on to explain the last part of it: #[inline(never)] fn t_yield(&mut self) -> bool { let mut pos = self.current; while self.threads[pos].state != State::Ready { pos += 1; if pos == self.threads.len() { pos = 0; } if pos == self.current { return false; } } if self.threads[self.current].state != State::Available { self.threads[self.current].state = State::Ready; } self.threads[pos].state = State::Running; let old_pos = self.current; self.current = pos; unsafe { let old: *mut ThreadContext = &mut self.threads[old_pos].ctx; let new: *const ThreadContext = &self.threads[pos].ctx; asm!("call switch", in("rdi") old, in("rsi") new, clobber_abi("C")); } self.threads.len() > 0 } The next thing we do is to call the function switch, which will save the current context (the old context) and load the new context into the CPU. The new context is either a new task or all the information the CPU needs to resume work on an existing task. Our switch function, which we will cover a little further down, takes two arguments and is marked as #[naked]. Naked functions are not like normal functions. They don't accept formal arguments, for example, so we can't simply call it in Rust as a normal function like switch(old, new). You see, usually, when we call a function with two arguments, the compiler will place each argument in a register described by the calling convention for the platform. However, when we call a #[naked] function, we need to take care of this ourselves. Therefore, we pass in the address to our old and new ThreadContext using assembly. rdi is the register for the first argument in the System V ABI calling convention and rsi is the register used for the second argument. The #[inline(never)] attribute prevents the compiler from simply substituting a call to our function with a copy of the function content wherever it's called (this is what inlining means). This is
Implementing our own fibers 119 almost never a problem on debug builds, but in this case, our program will fail if the compiler inlines this function in a release build. The issue manifests itself by the runtime exiting before all the tasks are finished. Since we store Runtime as a static usize that we then cast as a *mut pointer (which is almost guaranteed to cause UB), it's most likely caused by the compiler making the wrong assumptions when this function is inlined and called by casting and dereferencing RUNTIME in one of the helper methods that will be outlined. Just make a note that this is probably avoidable if we change our design; it's not something worth dwelling on for too long in this specific case. More inline assembly We need to explain the new concepts we introduced here. The assembly calls the function switch (the function is tagged with #[no_mangle] so we can call it by name). The in("rdi") old and in("rsi") new arguments place the values of old and new in the rdi and rsi registers, respectively. The System V ABI for x86-64 states that the rdi register holds the first argument to a function and rsi holds the second argument. The clobber_abi("C") argument tells the compiler that it may not assume that any general-purpose registers are preserved across the asm! block. The compiler will emit instructions to push the registers it uses to the stack and restore them when resuming after the asm! block. If you take one more look at the list in Figure 5.1, we already know that we need to take special care with registers that are marked as callee saved. When calling a normal function, the compiler will insert code* to save/restore all the non-callee-saved, or caller saved, registers before calling a function so it can resume with the correct state when the function returns. Since we marked the function we're calling as #[naked], we explicitly told the compiler to not insert this code, so the safest thing is to make sure the compiler doesn't assume that it can rely on any register being untouched when it resumes after the call we make in our asm! block. *In some instances, the compiler will know that a register is untouched by the function call since it controls the register usage in both the caller and the callee, and it will not emit any special instructions to save/restore registers it knows will be untouched when the function returns The self.threads.len() > 0 line at the end is just a way for us to prevent the compiler from optimizing our code away. This happens to me on Windows but not on Linux, and it is a common problem when running benchmarks, for example. There are other ways of preventing the compiler from optimizing this code, but I chose the simplest way I could find. As long as it's commented, it should be OK to do. The code never reaches this point anyway. Next up is our spawn function. I'll present the function first and guide you through it after: pub fn spawn(&mut self, f: fn()) { let available = self .threads .iter_mut() .find(|t| t.state == State::Available) .expect("no available thread.");
Creating Our Own Fibers 120 let size = available. stack. len(); unsafe { let s_ptr = available. stack. as_mut_ptr(). offset(size as isize); let s_ptr = (s_ptr as usize & !15) as *mut u8; std::ptr::write(s_ptr. offset(-16) as *mut u64, guard as u64); std::ptr::write(s_ptr. offset(-24) as *mut u64, skip as u64); std::ptr::write(s_ptr. offset(-32) as *mut u64, f as u64); available. ctx. rsp = s_ptr. offset(-32) as u64; } available. state = State::Ready; } } // We close the `impl Runtime` block here Note I promised to point out where we close the impl Runtime block, and we do that after the spawn function. The upcoming functions are “free” functions that don't belong to a struct. While I think t_yield is the logically interesting function in this example, I think spawn is the most interesting one technically. The first thing to note is that the function takes one argument: f: fn(). This is simply a function pointer to the function we take as an argument. This function is the task we want to run concurrently with other tasks. If this was a library, this is the function that users actually pass to us and want our runtime to handle concurrently. In this example, we take a simple function as an argument, but if we modify the code slightly we can also accept a closure. Tip In example ch05/d-fibers-closure, you can see a slightly modified example that accepts a closure instead, making it more flexible than the one we walk through here. I would really encourage you to check that one out once you've finished this example. The rest of the function is where we set up our stack as we discussed in the previous chapter and make sure our stack looks like the one specified in the System V ABI stack layout.
Implementing our own fibers 121 When we spawn a new fiber (or userland thread), we first check if there are any available userland threads (threads in the Available state). If we run out of threads, we panic in this scenario, but there are several (better) ways to handle that. We'll keep things simple for now. When we find an available thread, we get the stack length and a pointer to our u8 byte array. In the next segment, we have to use some unsafe functions. We'll explain the functions we refer to here later, but this is where we set them up in our new stack so that they're called in the right order for our runtime to work. First, we make sure that the memory segment we'll use is 16-byte-aligned. Then, we write the address to our guard function that will be called when the task we provide finishes and the function returns. Second, we'll write the address to a skip function, which is there just to handle the gap when we return from f, so that guard will get called on a 16-byte boundary. The next value we write to the stack is the address to f. Why do we need the skip function? Remember how we explained how the stack works? We want the f function to be the first to run, so we set the base pointer to f and make sure it's 16-byte aligned. We then push the address to the skip function and lastly the guard function. Since skip is simply one instruction, ret, doing this makes sure that our call to guard is 16-byte aligned so that we adhere to the ABI requirements. After we've written our function pointers to the stack, we set the value of rsp, which is the stack pointer, to the address of our provided function, so we start executing that first when we are scheduled to run. Lastly, we set the state to Ready, which means we have work to do and that we are ready to do it. Remember, it's up to our scheduler to actually start up this thread. We're now finished implementing our Runtime. If you got all this, you basically understand how fibers/green threads work. However, there are still a few details needed to make it all work. Guard, skip, and switch functions There are a few functions we've referred to that are really important for our Runtime to actually work. Fortunately, all but one of them are extremely simple to understand. We'll start with the guard function: fn guard() { unsafe { let rt_ptr = RUNTIME as *mut Runtime; (*rt_ptr).t_return(); }; }
Creating Our Own Fibers 122 The guard function is called when the function that we passed in, f, has returned. When f returns, it means our task is finished, so we de-reference our Runtime and call t_return(). We could have made a function that does some additional work when a thread is finished, but right now, our t_return() function does all we need. It marks our thread as Available (if it's not our base thread) and yields so we can resume work on a different thread. Next is our skip function: #[naked] unsafe extern "C" fn skip() { asm!("ret", options(noreturn)) } There is not much happening in the skip function. We use the #[naked] attribute so that this function essentially compiles down to just ret instruction. ret will just pop off the next value from the stack and jump to whatever instructions that address points to. In our case, this is the guard function. Next up is a small helper function named yield_thread : pub fn yield_thread() { unsafe { let rt_ptr = RUNTIME as *mut Runtime; (*rt_ptr). t_yield(); }; } This helper function lets us call t_yield on our Runtime from an arbitrary place in our code without needing any references to it. This function is very unsafe, and it's one of the places where we make big shortcuts to make our example slightly simpler to understand. If we call this and our Runtime is not initialized yet or the runtime is dropped, it will result in undefined behavior. However, making this safer is not a priority for us just to get our example up and running. We are very close to the finish line; just one more function to go. The last bit we need is our switch function, and you already know the most important parts of it already. Let's see how it looks and explain how it differs from our first stack swap function: #[naked] #[no_mangle] unsafe extern "C" fn switch() { asm!( "mov [rdi + 0x00], rsp", "mov [rdi + 0x08], r15", "mov [rdi + 0x10], r14", "mov [rdi + 0x18], r13", "mov [rdi + 0x20], r12", "mov [rdi + 0x28], rbx", "mov [rdi + 0x30], rbp",
Implementing our own fibers 123 "mov rsp, [rsi + 0x00]", "mov r15, [rsi + 0x08]", "mov r14, [rsi + 0x10]", "mov r13, [rsi + 0x18]", "mov r12, [rsi + 0x20]", "mov rbx, [rsi + 0x28]", "mov rbp, [rsi + 0x30]", "ret", options(noreturn) ); } So, this is our full stack switch function. You probably remember from our first example that this is just a bit more elaborate. We first store the values of all the registers we need to keep into the old context, and then load the register values we saved the last time we suspended execution on the new thread. This is essentially all we need to do to save and resume the execution. Here we see the #[naked] attribute used again. Usually, every function has a prologue and an epilogue, and we don't want that here since this is all assembly and we want to handle everything ourselves. If we don't include this, we will fail to switch back to our stack the second time. You can also see us using the offsets we introduced earlier in practice: 0x00[rdi] # 0 0x08[rdi] # 8 0x10[rdi] # 16 0x18[rdi] # 24 These are hex numbers indicating the offset from the memory pointer to which we want to read/write. I wrote down the base 10 numbers as comments, so as you can see, we only offset the pointer in 8-byte steps, which is the same size as the u64 fields on our ThreadContext struct. This is also why it's important to annotate ThreadContext with #[repr(C)]; it tells us that the data will be represented in memory in this exact way so we write to the right field. The Rust ABI makes no guarantee that they are represented in the same order in memory; however, the C-ABI does. Finally, there is one new option added to the asm! block. options(noreturn) is a requirement when writing naked functions, and we will receive a compile error if we don't add it. Usually, the compiler will assume that a function call will return, but naked functions are not anything like the functions we're used to. They're more like labeled containers of assembly that we can call, so we don't want the compiler to emit ret instructions at the end of the function or make any assumptions that we return to the previous stack frame. By using this option, we tell the compiler to treat the assembly block as if it never returns, and we make sure that we never fall through the assembly block by adding a ret instruction ourselves. Next up is our main function, which is pretty straightforward, so I'll simply present the code here:
Creating Our Own Fibers 124 fn main() { let mut runtime = Runtime::new(); runtime. init(); runtime. spawn(|| { println!("THREAD 1 STARTING"); let id = 1; for i in 0..10 { println!("thread: {} counter: {}", id, i); yield_thread(); } println!("THREAD 1 FINISHED"); }); runtime. spawn(|| { println!("THREAD 2 STARTING"); let id = 2; for i in 0..15 { println!("thread: {} counter: {}", id, i); yield_thread(); } println!("THREAD 2 FINISHED"); }); runtime. run(); } As you see here, we initialize our runtime and spawn two threads: one that counts to 10 and yields between each count and one that counts to 15. When we cargo run our project, we should get the following output: Finished dev [unoptimized + debuginfo] target(s) in 2. 17s Running `target/debug/green_threads` THREAD 1 STARTING thread: 1 counter: 0 THREAD 2 STARTING thread: 2 counter: 0 thread: 1 counter: 1 thread: 2 counter: 1 thread: 1 counter: 2 thread: 2 counter: 2 thread: 1 counter: 3 thread: 2 counter: 3 thread: 1 counter: 4 thread: 2 counter: 4 thread: 1 counter: 5 thread: 2 counter: 5 thread: 1 counter: 6 thread: 2 counter: 6 thread: 1 counter: 7
Finishing thoughts 125 thread: 2 counter: 7 thread: 1 counter: 8 thread: 2 counter: 8 thread: 1 counter: 9 thread: 2 counter: 9 THREAD 1 FINISHED thread: 2 counter: 10 thread: 2 counter: 11 thread: 2 counter: 12 thread: 2 counter: 13 thread: 2 counter: 14 THREAD 2 FINISHED Beautiful! Our threads alternate since they yield control on each count until THREAD 1 finishes and THREAD 2 counts the last numbers before it finishes its task. Finishing thoughts I want to round off this chapter by pointing out some of the advantages and disadvantages of this approach, which we went through in Chapter 2, since we now have first-hand experience with this topic. First of all, the example we implemented here is an example of what we called a stackful coroutine. Each coroutine (or thread, as we call it in the example implementation) has its own stack. This also means that we can interrupt and resume execution at any point in time. It doesn't matter if we're in the middle of a stack frame (in the middle of executing a function); we can simply tell the CPU to save the state we need to the stack, return to a different stack and restore the state it needs there, and resume as if nothing has happened. You can also see that we have to manage our stacks in some way. In our example, we just create a fixed-size stack (much like the OS does when we ask it for a thread, but smaller), but for this to be more efficient than using OS threads, we need to select a strategy to solve that potential problem. If you look at our slightly expanded example in ch05/d-fibers-closure, you'll notice that we can make the API pretty easy to use, much like the API used for std::thread::spawn in the standard library. The flipside is of course the complexity of implementing this correctly on all combinations of ISA/ABIs that we want to support, and while specific to Rust, it's challenging to create a great and safe API over these kinds of stackful coroutines without any native language support for it. To tie this into Chapter 3, where we discussed event queues and non-blocking calls, I want to point out that if you use fibers to handle concurrency, you would call yield after you've registered a read interest with your non-blocking call. Typically, a runtime would supply these non-blocking calls, and the fact that we yield would be opaque to the user, but the fiber is suspended at that point. We would probably add one more state to our State enum called Pending or something else that signifies that the thread is waiting for some external event, as sketched below.
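A sketch of what that extended enum could look like (hypothetical; the example we just built does not include it):

#[allow(dead_code)]
#[derive(PartialEq, Eq, Debug)]
enum State {
    Available,
    Running,
    Ready,
    // The fiber has registered interest in an I/O event (for example, with
    // epoll) and should not be scheduled until the event queue reports that
    // the event has occurred.
    Pending,
}

fn main() {
    let waiting = State::Pending;
    println!("{waiting:?}");
}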
Creating Our Own Fibers 126 When the OS signals that the data is ready, we would mark the thread as State::Ready to resume and the scheduler would resume execution just like in this example. While it requires a more sophisticated scheduler and infrastructure, I hope that you have gotten a good idea of how such a system would work in practice. Summary First of all, congratulations! You have now implemented a super simple but working example of fibers. You've set up your own stack and learned about ISAs, ABIs, calling conventions, and inline assembly in Rust. It was quite the ride we had to take, but if you came this far and read through everything, you should give yourself a big pat on the back. This is not for the faint of heart, but you pulled through. This example (and chapter) might take a little time to fully digest, but there is no rush for that. You can always go back to this example and read the code again to fully understand it. I really do recommend that you play around with the code yourself and get to know it. Change the scheduling algorithm around, add more context to the threads you create, and use your imagination. You will probably find that debugging problems in low-level code like this can be pretty hard, but that's part of the learning process and you can always revert back to a working version. Now that we have covered one of the largest and most difficult examples in this book, we'll go on to learn about another popular way of handling concurrency by looking into how futures and async/await work in Rust. The rest of this book is in fact dedicated solely to learning about futures and async/await in Rust, and since we've gained so much fundamental knowledge at this point, it will be much easier for us to get a good and deep understanding of how they work. You've done a great job so far!
Part 3: Futures and async/await in Rust This part will explain Futures and async/await in Rust from the ground up. Building upon the knowledge acquired thus far, we will construct a central example that will serve as a recurring theme in the subsequent chapters, eventually leading to the creation of a runtime capable of executing futures in Rust. Throughout this exploration, we will delve into concepts such as coroutines, runtimes, reactors, executors, wakers, and much more. This part comprises the following chapters: Chapter 6, Futures in Rust Chapter 7, Coroutines and async/await Chapter 8, Runtimes, Wakers, and the Reactor-Executor Pattern Chapter 9, Coroutines, Self-referential Structs, and Pinning Chapter 10, Create Your Own Runtime
6 Futures in Rust In Chapter 5, we covered one of the most popular ways of modeling concurrency in a programming language: fibers/green threads. Fibers/green threads are an example of stackful coroutines. The other popular way of modeling asynchronous program flow is by using what we call stackless coroutines, and combining Rust's futures with async/await is an example of that. We will cover this in detail in the next chapters. This first chapter will introduce Rust's futures to you, and the main goals of this chapter are to do the following: Give you a high-level introduction to concurrency in Rust Explain what Rust provides and not in the language and standard library when working with async code Get to know why we need a runtime library in Rust Understand the difference between a leaf future and a non-leaf future Get insight into how to handle CPU-intensive tasks To accomplish this, we'll divide this chapter into the following sections: What is a future? Leaf futures Non-leaf futures Runtimes A mental model of an async runtime What the Rust language and standard library take care of I/O vs CPU-intensive tasks Advantages and disadvantages of Rust's async model
Futures in Rust 130 What is a future? A future is a representation of some operation that will be completed in the future. Async in Rust uses a poll-based approach in which an asynchronous task will have three phases: 1. The poll phase: A future is polled, which results in the task progressing until a point where it can no longer make progress. We often refer to the part of the runtime that polls a future as an executor. 2. The wait phase: An event source, most often referred to as a reactor, registers that a future is waiting for an event to happen and makes sure that it will wake the future when that event is ready. 3. The wake phase: The event happens and the future is woken up. It's now up to the executor that polled the future in step 1 to schedule the future to be polled again and make further progress until it completes or reaches a new point where it can't make further progress and the cycle repeats. Now, when we talk about futures, I find it useful to make a distinction between non-leaf futures and leaf futures early on because, in practice, they're pretty different from one another. Leaf futures Runtimes create leaf futures, which represent a resource such as a socket. This is an example of a leaf future: let mut stream = tokio::net::TcpStream::connect("127.0.0.1:3000"); Operations on these resources, such as reading from a socket, will be non-blocking and return a future, which we call a leaf future since it's the future that we're actually waiting on. It's unlikely that you'll implement a leaf future yourself unless you're writing a runtime, but we'll go through how they're constructed in this book as well. It's also unlikely that you'll pass a leaf future to a runtime and run it to completion alone, as you'll understand by reading the next paragraph. Non-leaf futures Non-leaf futures are the kind of futures we as users of a runtime write ourselves using the async keyword to create a task that can be run on the executor. The bulk of an async program will consist of non-leaf futures, which are a kind of pause-able computation. This is an important distinction since these futures represent a set of operations. Often, such a task will await a leaf future as one of many operations to complete the task.
A mental model of an async runtime 131 This is an example of a non-leaf future: let non_leaf = async { let mut stream = TcpStream::connect("127.0.0.1:3000").await.unwrap(); println!("connected!"); let result = stream.write(b"hello world\n").await; println!("message sent!"); ... }; The two highlighted lines indicate points where we pause the execution, yield control to a runtime, and eventually resume. In contrast to leaf futures, these kinds of futures do not themselves represent an I/O resource. When we poll them, they will run until they get to a leaf future that returns Pending and then yields control to the scheduler (which is a part of what we call the runtime). Runtimes Languages such as C#, JavaScript, Java, Go, and many others come with a runtime for handling concurrency. So, if you're used to one of those languages, this will seem a bit strange to you. Rust is different from these languages in the sense that Rust doesn't come with a runtime for handling concurrency, so you need to use a library that provides this for you. Quite a bit of complexity attributed to futures is actually complexity rooted in runtimes; creating an efficient runtime is hard. Learning how to use one correctly requires quite a bit of effort as well, but you'll see that there are several similarities between these kinds of runtimes, so learning one makes learning the next much easier. The difference between Rust and other languages is that you have to make an active choice when it comes to picking a runtime. Most often, in other languages, you'll just use the one provided for you. A mental model of an async runtime I find it easier to reason about how futures work by creating a high-level mental model we can use. To do that, I have to introduce the concept of a runtime that will drive our futures to completion. Note The mental model I create here is not the only way to drive futures to completion, and Rust's futures do not impose any restrictions on how you actually accomplish this task.
Futures in Rust 132 A fully working async system in Rust can be divided into three parts: Reactor (responsible for notifying about I/O events) Executor (scheduler) Future (a task that can stop and resume at specific points) So, how do these three parts work together? Let's take a look at a diagram that shows a simplified overview of an async runtime: Figure 6.1 - Reactor, executor, and waker In step 1 of the figure, an executor holds a list of futures. It will try to run the future by polling it (the poll phase), and when it does, it hands it a Waker. The future either returns Poll::Ready (which means it's finished) or Poll::Pending (which means it's not done but can't get further at the moment). When the executor receives one of these results, it knows it can start polling a different future. We call these points where control is shifted back to the executor yield points. In step 2, the reactor stores a copy of the Waker that the executor passed to the future when it polled it. The reactor tracks events on that I/O source, usually through the same type of event queue that we learned about in Chapter 4.
What the Rust language and standard library take care of 133 In step 3, when the reactor gets a notification that an event has happened on one of the tracked sources, it locates the Waker associated with that source and calls Waker::wake on it. This will in turn inform the executor that the future is ready to make progress so it can poll it once more. If we write a short async program using pseudocode, it will look like this: async fn foo() { println!("Start!"); let txt = io::read_to_string(). await. unwrap(); println!("{txt}"); } The line where we write await is the one that will return control back to the scheduler. This is often called a yield point since it will return either Poll::Pending or Poll::Ready (most likely it will return Poll::Pending the first time the future is polled). Since the Waker is the same across all executors, reactors can, in theory, be completely oblivious to the type of executor, and vice-versa. Executors and reactors never need to communicate with one another directly. This design is what gives the futures framework its power and flexibility and allows the Rust standard library to provide an ergonomic, zero-cost abstraction for us to use. Note I introduced the concept of reactors and executors here like it's something everyone knows about. I know that's not the case, and don't worry, we'll go through this in detail in the next chapter. What the Rust language and standard library take care of Rust only provides what's necessary to model asynchronous operations in the language. Basically, it provides the following: A common interface that represents an operation, which will be completed in the future through the Future trait An ergonomic way of creating tasks (stackless coroutines to be precise) that can be suspended and resumed through the async and await keywords A defined interface to wake up a suspended task through the Waker type That's really what Rust's standard library does. As you see there is no definition of non-blocking I/O, how these tasks are created, or how they're run. There is no non-blocking version of the standard library, so to actually run an asynchronous program, you have to either create or decide on a runtime to use.
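To make that last point concrete, here is a small sketch (the function names are just for illustration) showing that an async function only hands you back a value implementing Future; nothing runs until something polls it, and polling is the runtime's job:

use std::future::Future;

// The compiler rewrites this into a state machine that implements Future.
async fn add_one(x: u32) -> u32 {
    x + 1
}

// Conceptually, the signature above is equivalent to this one.
fn add_one_desugared(x: u32) -> impl Future<Output = u32> {
    async move { x + 1 }
}

fn main() {
    // Creating the futures does no work at all; without an executor to poll
    // them, they are just inert values.
    let _a = add_one(1);
    let _b = add_one_desugared(2);
    println!("futures created, but never polled");
}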
Futures in Rust 134 I/O vs CPU-intensive tasks As you know now, what you normally write are called non-leaf futures. Let's take a look at this async block using pseudo-Rust as an example: let non_leaf = async { let mut stream = TcpStream::connect("127.0.0.1:3000").await.unwrap(); // request a large dataset let result = stream.write(get_dataset_request).await.unwrap(); // wait for the dataset let mut response = vec![]; stream.read(&mut response).await.unwrap(); // do some CPU-intensive analysis on the dataset let report = analyzer::analyze_data(response).unwrap(); // send the results back stream.write(report).await.unwrap(); }; I've highlighted the points where we yield control to the runtime executor. It's important to be aware that the code we write between the yield points runs on the same thread as our executor. That means that while our analyzer is working on the dataset, the executor is busy doing calculations instead of handling new requests. Fortunately, there are a few ways to handle this, and it's not difficult, but it's something you must be aware of: 1. We could create a new leaf future, which sends our task to another thread and resolves when the task is finished. We could await this leaf future like any other future. 2. The runtime could have some kind of supervisor that monitors how much time different tasks take and moves the executor itself to a different thread so it can continue to run even though our analyzer task is blocking the original executor thread. 3. You can create a reactor yourself that is compatible with the runtime, which does the analysis any way you see fit and returns a future that can be awaited. Now, the first way is the usual way of handling this, but some executors implement the second method as well. The problem with #2 is that if you switch runtime, you need to make sure that it supports this kind of supervision as well or else you will end up blocking the executor.
Summary 135 The third method is more of theoretical importance; normally, you' d be happy to send the task to the thread pool that most runtimes provide. Most executors have a way to accomplish #1 using methods such as spawn_blocking. These methods send the task to a thread pool created by the runtime where you can either perform CPU-intensive tasks or blocking tasks that are not supported by the runtime. Summary So, in this short chapter, we introduced Rust's futures to you. Y ou should now have a basic idea of what Rust's async design looks like, what the language provides for you, and what you need to get elsewhere. Y ou should also have an idea of what a leaf future and a non-leaf future are. These aspects are important as they're design decisions built into the language. Y ou know by now that Rust uses stackless coroutines to model asynchronous operations, but since a coroutine doesn't do anything in and of itself, it's important to know that the choice of how to schedule and run these coroutines is left up to you. We'll get a much better understanding as we start to explain how this all works in detail as we move forward. Now that we've seen a high-level overview of Rust's futures, we'll start explaining how they work from the ground up. The next chapter will cover the concept of futures and how they're connected with coroutines and the async/await keywords in Rust. We'll see for ourselves how they represent tasks that can pause and resume their execution, which is a prerequisite to having multiple tasks be in progress concurrently, and how they differ from the pausable/resumable tasks we implemented as fibers/green threads in Chapter 5.
7 Coroutines and async/await Now that you've gotten a brief introduction to Rust's async model, it's time to take a look at how this fits in the context of everything else we've covered in this book so far. Rust's futures are an example of an asynchronous model based on stackless coroutines, and in this chapter, we'll take a look at what that really means and how it differs from stackful coroutines (fibers/green threads). We'll center everything around an example based on a simplified model of futures and async/ await and see how we can use that to create suspendable and resumable tasks just like we did when creating our own fibers. The good news is that this is a lot easier than implementing our own fibers/green threads since we can stay in Rust, which is safer. The flip side is that it's a little more abstract and ties into programming language theory as much as it does computer science. In this chapter, we'll cover the following: Introduction to stackless coroutines An example of hand-written coroutines async/await Technical requirements The examples in this chapter will all be cross-platform, so the only thing you need is Rust installed and the repository that belongs to the book downloaded locally. All the code in this chapter will be found in the ch07 folder. We'll use delayserver in this example as well, so you need to open a terminal, enter the delayserver folder at the root of the repository, and write cargo run so it's ready and available for the examples going forward.
Coroutines and async/await 138 Remember to change the ports in the code if you for some reason have to change what port delayserver listens on. Introduction to stackless coroutines So, we've finally arrived at the point where we introduce the last method of modeling asynchronous operations in this book. You probably remember that we gave a high-level overview of stackful and stackless coroutines in Chapter 2. In Chapter 5, we implemented an example of stackful coroutines when writing our own fibers/green threads, so now it's time to take a closer look at how stackless coroutines are implemented and used. A stackless coroutine is a way of representing a task that can be interrupted and resumed. If you remember all the way back in Chapter 1, we mentioned that if we want tasks to run concurrently (be in progress at the same time) but not necessarily in parallel, we need to be able to pause and resume the task. In its simplest form, a coroutine is just a task that can stop and resume by yielding control to either its caller, another coroutine, or a scheduler. Many languages will have a coroutine implementation that also provides a runtime that handles scheduling and non-blocking I/O for you, but it's helpful to make a distinction between what a coroutine is and the rest of the machinery involved in creating an asynchronous system. This is especially true in Rust, since Rust doesn't come with a runtime and only provides the infrastructure you need to create coroutines that have native support in the language. Rust makes sure that everyone programming in Rust uses the same abstraction for tasks that can be paused and resumed, but it leaves all the other details of getting an asynchronous system up and running to the programmer. Stackless coroutines or just coroutines? Most often you'll see stackless coroutines simply referred to as coroutines. To try to keep some consistency (you remember I don't like to introduce terms that mean different things based on the context), I've consistently referred to coroutines as either stackless or stackful, but going forward, I'll simply refer to stackless coroutines as coroutines. This is also what you'll have to expect when reading about them in other sources. Fibers/green threads represent this kind of resumable task in a very similar way to how an operating system does. A task has a stack where it stores/restores its current execution state, making it possible to pause and resume the task. Stackless coroutines, on the other hand, model the same kind of resumable task as a state machine. A state machine in its simplest form is a data structure that has a predetermined set of states it can be in. In the case of coroutines, each state represents a possible pause/resume point. We don't store the state needed to pause/resume the task in a separate stack. We save it in a data structure instead, as the small sketch below illustrates.
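To make the state machine idea a little more tangible before we build the real example, here is a tiny, hand-rolled sketch (hypothetical names, not the code we'll write later): the task's "paused" state is just data stored in an enum variant.

// Each variant is one possible pause/resume point; the data a paused task
// needs in order to continue lives inside the variant itself.
enum CountTo3 {
    Start,
    Counting(u32),
    Done,
}

impl CountTo3 {
    fn resume(&mut self) -> Option<u32> {
        match *self {
            CountTo3::Start => {
                *self = CountTo3::Counting(1);
                Some(1)
            }
            CountTo3::Counting(n) if n < 3 => {
                *self = CountTo3::Counting(n + 1);
                Some(n + 1)
            }
            _ => {
                *self = CountTo3::Done;
                None
            }
        }
    }
}

fn main() {
    let mut task = CountTo3::Start;
    while let Some(n) = task.resume() {
        println!("resumed and reached {n}");
    }
}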
An example of hand-written coroutines 139 This approach has some advantages, which I've covered before, but the most prominent ones are that they're very efficient and flexible. The downside is that you'd never want to write these state machines by hand (you'll see why in this chapter), so you need some kind of support from the compiler or another mechanism for rewriting your code into state machines instead of normal function calls. The result is that you get something that looks very simple. It looks like a function/subroutine that you can easily map to something that you can run using a simple call instruction in assembly, but what you actually get is something pretty complex and different from this, and it doesn't look anything like what you'd expect. Generators vs coroutines Generators are state machines as well, exactly the kind we'll cover in this chapter. They're usually implemented in a language to create state machines that yield values to the calling function. Theoretically, you could make a distinction between coroutines and generators based on what they yield to. Generators are usually limited to yielding to the calling function. Coroutines can yield to another coroutine, a scheduler, or simply the caller, in which case they're just like generators. In my eyes, there is really no point in making a distinction between them. They represent the same underlying mechanism for creating tasks that can pause and resume their execution, so in this book, we'll treat them as basically the same thing. Now that we've covered what coroutines are in text, we can start looking at what they look like in code. An example of hand-written coroutines The example we'll use going forward is a simplified version of Rust's asynchronous model. We'll create and implement the following: Our own simplified Future trait A simple HTTP client that can only make GET requests A task we can pause and resume implemented as a state machine Our own simplified async/await syntax called coroutine/wait A homemade preprocessor to transform our coroutine/wait functions into state machines the same way async/await is transformed So, to actually demystify coroutines, futures, and async/await, we will have to make some compromises. If we didn't, we'd end up re-implementing everything that is async/await and futures in Rust today, which is too much for just understanding the underlying techniques and concepts.
Coroutines and async/await 140 Therefore, our example will do the following: Avoid error handling. If anything fails, we panic. Be specific and not generic. Creating generic solutions introduces a lot of complexity and makes the underlying concepts harder to reason about since we consequently have to create extra abstraction levels. Our solution will have some generic aspects where needed, though. Be limited in what it can do. You are of course free to expand, change, and play with all the examples (I encourage you to do so), but in the example, we only cover what we need and not anything more. Avoid macros. So, with that out of the way, let's get started on our example. The first thing you need to do is to create a new folder. This first example can be found in ch07/a-coroutine in the repository, so I suggest you name the folder a-coroutine as well. Then, initialize a new crate by entering the folder and writing cargo init. Now that we have a new project up and running, we can create the modules and folders we need: First, in main.rs, declare two modules as follows: ch07/a-coroutine/src/main.rs mod http; mod future; Next, create two new files in the src folder: future.rs, which will hold our future-related code http.rs, which will be the code related to our HTTP client One last thing we need to do is to add a dependency on mio. We'll be using TcpStream from mio, as we'll build on this example in the following chapters and use mio as our non-blocking I/O library since we're already familiar with it: ch07/a-coroutine/Cargo.toml [dependencies] mio = { version = "0.8", features = ["net", "os-poll"] } Let's start in future.rs and implement our future-related code first.
An example of hand-written coroutines 141 Futures module In futures. rs, the first thing we'll do is define a Future trait. It looks as follows: ch07/a-coroutine/src/future. rs pub trait Future { type Output; fn poll(&mut self)-> Poll State<Self::Output>; } If we contrast this with the Future trait in Rust's standard library, you'll see it's very similar, except that we don't take cx: &mut Context<'_> as an argument and we return an enum with a slightly different name just to differentiate it so we don't mix them up: pub trait Future { type Output; fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>)-> Poll<Self::Output>; } The next thing we do is to define a Poll State<T> enum : ch07/a-coroutine/src/future. rs pub enum Poll State<T> { Ready(T), Not Ready, } Again, if we compare this to the Poll enum in Rust's standard library, we see that they're practically the same: pub enum Poll<T> { Ready(T), Pending, } For now, this is all we need to get the first iteration of our example up and running. Let's move on to the next file: http. rs.
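Before we do, here is a minimal illustration of how a type implements this trait. It is not part of the book's project, and the names Countdown and polls_left are made up for this sketch; it simply shows a future that has to be polled a few times before it resolves:

struct Countdown {
    polls_left: u8,
}

impl Future for Countdown {
    type Output = String;

    fn poll(&mut self) -> PollState<Self::Output> {
        if self.polls_left == 0 {
            // Nothing left to wait for, so we hand the result to the caller.
            PollState::Ready(String::from("done"))
        } else {
            // Still "working": report that the caller has to poll us again.
            self.polls_left -= 1;
            PollState::NotReady
        }
    }
}

A caller would create Countdown { polls_left: 3 } and keep calling poll until it gets Ready back. Every future we write in this chapter, whether it's a leaf future or not, follows that same contract.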
Coroutines and async/await 142 HTTP module In this module, we'll implement a very simple HTTP client. This client can only make GET requests to our delayserver since we just use this as a representation of a typical I/O operation and don't care specifically about being able to do more than we need. The first thing we'll do is import some types and traits from the standard library as well as our Futures module: ch07/a-coroutine/src/http. rs use crate::future::{Future, Poll State}; use std::io::{Error Kind, Read, Write}; Next, we create a small helper function to write our HTTP requests. We've used this exact bit of code before in this book, so I'll not spend time explaining it again here: ch07/a-coroutine/src/http. rs fn get_req(path: &str)-> String { format!( "GET {path} HTTP/1. 1\r\n\ Host: localhost\r\n\ Connection: close\r\n\ \r\n" ) } So, now we can start writing our HTTP client. The implementation is very short and simple: pub struct Http; impl Http { pub fn get(path: &str)-> impl Future<Output = String> { Http Get Future::new(path) } } We don't really need a struct here, but we add one since we might want to add some state at a later point. It's also a good way to group functions belonging to the HTTP client together. Our HTTP client only has one function, get, which, eventually, will send a GET request to our delayserver with the path we specify (remember that the path is everything in bold in this example URL: http://127. 0. 0. 1:8080 /1000/Hello World),
An example of hand-written coroutines 143 The first thing you'll notice in the function body is that there is not much happening here. We only return Http Get Future and that's it. In the function signature, you see that it returns an object implementing the Future trait that outputs a String when it's resolved. The string we return from this function will be the response we get from the server. Now, we could have implemented the Future trait directly on the Http struct, but I think it's a better design to allow one Http instance to give out multiple Futures instead of making the Http implement Future itself. Let's take a closer look at Http Get Future since there is much more happening there. Just to point this out so that there is no doubt going forward, Http Get Future is an example of a leaf future, and it will be the only leaf future we'll use in this example. Let's add the struct declaration to the file: ch07/a-coroutine/src/http. rs struct Http Get Future { stream: Option<mio::net::Tcp Stream>, buffer: Vec<u8>, path: String, } This data structure will hold onto some data for us: stream : This holds an Option<mio::net::Tcp Stream>. This will be an Option since we won't connect to the stream at the same point as we create this structure. buffer : We'll read the data from the Tcp Stream and put it all in this buffer until we've read all the data returned from the server. path : This simply stores the path for our GET request so we can use it later. The next thing we'll take a look at is the impl block for our Http Get Future : ch07/a-coroutine/src/http. rs impl Http Get Future { fn new(path: &str)-> Self { Self { stream: None, buffer: vec![], path: path. to_string(), } }
Coroutines and async/await 144 fn write_request(&mut self) { let stream = std::net::Tcp Stream::connect("127. 0. 0. 1:8080"). unwrap(); stream. set_nonblocking(true). unwrap(); let mut stream = mio::net::Tcp Stream::from_std(stream); stream. write_all(get_req(&self. path). as_bytes()). unwrap(); self. stream = Some(stream); } } The impl block defines two functions. The first is new, which simply sets the initial state. The next function is write_request, which sends the GET request to the server. You've seen this code before in the example in Chapter 4, so this should look familiar. Note When creating Http Get Future, we don't actually do anything related to the GET request, which means that the call to Http::get returns immediately with just a simple data structure. In contrast to earlier examples, we pass in the IP address for localhost instead of the DNS name. We take the same shortcut as before and let connect be blocking and everything else be non-blocking. The next step is to write the GET request to the server. This will be non-blocking, and we don't have to wait for it to finish since we'll be waiting for the response anyway. The last part of this file is the most important one—the implementation of the Future trait we defined: ch07/a-coroutine/src/http. rs impl Future for Http Get Future { type Output = String; fn poll(&mut self)-> Poll State<Self::Output> { if self. stream. is_none() { println!("FIRST POLL-START OPERATION"); self. write_request(); return Poll State::Not Ready; } let mut buff = vec![0u8; 4096]; loop { match self. stream. as_mut(). unwrap(). read(&mut buff) { Ok(0) => {
An example of hand-written coroutines 145 let s = String::from_utf8_lossy(&self. buffer); break Poll State::Ready(s. to_string()); } Ok(n) => { self. buffer. extend(&buff[0..n]); continue; } Err(e) if e. kind() == Error Kind::Would Block => { break Poll State::Not Ready; } Err(e) if e. kind() == Error Kind::Interrupted => { continue; } Err(e) => panic!("{e:?}"), } } } } Okay, so this is where everything happens. The first thing we do is set the associated type called Output to String. The next thing we do is to check whether this is the first time poll was called or not. We do this by checking if self. stream is None. If it's the first time we call poll, we print a message (just so we can see the first time this future was polled), and then we write the GET request to the server. On the first poll, we return Poll State::Not Ready, so Http Get Future will have to be polled at least once more to actually return any results. The next part of the function is trying to read data from our Tcp Stream. We've covered this before, so I'll make this brief, but there are basically five things that can happen: 1. The call successfully returns with 0 bytes read. We've read all the data from the stream and have received the entire GET response. We create a String from the data we've read and wrap it in Poll State::Ready before we return. 2. The call successfully returns with n > 0 bytes read. If that's the case, we read the data into our buffer, append the data into self. buffer, and immediately try to read more data from the stream.
Coroutines and async/await 146 3. We get an error of kind Would Block. If that's the case, we know that since we set the stream to non-blocking, the data isn't ready yet or there is more data but we haven't received it yet. In that case, we return Poll State::Not Ready to communicate that more calls to the poll are needed to finish the operation. 4. We get an error of kind Interrupted. This is a bit of a special case since reads can be interrupted by a signal. If it does, the usual way to handle the error is to simply try reading once more. 5. We get an error that we can't handle, and since our example does no error handling, we simply panic! There is one subtle thing I want to point out. We can view this as a very simple state machine with three states: Not started, indicated by self. stream being None Pending, indicated by self. stream being Some and a read to stream. read returning Would Block Resolved, indicated by self. stream being Some and a call to stream. read returning 0 bytes As you see, this model maps nicely to the states reported by the OS when trying to read our Tcp Stream. Most leaf futures such as this will be quite simple, and although we didn't make the states explicit here, it still fits in the state machine model that we're basing our coroutines around. Do all futures have to be lazy? A lazy future is one where no work happens before it's polled the first time. This will come up a lot if you read about futures in Rust, and since our own Future trait is based on that exact same model, the same question will arise here. The simple answer to this question is no! There is nothing that forces leaf futures, such as the one we wrote here, to be lazy. We could have sent the HTTP request when we called the Http::get function if we wanted to. If you think about it, if we did just that, it would have caused a potentially big change that would impact how we achieve concurrency in our program. The way it works now is that someone has to call poll at least one time to actually send the request. The consequence is that whoever calls poll on this future will have to call poll on many futures to kick off the operation if they want them to run concurrently. If we kicked off the operation immediately when the future was created, you could create many futures and they would all run concurrently even though you polled them to completion one by one. If you
An example of hand-written coroutines 147 poll them to completion one by one in the current design, the futures would not progress concurrently. Let that sink in for a moment. Languages such as Java Script start the operation when the coroutine is created, so there is no “one way” to do this. Every time you encounter a coroutine implementation, you should find out whether they're lazy or eager since this impacts how you program with them. Even though we could make our future eager in this case, we really shouldn't. Since programmers in Rust expect futures to be lazy, they might depend on nothing happening before you call poll on them, and there may be unexpected side effects if the futures you write behave differently. Now, when you read that Rust's futures are always lazy, a claim that I see very often, it refers to the compiler-generated state machines resulting from using async/await. As we'll see later, when your async functions are rewritten by the compiler, they're constructed in a way so that nothing you write in the body of an async function will execute before the first call to Future::poll. Okay, so we've covered the Future trait and the leaf future we named Http Get Future. The next step is to create a task that we can stop and resume at predefined points. Creating coroutines We'll continue to build our knowledge and understanding from the ground up. The first thing we'll do is create a task that we can stop and resume by modeling it as a state machine by hand. Once we've done that, we'll take a look at how this way of modeling pausable tasks enables us to write a syntax much like async/await and rely on code transformations to create these state machines instead of writing them by hand. We'll create a simple program that does the following: 1. Prints a message when our pausable task is starting. 2. Makes a GET request to our delayserver. 3. Waits for the GET request. 4. Prints the response from the server. 5. Makes a second GET request to our delayserver. 6. Waits for the second response from the server. 7. Prints the response from the server. 8. Exits the program. In addition, we'll execute our program by calling Future::poll on our hand-crafted coroutine as many times as required to run it to completion. There's no runtime, reactor, or executor yet since we'll cover those in the next chapter.
Coroutines and async/await 148 If we wrote our program as an async function, it would look as follows: async fn async_main() { println!("Program starting"); let txt = Http::get("/1000/Hello World"). await; println!("{txt}"); let txt2 = Http::get("/500/Hello World2"). await; println!("{txt2}"); } In main. rs, start by making the necessary imports and module declarations: ch07/a-coroutine/src/main. rs use std::time::Instant; mod future; mod http; use crate::http::Http; use future::{Future, Poll State}; The next thing we write is our stoppable/resumable task called Coroutine : ch07/a-coroutine/src/main. rs struct Coroutine { state: State, } Once that's done, we write the different states this task could be in: ch07/a-coroutine/src/main. rs enum State { Start, Wait1(Box<dyn Future<Output = String>>), Wait2(Box<dyn Future<Output = String>>), Resolved, } This specific coroutine can be in four states: Start : The Coroutine has been created but it hasn't been polled yet
An example of hand-written coroutines 149 Wait1 : When we call Http::get, we get a Http Get Future returned that we store in the State enum. At this point, we return control back to the calling function so it can do other things if needed. We chose to make this generic over all Future functions that output a String, but since we only have one kind of future right now, we could have made it simply hold a Http Get Future and it would work the same way. Wait2 : The second call to Http::get is the second place where we'll pass control back to the calling function. Resolved : The future is resolved and there is no more work to do. Note We could have simply defined Coroutine as an enum since the only state it holds is an enum indicating its state. But, we'll set up this example so we can add some state to Coroutine later on in this book. Next is the implementation of Coroutine : ch07/a-coroutine/src/main. rs impl Coroutine { fn new()-> Self { Self { state: State::Start, } } } So far, this is pretty simple. When creating a new Coroutine, we simply set it to State::Start and that's it. Now we come to the part where the work is actually done in the Future implementation for Coroutine. I'll walk you through the code: ch07/a-coroutine/src/main. rs impl Future for Coroutine { type Output = (); fn poll(&mut self)-> Poll State<Self::Output> { loop { match self. state { State::Start => { println!("Program starting"); let fut = Box::new(Http::get("/600/Hello World1"));
Coroutines and async/await 150 self. state = State::Wait1(fut); } State::Wait1(ref mut fut) => match fut. poll() { Poll State::Ready(txt) => { println!("{txt}"); let fut2 = Box::new(Http::get("/400/Hello World2")); self. state = State::Wait2(fut2); } Poll State::Not Ready => break Poll State::Not Ready, }, State::Wait2(ref mut fut2) => match fut2. poll() { Poll State::Ready(txt2) => { println!("{txt2}"); self. state = State::Resolved; break Poll State::Ready(()); } Poll State::Not Ready => break Poll State::Not Ready, }, State::Resolved => panic!("Polled a resolved future"), } } } } Let's start from the top: 1. The first thing we do is set the Output type to (). Since we won't be returning anything, it just makes our example simpler. 2. Next up is the implementation of the poll method. The first thing you notice is that we write a loop instance that matches self. state. We do this so we can drive the state machine forward until we reach a point where we can't progress any further without getting Poll State::Not Ready from one of our child futures. 3. If the state is State::Start, we know that this is the first time it was polled, so we run whatever instructions we need until we reach the point where we get a new future that we need to resolve. 4. When we call Http::get, we receive a future in return that we need to poll to completion before we progress any further.
An example of hand-written coroutines 151 5. At this point, we change the state to State::Wait1 and we store the future we want to resolve so we can access it in the next state. 6. Our state machine has now changed its state from Start to Wait1. Since we're looping on the match statement, we immediately progress to the next state and will reach the match arm in State::Wait1 on the next iteration. 7. The first thing we do in Wait1 is to call poll on the Future instance we're waiting on. 8. If the future returns Poll State::Not Ready, we simply bubble that up to the caller by breaking out of the loop and returning Not Ready. 9. If the future returns Poll State::Ready together with our data, we know that we can execute the instructions that rely on the data from the first future and advance to the next state. In our case, we only print out the returned data, so that's only one line of code. 10. Next, we get to the point where we get a new future by calling Http::get. We set the state to Wait2, just like we did when going from State::Start to State::Wait1. 11. Like we did the first time we got a future that we needed to resolve before we continue, we save it so we can access it in State::Wait2. 12. Since we're in a loop, the next thing that happens is that we reach the match arm for Wait2, and here, we repeat the same steps as we did for State::Wait1 but on a different future. 13. If it returns Ready with our data, we act on it and we set the final state of our Coroutine to State::Resolved. There is one more important change: this time, we want to communicate to the caller that this future is done, so we break out of the loop and return Poll State::Ready. If anyone tries to call poll on our Coroutine again, we will panic, so the caller must make sure to keep track of when the future returns Poll State::Ready and make sure to not call poll on it ever again. The last thing we do before we get to our main function is create a new Coroutine in a function we call async_main. This way, we can keep the changes to a minimum when we start talking about async/await in the last part of this chapter: ch07/a-coroutine/src/main. rs fn async_main()-> impl Future<Output = ()> { Coroutine::new() } So, at this point, we're finished writing our coroutine and the only thing left is to write some logic to drive our state machine through its different stages from the main function.
Coroutines and async/await 152 One thing to note here is that our main function is just a regular main function. The loop in our main function is what drives the asynchronous operations to completion: ch07/a-coroutine/src/main. rs fn main() { let mut future = async_main(); loop { match future. poll() { Poll State::Not Ready => { println!("Schedule other tasks"); }, Poll State::Ready(_) => break, } thread::sleep(Duration::from_millis(100)); } } This function is very simple. We first get the future returned from async_main and then we call poll on it in a loop until it returns Poll State::Ready. Every time we receive a Poll State::Not Ready in return, the control is yielded back to us. we could do other work here, such as scheduling another task, if we want to, but in our case, we just print Schedule other tasks. We also limit how often the loop is run by sleeping for 100 milliseconds on every call. This way we won't be overwhelmed with printouts and we can assume that there are roughly 100 milliseconds between every time we see "Schedule other tasks" printed to the console. If we run the example, we get this output: Program starting FIRST POLL-START OPERATION Schedule other tasks Schedule other tasks Schedule other tasks Schedule other tasks Schedule other tasks Schedule other tasks HTTP/1. 1 200 OK content-length: 11 connection: close content-type: text/plain; charset=utf-8 date: Tue, 24 Oct 2023 20:39:13 GMT Hello World1 FIRST POLL-START OPERATION
An example of hand-written coroutines 153 Schedule other tasks Schedule other tasks Schedule other tasks Schedule other tasks HTTP/1. 1 200 OK content-length: 11 connection: close content-type: text/plain; charset=utf-8 date: Tue, 24 Oct 2023 20:39:13 GMT Hello World2 By looking at the printouts, you can get an idea of the program flow. 1. First, we see Program starting, which executes at the start of our coroutine. 2. We then see that we immediately move on to the FIRST POLL-START OPERATION message that we only print when the future returned from our HTTP client is polled the first time. 3. Next, we can see that we're back in our main function, and at this point, we could theoretically go ahead and run other tasks if we had any 4. Every 100 ms, we check if the task is finished and get the same message telling us that we can schedule other tasks 5. Then, after roughly 600 milliseconds, we receive a response that's printed out 6. We repeat the process once more until we receive and print out the second response from the server Congratulations, you've now created a task that can be paused and resumed at different points, allowing it to be in progress. Who on earth wants to write code like this to accomplish a simple task? The answer is no one! Y es, it's a bit bombastic, but I dare guess that very few programmers prefer writing a 55-line state machine when you compare it to the 7 lines of normal sequential code you' d have to write to accomplish the same thing. If we recall the goals of most userland abstractions over concurrent operations, we'll see that this way of doing it only checks one of the three boxes that we're aiming for: Efficient Expressive Easy to use and hard to misuse
Coroutines and async/await 154 Our state machine will be efficient, but that's pretty much it. However, you might also notice that there is a system to the craziness. This might not come as a surprise, but the code we wrote could be much simpler if we tagged the start of each function and each point we wanted to yield control back to the caller with a few keywords and had our state machine generated for us. And that's the basic idea behind async/await. Let's go and see how this would work in our example. async/await The previous example could simply be written as the following using async/await keywords: async fn async_main() { println!("Program starting"); let txt = Http::get("/1000/Hello World"). await; println!("{txt}"); let txt2 = Http::get("/500/Hello World2"). await; println!("{txt2}"); } That's seven lines of code, and it looks very similar to code you'd write in a normal subroutine/function. It turns out that we can let the compiler write these state machines for us instead of writing them ourselves. Not only that, we could get very far just using simple macros to help us, which is exactly how the current async/await syntax was prototyped before it became a part of the language. You can see an example of that at https://github.com/alexcrichton/futures-await. The downside is of course that these functions look like normal subroutines but are in fact very different in nature. With a strongly typed language such as Rust, which uses borrow semantics instead of a garbage collector, it's impossible to hide the fact that these functions are different. This can cause a bit of confusion for programmers, who expect everything to behave the same way. Coroutine bonus example To show how close our example is to the behavior we get using the std::future::Future trait and async/await in Rust, I created the exact same example as we just did in a-coroutine using "proper" futures and the async/await syntax instead. The first thing you'll notice is that it only required very minor changes to the code. Secondly, you can see for yourself that the output shows the exact same program flow as it did in the example where we hand-wrote the state machine ourselves. You will find this example in the ch07/a-coroutines-bonus folder in the repository.
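If you want to experiment with the bonus example, keep in mind that polling a std::future::Future by hand requires a Waker and a Context. The following is a rough sketch of a minimal block_on built on a no-op Waker; it is my own illustration, not the bonus example's actual code, and the names noop_raw_waker and block_on are made up here. It plays the same role as the simple poll loop in our main function:

use std::future::Future;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};
use std::thread;
use std::time::Duration;

fn noop_raw_waker() -> RawWaker {
    fn no_op(_: *const ()) {}
    fn clone(_: *const ()) -> RawWaker {
        noop_raw_waker()
    }
    // The vtable entries are clone, wake, wake_by_ref, and drop.
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, no_op, no_op, no_op);
    RawWaker::new(std::ptr::null(), &VTABLE)
}

fn block_on<F: Future>(future: F) -> F::Output {
    // Pin the future on the heap so we can call poll on it repeatedly.
    let mut future = Box::pin(future);
    let waker = unsafe { Waker::from_raw(noop_raw_waker()) };
    let mut cx = Context::from_waker(&waker);
    loop {
        match future.as_mut().poll(&mut cx) {
            Poll::Ready(val) => return val,
            // A proper runtime would park here until it's woken. We just
            // sleep briefly and poll again, like the loop in our main function.
            Poll::Pending => thread::sleep(Duration::from_millis(100)),
        }
    }
}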
async/await 155 So, let's take this a step further. To avoid confusion, and since our coroutines only yield to the calling function right now (there is no scheduler, event loop, or anything like that yet), we use a slightly different syntax called coroutine/wait and create a way to have these state machines generated for us. coroutine/wait The coroutine/wait syntax will have clear similarities to the async/await syntax, although it's a lot more limited. The basic rules are as follows: Every function prefixed with coroutine will be rewritten to a state machine like the one we wrote. The return type of functions marked with coroutine will be rewritten so they return -> impl Future<Output = String> (yes, our syntax will only deal with futures that output a String). Only objects implementing the Future trait can be postfixed with .wait. These points will be represented as separate stages in our state machine. Functions prefixed with coroutine can call normal functions, but normal functions can't call coroutine functions and expect anything to happen unless they call poll on them repeatedly until they return Poll State::Ready. Our implementation will make sure that if we write the following code, it will compile to the same state machine we wrote at the start of this chapter (with the exception that all coroutines will return a String): coroutine fn async_main() { println!("Program starting"); let txt = Http::get("/1000/Hello World"). wait; println!("{txt}"); let txt2 = Http::get("/500/Hello World2"). wait; println!("{txt2}"); } But wait. coroutine/wait aren't valid keywords in Rust. I would get a compilation error if I wrote that! You're right. So, I created a small program called corofy that rewrites the coroutine/wait functions into these state machines for us. Let's explain that quickly. corofy—the coroutine preprocessor The best way of rewriting code in Rust is using the macro system. The downside is that it's not clear exactly what it compiles down to, and expanding the macros is not optimal for our use case since one
Coroutines and async/await 156 of the main goals is to take a look at the differences between the code we write and what it transforms into. In addition to that, macros can get quite complex to read and understand unless you work a lot with them on a regular basis. Instead, corofy is a normal Rust program you can find in the repository under ch07/corofy. If you enter that folder, you can install the tool globally by writing the following: cargo install --path . Now you can use the tool from anywhere. It works by providing it with an input file containing the coroutine/wait syntax, such as corofy ./src/main. rs [optional output file]. If you don't specify an output file, it will create a file in the same folder postfixed with _corofied. Note The tool is extremely limited. The honest reason why is that I want to finish this example before we reach the year 2300, and I'm not going to rewrite the entire Rust compiler from scratch just to give a robust experience using the coroutine/wait keywords. It turns out that writing transformations like this without access to Rust's type system is very difficult. The main use case for this tool will be to transform the examples we write here, but it would probably work for slight variations of the same examples as well (like adding more wait points or doing more interesting tasks in between each wait point). Take a look at the README for corofy for more information about its limitations. One more thing: I assume that you specified no explicit output file going forward so the output file will have the same name as the input file postfixed with _corofied. The program reads the file you give it and searches for usages of the coroutine keyword. It takes these functions, comments them out (so they're still in the file), puts them last in the file, and writes out the state machine implementation directly below, indicating what parts of the state machine are the code you actually wrote between the wait points. Now that I've introduced our new tool, it's time to put it to use. b-async-await—an example of a coroutine/wait transformation Let's start by expanding our example slightly. Now that we have a program that writes out our state machines, it's easier for us to create some examples and cover some more complex parts of our coroutine implementation.
async/await 157 We'll base the following examples on the exact same code as we did in the first one. In the repository, you'll find this example under ch07/b-async-await. If you write every example from the book and don't rely on the existing code in the repository, you can do one of two things: Keep changing the code in the first example Create a new cargo project called b-async-await and copy everything in the src folder and the dependencies section from Cargo. toml from the previous example over to the new one. No matter what you choose, you should have the same code in front of you. Let's simply change the code in main. rs to this: ch07/b-async-await/src/main. rs use std::time::Instant; mod http; mod future; use future::*; use crate::http::Http; fn get_path(i: usize)-> String { format!("/{}/Hello World{i}", i * 1000) } coroutine fn async_main() { println!("Program starting"); let txt = Http::get(&get_path(0)). wait; println!("{txt}"); let txt = Http::get(&get_path(1)). wait; println!("{txt}"); let txt = Http::get(&get_path(2)). wait; println!("{txt}"); let txt = Http::get(&get_path(3)). wait; println!("{txt}"); let txt = Http::get(&get_path(4)). wait; println!("{txt}"); }
Coroutines and async/await 158 fn main() { let start = Instant::now(); let mut future = async_main(); loop { match future. poll() { Poll State::Not Ready => (), Poll State::Ready(_) => break, } } println!("\n ELAPSED TIME: {}", start. elapsed(). as_secs_f32()); } This code contains a few changes. First, we add a convenience function for creating new paths for our GET request called get_path to create a path we can use in our GET request with a delay and a message based on the integer we pass in. Next, in our async_main function, we create five requests with delays varying from 0 to 4 seconds. The last change we've made is in our main function. We no longer print out a message on every call to poll, and therefore, we don't use thread::sleep to limit the number of calls. Instead, we measure the time from when we enter the main function to when we exit it because we can use that as a way to prove whether our code runs concurrently or not. Now that our main. rs looks like the preceding example, we can use corofy to rewrite it into a state machine, so assuming we're in the root folder of ch07/b-async-await, we can write the following: corofy. /src/main. rs That should output a file called main_corofied. rs in the src folder that you can open and inspect. Now, you can copy all the contents of main_corofied. rs in this file and paste it into main. rs. Note For convenience, there is a file called original_main. rs in the root of the project that contains the code for main. rs that we presented, so you don't need to save the original content of main. rs. If you write out every example yourself by copying it from the book in your own project, it would be smart to store the original contents of main. rs somewhere before you overwrite it. I won't show the entire state machine here since the 39 lines of code using coroutine/wait end up being 170 lines of code when written as a state machine, but our State enum now looks like this: enum State0 { Start, Wait1(Box<dyn Future<Output = String>>),
async/await 159 Wait2(Box<dyn Future<Output = String>>), Wait3(Box<dyn Future<Output = String>>), Wait4(Box<dyn Future<Output = String>>), Wait5(Box<dyn Future<Output = String>>), Resolved, } If you run the program using cargo run, you now get the following output: Program starting FIRST POLL-START OPERATION HTTP/1. 1 200 OK content-length: 11 connection: close content-type: text/plain; charset=utf-8 date: Tue, xx xxx xxxx 21:05:55 GMT Hello World0 FIRST POLL-START OPERATION HTTP/1. 1 200 OK content-length: 11 connection: close content-type: text/plain; charset=utf-8 date: Tue, xx xxx xxxx 21:05:56 GMT Hello World1 FIRST POLL-START OPERATION HTTP/1. 1 200 OK content-length: 11 connection: close content-type: text/plain; charset=utf-8 date: Tue, xx xxx xxxx 21:05:58 GMT Hello World2 FIRST POLL-START OPERATION HTTP/1. 1 200 OK content-length: 11 connection: close content-type: text/plain; charset=utf-8 date: Tue, xx xxx xxxx 21:06:01 GMT Hello World3 FIRST POLL-START OPERATION HTTP/1. 1 200 OK content-length: 11 connection: close content-type: text/plain; charset=utf-8
Coroutines and async/await 160 date: Tue, xx xxx xxxx 21:06:05 GMT Hello World4 ELAPSED TIME: 10. 043025 So, you see that our code runs as expected. Since we called wait on every call to Http::get, the code ran sequentially, which is evident when we look at the elapsed time of 10 seconds. That makes sense since the delays we asked for were 0 + 1 + 2 + 3 + 4, which equals 10 seconds. What if we want our futures to run concurrently? Do you remember we talked about these futures being lazy? Good. So, you know that we won't get concurrency just by creating a future. We need to poll them to start the operation. To solve this, we take some inspiration from Tok io and create a function that does just that called join_all. It takes a collection of futures and drives them all to completion concurrently. Let's create the last example for this chapter where we do just this. c-async-await—concurrent futures Okay, so we'll build on the last example and do just the same thing. Create a new project called c-async-await and copy Cargo. toml and everything in the src folder over. The first thing we'll do is go to future. rs and add a join_all function below our existing code: ch07/c-async-await/src/future. rs pub fn join_all<F: Future>(futures: Vec<F>)-> Join All<F> { let futures = futures. into_iter(). map(|f| (false, f)). collect(); Join All { futures, finished_count: 0, } }
c-async-await—concurrent futures 161 This function takes a collection of futures as an argument and returns a Join All<F> future. The function simply creates a new collection. In this collection, we will have tuples consisting of the original futures we received and a bool value indicating whether the future is resolved or not. Next, we have the definition of our Join All struct: ch07/c-async-await/src/future. rs pub struct Join All<F: Future> { futures: Vec<(bool, F)>, finished_count: usize, } This struct will simply store the collection we created and a finished_count. The last field will make it a little bit easier to keep track of how many futures have been resolved. As we're getting used to by now, most of the interesting parts happen in the Future implementation for Join All : impl<F: Future> Future for Join All<F> { type Output = String; fn poll(&mut self)-> Poll State<Self::Output> { for (finished, fut) in self. futures. iter_mut() { if *finished { continue; } match fut. poll() { Poll State::Ready(_) => { *finished = true; self. finished_count += 1; } Poll State::Not Ready => continue, } } if self. finished_count == self. futures. len() { Poll State::Ready(String::new()) } else { Poll State::Not Ready } } }
Coroutines and async/await 162 We set Output to String. This might strike you as strange since we don't actually return anything from this implementation. The reason is that corofy will only work with futures that return a String (it's one of its many, many shortcomings), so we just accept that and return an empty string on completion. Next up is our poll implementation. The first thing we do is to loop over each (flag, future) tuple: for (finished, fut) in self. futures. iter_mut() Inside the loop, we first check if the flag for this future is set to finished. If it is, we simply go to the next item in the collection. If it's not finished, we poll the future. If we get Poll State::Ready back, we set the flag for this future to true so that we won't poll it again and we increase the finished count. Note It's worth noting that the join_all implementation we create here will not work in any meaningful way with futures that return a value. In our case, we simply throw the value away, but remember, we're trying to keep this as simple as possible for now and the only thing we want to show is the concurrency aspect of calling join_all. Tokio's join_all implementation puts all the returned values in a Vec<T> and returns them when the Join All future resolves. If we get Poll State::Not Ready, we simply continue to the next future in the collection. After iterating through the entire collection, we check if we've resolved all the futures we originally received in if self. finished_count == self. futures. len(). If all our futures have been resolved, we return Poll State::Ready with an empty string (to make corofy happy). If there are still unresolved futures, we return Poll State::Not Ready. Important There is one subtle point to make a note of here. The first time Join All::poll is called, it will call poll on each future in the collection. Polling each future will kick off whatever operation they represent and allow them to progress concurrently. This is one way to achieve concurrency with lazy coroutines, such as the ones we're dealing with here. Next up are the changes we'll make in main. rs.
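Before we move on to main. rs, here is a rough sketch of how a join_all-style future could keep the resolved values instead of throwing them away, similar in spirit to what Tokio does. This is only an illustration: it's not part of the book's example, the names JoinAllValues and join_all_values are made up, and since it's generic over the output type it wouldn't work with corofy:

pub struct JoinAllValues<F: Future> {
    futures: Vec<(Option<F::Output>, F)>,
}

pub fn join_all_values<F: Future>(futures: Vec<F>) -> JoinAllValues<F> {
    JoinAllValues {
        futures: futures.into_iter().map(|f| (None, f)).collect(),
    }
}

impl<F: Future> Future for JoinAllValues<F> {
    type Output = Vec<F::Output>;

    fn poll(&mut self) -> PollState<Self::Output> {
        for (result, fut) in self.futures.iter_mut() {
            if result.is_some() {
                // This future already resolved, so we never poll it again.
                continue;
            }
            if let PollState::Ready(value) = fut.poll() {
                *result = Some(value);
            }
        }
        if self.futures.iter().all(|(result, _)| result.is_some()) {
            // Every future has resolved: hand all the stored values back.
            let values = self.futures.iter_mut().map(|(result, _)| result.take().unwrap()).collect();
            PollState::Ready(values)
        } else {
            PollState::NotReady
        }
    }
}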
c-async-await—concurrent futures 163 The main function will be the same, as well as the imports and declarations at the start of the file, so I'll only present the coroutine/wait functions that we've changed: coroutine fn request(i: usize) { let path = format!("/{}/Hello World{i}", i * 1000); let txt = Http::get(&path). wait; println!("{txt}"); } coroutine fn async_main() { println!("Program starting"); let mut futures = vec![]; for i in 0..5 { futures. push(request(i)); } future::join_all(futures). wait; } Note In the repository, you'll find the correct code to put in main. rs in ch07/c-async-await/ original_main. rs if you ever lose track of it with all the copy/pasting we're doing. Now we have two coroutine/wait functions. async_main stores a set of coroutines created by request in a Vec<T: Future>. Then it creates a Join All future and calls wait on it. The next coroutine/wait function is request, which takes an integer as input and uses that to create GET requests. This coroutine will in turn wait for the response and print out the result once it arrives. Since we create the requests with delays of 0, 1, 2, 3, 4 seconds, we should expect the entire program to finish in just over four seconds because all the tasks will be in progress concurrently. The ones with short delays will be finished by the time the task with a four-second delay finishes. We can now transform our coroutine/wait functions into state machines by making sure we're in the folder ch07/c-async-await and writing corofy ./src/main. rs. You should now see a file called main_corofied. rs in the src folder. Copy its contents and replace what's in main. rs with it.
Coroutines and async/await 164 If you run the program by writing cargo run, you should get the following output: Program starting FIRST POLL-START OPERATION FIRST POLL-START OPERATION FIRST POLL-START OPERATION FIRST POLL-START OPERATION FIRST POLL-START OPERATION HTTP/1. 1 200 OK content-length: 11 connection: close content-type: text/plain; charset=utf-8 date: Tue, xx xxx xxxx 21:11:36 GMT Hello World0 HTTP/1. 1 200 OK content-length: 11 connection: close content-type: text/plain; charset=utf-8 date: Tue, xx xxx xxxx 21:11:37 GMT Hello World1 HTTP/1. 1 200 OK content-length: 11 connection: close content-type: text/plain; charset=utf-8 date: Tue, xx xxx xxxx 21:11:38 GMT Hello World2 HTTP/1. 1 200 OK content-length: 11 connection: close content-type: text/plain; charset=utf-8 date: Tue, xx xxx xxxx 21:11:39 GMT Hello World3 HTTP/1. 1 200 OK content-length: 11 connection: close content-type: text/plain; charset=utf-8 date: Tue, xx xxx xxxx 21:11:40 GMT Hello World4 ELAPSED TIME: 4. 0084987
Final thoughts 165 The thing to make a note of here is the elapsed time. It's now just over four seconds, just like we expected it would be when our futures run concurrently. If we take a look at how coroutine/await changed the experience of writing coroutines from a programmer's perspective, we'll see that we're much closer to our goal now: Efficient : State machines require no context switches and only save/restore the data associated with that specific task. We have no growing vs segmented stack issues, as they all use the same OS-provided stack. Expressive : We can write code the same way as we do in “normal” Rust, and with compiler support, we can get the same error messages and use the same tooling Easy to use and hard to misuse : This is a point where we probably fall slightly short of a typical fiber/green threads implementation due to the fact that our programs are heavily transformed “behind our backs” by the compiler, which can result in some rough edges. Specifically, you can't call an async function from a normal function and expect anything meaningful to happen; you have to actively poll it to completion somehow, which gets more complex as we start adding runtimes into the mix. However, for the most part, we can write programs just the way we're used to. Final thoughts Before we round off this chapter, I want to point out that it should now be clear to us why coroutines aren't really pre-emptable. If you remember back in Chapter 2, we said that a stackful coroutine (such as our fibers/green threads example) could be pre-empted and its execution could be paused at any point. That's because they have a stack, and pausing a task is as simple as storing the current execution state to the stack and jumping to another task. That's not possible here. The only places we can stop and resume execution are at the pre-defined suspension points that we manually tagged with wait. In theory, if you have a tightly integrated system where you control the compiler, the coroutine definition, the scheduler, and the I/O primitives, you could add additional states to the state machine and create additional points where the task could be suspended/resumed. These suspension points could be opaque to the user and treated differently than normal wait/suspension points. For example, every time you encounter a normal function call, you could add a suspension point (a new state to our state machine) where you check in with the scheduler if the current task has used up its time budget or something like that. If it has, you could schedule another task to run and resume the task at a later point even though this didn't happen in a cooperative manner. However, even though this would be invisible to the user, it's not the same as being able to stop/resume execution from any point in your code. It would also go against the usually implied cooperative nature of coroutines.
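To make that idea a little more concrete, here is a purely illustrative sketch of what such an opaque, tool-inserted suspension point could look like in a generated state machine. Nothing like this exists in our example or in the Rust compiler; the State variants, PreemptableTask, and the time-budget check are all invented for this illustration, and it reuses the simplified Future and PollState types from our example:

use std::time::{Duration, Instant};

enum State {
    Start,
    // BudgetCheck is the opaque suspension point a tool could insert
    // before every chunk of user code.
    BudgetCheck,
    UserCode,
    Resolved,
}

struct PreemptableTask {
    state: State,
    budget: Duration,
}

impl Future for PreemptableTask {
    type Output = ();

    fn poll(&mut self) -> PollState<Self::Output> {
        // Measure how long this task has been running in the current poll.
        let resumed_at = Instant::now();
        loop {
            match self.state {
                State::Start => {
                    self.state = State::BudgetCheck;
                }
                State::BudgetCheck => {
                    if resumed_at.elapsed() > self.budget {
                        // Time budget used up: yield to the scheduler and
                        // resume from this same state on the next poll.
                        break PollState::NotReady;
                    }
                    self.state = State::UserCode;
                }
                State::UserCode => {
                    // ...the code the user actually wrote would run here...
                    self.state = State::Resolved;
                }
                State::Resolved => break PollState::Ready(()),
            }
        }
    }
}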
Coroutines and async/await 166 Summary Good job! In this chapter, we introduced quite a bit of code and set up an example that we'll continue using in the following chapters. So far, we've focused on futures and async/await to model and create tasks that can be paused and resumed at specific points. We know this is a prerequisite to having tasks that are in progress at the same time. We did this by introducing our own simplified Future trait and our own coroutine/ wait syntax that's way more limited than Rust's futures and async/await syntax, but it's easier to understand and get a mental idea of how this works in contrast to fibers/green threads (at least I hope so). We have also discussed the difference between eager and lazy coroutines and how they impact how you achieve concurrency. We took inspiration from Tokio's join_all function and implemented our own version of it. In this chapter, we simply created tasks that could be paused and resumed. There are no event loops, scheduling, or anything like that yet, but don't worry. They're exactly what we'll go through in the next chapter. The good news is that getting a clear idea of coroutines, like we did in this chapter, is one of the most difficult things to do.
8 Runtimes, Wakers, and the Reactor-Executor Pattern In the previous chapter, we created our own pausable tasks (coroutines) by writing them as state machines. We created a common API for these tasks by requiring them to implement the Future trait. We also showed how we can create these coroutines using some keywords and programmatically rewrite them so that we don't have to implement these state machines by hand, and instead write our programs pretty much the same way we normally would. If we stop for a moment and take a bird's eye view over what we got so far, it's conceptually pretty simple: we have an interface for pausable tasks (the Future trait), and we have two keywords (coroutine/wait ) to indicate code segments we want rewritten as a state machine that divides our code into segments we can pause between. However, we have no event loop, and we have no scheduler yet. In this chapter, we'll expand on our example and add a runtime that allows us to run our program efficiently and opens up the possibility to schedule tasks concurrently much more efficiently than what we do now. This chapter will take you on a journey where we implement our runtime in two stages, gradually making it more useful, efficient, and capable. We'll start with a brief overview of what runtimes are and why we want to understand some of their characteristics. We'll build on what we just learned in Chapter 7, and show how we can make it much more efficient and avoid continuously polling the future to make it progress by leveraging the knowledge we gained in Chapter 4. Next, we'll show how we can get a more flexible and loosely coupled design by dividing the runtime into two parts: an executor and a reactor. In this chapter, you will learn about basic runtime design, reactors, executors, wakers, and spawning, and we'll build on a lot of the knowledge we've gained throughout the book. This will be one of the big chapters in this book, not because the topic is too complex or difficult, but because we have quite a bit of code to write. In addition to that, I try to give you a good mental model of what's happening by providing quite a few diagrams and explaining everything very thoroughly. It's
Runtimes, Wakers, and the Reactor-Executor Pattern 168 not one of those chapters you typically blaze through before going to bed, though, but I do promise it's absolutely worth it in the end. The chapter will be divided into the following segments: Introduction to runtimes and why we need them Improving our base example Creating a proper runtime Step 1-Improving our runtime design by adding a Reactor and a Waker Step 2-Implementing a proper Executor Step 3-Implementing a proper Reactor Experimenting with our new runtime So, let's dive right in! Technical requirements The examples in this chapter will build on the code from our last chapter, so the requirements are the same. The examples will all be cross-platform and work on all platforms that Rust ( https://doc. rust-lang. org/beta/rustc/platform-support. html#tier-1-with-host-tools ) and mio (https://github. com/tokio-rs/mio#platforms ) supports. The only thing you need is Rust installed and the repository that belongs to the book downloaded locally. All the code in this chapter will be found in the ch08 folder. To follow the examples step by step, you'll also need corofy installed on your machine. If you didn't install it in Chapter 7, install it now by going into the ch08/corofy folder in the repository and running this command: cargo install--force--path. Alternatively, you can just copy the relevant files in the repository when we come to the points where we use corofy to rewrite our coroutine/wait syntax. Both versions will be available to you there as well. We'll also use delayserver in this example, so you need to open a separate terminal, enter the delayserver folder at the root of the repository, and write cargo run so that it's ready and available for the examples going forward. Remember to change the ports in the code if you for some reason have to change the port delayserver listens on.
Introduction to runtimes and why we need them 169 Introduction to runtimes and why we need them As you know by now, you need to bring your own runtime for driving and scheduling asynchronous tasks in Rust. Runtimes come in many flavors, from the popular Embassy embedded runtime ( https://github. com/embassy-rs/embassy ), which centers more on general multitasking and can replace the need for a real-time operating system (RTOS ) on many platforms, to Tok io (https://github. com/tokio-rs/tokio ), which centers on non-blocking I/O on popular server and desktop operating systems. All runtimes in Rust need to do at least two things: schedule and drive objects implementing Rust's Future trait to completion. Going forward in this chapter, we'll mostly focus on runtimes for doing non-blocking I/O on popular desktop and server operating systems such as Windows, Linux, and mac OS. This is also by far the most common type of runtime most programmers will encounter in Rust. Taking control over how tasks are scheduled is very invasive, and it's pretty much a one-way street. If you rely on a userland scheduler to run your tasks, you cannot, at the same time, use the OS scheduler (without jumping through several hoops), since mixing them in your code will wreak havoc and might end up defeating the whole purpose of writing an asynchronous program. The following diagram illustrates the different schedulers: Figure 8. 1-Task scheduling in a single-threaded asynchronous system
Runtimes, Wakers, and the Reactor-Executor Pattern 170 An example of yielding to the OS scheduler is making a blocking call using the default std::net ::Tcp Stream or std::thread::sleep methods. Even potentially blocking calls using primitives such as Mutex provided by the standard library might yield to the OS scheduler. That's why you'll often find that asynchronous programming tends to color everything it touches, and it's tough to only run a part of your program using async/await. The consequence is that runtimes must use a non-blocking version of the standard library. In theory, you could make one non-blocking version of the standard library that all runtimes use, and that was one of the goals of the async_std initiative (https://book. async. rs/introduction ). However, having the community agree upon one way to solve this task was a tall order and one that hasn't really come to fruition yet. Before we start implementing our examples, we'll discuss the overall design of a typical async runtime in Rust. Most runtimes such as Tokio, Smol, or async-std will divide their runtime into two parts. The part that tracks events we're waiting on and makes sure to wait on notifications from the OS in an efficient manner is often called the reactor or driver. The part that schedules tasks and polls them to completion is called the executor. Let's take a high-level look at this design so that we know what we'll be implementing in our example. Reactors and executors Dividing the runtime into two distinct parts makes a lot of sense when we take a look at how Rust models asynchronous tasks. If you read the documentation for Future (https://doc. rust-lang. org/std/future/trait. Future. html ) and Waker (https://doc. rust-lang. org/std/task/struct. Waker. html ), you'll see that Rust doesn't only define a Future trait and a Waker type but also comes with important information on how they're supposed to be used. One example of this is that Future traits are inert, as we covered in Chapter 6. Another example is that a call to Waker::wake will guarantee at least one call to Future::poll on the corresponding task. So, already by reading the documentation, you will see that there is at least some thought put into how runtimes should behave. The reason for learning this pattern is that it's almost a glove-to-hand fit for Rust's asynchronous model. Since many readers, including me, will not have English as a first language, I'll explain the names here at the start since, well, they seem to be easy to misunderstand. If the name reactor gives you associations with nuclear reactors, and you start thinking of reactors as something that powers, or drives, a runtime, drop that thought right now. A reactor is simply something that reacts to a whole set of incoming events and dispatches them one by one to a handler. It's an event loop, and in our case, it dispatches events to an executor. Events that are handled by a reactor could
Improving our base example 171 be anything from a timer that expires, an interrupt if you write programs for embedded systems, or an I/O event such as a READABLE event on Tcp Stream. Y ou could have several kinds of reactors running in the same runtime. If the name executor gives you associations to executioners (the medieval times kind) or executables, drop that thought as well. If you look up what an executor is, it's a person, often a lawyer, who administers a person's will. Most often, since that person is dead. Which is also the point where whatever mental model the naming suggests to you falls apart since nothing, and no one, needs to come in harm's way for the executor to have work to do in an asynchronous runtime, but I digress. The important point is that an executor simply decides who gets time on the CPU to progress and when they get it. The executor must also call Future::poll and advance the state machines to their next state. It's a type of scheduler. It can be frustrating to get the wrong idea from the start since the subject matter is already complex enough without thinking about how on earth nuclear reactors and executioners fit in the whole picture. Since reactors will respond to events, they need some integration with the source of the event. If we continue using Tcp Stream as an example, something will call read or write on it, and at that point, the reactor needs to know that it should track certain events on that source. For this reason, non-blocking I/O primitives and reactors need tight integration, and depending on how you look at it, the I/O primitives will either have to bring their own reactor or you'll have a reactor that provides I/O primitives such as sockets, ports, and streams. Now that we've covered some of the overarching design, we can start writing some code. Runtimes tend to get complex pretty quickly, so to keep this as simple as possible, we'll avoid any error handling in our code and use unwrap or expect for everything. We'll also choose simplicity over cleverness and readability over efficiency to the best of our abilities. Our first task will be to take the first example we wrote in Chapter 7 and improve it by avoiding having to actively poll it to make progress. Instead, we lean on what we learned about non-blocking I/O and epoll in the earlier chapters. Improving our base example We'll create a version of the first example in Chapter 7 since it's the simplest one to start with. Our only focus is showing how to schedule and drive the runtimes more efficiently. We start with the following steps: 1. Create a new project and name it a-runtime (alternatively, navigate to ch08/a-runtime in the book's repository).
Runtimes, Wakers, and the Reactor-Executor Pattern 172
2. Copy the future.rs and http.rs files in the src folder from the first project we created in Chapter 7, named a-coroutine (alternatively, copy the files from ch07/a-coroutine in the book's repository) to the src folder in our new project.
3. Make sure to add mio as a dependency by adding the following to Cargo.toml:
[dependencies]
mio = { version = "0.8", features = ["net", "os-poll"] }
4. Create a new file in the src folder called runtime.rs.
We'll use corofy to change the following coroutine/wait program into its state machine representation that we can run. In src/main.rs, add the following code:
ch08/a-runtime/src/main.rs
mod future;
mod http;
mod runtime;
use future::{Future, PollState};
use runtime::Runtime;

fn main() {
    let future = async_main();
    let mut runtime = Runtime::new();
    runtime.block_on(future);
}

coroutine fn async_main() {
    println!("Program starting");
    let txt = http::Http::get("/600/HelloAsyncAwait").wait;
    println!("{txt}");
    let txt = http::Http::get("/400/HelloAsyncAwait").wait;
    println!("{txt}");
}
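Before you run corofy, it can help to have a rough idea of what this coroutine/wait program turns into. The following is not the actual corofy output (that's what you'll find in main_corofied.rs); it's a hand-simplified sketch of the general shape of the state machine such a tool generates, and it assumes the Future/PollState types and the http module from Chapter 7 are in scope:

enum State {
    Start,
    Wait1(Box<dyn Future<Output = String>>),
    Wait2(Box<dyn Future<Output = String>>),
    Resolved,
}

struct Coroutine {
    state: State,
}

impl Future for Coroutine {
    type Output = String;

    fn poll(&mut self) -> PollState<Self::Output> {
        loop {
            match self.state {
                State::Start => {
                    println!("Program starting");
                    let fut = Box::new(http::Http::get("/600/HelloAsyncAwait"));
                    self.state = State::Wait1(fut);
                }
                State::Wait1(ref mut fut) => match fut.poll() {
                    PollState::Ready(txt) => {
                        println!("{txt}");
                        let next = Box::new(http::Http::get("/400/HelloAsyncAwait"));
                        self.state = State::Wait2(next);
                    }
                    // NotReady propagates straight up to whoever polled us
                    PollState::NotReady => break PollState::NotReady,
                },
                State::Wait2(ref mut fut) => match fut.poll() {
                    PollState::Ready(txt) => {
                        println!("{txt}");
                        self.state = State::Resolved;
                        break PollState::Ready(String::new()),
                    }
                    PollState::NotReady => break PollState::NotReady,
                },
                State::Resolved => panic!("Polled a resolved future"),
            }
        }
    }
}

Each wait point in async_main becomes a Wait state that stores the child future, and every call to poll resumes the state machine from wherever it last stopped.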
Improving our base example 173
This coroutine/wait program is basically the same one we created in Chapter 7, only this time, we create it from our coroutine/wait syntax instead of writing the state machine by hand. Next, we need to transform this into code the compiler accepts by using corofy since the compiler doesn't recognize our own coroutine/wait syntax:
1. If you're in the root folder of a-runtime, run corofy ./src/main.rs.
2. You should now have a file that's called main_corofied.rs.
3. Delete the code in main.rs and copy the contents of main_corofied.rs into main.rs.
4. You can now delete main_corofied.rs since we won't need it going forward.
If everything is done right, the project structure should now look like this:
src
|--future.rs
|--http.rs
|--main.rs
|--runtime.rs
Tip
You can always refer to the book's repository to make sure everything is correct. The correct example is located in the ch08/a-runtime folder. In the repository, you'll also find a file called main_orig.rs in the root folder that contains the coroutine/wait program if you want to rerun it or have problems getting everything working correctly.
Design
Before we go any further, let's visualize how our system is currently working if we consider it with two futures created by coroutine/wait and two calls to Http::get. The loop that polls our Future trait to completion in the main function takes the role of the executor in our visualization, and as you see, we have a chain of futures consisting of:
1. Non-leaf futures created by async/await (or coroutine/wait in our example) that simply call poll on the next future until they reach a leaf future
2. Leaf futures that poll an actual source that's either Ready or NotReady (a sketch of such a leaf future follows below)
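To make the distinction concrete, here is a minimal sketch of a leaf future. This is for illustration only; the HttpGetFuture in our http module differs in its details, and the sketch assumes the Future/PollState types from Chapter 7 and mio's TcpStream are available:

use std::io::{ErrorKind, Read};

// A leaf future wraps an actual source (here, a non-blocking socket) and
// translates its state into Ready/NotReady when polled.
struct ReadFuture {
    stream: mio::net::TcpStream,
    buf: Vec<u8>,
}

impl Future for ReadFuture {
    type Output = String;

    fn poll(&mut self) -> PollState<Self::Output> {
        let mut chunk = [0u8; 4096];
        loop {
            match self.stream.read(&mut chunk) {
                // Reading 0 bytes means the other side closed the connection, so we're done
                Ok(0) => break PollState::Ready(String::from_utf8_lossy(&self.buf).to_string()),
                Ok(n) => self.buf.extend_from_slice(&chunk[..n]),
                // WouldBlock means the data isn't there yet, so we report NotReady
                Err(e) if e.kind() == ErrorKind::WouldBlock => break PollState::NotReady,
                Err(e) => panic!("{e:?}"),
            }
        }
    }
}

A non-leaf future never touches a source directly; it only forwards poll to a child like this one and reacts to the result.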
Runtimes, Wakers, and the Reactor-Executor Pattern 174
The following diagram shows a simplified overview of our current design:
Figure 8.2 - Executor and Future chain: current design
If we take a closer look at the future chain, we can see that when a future is polled, it polls all its child futures until it reaches a leaf future that represents something we're actually waiting on. If that future returns NotReady, it will propagate that up the chain immediately. However, if it returns Ready, the state machine will advance all the way until the next time a future returns NotReady. The top-level future will not resolve until all child futures have returned Ready.
Improving our base example 175
The next diagram takes a closer look at the future chain and gives a simplified overview of how it works:
Figure 8.3 - Future chain: a detailed view
The first improvement we'll make is to avoid the need for continuous polling of our top-level future to drive it forward.
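For contrast, driving the future before this improvement looked roughly like the following. This is a sketch of the Chapter 7 approach rather than its exact code, and the sleep duration is an arbitrary placeholder; the point is that we keep waking up to poll even when nothing is ready:

use std::thread;
use std::time::Duration;

fn main() {
    let mut future = async_main();
    loop {
        match future.poll() {
            // Nothing is ready, so we nap for a bit and try again, wasting wakeups
            PollState::NotReady => thread::sleep(Duration::from_millis(100)),
            PollState::Ready(_) => break,
        }
    }
}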
Runtimes, Wakers, and the Reactor-Executor Pattern 176
We'll change our design so that it looks more like this:
Figure 8.4 - Executor and Future chain: design 2
In this design, we use the knowledge we gained in Chapter 4, but instead of relying on epoll directly, we'll use mio's cross-platform abstraction. The way it works should be well known to us by now since we already implemented a simplified version of it earlier. Instead of continuously looping and polling our top-level future, we'll register interest with the Poll instance, and when we get a NotReady result returned, we wait on Poll. This will put the thread to sleep, and no work will be done until the OS wakes us up again to notify us that an event we're waiting on is ready. This design will be much more efficient and scalable.
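To make the mio side of this concrete, here is a minimal, self-contained sketch of the two calls the design hinges on: registering interest in events on a source and then blocking in Poll::poll until the OS reports something. The address and token are placeholders, and this is not yet the code for this chapter's runtime:

use mio::net::TcpStream;
use mio::{Events, Interest, Poll, Token};

fn main() {
    let mut poll = Poll::new().unwrap();

    // Placeholder address (for example, the local delayserver used in earlier chapters)
    let addr = "127.0.0.1:8080".parse().unwrap();
    let mut stream = TcpStream::connect(addr).unwrap();

    // Ask the OS (through mio) to notify us when the stream is writable
    // (connection established) or readable (data has arrived)
    poll.registry()
        .register(&mut stream, Token(0), Interest::READABLE | Interest::WRITABLE)
        .unwrap();

    // Park the thread until at least one of the events we registered for occurs
    let mut events = Events::with_capacity(100);
    poll.poll(&mut events, None).unwrap();

    for event in &events {
        println!("woke up on token {:?}", event.token());
    }
}

In our runtime, the registration half will live in the http module's leaf future, while the waiting half lives in the executor's block_on loop.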
Improving our base example 177
Changing the current implementation
Now that we have an overview of our design and know what to do, we can go on and make the necessary changes to our program, so let's go through each file we need to change. We'll start with main.rs.
main.rs
We already made some changes to main.rs when we ran corofy on our updated coroutine/wait example. I'll just point out the change here so that you don't miss it since there is really nothing more we need to change here. Instead of polling the future in the main function, we created a new Runtime struct and passed the future as an argument to the Runtime::block_on method. There are no more changes that we need to make in this file. Our main function changed to this:
ch08/a-runtime/src/main.rs
fn main() {
    let future = async_main();
    let mut runtime = Runtime::new();
    runtime.block_on(future);
}
The logic we had in the main function has now moved into the runtime module, and that's also where we need to change the code that polls the future to completion from what we had earlier. The next step will, therefore, be to open runtime.rs.
runtime.rs
The first thing we do in runtime.rs is pull in the dependencies we need:
ch08/a-runtime/src/runtime.rs
use crate::future::{Future, PollState};
use mio::{Events, Poll, Registry};
use std::sync::OnceLock;
Runtimes, Wakers, and the Reactor-Executor Pattern 178
The next step is to create a static variable called REGISTRY. If you remember, Registry is the way we register interest in events with our Poll instance. We want to register interest in events on our TcpStream when making the actual HTTP GET request. We could have Http::get accept a Registry struct that it stored for later use, but we want to keep the API clean, and instead, we want to access Registry inside HttpGetFuture without having to pass it around as a reference:
ch08/a-runtime/src/runtime.rs
static REGISTRY: OnceLock<Registry> = OnceLock::new();

pub fn registry() -> &'static Registry {
    REGISTRY.get().expect("Called outside a runtime context")
}
We use std::sync::OnceLock so that we can initialize REGISTRY when the runtime starts, thereby preventing anyone (including ourselves) from calling Http::get without having a Runtime instance running. If we did call Http::get without having our runtime initialized, it would panic since the only public way to access it outside the runtime module is through the pub fn registry() {...} function, and that call would fail.
Note
We might as well have used a thread-local static variable using the thread_local! macro from the standard library, but we'll need to access this from multiple threads when we expand the example later in this chapter, so we start the design with this in mind.
The next thing we add is a Runtime struct:
ch08/a-runtime/src/runtime.rs
pub struct Runtime {
    poll: Poll,
}
For now, our runtime will only store a Poll instance. The interesting part is in the implementation of Runtime. Since it's not too long, I'll present the whole implementation here and explain it next:
ch08/a-runtime/src/runtime.rs
impl Runtime {
    pub fn new() -> Self {
        let poll = Poll::new().unwrap();
        let registry = poll.registry().try_clone().unwrap();
        REGISTRY.set(registry).unwrap();
Improving our base example 179
        Self { poll }
    }

    pub fn block_on<F>(&mut self, future: F)
    where
        F: Future<Output = String>,
    {
        let mut future = future;
        loop {
            match future.poll() {
                PollState::NotReady => {
                    println!("Schedule other tasks\n");
                    let mut events = Events::with_capacity(100);
                    self.poll.poll(&mut events, None).unwrap();
                }
                PollState::Ready(_) => break,
            }
        }
    }
}
The first thing we do is create a new function. This will initialize our runtime and set up everything we need. We create a new Poll instance, and from the Poll instance, we get an owned version of Registry. If you remember from Chapter 4, this is one of the methods we mentioned but didn't implement in our example. However, here, we take advantage of the ability to split the two pieces up. We store Registry in the REGISTRY global variable so that we can access it from the http module later on without having a reference to the runtime itself.
The next function is the block_on function. I'll go through it step by step:
1. First of all, this function takes a generic argument and will block on anything that implements our Future trait with an Output type of String (remember that this is currently the only kind of Future trait we support, so we'll just return an empty string if there is no data to return).
2. Instead of having to take mut future as an argument, we define a variable that we declare as mut in the function body. It's just to keep the API slightly cleaner and avoid us having to make minor changes later on.
3. Next, we create a loop. We'll loop until the top-level future we received returns Ready. If the future returns NotReady, we write out a message letting us know that at this point we could do other things, such as processing something unrelated to the future or, more likely, polling another top-level future if our runtime supported multiple top-level futures (don't worry, it will be explained later on).