Asynchronous Programming in Rust
Learn asynchronous programming by building working examples of futures, green threads, and runtimes
Carl Fredrik Samson
Asynchronous Programming in Rust

Copyright © 2024 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

Publishing Product Manager: Samriddhi Murarka
Group Product Manager: Kunal Sawant
Senior Editor: Kinnari Chohan
Technical Editor: Rajdeep Chakraborty
Copy Editor: Safis Editing
Project Coordinator: Manisha Singh
Indexer: Rekha Nair
Production Designer: Joshua Misquitta
Marketing DevRel Coordinator: Sonia Chauhan

First published: February 2024
Production reference: 2020224

Published by Packt Publishing Ltd.
Livery Place
35 Livery Street
Birmingham
B3 2PB, UK.

ISBN 978-1-80512-813-7

www.packtpub.com
To my family—my brother, my parents, and especially my beloved wife and fantastic children who make every day an absolute joy.

– Carl Fredrik Samson
Contributors

About the author

Carl Fredrik Samson is a popular technology writer and has been active in the Rust community since 2018. He has an MSc in Business Administration where he specialized in strategy and finance. When not writing, he's a father of two children and the CEO of a company with 300 employees. He's been interested in different kinds of technologies his whole life and his programming experience ranges from programming against old IBM mainframes to modern cloud computing, using everything from assembly to Visual Basic for Applications. He has contributed to several open source projects, including the official documentation for asynchronous Rust.

I want to thank the Rust community for being so constructive, positive, and welcoming. This book would not have happened had it not been for all the positive and insightful interaction with the community. A special thanks goes to the implementors of all the libraries that underpin the async ecosystem today, such as mio, Tokio, and async-std. I also want to thank my editor, Kinnari, who has been extraordinarily patient and helpful during the process of writing this book.
About the reviewers

Evgeni Pirianov is an experienced senior software engineer with deep expertise in backend technologies, Web3, and blockchain. Evgeni graduated with a degree in Engineering from Imperial College London and worked for a few years developing non-linear solvers in C++. Ever since, he has been at the forefront of architecting, designing, and implementing decentralized applications in the fields of DeFi and the Metaverse. Evgeni's passion for Rust is unsurpassed, and he is a true believer in its bright future and wide range of applications.

Yage Hu is a software engineer specializing in systems programming and computer architecture. He has cut code in companies such as Uber, Amazon, and Meta and is currently conducting systems research with WebAssembly and Rust. Yage and his wife have just welcomed their first child, Maxine.
Table of Contents

Preface

Part 1: Asynchronous Programming Fundamentals

1. Concurrency and Asynchronous Programming: a Detailed Overview
Technical requirements
An evolutionary journey of multitasking
Non-preemptive multitasking
Preemptive multitasking
Hyper-threading
Multicore processors
Do you really write synchronous code?
Concurrency versus parallelism
The mental model I use
Let's draw some parallels to process economics
Concurrency and its relation to I/O
What about threads provided by the operating system?
Choosing the right reference frame
Asynchronous versus concurrent
The role of the operating system
Concurrency from the operating system's perspective
Teaming up with the operating system
Communicating with the operating system
The CPU and the operating system
Down the rabbit hole
How does the CPU prevent us from accessing memory we're not supposed to access?
But can't we just change the page table in the CPU?
Interrupts, firmware, and I/O
A simplified overview
Interrupts
Firmware
Summary
2. How Programming Languages Model Asynchronous Program Flow
Definitions
Threads
Threads provided by the operating system
Creating new threads takes time
Each thread has its own stack
Context switching
Scheduling
The advantage of decoupling asynchronous operations from OS threads
Example
Fibers and green threads
Each stack has a fixed space
Context switching
Scheduling
FFI
Callback based approaches
Coroutines: promises and futures
Coroutines and async/await
Summary

3. Understanding OS-Backed Event Queues, System Calls, and Cross-Platform Abstractions
Technical requirements
Running the Linux examples
Why use an OS-backed event queue?
Blocking I/O
Non-blocking I/O
Event queuing via epoll/kqueue and IOCP
Readiness-based event queues
Completion-based event queues
epoll, kqueue, and IOCP
Cross-platform event queues
System calls, FFI, and cross-platform abstractions
The lowest level of abstraction
The next level of abstraction
The highest level of abstraction
Summary
Part 2: Event Queues and Green Threads

4. Create Your Own Event Queue
Technical requirements
Design and introduction to epoll
Is all I/O blocking?
The ffi module
Bitflags and bitmasks
Level-triggered versus edge-triggered events
The Poll module
The main program
Summary

5. Creating Our Own Fibers
Technical requirements
How to use the repository alongside the book
Background information
Instruction sets, hardware architectures, and ABIs
The System V ABI for x86-64
A quick introduction to Assembly language
An example we can build upon
Setting up our project
An introduction to Rust inline assembly macro
Running our example
The stack
What does the stack look like?
Stack sizes
Implementing our own fibers
Implementing the runtime
Guard, skip, and switch functions
Finishing thoughts
Summary

Part 3: Futures and async/await in Rust

6. Futures in Rust
What is a future?
Leaf futures
Non-leaf futures
A mental model of an async runtime
What the Rust language and standard library take care of
I/O vs CPU-intensive tasks
Summary

7. Coroutines and async/await
Technical requirements
Introduction to stackless coroutines
An example of hand-written coroutines
Futures module
HTTP module
Do all futures have to be lazy?
Creating coroutines
async/await
coroutine/wait
corofy—the coroutine preprocessor
b-async-await—an example of a coroutine/wait transformation
c-async-await—concurrent futures
Final thoughts
Summary

8. Runtimes, Wakers, and the Reactor-Executor Pattern
Technical requirements
Introduction to runtimes and why we need them
Reactors and executors
Improving our base example
Design
Changing the current implementation
Creating a proper runtime
Step 1 – Improving our runtime design by adding a Reactor and a Waker
Creating a Waker
Changing the Future definition
Step 2 – Implementing a proper Executor
Step 3 – Implementing a proper Reactor
Experimenting with our new runtime
An example using concurrency
Running multiple futures concurrently and in parallel
Summary
9. Coroutines, Self-Referential Structs, and Pinning
Technical requirements
Improving our example 1 – variables
Setting up the base example
Improving our base example
Improving our example 2 – references
Improving our example 3 – this is... not... good...
Discovering self-referential structs
What is a move?
Pinning in Rust
Pinning in theory
Definitions
Pinning to the heap
Pinning to the stack
Pin projections and structural pinning
Improving our example 4 – pinning to the rescue
future.rs
http.rs
main.rs
executor.rs
Summary

10. Creating Your Own Runtime
Technical requirements
Setting up our example
main.rs
future.rs
http.rs
executor.rs
reactor.rs
Experimenting with our runtime
Challenges with asynchronous Rust
Explicit versus implicit reactor instantiation
Ergonomics versus efficiency and flexibility
Common traits that everyone agrees about
Async drop
The future of asynchronous Rust
Summary

Epilogue
Index
Other Books You May Enjoy
Preface

The content in this book was initially written as a series of shorter books for programmers wanting to learn asynchronous programming from the ground up using Rust. I found the existing material I came upon at the time to be in equal parts frustrating, enlightening, and confusing, so I wanted to do something about that. Those shorter books became popular, so when I got the chance to write everything a second time, improve the parts that I was happy with, and completely rewrite everything else and put it in a single, coherent book, I just had to do it. The result is right in front of you.

People start programming for a variety of different reasons. Scientists start programming to model problems and perform calculations. Business experts create programs that solve specific problems that help their businesses. Some people start programming as a hobby or in their spare time. Common to these programmers is that they learn programming from the top down. Most of the time, this is perfectly fine, but on the topic of asynchronous programming in general, and Rust in particular, there is a clear advantage to learning about the topic from first principles, and this book aims to provide a means to do just that.

Asynchronous programming is a way to write programs where you divide your program into tasks that can be stopped and resumed at specific points. This, in turn, allows a language runtime, or a library, to drive and schedule these tasks so their progress interleaves. Asynchronous programming will, by its very nature, affect the entire program flow, and it's very invasive. It rewrites, reorders, and schedules the program you write in a way that's not always obvious to you as a programmer.

Most programming languages try to make asynchronous programming so easy that you don't really have to understand how it works just to be productive in it. You can get quite productive writing asynchronous Rust without really knowing how it works as well, but Rust is more explicit and surfaces more complexity to the programmer than most other languages. You will have a much easier time handling this complexity if you get a deep understanding of asynchronous programming in general and what really happens when you write asynchronous Rust.

Another huge upside is that learning from first principles results in knowledge that is applicable way beyond Rust, and it will, in turn, make it easier to pick up asynchronous programming in other languages as well. I would even go so far as to say that most of this knowledge will be useful even in your day-to-day programming. At least, that's how it's been for me.
I want this book to feel like you're joining me on a journey, where we build our knowledge topic by topic and learn by creating examples and experiments along the way. I don't want this book to feel like a lecturer simply telling you how everything works. This book is created for people who are curious by nature, the kind of programmers who want to understand the systems they use, and who like creating small and big experiments as a way to explore and learn.

Who this book is for

This book is for developers with some prior programming experience who want to learn asynchronous programming from the ground up so they can be proficient in async Rust and be able to participate in technical discussions on the subject. The book is perfect for those who like writing working examples they can pick apart, expand, and experiment with.

There are two kinds of personas that I feel this book is especially relevant to:

Developers coming from higher-level languages with a garbage collector, interpreter, or runtime, such as C#, Java, JavaScript, Python, Ruby, Swift, or Go. Programmers who have extensive experience with asynchronous programming in any of these languages but want to learn it from the ground up and programmers with no experience with asynchronous programming should both find this book equally useful.

Developers with experience in languages such as C or C++ who have limited experience with asynchronous programming.

What this book covers

Chapter 1, Concurrency and Asynchronous Programming: A Detailed Overview, provides a short history leading up to the type of asynchronous programming we use today. We give several important definitions and provide a mental model that explains what kind of problems asynchronous programming really solves, and how concurrency differs from parallelism. We also cover the importance of choosing the correct reference frame when discussing asynchronous program flow, and we go through several important and fundamental concepts about CPUs, operating systems, hardware, interrupts, and I/O.

Chapter 2, How Programming Languages Model Asynchronous Program Flow, narrows the scope from the previous chapter and focuses on the different ways programming languages deal with asynchronous programming. It starts by giving several important definitions before explaining stackful and stackless coroutines, OS threads, green threads, fibers, callbacks, promises, futures, and async/await.

Chapter 3, Understanding OS-Backed Event Queues, System Calls, and Cross-Platform Abstractions, explains what epoll, kqueue, and IOCP are and how they differ. It prepares us for the next chapters by giving an introduction to syscalls, FFI, and cross-platform abstractions.
Chapter 4, Create Your Own Event Queue, is the chapter where you create your own event queue that mimics the API of mio (the popular Rust library that underpins much of the current async ecosystem). The example will center around epoll and go into quite a bit of detail on how it works.

Chapter 5, Creating Our Own Fibers, walks through an example where we create our own kind of stackful coroutines called fibers. They're the same kind of green threads that Go uses and show one of the most widespread and popular alternatives to the type of abstraction Rust uses with futures and async/await today. Rust used this kind of abstraction in its early days before it reached 1.0, so it's also a part of Rust's history. This chapter will also cover quite a few general programming concepts, such as stacks, assembly, Application Binary Interfaces (ABIs), and instruction set architectures (ISAs), that are useful beyond the context of asynchronous programming as well.

Chapter 6, Futures in Rust, gives a short introduction and overview of futures, runtimes, and asynchronous programming in Rust.

Chapter 7, Coroutines and async/await, is a chapter where you write your own coroutines that are simplified versions of the ones created by async/await in Rust today. We'll write a few of them by hand and introduce a new syntax that allows us to programmatically rewrite what look like regular functions into the coroutines we wrote by hand.

Chapter 8, Runtimes, Wakers, and the Reactor-Executor Pattern, introduces runtimes and runtime design. By iterating on the example we created in Chapter 7, we'll create a runtime for our coroutines that we'll gradually improve. We'll also do some experiments with our runtime once it's done to better understand how it works.

Chapter 9, Coroutines, Self-Referential Structs, and Pinning, is the chapter where we introduce self-referential structs and pinning in Rust. By improving our coroutines further, we'll experience first-hand why we need something such as Pin, and how it helps us solve the problems we encounter.

Chapter 10, Create Your Own Runtime, is the chapter where we finally put all the pieces together. We'll improve the same example from the previous chapters further so we can run Rust futures, which will allow us to use the full power of async/await and asynchronous Rust. We'll also do a few experiments that show some of the difficulties with asynchronous Rust and how we can best solve them.

To get the most out of this book

You should have some prior programming experience and, preferably, some knowledge about Rust. Reading the free, and excellent, introductory book The Rust Programming Language (https://doc.rust-lang.org/book/) should give you more than enough knowledge about Rust to follow along, since any advanced topics will be explained step by step.
The ideal way to read this book is to have the book and a code editor open side by side. You should also have the accompanying repository available so you can refer to that if you encounter any issues.

Software/hardware covered in the book: Rust (version 1.51 or later)
Operating system requirements: Windows, macOS, or Linux

You need Rust installed. If you haven't already, follow the instructions here: https://www.rust-lang.org/tools/install.

Some examples will require you to use Windows Subsystem for Linux (WSL) on Windows. If you're following along on a Windows machine, I recommend that you enable WSL (https://learn.microsoft.com/en-us/windows/wsl/install) now and install Rust by following the instructions for installing Rust on WSL here: https://www.rust-lang.org/tools/install.

If you are using the digital version of this book, we advise you to type the code yourself or access the code from the book's GitHub repository (a link is available in the next section). Doing so will help you avoid any potential errors related to the copying and pasting of code.

The accompanying repository is organized in the following fashion:

Code that belongs to a specific chapter is in that chapter's folder (e.g., ch01).

Each example is organized as a separate crate.

The letters in front of the example names indicate in what order the different examples are presented in the book. For example, the a-runtime example comes before the b-reactor-executor example. This way, they will be ordered chronologically (at least by default on most systems).

Some examples have a version postfixed with -bonus. These versions will be mentioned in the book text and often contain a specific variant of the example that might be interesting to check out but is not important to the topic at hand.

Download the example code files

You can download the example code files for this book from GitHub at https://github.com/PacktPublishing/Asynchronous-Programming-in-Rust. If there's an update to the code, it will be updated in the GitHub repository.

We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!
Conventions used

There are a number of text conventions used throughout this book.

Code in text: Indicates code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles. Here is an example: “So, now we have created our own async runtime that uses Rust's Futures, Waker, Context, and async/await.”

A block of code is set as follows:

pub trait Future {
    type Output;
    fn poll(&mut self) -> PollState<Self::Output>;
}

When we wish to draw your attention to a particular part of a code block, the relevant lines or items are set in bold:

struct Coroutine0 {
    stack: Stack0,
    state: State0,
}

Any command-line input or output is written as follows:

$ cargo run

Tips or important notes

Appear like this.

Get in touch

Feedback from our readers is always welcome.

General feedback: If you have questions about any aspect of this book, email us at customercare@packtpub.com and mention the book title in the subject of your message.

Errata: Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you have found a mistake in this book, we would be grateful if you would report this to us. Please visit www.packtpub.com/support/errata and fill in the form.
Piracy: If you come across any illegal copies of our works in any form on the internet, we would be grateful if you would provide us with the location address or website name. Please contact us at copyright@packt.com with a link to the material.

If you are interested in becoming an author: If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, please visit authors.packtpub.com.

Share your thoughts

Once you've read Asynchronous Programming in Rust, we'd love to hear your thoughts! Please click here to go straight to the Amazon review page for this book and share your feedback.

Your review is important to us and the tech community and will help us make sure we're delivering excellent quality content.
Download a free PDF copy of this book

Thanks for purchasing this book!

Do you like to read on the go but are unable to carry your print books everywhere? Is your eBook purchase not compatible with the device of your choice?

Don't worry, now with every Packt book you get a DRM-free PDF version of that book at no cost.

Read anywhere, any place, on any device. Search, copy, and paste code from your favorite technical books directly into your application.

The perks don't stop there, you can get exclusive access to discounts, newsletters, and great free content in your inbox daily.

Follow these simple steps to get the benefits:

1. Scan the QR code or visit the link below: https://packt.link/free-ebook/9781805128137
2. Submit your proof of purchase
3. That's it! We'll send your free PDF and other benefits to your email directly
Part 1: Asynchronous Programming Fundamentals

In this part, you'll receive a thorough introduction to concurrency and asynchronous programming. We'll also explore various techniques that programming languages employ to model asynchrony, examining the most popular ones and covering some of the pros and cons associated with each. Finally, we'll explain the concept of OS-backed event queues, such as epoll, kqueue, and IOCP, detailing how system calls are used to interact with the operating system and addressing the challenges encountered in creating cross-platform abstractions like mio.

This section comprises the following chapters:

Chapter 1, Concurrency and Asynchronous Programming: A Detailed Overview
Chapter 2, How Programming Languages Model Asynchronous Program Flow
Chapter 3, Understanding OS-Backed Event Queues, System Calls, and Cross-Platform Abstractions
1
Concurrency and Asynchronous Programming: a Detailed Overview

Asynchronous programming is one of those topics many programmers find confusing. You come to the point when you think you've got it, only to later realize that the rabbit hole is much deeper than you thought. If you participate in discussions, listen to enough talks, and read about the topic on the internet, you'll probably also come across statements that seem to contradict each other. At least, this describes how I felt when I first was introduced to the subject.

The cause of this confusion is often a lack of context, or authors assuming a specific context without explicitly stating so, combined with terms surrounding concurrency and asynchronous programming that are rather poorly defined.

In this chapter, we'll be covering a lot of ground, and we'll divide the content into the following main topics:

Async history
Concurrency and parallelism
The operating system and the CPU
Interrupts, firmware, and I/O

This chapter is general in nature. It doesn't specifically focus on Rust, or any specific programming language for that matter, but it's the kind of background information we need to go through so we know that everyone is on the same page going forward. The upside is that this will be useful no matter what programming language you use. In my eyes, that fact also makes this one of the most interesting chapters in this book.
There's not a lot of code in this chapter, so we're off to a soft start. It's a good time to make a cup of tea, relax, and get comfortable, as we're about to start this journey together.

Technical requirements

All examples will be written in Rust, and you have two alternatives for running the examples:

Write and run the examples we'll write on the Rust playground
Install Rust on your machine and run the examples locally (recommended)

The ideal way to read this chapter is to clone the accompanying repository (https://github.com/PacktPublishing/Asynchronous-Programming-in-Rust/tree/main/ch01/a-assembly-dereference) and open the ch01 folder and keep it open while you read the book. There, you'll find all the examples we write in this chapter and even some extra information that you might find interesting as well. You can of course also go back to the repository later if you don't have that accessible right now.

An evolutionary journey of multitasking

In the beginning, computers had one CPU that executed a set of instructions written by a programmer one by one. No operating system (OS), no scheduling, no threads, no multitasking. This was how computers worked for a long time. We're talking back when a program was assembled in a deck of punched cards, and you got in big trouble if you were so unfortunate that you dropped the deck onto the floor.

There were operating systems being researched very early, and when personal computing started to grow in the 80s, operating systems such as DOS were the standard on most consumer PCs. These operating systems usually yielded control of the entire CPU to the program currently executing, and it was up to the programmer to make things work and implement any kind of multitasking for their program. This worked fine, but as interactive UIs using a mouse and windowed operating systems became the norm, this model simply couldn't work anymore.

Non-preemptive multitasking

Non-preemptive multitasking was the first method used to be able to keep a UI interactive (and running background processes). This kind of multitasking put the responsibility of letting the OS run other tasks, such as responding to input from the mouse or running a background task, in the hands of the programmer. Typically, the programmer yielded control to the OS.
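To make this concrete, here is a minimal sketch of my own (not from the book's repository) of what this kind of cooperative scheduling looks like in miniature: a handful of tasks that each do a small slice of work and then voluntarily return control to a simple round-robin loop. The Task trait and Counter type are invented purely for this illustration.

trait Task {
    // Do a small slice of work, then yield by returning.
    // Returns true when the task is finished.
    fn poll(&mut self) -> bool;
}

struct Counter {
    name: &'static str,
    done: usize,
    steps: usize,
}

impl Task for Counter {
    fn poll(&mut self) -> bool {
        self.done += 1;
        println!("task {}: step {}/{}", self.name, self.done, self.steps);
        self.done == self.steps
    }
}

fn main() {
    let mut tasks: Vec<Box<dyn Task>> = vec![
        Box::new(Counter { name: "A", done: 0, steps: 3 }),
        Box::new(Counter { name: "B", done: 0, steps: 5 }),
    ];

    // The "scheduler" only makes progress because every task cooperates
    // by returning from poll(). Keep only the tasks that aren't finished.
    while !tasks.is_empty() {
        tasks.retain_mut(|task| !task.poll());
    }
}

The whole scheme rests on every task returning quickly: if one task spins forever inside poll, the loop never regains control and nothing else runs, which is precisely the weakness of this model.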
Besides offloading a huge responsibility to every programmer writing a program for your platform, this method was naturally error-prone. A small mistake in a program's code could halt or crash the entire system.

Note
Another popular term for what we call non-preemptive multitasking is cooperative multitasking. Windows 3.1 used cooperative multitasking and required programmers to yield control to the OS by using specific system calls. One badly behaving application could thereby halt the entire system.

Preemptive multitasking

While non-preemptive multitasking sounded like a good idea, it turned out to create serious problems as well. Letting every program and programmer out there be responsible for having a responsive UI in an operating system can ultimately lead to a bad user experience, since every bug out there could halt the entire system.

The solution was to place the responsibility of scheduling the CPU resources between the programs that requested it (including the OS itself) in the hands of the OS. The OS can stop the execution of a process, do something else, and switch back.

On such a system, if you write and run a program with a graphical user interface on a single-core machine, the OS will stop your program to update the mouse position before it switches back to your program to continue. This happens so frequently that we don't usually observe any difference whether the CPU has a lot of work or is idle.

The OS is responsible for scheduling tasks and does this by switching contexts on the CPU. This process can happen many times each second, not only to keep the UI responsive but also to give some time to other background tasks and IO events.

This is now the prevailing way to design an operating system.

Note
Later in this book, we'll write our own green threads and cover a lot of basic knowledge about context switching, threads, stacks, and scheduling that will give you more insight into this topic, so stay tuned.

Hyper-threading

As CPUs evolved and added more functionality such as several arithmetic logic units (ALUs) and additional logic units, the CPU manufacturers realized that the entire CPU wasn't fully utilized.
For example, when an operation only required some parts of the CPU, an instruction could be run on the ALU simultaneously. This became the start of hyper-threading.

Your computer today, for example, may have 6 cores and 12 logical cores. This is exactly where hyper-threading comes in. It “simulates” two cores on the same core by using unused parts of the CPU to drive progress on thread 2 while simultaneously running the code on thread 1. It does this by using a number of smart tricks (such as the one with the ALU).

Now, using hyper-threading, we could actually offload some work on one thread while keeping the UI interactive by responding to events in the second thread even though we only had one CPU core, thereby utilizing our hardware better.

You might wonder about the performance of hyper-threading. It turns out that hyper-threading has been continuously improved since the 90s. Since you're not actually running two CPUs, there will be some operations that need to wait for each other to finish. The performance gain of hyper-threading compared to multitasking in a single core seems to be somewhere close to 30%, but it largely depends on the workload.

Multicore processors

As most know, the clock frequency of processors has been flat for a long time. Processors get faster by improving caches, branch prediction, and speculative execution, and by working on the processing pipelines of the processors, but the gains seem to be diminishing. On the other hand, new processors are so small that they allow us to have many on the same chip. Now, most CPUs have many cores and most often, each core will also have the ability to perform hyper-threading.

Do you really write synchronous code?

Like many things, this depends on your perspective. From the perspective of your process and the code you write, everything will normally happen in the order you write it. From the operating system's perspective, it might or might not interrupt your code, pause it, and run some other code in the meantime before resuming your process.

From the perspective of the CPU, it will mostly execute instructions one at a time.* It doesn't care who wrote the code, though, so when a hardware interrupt happens, it will immediately stop and give control to an interrupt handler. This is how the CPU handles concurrency.
Note
*However, modern CPUs can also do a lot of things in parallel. Most CPUs are pipelined, meaning that the next instruction is loaded while the current one is executing. It might have a branch predictor that tries to figure out what instructions to load next. The processor can also reorder instructions by using out-of-order execution if it believes it makes things faster this way, without 'asking' or 'telling' the programmer or the OS, so you might not have any guarantee that A happens before B. The CPU offloads some work to separate 'coprocessors' such as the FPU for floating-point calculations, leaving the main CPU ready to do other tasks, et cetera.

As a high-level overview, it's OK to model the CPU as operating in a synchronous manner, but for now, let's just make a mental note that this is a model with some caveats that become especially important when talking about parallelism, synchronization primitives (such as mutexes and atomics), and the security of computers and operating systems.

Concurrency versus parallelism

Right off the bat, we'll dive into this subject by defining what concurrency is. Since it is quite easy to confuse concurrent with parallel, we will try to make a clear distinction between the two from the get-go.

Important
Concurrency is about dealing with a lot of things at the same time. Parallelism is about doing a lot of things at the same time.

We call the concept of progressing multiple tasks at the same time multitasking. There are two ways to multitask. One is by progressing tasks concurrently, but not at the same time. Another is to progress tasks at the exact same time in parallel. Figure 1.1 depicts the difference between the two scenarios:
Figure 1.1 – Multitasking two tasks

First, we need to agree on some definitions:

Resource: This is something we need to be able to progress a task. Our resources are limited. This could be CPU time or memory.

Task: This is a set of operations that requires some kind of resource to progress. A task must consist of several sub-operations.

Parallel: This is something happening independently at the exact same time.

Concurrent: These are tasks that are in progress at the same time, but not necessarily progressing simultaneously.

This is an important distinction. If two tasks are running concurrently, but are not running in parallel, they must be able to stop and resume their progress. We say that a task is interruptible if it allows for this kind of concurrency.

The mental model I use

I firmly believe the main reason we find parallel and concurrent programming hard to differentiate stems from how we model events in our everyday life. We tend to define these terms loosely, so our intuition is often wrong.
Note
It doesn't help that concurrent is defined in the dictionary as operating or occurring at the same time, which doesn't really help us much when trying to describe how it differs from parallel.

For me, this first clicked when I started to understand why we want to make a distinction between parallel and concurrent in the first place! The why has everything to do with resource utilization and efficiency.

Efficiency is the (often measurable) ability to avoid wasting materials, energy, effort, money, and time in doing something or in producing a desired result.

Parallelism is increasing the resources we use to solve a task. It has nothing to do with efficiency.

Concurrency has everything to do with efficiency and resource utilization. Concurrency can never make one single task go faster. It can only help us utilize our resources better and thereby finish a set of tasks faster.

Let's draw some parallels to process economics

In businesses that manufacture goods, we often talk about LEAN processes. This is pretty easy to compare with why programmers care so much about what we can achieve if we handle tasks concurrently.

Let's pretend we're running a bar. We only serve Guinness beer and nothing else, but we serve our Guinness to perfection. Yes, I know, it's a little niche, but bear with me.

You are the manager of this bar, and your goal is to run it as efficiently as possible. Now, you can think of each bartender as a CPU core, and each order as a task. To manage this bar, you need to know the steps to serve a perfect Guinness:

Pour the Guinness draught into a glass tilted at 45 degrees until it's 3-quarters full (15 seconds).
Allow the surge to settle for 100 seconds.
Fill the glass completely to the top (5 seconds).
Serve.

Since there is only one thing to order in the bar, customers only need to signal using their fingers how many they want to order, so we assume taking new orders is instantaneous. To keep things simple, the same goes for payment. In choosing how to run this bar, you have a few alternatives.

Alternative 1 – Fully synchronous task execution with one bartender

You start out with only one bartender (CPU). The bartender takes one order, finishes it, and progresses to the next. The line is out the door and going two blocks down the street – great! One month later, you're almost out of business and you wonder why.
Well, even though your bartender is very fast at taking new orders, they can only serve 30 customers an hour. Remember, they're waiting for 100 seconds while the beer settles and they're practically just standing there, and they only use 20 seconds to actually fill the glass. Only after one order is completely finished can they progress to the next customer and take their order.

The result is bad revenue, angry customers, and high costs. That's not going to work.

Alternative 2 – Parallel and synchronous task execution

So, you hire 12 bartenders, and you calculate that you can serve about 360 customers an hour. The line is barely going out the door now, and revenue is looking great. One month goes by and again, you're almost out of business. How can that be?

It turns out that having 12 bartenders is pretty expensive. Even though revenue is high, the costs are even higher. Throwing more resources at the problem doesn't really make the bar more efficient.

Alternative 3 – Asynchronous task execution with one bartender

So, we're back to square one. Let's think this through and find a smarter way of working instead of throwing more resources at the problem.

You ask your bartender whether they can start taking new orders while the beer settles so that they're never just standing and waiting while there are customers to serve. The opening night comes and... Wow! On a busy night where the bartender works non-stop for a few hours, you calculate that they now only use just over 20 seconds on an order. You've basically eliminated all the waiting. Your theoretical throughput is now 240 beers per hour. If you add one more bartender, you'll have higher throughput than you did while having 12 bartenders.

However, you realize that you didn't actually accomplish 240 beers an hour, since orders come somewhat erratically and not evenly spaced over time. Sometimes, the bartender is busy with a new order, preventing them from topping up and serving beers that are finished almost immediately. In real life, the throughput is only 180 beers an hour.

Still, two bartenders could serve 360 beers an hour this way, the same amount that you served while employing 12 bartenders. This is good, but you ask yourself whether you can do even better.

Alternative 4 – Parallel and asynchronous task execution with two bartenders

What if you hire two bartenders, and ask them to do just what we described in Alternative 3, but with one change: you allow them to steal each other's tasks, so bartender 1 can start pouring and set the beer down to settle, and bartender 2 can top it up and serve it if bartender 1 is busy pouring a new order at that time? This way, it is only rarely that both bartenders are busy at the same time as one of the beers-in-progress becomes ready to get topped up and served. Almost all orders are finished and
served in the shortest amount of time possible, letting customers leave the bar with their beer faster and giving space to customers who want to make a new order.

Now, this way, you can increase throughput even further. You still won't reach the theoretical maximum, but you'll get very close. On the opening night, you realize that the bartenders now process 230 orders an hour each, giving a total throughput of 460 beers an hour.

Revenue looks good, customers are happy, costs are kept at a minimum, and you're one happy manager of the weirdest bar on earth (an extremely efficient bar, though).

The key takeaway

Concurrency is about working smarter. Parallelism is a way of throwing more resources at the problem.

Concurrency and its relation to I/O

As you might understand from what I've written so far, writing async code mostly makes sense when you need to be smart to make optimal use of your resources.

Now, if you write a program that is working hard to solve a problem, there is often no help in concurrency. This is where parallelism comes into play, since it gives you a way to throw more resources at the problem if you can split it into parts that you can work on in parallel.

Consider the following two different use cases for concurrency:

When performing I/O and you need to wait for some external event to occur
When you need to divide your attention and prevent one task from waiting too long

The first is the classic I/O example: you have to wait for a network call, a database query, or something else to happen before you can progress a task. However, you have many tasks to do, so instead of waiting, you continue to work elsewhere and either check in regularly to see whether the task is ready to progress, or make sure you are notified when that task is ready to progress.

The second is an example that is often the case when having a UI. Let's pretend you only have one core. How do you prevent the whole UI from becoming unresponsive while performing other CPU-intensive tasks?

Well, you can stop whatever task you're doing every 16 ms, run the update UI task, and then resume whatever you were doing afterward. This way, you will have to stop/resume your task 60 times a second, but you will also have a fully responsive UI that has a roughly 60 Hz refresh rate.
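To illustrate that second use case, here is a small sketch of my own (not an example from the book's repository): a single thread chews through a CPU-intensive job in small chunks and checks the clock between chunks, so a stand-in for the "update UI" task gets to run roughly every 16 ms. The numbers and the naive prime-counting workload are arbitrary; they're only there to give the loop something to do.

use std::time::{Duration, Instant};

fn main() {
    let frame_budget = Duration::from_millis(16);
    let mut last_ui_update = Instant::now();
    let mut primes_found = 0u64;

    // The CPU-intensive task, split into small chunks so we can divide
    // our attention between it and the "UI".
    for chunk_start in (2u64..2_000_000).step_by(1_000) {
        for n in chunk_start..chunk_start + 1_000 {
            // Deliberately naive primality test to burn CPU time.
            if (2..n).take_while(|d| d * d <= n).all(|d| n % d != 0) {
                primes_found += 1;
            }
        }

        // Roughly every 16 ms, stop and let the "UI task" run.
        if last_ui_update.elapsed() >= frame_budget {
            println!("ui update: {primes_found} primes found so far");
            last_ui_update = Instant::now();
        }
    }

    println!("done: {primes_found} primes found");
}

The important design point is that the long-running work is interruptible: because it's broken into small slices, one thread can keep both tasks progressing concurrently, even though nothing runs in parallel.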
What about threads provided by the operating system?

We'll cover threads a bit more when we talk about strategies for handling I/O later in this book, but I'll mention them here as well. One challenge when using OS threads to understand concurrency is that they appear to be mapped to cores. That's not necessarily a correct mental model to use, even though most operating systems will try to map one thread to one core up to the number of threads equal to the number of cores.

Once we create more threads than there are cores, the OS will switch between our threads and progress each of them concurrently using its scheduler to give each thread some time to run. You also must consider the fact that your program is not the only one running on the system. Other programs might spawn several threads as well, which means there will be many more threads than there are cores on the CPU.

Therefore, threads can be a means to perform tasks in parallel, but they can also be a means to achieve concurrency.

This brings me to the last part about concurrency. It needs to be defined in some sort of reference frame.

Choosing the right reference frame

When you write code that is perfectly synchronous from your perspective, stop for a second and consider how that looks from the operating system perspective.

The operating system might not run your code from start to end at all. It might stop and resume your process many times. The CPU might get interrupted and handle some inputs while you think it's only focused on your task.

So, synchronous execution is only an illusion. But from the perspective of you as a programmer, it's not, and that is the important takeaway:

When we talk about concurrency without providing any other context, we are using you as a programmer and your code (your process) as the reference frame.

If you start pondering concurrency without keeping this in the back of your head, it will get confusing very fast.

The reason I'm spending so much time on this is that once you realize the importance of having the same definitions and the same reference frame, you'll start to see that some of the things you hear and learn that might seem contradictory really are not. You'll just have to consider the reference frame first.

Asynchronous versus concurrent

So, you might wonder why we're spending all this time talking about multitasking, concurrency, and parallelism, when the book is about asynchronous programming. The main reason for this is that all these concepts are closely related to each other, and can even have the same (or overlapping) meanings, depending on the context they're used in.
In an effort to make the definitions as distinct as possible, we'll define these terms more narrowly than you'd normally see. However, just be aware that we can't please everyone and we do this for our own sake of making the subject easier to understand.

On the other hand, if you fancy heated internet debates, this is a good place to start. Just claim someone else's definition of concurrent is 100% wrong or that yours is 100% correct, and off you go.

For the sake of this book, we'll stick to this definition: asynchronous programming is the way a programming language or library abstracts over concurrent operations, and how we as users of a language or library use that abstraction to execute tasks concurrently.

The operating system already has an existing abstraction that covers this, called threads. Using OS threads to handle asynchrony is often referred to as multithreaded programming. To avoid confusion, we'll not refer to using OS threads directly as asynchronous programming, even though it solves the same problem.

Given that asynchronous programming is now scoped to be about abstractions over concurrent or parallel operations in a language or library, it's also easier to understand that it's just as relevant on embedded systems without an operating system as it is for programs that target a complex system with an advanced operating system. The definition itself does not imply any specific implementation, even though we'll look at a few popular ones throughout this book.

If this still sounds complicated, I understand. Just sitting and reflecting on concurrency is difficult, but if we try to keep these thoughts in the back of our heads when we work with async code, I promise it will get less and less confusing.

The role of the operating system

The operating system (OS) stands in the center of everything we do as programmers (well, unless you're writing an operating system or working in the embedded realm), so there is no way for us to discuss any kind of fundamentals in programming without talking about operating systems in a bit of detail.

Concurrency from the operating system's perspective

This ties into what I talked about earlier when I said that concurrency needs to be talked about within a reference frame, and I explained that the OS might stop and start your process at any time.

What we call synchronous code is, in most cases, code that appears synchronous to us as programmers. Neither the OS nor the CPU lives in a fully synchronous world.

Operating systems use preemptive multitasking, and as long as the operating system you're running is preemptively scheduling processes, you won't have a guarantee that your code runs instruction by instruction without interruption.

The operating system will make sure that all important processes get some time from the CPU to make progress.
Note
This is not as simple when we're talking about modern machines with 4, 6, 8, or 12 physical cores, since you might actually execute code on one of the CPUs uninterrupted if the system is under very little load. The important part here is that you can't know for sure and there is no guarantee that your code will be left to run uninterrupted.

Teaming up with the operating system

When you make a web request, you're not asking the CPU or the network card to do something for you – you're asking the operating system to talk to the network card for you.

There is no way for you as a programmer to make your system optimally efficient without playing to the strengths of the operating system. You basically don't have access to the hardware directly. You must remember that the operating system is an abstraction over the hardware.

However, this also means that to understand everything from the ground up, you'll also need to know how your operating system handles these tasks.

To be able to work with the operating system, you'll need to know how you can communicate with it, and that's exactly what we're going to go through next.

Communicating with the operating system

Communication with an operating system happens through what we call a system call (syscall). We need to know how to make system calls and understand why it's so important for us when we want to cooperate and communicate with the operating system. We also need to understand how the basic abstractions we use every day use system calls behind the scenes. We'll have a detailed walkthrough in Chapter 3, so we'll keep this brief for now.

A system call uses a public API that the operating system provides so that programs we write in 'userland' can communicate with the OS. Most of the time, these calls are abstracted away for us as programmers by the language or the runtime we use.

Now, a syscall is an example of something that is unique to the kernel you're communicating with, but the UNIX family of kernels has many similarities. UNIX systems expose this through libc.

Windows, on the other hand, uses its own API, often referred to as WinAPI, and it can operate radically differently from how the UNIX-based systems operate. Most often, though, there is a way to achieve the same things. In terms of functionality, you might not notice a big difference, but as we'll see later, and especially when we dig into how epoll, kqueue, and IOCP work, they can differ a lot in how this functionality is implemented.
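To make this slightly more concrete before we get to Chapter 3, here is a minimal sketch of my own (assuming a UNIX-family system such as Linux or macOS) that bypasses println! and instead declares the write function that libc exposes and calls it directly through FFI. On Windows, you would have to go through a different API to achieve the same thing.

use std::ffi::c_void;

// The C declaration is: ssize_t write(int fd, const void *buf, size_t count);
extern "C" {
    fn write(fd: i32, buf: *const c_void, count: usize) -> isize;
}

fn main() {
    let message = b"hello syscall\n";
    // File descriptor 1 is standard output on UNIX-family systems.
    let res = unsafe { write(1, message.as_ptr() as *const c_void, message.len()) };
    // A negative return value means the call failed.
    assert!(res >= 0, "the write call failed");
}

Normally the standard library (or a crate such as libc) wraps this layer for us, but underneath, talking to the operating system boils down to calls like this one.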
However, a syscall is not the only way we interact with our operating system, as we'll see in the following section.

The CPU and the operating system

Does the CPU cooperate with the operating system?

If you had asked me this question when I first thought I understood how programs work, I would most likely have answered no. We run programs on the CPU and we can do whatever we want if we know how to do it. Now, first of all, I wouldn't have thought this through, but unless you learn how CPUs and operating systems work together, it's not easy to know for sure.

What started to make me think I was very wrong was a segment of code that looked like what you're about to see. If you think inline assembly in Rust looks foreign and confusing, don't worry just yet. We'll go through a proper introduction to inline assembly a little later in this book. I'll make sure to go through each of the following lines until you get more comfortable with the syntax:

Repository reference: ch01/ac-assembly-dereference/src/main.rs

use std::arch::asm; // the asm! macro lives in std::arch

fn main() {
    let t = 100;
    let t_ptr: *const usize = &t;
    let x = dereference(t_ptr);
    println!("{}", x);
}

fn dereference(ptr: *const usize) -> usize {
    let mut res: usize;
    unsafe {
        asm!("mov {0}, [{1}]", out(reg) res, in(reg) ptr)
    };
    res
}

What you've just looked at is a dereference function written in assembly.

The mov {0}, [{1}] line needs some explanation. {0} and {1} are templates that tell the compiler that we're referring to the registers that out(reg) and in(reg) represent. The number is just an index, so if we had more inputs or outputs they would be numbered {2}, {3}, and so on. Since we only specify reg and not a specific register, we let the compiler choose what registers it wants to use.

The mov instruction instructs the CPU to take the first 8 bytes (if we're on a 64-bit machine) it gets when reading the memory location that {1} points to and place that in the register represented by {0}. The [] brackets will instruct the CPU to treat the data in that register as a memory address,
and instead of simply copying the memory address itself to {0}, it will fetch what's at that memory location and move it over.

Anyway, we're just writing instructions to the CPU here. No standard library, no syscall; just raw instructions. There is no way the OS is involved in that dereference function, right?

If you run this program, you get what you'd expect:

100

Now, if you keep the dereference function but replace the main function with a function that creates a pointer to the 99999999999999 address, which we know is invalid, we get this function:

fn main() {
    let t_ptr = 99999999999999 as *const usize;
    let x = dereference(t_ptr);
    println!("{}", x);
}

Now, if we run that we get the following results.

This is the result on Linux:

Segmentation fault (core dumped)

This is the result on Windows:

error: process didn't exit successfully: `target\debug\ac-assembly-dereference.exe` (exit code: 0xc0000005, STATUS_ACCESS_VIOLATION)

We get a segmentation fault. Not surprising, really, but as you also might notice, the error we get is different on different platforms. Surely, the OS is involved somehow. Let's take a look at what's really happening here.

Down the rabbit hole

It turns out that there is a great deal of cooperation between the OS and the CPU, but maybe not in the way you would naively think.

Many modern CPUs provide some basic infrastructure that operating systems use. This infrastructure gives us the security and stability we expect. Actually, most advanced CPUs provide a lot more options than operating systems such as Linux, BSD, and Windows actually use.

There are two in particular that I want to address here:

How the CPU prevents us from accessing memory we're not supposed to access
How the CPU handles asynchronous events such as I/O
We'll cover the first one here and the second in the next section.

How does the CPU prevent us from accessing memory we're not supposed to access?

As I mentioned, modern CPU architectures define some basic concepts by design. Some examples of this are as follows:

Virtual memory
Page table
Page fault
Exceptions
Privilege level

Exactly how this works will differ depending on the specific CPU, so we'll treat them in general terms here.

Most modern CPUs have a memory management unit (MMU). This part of the CPU is often etched on the same die, even. The MMU's job is to translate the virtual address we use in our programs to a physical address.

When the OS starts a process (such as our program), it sets up a page table for our process and makes sure a special register on the CPU points to this page table.

Now, when we try to dereference t_ptr in the preceding code, the address is at some point sent for translation to the MMU, which looks it up in the page table to translate it to a physical address in the memory where it can fetch the data.

In the first case, it will point to a memory address on our stack that holds the value 100.

When we pass in 99999999999999 and ask it to fetch what's stored at that address (which is what dereferencing does), it looks for the translation in the page table but can't find it. The CPU then treats this as a page fault.

At boot, the OS provided the CPU with an interrupt descriptor table. This table has a predefined format where the OS provides handlers for the predefined conditions the CPU can encounter. Since the OS provided a pointer to a function that handles page faults, the CPU jumps to that function when we try to dereference 99999999999999 and thereby hands over control to the operating system.

The OS then prints a nice message for us, letting us know that we encountered what it calls a segmentation fault. This message will therefore vary depending on the OS you run the code on.
Concurrency and Asynchronous Programming: a Detailed Overview 18 But can't we just change the page table in the CPU? Now, this is where the privilege level comes in. Most modern operating systems operate with two ring levels : ring 0, the kernel space, and ring 3, the user space. Figure 1. 2-Privilege rings Most CPUs have a concept of more rings than what most modern operating systems use. This has historical reasons, which is also why ring 0 and ring 3 are used (and not 1 and 2). Every entry in the page table has additional information about it. Amongst that information is the information about which ring it belongs to. This information is set up when your OS boots up. Code executed in ring 0 has almost unrestricted access to external devices and memory, and is free to change registers that provide security at the hardware level. The code you write in ring 3 will typically have extremely restricted access to I/O and certain CPU registers (and instructions). Trying to issue an instruction or setting a register from ring 3 to change the page table will be prevented by the CPU. The CPU will then treat this as an exception and jump to the handler for that exception provided by the OS. This is also the reason why you have no other choice than to cooperate with the OS and handle I/O tasks through syscalls. The system wouldn't be very secure if this wasn't the case. So, to sum it up: yes, the CPU and the OS cooperate a great deal. Most modern desktop CPUs are built with an OS in mind, so they provide the hooks and infrastructure that the OS latches onto upon bootup. When the OS spawns a process, it also sets its privilege level, making sure that normal processes stay within the borders it defines to maintain stability and security.
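If you want to see the privilege boundary for yourself, a tiny experiment like the following will do it (x86-64 only, and it crashes on purpose). It issues an instruction that's only allowed in ring 0 from our ring 3 program; the CPU refuses, raises an exception, and the handler the OS registered takes over. On Linux this typically shows up as the process being killed with a signal such as SIGSEGV, but the exact message will depend on your OS:

use std::arch::asm;

fn main() {
    println!("Attempting a privileged instruction from ring 3...");
    unsafe {
        // `hlt` halts the CPU and is only permitted in ring 0. Executed from
        // user space, the CPU raises a general protection fault and control
        // passes to the exception handler provided by the OS.
        asm!("hlt");
    }
    println!("We never get here");
}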
Interrupts, firmware, and I/O 19 Interrupts, firmware, and I/O We're nearing the end of the general CS subjects in this book, and we'll start to dig our way out of the rabbit hole soon. This part tries to tie things together and look at how the whole computer works as a system to handle I/O and concurrency. Let's get to it! A simplified overview Let's look at some of the steps where we imagine that we read from a network card: Remember that we're simplifying a lot here. This is a rather complex operation but we'll focus on the parts that are of most interest to us and skip a few steps along the way. Step 1-Our code We register a socket. This happens by issuing a syscall to the OS. Depending on the OS, we either get a file descriptor (mac OS/Linux) or a socket (Windows). The next step is that we register our interest in Read events on that socket.
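In Rust, step 1 can be as simple as the sketch below. The address is just a placeholder; the interesting part is that both calls are thin wrappers around syscalls, and that the returned stream wraps a file descriptor on Linux/macOS and a socket handle on Windows:

use std::net::TcpStream;

fn main() -> std::io::Result<()> {
    // Creating and connecting the socket happens through syscalls that the
    // standard library issues on our behalf.
    let stream = TcpStream::connect("example.com:80")?;

    // Putting the socket in non-blocking mode is what lets us register an
    // interest in Read events instead of simply blocking on a read call.
    stream.set_nonblocking(true)?;

    println!("socket created: {:?}", stream);
    Ok(())
}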
Concurrency and Asynchronous Programming: a Detailed Overview 20 Step 2-Registering events with the OS This is handled in one of three ways: 1. We tell the operating system that we're interested in Read events but we want to wait for it to happen by yielding control over our thread to the OS. The OS then suspends our thread by storing the register state and switches to some other thread From our perspective, this will be blocking our thread until we have data to read. 2. We tell the operating system that we're interested in Read events but we just want a handle to a task that we can poll to check whether the event is ready or not. The OS will not suspend our thread, so this will not block our code. 3. We tell the operating system that we are probably going to be interested in many events, but we want to subscribe to one event queue. When we poll this queue, it will block our thread until one or more events occur. This will block our thread while we wait for events to occur. Chapters 3 and 4 will go into detail about the third method, as it's the most used method for modern async frameworks to handle concurrency. Step 3-The network card We're skipping some steps here, but I don't think they're vital to our understanding. On the network card, there is a small microcontroller running specialized firmware. We can imagine that this microcontroller is polling in a busy loop, checking whether any data is incoming. The exact way the network card handles its internals is a little different from what I suggest here, and will most likely vary from vendor to vendor. The important part is that there is a very simple but specialized CPU running on the network card doing work to check whether there are incoming events. Once the firmware registers incoming data, it issues a hardware interrupt. Step 4-Hardware interrupt A modern CPU has a set of interrupt request line (IRQs ) for it to handle events that occur from external devices. A CPU has a fixed set of interrupt lines. A hardware interrupt is an electrical signal that can occur at any time. The CPU immediately interrupts its normal workflow to handle the interrupt by saving the state of its registers and looking up the interrupt handler. The interrupt handlers are defined in the interrupt descriptor table (IDT ).
Interrupts, firmware, and I/O 21 Step 5-Interrupt handler The IDT is a table where the OS (or a driver) registers handlers for different interrupts that may occur. Each entry points to a handler function for a specific interrupt. The handler function for a network card would typically be registered and handled by a driver for that card. Note The IDT is not stored on the CPU as it might seem in Figure 1. 3. It's located in a fixed and known location in the main memory. The CPU only holds a pointer to the table in one of its registers. Step 6-Writing the data This is a step that might vary a lot depending on the CPU and the firmware on the network card. If the network card and the CPU support direct memory access (DMA ), which should be the standard on all modern systems today, the network card will write data directly to a set of buffers that the OS already has set up in the main memory. In such a system, the firmware on the network card might issue an interrupt when the data is written to memory. DMA is very efficient, since the CPU is only notified when the data is already in memory. On older systems, the CPU needed to devote resources to handle the data transfer from the network card. The direct memory access controller ( DMAC ) is added to the diagram since in such a system, it would control the access to memory. It's not part of the CPU as indicated in the previous diagram. We're deep enough in the rabbit hole now, and exactly where the different parts of a system are is not really important to us right now, so let's move on. Step 7-The driver The driver would normally handle the communication between the OS and the network card. At some point, the buffers are filled and the network card issues an interrupt. The CPU then jumps to the handler of that interrupt. The interrupt handler for this exact type of interrupt is registered by the driver, so it's actually the driver that handles this event and, in turn, informs the kernel that the data is ready to be read. Step 8-Reading the data Depending on whether we chose method 1, 2, or 3, the OS will do as follows: Wake our thread Return Ready on the next poll Wake the thread and return a Read event for the handler we registered
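To tie this back to code, method 1 is what you get from a plain blocking read in Rust. The address below is a placeholder; the point is that our thread sits suspended inside read until the chain of events described above has completed and the OS wakes it up again:

use std::io::Read;
use std::net::TcpStream;

fn main() -> std::io::Result<()> {
    let mut stream = TcpStream::connect("localhost:8080")?;
    let mut buf = vec![0u8; 4096];

    // A blocking read: the OS suspends this thread until the network card,
    // the interrupt handler, and the driver have done their part and data is
    // ready for us - then it wakes the thread and the call returns.
    let n = stream.read(&mut buf)?;
    println!("read {n} bytes");
    Ok(())
}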
Concurrency and Asynchronous Programming: a Detailed Overview 22 Interrupts As you know by now, there are two kinds of interrupts: Hardware interrupts Software interrupts They are very different in nature. Hardware interrupts Hardware interrupts are created by sending an electrical signal through an IRQ. These hardware lines signal the CPU directly. Software interrupts These are interrupts issued from software instead of hardware. As in the case of a hardware interrupt, the CPU jumps to the IDT and runs the handler for the specified interrupt. Firmware Firmware doesn't get much attention from most of us; however, it's a crucial part of the world we live in. It runs on all kinds of hardware and has all kinds of strange and peculiar ways to make the computers we program on work. Now, the firmware needs a microcontroller to be able to work. Even the CPU has firmware that makes it work. That means there are many more small 'CPUs' on our system than the cores we program against. Why is this important? Well, you remember that concurrency is all about efficiency, right? Since we have many CPUs/microcontrollers already doing work for us on our system, one of our concerns is to not replicate or duplicate that work when we write code. If a network card has firmware that continually checks whether new data has arrived, it's pretty wasteful if we duplicate that by letting our CPU continually check whether new data arrives as well. It's much better if we either check once in a while, or even better, get notified when data has arrived.
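Before we summarize, here is a small (and intentionally crashing) illustration of a software interrupt on x86-64. The int3 breakpoint instruction makes the CPU look up a handler in the IDT just like a hardware interrupt would; on Linux this typically surfaces as a SIGTRAP that terminates the process unless a debugger is attached:

use std::arch::asm;

fn main() {
    println!("Issuing a software interrupt...");
    unsafe {
        // `int3` is the classic breakpoint instruction: a trap raised from
        // software that sends the CPU through the IDT to an OS-provided handler.
        asm!("int3");
    }
    println!("Only reached if something handled the trap for us");
}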
Summary
This chapter covered a lot of ground, so good job on doing all that legwork. We learned a little bit about how CPUs and operating systems have evolved from a historical perspective and the difference between non-preemptive and preemptive multitasking. We discussed the difference between concurrency and parallelism, talked about the role of the operating system, and learned that system calls are the primary way for us to interact with the host operating system. You've also seen how the CPU and the operating system cooperate through an infrastructure designed as part of the CPU. Lastly, we went through a diagram of what happens when you issue a network call. You know there are at least three different ways for us to deal with the fact that the I/O call takes some time to execute, and we have to decide which way we want to handle that waiting time. This covers most of the general background information we need so that we have the same definitions and overview before we go on. We'll go into more detail as we progress through the book, and the first topic that we'll cover in the next chapter is how programming languages model asynchronous program flow by looking into threads, coroutines, and futures.
2 How Programming Languages Model Asynchronous Program Flow In the previous chapter, we covered asynchronous program flow, concurrency, and parallelism in general terms. In this chapter, we'll narrow our scope. Specifically, we'll look into different ways to model and deal with concurrency in programming languages and libraries. It's important to keep in mind that threads, futures, fibers, goroutines, promises, etc. are abstractions that give us a way to model an asynchronous program flow. They have different strengths and weaknesses, but they share a goal of giving programmers an easy-to-use (and importantly, hard to misuse), efficient, and expressive way of creating a program that handles tasks in a non-sequential, and often unpredictable, order. The lack of precise definitions is prevalent here as well; many terms have a name that stems from a concrete implementation at some point in time but has later taken on a more general meaning that encompasses different implementations and varieties of the same thing. We'll first go through a way of grouping different abstractions together based on their similarities before we go on to discuss the pros and cons of each of them. We'll also go through important definitions that we'll use throughout the book and discuss OS threads in quite some detail. The topics we discuss here are quite abstract and complicated so don't feel bad if you don't understand everything immediately. As we progress through the book and you get used to the different terms and techniques by working through some examples, more and more pieces will fall into place. Specifically, the following topics will be covered: Definitions Threads provided by the operating system Green threads/stackfull coroutines/fibers
How Programming Languages Model Asynchronous Program Flow 26 Callback based approaches Promises, futures, and async/await Definitions We can broadly categorize abstractions over concurrent operations into two groups: 1. Cooperative : These are tasks that yield voluntarily either by explicitly yielding or by calling a function that suspends the task when it can't progress further before another operation has finished (such as making a network call). Most often, these tasks yield to a scheduler of some sort. Examples of this are tasks generated by async /await in Rust and Java Script. 2. Non-cooperative : Tasks that don't necessarily yield voluntarily. In such a system, the scheduler must be able to pre-empt a running task, meaning that the scheduler can stop the task and take control over the CPU even though the task would have been able to do work and progress. Examples of this are OS threads and Goroutines (after GO version 1. 14). Figure 2. 1-Non-cooperative vs. cooperative multitasking
Definitions 27 Note In a system where the scheduler can pre-empt running tasks, tasks can also yield voluntarily as they do in a cooperative system, and it's rare with a system that only relies on pre-emption. We can further divide these abstractions into two broad categories based on the characteristics of their implementation: 1. Stackful : Each task has its own call stack. This is often implemented as a stack that's similar to the stack used by the operating system for its threads. Stackful tasks can suspend execution at any point in the program as the whole stack is preserved. 2. Stackless : There is not a separate stack for each task; they all run sharing the same call stack. A task can't be suspended in the middle of a stack frame, limiting the runtime's ability to pre-empt the task. However, they need to store/restore less information when switching between tasks so they can be more efficient. There are more nuances to these two categories that you'll get a deep understanding of when we implement an example of both a stackful coroutine (fiber) and a stackless coroutine (Rust futures generated by async /await ) later in the book. For now, we keep the details to a minimum to just provide an overview. Threads We keep referring to threads all throughout this book, so before we get too far in, let's stop and give “thread” a good definition since it's one of those fundamental terms that causes a lot of confusion. In the most general sense, a thread refers to a thread of execution, meaning a set of instructions that need to be executed sequentially. If we tie this back to the first chapter of this book, where we provided several definitions under the Concurrency vs. Parallelism subsection, a thread of execution is similar to what we defined as a task with multiple steps that need resources to progress. The generality of this definition can be a cause of some confusion. A thread to one person can obviously refer to an OS thread, and to another person, it can simply refer to any abstraction that represents a thread of execution on a system. Threads are often divided into two broad categories: OS threads : These threads are created by the OS and managed by the OS scheduler. On Linux, this is known as a kernel thread. User-level threads : These threads are created and managed by us as programmers without the OS knowing about them. Now, this is where things get a bit tricky: OS threads on most modern operating systems have a lot of similarities. Some of these similarities are dictated by the design of modern CPUs. One example
How Programming Languages Model Asynchronous Program Flow 28 of this is that most CPUs assume that there is a stack it can perform operations on and that it has a register for the stack pointer and instructions for stack manipulation. User-level threads can, in their broadest sense, refer to any implementation of a system (runtime) that creates and schedules tasks, and you can't make the same assumptions as you do with OS threads. They can closely resemble OS threads by using separate stacks for each task, as we'll see in Chapter 5 when we go through our fiber/green threads example, or they can be radically different in nature, as we'll see when we go through how Rust models concurrent operations later on in Part 3 of this book. No matter the definition, a set of tasks needs something that manages them and decides who gets what resources to progress. The most obvious resource on a computer system that all tasks need to progress is CPU time. We call the “something” that decides who gets CPU time to progress a scheduler. Most likely, when someone refers to a “thread” without adding extra context, they refer to an OS thread/kernel thread, so that's what we'll do going forward. I'll also keep referring to a thread of execution as simply a task. I find the topic of asynchronous programming easier to reason about when we limit the use of terms that have different assumptions associated with them depending on the context as much as possible. With that out of the way, let's go through some defining characteristics of OS threads while we also highlight their limitations. Important! Definitions will vary depending on what book or article you read. For example, if you read about how a specific operating system works, you might see that processes or threads are abstractions that represent “tasks”, which will seem to contradict the definitions we use here. As I mentioned earlier, the choice of reference frame is important, and it's why we take so much care to define the terms we use thoroughly as we encounter them throughout the book. The definition of a thread can also vary by operating system, even though most popular systems share a similar definition today. Most notably, Solaris (pre-Solaris 9, which was released in 2002) used to have a two-level thread system that differentiated between application threads, lightweight processes, and kernel threads. This was an implementation of what we call M:N threading, which we'll get to know more about later in this book. Just beware that if you read older material, the definition of a thread in such a system might differ significantly from the one that's commonly used today. Now that we've gone through the most important definitions for this chapter, it's time to talk more about the most popular ways of handling concurrency when programming.
Threads provided by the operating system 29 Threads provided by the operating system Note! We call this 1:1 threading. Each task is assigned one OS thread. Since this book will not focus specifically on OS threads as a way to handle concurrency going forward, we treat them more thoroughly here. Let's start with the obvious. To use threads provided by the operating system, you need, well, an operating system. Before we discuss the use of threads as a means to handle concurrency, we need to be clear about what kind of operating systems we're talking about since they come in different flavors. Embedded systems are more widespread now than ever before. This kind of hardware might not have the resources for an operating system, and if they do, you might use a radically different kind of operating system tailored to your needs, as the systems tend to be less general purpose and more specialized in nature. Their support for threads, and the characteristics of how they schedule them, might be different from what you're used to in operating systems such as Windows or Linux. Since covering all the different designs is a book on its own, we'll limit the scope to talk about treads, as they're used in Windows and Linux-based systems running on popular desktop and server CPUs. OS threads are simple to implement and simple to use. We simply let the OS take care of everything for us. We do this by spawning a new OS thread for each task we want to accomplish and write code as we normally would. The runtime we use to handle concurrency for us is the operating system itself. In addition to these advantages, you get parallelism for free. However, there are also some drawbacks and complexities resulting from directly managing parallelism and shared resources. Creating new threads takes time Creating a new OS thread involves some bookkeeping and initialization overhead, so while switching between two existing threads in the same process is pretty fast, creating new ones and discarding ones you don't use anymore involves work that takes time. All the extra work will limit throughput if a system needs to create and discard a lot of them. This can be a problem if you have huge amounts of small tasks that need to be handled concurrently, which often is the case when dealing with a lot of I/O. Each thread has its own stack We'll cover stacks in detail later in this book, but for now, it's enough to know that they occupy a fixed size of memory. Each OS thread comes with its own stack, and even though many systems allow this size
How Programming Languages Model Asynchronous Program Flow 30 to be configured, they're still fixed in size and can't grow or shrink. They are, after all, the cause of stack overflows, which will be a problem if you configure them to be too small for the tasks you're running. If we have many small tasks that only require a little stack space but we reserve much more than we need, we will occupy large amounts of memory and possibly run out of it. Context switching As you now know, threads and schedulers are tightly connected. Context switching happens when the CPU stops executing one thread and proceeds with another one. Even though this process is highly optimized, it still involves storing and restoring the register state, which takes time. Every time that you yield to the OS scheduler, it can choose to schedule a thread from a different process on that CPU. Y ou see, threads created by these systems belong to a process. When you start a program, it starts a process, and the process creates at least one initial thread where it executes the program you've written. Each process can spawn multiple threads that share the same address space. That means that threads within the same process can access shared memory and can access the same resources, such as files and file handles. One consequence of this is that when the OS switches contexts by stopping one thread and resuming another within the same process, it doesn't have to save and restore all the state associated with that process, just the state that's specific to that thread. On the other hand, when the OS switches from a thread associated with one process to a thread associated with another, the new process will use a different address space, and the OS needs to take measures to make sure that process “ A ” doesn't access data or resources that belong to process “B”. If it didn't, the system wouldn't be secure. The consequence is that caches might need to be flushed and more state might need to be saved and restored. In a highly concurrent system under load, these context switches can take extra time and thereby limit the throughput in a somewhat unpredictable manner if they happen frequently enough. Scheduling The OS can schedule tasks differently than you might expect, and every time you yield to the OS, you're put in the same queue as all other threads and processes on the system. Moreover, since there is no guarantee that the thread will resume execution on the same CPU core as it left off or that two tasks won't run in parallel and try to access the same data, you need to synchronize data access to prevent data races and other pitfalls associated with multicore programming. Rust as a language will help you prevent many of these pitfalls, but synchronizing data access will require extra work and add to the complexity of such programs. We often say that using OS threads to handle concurrency gives us parallelism for free, but it isn't free in terms of added complexity and the need for proper data access synchronization.
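A minimal sketch of what that synchronization looks like in practice: the threads below share a single counter through the process's common address space, and the mutex is the extra work we take on to keep access to it free of data races:

use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // All threads spawned by this process share its address space, so they
    // can all reach this counter - as long as access is synchronized.
    let counter = Arc::new(Mutex::new(0));
    let mut handles = Vec::new();

    for _ in 0..4 {
        let counter = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            *counter.lock().unwrap() += 1;
        }));
    }

    for handle in handles {
        handle.join().unwrap();
    }

    println!("count: {}", *counter.lock().unwrap());
}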
Threads provided by the operating system 31 The advantage of decoupling asynchronous operations from OS threads Decoupling asynchronous operations from the concept of threads has a lot of benefits. First of all, using OS threads as a means to handle concurrency requires us to use what essentially is an OS abstraction to represent our tasks. Having a separate layer of abstraction to represent concurrent tasks gives us the freedom to choose how we want to handle concurrent operations. If we create an abstraction over concurrent operations such as a future in Rust, a promise in Java Script, or a goroutine in GO, it is up to the runtime implementor to decide how these concurrent tasks are handled. A runtime could simply map each concurrent operation to an OS thread, they could use fibers/green threads or state machines to represent the tasks. The programmer that writes the asynchronous code will not necessarily have to change anything in their code if the underlying implementation changes. In theory, the same asynchronous code could be used to handle concurrent operations on a microcontroller without an OS if there's just a runtime for it. To sum it up, using threads provided by the operating system to handle concurrency has the following advantages: Simple to understand Easy to use Switching between tasks is reasonably fast Y ou get parallelism for free However, they also have a few drawbacks: OS-level threads come with a rather large stack. If you have many tasks waiting simultaneously (as you would in a web server under heavy load), you'll run out of memory pretty fast. Context switching can be costly and you might get an unpredictable performance since you let the OS do all the scheduling. The OS has many things it needs to handle. It might not switch back to your thread as fast as you' d wish. It is tightly coupled to an OS abstraction. This might not be an option on some systems. Example Since we'll not spend more time talking about OS threads in this book, we'll go through a short example so you can see how they're used:
ch02/aa-os-threads

use std::thread::{self, sleep};

fn main() {
    println!("So, we start the program here!");
    let t1 = thread::spawn(move || {
        sleep(std::time::Duration::from_millis(200));
        println!("The long running tasks finish last!");
    });
    let t2 = thread::spawn(move || {
        sleep(std::time::Duration::from_millis(100));
        println!("We can chain callbacks...");
        let t3 = thread::spawn(move || {
            sleep(std::time::Duration::from_millis(50));
            println!("... like this!");
        });
        t3.join().unwrap();
    });
    println!("The tasks run concurrently!");
    t1.join().unwrap();
    t2.join().unwrap();
}

In this example, we simply spawn several OS threads and put them to sleep. Sleeping is essentially the same as yielding to the OS scheduler with a request to be re-scheduled to run after a certain time has passed. To make sure our main thread doesn't finish and exit (which would exit the process) before our child threads have had time to run, we join them at the end of our main function. If we run the example, we'll see how the operations occur in a different order based on how long we yielded each thread to the scheduler:

So, we start the program here!
The tasks run concurrently!
We can chain callbacks...
... like this!
The long-running tasks finish last!

So, while using OS threads is great for a number of tasks, we also outlined a number of good reasons to look at alternatives by discussing their limitations and downsides. The first alternatives we'll look at are what we call fibers and green threads.
Fibers and green threads 33 Fibers and green threads Note! This is an example of M:N threading. Many tasks can run concurrently on one OS thread. Fibers and green threads are often referred to as stackful coroutines. The name “green threads” originally stems from an early implementation of an M:N threading model used in Java and has since been associated with different implementations of M:N threading. Y ou will encounter different variations of this term, such as “green processes” (used in Erlang), which are different from the ones we discuss here. Y ou'll also see some that define green threads more broadly than we do here. The way we define green threads in this book makes them synonymous with fibers, so both terms refer to the same thing going forward. The implementation of fibers and green threads implies that there is a runtime with a scheduler that's responsible for scheduling what task (M) gets time to run on the OS thread (N). There are many more tasks than there are OS threads, and such a system can run perfectly fine using only one OS thread. The latter case is often referred to as M:1 threading. Goroutines is an example of a specific implementation of stackfull coroutines, but it comes with slight nuances. The term “coroutine” usually implies that they're cooperative in nature, but Goroutines can be pre-empted by the scheduler (at least since version 1. 14), thereby landing them in somewhat of a grey area using the categories we present here. Green threads and fibers use the same mechanisms as an OS, setting up a stack for each task, saving the CPU's state, and jumping from one task(thread) to another by doing a context switch. We yield control to the scheduler (which is a central part of the runtime in such a system), which then continues running a different task. The state of execution is stored in each stack, so in such a solution, there would be no need for async, await, Future, or Pin. In many ways, green threads mimic how an operating system facilitates concurrency, and implementing them is a great learning experience. A runtime using fibers/green threads for concurrent tasks can have a high degree of flexibility. Tasks can, for example, be pre-empted and context switched at any time and at any point in their execution, so a long-running task that hogs the CPU could in theory be pre-empted by the runtime, acting as a safeguard from having tasks that end up blocking the whole system due to an edge-case or a programmer error. This gives the runtime scheduler almost the same capabilities as the OS scheduler, which is one of the biggest advantages of systems using fibers/green threads.
How Programming Languages Model Asynchronous Program Flow 34 The typical flow goes as follows: Y ou run some non-blocking code Y ou make a blocking call to some external resource The CPU jumps to the main thread, which schedules a different thread to run and jumps to that stack Y ou run some non-blocking code on the new thread until a new blocking call or the task is finished The CPU jumps back to the main thread, schedules a new thread that is ready to make progress, and jumps to that thread Figure 2. 2-Program flow using fibers/green threads Each stack has a fixed space As fibers and green threads are similar to OS threads, they do have some of the same drawbacks as well. Each task is set up with a stack of a fixed size, so you still have to reserve more space than you actually use. However, these stacks can be growable, meaning that once the stack is full, the runtime can grow the stack. While this sounds easy, it's a rather complicated problem to solve. We can't simply grow a stack as we grow a tree. What actually needs to happen is one of two things: 1. Y ou allocate a new piece of continuous memory and handle the fact that your stack is spread over two disjointed memory segments 2. Y ou allocate a new larger stack (for example, twice the size of the previous stack), move all your data over to the new stack, and continue from there
Fibers and green threads 35 The first solution sounds pretty simple, as you can leave the original stack as it is, and you can basically context switch over to the new stack when needed and continue from there. However, modern CPUs can work extremely fast if they can work on a contiguous piece of memory due to caching and their ability to predict what data your next instructions are going to work on. Spreading the stack over two disjointed pieces of memory will hinder performance. This is especially noticeable when you have a loop that happens to be just at the stack boundary, so you end up making up to two context switches for each iteration of the loop. The second solution solves the problems with the first solution by having the stack as a contiguous piece of memory, but it comes with some problems as well. First, you need to allocate a new stack and move all the data over to the new stack. But what happens with all pointers and references that point to something located on the stack when everything moves to a new location? Y ou guessed it: every pointer and reference to anything located on the stack needs to be updated so they point to the new location. This is complex and time-consuming, but if your runtime already includes a garbage collector, you already have the overhead of keeping track of all your pointers and references anyway, so it might be less of a problem than it would for a non-garbage collected program. However, it does require a great deal of integration between the garbage collector and the runtime to do this every time the stack grows, so implementing this kind of runtime can get very complicated. Secondly, you have to consider what happens if you have a lot of long-running tasks that only require a lot of stack space for a brief period of time (for example, if it involves a lot of recursion at the start of the task) but are mostly I/O bound the rest of the time. Y ou end up growing your stack many times over only for one specific part of that task, and you have to make a decision whether you will accept that the task occupies more space than it needs or at some point move it back to a smaller stack. The impact this will have on your program will of course vary greatly based on the type of work you do, but it's still something to be aware of. Context switching Even though these fibers/green threads are lightweight compared to OS threads, you still have to save and restore registers at every context switch. This likely won't be a problem most of the time, but when compared to alternatives that don't require context switching, it can be less efficient. Context switching can also be pretty complex to get right, especially if you intend to support many different platforms. Scheduling When a fiber/green thread yields to the runtime scheduler, the scheduler can simply resume execution on a new task that's ready to run. This means that you avoid the problem of being put in the same run queue as every other task in the system every time you yield to the scheduler. From the OS perspective, your threads are busy doing work all the time, so it will try to avoid pre-empting them if it can.
How Programming Languages Model Asynchronous Program Flow 36 One unexpected downside of this is that most OS schedulers make sure all threads get some time to run by giving each OS thread a time slice where it can run before the OS pre-empts the thread and schedules a new thread on that CPU. A program using many OS threads might be allotted more time slices than a program with fewer OS threads. A program using M:N threading will most likely only use a few OS threads (one thread per CPU core seems to be the starting point on most systems). So, depending on whatever else is running on the system, your program might be allotted fewer time slices in total than it would be using many OS threads. However, with the number of cores available on most modern CPUs and the typical workload on concurrent systems, the impact from this should be minimal. FFI Since you create your own stacks that are supposed to grow/shrink under certain conditions and might have a scheduler that assumes it can pre-empt running tasks at any point, you will have to take extra measures when you use FFI. Most FFI functions will assume a normal OS-provided C-stack, so it will most likely be problematic to call an FFI function from a fiber/green thread. Y ou need to notify the runtime scheduler, context switch to a different OS thread, and have some way of notifying the scheduler that you're done and the fiber/green thread can continue. This naturally creates overhead and added complexity both for the runtime implementor and the user making the FFI call. Advantages It is simple to use for the user. The code will look like it does when using OS threads. Context switching is reasonably fast. Abundant memory usage is less of a problem when compared to OS threads. Y ou are in full control over how tasks are scheduled and if you want you can prioritize them as you see fit. It's easy to incorporate pre-emption, which can be a powerful feature. Drawbacks Stacks need a way to grow when they run out of space creating additional work and complexity Y ou still need to save the CPU state on every context switch It's complicated to implement correctly if you intend to support many platforms and/or CPU architectures FFI can have a lot of overhead and add unexpected complexity
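As a reminder of what the FFI point above refers to, here is about the simplest possible FFI call in Rust: declaring a C function and calling it. getpid lives in libc, which Rust programs on Unix-like systems already link against, so this sketch is Unix-only. Foreign code like this assumes an ordinary, OS-provided C stack, which is exactly what a fiber/green-thread runtime has to work around:

// Declare a function implemented in C (here: libc's getpid on Unix-like
// systems). Calling across this boundary is what we mean by FFI.
extern "C" {
    fn getpid() -> i32;
}

fn main() {
    // The foreign function knows nothing about our runtime, our small
    // growable stacks, or our scheduler - it just assumes a normal C stack.
    let pid = unsafe { getpid() };
    println!("running as process {pid}");
}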
Callback based approaches 37 Callback based approaches Note! This is another example of M:N threading. Many tasks can run concurrently on one OS thread. Each task consists of a chain of callbacks. Y ou probably already know what we're going to talk about in the next paragraphs from Java Script, which I assume most know. The whole idea behind a callback-based approach is to save a pointer to a set of instructions we want to run later together with whatever state is needed. In Rust, this would be a closure. Implementing callbacks is relatively easy in most languages. They don't require any context switching or pre-allocated memory for each task. However, representing concurrent operations using callbacks requires you to write the program in a radically different way from the start. Re-writing a program that uses a normal sequential program flow to one using callbacks represents a substantial rewrite, and the same goes the other way. Callback-based concurrency can be hard to reason about and can become very complicated to understand. It's no coincidence that the term “callback hell” is something most Java Script developers are familiar with. Since each sub-task must save all the state it needs for later, the memory usage will grow linearly with the number of callbacks in a task. Advantages Easy to implement in most languages No context switching Relatively low memory overhead (in most cases) Drawbacks Memory usage grows linearly with the number of callbacks. Programs and code can be hard to reason about. It's a very different way of writing programs and it will affect almost all aspects of the program since all yielding operations require one callback. Ownership can be hard to reason about. The consequence is that writing callback-based programs without a garbage collector can become very difficult.
Sharing state between tasks is difficult due to the complexity of ownership rules.
Debugging callbacks can be difficult.

Coroutines: promises and futures

Note!
This is another example of M:N threading. Many tasks can run concurrently on one OS thread. Each task is represented as a state machine.

Promises in JavaScript and futures in Rust are two different implementations that are based on the same idea. There are differences between different implementations, but we'll not focus on those here. It's worth explaining promises a bit since they're widely known due to their use in JavaScript. Promises also have a lot in common with Rust's futures.

First of all, many languages have a concept of promises, but I'll use the one from JavaScript in the following examples. Promises are one way to deal with the complexity that comes with a callback-based approach.

Instead of:

setTimer(200, () => {
  setTimer(100, () => {
    setTimer(50, () => {
      console.log("I'm the last one");
    });
  });
});

We can do:

function timer(ms) {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

timer(200)
  .then(() => timer(100))
  .then(() => timer(50))
  .then(() => console.log("I'm the last one"));
Coroutines: promises and futures 39 The latter approach is also referred to as the continuation-passing style. Each subtask calls a new one once it's finished. The difference between callbacks and promises is even more substantial under the hood. Y ou see, promises return a state machine that can be in one of three states: pending, fulfilled, or rejected. When we call timer(200) in the previous example, we get back a promise in the pending state. Now, the continuation-passing style does fix some of the issues related to callbacks, but it still retains a lot of them when it comes to complexity and the different ways of writing programs. However, they enable us to leverage the compiler to solve a lot of these problems, which we'll discuss in the next paragraph. Coroutines and async/await Coroutines come in two flavors: asymmetric and symmetric. Asymmetric coroutines yields to a scheduler, and they're the ones we'll focus on. Symmetric coroutines yield a specific destination; for example, a different coroutine. While coroutines are a pretty broad concept in general, the introduction of coroutines as objects in programming languages is what really makes this way of handling concurrency rival the ease of use that OS threads and fibers/green threads are known for. Y ou see when you write async in Rust or Java Script, the compiler re-writes what looks like a normal function call into a future (in the case of Rust) or a promise (in the case of Java Script). Await, on the other hand, yields control to the runtime scheduler, and the task is suspended until the future/ promise you're awaiting has finished. This way, we can write programs that handle concurrent operations in almost the same way we write our normal sequential programs. Our Java Script program can now be written as follows: async function run() { await timer(200); await timer(100); await timer(50); console. log("I'm the last one"); } Y ou can consider the run function as a pausable task consisting of several sub-tasks. On each “await” point, it yields control to the scheduler (in this case, it's the well-known Java Script event loop). Once one of the sub-tasks changes state to either fulfilled or rejected, the task is scheduled to continue to the next step.
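For comparison, here's a sketch of the same timer chain written with Rust's async/await. Rust doesn't ship a runtime or an async timer in its standard library, so this sketch assumes the third-party Tokio crate purely to have an executor and a sleep future; the language feature itself is runtime-agnostic:

use std::time::Duration;

// Assumes the Tokio crate (not part of the standard library) for the
// executor and the asynchronous sleep timer.
async fn timer(ms: u64) {
    tokio::time::sleep(Duration::from_millis(ms)).await;
}

#[tokio::main]
async fn main() {
    timer(200).await;
    timer(100).await;
    timer(50).await;
    println!("I'm the last one");
}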
When using Rust, you can see the same transformation happening with the function signature when you write something such as this:

async fn run() -> () { ... }

The function wraps the return object, and instead of returning the type (), it returns a Future with an output type of ():

fn run() -> impl Future<Output = ()>

Syntactically, Rust's futures 0.1 was a lot like the promise example we just showed, and the Rust futures we use today have a lot in common with how async/await works in JavaScript.

This way of rewriting what looks like normal functions and code into something else has a lot of benefits, but it's not without its drawbacks.

As with any stackless coroutine implementation, full pre-emption can be hard, or impossible, to implement. These functions have to yield at specific points, and there is no way to suspend execution in the middle of a stack frame, in contrast to fibers/green threads. Some level of pre-emption is possible by having the runtime or compiler insert pre-emption points at every function call, for example, but it's not the same as being able to pre-empt a task at any point during its execution.

Pre-emption points
Pre-emption points can be thought of as inserting code that calls into the scheduler and asks it if it wishes to pre-empt the task. These points can be inserted by the compiler or the library you use before every new function call, for example.

Furthermore, you need compiler support to make the most out of it. Languages that have metaprogramming abilities (such as macros) can emulate much of the same, but this will still not be as seamless as it will be when the compiler is aware of these special async tasks.

Debugging is another area where care must be taken when implementing futures/promises. Since the code is re-written as state machines (or generators), you won't have the same stack traces as you do with normal functions. Usually, you can assume that the caller of a function is what precedes it both in the stack and in the program flow. For futures and promises, it might be the runtime that calls the function that progresses the state machine, so there might not be a good backtrace you can use to see what happened before calling the function that failed. There are ways to work around this, but most of them will incur some overhead.

Advantages
You can write code and model programs the same way you normally would
No context switching
Summary 41 It can be implemented in a very memory-efficient way It's easy to implement for various platforms Drawbacks Pre-emption can be hard, or impossible, to fully implement, as the tasks can't be stopped in the middle of a stack frame It needs compiler support to leverage its full advantages Debugging can be difficult both due to the non-sequential program flow and the limitations on the information you get from the backtraces. Summary Y ou're still here? That's excellent! Good job on getting through all that background information. I know going through text that describes abstractions and code can be pretty daunting, but I hope you see why it's so valuable for us to go through these higher-level topics now at the start of the book. We'll get to the examples soon. I promise! In this chapter, we went through a lot of information on how we can model and handle asynchronous operations in programming languages by using both OS-provided threads and abstractions provided by a programming language or a library. While it's not an extensive list, we covered some of the most popular and widely used technologies while discussing their advantages and drawbacks. We spent quite some time going in-depth on threads, coroutines, fibers, green threads, and callbacks, so you should have a pretty good idea of what they are and how they're different from each other. The next chapter will go into detail about how we do system calls and create cross-platform abstractions and what OS-backed event queues such as Epoll, Kqueue, and IOCP really are and why they're fundamental to most async runtimes you'll encounter out in the wild.
3 Understanding OS-Backed Event Queues, System Calls, and Cross-Platform Abstractions In this chapter, we'll take a look at how an OS-backed event queue works and how three different operating systems handle this task in different ways. The reason for going through this is that most async runtimes I know of use OS-backed event queues such as this as a fundamental part of achieving high-performance I/O. Y ou'll most likely hear references to these frequently when reading about how async code really works. Event queues based on the technology we discuss in this chapter is used in many popular libraries like: mio (https://github. com/tokio-rs/mio ), a key part of popular runtimes like Tokio polling (https://github. com/smol-rs/polling ), the event queue used in Smol and async-std libuv (https://libuv. org/ ), the library used to create the event queue used in Node. js (a Java Script runtime) and the Julia programming language C# for its asynchronous network calls Boost. Asio, a library for asynchronous network I/O for C++ All our interactions with the host operating system are done through system calls (syscalls ). To make a system call using Rust, we need to know how to use Rust's foreign function interface (FFI). In addition to knowing how to use FFI and make syscalls, we need to cover cross-platform abstractions. When creating an event queue, whether you create it yourself or use a library, you'll notice that the
Understanding OS-Backed Event Queues, System Calls, and Cross-Platform Abstractions 44 abstractions might seem a bit unintuitive if you only have a high-level overview of how, for example, IOCP works on Windows. The reason for this is that these abstractions need to provide one API that covers the fact that different operating systems handle the same task differently. This process often involves identifying a common denominator between the platforms and building a new abstraction on top of that. Instead of using a rather complex and lengthy example to explain FFI, syscalls, and cross-platform abstractions, we'll ease into the topic using a simple example. When we encounter these concepts later on, we'll already know these subjects well enough, so we're well prepared for the more interesting examples in the following chapters. In this chapter, we'll go through the following main topics: Why use an OS-backed event queue? Readiness-based event queues Completion-based event queues epoll kqueue IOCP Syscalls, FFI, and cross-platform abstractions Note There are popular, although lesser-used, alternatives you should know about even though we don't cover them here: wepoll : This uses specific APIs on Windows and wraps IOCP so it closely resembles how epoll works on Linux in contrast to regular IOCP. This makes it easier to create an abstraction layer with the same API on top of the two different technologies. It's used by both libuv and mio. io_uring : This is a relatively new API on Linux with many similarities to IOCP on Windows. I'm pretty confident that after you've gone through the next two chapters, you will have an easy time reading up on these if you want to learn more about them. Technical requirements This chapter doesn't require you to set up anything new, but since we're writing some low-level code for three different platforms, you need access to these platforms if you want to run all the examples. The best way to follow along is to open the accompanying repository on your computer and navigate to the ch03 folder.
Why use an OS-backed event queue? 45 This chapter is a little special since we build some basic understanding from the ground up, which means some of it is quite low-level and requires a specific operating system and CPU family to run. Don't worry; I've chosen the most used and popular CPU, so this shouldn't be a problem, but it is something you need to be aware of. The machine must use a CPU using the x86-64 instruction set on Windows and Linux. Intel and AMD desktop CPUs use this architecture, but if you run Linux (or WSL) on a machine using an ARM processor you might encounter issues with some of the examples using inline assembly. On mac OS, the example in the book targets the newer M-family of chips, but the repository also contains examples targeting the older Intel-based Macs. Unfortunately, some examples targeting specific platforms require that specific operating system to run. However, this will be the only chapter where you need access to three different platforms to run all the examples. Going forward, we'll create examples that will run on all platforms either natively or using Windows Subsystem for Linux (WSL ), but to understand the basics of cross-platform abstractions, we need to actually create examples that target these different platforms. Running the Linux examples If you don't have a Linux machine set up, you can run the Linux example on the Rust Playground, or if you're on a Windows system, my suggestion is to set up WSL and run the code there. Y ou can find the instructions on how to do that at https://learn. microsoft. com/en-us/windows/ wsl/install. Remember, you have to install Rust in the WSL environment as well, so follow the instructions in the Preface section of this book on how to install Rust on Linux. If you use VS Code as your editor, there is a very simple way of switching your environment to WSL. Press Ctrl+Shift +P and write Reopen folder in WSL. This way, you can easily open the example folder in WSL and run the code examples using Linux there. Why use an OS-backed event queue? Y ou already know by now that we need to cooperate closely with the OS to make I/O operations as efficient as possible. Operating systems such as Linux, mac OS, and Windows provide several ways of performing I/O, both blocking and non-blocking.
Understanding OS-Backed Event Queues, System Calls, and Cross-Platform Abstractions 46 I/O operations need to go through the operating system since they are dependent on resources that our operating system abstracts over. This can be the disk drive, the network card, or other peripherals. Especially in the case of network calls, we're not only dependent on our own hardware, but we also depend on resources that might reside far away from our own, causing a significant delay. In the previous chapter, we covered different ways to handle asynchronous operations when programming, and while they're all different, they all have one thing in common: they need control over when and if they should yield to the OS scheduler when making a syscall. In practice, this means that syscalls that normally would yield to the OS scheduler (blocking calls) needs to be avoided and we need to use non-blocking calls instead. We also need an efficient way to know the status of each call so we know when the task that made the otherwise blocking call is ready to progress. This is the main reason for using an OS-backed event queue in an asynchronous runtime. We'll look at three different ways of handling an I/O operation as an example. Blocking I/O When we ask the operating system to perform a blocking operation, it will suspend the OS thread that makes the call. It will then store the CPU state it had at the point where we made the call and go on to do other things. When data arrives for us through the network, it will wake up our thread again, restore the CPU state, and let us resume as if nothing has happened. Blocking operations are the least flexible to use for us as programmers since we yield control to the OS at every call. The big advantage is that our thread gets woken up once the event we're waiting for is ready so we can continue. If we take the whole system running on the OS into account, it's a pretty efficient solution since the OS will give threads that have work to do time on the CPU to progress. However, if we narrow the scope to look at our process in isolation, we find that every time we make a blocking call, we put a thread to sleep, even if we still have work that our process could do. This leaves us with the choice of spawning new threads to do work on or just accepting that we have to wait for the blocking call to return. We'll go a little more into detail about this later. Non-blocking I/O Unlike a blocking I/O operation, the OS will not suspend the thread that made an I/O request, but instead give it a handle that the thread can use to ask the operating system if the event is ready or not. We call the process of querying for status polling. Non-blocking I/O operations give us as programmers more freedom, but, as usual, that comes with a responsibility. If we poll too often, such as in a loop, we will occupy a lot of CPU time just to ask for an updated status, which is very wasteful. If we poll too infrequently, there will be a significant delay between an event being ready and us doing something about it, thus limiting our throughput.
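A sketch of what that polling looks like with Rust's standard library (the address is a placeholder): the read call returns WouldBlock instead of suspending our thread, and it's entirely up to us how often we come back to ask again:

use std::io::{ErrorKind, Read};
use std::net::TcpStream;

fn main() -> std::io::Result<()> {
    let mut stream = TcpStream::connect("localhost:8080")?;
    // Ask the OS for a non-blocking socket: reads now return immediately.
    stream.set_nonblocking(true)?;

    let mut buf = vec![0u8; 4096];
    loop {
        match stream.read(&mut buf) {
            Ok(n) => {
                println!("read {n} bytes");
                break;
            }
            // Not ready yet. How long we wait before polling again is the
            // trade-off between wasted CPU time and added latency.
            Err(e) if e.kind() == ErrorKind::WouldBlock => {
                std::thread::sleep(std::time::Duration::from_millis(10));
            }
            Err(e) => return Err(e),
        }
    }
    Ok(())
}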
Readiness-based event queues 47 Event queuing via epoll/kqueue and IOCP This is a sort of hybrid of the previous approaches. In the case of a network call, the call itself will be non-blocking. However, instead of polling the handle regularly, we can add that handle to an event queue, and we can do that with thousands of handles with very little overhead. As programmers, we now have a new choice. We can either query the queue with regular intervals to check if any of the events we added have changed status or we can make a blocking call to the queue, telling the OS that we want to be woken up when at least one event in our queue has changed status so that the task that was waiting for that specific event can continue. This allows us to only yield control to the OS when there is no more work to do and all tasks are waiting for an event to occur before they can progress. We can decide exactly when we want to issue such a blocking call ourselves. Note We will not cover methods such as poll and select. Most operating systems have methods that are older and not widely used in modern async runtimes today. Just know that there are other calls we can make that essentially seek to give the same flexibility as the event queues we just discussed. Readiness-based event queues epoll and kqueue are known as readiness-based event queues, which means they let you know when an action is ready to be performed. An example of this is a socket that is ready to be read from. To give an idea about how this works in practice, we can take a look at what happens when we read data from a socket using epoll/kqueue: 1. We create an event queue by calling the syscall epoll_create or kqueue. 2. We ask the OS for a file descriptor representing a network socket. 3. Through another syscall, we register an interest in Read events on this socket. It's important that we also inform the OS that we'll be expecting to receive a notification when the event is ready in the event queue we created in step 1. 4. Next, we call epoll_wait or kevent to wait for an event. This will block (suspend) the thread it's called on. 5. When the event is ready, our thread is unblocked (resumed) and we return from our wait call with data about the event that occurred.
Understanding OS-Backed Event Queues, System Calls, and Cross-Platform Abstractions 48 6. We call read on the socket we created in step 2. Figure 3. 1-A simplified view of the epoll and kqueue flow Completion-based event queues IOCP stands for input/output completion port. This is a completion-based event queue. This type of queue notifies you when events are completed. An example of this is when data has been read into a buffer. The following is a basic breakdown of what happens in this type of event queue: 1. We create an event queue by calling the syscall Create Io Completion Port. 2. We create a buffer and ask the OS to give us a handle to a socket. 3. We register an interest in Read events on this socket with another syscall, but this time we also pass in the buffer we created in (step 2), which the data will be read to. 4. Next, we call Get Queued Completion Status Ex, which will block until an event has been completed.
epoll, kqueue, and IOCP 49 5. Our thread is unblocked and our buffer is now filled with the data we're interested in. Figure 3. 2-A simplified view of the IOCP flow epoll, kqueue, and IOCP epoll is the Linux way of implementing an event queue. In terms of functionality, it has a lot in common with kqueue. The advantage of using epoll over other similar methods on Linux, such as select or poll, is that epoll was designed to work very efficiently with a large number of events. kqueue is the mac OS way of implementing an event queue (which originated from BSD) in operating systems such as Free BSD and Open BSD. In terms of high-level functionality, it's similar to epoll in concept but different in actual use. IOCP is the way Windows handle this type of event queue. In Windows, a completion port will let you know when an event has been completed. Now, this might sound like a minor difference, but it's not. This is especially apparent when you want to write a library since abstracting over both means you'll either have to model IOCP as readiness-based or model epoll/kqueue as completion-based.
Lending out a buffer to the OS also provides some challenges since it's very important that this buffer stays untouched while waiting for an operation to return.

    Windows            Linux              macOS
    IOCP               epoll              kqueue
    Completion based   Readiness based    Readiness based

Table 3.1 - Different platforms and event queues

Cross-platform event queues

When creating a cross-platform event queue, you have to deal with the fact that you have to create one unified API that's the same whether it's used on Windows (IOCP), macOS (kqueue), or Linux (epoll). The most obvious difference is that IOCP is completion-based while kqueue and epoll are readiness-based. This fundamental difference means that you have to make a choice:

- You can create an abstraction that treats kqueue and epoll as completion-based queues, or
- You can create an abstraction that treats IOCP as a readiness-based queue

From my personal experience, it's a lot easier to create an abstraction that mimics a completion-based queue and handle the fact that kqueue and epoll are readiness-based behind the scenes than the other way around. The use of wepoll, as I alluded to earlier, is one way of creating a readiness-based queue on Windows. It will simplify creating such an API greatly, but we'll leave that out for now because it's less well known and not an approach that's officially documented by Microsoft.

Since IOCP is completion-based, it needs a buffer to read data into, since it returns when data has been read into that buffer. kqueue and epoll, on the other hand, don't require that. They'll only return when you can read data into a buffer without blocking. By requiring the user to supply a buffer of their preferred size to our API, we let the user control how they want to manage their memory: the user defines the size of the buffers, decides how they're reused, and controls all aspects of the memory that will be passed to the OS when using IOCP. In the case of epoll and kqueue in such an API, you can simply call read for the user and fill the same buffers, making it appear to the user that the API is completion-based.
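To make the shape of such a unified API a little more tangible, here is a purely hypothetical sketch. This is neither mio's API nor the one we build later in the book; it only shows how a caller-supplied buffer lets one interface cover both models. An IOCP backend could hand the buffer straight to the OS, while an epoll/kqueue backend would perform the read on the user's behalf before reporting the completion:

use std::io::Result;
use std::net::TcpStream;

/// One completed read operation, identified by the token the caller chose.
pub struct Completion {
    pub token: usize,
    pub bytes_read: usize,
}

/// Hypothetical cross-platform event queue. Each platform would provide its
/// own implementation behind this trait.
pub trait EventQueue {
    /// The caller supplies and owns the buffer, as discussed above.
    fn register_read(&mut self, source: &TcpStream, token: usize, buf: Vec<u8>) -> Result<()>;

    /// Blocks until at least one operation has completed or the timeout
    /// expires, and returns the number of completions written to `completions`.
    fn wait(&mut self, completions: &mut Vec<Completion>, timeout_ms: Option<i32>) -> Result<usize>;
}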
System calls, FFI, and cross-platform abstractions 51 If you wanted to present a readiness-based API instead, you have to create an illusion of having two separate operations when doing I/O on Windows. First, request a notification when the data is ready to be read on a socket, and then actually read the data. While possible to do, you'll most likely find yourself having to create a very complex API or accept some inefficiencies on Windows platforms due to having intermediate buffers to keep the illusion of having a readiness-based API. We'll leave the topic of event queues for when we go on to create a simple example showing how exactly they work. Before we do that, we need to become really comfortable with FFI and syscalls, and we'll do that by writing an example of a syscall on three different platforms. We'll also use this opportunity to talk about abstraction levels and how we can create a unified API that works on the three different platforms. System calls, FFI, and cross-platform abstractions We' ll implement a very basic syscall for the three architectures: BSD/mac OS, Linux, and Windows. We'll also see how this is implemented in three levels of abstraction. The syscall we'll implement is the one used when we write something to the standard output (stdout ) since that is such a common operation and it's interesting to see how it really works. We'll start off by looking at the lowest level of abstraction we can use to make system calls and build our understanding of them from the ground up. The lowest level of abstraction The lowest level of abstraction is to write what is often referred to as a “raw” syscall. A raw syscall is one that bypasses the OS-provided library for making syscalls and instead relies on the OS having a stable syscall ABI. A stable syscall ABI means it guarantees that if you put the right data in certain registers and call a specific CPU instruction that passes control to the OS, it will always do the same thing. To make a raw syscall, we need to write a little inline assembly, but don't worry. Even though we introduce it abruptly here, we'll go through it line by line, and in Chapter 5, we'll introduce inline assembly in more detail so you become familiar with it. At this level of abstraction, we need to write different code for BSD/mac OS, Linux, and Windows. We also need to write different code if the OS is running on different CPU architectures.
Raw syscall on Linux

On Linux and macOS, the syscall we want to invoke is called write. Both systems operate based on the concept of file descriptors, and stdout is already present when you start a process.

If you don't run Linux on your machine, you have some options to run this example. You can copy and paste the code into the Rust Playground or you can run it using WSL in Windows. As mentioned in the introduction, I'll list what example you need to go to at the start of each example and you can run the example there by writing cargo run. The source code itself is always located in the example folder at src/main.rs.

The first thing we do is to pull in the standard library module that gives us access to the asm! macro.

Repository reference: ch03/a-raw-syscall

use std::arch::asm;

The next step is to write our syscall function:

#[inline(never)]
fn syscall(message: String) {
    let msg_ptr = message.as_ptr();
    let len = message.len();
    unsafe {
        asm!(
            "mov rax, 1",
            "mov rdi, 1",
            "syscall",
            in("rsi") msg_ptr,
            in("rdx") len,
            out("rax") _,
            out("rdi") _,
            lateout("rsi") _,
            lateout("rdx") _
        );
    }
}

We'll go through this first one line by line. The next ones will be pretty similar, so we only need to cover this in great detail once.
System calls, FFI, and cross-platform abstractions 53 First, we have an attribute named #[inline(never)] that tells the compiler that we never want this function to be inlined during optimization. Inlining is when the compiler omits the function call and simply copies the body of the function instead of calling it. In this case, we don't want that to ever happen. Next, we have our function call. The first two lines in the function simply get the raw pointer to the memory location where our text is stored and the length of the text buffer. The next line is an unsafe block since there is no way to call assembly such as this safely in Rust. The first line of assembly puts the value 1 in the rax register. When the CPU traps our call later on and passes control to the OS, the kernel knows that a value of one in rax means that we want to make a write. The second line puts the value 1 in the rdi register. This tells the kernel where we want to write to, and a value of one means that we want to write to stdout. The third line calls the syscall instruction. This instruction issues a software interrupt, and the CPU passes on control to the OS. Rust's inline assembly syntax will look a little intimidating at first, but bear with me. We'll cover this in detail a little later in this book so that you get comfortable with it. For now, I'll just briefly explain what it does. The fourth line writes the address to the buffer where our text is stored in the rsi register. The fifth line writes the length (in bytes) of our text buffer to the rdx register. The next four lines are not instructions to the CPU; they're meant to tell the compiler that it can't store anything in these registers and assume the data is untouched when we exit the inline assembly block. We do that by telling the compiler that there will be some unspecified data (indicated by the underscore) written to these registers. Finally, it's time to call our raw syscall: fn main() { let message = "Hello world from raw syscall!\n"; let message = String::from(message); syscall(message); } This function simply creates a String and calls our syscall function, passing it in as an argument.
If you run this on Linux, you should now see the following message in your console:

Hello world from raw syscall!

Raw syscall on macOS

Now, since we use instructions that are specific to the CPU architecture, we'll need different functions depending on whether you run an older Mac with an Intel CPU or a newer Mac with an ARM64-based CPU. We only present the one working for the new M series of chips using the ARM64 architecture, but don't worry: if you've cloned the GitHub repository, you'll find code that works on both versions of Mac there. Since there are only minor changes, I'll present the whole example here and just go through the differences. Remember, you need to run this code on a machine with macOS and an M-series chip. You can't try this in the Rust Playground.

ch03/a-raw-syscall

use std::arch::asm;

fn main() {
    let message = "Hello world from raw syscall!\n";
    let message = String::from(message);
    syscall(message);
}

#[inline(never)]
fn syscall(message: String) {
    let ptr = message.as_ptr();
    let len = message.len();
    unsafe {
        asm!(
            "mov x16, 4",
            "mov x0, 1",
            "svc 0",
            in("x1") ptr,
            in("x2") len,
            out("x16") _,
            out("x0") _,
            lateout("x1") _,
            lateout("x2") _
        );
    }
}
Aside from different register naming, there is not that much difference from the one we wrote for Linux, with the exception that a write operation has the code 4 on macOS instead of 1 as it did on Linux. Also, the CPU instruction that issues a software interrupt is svc 0 instead of syscall. Again, if you run this on macOS, you'll get the following printed to your console:

Hello world from raw syscall!

What about raw syscalls on Windows?

This is a good opportunity to explain why writing raw syscalls, as we just did, is a bad idea if you want your program or library to work across platforms. You see, if you want your code to work far into the future, you have to worry about what guarantees the OS gives. Linux guarantees that, for example, the value 1 written to the rax register will always refer to write, but Linux works on many platforms, and not everyone uses the same CPU architecture. We have the same problem with macOS, which just recently changed from an Intel-based x86_64 architecture to an ARM64-based architecture.

Windows gives absolutely zero guarantees when it comes to low-level internals such as this. Windows has changed its internals numerous times and provides no official documentation on this matter. The only things we have are reverse-engineered tables that you can find on the internet, but these are not a robust solution since what was a write syscall can be changed to a delete syscall the next time you run Windows Update. Even if that's unlikely, you have no guarantee, which in turn makes it impossible for you to guarantee to users of your program that it's going to work in the future.

So, while raw syscalls in theory do work and are good to be familiar with, they mostly serve as an example of why we'd rather link to the libraries that the different operating systems supply for us when making syscalls. The next segment will show how we do just that.

The next level of abstraction

The next level of abstraction is to use the API that all three operating systems provide for us. We'll soon see that this abstraction helps us remove some code.

In this specific example, the syscall is the same on Linux and on macOS, so we only need special treatment when we're on Windows. We can differentiate between the platforms by using the #[cfg(target_family = "windows")] and #[cfg(target_family = "unix")] conditional compilation flags. You'll see these used in the example in the repository.

Our main function will look the same as it did before:

ch03/b-normal-syscall

use std::io;

fn main() {
    let message = "Hello world from syscall!\n";
    let message = String::from(message);
    syscall(message).unwrap();
}

The only difference is that instead of pulling in the asm module, we pull in the io module.

Using the OS-provided API in Linux and macOS

You can run this code directly in the Rust Playground since it runs on Linux, or you can run it locally on a Linux machine using WSL or on macOS:

ch03/b-normal-syscall

#[cfg(target_family = "unix")]
#[link(name = "c")]
extern "C" {
    fn write(fd: u32, buf: *const u8, count: usize) -> i32;
}

fn syscall(message: String) -> io::Result<()> {
    let msg_ptr = message.as_ptr();
    let len = message.len();
    let res = unsafe { write(1, msg_ptr, len) };
    if res == -1 {
        return Err(io::Error::last_os_error());
    }
    Ok(())
}

Let's go through the different steps one by one. Knowing how to do a proper syscall will be very useful for us later on in this book.

#[link(name = "c")]

Every Linux (and macOS) installation comes with a version of libc, which is a C library for communicating with the operating system. Having libc, with a consistent API, allows us to program the same way without worrying about the underlying platform architecture. Kernel developers can also make changes to the underlying ABI without breaking everyone's program. This flag tells the compiler to link to the "c" library on the system.

Next up is the definition of what functions in the linked library we want to call:

extern "C" {
    fn write(fd: u32, buf: *const u8, count: usize) -> i32;
}
extern "C" (sometimes written without the "C", since "C" is assumed if nothing is specified) means we want to use the "C" calling convention when calling the function write in the "C" library we're linking to. This function needs to have the exact same name as the function in the library we're linking to. The parameters don't have to have the same name, but they must be in the same order. It's good practice to name them the same as in the library you're linking to.

Here, we use Rust's FFI, so when you read about using FFI to call external functions, it's exactly what we're doing here.

The write function takes a file descriptor, fd, which in this case is a handle to stdout. In addition, it expects a pointer to an array of u8 values, buf, and the length of that buffer, count.

Calling convention

This is the first time we've encountered this term, so I'll go over a brief explanation, even though we dive deeper into this topic later in the book. A calling convention defines how function calls are performed and will, amongst other things, specify:

- How arguments are passed into the function
- Which registers the function is expected to save at the start and restore before returning
- How the function returns its result
- How the stack is set up (we'll get back to this one later)

So, before you call a foreign function, you need to specify what calling convention to use since there is no way for the compiler to know if we don't tell it. The C calling convention is by far the most common one to encounter.

Next, we wrap the call to our linked function in a normal Rust function.

ch03/b-normal-syscall

#[cfg(target_family = "unix")]
fn syscall(message: String) -> io::Result<()> {
    let msg_ptr = message.as_ptr();
    let len = message.len();
    let res = unsafe { write(1, msg_ptr, len) };
    if res == -1 {
        return Err(io::Error::last_os_error());
    }
    Ok(())
}
You'll probably be familiar with the first two lines now, as they're the same as we wrote for our raw syscall example. We get the pointer to the buffer where our text is stored and the length of that buffer.

Next is our call to the write function in libc, which needs to be wrapped in an unsafe block since Rust can't guarantee safety when calling external functions.

You might wonder how we know that the value 1 refers to the file handle of stdout. You'll meet this situation a lot when writing syscalls from Rust. Usually, constants like this are defined in the C header files, so we have to look them up there ourselves. 1 is always the file handle to stdout on UNIX systems, so it's easy to remember.

Note
Wrapping the libc functions and providing these constants is exactly what the libc crate (https://github.com/rust-lang/libc) provides for us. Most of the time, you can use that instead of doing all the manual work of linking to and defining functions as we do here.

Lastly, we have the error handling, and you'll see this all the time when using FFI. C functions often use a specific integer to indicate whether the function call was successful or not. In the case of this write call, the function will either return the number of bytes written or, if there is an error, it will return the value -1. You'll find this information easily by reading the man pages (https://man7.org/linux/man-pages/index.html) for Linux. If there is an error, we use the built-in function in Rust's standard library to query the OS for the last error it reported for this process and convert that to a Rust io::Error type.

If you run this function using cargo run, you will see this output:

Hello world from syscall!

Using the Windows API

On Windows, things work a bit differently. While UNIX models almost everything as "files" you interact with, Windows uses other abstractions. On Windows, you get a handle that represents some object you can interact with in specific ways depending on exactly what kind of handle you have.

We will use the same main function as before, but we need to link to different functions in the Windows API and make changes to our syscall function.

ch03/b-normal-syscall

#[link(name = "kernel32")]
extern "system" {
    fn GetStdHandle(nStdHandle: i32) -> i32;
    fn WriteConsoleW(
        hConsoleOutput: i32,
        lpBuffer: *const u16,
        numberOfCharsToWrite: u32,
        lpNumberOfCharsWritten: *mut u32,
        lpReserved: *const std::ffi::c_void,
    ) -> i32;
}

The first thing you notice is that we no longer link to the "C" library. Instead, we link to the kernel32 library.

The next change is the use of the system calling convention. This calling convention is a bit peculiar. You see, Windows uses different calling conventions depending on whether you write for a 32-bit x86 Windows version or a 64-bit x86_64 Windows version. Newer Windows versions running on x86_64 use the "C" calling convention, so if you have a newer system you can try changing that out and see that it still works. Specifying "system" lets the compiler figure out the right one to use based on the system.

We link to two different syscalls in Windows:

- GetStdHandle: This retrieves a reference to a standard device like stdout
- WriteConsoleW: WriteConsole comes in two types. WriteConsoleW takes Unicode text and WriteConsoleA takes ANSI-encoded text. We're using the one that takes Unicode text in our program.

Now, ANSI-encoded text works fine if you only write English text, but as soon as you write text in other languages, you might need to use special characters that are not possible to represent in ANSI but possible in Unicode. If you mix them up, your program will not work as you expect.

Next is our new syscall function:

ch03/b-normal-syscall

fn syscall(message: String) -> io::Result<()> {
    let msg: Vec<u16> = message.encode_utf16().collect();
    let msg_ptr = msg.as_ptr();
    let len = msg.len() as u32;
    let mut output: u32 = 0;
    let handle = unsafe { GetStdHandle(-11) };
    if handle == -1 {
        return Err(io::Error::last_os_error())
    }
    let res = unsafe {
        WriteConsoleW(
            handle,
            msg_ptr,
            len,
            &mut output,
            std::ptr::null()
        )
    };
    if res == 0 {
        return Err(io::Error::last_os_error());
    }
    Ok(())
}

The first thing we do is convert the text to UTF-16-encoded text, which Windows uses. Fortunately, Rust has a built-in function to convert our UTF-8-encoded text to UTF-16 code points. encode_utf16 returns an iterator over u16 code points that we can collect into a Vec.

The next two lines should be familiar by now. We get the pointer to where the text is stored and the length of the text, this time counted in UTF-16 code units rather than bytes.

The next thing we do is call GetStdHandle and pass in the value -11. The values we need to pass in for the different standard devices are described together with the GetStdHandle documentation at https://learn.microsoft.com/en-us/windows/console/getstdhandle. This is convenient, as we don't have to dig through C header files to find all the constant values we need.

The return code to expect is also documented thoroughly for all functions, so we handle potential errors here in the same way as we did for the Linux/macOS syscalls.

Finally, we have the call to the WriteConsoleW function. There is nothing too fancy about this, and you'll notice similarities with the write syscall we used for Linux. One difference is that the number of characters written is not returned from the function but written to a location we pass in as a pointer to our output variable.

Note
Now that you've seen how we create cross-platform syscalls, you will probably also understand why we're not including the code to make every example in this book cross-platform. It's simply the case that the book would be extremely long if we did, and it's not apparent that all that extra information would actually benefit our understanding of the key concepts.
The highest level of abstraction

This is simple, but I wanted to add this just for completeness. The Rust standard library wraps the calls to the underlying OS APIs for us, so we don't have to care about which syscalls to invoke:

fn main() {
    println!("Hello world from the standard library");
}

Congratulations! You've now written the same syscall using three levels of abstraction. You now know what FFI looks like, you've seen some inline assembly (which we'll cover in greater detail later), and you've made a proper syscall to print something to the console. You've also seen one of the problems our standard library solves for us by wrapping these calls for different platforms, so we don't have to know these syscalls just to print something to the console.

Summary

In this chapter, we went through what OS-backed event queues are and gave a high-level overview of how they work. We also went through the defining characteristics of epoll, kqueue, and IOCP and focused on how they differ from each other.

In the last half of this chapter, we introduced some examples of syscalls. We discussed raw syscalls and "normal" syscalls so that you know what they are and have seen examples of both. We also took the opportunity to talk about abstraction levels and the advantages of relying on good abstractions when they're available to us. As a part of making system calls, you also got an introduction to Rust's FFI.

Finally, we created a cross-platform abstraction. You also saw some of the challenges that come with creating a unifying API that works across several operating systems.

The next chapter will walk you through an example using epoll to create a simple event queue, so you get to see exactly how this works in practice. In the repository, you'll also find the same example for both Windows and macOS, so you have that available if you ever want to implement an event queue for either of those platforms.
Part 2: Event Queues and Green Threads In this part, we'll present two examples. The first example demonstrates the creation of an event queue using epoll. We will design the API to closely resemble the one used by mio, allowing us to grasp the fundamentals of both mio and epoll. The second example illustrates the use of fibers/green threads, similar to the approach employed by Go. This method is one of the popular alternatives to Rust's asynchronous programming using futures and async/await. Rust also utilized green threads before reaching version 1. 0, making it a part of Rust's asynchronous history. Throughout the exploration, we will delve into fundamental programming concepts such as ISAs, ABIs, calling conventions, stacks, and touch on assembly programming. This section comprises the following chapters: Chapter 4, Create Your Own Event Queue Chapter 5, Creating Our Own Fibers
4 Create Y our Own Event Queue In this chapter, we'll create a simple version of an event queue using epoll. We'll take inspiration from mio (https://github. com/tokio-rs/mio ), a low-level I/O library written in Rust that underpins much of the Rust async ecosystem. Taking inspiration from mio has the added benefit of making it easier to dive into their code base if you wish to explore how a real production-ready library works. By the end of this chapter, you should be able to understand the following: The difference between blocking and non-blocking I/O How to use epoll to make your own event queue The source code of cross-platform event queue libraries such as mio Why we need an abstraction layer on top of epoll, kqueue, and IOCP if we want a program or library to work across different platforms We've divided the chapter into the following sections: Design and introduction to epoll The ffi module The Poll module The main program Technical requirements This chapter focuses on epoll, which is specific to Linux. Unfortunately, epoll is not part of the Portable Operating System Interface (POSIX ) standard, so this example will require you to run Linux and won't work with mac OS, BSD, or Windows operating systems. If you're on a machine running Linux, you're already set and can run the examples without any further steps.
Create Your Own Event Queue 66 If you're on Windows, my recommendation is to set up WSL (https://learn. microsoft. com/en-us/windows/wsl/install ), if you haven't already, and install Rust on the Linux operating system running on WSL. If you're using Mac, you can create a virtual machine (VM) running Linux, for example, by using the QEMU-based UTM application ( https://mac. getutm. app/ ) or any other solution for managing VMs on a Mac. A last option is to rent a Linux server (there are even some providers with a free layer), install Rust, and either use an editor such as Vim or Emacs in the console or develop on the remote machine using VS Code through SSH ( https://code. visualstudio. com/docs/remote/ssh ). I personally have good experience with Linode's offering ( https://www. linode. com/ ), but there are many, many other options out there. It's theoretically possible to run the examples on the Rust playground, but since we need a delay server, we would have to use a remote delay server service that accepts plain HTTP requests (not HTTPS) and modify the code so that the modules are all in one file instead. It's possible in a clinch but not really recommended. The delay server This example relies on calls made to a server that delays the response for a configurable duration. In the repository, there is a project named delayserver in the root folder. Y ou can set up the server by simply entering the folder in a separate console window and writing cargo run. Just leave the server running in a separate, open terminal window as we'll use it in our example. The delayserver program is cross-platform, so it works without any modification on all platforms that Rust supports. If you're running WSL on Windows, I recommend running the delayserver program in WSL as well. Depending on your setup, you might get away with running the server in a Windows console and still be able to reach it when running the example in WSL. Just be aware that it might not work out of the box. The server will listen to port 8080 by default and the examples there assume this is the port used. Y ou can change the listening port in the delayserver code before you start the server, but just remember to make the same corrections in the example code. The actual code for delayserver is less than 30 lines, so going through the code should only take a few minutes if you want to see what the server does. Design and introduction to epoll Okay, so this chapter will be centered around one main example you can find in the repository under ch04/a-epoll. We'll start by taking a look at how we design our example.
Design and introduction to epoll 67 As I mentioned at the start of this chapter, we'll take our inspiration from mio. This has one big upside and one downside. The upside is that we get a gentle introduction to how mio is designed, making it much easier to dive into that code base if you want to learn more than what we cover in this example. The downside is that we introduce an overly thick abstraction layer over epoll, including some design decisions that are very specific to mio. I think the upsides outweigh the downsides for the simple reason that if you ever want to implement a production-quality event loop, you'll probably want to look into the implementations that are already out there, and the same goes for if you want to dig deeper into the building blocks of asynchronous programming in Rust. In Rust, mio is one of the important libraries underpinning much of the async ecosystem, so gaining a little familiarity with it is an added bonus. It's important to note that mio is a cross-platform library that creates an abstraction over epoll, kqueue, and IOCP (through Wepoll, as we described in Chapter 3 ). Not only that, mio supports i OS and Android, and in the future, it will likely support other platforms as well. So, leaving the door open to unify an API over so many different systems is bound to also come with some compromises if you compare it to what you can achieve if you only plan to support one platform. mio mio describes itself as a “ fast, low-level I/O library for Rust focusing on non-blocking APIs and event notification for building performance I/O apps with as little overhead as possible over the OS abstractions. ” mio drives the event queue in Tokio, which is one of the most popular and widely used asynchronous runtimes in Rust. This means that mio is driving I/O for popular frameworks such as Actix Web ( https://actix. rs/ ), Warp (https://github. com/seanmonstar/ warp ), and Rocket ( https://rocket. rs/ ). The version of mio we'll use as design inspiration in this example is version 0. 8. 8. The API has changed in the past and may change in the future, but the parts of the API we cover here have been stable since 2019, so it's a good bet that there will not be significant changes to it in the near future. As is the case with all cross-platform abstractions, it's often necessary to go the route of choosing the least common denominator. Some choices will limit flexibility and efficiency on one or more platforms in the pursuit of having a unified API that works with all of them. We'll discuss some of those choices in this chapter. Before we go further, let's create a blank project and give it a name. We'll refer to it as a-epoll going forward, but you will of course need to replace that with the name you choose. Enter the folder and type the cargo init command.
In this example, we'll divide the project into a few modules, and we'll split the code up into the following files:

src
|-- ffi.rs
|-- main.rs
|-- poll.rs

Their descriptions are as follows:

- ffi.rs: This module will contain the code related to the syscalls we need to communicate with the host operating system
- main.rs: This is the example program itself
- poll.rs: This module contains the main abstraction, which is a thin layer over epoll

Next, create the files mentioned in the preceding list in the src folder. In main.rs, we need to declare the modules as well:

a-epoll/src/main.rs

mod ffi;
mod poll;

Now that we have our project set up, we can start by going through how we'll design the API we'll use. The main abstraction is in poll.rs, so go ahead and open that file. Let's start by stubbing out the structures and functions we need. It's easier to discuss them when we have them in front of us:

a-epoll/src/poll.rs

use std::{
    io::{self, Result},
    net::TcpStream,
    os::fd::AsRawFd,
};

use crate::ffi;

type Events = Vec<ffi::Event>;

pub struct Poll {
    registry: Registry,
}

impl Poll {
    pub fn new() -> Result<Self> {
        todo!()
    }

    pub fn registry(&self) -> &Registry {
        &self.registry
    }

    pub fn poll(&mut self, events: &mut Events, timeout: Option<i32>) -> Result<()> {
        todo!()
    }
}

pub struct Registry {
    raw_fd: i32,
}

impl Registry {
    pub fn register(&self, source: &TcpStream, token: usize, interests: i32) -> Result<()> {
        todo!()
    }
}

impl Drop for Registry {
    fn drop(&mut self) {
        todo!()
    }
}

We've replaced all the implementations with todo!() for now. This macro will let us compile the program even though we've yet to implement the function bodies. If our execution ever reaches a todo!(), it will panic.

The first thing you'll notice is that we pull the ffi module into scope in addition to some types from the standard library. We'll also use the std::io::Result type as our own Result type. It's convenient since most errors will stem from one of our calls into the operating system, and an operating system error can be mapped to an io::Error type.

There are two main abstractions over epoll. One is a structure called Poll and the other is called Registry. The names and functionality of these types are the same as they are in mio. Naming abstractions such as these is surprisingly difficult, and both constructs could very well have had a different name, but let's lean on the fact that someone else has spent time on this before us and go with the same names in our example.
Poll is a struct that represents the event queue itself. It has a few methods:

- new: Creates a new event queue
- registry: Returns a reference to the registry that we can use to register interest to be notified about new events
- poll: Blocks the thread it's called on until an event is ready or it times out, whichever occurs first

Registry is the other half of the equation. While Poll represents the event queue, Registry is a handle that allows us to register interest in new events.

Registry will only have one method: register. Again, we mimic the API mio uses (https://docs.rs/mio/0.8.8/mio/struct.Registry.html), and instead of having a predefined set of methods for registering different kinds of interest, we accept an interests argument, which will indicate what kind of events we want our event queue to keep track of.

One more thing to note is that we won't use a generic type for all sources. We'll only implement this for TcpStream, even though there are many things we could potentially track with an event queue. This is especially true when we want to make this cross-platform since, depending on the platforms you want to support, there are many types of event sources we might want to track. mio solves this by having Registry::register accept an object implementing the Source trait that mio defines. As long as you implement this trait for the source, you can use the event queue to track events on it.

In the following pseudo-code, you'll get an idea of how we plan to use this API:

let queue = Poll::new().unwrap();
let id = 1;

// register interest in events on a TcpStream
queue.registry().register(&stream, id, ...).unwrap();

let mut events = Vec::with_capacity(1);

// This will block the current thread
queue.poll(&mut events, None).unwrap();

// ... data is ready on one of the tracked streams

You might wonder why we need the Registry struct at all. To answer that question, we need to remember that mio abstracts over epoll, kqueue, and IOCP. It does this by making Registry wrap around a Selector object. The Selector object is conditionally compiled so that every platform has its own Selector implementation corresponding to the relevant syscalls to make IOCP, kqueue, and epoll do the same thing.
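As a rough illustration of what that conditional compilation can look like (mio's real Selector is considerably more involved, and the field names here are made up for the sketch), consider the following:

// Only one of these modules is compiled, depending on the target platform.
#[cfg(target_os = "linux")]
mod selector {
    pub struct Selector {
        pub epoll_fd: i32, // would come from epoll_create
    }
}

#[cfg(any(target_os = "macos", target_os = "freebsd"))]
mod selector {
    pub struct Selector {
        pub kqueue_fd: i32, // would come from kqueue()
    }
}

#[cfg(target_os = "windows")]
mod selector {
    pub struct Selector {
        pub completion_port: isize, // would come from CreateIoCompletionPort
    }
}

// The rest of the code base only ever sees a single Selector type.
pub use selector::Selector;

pub struct Registry {
    selector: Selector,
}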
Design and introduction to epoll 71 Registry implements one important method we won't implement in our example, called try_clone. The reason we won't implement this is that we don't need it to understand how an event loop like this works and we want to keep the example simple and easy to understand. However, this method is important for understanding why the responsibility of registering events and the queue itself is divided. Important note By moving the concern of registering interests to a separate struct like this, users can call Registry::try_clone to get an owned Registry instance. This instance can be passed to, or shared through Arc<Registry> with, other threads, allowing multiple threads to register interest to the same Poll instance even when Poll is blocking another thread while waiting for new events to happen in Poll::poll. Poll::poll requires exclusive access since it takes a &mut self, so when we're waiting for events in Poll::poll, there is no way to register interest from a different thread at the same time if we rely on using Poll to register interest, since that will be prevented by Rust's type system. It also makes it effectively impossible to have multiple threads waiting for events by calling Poll::poll on the same instance in any meaningful way since it would require synchronization that essentially would make each call sequential anyway. The design lets users interact with the queue from potentially many threads by registering interest, while one thread makes the blocking call and handles the notifications from the operating system. Note The fact that mio doesn't enable you to have multiple threads that are blocked on the same call to Poll::poll isn't a limitation due to epoll, kqueue, or IOCP. They all allow for the scenario that many threads will call Poll::poll on the same instance and get notifications on events in the queue. epoll even allows specific flags to dictate whether the operating system should wake up only one or all threads that wait for notification (specifically the EPOLLEXCLUSIVE flag). The problem is partly about how the different platforms decide which threads to wake when there are many of them waiting for events on the same queue, and partly about the fact that there doesn't seem to be a huge interest in that functionality. For example, epoll will, by default, wake all threads that block on Poll, while Windows, by default, will only wake up one thread. Y ou can modify this behavior to some extent, and there have been ideas on implementing a try_clone method on Poll as well in the future. For now, the design is like we outlined, and we will stick to that in our example as well. This brings us to another topic we should cover before we start implementing our example.
Is all I/O blocking?

Finally, a question that's easy to answer. The answer is a big, resounding... maybe.

The thing is that not all I/O operations will block in the sense that the operating system parks the calling thread and it would be more efficient to switch to another task. The reason for this is that the operating system is smart and will cache a lot of information in memory. If the information is in the cache, a syscall requesting that information will simply return immediately with the data, so forcing a context switch or any rescheduling of the current task might be less efficient than just handling the data synchronously. The problem is that there is no way to know for sure whether an I/O operation will block, and it depends on what you're doing. Let me give you two examples.

DNS lookup

When creating a TCP connection, one of the first things that happens is that you need to convert a typical address such as www.google.com to an IP address such as 216.58.207.228. The operating system maintains a mapping of local addresses and addresses it has previously looked up in a cache and will be able to resolve them almost immediately. However, the first time you look up an unknown address, it might have to make a call to a DNS server, which takes a lot of time, and the OS will park the calling thread while waiting for the response if it's not handled in a non-blocking manner.

File I/O

Files on the local filesystem are another area where the operating system performs quite a bit of caching. Smaller files that are frequently read are often cached in memory, so requesting such a file might not block at all. If you have a web server that serves static files, there is most likely a rather limited set of small files you'll be serving, and the chances are that these are cached in memory. However, there is no way to know for sure: if the operating system is running low on memory, it might have to map memory pages to the hard drive, which makes what would normally be a very fast memory lookup excruciatingly slow. The same is true if there is a huge number of small files that are accessed randomly, or if you serve very large files, since the operating system will only cache a limited amount of information. You'll also encounter this kind of unpredictability if you have many unrelated processes running on the same operating system, as they might push the information that's important to you out of the cache.

A popular way of handling these cases is to forget about non-blocking I/O and actually make a blocking call instead. You don't want to make these calls on the same thread that runs a Poll instance (since every small delay will block all tasks), but you would probably relegate that task to a thread pool. In the thread pool, you have a limited number of threads that are tasked with making regular blocking calls for things such as DNS lookups or file I/O. An example of a runtime that does exactly this is libuv (http://docs.libuv.org/en/v1.x/threadpool.html#threadpool). libuv is the asynchronous I/O library that Node.js is built upon. While its scope is larger than mio's (mio only cares about non-blocking I/O), libuv is to Node.js what mio is to Tokio in Rust.
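Here is a minimal sketch of that idea using only the standard library: the potentially blocking DNS lookup happens on a separate thread, and the result comes back over a channel. A real runtime would reuse threads from a pool rather than spawning one per call:

use std::net::ToSocketAddrs;
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();

    // Run the potentially blocking lookup on its own thread so the thread
    // driving the event queue is never stalled by it.
    thread::spawn(move || {
        // to_socket_addrs() may block while the OS talks to a DNS server.
        let result = "www.google.com:80"
            .to_socket_addrs()
            .map(|addrs| addrs.collect::<Vec<_>>());
        let _ = tx.send(result);
    });

    // ... the main thread could keep driving an event queue here ...

    match rx.recv().unwrap() {
        Ok(addrs) => println!("resolved: {addrs:?}"),
        Err(e) => println!("lookup failed: {e}"),
    }
}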
Note
The reason for doing file I/O in a thread pool is that there have historically been poor cross-platform APIs for non-blocking file I/O. While it's true that many runtimes choose to relegate this task to a thread pool making blocking calls to the OS, it might not be true in the future as the OS APIs evolve over time.

Creating a thread pool to handle these cases is outside the scope of this example (even mio considers this outside its scope, just to be clear). We'll focus on showing how epoll works and mention these topics in the text, even though we won't actually implement a solution for them in this example.

Now that we've covered a lot of basic information about epoll, mio, and the design of our example, it's time to write some code and see for ourselves how this all works in practice.

The ffi module

Let's start with the modules that don't depend on any others and work our way from there. The ffi module contains mappings to the syscalls and data structures we need to communicate with the operating system. We'll also explain how epoll works in detail once we have presented the syscalls. It's only a few lines of code, so I'll place the first part here so it's easier to keep track of where we are in the file since there's quite a bit to explain. Open the ffi.rs file and write the following lines of code:

ch04/a-epoll/src/ffi.rs

pub const EPOLL_CTL_ADD: i32 = 1;
pub const EPOLLIN: i32 = 0x1;
pub const EPOLLET: i32 = 1 << 31;

#[link(name = "c")]
extern "C" {
    pub fn epoll_create(size: i32) -> i32;
    pub fn close(fd: i32) -> i32;
    pub fn epoll_ctl(epfd: i32, op: i32, fd: i32, event: *mut Event) -> i32;
    pub fn epoll_wait(epfd: i32, events: *mut Event, maxevents: i32, timeout: i32) -> i32;
}

The first thing you'll notice is that we declare a few constants called EPOLL_CTL_ADD, EPOLLIN, and EPOLLET.
Create Your Own Event Queue 74 I'll get back to explaining what these constants are in a moment. Let's first take a look at the syscalls we need to make. Fortunately, we've already covered syscalls in detail, so you already know the basics of ffi and why we link to C in the preceding code: epoll_create is the syscall we make to create an epoll queue. Y ou can find the documentation for it at https://man7. org/linux/man-pages/man2/epoll_create. 2. html. This method accepts one argument called size, but size is there only for historical reasons. The argument will be ignored but must have a value larger than 0. close is the syscall we need to close the file descriptor we get when we create our epoll instance, so we release our resources properly. Y ou can read the documentation for the syscall at https://man7. org/linux/man-pages/man2/close. 2. html. epoll_ctl is the control interface we use to perform operations on our epoll instance. This is the call we use to register interest in events on a source. It supports three main operations: add, modify, or delete. The first argument, epfd, is the epoll file descriptor we want to perform operations on. The second argument, op, is the argument where we specify whether we want to perform an add, modify, or delete operation In our case, we're only interested in adding interest for events, so we'll only pass in EPOLL_ CTL_ADD, which is the value to indicate that we want to perform an add operation. epoll_ event is a little more complicated, so we'll discuss it in more detail. It does two important things for us: first, the events field indicates what kind of events we want to be notified of and it can also modify the behavior of how and when we get notified. Second, the data field passes on a piece of data to the kernel that it will return to us when an event occurs. The latter is important since we need this data to identify exactly what event occurred since that's the only information we'll receive in return that can identify what source we got the notification for. Y ou can find the documentation for this syscall here: https://man7. org/linux/ man-pages/man2/epoll_ctl. 2. html. epoll_wait is the call that will block the current thread and wait until one of two things happens: we receive a notification that an event has occurred or it times out. epfd is the epoll file descriptor identifying the queue we made with epoll_create. events is an array of the same Event structure we used in epoll_ctl. The difference is that the events field now gives us information about what event did occur, and importantly the data field contains the same data that we passed in when we registered interest For example, the data field lets us identify which file descriptor has data that's ready to be read. The maxevents arguments tell the kernel how many events we have reserved space for in our array. Lastly, the timeout argument tells the kernel how long we will wait for events before it will wake us up again so we don't potentially block forever. Y ou can read the documentation for epoll_wait at https://man7. org/linux/man-pages/man2/ epoll_wait. 2. html.
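If you want to see these four calls wired together before we build the full abstraction, here is a compact, self-contained sketch that uses the libc crate (which a note later in this chapter points to as the usual alternative to writing bindings by hand). The address and HTTP request are placeholders, error handling is minimal, and it assumes libc has been added as a dependency:

use std::io::Write;
use std::net::TcpStream;
use std::os::fd::AsRawFd;

fn main() -> std::io::Result<()> {
    let mut stream = TcpStream::connect("localhost:8080")?;
    stream.set_nonblocking(true)?;
    stream.write_all(b"GET / HTTP/1.1\r\nHost: localhost\r\n\r\n")?;

    unsafe {
        // Create the event queue (the size argument is ignored but must be > 0).
        let epfd = libc::epoll_create(1);

        // Register interest in read events on the socket, tagging it with 42.
        let mut event = libc::epoll_event {
            events: libc::EPOLLIN as u32,
            u64: 42,
        };
        libc::epoll_ctl(epfd, libc::EPOLL_CTL_ADD, stream.as_raw_fd(), &mut event);

        // Block until at least one registered event is ready (no timeout).
        let mut events: Vec<libc::epoll_event> = Vec::with_capacity(10);
        let n = libc::epoll_wait(epfd, events.as_mut_ptr(), 10, -1);
        if n < 0 {
            return Err(std::io::Error::last_os_error());
        }
        events.set_len(n as usize);

        for e in &events {
            let token = e.u64; // copy out of the packed struct before printing
            println!("event ready, token: {token}");
        }
        libc::close(epfd);
    }
    Ok(())
}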
The last part of the code in this file is the Event struct:

ch04/a-epoll/src/ffi.rs

#[derive(Debug)]
#[repr(C, packed)]
pub struct Event {
    pub(crate) events: u32,
    // Token to identify event
    pub(crate) epoll_data: usize,
}

impl Event {
    pub fn token(&self) -> usize {
        self.epoll_data
    }
}

This structure is used to communicate to the operating system in epoll_ctl, and the operating system uses the same structure to communicate with us in epoll_wait.

The events field is defined as a u32, but it's more than just a number. This field is what we call a bitmask. I'll take the time to explain bitmasks in a later section since they're common in most syscalls and not something everyone has encountered before. In simple terms, a bitmask is a way to use the bit representation as a set of yes/no flags to indicate whether an option has been chosen or not. The different options are described in the link I provided for the epoll_ctl syscall. I won't explain all of them in detail here, but just cover the ones we'll use:

- EPOLLIN represents a bitflag indicating we're interested in read operations on the file handle
- EPOLLET represents a bitflag indicating that we're interested in getting events notified with epoll set to an edge-triggered mode

We'll get back to explaining bitflags, bitmasks, and what edge-triggered mode really means in a moment, but let's just finish with the code first.

The last field on the Event struct is epoll_data. This field is defined as a union in the documentation. A union is much like an enum, but in contrast to Rust's enums, it doesn't carry any information on what type it holds, so it's up to us to make sure we know what type of data it stores. We use this field to simply hold a usize so we can pass in an integer identifying each event when we register interest using epoll_ctl. It would be perfectly fine to pass in a pointer instead, just as long as we make sure that the pointer is still valid when it's returned to us in epoll_wait. We can think of this field as a token, which is exactly what mio does, and to keep the API as similar as possible, we copy mio and provide a token method on the struct to get this value.
What does #[repr(packed)] do?

The #[repr(packed)] annotation is new to us. Usually, a struct will have padding either between fields or at the end of the struct. This happens even when we've specified #[repr(C)]. The reason has to do with efficient access to the data stored in the struct, by not having to make multiple fetches to get the data stored in a struct field. In the case of the Event struct, the usual layout would add 4 bytes of padding at the end of the events field.

The operating system expects a packed struct for Event, so if we give it a padded one, the kernel will write parts of epoll_data into the padding between the fields. When you later read the epoll_data field, you'll only get the part of the kernel's data that happened to overlap with your field, and you end up with the wrong value.

The fact that the operating system expects a packed Event struct isn't obvious by reading the man pages for Linux, so you have to read the appropriate C header files to know for sure. You could of course simply rely on the libc crate (https://github.com/rust-lang/libc), which we would do too if we weren't here to learn things like this for ourselves.
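If you want to see the padding for yourself, here is a small experiment (you can run it in the Rust Playground) that prints the size of a padded and a packed version of the same layout; on a 64-bit target it prints 16 and 12:

use std::mem::size_of;

#[repr(C)]
struct Padded {
    events: u32,
    epoll_data: usize,
}

#[repr(C, packed)]
struct Packed {
    events: u32,
    epoll_data: usize,
}

fn main() {
    // The padded version gets 4 bytes of padding after `events` so that
    // `epoll_data` ends up 8-byte aligned; the packed version does not.
    println!("padded: {}, packed: {}", size_of::<Padded>(), size_of::<Packed>());
}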
So, now that we've finished walking through the code, there are a few topics that we promised to get back to.

Bitflags and bitmasks

You'll encounter this all the time when making syscalls (in fact, the concept of bitmasks is pretty common in low-level programming). A bitmask is a way to treat each bit as a switch, or a flag, to indicate that an option is either enabled or disabled.

An integer, such as i32, can be expressed as 32 bits. EPOLLIN has the hex value of 0x1 (which is simply 1 in decimal). Represented in bits, this would look like 00000000000000000000000000000001.

EPOLLET, on the other hand, has a value of 1 << 31. This simply means the bit representation of the decimal number 1, shifted 31 bits to the left. The decimal number 1 is incidentally the same as EPOLLIN, so by taking that representation and shifting the bits 31 times to the left, we get a number with the bit representation of 10000000000000000000000000000000.

The way we use bitflags is that we use the OR operator, |, and by OR'ing the values together, we get a bitmask with each flag we OR'ed set to 1. In our example, the bitmask would look like 10000000000000000000000000000001. The receiver of the bitmask (in this case, the operating system) can then do the opposite operation, check which flags are set, and act accordingly.

We can create a very simple example in code to show how this works in practice (you can simply run this in the Rust Playground or create a new empty project for throwaway experiments such as this):

fn main() {
    let bitflag_a: i32 = 1 << 31;
    let bitflag_b: i32 = 0x1;
    let bitmask: i32 = bitflag_a | bitflag_b;
    println!("{bitflag_a:032b}");
    println!("{bitflag_b:032b}");
    println!("{bitmask:032b}");
    check(bitmask);
}

fn check(bitmask: i32) {
    const EPOLLIN: i32 = 0x1;
    const EPOLLET: i32 = 1 << 31;
    const EPOLLONESHOT: i32 = 0x40000000;

    let read = bitmask & EPOLLIN != 0;
    let et = bitmask & EPOLLET != 0;
    let oneshot = bitmask & EPOLLONESHOT != 0;

    println!("read_event? {read}, edge_triggered: {et}, oneshot?: {oneshot}")
}

This code will output the following:

10000000000000000000000000000000
00000000000000000000000000000001
10000000000000000000000000000001
read_event? true, edge_triggered: true, oneshot?: false

The next topic we will introduce in this chapter is the concept of edge-triggered events, which probably needs some explanation.

Level-triggered versus edge-triggered events

In a perfect world, we wouldn't need to discuss this, but when working with epoll, it's almost impossible to avoid having to know about the difference. It's not obvious from reading the documentation, especially if you haven't had previous experience with these terms before. The interesting part of this is that it allows us to draw a parallel between how events are handled in epoll and how events are handled at the hardware level.

epoll can notify events in a level-triggered or edge-triggered mode. If your main experience is programming in high-level languages, this must sound very obscure (it did to me when I first learned about it), but bear with me. In the events bitmask on the Event struct, we set the EPOLLET flag to get notified in edge-triggered mode (the default, if you specify nothing, is level-triggered). This way of modeling event notification and event handling has a lot of similarities to how computers handle interrupts.

Level-triggered means that the answer to the question "Has an event happened?" is true as long as the electrical signal on an interrupt line is reported as high. If we translate this to our example, a read event has occurred as long as there is data in the buffer associated with the file handle. When handling interrupts, you would clear the interrupt by servicing whatever hardware caused it, or you could mask the interrupt, which simply disables interrupts on that line until it's explicitly unmasked later on. In our example, we clear the interrupt by draining all the data in the buffer by reading it. When the buffer is drained, the answer to our question changes to false.

When using epoll in its default mode, which is level-triggered, we can encounter a case where we get multiple notifications for the same event since we haven't had time to drain the buffer yet (remember, as long as there is data in the buffer, epoll will notify you over and over again). This is especially apparent when we have one thread that reports events and then delegates the task of handling the event (reading from the stream) to other worker threads, since epoll will happily report that an event is ready even though we're in the process of handling it. To remedy this, epoll has a flag named EPOLLONESHOT.
The ffi module 79 EPOLLONESHOT tells epoll that once we receive an event on this file descriptor, it should disable the file descriptor in the interest list. It won't remove it, but we won't get any more notifications on that file descriptor unless we explicitly reactivate it by calling epoll_ctl with the EPOLL_CTL_MOD argument and a new bitmask. If we didn't add this flag, the following could happen: if thread 1 is the thread where we call epoll_ wait, then once it receives a notification about a read event, it starts a task in thread 2 to read from that file descriptor, and then calls epoll_wait again to get notifications on new events. In this case, the call to epoll_wait would return again and tell us that data is ready on the same file descriptor since we haven't had the time to drain the buffer on that file descriptor yet. We know that the task is taken care of by thread 2, but we still get a notification. Without additional synchronization and logic, we could end up giving the task of reading from the same file descriptor to thread 3, which could cause problems that are quite hard to debug. Using EPOLLONESHOT solves this problem since thread 2 will have to reactivate the file descriptor in the event queue once it's done handling its task, thereby telling our epoll queue that it's finished with it and that we are interested in getting notifications on that file descriptor again. To go back to our original analogy of hardware interrupts, EPOLLONESHOT could be thought of as masking an interrupt. Y ou haven't actually cleared the source of the event notification yet, but you don't want further notifications until you've done that and explicitly unmask it. In epoll, the EPOLLONESHOT flag will disable notifications on the file descriptor until you explicitly enable it by calling epoll_ctl with the op argument set to EPOLL_CTL_MOD. Edge-triggered means that the answer to the question “Has an event happened” is true only if the electrical signal has changed from low to high. If we translate this to our example: a read event has occurred when the buffer has changed from having no data to having data. As long as there is data in the buffer, no new events will be reported. Y ou still handle the event by draining all the data from the socket, but you won't get a new notification until the buffer is fully drained and then filled with new data. Edge-triggered mode also comes with some pitfalls. The biggest one is that if you don't drain the buffer properly, you will never receive a notification on that file handle again.
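Here is a minimal sketch of what "draining properly" means in practice, assuming the stream has already been put into non-blocking mode: keep reading until the OS reports WouldBlock, and only then go back to waiting on the queue:

use std::io::{ErrorKind, Read};
use std::net::TcpStream;

// Read everything the OS has buffered for this non-blocking socket. With
// edge-triggered epoll, stopping before we see WouldBlock means we might
// never be notified about the data that's left in the buffer.
fn drain(stream: &mut TcpStream, data: &mut Vec<u8>) -> std::io::Result<()> {
    let mut buf = [0u8; 4096];
    loop {
        match stream.read(&mut buf) {
            Ok(0) => break, // the peer closed the connection
            Ok(n) => data.extend_from_slice(&buf[..n]),
            Err(e) if e.kind() == ErrorKind::WouldBlock => break, // fully drained
            Err(e) if e.kind() == ErrorKind::Interrupted => continue,
            Err(e) => return Err(e),
        }
    }
    Ok(())
}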