Unit 1.1 - Introduction
Exercise 1.1.1: Setup Your Installation
In this file you'll find instructions on how to install the tools we'll use during the course.
All of these tools are available for Linux, macOS and Windows users. We'll need the tools to write and compile our Rust code, and allow for remote mentoring. Important: these instructions are to be followed at home, before the start of the first tutorial. If you have any problems with installation, contact the lecturers! We won't be addressing installation problems during the first tutorial.
Rust and Cargo
First we'll need rustc, the standard Rust compiler.
rustc is generally not invoked directly, but through cargo, the Rust package manager.
rustup takes care of installing rustc and cargo.
This part is easy: go to https://rustup.rs and follow the instructions. Please make sure you're installing the latest default toolchain. Once done, run
rustc -V && cargo -V
The output should be something like this:
rustc 1.67.1 (d5a82bbd2 2023-02-07)
cargo 1.67.1 (8ecd4f20a 2023-01-10)
Using Rustup, you can install Rust toolchains and components.
Rustfmt and Clippy
To avoid discussions about formatting, Rust provides its own formatting tool, Rustfmt. We'll also be using Clippy, a collection of lints that analyzes your code and catches common mistakes for you. You'll notice that Rust's Clippy can be a very helpful companion. Both Rustfmt and Clippy are installed by Rustup by default.
To run Rustfmt on your project, execute:
cargo fmt
To run clippy:
cargo clippy
Visual Studio Code
During the course, we will use Visual Studio Code (vscode) to write code in. Of course, you're free to use your favorite editor, but if you encounter problems, you can't rely on support from us. Also, we'll use vscode to allow for remote collaboration and mentoring during tutorial sessions.
You can find the installation instructions here: https://code.visualstudio.com/.
We will install some plugins as well. The first one is Rust-Analyzer. Installation instructions can be found here: https://marketplace.visualstudio.com/items?itemName=rust-lang.rust-analyzer. Rust-Analyzer provides a lot of help during development and is indispensable when getting started with Rust.
Another plugin we'll install is Live Share. We will use the plugin to share screens and provide help during remote tutorial sessions. The extension pack also contains the Live Share Audio plugin, which allows for audio communication during share sessions. Installation instructions can be found here: https://marketplace.visualstudio.com/items?itemName=MS-vsliveshare.vsliveshare
The last plugin we'll use is CodeLLDB. This plugin enables debugging Rust code from within vscode. You can find instructions here: https://marketplace.visualstudio.com/items?itemName=vadimcn.vscode-lldb.
Git
We will use Git as our version control tool. If you haven't installed Git already, you can find instructions here: https://git-scm.com/book/en/v2/Getting-Started-Installing-Git. If you're new to Git, you'll also appreciate GitHub's intro to Git at https://docs.github.com/en/get-started/using-git/about-git and the Git intro with vscode, which you can find here: https://www.youtube.com/watch?v=i_23KUAEtUM.
More info: https://www.youtube.com/playlist?list=PLg7s6cbtAD15G8lNyoaYDuKZSKyJrgwB-
Course code
Now that everything is installed, you can clone the source code repository. The repository can be found here: https://github.com/tweedegolf/teach-rs.
Instructions on cloning the repository can be found here: https://docs.github.com/en/get-started/getting-started-with-git/about-remote-repositories#cloning-with-https-urls
Trying it out
Now that you've got the code on your machine, navigate to it using your favorite terminal and run:
cd exercises/1-course-introduction/1-introduction/1-setup-your-installation
cargo run
This command may take a while to run the first time, as Cargo will first fetch the crate index from the registry.
It will compile and run the intro package, which you can find in exercises/1-course-introduction/1-introduction/1-setup-your-installation.
If everything goes well, you should see some output:
Compiling intro v0.1.0 ([REDACTED]/exercises/1-course-introduction/1-introduction/1-setup-your-installation)
Finished dev [unoptimized + debuginfo] target(s) in 0.11s
Running `target/debug/intro`
🦀 Hello, world! 🦀
You've successfully compiled and run your first Rust project!
If Rust-Analyzer is set up correctly, you can also click the '▶️ Run'-button that is shown in exercises/1-course-introduction/1-introduction/1-setup-your-installation/src/main.rs.
With CodeLLDB installed correctly, you can also start a debug session by clicking 'Debug', right next to the '▶️ Run'-button.
Play a little with setting breakpoints by clicking on a line number, making a red circle appear and stepping over/into/out of functions using the controls.
You can view variable values by hovering over them while execution is paused, or by expanding the 'Local' view under 'Variables' in the left panel during a debug session.
Unit 2.1 - Basic Syntax
Exercise 2.1.1: Basic Syntax
Open exercises/2-foundations-of-rust/1-basic-syntax/1-basic-syntax in your editor. This folder contains a number of exercises with which you can practise basic Rust syntax.
While inside the exercises/2-foundations-of-rust/1-basic-syntax/1-basic-syntax folder, to get started, run:
cargo run --bin 01
This will try to compile exercise 1. Try and get the example to run, and continue on with the next exercise by replacing the number of the exercise in the cargo run command.
Some exercises contain unit tests. To run the test in src/bin/01.rs, run
cargo test --bin 01
Make sure all tests pass!
Unit 2.2 - Ownership and References
Exercise 2.2.1: Move Semantics
This exercise is adapted from the move semantics exercise from Rustlings
While inside the exercises/2-foundations-of-rust/2-ownership-and-references/1-move-semantics folder, to get started, run:
cargo run --bin 01
This will try to compile exercise 1. Try and get the example to run, and continue on with the next exercise by replacing the number of the exercise in the cargo run command.
Some exercises contain unit tests. To run the test in src/bin/01.rs, run
cargo test --bin 01
Make sure all tests pass!
01.rs should compile as is, but you'll have to make sure the others compile as well. For some exercises, instructions are included as doc comments at the top of the file. Make sure to adhere to them.
Exercise 2.2.2: Borrowing
Fix the two examples in the exercises/2-foundations-of-rust/2-ownership-and-references/2-borrowing crate! Don't forget you
can run individual binaries by using cargo run --bin 01 in that directory!
Make sure to follow the instructions that are in the comments!
Unit 2.3 - Advanced Syntax
Exercise 2.3.1: Error propagation
Follow the instructions in the comments of exercises/2-foundations-of-rust/3-advanced-syntax/1-error-propagation/src/main.rs!
Exercise 2.3.2: Error handling
Follow the instructions in the comments of exercises/2-foundations-of-rust/3-advanced-syntax/2-error-handling/src/main.rs!
Exercise 2.3.3: Slices
Follow the instructions in the comments of exercises/2-foundations-of-rust/3-advanced-syntax/3-slices/src/main.rs!
Don't take too much time on the extra assignment, instead come back later once
you've done the rest of the exercises.
Exercise 2.3.4: Ring Buffer
This is a bonus exercise! Follow the instructions in the comments of
exercises/2-foundations-of-rust/3-advanced-syntax/4-ring-buffer/src/main.rs!
Exercise 2.3.5: Boxed Data
Follow the instructions in the comments of exercises/2-foundations-of-rust/3-advanced-syntax/5-boxed-data/src/main.rs!
Unit 2.4 - Traits and Generics
Exercise 2.4.1: Local Storage Vec
In this exercise, we'll create a type called LocalStorageVec: a generic list of items that resides either on the stack or on the heap, depending on its size. If the list is small enough, its items are stored on the stack in an array that backs the LocalStorageVec. LocalStorageVec is generic not only over the type (T) of the items in the list, but also over the size (N) of this stack-located array, using a relatively new feature called "const generics". Once the LocalStorageVec contains more items than fit in the array, a heap-based Vec is allocated as space for the items to reside in.
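If const generics are new to you, here is a small standalone illustration, separate from the exercise code (the Buffer type below is hypothetical): a type can be generic over a constant N of type usize, which can then be used as an array length.

// A type that is generic over the element type `T` and over a
// compile-time constant `N`, used here as the size of the backing array.
struct Buffer<T, const N: usize> {
    items: [T; N],
}

fn main() {
    // `N` is fixed at compile time; `Buffer<u8, 4>` and `Buffer<u8, 8>` are different types.
    let buf = Buffer { items: [0u8; 4] };
    assert_eq!(buf.items.len(), 4);
}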
Within this exercise, the objectives are annotated with a number of stars (⭐), indicating their difficulty. You will likely not be able to finish all of the exercises during the tutorial session.
Questions
- When is such a data structure more efficient than a standard Vec?
- What are the downsides, compared to just using a Vec?
Open the exercises/2-foundations-of-rust/4-traits-and-generics/1-local-storage-vec crate. It contains a src/lib.rs file, meaning this crate is a library. lib.rs contains a number of tests, which can be run by calling cargo test. Don't worry if they don't pass or even compile right now: it's your job to fix that in this exercise. Most of the tests are currently commented out, to enable a step-by-step approach. Before you begin, have a look at the code and the comments in there; they contain various helpful clues.
2.4.1.A Defining the type ⭐
Currently, the LocalStorageVec enum is incomplete. Give it two variants: Stack and Heap. Stack contains two named fields, buf and len. buf will be the array with a capacity to hold N items of type T; len is a field of type usize that will denote the number of items actually stored. The Heap variant has an unnamed field containing a Vec<T>. If you've defined the LocalStorageVec variants correctly, running cargo test should output something like
running 1 test
test test::it_compiles ... ignored, This test is just to validate the definition of `LocalStorageVec`. If it compiles, all is OK
test result: ok. 0 passed; 0 failed; 1 ignored; 0 measured; 0 filtered out; finished in 0.00s
This test does not (and should not) run; it is just there to check your variant definition.
Hint 1
You may be able to reverse-engineer the `LocalStorageVec` definition using the code of the `it_compiles` test case.
Hint 2 (if you're stuck, but try to resist peeking for a while)
The definition below works. Read the code comments and make sure you understand what's going on.
// Define an enum `LocalStorageVec` that is generic over
// type `T` and a constant `N` of type `usize`
pub enum LocalStorageVec<T, const N: usize> {
    // Define a struct-like variant called `Stack` containing two named fields:
    // - `buf` is an array with elements of `T` of size `N`
    // - `len` is a field of type `usize`
    Stack { buf: [T; N], len: usize },
    // Define a tuple-like variant called `Heap`, containing a single field
    // of type `Vec<T>`, which is a heap-based growable, contiguous list of `T`
    Heap(Vec<T>),
}
2.4.1.B impl-ing From<Vec<T>> ⭐
Uncomment the test it_from_vecs, and add an implementation of From<Vec<T>> for LocalStorageVec<T, N>. To do so, copy the following code into your lib.rs file and replace the todo! macro invocation with code that creates a heap-based LocalStorageVec containing the passed Vec<T>.
impl<T, const N: usize> From<Vec<T>> for LocalStorageVec<T, N> {
    fn from(v: Vec<T>) -> Self {
        todo!("Implement me");
    }
}
Question
- How would you pronounce the first line of the code you just copied in English?
Run cargo test to validate your implementation.
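If you're unsure how to read such an impl line, here is the same pattern on a hypothetical newtype Wrapper<T> (not part of the exercise); the comment spells out the English reading.

// "For every type T, implement the trait From<Vec<T>> for the type Wrapper<T>."
struct Wrapper<T>(Vec<T>);

impl<T> From<Vec<T>> for Wrapper<T> {
    fn from(v: Vec<T>) -> Self {
        Wrapper(v)
    }
}

fn main() {
    let w: Wrapper<i32> = vec![1, 2, 3].into(); // `into` uses the `From` impl
    assert_eq!(w.0.len(), 3);
}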
2.4.1.C impl LocalStorageVec ⭐⭐
To make the LocalStorageVec more useful, we'll add more methods to it.
Create an impl-block for LocalStorageVec.
Don't forget to declare and provide the generic parameters.
For now, to make the implementations easier, we will add a bound on T, requiring that it implements Copy and Default.
First off, uncomment the test called it_constructs.
Make it compile and pass by creating an associated function called new on LocalStorageVec that creates a new, empty LocalStorageVec instance without heap allocation.
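If you're unsure where to start, the impl block could take roughly the following shape. This is a sketch, assuming the enum definition from the hint above; filling in the remaining methods is up to you.

impl<T: Default + Copy, const N: usize> LocalStorageVec<T, N> {
    pub fn new() -> Self {
        // Start on the stack: an all-default buffer and a length of 0.
        // The `T: Copy + Default` bound makes the array repeat expression valid.
        Self::Stack {
            buf: [T::default(); N],
            len: 0,
        }
    }
}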
The next methods we'll implement are len, push, pop, insert, remove and clear:
- `len` returns the length of the `LocalStorageVec`.
- `push` appends an item to the end of the `LocalStorageVec` and increments its length. Possibly moves the contents to the heap if they no longer fit on the stack.
- `pop` removes an item from the end of the `LocalStorageVec`, optionally returns it, and decrements its length. If the length is 0, `pop` returns `None`.
- `insert` inserts an item at the given index and increments the length of the `LocalStorageVec`.
- `remove` removes an item at the given index and returns it.
- `clear` resets the length of the `LocalStorageVec` to 0.
Uncomment the corresponding test cases and make them compile and pass. Be sure to have a look at the methods provided for slices ([T]) and Vec<T>. Specifically, [T]::copy_within and Vec::extend_from_slice can be of use.
2.4.1.E Iterator and IntoIterator ⭐⭐
Our LocalStorageVec can be used in the real world now, but we still shouldn't be satisfied. There are various traits in the standard library that we can implement for our LocalStorageVec that would make users of our crate happy.
First off, we will implement the IntoIterator and Iterator traits. Go ahead and uncomment the it_iters test case. Let's define a new type:
pub struct LocalStorageVecIter<T, const N: usize> {
    vec: LocalStorageVec<T, N>,
    counter: usize,
}
This is the type we'll implement the Iterator trait on. You'll need to specify the item this Iterator implementation yields, as well as an implementation for Iterator::next, which yields the next item. You'll be able to make this easier by bounding T to Default when implementing the Iterator trait, as then you can use the std::mem::take function to take an item from the LocalStorageVec and replace it with the default value for T.
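If std::mem::take is new to you, here it is in isolation (a standalone example, unrelated to the exercise code): it moves the value out from behind a mutable reference and leaves T::default() in its place.

fn main() {
    let mut slot = vec![1, 2, 3];
    // Move the vector out; `slot` is left holding `Vec::default()`, i.e. an empty vector.
    let taken = std::mem::take(&mut slot);
    assert_eq!(taken, vec![1, 2, 3]);
    assert!(slot.is_empty());
}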
Take a look at the list of methods under the 'Provided methods' section of the Iterator documentation. There, lots of useful methods that come for free with the implementation of the Iterator trait are defined and implemented in terms of the next method. Knowing in the back of your head what methods there are greatly helps improve your efficiency when programming in Rust. Which of the provided methods can you override in order to make the implementation of LocalStorageVecIter more efficient, given that we can access the fields and methods of LocalStorageVec?
Now, to be able to instantiate a LocalStorageVecIter, implement the IntoIterator trait for LocalStorageVec, in such a way that calling into_iter yields a LocalStorageVecIter.
2.4.1.F Index ⭐⭐
To allow users of the LocalStorageVec to read items or slices from its buffer, we can implement the Index trait. This trait is generic over the type of the item used for indexing. In order to make our LocalStorageVec versatile, we should implement:
- `Index<usize>`, allowing us to get a single item by calling `vec[1]`;
- `Index<RangeTo<usize>>`, allowing us to get the first `n` items (excluding item `n`) by calling `vec[..n]`;
- `Index<RangeFrom<usize>>`, allowing us to get the items from index `n` onward by calling `vec[n..]`;
- `Index<Range<usize>>`, allowing us to get the items between `n` and `m` (excluding item `m`) by calling `vec[n..m]`.
Each of these implementations can be written in terms of the as_ref implementation, as slices ([T]) support indexing by all of the previous types. That is, [T] also implements Index for those types. Uncomment the it_indexes test case and run cargo test in order to validate your implementation.
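As a sketch of what "in terms of the as_ref implementation" means, the usize case could look roughly like this, assuming your AsRef<[T]> implementation from earlier is in place; the range cases follow the same pattern with different Output types.

use std::ops::Index;

impl<T, const N: usize> Index<usize> for LocalStorageVec<T, N>
where
    Self: AsRef<[T]>,
{
    type Output = T;

    fn index(&self, index: usize) -> &Self::Output {
        // Delegate to the slice: `[T]` already knows how to index by `usize`.
        &self.as_ref()[index]
    }
}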
2.4.1.G Removing bounds ⭐⭐
When we implemented the borrowing Iterator, we saw that it's possible to define methods in separate impl blocks with different type bounds. Some of the functionality you wrote used the assumption that T is both Copy and Default. However, this means that each of those methods is only defined for LocalStorageVecs containing items of type T that do in fact implement Copy and Default, which is not ideal. How many methods can you rewrite with one or both of these bounds removed?
2.4.1.H Borrowing Iterator ⭐⭐⭐
We've already got an iterator for LocalStorageVec, though it has the limitation that in order to construct it, the LocalStorageVec needs to be consumed. What if we only want to iterate over the items, and not consume them? We will need another iterator type, one that contains an immutable reference to the LocalStorageVec and that will thus need a lifetime annotation. Add a method called iter to LocalStorageVec that takes a shared &self reference, and instantiates the borrowing iterator. Implement the Iterator trait with the appropriate Item reference type for your borrowing iterator. To validate your code, uncomment and run the it_borrowing_iters test case.
Note that this time, the test won't compile if you require that the items of LocalStorageVec be Copy! That means you'll have to define LocalStorageVec::iter in a new impl block that does not put this bound on T:
impl<T: Default + Copy, const N: usize> LocalStorageVec<T, N> {
    // Methods you've implemented so far
}

impl<T, const N: usize> LocalStorageVec<T, N> {
    pub fn iter(&self) -> /* TODO */
}
Defining methods in separate impl blocks means some methods are not available for certain instances of the generic type. In our case, the new method is only available for LocalStorageVecs containing items of type T that implement both Copy and Default, but iter is available for all LocalStorageVecs.
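For the shape of such a borrowing iterator, a sketch could look like this. The name LocalStorageVecIterRef is just a suggestion; note the lifetime parameter tying the iterator to the vector it borrows.

pub struct LocalStorageVecIterRef<'a, T, const N: usize> {
    // A shared borrow of the vector; the iterator cannot outlive it.
    vec: &'a LocalStorageVec<T, N>,
    counter: usize,
}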
2.4.1.I Generic Index ⭐⭐⭐⭐
You've probably duplicated a lot of code in exercise 2.4.1.F. We can reduce the boilerplate by defining an empty trait:
trait LocalStorageVecIndex {}
First, implement this trait for usize, RangeTo<usize>, RangeFrom<usize>, and Range<usize>.
Next, replace the multiple implementations of Index with a single implementation. In English:
"For each type T, I and constant N of type usize,
implement Index<I> for LocalStorageVec<T, N>,
where I implements LocalStorageVecIndex
and [T] implements Index<I>"
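Translated into code, that sentence corresponds roughly to the following impl (a sketch; the body again delegates to the slice, and the Output type is borrowed from the slice's own Index implementation):

impl<T, I, const N: usize> Index<I> for LocalStorageVec<T, N>
where
    Self: AsRef<[T]>,
    I: LocalStorageVecIndex,
    [T]: Index<I>,
{
    type Output = <[T] as Index<I>>::Output;

    fn index(&self, index: I) -> &Self::Output {
        // `[T]` knows how to index by `I`; we only need to hand it the slice.
        self.as_ref().index(index)
    }
}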
If you've done this correctly, it_indexes should again compile and pass.
2.4.1.J Deref and DerefMut ⭐⭐⭐⭐
The next traits that make our LocalStorageVec more flexible to use are Deref and DerefMut. They utilize Rust's 'deref coercion' feature to allow a type to be treated as if it were some type it looks like.
That would allow us to use any method that is defined on [T] by calling them on a LocalStorageVec.
Before continuing, read the section 'Treating a Type Like a Reference by Implementing the Deref Trait' from The Rust Programming Language (TRPL).
Don't confuse deref coercion with any kind of inheritance! Using Deref and DerefMut for inheritance is frowned upon in Rust.
Below, an implementation of Deref and DerefMut is provided in terms of the AsRef and AsMut implementations. Notice the specific way in which as_ref and as_mut are called.
impl<T, const N: usize> Deref for LocalStorageVec<T, N> {
    type Target = [T];

    fn deref(&self) -> &Self::Target {
        <Self as AsRef<[T]>>::as_ref(self)
    }
}

impl<T, const N: usize> DerefMut for LocalStorageVec<T, N> {
    fn deref_mut(&mut self) -> &mut Self::Target {
        <Self as AsMut<[T]>>::as_mut(self)
    }
}
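Once these are in place, slice methods resolve on LocalStorageVec through deref coercion. For example (a sketch to run inside a test or main; it assumes your From<Vec<T>> implementation from 2.4.1.B):

let vec: LocalStorageVec<u32, 16> = LocalStorageVec::from(vec![1, 2, 3]);
// `first` and `contains` are defined on `[T]`, not on `LocalStorageVec`,
// but deref coercion makes these calls work.
assert_eq!(vec.first(), Some(&1));
assert!(vec.contains(&2));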
Question
- Replacing the implementation of `deref` with `self.as_ref()` results in a stack overflow when running an unoptimized version. Why? (Hint: deref coercion)
Unit 2.5 - Closures and Dynamic dispatch
Exercise 2.5.1: Config Reader
In this exercise, you'll work with dynamic dispatch to deserialize with serde_json or serde_yaml, depending on the file extension. The starter code is in exercises/2-foundations-of-rust/5-closures-and-dynamic-dispatch/1-config-reader. Fix the todos in there.
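As a refresher on the pattern (not the exercise's actual code; the Config type and parse functions below are stand-ins), dispatching on the file extension with a trait object can look like this:

use std::path::Path;

#[derive(Debug)]
struct Config {
    name: String,
}

// Stand-ins for the real serde_json / serde_yaml based parsers in the exercise.
fn from_json(_s: &str) -> Config {
    Config { name: "parsed as JSON".into() }
}

fn from_yaml(_s: &str) -> Config {
    Config { name: "parsed as YAML".into() }
}

fn main() {
    let path = Path::new("config.json");

    // Choose a parser at runtime and store it behind a trait object.
    let parse: Box<dyn Fn(&str) -> Config> = match path.extension().and_then(|e| e.to_str()) {
        Some("json") => Box::new(from_json),
        Some("yml") | Some("yaml") => Box::new(from_yaml),
        other => panic!("unsupported extension: {other:?}"),
    };

    println!("{:?}", parse("{}"));
}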
To run the program, you'll need to pass the file to deserialize to the binary, not to Cargo. To do this, run
cargo run -- <FILE_PATH>
Deserializing both config.json and config.yml should result in the Config being printed correctly.
Unit 2.6 - Interior mutability
There are no exercises for this unit
Unit 3.1 - Introduction to Multitasking
There are no exercises for this unit
Unit 3.2 - Parallel Multitasking
Exercise 3.2.1: TF-IDF
Follow the instructions in the comments of exercises/3-multitasking/2-parallel-multitasking/1-tf-idf/src/main.rs!
Exercise 3.2.2: Mutex
The basic mutex performs a spin-loop while waiting to take the lock. That is terribly inefficient. Luckily, your operating system is able to wait until the lock becomes available, and will just put the thread to sleep in the meantime.
This functionality is exposed in the atomic_wait crate. The section on implementing a mutex from "Rust Atomics and Locks" explains how to use it.
- Change the `AtomicBool` for an `AtomicU32`.
- Implement `lock`. Be careful about spurious wakes: after `wait` returns, you must still check the condition (see the sketch after this list).
- Implement unlocking (`Drop for MutexGuard<T>`) using `wake_one`.
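Below is a minimal sketch of where this ends up, following the simple (unoptimized) mutex from the linked chapter. It assumes the atomic_wait crate's wait and wake_one functions as used in the book; the names and layout of the exercise's scaffolding may differ.

use std::cell::UnsafeCell;
use std::ops::{Deref, DerefMut};
use std::sync::atomic::{AtomicU32, Ordering};

use atomic_wait::{wait, wake_one};

pub struct Mutex<T> {
    /// 0: unlocked, 1: locked
    state: AtomicU32,
    value: UnsafeCell<T>,
}

// SAFETY: the lock protocol guarantees exclusive access to `value`.
unsafe impl<T: Send> Sync for Mutex<T> {}

pub struct MutexGuard<'a, T> {
    mutex: &'a Mutex<T>,
}

impl<T> Mutex<T> {
    pub const fn new(value: T) -> Self {
        Self { state: AtomicU32::new(0), value: UnsafeCell::new(value) }
    }

    pub fn lock(&self) -> MutexGuard<'_, T> {
        // Try to set the state to "locked"; if it already was locked, sleep.
        while self.state.swap(1, Ordering::Acquire) == 1 {
            // Sleep until woken, but only while the state is still 1.
            // After `wait` returns, loop and re-check: wakes may be spurious.
            wait(&self.state, 1);
        }
        MutexGuard { mutex: self }
    }
}

impl<T> Deref for MutexGuard<'_, T> {
    type Target = T;
    fn deref(&self) -> &T {
        unsafe { &*self.mutex.value.get() }
    }
}

impl<T> DerefMut for MutexGuard<'_, T> {
    fn deref_mut(&mut self) -> &mut T {
        unsafe { &mut *self.mutex.value.get() }
    }
}

impl<T> Drop for MutexGuard<'_, T> {
    fn drop(&mut self) {
        // Unlock, then wake one waiting thread (if any).
        self.mutex.state.store(0, Ordering::Release);
        wake_one(&self.mutex.state);
    }
}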
The linked chapter goes on to further optimize the mutex. This is technically out of scope for this course, but we won't stop you if you try (and will still try to help if you get stuck)!
Unit 3.3 - Asynchronous Multitasking
Exercise 3.3.1: Async Channels
Channels are a very useful way to communicate between threads and async tasks. They allow for decoupling your application into many tasks. You'll see how that can come in handy in exercise E.2. In this exercise, you'll implement two variants: a oneshot channel and a multi-producer-single-consumer (MPSC) channel. If you're up for a challenge, you can write a broadcast channel as well.
3.3.1.A MPSC channel ⭐⭐
A multi-producer-single-consumer (MPSC) channel is a channel that allows for multiple Senders to send many messages to a single Receiver.
Open exercises/3-multitasking/3-asynchronous-multitasking/1-async-channels in your editor. You'll find the scaffolding code there. For part A, you'll work in src/mpsc.rs. Fix the todo!s in that file in order to make the test pass. To test, run:
cargo test -- mpsc
If your tests hang, your implementation probably either does not use the Waker correctly, or returns Poll::Pending where it shouldn't.
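The rule of thumb: whenever poll returns Poll::Pending, the waker from the Context must have been stored somewhere (or woken) so the runtime knows to poll again. A tiny standalone illustration, unrelated to the exercise scaffolding:

use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

/// A future that is pending exactly once and then completes.
struct YieldOnce {
    yielded: bool,
}

impl Future for YieldOnce {
    type Output = ();

    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        if self.yielded {
            Poll::Ready(())
        } else {
            self.yielded = true;
            // Without this wake, nothing would ever poll us again and the task would hang.
            cx.waker().wake_by_ref();
            Poll::Pending
        }
    }
}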
3.3.1.B Oneshot channel ⭐⭐⭐
A oneshot channel is a channel that allows for one Sender to send exactly one message to a single Receiver.
For part B, you'll work in src/oneshot.rs. This time, you'll have to do more yourself. Intended behavior:
- `Receiver` implements `Future`. It returns `Poll::Ready(Ok(T))` if `inner.data` is `Some(T)`, `Poll::Pending` if `inner.data` is `None`, and `Poll::Ready(Err(Error::SenderDropped))` if the `Sender` was dropped.
- `Receiver::poll` replaces `inner.waker` with the one from the `Context`.
- `Sender` consumes `self` on send, allowing it to be used no more than once. Sending sets `inner.data` to `Some(T)`. It returns `Err(Error::ReceiverDropped(T))` if the `Receiver` was dropped before sending.
- `Sender::send` wakes `inner.waker` after putting the data in `inner.data`.
- Once the `Sender` is dropped, it marks itself dropped within `inner`.
- Once the `Receiver` is dropped, it marks itself dropped within `inner`.
- Upon successfully sending the message, the consumed `Sender` is not marked as dropped. Instead, `std::mem::forget` is used to avoid running the destructor (see the illustration below).
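The last point relies on std::mem::forget; in isolation it behaves like this (a standalone example, unrelated to the channel code):

struct Noisy;

impl Drop for Noisy {
    fn drop(&mut self) {
        println!("destructor ran");
    }
}

fn main() {
    let n = Noisy;
    // The value is leaked: its destructor never runs, so nothing is printed.
    std::mem::forget(n);
}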
To test, run:
cargo test -- oneshot
3.3.1.C Broadcast channel (bonus) ⭐⭐⭐⭐
A broadcast channel is a channel that supports multiple senders and receivers. Each message that is sent by any of the senders is received by every receiver. Therefore, the implementation has to hold on to messages until they have been sent to every receiver that has not yet been dropped. This also implies that each message should be cloned upon broadcasting.
For this bonus exercise, we provide no scaffolding. Take your inspiration from the mpsc and oneshot modules, and implement a broadcast module yourself.
Exercise 3.3.2: Async Chat
In this exercise, you'll write a simple chat server and client based on Tokio. Open exercises/3-multitasking/3-asynchronous-multitasking/2-async-chat in your editor. The project contains a lib.rs file, in which a type Message resides. This Message defines the data the chat server and clients use to communicate.
3.3.2.A Server ⭐⭐⭐
The chat server, which resides in src/bin/server.rs listens for incoming TCP connections on port 8000, and spawns two tasks (futures):
- handle_incoming: reads lines coming in from the TCP connection. It reads the username the client provides, and broadcasts incoming Messages, possibly after some modification.
- handle_outgoing: sends messages that were broadcast by the handle_incoming tasks to the client over TCP.
Both handle_incoming and handle_outgoing contain a number of todos. Fix them.
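Whether or not the scaffolding uses it, the "broadcast" step above maps naturally onto Tokio's broadcast channel. A minimal, standalone example of its behavior (assuming the tokio dependency with the relevant features enabled):

use tokio::sync::broadcast;

#[tokio::main]
async fn main() {
    // Every receiver gets its own copy of each message sent after it subscribed.
    let (tx, mut rx1) = broadcast::channel::<String>(16);
    let mut rx2 = tx.subscribe();

    tx.send("hello".to_string()).unwrap();

    assert_eq!(rx1.recv().await.unwrap(), "hello");
    assert_eq!(rx2.recv().await.unwrap(), "hello");
}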
To start the server, run
cargo run --bin server
3.3.2.B Client ⭐⭐
The chat client, residing in src/bin/client.rs, contains some todos as well. Fix them to allow for registration and for sending Messages to the server.
To start the client, run
cargo run --bin client
If everything works well, you should be able to run multiple clients and see messages sent from each client appear in all of the others.
Unit 4.1 - Foreign Function Interface
Exercise 4.1.1: CRC in C
Use a CRC checksum function written in C in a Rust program
Prerequisites
- A C compiler
Steps
- Add the `cc` build dependency, by adding to `Cargo.toml` the lines:

[build-dependencies]
cc = "1.0"

- Create `build.rs` with contents:

extern crate cc;

fn main() {
    println!("cargo:rerun-if-changed=crc32.h");
    println!("cargo:rerun-if-changed=crc32.c");
    cc::Build::new().file("crc32.c").compile("crc32.a");
}

This will find your C code, compile it, and link it into the executable Rust produces.

- In `main.rs`, define an extern block (fill in the argument and return types):

extern "C" {
    fn CRC32( ... ) -> ...; // hint: https://doc.rust-lang.org/std/os/raw
}

- Now, create a Rust wrapper that calls the extern function:

fn crc32( ... ) -> ... {
    ... // (hints: `unsafe`, `.as_ptr()`, `.len()`)
}

- Call our wrapper on some example input:

fn main() {
    println!("{:#x}", crc32(b"12345678"));
}

In the above example, the correct output is 0x9ae0daaf.
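For reference, a filled-in version could end up looking roughly like this. The C signature is an assumption here (check crc32.h for the real one; usize is used to match a size_t length parameter):

use std::os::raw::{c_uchar, c_uint};

extern "C" {
    // Assumed to match `uint32_t CRC32(const uint8_t *data, size_t length)` in crc32.h.
    fn CRC32(data: *const c_uchar, length: usize) -> c_uint;
}

fn crc32(data: &[u8]) -> u32 {
    // SAFETY: we pass a pointer to valid memory together with its exact length.
    unsafe { CRC32(data.as_ptr(), data.len()) }
}

fn main() {
    println!("{:#x}", crc32(b"12345678"));
}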
Exercise 4.1.2: CRC in Rust
Use a CRC checksum function written in Rust in a C program
Requirements
- A C compiler
Steps
- Change `Cargo.toml` to:

[package]
name = "crc-in-rust"
version = "0.1.0"
edition = "2021"

[lib]
name = "crc_in_rust"
crate-type = ["dylib"]

# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html

[dependencies]

- Expose an extern Rust function:

#[no_mangle]
pub extern "C" fn crc32(...) -> ... {
    ...

    crc32_rust(...)
}

- Create a C header file `crc_in_rust.h`:

#include <inttypes.h> // uint32_t, uint8_t
#include <stddef.h>   // size_t

uint32_t crc32(const uint8_t data[], size_t data_length);

- Use the Rust `crc32` function in C:

#include <inttypes.h> // uint32_t, uint8_t
#include <stddef.h>   // size_t
#include <stdio.h>    // printf

#include "crc_in_rust.h"

int main() {
    uint8_t data[] = { 0, 1, 2, 3, 4, 5, 6 };
    size_t data_length = 7;

    uint32_t hash = crc32(data, data_length);

    printf("Hash: %d\n", hash);

    return 0;
}

- Compile and run:

$ clang main.c target/debug/libcrc_in_rust.so -o main
$ ./main
Hash: -1386739207
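A sketch of what the exported function might look like. The bitwise CRC-32 below is only there to make the example self-contained; the exercise may use a different implementation:

/// Plain bitwise CRC-32 (IEEE polynomial, reflected form); illustrative only.
fn crc32_rust(data: &[u8]) -> u32 {
    let mut crc = 0xFFFF_FFFFu32;
    for &byte in data {
        crc ^= byte as u32;
        for _ in 0..8 {
            let mask = (crc & 1).wrapping_neg(); // all ones if the lowest bit is set
            crc = (crc >> 1) ^ (0xEDB8_8320 & mask);
        }
    }
    !crc
}

#[no_mangle]
pub extern "C" fn crc32(data: *const u8, data_length: usize) -> u32 {
    // SAFETY: the C caller promises `data` points to `data_length` readable bytes.
    let data = unsafe { std::slice::from_raw_parts(data, data_length) };
    crc32_rust(data)
}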
Exercise 4.1.3: TweetNaCl Bindgen
Use bindgen to generate the FFI bindings. Bindgen will look at a C header file and generate Rust functions, types and constants based on the C definitions.
But the generated code is ugly and non-idiomatic. To wrap a C library properly, good API design and documentation are needed.
tweetnacl-bindgen
Making Rust bindings for the tweetnacl C library
Exercise: implement crypto_hash_sha256_tweet
Below you'll find instructions for using bindgen and wrapping crypto_hash_sha512_tweet. Follow the instructions, then repeat the steps for crypto_hash_sha256_tweet.
Instructions
Prerequisites:
- a C compiler is installed on the system
- bindgen, installed with `cargo install bindgen-cli`
Steps
- Create the Rust bindings:

bindgen tweetnacl.h -o src/bindings.rs

- Use `build.rs` to compile and link `tweetnacl.c`. Create `build.rs` and insert:

fn main() {
    cc::Build::new()
        .file("tweetnacl.c")
        .compile("tweetnacl"); // outputs `libtweetnacl.a`
}

And add this section to your `Cargo.toml`:

[build-dependencies]
cc = "1"

- Create `src/lib.rs` with the contents `pub mod bindings;`. This will make the `bindings` module available in `main.rs`.

- Run `cargo check` to verify everything is compiling correctly.

- By default, building will generate a bunch of warnings. We can turn those off by replacing our `build.rs` with:

fn main() {
    cc::Build::new()
        .warnings(false)
        .extra_warnings(false)
        .file("tweetnacl.c")
        .compile("tweetnacl"); // outputs `libtweetnacl.a`
}

and adding these lines at the top of `src/bindings.rs`:

#![allow(unused)]
#![allow(non_upper_case_globals)]
Inspecting our bindings
In the generated bindings.rs file we find this signature for the crypto_hash_sha512_tweet C function from tweetNaCl:
extern "C" {
    pub fn crypto_hash_sha512_tweet(
        arg1: *mut ::std::os::raw::c_uchar,
        arg2: *const ::std::os::raw::c_uchar,
        arg3: ::std::os::raw::c_ulonglong,
    ) -> ::std::os::raw::c_int;
}
Some observations
- The definition is inside of an `extern "C"` block, and has no body. Therefore this function is marked as an extern, and Rust expects it to be linked in.
- The function is marked `pub`, meaning we can import and use it in other modules (like `main.rs` in our case).
- We can deduce the behavior from the type signature:
  - `arg1` is the output: a mutable pointer to a sequence of bytes
  - `arg2` is the input: a constant pointer to a sequence of bytes
  - `arg3` is a length (unclear of what)
  - the return value is probably an error code
- These are raw C types, which makes this function a hassle to call directly from Rust.
We will deal with the last point by writing some nice Rust wrappers around the generated bindings.
In Rust, we bundle a pointer to a sequence of elements and its length in a slice. We could write the signature of our own Rust wrapper function as:
pub fn crypto_hash_sha512_tweet(out: &mut [u8], data: &[u8]) -> i32 {
    todo!()
}
Modelling with types
But by looking at the tweetNaCl source code we can see that the contract is a bit stronger:
- the output is always 64 bytes wide (64 * 8 = 512)
- we only ever return 0
int crypto_hash(u8 *out,const u8 *m,u64 n)
{
u8 h[64],x[256];
u64 i,b = n;
FOR(i,64) h[i] = iv[i];
crypto_hashblocks(h,m,n);
m += n;
n &= 127;
m -= n;
FOR(i,256) x[i] = 0;
FOR(i,n) x[i] = m[i];
x[n] = 128;
n = 256-128*(n<112);
x[n-9] = b >> 61;
ts64(x+n-8,b<<3);
crypto_hashblocks(h,x,n);
FOR(i,64) out[i] = h[i];
return 0;
}
The Rust type system can model these invariants: we can explicitly make the output 64 elements long by using a reference to an array. Furthermore, we can drop the return type if there is nothing useful to return.
pub fn crypto_hash_sha512_tweet(out: &mut [u8; 64], data: &[u8]) {
    todo!()
}
But even better, we can return the output array directly:
pub fn crypto_hash_sha512_tweet(data: &[u8]) -> [u8; 64] {
    todo!()
}
The compiler will turn this signature into the one we had before under the hood. Returning the value is more idiomatic and convenient in rust, and with modern compilers there is no performance penalty.
In detail: the C ABI mandates that any return value larger than what fits in a register (typically 128 bits nowadays) is allocated on the caller's stack, and the first argument to the function is then the pointer to write the result into. LLVM, the backend used by the Rust compiler, has specific optimizations to make sure the function result is written directly into this pointer.
Writing our implementation
Alright, with the signature worked out, we can write the actual implementation.
We can reach the bindings from main.rs with e.g.
tweetnacl_bindgen::bindings::crypto_hash_sha512_tweet(a, b, c);
Here tweetnacl_bindgen is the name of the project, specified in the package section of the Cargo.toml
[package]
name = "tweetnacl-bindgen"
Then bindings is the module name (the file src/bindings.rs is implicitly also a module) and finally crypto_hash_sha512_tweet is the function name from the original C library.
On to the implementation. Extern functions are considered unsafe in Rust, so we will need an unsafe block to call ours.
pub fn crypto_hash_sha512_tweet(data: &[u8]) -> [u8; 64] {
    unsafe {
        tweetnacl_bindgen::bindings::crypto_hash_sha512_tweet(
            todo!(),
            todo!(),
            todo!(),
        );
    }
}
Next we can pass our arguments: we turn the slice into a pointer with .as_ptr(), and get the length with .len(). The length needs to be cast to the right type. In this case we can use as _, where Rust will infer the right type to cast to.
pub fn crypto_hash_sha512_tweet(data: &[u8]) -> [u8; 64] {
    unsafe {
        tweetnacl_bindgen::bindings::crypto_hash_sha512_tweet(
            todo!(),
            data.as_ptr(),
            data.len() as _,
        );
    }
}
Next, we create an array for the return value, pass a mutable pointer to this memory to our extern function, and return the array.
pub fn crypto_hash_sha512_tweet(data: &[u8]) -> [u8; 64] {
    let mut result = [0; 64];

    unsafe {
        tweetnacl_bindgen::bindings::crypto_hash_sha512_tweet(
            &mut result as *mut _,
            data.as_ptr(),
            data.len() as _,
        );
    }

    result
}
And we're done: an idiomatic Rust wrapper around crypto_hash_sha512_tweet!
Uninitialized memory
There is one more trick: our current function initializes and zeroes out the memory for result. That is wasteful because the extern function will overwrite these zeroes. Because the extern function is linked in, the compiler likely does not have enough information to optimize the zeroing out away.
The solution is MaybeUninit:
use std::mem::MaybeUninit;

pub fn crypto_hash_sha512_tweet(data: &[u8]) -> [u8; 64] {
    let mut result: MaybeUninit<[u8; 64]> = MaybeUninit::uninit();

    unsafe {
        tweetnacl_bindgen::bindings::crypto_hash_sha512_tweet(
            result.as_mut_ptr() as *mut _,
            data.as_ptr(),
            data.len() as _,
        );

        result.assume_init()
    }
}
The std::mem::MaybeUninit type is an abstraction for uninitialized memory. The .uninit() method gives a chunk of uninitialized memory big enough to store a value of the desired type (in our case [u8; 64] will be inferred).
We can look at the LLVM IR to verify that 1) the initialization with zeroes is not optimized away and 2) using MaybeUninit does not initialize the array.
Below is a call site of the zero-initializing version of our crypto_hash_sha512_tweet function. Indeed, we see a memset that sets all the bytes to 0. (Also note that our wrapper function actually got inlined.)
%result.i = alloca <64 x i8>, align 1
%0 = getelementptr inbounds <64 x i8>, <64 x i8>* %result.i, i64 0, i64 0
call void @llvm.memset.p0i8.i64(i8* noundef nonnull align 1 dereferenceable(64) %0, i8 0, i64 64, i1 false), !alias.scope !8, !noalias !11
%_2.i = call i32 @bindings::crypto_hash_sha512_tweet(i8* nonnull %0, i8* nonnull "foobarbaz", i64 9)
In contrast, the version with MaybeUninit just calls our extern function without touching the memory at all:
%result.i = alloca <64 x i8>, align 1
%0 = getelementptr inbounds <64 x i8>, <64 x i8>* %result.i, i64 0, i64 0
%_3.i = call i32 @bindings::crypto_hash_sha512_tweet(i8* nonnull %0, i8* nonnull "foobarbaz", i64 9), !noalias !6
Full LLVM IR
define i8 @call_with_maybeuninit() unnamed_addr #1 personality i32 (i32, i32, i64, %"unwind::libunwind::_Unwind_Exception"*, %"unwind::libunwind::_Unwind_Context"*)* @rust_eh_personality {
start:
%result.i = alloca <64 x i8>, align 1
%0 = getelementptr inbounds <64 x i8>, <64 x i8>* %result.i, i64 0, i64 0
call void @llvm.lifetime.start.p0i8(i64 64, i8* nonnull %0), !noalias !2
%_3.i = call i32 @crypto_hash_sha512_tweet(i8* nonnull %0, i8* nonnull getelementptr inbounds (<{ [9 x i8] }>, <{ [9 x i8] }>* @alloc1, i64 0, i32 0, i64 0), i64 9), !noalias !6
%1 = load <64 x i8>, <64 x i8>* %result.i, align 1, !noalias !7
call void @llvm.lifetime.end.p0i8(i64 64, i8* nonnull %0), !noalias !2
%2 = call i8 @llvm.vector.reduce.add.v64i8(<64 x i8> %1)
ret i8 %2
}
define i8 @call_without_maybeuninit() unnamed_addr #1 personality i32 (i32, i32, i64, %"unwind::libunwind::_Unwind_Exception"*, %"unwind::libunwind::_Unwind_Context"*)* @rust_eh_personality {
start:
%_4 = alloca <64 x i8>, align 1
%0 = getelementptr inbounds <64 x i8>, <64 x i8>* %_4, i64 0, i64 0
call void @llvm.lifetime.start.p0i8(i64 64, i8* nonnull %0)
call void @llvm.memset.p0i8.i64(i8* noundef nonnull align 1 dereferenceable(64) %0, i8 0, i64 64, i1 false), !alias.scope !8, !noalias !11
%_2.i = call i32 @crypto_hash_sha512_tweet(i8* nonnull %0, i8* nonnull getelementptr inbounds (<{ [9 x i8] }>, <{ [9 x i8] }>* @alloc1, i64 0, i32 0, i64 0), i64 9)
%1 = load <64 x i8>, <64 x i8>* %_4, align 1
%2 = call i8 @llvm.vector.reduce.add.v64i8(<64 x i8> %1)
call void @llvm.lifetime.end.p0i8(i64 64, i8* nonnull %0)
ret i8 %2
}