RustBelt: Logical Foundations for the Future of Safe Systems Programming

Rust is a new systems programming language, sponsored by Mozilla, that promises to overcome the seemingly fundamental tradeoff in language design between high-level safety guarantees and low-level control over resource management. Unfortunately, none of Rust’s safety claims have been formally proven, and there is good reason to question whether they actually hold. Specifically, Rust employs a strong, ownership-based type system, but then extends the expressive power of this core type system through libraries that internally use unsafe features.

In this talk, I will present RustBelt, the first formal (and machine-checked) safety proof for a language representing a realistic subset of Rust. Our proof is extensible in the sense that, for each new Rust library that uses unsafe features, we can say what verification condition it must satisfy in order for it to be deemed a safe extension to the language. We have carried out this verification for some of the most important libraries that are used throughout the Rust ecosystem. After reviewing some essential features of the Rust language, I will describe the high-level structure of the RustBelt verification and then delve into detail about the secret weapon that makes RustBelt possible: the Iris framework for higher-order concurrent separation logic in Coq. I will explain by example how Iris generalizes the expressive power of O’Hearn’s original concurrent separation logic in ways that are essential for verifying the safety of Rust libraries. I will not assume any prior familiarity with concurrent separation logic or Rust. This is joint work with Ralf Jung, Jacques-Henri Jourdan, Robbert Krebbers, Hoang-Hai Dang, Jan-Oliver Kaiser, and the rest of the Iris team.

Transcript

Today I’m going to tell you about some work that we’ve been doing for the past several years at MPI for Software Systems, on something we call RustBelt. This is basically… The reason I’m excited about it is that, we spend a lot of time in academic work studying… Well. It’s okay. We may need more chairs, though.

You’re very popular.

Apparently. Okay. I’m honored. Oh yeah, you can take this out. That’s good.

So yeah, I spent a lot of time proving theorems and developing theories and logics and type systems and all this stuff, and you wonder if it has anything to do with real code. In this project, we’re actually trying to apply some of these logical foundations for programming languages that we’ve been building up over the past decade to a real, actively developed programming language named Rust. Just a show of hands, how many people have heard of Rust? Okay, that’s what I figured. How many people have actually used Rust in any form? Okay, all right. Fine. Good.

Part of this talk will be just about cluing you into what I think is interesting about Rust in general, and then honing in on a really important problem that we’ve been studying, in terms of how do you build foundations for this language, and establish that its safety guarantees actually hold. And then I’m going to talk throughout the rest of the talk about these fundamental technologies that we’ve been building up that allow us to do that.

First of all, what is Rust? Here you can… actually, I need a chair a little later on. Well, actually I don’t, if you want to use the chair it’s good. Go for it. I can handle it. I can stand for this talk.

Okay. Rust is basically trying to attack this long-standing problem of, how do you balance safety and control? On the one hand, you have high-level… And this is not going to be news to any of you, I don’t think. So, on the one hand, you have high-level languages that are great for building large applications. They have these nice high-level abstractions and things like automatic memory management, and they’re safe.

On the other hand, for certain kinds of applications, typically called systems programming applications, you often need to go down to a lower level of abstraction, you need more fine-grained control over resource management, and you typically end up having to use an unsafe language like C or C++.

The question has been, for decades, can you somehow overcome this trade-off and have a language which gives you the best of both worlds? In other words, a safe systems programming language. And that’s the niche that Rust is trying to fill. Rust has been in development by Mozilla since around 2010, and it’s actually not just developed in Mozilla. Mozilla has a core team that works on it, but there’s thousands of contributors to the language. It’s in use by many companies. Obviously, Mozilla’s one of the big users. They’ve been using Rust to build Servo, which is a next-generation web browser engine with better parallel performance, pieces of which have already been incorporated into Firefox. But there’s many other companies using Rust in production code, as well.

Why is Rust exciting? Basically, the reason is that it’s the only systems programming language developed so far that provides a combination of three things. One, low-level control in the style of modern C++. It inherits a lot of common APIs or useful APIs from C++. Another thing, is strong safety guarantees, and we’ll get into this in a bit more detail, but basically the main ones are type safety, memory safety, and data race freedom in concurrent code. And last but not least, it has industrial development and backing from a major industrial software vendor.

I think this is the reason that people are really excited about Rust, and it’s been… For a while, people have been talking about it being a next big thing in systems programming, or having the potential to become the next big thing in systems programming. So, that’s the very high-level sales pitch for why we’re interested in this, at all. Let’s get into a bit more detail here.

What is Rust about? The core idea of Rust is essentially that the root of all evil in this kind of low-level systems programming is the unrestricted combination of mutation and aliasing. In other words, when you have multiple references or aliases to some object, and you use one of those references to mutate it and then potentially deallocate and reallocate the object, then you can be left with things like dangling pointers. That leads to things like use-after-free errors, data races, iterator invalidation, and a host of other common programming anomalies that you find in C++ code.

The idea in Rust is basically to try to eliminate this whole class of errors using a so-called ownership type system; in the academic literature, this is often called a substructural or affine type system. I’m not going to go into the detail of why, but basically it has something to do with the origins of this kind of approach in substructural logic.

How does that work? Here’s the very, very simple answer of how the ownership type system works. The basic idea of the type system is that an object can either be mutable, or it can have multiple active aliases, but it can’t have both at the same time. So, the type system is set up to ensure that property.

The first line of defense is basically the idea of ownership. So in Rust, if you have a value of type T in your scope, then that means that you “own” the object. What that means, roughly, intuitively, is that you have control over the resources backing that object, and you know that other parts of the program can’t have pointers or aliases into it. You own it fully, and therefore you can mutate it however you want, because you know there are no other aliases; you can even deallocate it. That will be perfectly safe.

However, of course, that strict ownership discipline is too restrictive by itself, so in addition, you have the ability to create references. For example, when you pass something by reference to a function, that creates what is called a borrow of the object. There are two different ways of creating a borrow: you can either create it in a way that’s shared, or in a way that’s unique. This is basically a trade-off between whether you want to allow mutation or aliasing.

In the shared case, when you create a shared reference, you can have as many copies of it as you want. You can freely copy the reference, so it’s freely aliasable, but you can’t actually mutate through any of the copies. Whereas if you create a mutable, or unique, borrow, then you know that it’s the only active alias to the object, but you’re allowed to mutate through it. So you have, again, this trade-off between aliasing and mutation.

To make this concrete, I’m going to go to a very lame demo that will give you, hopefully… Oh, that’s a little small, let’s make this bigger.

Give you a little idea of what’s going on. Okay. So, this is just a little function that allocates a vector of three elements, pushes on to the vector the number 42, and then prints the elements of the vector in a loop.
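Roughly what the demo code looks like (my reconstruction; the actual demo may have differed in details such as names):

```rust
fn main() {
    // Allocate a vector with three elements.
    let mut v = vec![1, 2, 3];

    // Push 42 onto the vector (requires mutable access).
    v.push(42);

    // Iterate over the vector, printing each element.
    // `&v` creates a shared borrow of `v` for the duration of the loop.
    for x in &v {
        println!("{}", x);
    }
}
```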

This should look innocuous, and it is. You can see that. Okay.

Okay so, it prints, one, two, three, 42. Okay. All right, good. Now, suppose that we move that push into the body of the loop. So, does anyone have any idea of what is likely to happen, or what should happen, or anything? Yeah.

[inaudible 00:08:17] an infinite loop?

An infinite loop? Okay.

I don’t know.

Is it an error, because you have a reference that’s not mutable?

Okay, that’s a Rust… That’s a good question. I should be clearer with my questions. Suppose this were C++, and you wrote something similar. What could happen?

Yeah, infinite loop is one possibility. Yeah?

Iterator invalidation?

Iterator invalidation? Okay, why?

Because you’re potentially reallocating the vector after growing it.

Exactly. So, I didn’t specify anything about the semantics of the push method on this vector, but that’s exactly what will happen.

In Rust, when you push something, it checks whether it has space allocated for that new element, but if it doesn’t, then it’s going to reallocate the vector’s buffer. This is exactly a case where you have the potential for iterator invalidation.

When you do this push, it might actually reallocate the vector, so the next time you try to get the next element of the original vector, you’d get a use-after-free error.

Rust is going to prevent this from happening at all. Here’s what it says, hopefully you can read that. Can you read that in the back? No? All right, let me make it a little bigger. Is that better? Any better? Okay.

Hopefully, that’s as big as I can make it. What does it say? It says that we had an immutable borrow occurring here, that’s the shared borrow, okay? And then a mutable borrow occurs here. This is perhaps not obvious if you haven’t programmed in Rust, but when you do v.push(42), that’s actually creating a mutable borrow that you have to pass, basically, as the self parameter to the function.

The push method expects you to give it a mutable reference to the object v, okay? There’s syntactic sugar that creates that mutable reference when you call push, because push is going to mutate the vector, so it needs mutable access to it.

So, it’s saying you can’t do that here, because there’s an outstanding immutable borrow, which means you’re not allowed to do any mutation while there exists this active alias to the vector, which is owned by the iterator. Okay.

So, this is just statically prevented. Now, just to show you what happens: if we put this call to push outside, after the iteration is done, we would hope that this is allowed, because at that point there is no outstanding reference to the vector. The immutable borrow here was only created for the duration of the iteration, so afterwards we should be able to get mutable access to the vector back.

Indeed, now it’s fine. Okay. So, this is just to give you an idea of what kind of interesting analyses are happening in the Rust type system, and the kind of programming anomalies that it prevents statically. Okay. Good.
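In sketch form, the two variants just discussed (again my reconstruction of the demo):

```rust
fn main() {
    let mut v = vec![1, 2, 3];

    // Variant 1: rejected by the borrow checker. The loop holds a
    // shared borrow of `v`, so the mutable borrow created by `push`
    // is not allowed inside it:
    //
    // for x in &v {
    //     v.push(42); // error[E0502]: cannot borrow `v` as mutable
    //     println!("{}", x);
    // }

    // Variant 2: accepted. The shared borrow ends with the loop,
    // so mutable access to `v` is available again afterwards.
    for x in &v {
        println!("{}", x);
    }
    v.push(42);
}
```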

All right, so I’ve told you the standard story now. If you hear any talk about Rust, you’ll hear basically the story I’ve told so far: preventing this combination of mutation and aliasing.

Unfortunately, this is a lie. The truth is, sometimes you need mutation and aliasing; sometimes there’s no way around it, at some level of abstraction. In particular, when you want to implement a variety of data structures that are widely used in the Rust ecosystem. For example, pointer-based data structures that fundamentally involve mutation of aliased state, like doubly-linked lists. That goes outside the core Rust type discipline.

Synchronization mechanisms like locks, or channels, these all involve some kind of communication which is implemented by mutating some state that’s shared by multiple parties.

Memory management, which is not built into Rust, but there are libraries that provide different kinds of memory management, such as reference counting, and those are fundamentally implemented internally by having aliases to some object and updating shared state, like a reference count. Okay.

These all involve some kind of mutation of aliased state, which seems to go outside of what the Rust type system allows. So how does Rust support these things?

Basically, through APIs. I’m going to show you an example of two APIs working together. I’m not going to show you that many specific examples in this talk, but these two APIs, Arc and Mutex, I’m going to focus on in more detail later in the talk, so it’s not a gratuitous example.

I’ve tried to focus on just a couple examples to illustrate the key points here. I’m going to show you an example of how you use these APIs, then I’m going to talk about what’s involved in implementing them.

Here the idea is, with these two APIs: Mutex basically provides synchronized access to some object through a lock. Basically, if you have some object X of type T, and you wrap it in a Mutex, then the only way to get access to it is by acquiring the lock on the object, okay?

Then Arc, which stands for atomic reference counting, allows dynamic sharing of that Mutex among multiple threads in a thread-safe way, using reference counting, okay? So, it basically wraps it in a reference-counted object.

I’m showing you one example of how you can implement shared mutable state through these two APIs working together. There’s other ways of doing this, as well, but this is just one example.

This is just a silly example. We spawn 10 threads, and for each of these 10 threads, we’re going to clone the Arc; you have to actually explicitly clone the Arc to get a new Arc that you can give to a different thread. When you clone this Arc, it creates a new Arc that can be passed to the thread that’s spawned here. That’s what the move does here: it moves ownership of this Arc, mydata, to the spawned thread. In that thread, basically what happens is we call the lock method on mydata, and unwrap is just doing error handling, and then we get… Basically, I’m not showing the full types, because then it gets more involved, but what happens here is that y gives you mutable access to the underlying object in the scope of this thread. Okay.

We’re basically just spawning 10 threads… We have 10 clones of this Arc, and they can each be used to race to access the data. Okay. So, we just have 10 threads racing with each other.
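For concreteness, here is a sketch in the shape of that example; the specifics (guarding a single integer and incrementing it) are my assumptions, not necessarily what was on the slide:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Wrap the data in a Mutex for synchronized access,
    // and in an Arc so it can be shared across threads.
    let mydata = Arc::new(Mutex::new(0));
    let mut handles = Vec::new();

    for _ in 0..10 {
        // Each thread gets its own clone of the Arc...
        let mydata = Arc::clone(&mydata);
        // ...and `move` transfers ownership of that clone
        // into the spawned thread.
        handles.push(thread::spawn(move || {
            // Acquiring the lock yields mutable access to the
            // underlying data for the scope of `y`.
            let mut y = mydata.lock().unwrap();
            *y += 1;
        }));
    }

    for h in handles {
        h.join().unwrap();
    }
    println!("{}", *mydata.lock().unwrap());
}
```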

This is perfectly safe Rust code, this is code that a client would write, and it’s giving us shared mutable state, right? This means that these libraries, like Arc and Mutex, are fundamentally expanding the expressive power of the language, because they’re giving you the ability to do things you could not write in a well-typed way in the safe fragment of the language.

What does that mean? Well, it means that, clearly, these are not implemented in the safe fragment of the language. Instead, they’re implemented using unsafe code. The idea is that, this is the reality of Rust, okay, which you don’t hear about in most talks on Rust, so that’s why I’m talking about it today.

Most code is written in the safe fragment of the language. Then there are many data types, implemented in the standard library and other user libraries, that internally make use of unsafe code, but encapsulate it in safe APIs. The idea is that inside these implementations… So, this is the implementation of RefCell, but it’s the same for any of these: you’ll have these snippets of code that are marked unsafe; they’re in what’s called an unsafe block.

What happens in an unsafe block is, basically, that you have the ability to do a couple more operations that you can’t usually do, like dereferencing a raw pointer. A raw pointer is basically a pointer whose aliasing is untracked by the type system.
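To give a flavor, here is a made-up minimal example of an unsafe block (not the actual RefCell source):

```rust
fn main() {
    let x = 42;
    // Creating a raw pointer is allowed even in safe code.
    let p: *const i32 = &x;
    // Dereferencing it is only allowed inside an unsafe block,
    // because the type system does not track what `p` aliases or
    // whether it still points to live memory.
    let y = unsafe { *p };
    println!("{}", y);
}
```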

You have the ability to do these extra operations, inside these unsafe blocks, and the idea is supposed to be that, although you’re using these unsafe operations in the implementations of these APIs, the APIs are nonetheless, safe to use. In other words, the claim that the developers want to make, is that clients of these APIs… As long as the clients are in this green ellipse in the middle, here, right? They’re written in the safe fragment of the language, then they will never observe any undefined behavior by using these APIs. That’s the claim. And that sounds like a nice claim, it’s also a claim that gets verification people like me very excited, because there’s great potential for bugs.

Just one example of this… There’s actually been many examples, but this was one from when we just started to work on the project, I guess… Yeah, 2015, right. This is actually before our project officially started, but it was around the same time we were proposing it. Aaron Turon, who was the manager of the Rust project at the time, wrote this really nice blog post about fearless concurrency with Rust, basically describing a lot of this story that I’ve said already: the type system is great, but there are all these APIs that let you do interesting things on top of it, and that extend the power of it in a modular way.

At the end of the post, he ended up with this pièce de résistance example of the scoped threads API, which is basically an API that let you spawn threads that would have access to references into the parent thread’s stack frame, and do so in a safe way.

This was an example, again, of one of these APIs that internally makes use of unsafe code. The implementation of that API was not something you could type-check in the safe fragment of the language itself.

And lo and behold, on the day that he posted this blog post, a bug was found in this API, and this led to much gnashing of teeth. This was also… I forgot to mention, this was two weeks before the 1.0 release of the language, so not a great time. I’m not going to go into the details of this, but basically, there was a bad interaction between this API and the Rc API, the reference-counting API. Each of them by itself was safe… No one could find any bugs, but the combination of these two APIs could cause use-after-free errors.

In the end, they deprecated the API, and there’s been further development of other APIs that do similar things. Okay. Basically, in that case, they decided that Rc was the more important API, and they stuck with that.

The question, of course, is how could you avoid this kind of thing, right? How can you formally ensure that these kinds of soundness bugs don’t occur? I think this is particularly important to the people on the Rust team, because one of the big selling points of Rust, compared to something like C++, is safety.

On the other hand, one of the things that allows the language to grow and evolve, and be as expressive as it is, is the use of these kinds of APIs that safely encapsulate controlled uses of shared mutable state, like the ones I’ve talked about so far.

You need some way of being able to grow the language in a safe way, and have more confidence that you’re doing that correctly. So, that’s basically where we come in. RustBelt is one of the first projects to actually look at the Rust programming language, period. But our goal is really to develop the first logical foundations for the language, such that we can verify the safety of the core type system and a number of widely used libraries, like the ones I’ve talked about so far, and furthermore, so that we can give Rust developers the tools they will need in order to safely evolve the language going forward. Okay?

That’s the very high-level statement of what we’re trying to do. Any questions so far?

Okay. Great. Now, I’m going to go into a little section of the talk where I give a high-level view of the general approach that we’re taking, and then I’m going to zoom in on what I think is the most interesting technical problem that we faced, and how to address it.

The first question is, what are we trying to prove here? What do we mean when we say that we want to show the language is safe? The standard approach that people have taken in the past is the so-called syntactic safety approach, or syntactic type soundness. It’s also often referred to as progress and preservation. How many of you have actually heard of any of those terms? Okay, fine. Good. If you haven’t, it’s no problem, because I’m not using that approach.

The nice thing about the syntactic approach, or the reason that many people in the programming language community have used it, and it’s taught in many undergraduate textbooks and so forth, is that it is very simple. You basically take the notion of whether a program is well typed, lift that to a notion of whether a machine state is well typed, and then show that that is preserved throughout the execution of the program and ensures well-defined behavior of your code. Okay.

It boils down to these two very simple theorems, and if things work out nicely, you can prove them in a very simple way. The problem is, it totally doesn’t work if you have any unsafe code in your language, okay? As soon as you have any piece of the program that is not in the safe fragment, that is not well typed according to some fixed set of syntactic typing rules, then this whole approach basically goes out the window; it doesn’t tell you anything about the safety of your program.

That’s a non-starter for Rust, because as I’ve said already, the use of unsafe code is essential to the whole enterprise. What do we do instead? We basically generalize to something that we call semantic safety, or semantic soundness. The idea, at a very high level, is that we lift the notion of safety: instead of just saying something is safe if it’s well typed according to some fixed set of rules, we give an observational, or extensional, definition of safety, which says a library is semantically safe if there’s no way you can poke at it, in a well-typed way, in order to get it to produce undefined behavior, okay?

So, when you use it in any larger, well typed program, it won’t cause any undefined behavior, you can’t observe any undefined behavior.

Yeah?

What does undefined behavior really mean?

Well, that’s an excellent question, which I’m not going to answer in this talk. It’s an excellent question, in that it’s… That’s an active topic of current research. In fact, we have a paper right now that’s starting to push more in that direction.

For the purpose of what we’ve done so far in RustBelt, we basically meant a standard notion of not accessing memory outside the regions that are allocated, for example.

Or not having data races according to some operational definition of what a data race is. No unsynchronized accesses to memory from different threads.

There are some fixed definitions of what the undefined behaviors are, and we show that you don’t have those things. Does that answer your question? I know that’s somewhat vague, I’m not going into details of exactly [crosstalk 00:23:36].

I could imagine that it would be a bunch of other… A property I would want is, I normally make the types of my program, and make inferences, things that have to be true.

Right.

Those inferences are about more than just-

Let me be clear, undefined behavior for us means, literally, we have an operational semantics that we give for a language that we define called λRust, which is a core Rust calculus modeled after the MIR, the mid-level intermediate representation of Rust. That operational semantics has the ability to get stuck, meaning get into some bad state.

We prove that well typed Rust programs, for some definition that I’ll give in a sec, never get into this bad state. Okay?

Yeah.

Does this definition of semantic safety cover the Leakpocalypse we had where we have two of these unsafe things [crosstalk 00:24:25]?

Yes, yes.

Okay.

If I don’t answer that question in two minutes, ask me again.

Do you have a… Yeah?

Is this fundamentally different than saying, [inaudible 00:24:38]… Is this semantic technique, fundamentally different than syntactic safety where I add a few extra constructs [inaudible 00:24:43]?

The difference is basically, I don’t know of any scalable approach to syntactic safety that allows you to keep adding those primitives, right?

The problem is, we want to have a framework that would allow you to say, when I’m developing a new API, can I add this? Basically, yes, in some sense you are adding it as a new primitive to the language, right? It’s a new data type that is safe for subtle reasons. Generally, the whole benefit of the syntactic approach is that you don’t have to think very much.

There really hasn’t been any approach of syntactic safety plus a few primitives where the set of primitives can keep on growing, in such a way that when you add a new primitive, you don’t have to revise all the existing proofs. Whereas with semantic safety, when you add a new API, you don’t have to revise the existing proofs; you just prove the thing about the new API.

Okay? Okay. This’ll become a little bit clearer in the next couple slides.

Here’s the picture of what we do. We define something we call a semantic model, which is a mapping from types, or more generally from interfaces of APIs, to some kind of safety contract: basically, a verification condition that the implementation of the API has to satisfy in order to be considered semantically safe, that is, safely linkable with other code, okay?

We define this semantic model, once and for all, for the type system, okay? And then what do we prove? We prove two things. One is the general theorem, which says that if the implementation of your library is completely in the safe fragment of the language, then it satisfies the safety contract corresponding to its API, by construction. Okay?

Yeah, we prove this once and for all. But then, if you have an API that is implemented internally with unsafe features, there’s something you have to do. Okay? There’s a manual proof burden: you have to show that your implementation of your library satisfies its corresponding safety contract, and that could be arbitrarily difficult. But at least you have a safety contract against which you can verify it. Right? You know what property you have to show.
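Schematically, in my own notation (not the paper’s exact statements), the two pieces look like this:

```latex
% Semantic typing: e behaves safely at type \tau, as defined by the
% semantic model, rather than by the syntactic typing rules.
\vDash e : \tau

% Fundamental theorem: syntactically well-typed (safe-fragment) code
% is semantically safe, once and for all.
\vdash e : \tau \;\Longrightarrow\; \vDash e : \tau

% Libraries using unsafe code get no such free pass: one must prove
% \vDash e_{\mathsf{lib}} : \tau_{\mathsf{lib}} manually, by verifying
% the safety contract that \tau_{\mathsf{lib}} translates to.
```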

When you put those together, basically, remember this picture, what this means is, all the stuff in the middle is safe by construction. Okay, we prove that once and for all. And then all the libraries that use unsafe features, those have to be manually verified. Okay?

Now, of course, in the future, we would love to be able to develop more automated verification technology that would allow you to develop these kinds of proofs for these APIs in a more systematic, or automated, way. That’s not been our focus so far; we’ve been focused on defining this framework, instantiating it with a significant subset of the Rust type system and a number of significant, widely used APIs, and showing that these all link together safely. Okay?

Any questions about that, so far? Yeah.

Did you find any bugs?

We did actually, not a lot, but we found a couple of bugs. One in the Mutex API; that was not a deep bug, but it was something we discovered in the course of doing this verification. We also found something in the Arc API, which had to do with relaxed memory, which I’m not talking about today.

Not a lot of bugs, no, but we have found a couple. Yeah?

Do you have to put extra thought into the interaction of different libraries, or, because they all have to go through the same semantic structure, is that not relevant?

Yeah, right. Okay, this was the question we had before, right? Basically, the answer is no, you don’t have to worry about the interaction, because the idea is that the semantic model is this mapping for all APIs… It tells you how to interpret the interface of any API as a verification condition on the implementation.

That means everything is independent. Of course, that means that in the case of something like the example I gave before, with Rc and the scoped threads API, one of them would have to fail its verification proof, right?

In our case, of course, since the language went with Rc as the important thing, we made sure that our semantic model accounted for the safety of Rc. We have a proof of the Rc implementation; we don’t have a proof of the scoped threads API, and that would not work.

Yeah.

But you acquainted towards scoped threads, and that was found in Arc, or how would that work?

Well, if you tried to verify it, you would fail. I don’t remember the details exactly of where you would fail in that proof, but-

Okay. So, in the process of manually verifying it would just be a real [inaudible 00:29:13]

Yes.

Okay. Got it.

Yeah.

Does the safe-by-construction part say anything yet about the memory model behavior?

The memory model, when you say the memory model, what do you mean exactly?

[crosstalk 00:29:26] Things like virtual memory, where I can create shared mutable state that would be invisible to the language.

I’m not exactly sure I can answer the question. I should say, when I say we prove something, it’s always with respect to the formal model we defined. We defined a memory model for Rust, a relatively simplistic, CompCert-like memory model, if you’re familiar with that, and we use that.

That’s, in various ways, not realistic, or not fully realistic. The question of how to define a fully realistic, or fully faithful memory model that accounts for A, what Rust developers would like it to be, and B, what will be safe to compile to LLVM, which is what actually happens.

This is an open question, right? It’s always with respect to some memory model that we’ve written down. One of the things we’ve been doing recently is actually working on defining a richer memory model that has more undefined behavior, in order to allow for various type-based compiler optimizations that the Rust developers want to do. We have not yet adapted RustBelt to that more aggressive memory model.

Yeah.

To draw on a point I think you were suggesting here is… A kind of cultural relativity question here, which is, there were two incompatible things, and you shaped a logical world such as to make one of them work, and the other one not. And it sounds like, you maybe could’ve done it the other way, and made the other one work, and not RC. I’m wondering, if that’s right, and how much choice is there [crosstalk 00:31:05]?

Right. That’s an excellent question that is very difficult to explain quickly. If it’s okay, I’m going to table that question for later, because that’s an open ended question that we can go on for a while with.

The short answer is, there’s a lot of flexibility in how we define the semantic model. Indeed, I have two projects right now where each one involves looking at new and more adventuresome APIs that are doing weirder and weirder things. I mean, this is the trend… A lot of Rust hackers are very adventuresome, and they like Rust because it lets them do adventuresome things with a safe interface. It’s exactly this story that I’m telling you here.

As a result, often, when you look at these APIs, you say, well, it turns out I can’t actually verify that as safe according to the semantic model we have right now.

It turns out we do need the ability to also adapt our semantic model, and our whole framework, over time. The good news is, this is all verified in Coq, in the Coq proof assistant. That makes this much simpler than it would otherwise be; basically, it would be impossible if we didn’t have machine support for doing this.

Since we do, it’s actually… When you want to change the semantic model, Coq tells you where you have to make various changes to any existing proofs, and so we have that flexibility of evolving our semantic model over time.

Yeah. Yeah, sure.

Do I understand the point of this to be, [inaudible 00:32:33], that you go over and manually verify Mutex, and then independently manually verify Arc?

Yes.

Okay.

Yeah, they’re completely independent.

Okay. Got it.

Yes.

Okay. So, what I described was the overall goal of the RustBelt project, for various settings of the parameters: how do we define the language, how do we define the semantic model, what are the APIs we want to verify, under what realistic memory model, et cetera, et cetera. There are various knobs you can tweak to make this more and more realistic, more faithful to the actual language.

In our POPL paper from last year, we basically did this for a sequentially consistent, very simple concurrency semantics, for a core fragment of Rust. Basically, in the rest of this talk… Things are a little silly, but that’s okay.

I’m going to focus on giving you some basic idea of how we do this, okay, and the key challenge. The key challenge from my perspective is essentially, that… Oh, shoot.

The danger of clickers. Well, all right. Jumped the gun.

The [inaudible 00:33:46] question is, which logic to use? And you saw the answer already… So, I don’t know how many of you are familiar with it, but… Sorry, I jumped too far.

When I say which logic to use, what do I mean? Okay. The point is, right, we have this semantic model which translates API interfaces into verification conditions. Which logic do we use to describe these verification conditions, okay?

If you just pick some garden-variety logic, it may be a terrible fit for actually modeling Rust and proving things about it. Okay?

The logic we use instead is a derivative of something called separation logic. How many of you have heard of separation logic? Oh, okay, that’s more than I expected.

Basically, separation logic is an extension of Hoare logic. How many of you have heard of Hoare logic? A few more. Hoare logic is basically a program logic… You can think of preconditions, postconditions, invariants, things like this. It was originally developed in the 60s and has been studied for decades.

Separation logic is an extension that started around the turn of the century; it was originally designed for reasoning about pointer-manipulating programs, which is something that Hoare logic was not very good at. It’s been a major influence on many verification and analysis tools. Why is it a good fit for Rust? Basically, because of the central idea of separation logic… Which I’m going to talk about in more detail; I’m not assuming you know separation logic, I’m going to explain what you need to know about it.

The basic idea is this idea of ownership. Okay? The same idea of ownership that I’ve already talked about in the context of Rust is built in as a primitive notion in separation logic: assertions in separation logic denote ownership of resources. So it’s a natural fit for Rust; it’s thinking in the same terms as the Rust type system.

There’s a couple problems. You can’t just say we use separation logic. The first problem is, there’s not just one separation logic, there’s many of them, there’s more than seven.

For something like Rust, a natural idea would be to use some extension of concurrent separation logic, so that’s kind of like the first Superman here. This is something was developed around 2004, pretty early on.

The original separation logic had nothing to say about concurrency, but O’Hearn, who was one of the original developers, soon realized that it would actually be very useful for doing modular reasoning about concurrent programs as well.

Since that work, there’s been a huge pile-on of work… Do I have the picture? Oh, no, I don’t have the picture, the picture comes later.

There was a paper actually, in 2010, called The Next 700 Separation Logics, which… You can’t read that in the back, I’m sure. In the abstract it said: separation logic has brought great advances to the world of verification, blah, blah, blah. However, there is a disturbing trend for each new library or concurrency primitive to require a new separation logic. Okay.

Clearly that’s bad, because if we’re developing a semantic model that is supposed to be this universal semantic model that all of these APIs are verified with respect to, then if you come up with a new API, and suddenly the existing separation logic that the thing is built on doesn’t allow you to verify it, then you’re kind of screwed.

What we really want is some kind of unifying framework for being able to derive new and more powerful separation logics over time. Okay. That’s the first problem.

The second problem, which I’m really not going to talk about at all today, is the concurrency model. All the work on separation logic and concurrent separation logic until a couple of years ago assumed sequential consistency for memory accesses.

This means, basically, a simple memory model where every thread takes a turn, accessing the global state. As soon as you make a mutation of the state, it’s visible to all other threads, and this is not how concurrency actually works. It’s a nice fiction, but the reality is more complicated.

To actually have a faithful model of what Rust is doing, we need to account for C++ style atomics, which are these relaxed memory operations that give you more fine-grain control over the performance of your concurrent operations.

This led to a bunch of work, basically, that we’ve been doing for the last, I don’t know, five or six years, something like that, on trying to address these two problems, which I think are the central problems in building up a verification framework like RustBelt.

One is this line of work on Iris, which is what I’m going to focus on today, which is this unifying framework for understanding concurrent separation logic, into which we can encode these semantic models of Rust types.

The other one is a line of work on developing separation logics that account for relaxed memory, in the style of the C++ memory model, and in our most recent work, we’ve actually shown how to encode this whole line of work, the logic GPS and its descendants, inside Iris.

So, Iris really is this very general logic that lets you encode things about not only one language, or one memory model, but many different kinds, and it’s all mechanized in Coq.

In this talk, what I’m going to do in the rest of the time, if I have… Yeah. I’ll do the best I can… is to give you a high-level review of what the basic ideas of concurrent separation logic are. I’ll motivate why those were not enough, that is, why you need to go beyond basic concurrent separation logic for doing the kinds of verifications we’re interested in, and I’ll talk a bit about Iris and how it supplies the right framework for what we’re trying to do. Any questions so far?

Hoare Logic! Okay. This is a very quick introduction to Hoare logic. Basically, in Hoare logic you prove… Well, at least traditionally… You prove these, what are called, Hoare triples, and this is named, by the way, after Tony Hoare, a famous British computer scientist.

There’s the code we’re verifying, C, there’s a precondition, which is the assertion you make about the state before the code executes, so this is the assumption of the verification. And there’s the postcondition, which is the property that should hold afterwards, and basically, what the Hoare triple says is that, if the precondition holds of the initial state, then C, is safe to execute. Okay, meaning its execution is well defined, it won’t get stuck in the operational semantics, and if it terminates with a value, then Q holds of the resulting value. Okay?
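Written out formally, with the reading just described:

```latex
\{P\}\; C \;\{v.\, Q(v)\}
% If the precondition P holds of the initial state, then:
%   (1) C is safe to execute (its execution is well defined; it never
%       gets stuck in the operational semantics), and
%   (2) if C terminates with a value v, the postcondition Q(v) holds
%       of the resulting state.
```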

That concludes my introduction to Hoare logic. There’s also rules, but I only have so much time. I’ll show a couple rules later on. Actually, I think there are only two rules in this talk.

What is separation logic? Well, it’s basically Hoare logic where you change the meaning of what assertions are. So, in Hoare logic assertions are just facts about the whole program state. In separation logic, assertions denote ownership of pieces of the program state. This is vanilla separation logic circa 1999.

These are the two major new features. One is these points-to assertions: X points to V. If I can assert that in the precondition of my Hoare triple, it means that my thread owns X… It owns that location, and it points to V. And what does “own” mean? It means that, if I can assume that in my precondition, then I know I’m not going to have any interference on that location from any other thread that I’m linking with. Okay? I know that I can mutate it, and I can reason about the contents of X, without worrying about any interference.

Then there’s this thing called separating conjunction, P star Q, which means that P and Q both hold, but they hold of disjoint pieces of state, okay, which in this case you can think of as disjoint sets of memory locations.

For example, if we assert X points to V, star, Y points to W, it means that X points to V, and Y points to W, and it means that X and Y do not alias, so X is not equal to Y, they’re different locations in memory. Okay?

To give you a very simple example of how that would be used… Oh, sorry, before I give the simple example, I have to show this rule. This is the basic rule for how you put these things together in concurrent separation logic. So, if I have multiple threads, C1, and C2, that are running in parallel, then the idea is, I can verify them completely independently. So, I just have these two independent premises, I’ve got C1, and C2, and I link together the proofs by starring together the preconditions and the postconditions. Okay?

The intuition here is, if C1 and C2 are operating on disjoint state, as evidenced by the fact that their preconditions are starred over here, so they’re assumed to be operating on disjoint pieces of memory, then they don’t interfere. Okay? So, I don’t have to worry about interference. This is the poster child for concurrent separation logic; this is where it does very well. To give you a simple example: if I have one thread mutating X and one thread mutating Y, and I assume in the premise X points to 0, star, Y points to 0, well, I can verify them completely independently, on this side and on this side, and link together the results, and I know that they’re not going to write over each other. Okay?
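Formally, the disjoint-concurrency rule just described looks like this, in standard CSL notation:

```latex
% Parallel composition: verify each thread against its own (disjoint)
% piece of state, then star the pre- and postconditions together.
\frac{\{P_1\}\; C_1 \;\{Q_1\} \qquad \{P_2\}\; C_2 \;\{Q_2\}}
     {\{P_1 * P_2\}\; C_1 \parallel C_2 \;\{Q_1 * Q_2\}}
```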

Okay. That was one rule; I’m going to show you the other rule. The rule I just showed you was for disjoint state. What about when threads are operating on shared state? For this, you use the invariant rule of CSL… This is a slightly incorrect version of the invariant rule; I’m hiding some details, okay, for purposes of presentation. Basically, the idea is that when you have some piece of shared state, the logic lets you impose an invariant on it, which I’m writing here as invariant R. That just means I’ve established some invariant, meaning an assertion R holds of some piece of shared state, and it’s going to hold permanently. Okay?

The idea is that I’m executing some command C, and C wants to access that state, for example, maybe C is, whatever, some pointer manipulation, that’s talking about a pointer that’s governed by that R, then I’m allowed to gain access to the invariant temporarily for the duration of the execution of C. Okay?

I star the R, which describes that shared state, onto my precondition, and I have to restore it in the postcondition. Okay? It’s basically an assumption and a guarantee. There’s a side condition here that says C is atomic, which means C takes exactly one step of computation. Does anyone have any idea why I need that side condition?

Yeah.

If not [inaudible 00:44:35] something else, [inaudible 00:44:35] and then C doesn’t happen.

Yes, exactly. Basically, if C were not atomic, then there could be interference from some other thread that’s also applying the invariant rule and trying to access this invariant at the same time, because internally in here, if this thing takes multiple steps of computation, the first step could be that it does something to its precondition which breaks the invariant R, right?

For example, R might be, I don’t know, X points to an even number, and then it might increment X, and then it would point to an odd number, and so, that would be violating the assumption that another thread wanted to make if C were not atomic.
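Putting this together, the simplified invariant rule being described looks something like this (in my notation, with the same details hidden as on the slide):

```latex
% Invariant rule (simplified): an atomic command may open the
% invariant R for its single step of computation, provided it
% restores R afterwards.
\frac{\{R * P\}\; C \;\{R * Q\} \qquad C\ \text{atomic}}
     {\mathsf{Inv}(R) \;\vdash\; \{P\}\; C \;\{Q\}}
```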

Okay. How do you use this rule? So, to show you how to use this rule, I’m going to give you a very simple example, we’re going to go to that Mutex example that we saw before. This is a very cut-down version of the actual Mutex, but it gives you the idea.

Mutex has two states… Okay so, Mutex, well, how is it implemented? I’m just modeling it here as a pointer to an integer, which is either 0 for unlocked or 1 for locked. The lock method just does a CAS loop to try to atomically set it from 0 to 1, and terminates once it does so. And unlock just sets it back to 0. Okay?
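A cut-down spinlock along these lines might look as follows in Rust. This is a sketch for illustration, not the real std::sync::Mutex (which uses OS support rather than pure spinning), and the talk assumes sequential consistency, whereas real Rust code has to pick memory orderings:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// 0 = unlocked, 1 = locked.
struct SpinLock {
    state: AtomicUsize,
}

impl SpinLock {
    fn new() -> Self {
        SpinLock { state: AtomicUsize::new(0) }
    }

    // CAS loop: keep trying to atomically flip 0 -> 1.
    fn lock(&self) {
        while self
            .state
            .compare_exchange(0, 1, Ordering::Acquire, Ordering::Relaxed)
            .is_err()
        {}
    }

    // Release the lock: set the flag back to 0.
    fn unlock(&self) {
        self.state.store(0, Ordering::Release);
    }
}
```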

First I’ll show you how we specify this in separation logic, then I’ll show you how we can verify it using the rules we saw, so far.

To specify it… The specification is very simple. Suppose we pick P… P is the parameter of the specification. P is whatever resource, whatever property, you want to use to describe the thing that is protected by the lock. Okay? The resource that is protected by the lock. The lock method basically takes a trivial precondition, true… Anyone can call this lock method, and once it terminates, you gain ownership of the resource that’s governed by the lock, because you have acquired the lock.

Correspondingly, when you want to unlock, you have to give up ownership of the thing guarded by the lock, by passing it in through the precondition, and this means you actually lose P. You can’t use P anymore after you’ve called unlock, so your postcondition is just true. That’s the whole spec, very simple, and this P is really completely arbitrary.
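As Hoare triples, parameterized by the resource P guarded by the lock, the spec just described reads:

```latex
% Acquiring the lock hands you ownership of P;
% releasing it takes P back.
\{\mathsf{True}\}\; \mathtt{lock}(x)   \;\{P\}
\{P\}\;             \mathtt{unlock}(x) \;\{\mathsf{True}\}
```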

Just to give you an example, this is how one would use this spec, right? If I have two threads that are trying to access the resource governed by this lock, they can both be verified under a trivial precondition and a trivial postcondition. Each one just acquires the lock, gets ownership of P, does whatever it wants in the critical section using that resource, as long as it restores the invariant P when it’s done, and then calls unlock, and these can be verified completely independently. Okay.

How do we actually verify something like this? Well, basically, you have to use the rules I showed before. You have this shared state, which is the lock, so you want to impose an invariant on it. What is the invariant associated with this Mutex X? Well, it’s basically going to say that the lock is in one of two states, all right? It’s just a disjunction between two states. Either you’re in the unlocked state, so X is pointing to 0, and the invariant owns the resource, the resource assertion P. Okay? Because it’s unlocked, no thread owns it; it’s the shared state that owns it.

Or you’re in the locked state, where X points to 1, and the invariant doesn’t own P. Some thread has acquired the lock, and P may not even hold at the moment, because whoever has acquired the lock can break the invariant while they hold it.
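So the lock invariant is a two-state disjunction, roughly (my notation):

```latex
% Either unlocked (and the invariant holds the resource P),
% or locked (and some thread has taken P out).
\mathsf{LockInv}(x) \;\triangleq\; (x \mapsto 0 * P) \;\lor\; (x \mapsto 1)
```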

I’m just going to show you, to give you a flavor, how you would verify something like the lock method in concurrent separation logic. What is it? It’s a CAS loop. The only interesting part is when the CAS actually succeeds; when it fails, it just loops around, so that’s not interesting.

I’m not going to show you the locked case; I’m going to show you the unlocked case. Basically, this amounts to showing this property… When we CAS, and the thing actually succeeds… When we’re in the unlocked state, we want to get back P. How do we do this? We apply the invariant rule. Okay? When we apply the invariant rule, we see… If we’re in the unlocked state, we can actually transfer… Remember, the whole idea of the invariant rule is that you transfer the invariant’s resource into your precondition as a starred thing. We get X points to 0, star, P; we take temporary ownership of it.

Now, we own X, so we can do the CAS. Okay? And that succeeds, going from 0 to 1, and now we have to restore the invariant, which we do by transferring X points to 1 back into the invariant, and we keep P in the postcondition. Okay? That’s the whole proof.
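As a proof outline for the successful-CAS case, eliding the same details as the slide:

```latex
% Open the invariant in its unlocked state, do the CAS,
% close the invariant in its locked state, keep P.
\{ \mathsf{True} \}
\{ x \mapsto 0 * P \}            % open LockInv (unlocked disjunct)
\mathtt{CAS}(x, 0, 1)            % succeeds: x now points to 1
\{ x \mapsto 1 * P \}
\{ P \}                          % close LockInv (locked disjunct)
```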

If that was a little fast, that’s fine; I also elided various details that are important. Basically, that’s how the proof goes, and actually, when you do these proofs in Coq, in our framework… Okay, things aren’t flying around, but it’s basically this interactive proof. Okay?

Just to recap what I’ve shown so far, there’s these two rules, one for reasoning about disjoint concurrency, one for reasoning about shared state concurrency, and you can put these together to do a lot of interesting things.

Going beyond CSL… Just check my time, okay… Is it okay if I go 5 minutes, 10… Okay. All right. What’s the problem with CSL? Basically, the problem is, although it’s actually pretty impressive how far you can get with it, it’s not unlimited. There’s a variety of things you can’t do in CSL, and those have been the reason people have been building all these other Superman sequels since then.

One of them is that you want to be able to describe not only invariants on the state, but how the state changes over time. Okay. For example, you might want to say that a shared counter only increases its value over time. That’s not something you can state as an invariant property. Or you might want to say which party in a multi-party, or multi-thread, computation can make changes at a certain point during a protocol. Okay?

Typically, this is done in these various fancier logics using things like permissions, or tokens, or capabilities: logical bits of state that let you describe which threads can take on which roles during a protocol. All right? And that’s, again, something that’s not there in the original concurrent separation logic.

Basically, this is where the name of the game is in terms of improving the expressiveness of these things. The unfortunate thing is, it led to this… Okay, whoops, sorry… This is an outdated picture; there’s actually another layer or two since this was made… I think this picture was from a couple of years ago… Right, the last one here is 2015; there’s a couple more years of further CSL work.

Basically, there’s been this explosion of these fancier, and fancier logics for doing this kind of thing. The rules in these logics get very complicated, right? I mean, I showed you some rules that I thought would look reasonably simple, but as of a couple years ago, these were some of the most advanced logics, and they have rules like this built in as primitive.

Now, these rules are powerful, they let you do all sorts of things, right? They’re advanced reasoning rules, I’m not shaming them… I sort of am shaming them. And I’m responsible for some of them, I was responsible for that one, which is the least shameful of the three.

Just kidding. Well, not entirely.

But they are actually doing very interesting things. The problem, really, is that you don’t want to bake such things in as primitive, because what happens is, each of these logics had a customized soundness proof. People would build some model of a logic to prove it sound, and these models are all unrelated to each other. So, if you did your proof of some data structure in one logic, you wouldn’t know that it carried over to reasoning about data structures in another logic, or that you could compose these things. Also, it’s just ridiculously complicated. Okay? There must be a better way.

The idea, when we started to work on this stuff… This was in 2014, I think, no, even 2013… was that there should be some more fundamental, or more canonical, approach to doing this fancier kind of concurrent reasoning.

This led to the work on Iris. The basic idea of Iris… Obviously, things get more involved, but the basic idea is that all you really need, if you want to do this fancier concurrent reasoning, are two things. One is invariants, more or less essentially as I’ve described them… A little more complicated in reality, but not in a way that I’m going to get into today.

Plus one new additional feature, which is a very powerful feature, which we call user-defined logical state, or user-defined ghost state. Okay. I’m going to talk about this in a minute. Using just these two things, you can derive the rules of some of the most advanced extensions to concurrent separation logic, and you can also just develop new reasoning principles when you need them for the application at hand.

For example, in the RustBelt verification, we ended up wanting some notion in our logic of what we call a borrow proposition, which is basically the idea that you have temporary ownership of some resource during the lifetime of some data. Okay. And once that lifetime is over, the original owner of that resource gets back control of it.

This idea of ownership for a certain duration of time was not something that existed in ordinary concurrent separation logic. This was a new thing, so instead of developing some new bespoke logic, we just derived that notion as a new predicate. You could think of it as a new logical API that we encoded in Iris; we proved its soundness in Iris, and we used it in a fundamental way in our RustBelt development. This lifted the level of abstraction of our semantic model.

That’s just an example of the kind of thing we can do in Iris. I’m going to give you a flavor of how this works, again, by example, returning to this Arc example that we saw before. This will get a little more involved so, if you want to leave now, it’s fine. But it’ll only take about 5 minutes.

The idea is that, in Arc, what are the objects? An Arc is basically a pointer to a count field and a data field. The data is the underlying data, and the count is the reference count. Here are just three methods, okay. The clone method basically does a fetch-and-add, which is an atomic fetch-and-increment, on the count field, and then returns an alias to the original Arc object. Okay?

The print method is just an example to say we can print the data. The drop method… In Rust, destructors are called drop. The drop method will do a fetch-and-decrement on the count field, and if it sees that it’s the last one out the door, it will deallocate the object. Okay.
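A toy version of the implementation just described might look like this. It’s a sketch (the names MyArc and ArcInner are mine), deliberately simpler than the real std::sync::Arc, which handles weak counts and memory ordering much more carefully:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// A reference-counted cell: a count field next to the data.
struct ArcInner<T> {
    count: AtomicUsize,
    data: T,
}

struct MyArc<T> {
    ptr: *const ArcInner<T>,
}

impl<T> MyArc<T> {
    fn new(data: T) -> Self {
        let inner = Box::new(ArcInner { count: AtomicUsize::new(1), data });
        MyArc { ptr: Box::into_raw(inner) }
    }

    // clone: fetch-and-add on the count, then return an alias.
    fn clone(&self) -> Self {
        unsafe { (*self.ptr).count.fetch_add(1, Ordering::Relaxed) };
        MyArc { ptr: self.ptr }
    }

    // drop: fetch-and-decrement; the last one out deallocates.
    fn drop(self) {
        unsafe {
            if (*self.ptr).count.fetch_sub(1, Ordering::Release) == 1 {
                std::sync::atomic::fence(Ordering::Acquire);
                drop(Box::from_raw(self.ptr as *mut ArcInner<T>));
            }
        }
    }
}
```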

This is, by the way, not exactly what Rust does, but it gives you the idea. How would we specify this thing? Basically, the idea of how we specify things in RustBelt is, for every type, we’re going to have some predicate that describes what it means to be a value of that type.

In this case, I’ll write it as Arc of X. You can think of it as: if I have X of type Arc of T in my program, that’s going to translate into some logical precondition in my code that says Arc of X holds. Okay.

Intuitively, what does Arc(x) mean? It means I have the permission to access some shared piece of data through this Arc object x, this handle x. So, here's the spec that I'm going to prove... I'm going to show just the proof of one of these methods, but this is the whole spec.

Each of the methods takes Arc(x) as a precondition, because they're methods on the Arc object. In the case of clone, when you do y = x.clone(), you get back Arc(x) ∗ Arc(y). You've spawned off a new Arc predicate, which you can then pass, using the disjoint concurrency rule, to another thread, and you can reason about those two threads completely independently.

The other methods are straightforward. In print, you keep the Arc predicate; in the drop method, you drop it, you lose it.
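Written out, the spec just described looks roughly like this (my transcription of the slide, in standard Hoare-triple notation):

```latex
% The whole Arc spec, roughly: clone duplicates the Arc predicate,
% print preserves it, and drop consumes it.
\{\mathrm{Arc}(x)\}\;\; y = x.\mathtt{clone}()\;\; \{\mathrm{Arc}(x) \ast \mathrm{Arc}(y)\} \\
\{\mathrm{Arc}(x)\}\;\; x.\mathtt{print}()\;\; \{\mathrm{Arc}(x)\} \\
\{\mathrm{Arc}(x)\}\;\; x.\mathtt{drop}()\;\; \{\mathrm{True}\}
```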

There's something very funny about the spec of this clone method, given the interpretation of the separating conjunction that I've given you so far. Does anyone see anything bizarre about this spec?

Yeah.

You're creating an alias for the value. So it doesn't actually separate, right?

Yeah, exactly. I have a separating conjunction here, but y is actually equal to x. I've literally returned x. This is really saying Arc(x) ∗ Arc(x). Both of them are talking about the same piece of memory. So, what does it mean that I have a separating conjunction of two predicates that are talking about the same piece of memory? Isn't that defying the entire idea of separation logic?

The answer is no. Why is the answer no? Because Arc(x) is not actually describing physical ownership. The idea is that Arc(x) is not a descriptor of physical state; it's a descriptor of logical state... or, in this case, of logical permission to access some physical state. This is a key idea in Iris. And, by the way, a lot of the ideas in Iris are there in previous work; it's just that we have a simpler, more canonical foundation for them.

The idea is that Arc(x) is describing this logical predicate, and the two predicates you have here are logically disjoint. They're logical predicates that you can pass to two different threads, and you know the threads are not going to trample on each other. That's basically the idea.

This is not an obvious idea, if you’re not used to doing this kind of verification, but I’ll show you in a sec how this plays out.

As I said, Arc(x) describes ownership of logical state. It's like, I own this privilege to access the object through x; one privilege. It's a countable privilege, right? You can have multiple privileges, and the number matters, because the number of Arcs that are out there is going to somehow have to match the underlying count field of the object.

That's what's represented by this little logical API that I'm writing down here as some proof rules. I should say up front: you should never trust someone who just throws up proof rules on the screen and doesn't justify them, but that's what I'm going to do right now.

If I don't get back to that point in two minutes, then you can complain about it. Basically, the intuition here is... I explained Arc(x) already. There's also this other predicate we need, called count(x, N), which basically says that x's reference count field currently has N stored in it. Okay.

Another way of saying it is, N is the number of Arc(x)'s that exist in the world at the moment. Given that intuition, these rules make sense. That doesn't mean they're sound, but it means they make sense.

The first one says that if N is the number of Arc(x)'s out there, and, separately, I have an Arc(x) in my hand, then clearly N is greater than 0, because I just showed you I have one.

The other one says, if N is the current number of Arcs out there... This funny symbol that has three lines in it is different from the one that has two lines: unlike ordinary implication, it lets you do an update on the state. Okay?

So, this rule says I can update the count to N + 1 and, at the same time, spawn off one of these Arc privileges. Okay. Or I can go the reverse direction, and consume one, going from N + 1 to N. Okay. I've shown you this, and now I'm going to show you how we can actually prove this thing, very quickly, because I'm running out of time. But basically, this is almost the end of the talk.
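In symbols, the rules just described look roughly like this (my transcription; I write ⇛, the three-line symbol, for the updating view shift just mentioned):

```latex
% If N counts the Arc(x)'s in existence and I hold one, N must be positive.
\mathrm{count}(x, N) \ast \mathrm{Arc}(x) \;\vdash\; N > 0 \\
% Ghost updates: bump the count and mint a new privilege, or the reverse.
\mathrm{count}(x, N) \;\Rrightarrow\; \mathrm{count}(x, N+1) \ast \mathrm{Arc}(x) \\
\mathrm{count}(x, N+1) \ast \mathrm{Arc}(x) \;\Rrightarrow\; \mathrm{count}(x, N)
```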

Here is the invariant we would use, and this is a totally common pattern in many of our proofs: the invariant serves to tie the logical state, which is the thing that's manipulated at the level of the API, to the physical state, which is hidden inside the implementation of the object. Okay.

In this case, the physical state is that points-to assertion on x, the assertion that says x is currently pointing to something whose count field is N. What does this invariant say? It says that, at all times, there exists a number N, N is the current number of Arcs in existence, and either N is 0, or the object is allocated and its count field is N. Okay.
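As a formula, that invariant is roughly the following (writing x ↦ (N, d) for "x points to a block with count field N and data d"):

```latex
% The Arc invariant: the logical count always matches the physical count,
% except that at 0 the block has already been deallocated.
\exists N.\;\; \mathrm{count}(x, N) \;\ast\;
  \bigl(\, N = 0 \;\lor\; \exists d.\; x \mapsto (N, d) \,\bigr)
```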

In the case when N is 0, you don't have the object around, because that means the thing has been deallocated. Okay. This is basically tying together the N in the logical state with the N in the physical state. Okay.

I'm going to go quickly through this, just to give you the flavor, but basically, this is another way of writing that invariant that I just showed you, visually, as a state transition system. Here, the bubble labeled N says that count(x, N) currently holds. In the state where N is 4, we also have x pointing to (4, something); when it's 3, x points to (3, something), et cetera; and in the 0 state, we don't have ownership of x at all. Okay.

Here's the spec again that we wanted to prove. Notice that in all the cases, we have Arc(x) as a precondition. Can anyone say what that tells us, in conjunction with the invariant that we just set up? What do we know about the state of anything?

N is at least 1.

N is at least 1. Exactly. Right. Using that property that I showed before: we have count(x, N) from the invariant, and we have an Arc(x), and that tells us N must be greater than 0.

Consequently, we know that x is actually allocated, because in all the N-greater-than-0 cases, the invariant owns x. Okay. That's important, because in all of these operations, we do something with x. Okay.

Then, when we do the clone... What is the clone? It's just a fetch-and-add; this is a very simple verification. Correspondingly, the proof is very simple: we just apply the invariant rule. We open up the invariant; we know we're not in the 0 state, as I just said. Let's suppose we're in the 2 state... Obviously, in the proof, you do it in a generic way, but just for visualizing this proof, suppose we're in the 2 state.

We use the invariant rule to temporarily gain ownership of the physical state x, and now we can do the fetch-and-add. So, we go from x pointing to (2, something) to x pointing to (3, something). Now we want to restore this state... or rather, we want to restore the invariant. The problem is, our physical state says 3, but our logical state says 2. Okay.

So, we have to update the logical state to match the physical state change we just made. We can do that using that second axiom I showed you before, the one that lets you update the logical state. If we do that, it spawns off a new Arc(x), which we can put in our postcondition. Now our logical state matches our physical state, so we can restore the invariant, the proof is done, and we have that extra Arc(x) that we spawned in our postcondition, which is exactly what we need. Okay.
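Laid out as a proof outline in the notation above (a sketch; the real proof is generic in N rather than fixed at 2, and elides the N = 0 disjunct, which the first rule rules out):

```latex
% Proof outline for y = x.clone(), visualized from the state N = 2.
\{\mathrm{Arc}(x)\} \\
\quad \text{open invariant:}\; \{\mathrm{Arc}(x) \ast \mathrm{count}(x,2) \ast x \mapsto (2, d)\} \\
\quad \text{fetch-and-add:}\; \{\mathrm{Arc}(x) \ast \mathrm{count}(x,2) \ast x \mapsto (3, d)\} \\
\quad \text{ghost update:}\; \{\mathrm{Arc}(x) \ast \mathrm{Arc}(x) \ast \mathrm{count}(x,3) \ast x \mapsto (3, d)\} \\
\quad \text{close invariant:}\; \{\mathrm{Arc}(x) \ast \mathrm{Arc}(x)\} \\
\{\mathrm{Arc}(x) \ast \mathrm{Arc}(y)\} \quad \text{where } y = x
```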

That was probably too fast for you to understand, but hopefully you get the idea: we did the same kind of proof we did in ordinary CSL, where we open the invariant, manipulate some state, and restore the invariant, except here we also manipulated this logical state alongside it, using some reasonable axioms about logical state that I just put up there. Okay.

This is the point where I ask: how do we know that those axioms I just put up there are sound? This is basically the whole point of Iris: to give you a framework for verifying such axioms, as appropriate for different applications, when you want to.

Right here, I just threw up these axioms... There has to be some way of actually justifying them, and I'm not going to go into detail about this at all; I just want to say it in one slide. The idea, basically, is that you can define whatever notion of logical state you want for your application. There's a very generic algebraic structure that lets you do this, called a partial commutative monoid. If that doesn't mean anything to you, that's fine. It's basically the same algebraic structure that's used to model ordinary separation logic; it's just that in Iris, you can pick any partial commutative monoid you want to represent your logical state.

Once you have that, there are a couple of primitive proof rules. They look like this; I'm not going to go into details, but again, they're much simpler than those fancy primitive rules that I showed on the earlier slide. Using those, you can derive the soundness of the kind of axioms that I just showed a few slides ago, within the logic. Okay.
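For the record, the primitive rules in question are roughly the standard Iris ghost-state rules (my transcription, not the talk's slide: own(a) asserts ownership of a piece a of the chosen monoid, ✓ is validity, · is the monoid operation, and ⤳ is frame-preserving update):

```latex
% Ownership of monoid elements composes via the monoid operation.
\mathrm{own}(a) \ast \mathrm{own}(b) \;\dashv\vdash\; \mathrm{own}(a \cdot b) \\
% Owned elements are always valid.
\mathrm{own}(a) \;\vdash\; \checkmark(a) \\
% A frame-preserving update on the monoid yields a ghost update in the logic.
a \rightsquigarrow b \;\implies\; \bigl(\mathrm{own}(a) \Rrightarrow \mathrm{own}(b)\bigr)
```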

Wrapping up... And of course, the point is, you can derive these kinds of things, in the same kind of pattern, using just these very basic proof rules about arbitrary partial commutative monoids.

Hopefully, I've given you some sense of what Rust is about, what the challenge is in verifying the safety of Rust, and how Iris, which is this flexible, modern version of separation logic, can give you a very nice way of defining and encoding a semantic model for Rust types that scales up to real Rust code. Thanks.
