Notes on substructural types

Introduction

The type systems we have studied so far are good for catching some kinds of program errors---such as using values inconsistently with their types, or (in the case of effect typing) failing to account for possible exceptions. However, they assume that the types of objects are unchanging over the course of the program. This means that these type systems are unable to catch some classes of program errors, such as using a file handle after closing it, writing to a memory reference after freeing it, or failing to follow communication protocols.

These notes describe substructural type systems, one approach to capturing these kinds of errors. The key idea behind substructural type systems is that the errors above can all be detected in the usage patterns of the program variables. By restricting the use of variables, we obtain a basis for detecting such errors.

Logical and structural rules in proofs

We draw our inspiration from similar ideas in the study of formal proofs. There, the manipulation of assumptions (equivalent to variables) is regulated by what are called the structural rules; the remaining rules are called logical rules. The typing rules that we are familiar with correspond to the logical rules; for example

$$ \frac{~}{A \vdash A} \quad \frac{C \vdash A \quad D \vdash B} {C,D \vdash A \land B} \quad \frac{C, A \vdash B} {C \vdash A \implies B} \quad \frac{C \vdash A \implies B \quad D \vdash A} {C,D \vdash B} $$

correspond to the familiar rules for variables, pairs, function abstraction, and function application. The structural rules may be less familiar. There are three of them:

$$ \frac{C \vdash A} {C, B \vdash A} \text{(Weakening)} \quad \frac{C, B, B \vdash A} {C, B \vdash A} \text{(Contraction)} \quad \frac{C, B_1, B_2, D \vdash A} {C, B_2, B_1, D \vdash A} \text{(Exchange)} $$

The (Weakening) rule allows conclusions to be drawn in the presence of "irrelevant" hypotheses---if you can prove $A$ from $C$ alone, then you can also prove it from $C,B$, and the resulting proof makes no essential use of the $B$ hypothesis. The (Contraction) rule allows the same assumption ($B$) to be used multiple times. Finally, the (Exchange) rule allows the hypotheses to be reordered.

It may not be immediately apparent that the structural rules have any analog in type systems. However, consider the analogy between the ($\to$E) rule with which we are familiar and the logical modus ponens rule given above.

$$ \frac{C \vdash A \implies B \quad D \vdash A} {C,D \vdash B} \quad \frac{\Gamma \vdash e_1 : t_1 \to t_2 \quad \Gamma \vdash e_2 : t_1} {\Gamma \vdash e_1\,e_2 : t_2} $$

In the rule on the left, each hypothesis has a distinct role: those in $C$ go towards proving $A \implies B$, while those in $D$ go towards proving $A$. In contrast, in the rule on the right, we allow all the hypotheses in $\Gamma$ to be used in both the left-hand and right-hand subderivations. We can relate these rules, however, by viewing the right-hand rule as combining (a version of) the left-hand rule with a series of applications of the (Contraction) rule, one to duplicate each assumption found in $\Gamma$. Similarly, consider the analogy between the (var) rule and the axiom given above:

$$ \frac{~}{A \vdash A} \quad \frac{(x : t) \in \Gamma} {\Gamma \vdash x : t} $$

The left-hand rule has a single antecedent, whereas the right-hand rule allows arbitrary antecedents so long as they contain the succedent. We can relate these by considering the right-hand rule to be a combination of the left-hand rule with a number of applications of the (Weakening) rule.
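For example, the sequent $A, B \vdash A$, an instance of the right-hand variable rule, is obtained from the axiom by a single use of (Weakening); and the single-context form of application is recovered from the two-context form by a single use of (Contraction):

$$ \frac{\dfrac{~}{A \vdash A}}{A, B \vdash A}\,\text{(Weakening)} \quad \frac{\dfrac{C \vdash A \implies B \quad C \vdash A}{C, C \vdash B}}{C \vdash B}\,\text{(Contraction)} $$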

Limiting the structural rules

Having identified the role the structural rules play in proofs, we can consider what would result from restricting their use. Logical systems that do this are generally called substructural logics, as they have fewer structural rules than intuitionistic logic.

Limiting weakening results in systems in which every hypothesis must be used somewhere in the proof. This approach has been taken in philosophy, where it leads to relevant logics, and in computer science, where it appeared in Church's original $\lambda$ calculus, frequently called the $\lambda I$ calculus. Practically speaking, this approach rules out memory or resource leaks: each resource allocated by the program, whether dynamic memory or some operating system resource, must be disposed of somewhere in the program.

Limiting contraction results in systems in which every hypothesis can be used in at most one place in the proof. This approach also has a long history in computer science, having first appeared in Reynolds' classic paper "Syntactic Control of Interference". Practically speaking, this approach rules out conflicts between different parts of a program; for example, one part of the program cannot close a file while another part of the program is still expecting to read from it.

Limiting both weakening and contraction leads to a system in which every hypothesis is used exactly once; this combines the benefits of the previous approaches, ruling out both resource leaks and interference. The best known instances of this approach are Girard's "Linear logic" and O'Hearn and Pym's "Logic of bunched implications".
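One concrete realization of these restrictions is GHC's LinearTypes extension, which we use for the sketches below; the module and function names are invented for illustration and are not part of any library. A function whose argument has multiplicity 1 must use that argument exactly once: discarding it (a leak, ruled out by limiting weakening) and duplicating it (potential interference, ruled out by limiting contraction) are both rejected by the type checker.

    {-# LANGUAGE LinearTypes #-}
    module UseExactlyOnce where

    -- Accepted: each linear argument is used exactly once.
    apply :: (a %1 -> b) %1 -> a %1 -> b
    apply f x = f x

    -- Rejected (weakening): the linear argument is never used, so the
    -- resource it stands for would be leaked.
    --   discard :: a %1 -> ()
    --   discard x = ()

    -- Rejected (contraction): the linear argument is used twice, so two
    -- parts of the program could interfere over the same resource.
    --   duplicate :: a %1 -> (a, a)
    --   duplicate x = (x, x)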

Limiting exchange is relevant in systems where ordering is important---these include uses of logic to capture natural language or quantum computation. We will not discuss these settings further.

Discovering new connectives

When we consider our familiar typing rules in a linear context, we discover fine structure that was not captured by our previous type systems. Consider two variations on the elimination of products, shown below.

$$ \frac{\Gamma_1 \vdash e_1 : t_1 \times t_2 \quad \Gamma_2, x_1 : t_1, x_2 : t_2 \vdash e_2 : t} {\Gamma_1, \Gamma_2 \vdash \Let{(x_1, x_2)}{e_1}{e_2} : t} (\times E_1) \quad \frac{\Gamma \vdash e: t_1 \times t_2} {\Gamma \vdash \Fst e : t_1} (\times E_2) \quad \frac{\Gamma \vdash e: t_1 \times t_2} {\Gamma \vdash \Snd e : t_2} (\times E_3) $$

So far, we have only used the first elimination form. We might prefer to use the latter two---for example, they parallel the treatment of sums. In our type systems so far, the two approaches are entirely equivalent---that is, they are interdefinable. From a substructural perspective, however, they are very different ideas. In the first case, by deconstructing a product, we get back both of the values used to construct it. In the second case, when we deconstruct a pair, we get only one of the values used to construct it. Substructural type systems distinguish these two cases. The first, $t_1 \otimes t_2$, retains our familiar rule for pair elimination; its introduction form, however, splits the environment to construct the two components of the pair. Practically speaking, you can think of the introduction rule as computing $e_1$ and $e_2$ in parallel, where the type system guarantees no interference between the two computations.

$$ \frac{\Gamma_1 \vdash e_1 : t_1 \quad \Gamma_2 \vdash e_2 : t_2} {\Gamma_1, \Gamma_2 \vdash (e_1, e_2) : t_1 \otimes t_2} \quad \frac{\Gamma_1 \vdash e_1 : t_1 \otimes t_2 \quad \Gamma_2, x_1 : t_1, x_2 : t_2 \vdash e_2 : t} {\Gamma_1, \Gamma_2 \vdash \Let{(x_1, x_2)}{e_1}{e_2} : t} $$

The other form of product, $t_1 \With t_2$, constructs a pair of values from a single set of resources. This form gives us the projection elimination forms. You can think of it as offering a delayed choice between two possible ways to use a single set of resources.

$$ \frac{\Gamma \vdash e_1 : t_1 \quad \Gamma \vdash e_2 : t_2} {\Gamma \vdash \langle e_1, e_2 \rangle : t_1 \With t_2} \quad \frac{\Gamma \vdash e: t_1 \With t_2} {\Gamma \vdash \Fst e : t_1} \quad \frac{\Gamma \vdash e: t_1 \With t_2} {\Gamma \vdash \Snd e : t_2} $$
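As a rough illustration in GHC's LinearTypes (a sketch only; the module and function names are invented for the example), the built-in pair type behaves like $t_1 \otimes t_2$: taking it apart yields both components, each of which must then be used exactly once, and a projection that silently discards one component is rejected.

    {-# LANGUAGE LinearTypes #-}
    module TensorPairs where

    -- Accepted: eliminating the pair yields both components, and each is
    -- used exactly once.
    swapPair :: (a, b) %1 -> (b, a)
    swapPair (x, y) = (y, x)

    -- Rejected: a projection would silently discard the other component.
    --   firstOnly :: (a, b) %1 -> a
    --   firstOnly (x, y) = x

A counterpart of $t_1 \With t_2$ is not built in; it would have to be encoded so that a value packages both alternatives over the same captured resources and a consumer chooses exactly one, so we stop at the typing rules here.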

Embedding intuitionistic logic

While substructural logics, and the corresponding type systems, provide a valuable refinement of traditional logics and type systems, they may also impose restrictions that are not always applicable. For example, while we want to be sure that file handles are not leaked, and that we do not read from files after they are closed, there are no corresponding worries about integers. Having to write our integer programs with substructural restrictions would rule out many existing programs.

Girard's solution to this problem was to introduce a modality, or one-place type constructor, $!t$ (pronounced "of course $t$"). For example, traditional programs might manipulate values of type $!\Int$, denoting that they use integers without observing the substructural constraints. He then allowed the weakening and contraction rules, but only in the case that the type being weakened or contracted was of the form $!t$.

$$ \frac{\Gamma \vdash e : t_2} {\Gamma, x : !t_1 \vdash e : t_2} (!W) \quad \frac{\Gamma, x : !t_1, x : !t_1 \vdash e : t_2} {\Gamma, x : !t_1 \vdash e : t_2} (!C) $$

This leaves the question of how values of type $!t$ are introduced and eliminated. The elimination rule is straightforward: intuitively, a value of type $!t$ can be used any number of times, and one is a number, so we can transform $!t$ values into $t$ values. To understand the introduction rule, think about the meaning of the derivation $\Gamma \vdash e : t$ in a linear type system. Intuitively, it means that each of the resources in $\Gamma$ is used once in constructing a value of type $t$. But if $t$ is of the form $!t'$, for some $t'$, then the resulting value may be used many times, and with it the resources in $\Gamma$. So we must ensure that each of the resources in $\Gamma$ is itself of the form $!u$ for some type $u$.

$$ \frac{!\Gamma \vdash e : t} {!\Gamma \vdash !e : !t} (!I) \quad \frac{\Gamma_1 \vdash e_1 : !t_1 \quad \Gamma_2, x : t_1 \vdash e_2 : t_2} {\Gamma_1, \Gamma_2 \vdash \Let{!x}{e_1}{e_2} : t_2} (!E) $$

The notation $!\Gamma$ means that, for each assumption $x : t \in \Gamma$, $t$ is of the form $!u$ for some type $u$.
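From a programming perspective, the modality can be sketched as a wrapper type whose field carries no usage restrictions; the definition below is an illustration in GHC's LinearTypes (the same idea appears as the `Ur` type in the linear-base library; the names `Bang` and `twice` are invented here). Unpacking a linearly held wrapper yields contents that may be used any number of times, while building one requires contents we were already free to use unrestrictedly.

    {-# LANGUAGE LinearTypes #-}
    {-# LANGUAGE GADTs #-}
    module OfCourse where

    -- In GADT syntax, a field written with the plain arrow '->' is
    -- unrestricted, so a Bang can only be built from a value we may use
    -- freely, and unpacking one returns such a value.
    data Bang a where
      Bang :: a -> Bang a

    -- (!E) followed by contraction on the contents: the wrapper is
    -- consumed exactly once, but its contents are used twice.
    twice :: Bang a %1 -> (a, a)
    twice (Bang x) = (x, x)

    -- Without the wrapper, the same definition is rejected:
    --   twice' :: a %1 -> (a, a)
    --   twice' x = (x, x)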

Typing rules in full

$$\frac{~}{x : t \vdash x : t}$$ (var)
$$\frac{\Gamma, x : t \vdash e : u} {\Gamma \vdash \backslash x : t \to e : t \multimap u}$$ ($\multimap$I)
$$\frac{\Gamma_1 \vdash e_1 : t \multimap u \quad \Gamma_2 \vdash e_2 : t} {\Gamma_1, \Gamma_2 \vdash e_1 \, e_2 : u}$$ ($\multimap$E)
$$\frac{\Gamma_1 \vdash e_1 : t_1 \quad \Gamma_2 \vdash e_2 : t_2} {\Gamma_1, \Gamma_2 \vdash (e_1, e_2) : t_1 \otimes t_2}$$ ($\otimes$I)
$$\frac{\Gamma_1 \vdash e_1 : t_1 \otimes t_2 \quad \Gamma_2, x_1 : t_1, x_2 : t_2 \vdash e_2 : t} {\Gamma_1, \Gamma_2 \vdash \Let{(x_1,x_2)}{e_1}{e_2} : t}$$ ($\otimes$E)
$$\frac{\Gamma \vdash e_1 : t_1 \quad \Gamma \vdash e_2 : t_2} {\Gamma \vdash \langle e_1, e_2 \rangle : t_1 \With t_2}$$ (&I)
$$\frac{\Gamma \vdash e : t_1 \With t_2} {\Gamma \vdash \Fst e : t_1}$$ (&E1)
$$\frac{\Gamma \vdash e : t_1 \With t_2} {\Gamma \vdash \Snd e : t_2}$$ (&E2)
$$\frac{\Gamma \vdash e : t_1} {\Gamma \vdash \Inl e : t_1 \oplus t_2}$$ ($\oplus$I1)
$$\frac{\Gamma \vdash e : t_2} {\Gamma \vdash \Inr e : t_1 \oplus t_2}$$ ($\oplus$I2)
$$\frac{\Gamma_1 \vdash e : t_1 \oplus t_2 \quad \Gamma_2, x_1 : t_1 \vdash e_1 : t \quad \Gamma_2, x_2 : t_2 \vdash e_2 : t} {\Gamma_1,\Gamma_2 \vdash \CCase e {x_1} {e_1} {x_2} {e_2} : t}$$ ($\oplus$E)
$$\frac{!\Gamma \vdash e : t} {!\Gamma \vdash !e : !t}$$ (!I)
The notation $!\Gamma$ means that, for every $x : t \in \Gamma$, $t$ is of the form $!u$ for some type $u$.
$$\frac{\Gamma \vdash e : t_2} {\Gamma, x : !t_1 \vdash e : t_2}$$ (!W)
$$\frac{\Gamma, x : !t_1, x : !t_1 \vdash e : t_2} {\Gamma, x : !t_1 \vdash e : t_2}$$ (!C)
$$\frac{\Gamma_1 \vdash e_1 : !t_1 \quad \Gamma_2, x : t_1 \vdash e_2 : t_2} {\Gamma_1, \Gamma_2 \vdash \Let{!x}{e_1}{e_2} : t_2}$$ (!E)
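One detail of the ($\oplus$E) rule is worth calling out: the two branches share the context $\Gamma_2$ rather than splitting it, because only one branch runs for any given scrutinee. The sketch below (GHC's LinearTypes again; the names are illustrative) shows the same linear variable appearing in both branches of a pattern match.

    {-# LANGUAGE LinearTypes #-}
    module Branches where

    -- The linear continuation 'k' appears in both equations; this is
    -- accepted because only one equation matches any given argument, so
    -- 'k' is still used exactly once at run time.
    pick :: Either a a %1 -> (a %1 -> r) %1 -> r
    pick (Left x)  k = k x
    pick (Right x) k = k x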