Thursday, July 13, 2017

POPL submission, pulling together these blog posts on semantics!

Last week I submitted a paper to POPL 2018 about the new kind of denotational semantics that I've been writing about in this blog, which I am now calling declarative semantics. I think this approach to semantics has the potential to replace operational semantics for the purposes of language specification. The declarative semantics has the advantage of being compositional and extensional while, like operational semantics, using only elementary mathematics. Thus, the declarative semantics should be better than operational semantics for reasoning about programs and for reasoning about the language as a whole (i.e., its meta-theory). The paper pulls together many of the blog posts, updates them, and adds a semantics for mutable references. The paper is available now on arXiv and the Isabelle mechanization is available here. I hope you enjoy it and I welcome your feedback!

Wednesday, June 07, 2017

Revisiting "well-typed programs cannot go wrong"

Robin Milner proved that well-typed programs cannot go wrong in his 1978 paper A Theory of Type Polymorphism in Programming (Milner 1978). That is, he defined a type system and denotational semantics for the Exp language (a subset of ML) and then proved that the denotation of a well-typed program in Exp is not the “wrong” value. The “wrong” denotation signifies that a runtime type error occurred, so Milner’s theorem proves that the type system is strong enough to prevent all the runtime type errors that could occur in an Exp program. The denotational semantics used by Milner (1978) was based on the standard domain theory for an explicitly typed language with higher-order functions.

I have been exploring, over the last month, whether I can prove a similar theorem but using my new denotational semantics, and mechanize the proof in the Isabelle proof assistant. At first I tried to stay as close to Milner’s proof as possible, but in the process I learned that Milner’s proof is rather syntactic and largely consists of proving lemmas about how substitution interacts with the type system, which does not shed much light on the semantics of polymorphism.

Last week I decided to take a step back and try a more semantic approach and switch to a cleaner but more expressive setting, one with first-class polymorphism. So I wrote down a denotational semantics for System F (Reynolds 1974) extended with support for general recursion. The proof that well-typed programs cannot go wrong came together rather quickly. Today I finished the mechanization in Isabelle and it came in at just 539 lines for all the definitions, lemmas, and main proof. I’m excited to share the details of how it went! Spoiler: the heart of the proof turned out to be a lemma I call Compositionality because it looks a lot like the similarly-named lemma that shows up in proofs of parametricity.

Syntax

The types in the language include natural numbers, function types, universal types, and type variables. Regarding the variables, after some experimentation with names and locally nameless, I settled on good old DeBruijn indices to represent both free and bound type variables. \[\begin{array}{rcl} i,j & \in & \mathbb{N} \\ \sigma,\tau & ::= & \mathtt{nat} \mid \tau \to \tau \mid \forall\,\tau \mid i \end{array}\] So the type of the polymorphic identity function, normally written \(\forall \alpha.\, \alpha \to \alpha\), is instead written \(\forall \left(0 \to 0\right)\).

The syntax of expressions is as follows. I chose to use DeBruijn indices for term variables as well and left off all type annotations, but I don’t think that matters for our purposes here. \[\begin{array}{rcl} n & \in & \mathbb{N} \\ e & ::= & n \mid i \mid \lambda e \mid e\; e \mid \Lambda e \mid e [\,] \mid \mathtt{fix}\, e \end{array}\]
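For readers who like to see the grammars as code, here is a minimal Haskell sketch of the two syntaxes. The data types and constructor names are my own invention for illustration and are not taken from the Isabelle mechanization.

```haskell
-- Types: nat, function types, universal types, and type variables
-- represented as DeBruijn indices.
data Ty
  = TNat
  | TArrow Ty Ty
  | TAll Ty        -- the forall binds index 0 in its body
  | TVar Int
  deriving (Eq, Show)

-- Expressions: numerals, term variables (DeBruijn), lambda, application,
-- type abstraction, type application, and fix (annotations are left off).
data Exp
  = ENat Int
  | EVar Int
  | ELam Exp
  | EApp Exp Exp
  | ETyLam Exp
  | ETyApp Exp
  | EFix Exp
  deriving (Eq, Show)

-- Example: the polymorphic identity of type forall (0 -> 0).
polyId :: Exp
polyId = ETyLam (ELam (EVar 0))
```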

Denotational Semantics

The values in this language, described by the below grammar, include natural numbers, functions represented by finite lookup tables, type abstractions, and \(\mathsf{wrong}\) to represent a runtime type error. \[\begin{array}{rcl} f & ::= & \{ (v_1,v'_1), \ldots, (v_n,v'_n) \} \\ o & ::= & \mathsf{none} \mid \mathsf{some}(v) \\ v & ::= & n \mid \mathsf{fun}(f) \mid \mathsf{abs}(o) \mid \mathsf{wrong} \end{array}\] A type abstraction \(\mathsf{abs}(o)\) consists of an optional value, and not simply a value, because the body of a type abstraction might be a non-terminating computation.

We define the following information ordering on values so that we can reason about one lookup table being more or less defined than another lookup table. We define \(v \sqsubseteq v'\) inductively as follows.

\[\begin{gathered} n \sqsubseteq n \quad \frac{f_1 \subseteq f_2} {\mathsf{fun}(f_1) \sqsubseteq \mathsf{fun}(f_2)} \quad \mathsf{wrong} \sqsubseteq\mathsf{wrong} \\ \mathsf{abs}(\mathsf{none}) \sqsubseteq\mathsf{abs}(\mathsf{none}) \quad \frac{v \sqsubseteq v'} {\mathsf{abs}(\mathsf{some}(v)) \sqsubseteq\mathsf{abs}(\mathsf{some}(v'))}\end{gathered}\]
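Continuing the hypothetical Haskell sketch from above, the values and the information ordering are directly executable; on function tables the ordering is just inclusion of entries, compared by equality.

```haskell
-- Values: naturals, finite function tables, type abstractions
-- (carrying an optional value), and wrong.
data Val
  = VNat Int
  | VFun [(Val, Val)]          -- a finite lookup table
  | VAbs (Maybe Val)
  | VWrong
  deriving (Eq, Show)

-- The information ordering on values.
vle :: Val -> Val -> Bool
vle (VNat n)        (VNat m)        = n == m
vle (VFun f1)       (VFun f2)       = all (`elem` f2) f1   -- f1 is a subset of f2
vle VWrong          VWrong          = True
vle (VAbs Nothing)  (VAbs Nothing)  = True
vle (VAbs (Just v)) (VAbs (Just w)) = vle v w
vle _               _               = False
```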

The denotational semantics maps an expression to a set of values. Why a set and not just a single value? A single finite lookup table is not enough to capture the meaning of a lambda, but an infinite set of finite tables is. However, dealing with sets is somewhat inconvenient, so we mitigate this issue by working in a set monad. Also, to deal with \(\mathsf{wrong}\) we need an error monad, so we use a combined set-and-error monad.

\[\begin{aligned} X := E_1 ; E_2 &\equiv \{ v \mid \exists v'. \, v' \in E_1, v' \neq \mathsf{wrong}, v \in E_2[v'/X] \} \\ & \quad \cup \{ v \mid v = \mathsf{wrong}, \mathsf{wrong} \in E_1 \} \\ \mathsf{return}(E) & \equiv \{ v \mid v \sqsubseteq E \} \\ X \leftarrow E_1; E_2 & \equiv \{ v \mid \exists v'.\, v' \in E_1, v \in E_2[v'/X]\}\end{aligned}\]

The use of \(\sqsubseteq\) in \(\mathsf{return}\) is to help ensure that the meaning of an expression is downward-closed with respect to \(\sqsubseteq\). (The need for which is explained in prior blog posts.)
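The three monad operations can also be sketched in Haskell if we approximate the (possibly infinite) sets of values by finite lists. This is only an illustration of the operations above, not an executable version of the semantics; it reuses the Val type and vle ordering from the previous sketch, and the names bindErr, bindSet, retD, and downClosure are mine.

```haskell
import Data.List (subsequences)

type D = [Val]   -- a finite approximation of a set of values

-- X := E1; E2 : propagate wrong, otherwise bind a non-wrong value.
bindErr :: D -> (Val -> D) -> D
bindErr e1 e2 =
  [ v | v' <- e1, v' /= VWrong, v <- e2 v' ]
    ++ [ VWrong | VWrong `elem` e1 ]

-- return(E) : the downward closure of a single value.
retD :: Val -> D
retD = downClosure

-- X <- E1; E2 : plain set-monad bind, no special treatment of wrong.
bindSet :: D -> (Val -> D) -> D
bindSet e1 e2 = [ v | v' <- e1, v <- e2 v' ]

-- All values below a given value; finite because tables are finite.
downClosure :: Val -> [Val]
downClosure (VNat n)        = [VNat n]
downClosure (VFun f)        = [ VFun f' | f' <- subsequences f ]
downClosure VWrong          = [VWrong]
downClosure (VAbs Nothing)  = [VAbs Nothing]
downClosure (VAbs (Just v)) = [ VAbs (Just v') | v' <- downClosure v ]
```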

Our semantics will make use of a runtime environment \(\rho\) that includes two parts, \(\rho_1\) and \(\rho_2\). The first part gives meaning to the term variables, for which we use a list of values (indexed by their DeBruijn number). The second part, for the type variables, is a list containing sets of values, as the meaning of a type will be a set of values. We define the following notation for dealing with runtime environments.

\[\begin{aligned} v{::}\rho \equiv (v{::}\rho_1, \rho_2) \\ V{::}\rho \equiv (\rho_1, V{::}\rho_2)\end{aligned}\]

We write \(\rho[i]\) to mean either \(\rho_1[i]\) or \(\rho_2[i]\), which can be disambiguated based on the context of use.

To help define the meaning of \(\mathtt{fix}\,e\), we inductively define a predicate named \(\mathsf{iterate}\). Its first parameter is the meaning \(L\) of the expression \(e\), which is a mapping from an environment to a set of values. The second parameter is a runtime environment \(\rho\) and the third parameter is a value that is the result of iteration.

\[\begin{gathered} \mathsf{iterate}(L, \rho, \mathsf{fun}(\emptyset)) \quad \frac{\mathsf{iterate}(L, \rho, v) \quad v' \in L(v{::}\rho)} {\mathsf{iterate}(L, \rho, v')}\end{gathered}\]

To help define the meaning of function application, we define the following \(\mathsf{apply}\) function. \[\mathsf{apply}(V_1,V_2) \equiv \begin{array}{l} x_1 := V_1; \\ x_2 := V_2; \\ \mathsf{case}\,x_1\,\textsf{of}\\ \;\; \mathsf{fun}(f) \Rightarrow (x'_2,x'_3) \leftarrow f; \mathsf{if}\, x'_2 \sqsubseteq x_2 \, \mathsf{then}\, x'_3 \,\mathsf{else}\, \emptyset \\ \mid \_ \Rightarrow \mathsf{return}(\mathsf{wrong}) \end{array}\]

The denotational semantics is given by the following function \(E\) that maps an expression and environment to a set of values.

\[\begin{aligned} E[ n ]\rho &= \mathsf{return}(n) \\[1ex] E[ i ]\rho &= \mathsf{return}(\rho[i]) \\[1ex] E[ \lambda e ]\rho &= \{ v \mid \exists f.\, v = \mathsf{fun}(f), \forall v_1 v'_2.\, (v_1,v'_2) \in f \Rightarrow \\ & \qquad\qquad \exists v_2.\, v_2 \in E[ e ] (v_1{::}\rho), v'_2 \sqsubseteq v_2\} \\[1ex] E[ e_1\; e_2 ] \rho &= \mathsf{apply}(E[ e_1 ]\rho, E[ e_2 ]\rho) \\[1ex] E[ \mathtt{fix}\,e ] \rho &= \{ v \mid \mathsf{iterate}(E[ e ], \rho, v) \} \\[1ex] E[ \Lambda e ] \rho &= \{ v \mid \exists v'.\, v = \mathsf{abs}(\mathsf{some}(v')), \forall V. v' \in E[ e ] (V{::}\rho) \} \\ & \quad\; \cup \{ v \mid v = \mathsf{abs}(\mathsf{none}), \forall V. E[ e ](V{::}\rho) = \emptyset \} \\[1ex] E[ e [\,] ] \rho &= \begin{array}{l} x := E [ e ] \rho;\\ \mathsf{case}\,x\,\mathsf{of} \\ \;\; \mathsf{abs}(\mathsf{none}) \Rightarrow \emptyset \\ \mid \mathsf{abs}(\mathsf{some}(v')) \Rightarrow \mathsf{return}(v') \\ \mid \_ \Rightarrow \mathsf{return}(\mathsf{wrong}) \end{array}\end{aligned}\]

We give meaning to types with the function \(T\), which maps a type and an environment to a set of values. For this purpose, we only need the second part of the runtime environment, which gives meaning to type variables. Instead of writing \(\rho_2\) everywhere, we’ll use the letter \(\eta\). It is important to ensure that \(T\) is downward closed, which requires some care either in the definition of \(T[ \forall \tau ]\eta\) or in the definition of \(T[ i ]\eta\). We have chosen to do this work in the definition of \(T[ i ]\eta\), and let the definition of \(T[ \forall \tau ]\eta\) quantify over any set of values \(V\) to give meaning to its bound type variable.

\[\begin{aligned} T[ \mathtt{nat} ] \eta &= \mathbb{N} \\ T[ i ] \eta &= \begin{cases} \{ v \mid \exists v'.\, v' \in \eta[i], v \sqsubseteq v',v \neq \mathsf{wrong} \} &\text{if } i < |\eta| \\ \emptyset & \text{otherwise} \end{cases} \\ T[ \sigma\to\tau ] \eta &= \{ v\mid \exists f. \,v=\mathsf{fun}(f), \forall v_1 v'_2.\, (v_1,v'_2) \in f, v_1 \in T[\sigma]\eta \\ & \hspace{1.5in} \Rightarrow \exists v_2.\, v_2 \in T[\tau]\eta, v'_2 \sqsubseteq v_2 \} \\ T[ \forall\tau ] \eta &= \{ v \mid \exists v'.\, v = \mathsf{abs}(\mathsf{some}(v')), \forall V.\, v' \in T[\tau ] (V{::}\eta) \} \cup \{ \mathsf{abs}(\mathsf{none}) \} \end{aligned}\]

Type System

Regarding the type system, it is standard except perhaps how we deal with the DeBruijn representation of type variables. We begin with the definition of well-formed types. A type is well formed if all the type variables in it are properly scoped, which is captured by their indices being below a given threshold (the number of enclosing type variable binders, that is, \(\Lambda\)’s and \(\forall\)’s). More formally, we write \(j \vdash \tau\) to say that type \(\tau\) is well-formed under threshold \(j\), and give the following inductive definition.

\[\begin{gathered} j \vdash \mathtt{nat} \quad \frac{j \vdash \sigma \quad j \vdash \tau}{j \vdash \sigma \to \tau} \quad \frac{j+1 \vdash \tau }{j \vdash \forall \tau} \quad \frac{i < j}{j \vdash i}\end{gathered}\]
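Since the judgment is syntax-directed, it is easy to transcribe as a Haskell function over the Ty type from the earlier sketch (the name wfTy is mine).

```haskell
-- j |- tau : every type variable in tau is below the threshold j.
wfTy :: Int -> Ty -> Bool
wfTy _ TNat         = True
wfTy j (TArrow s t) = wfTy j s && wfTy j t
wfTy j (TAll t)     = wfTy (j + 1) t
wfTy j (TVar i)     = i < j
```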

Our representation of the type environment is somewhat unusual. Because term variables are just DeBruijn indices, we can use a list of types (instead of a mapping from names to types). However, to keep track of the type-variable scoping, we also include with each type the threshold from its point of definition. Also, we need to keep track of the current threshold, so when we write \(\Gamma\), we mean a pair where \(\Gamma_1\) is a list and \(\Gamma_2\) is a number. The list consists of pairs of types and numbers, so for example, \(\Gamma_1[i]_1\) is a type and \(\Gamma_1[i]_2\) is a number whenever \(i\) is less than the length of \(\Gamma_1\). We use the following notation for extending the type environment:

\[\begin{aligned} \tau :: \Gamma &\equiv ((\tau,\Gamma_2){::}\Gamma_1, \Gamma_2) \\ * :: \Gamma & \equiv (\Gamma_1, \Gamma_2 + 1)\end{aligned}\]

We write \(\vdash \rho : \Gamma\) to say that environment \(\rho\) is well-typed according to \(\Gamma\) and define it inductively as follows.

\[\begin{gathered} \vdash ([],[]) : ([], 0) \quad \frac{\vdash \rho : \Gamma \quad v \in T[ \tau ] \rho_2} {\vdash v{::}\rho : \tau{::}\Gamma} \quad \frac{\vdash \rho : \Gamma} {\vdash V{::}\rho : *{::}\Gamma}\end{gathered}\]

The primary operation that we perform on a type environment is looking up the type associated with a term variable, for which we define the following function \(\mathsf{lookup}\) that maps a type environment and DeBruijn index to a type. To make sure that the resulting type is well-formed in the current environment, we must increase all of its free type variables by the difference of the current threshold \(\Gamma_2\) and the threshold at its point of definition, \(\Gamma_1[i]_2\), which is accomplished by the shift operator \(\uparrow^k_c(\tau)\) (Pierce 2002). \[\mathsf{lookup}(\Gamma,i) \equiv \begin{cases} \mathsf{some}(\uparrow^{k}_{0}(\Gamma_1[i]_1)) & \text{if } i < |\Gamma_1| \\ & \text{where } k = \Gamma_2 - \Gamma_1[i]_2 \\ \mathsf{none} & \text{otherwise} \end{cases}\]

To review, the shift operator is defined as follows.

\[\begin{aligned} \uparrow^{k}_{c}(\mathtt{nat}) &= \mathtt{nat} \\ \uparrow^{k}_{c}(i) &= \begin{cases} i + k & \text{if } c \leq i \\ i & \text{otherwise} \end{cases} \\ \uparrow^{k}_{c}(\sigma \to \tau) &= \uparrow^{k}_{c}(\sigma) \to \uparrow^{k}_{c}(\tau) \\ \uparrow^{k}_{c}(\forall \tau) &= \forall\, \uparrow^{k}_{c+1}(\tau)\end{aligned}\]
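Here is the shift operator as a Haskell function on the Ty type from the earlier sketch, together with a rendering of the type environments and the lookup function defined above; the names shiftTy, TyEnv, and lookupTy are mine.

```haskell
-- shift: add k to every free type variable at or above the cutoff c.
shiftTy :: Int -> Int -> Ty -> Ty
shiftTy _ _ TNat         = TNat
shiftTy k c (TVar i)
  | c <= i               = TVar (i + k)
  | otherwise            = TVar i
shiftTy k c (TArrow s t) = TArrow (shiftTy k c s) (shiftTy k c t)
shiftTy k c (TAll t)     = TAll (shiftTy k (c + 1) t)

-- A type environment: a list of (type, threshold-at-definition) pairs
-- together with the current threshold.
type TyEnv = ([(Ty, Int)], Int)

-- lookup(Gamma, i): shift the stored type up to the current threshold.
lookupTy :: TyEnv -> Int -> Maybe Ty
lookupTy (g1, g2) i
  | i < length g1 = let (t, j) = g1 !! i in Just (shiftTy (g2 - j) 0 t)
  | otherwise     = Nothing
```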

Last but not least, we need to define type substitution so that we can use it in the typing rule for instantiation (type application). We write \([j\mapsto \tau]\sigma\) for the substitution of type \(\tau\) for DeBruijn index \(j\) within type \(\sigma\) (Pierce 2002).

\[\begin{aligned} [j\mapsto \tau]\mathtt{nat} &= \mathtt{nat} \\ [j\mapsto\tau]i &= \begin{cases} \tau & \text{if } j = i \\ i - 1 & \text{if } j < i \\ i & \text{otherwise} \end{cases}\\ [j\mapsto\tau](\sigma\to\sigma') &= [j\mapsto\tau]\sigma \to [j\mapsto \tau]\sigma' \\ [j\mapsto \tau]\forall\sigma &= \forall\, [j+1 \mapsto \uparrow^{1}_{0}(\tau)]\sigma\end{aligned}\]
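And the type substitution, transcribed directly from the equations above and using shiftTy from the previous sketch (the name substTy is mine).

```haskell
-- [j |-> tau] sigma : substitute tau for DeBruijn index j in sigma,
-- decrementing the free indices above j.
substTy :: Int -> Ty -> Ty -> Ty
substTy _ _   TNat         = TNat
substTy j tau (TVar i)
  | i == j                 = tau
  | j < i                  = TVar (i - 1)
  | otherwise              = TVar i
substTy j tau (TArrow s t) = TArrow (substTy j tau s) (substTy j tau t)
substTy j tau (TAll s)     = TAll (substTy (j + 1) (shiftTy 1 0 tau) s)
```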

Here is the type system for System F extended with \(\mathtt{fix}\).

\[\begin{gathered} \Gamma \vdash n : \mathtt{nat} \qquad \frac{\mathsf{lookup}(\Gamma,i) = \mathsf{some}(\tau)} {\Gamma \vdash i : \tau} \\[2ex] \frac{\Gamma_2 \vdash \sigma \quad \sigma{::}\Gamma \vdash e : \tau} {\Gamma \vdash \lambda e : \sigma \to \tau} \qquad \frac{\Gamma \vdash e : \sigma \to \tau \quad \Gamma \vdash e' : \sigma} {\Gamma \vdash e \; e' : \tau} \\[2ex] \frac{\Gamma_2 \vdash \sigma \to \tau \quad (\sigma\to \tau){::}\Gamma \vdash e : \sigma \to \tau } {\Gamma \vdash \mathtt{fix}\,e : \sigma \to \tau} \\[2ex] \frac{*::\Gamma \vdash e : \tau} {\Gamma \vdash \Lambda e : \forall\tau} \qquad \frac{\Gamma \vdash e : \forall \tau} {\Gamma \vdash e[\,] : [0\mapsto\sigma]\tau}\end{gathered}\]

We say that a type environment \(\Gamma\) is well-formed if \(\Gamma_2\) is greater or equal to every threshold in \(\Gamma_1\), that is \(\Gamma_1[i]_2 \leq \Gamma_2\) for all \(i < |\Gamma_1|\).

Proof of well-typed programs cannot go wrong

The proof required 6 little lemmas and 4 big lemmas. (There were some itsy bitsy lemmas too that I’m not counting.)

Little Lemmas

Lemma [\(\sqsubseteq\) is a preorder]  

  • \(v \sqsubseteq v\)

  • If \(v_1 \sqsubseteq v_2\) and \(v_2 \sqsubseteq v_3\), then \(v_1 \sqsubseteq v_3\).

[lem:less-refl] [lem:less-trans]

I proved transitivity by induction on \(v_2\).

Lemma [\(T\) is downward closed] If \(v \in T [ \tau ] \eta\) and \(v' \sqsubseteq v\), then \(v' \in T [ \tau ] \eta\). [lem:T-down-closed]

The above is a straightforward induction on \(\tau\).

Lemma [\(\mathsf{wrong}\) not in \(T\)] For any \(\tau\) and \(\eta\), \(\mathsf{wrong} \notin T [ \tau ] \eta\). [lem:wrong-not-in-T]

The above is another straightforward induction on \(\tau\).

Lemma If \(\vdash \rho : \Gamma\), then \(\Gamma\) is a well-formed type environment. [lem:wfenv-good-ctx]

The above is proved by induction on the derivation of \(\vdash \rho : \Gamma\).

Lemma \[T [ \tau ] (\eta_1 \eta_3) = T [ \uparrow^{|\eta_2|}_{ |\eta_1|}(\tau) ] (\eta_1\eta_2\eta_3)\]

The above lemma is proved by induction on \(\tau\). It took me a little while to figure out the right strengthening of the statement of this lemma to get the induction to go through. The motivations for this lemma were the following corollaries.

Corollary [Lift/Append Preserves \(T\)] \[T [ \tau ](\eta_2) = T [ \uparrow^{|\eta_1|}_{0}(\tau) ] (\eta_1\eta_2)\] [lem:lift-append-preserves-T]

Corollary [Lift/Cons Preserves \(T\)] \[T [ \tau ] (\eta) = T [ \uparrow^{1}_{0}(\tau) ] (V{::}\eta)\] [lem:shift-cons-preserves-T]

Of course, two shifts can be composed into a single shift by adding the amounts.

Lemma [Compose Shift] \[\uparrow^{j+k}_{c}(\tau) = \uparrow^{j}_{c}( \uparrow^{k}_{c}(\tau))\] [lem:compose-shift]

The proof is a straightforward induction on \(\tau\).

Big Lemmas

There are one or two big lemmas for each of the “features” in this variant of System F.

The first lemma shows that well-typed occurrences of term variables cannot go wrong.

Lemma [Lookup in Well-typed Environment] 
If \(\vdash \rho : \Gamma\) and \(\mathsf{lookup}(\Gamma,i) = \mathsf{some}(\tau)\), then \(\exists v.\, \rho_1[i] = v\) and \(v \in T [ \tau ] \rho_2\). [lem:lookup-wfenv]

The proof is by induction on the derivation of \(\vdash \rho : \Gamma\). The first two cases were straightforward but the third case required some work and used lemmas [lem:wfenv-good-ctx], [lem:shift-cons-preserves-T], and [lem:compose-shift].

Lemma [Application cannot go wrong] If \(V \subseteq T [ \sigma \to \tau ] \eta\) and \(V' \subseteq T [ \sigma ] \eta\), then \(\mathsf{apply}(V,V') \subseteq T [ \tau ] \eta\). [lem:fun-app]

The proof of this lemma is direct and does not use induction. However, it does use lemmas [lem:wrong-not-in-T] and [lem:T-down-closed].

Lemma [Compositionality] Let \(V = T [ \sigma ] (\eta_1\eta_2)\). \[T [ \tau ] (\eta_1 V \eta_2) = T [ [|\eta_1|\mapsto\sigma]\tau ] (\eta_1 \eta_2)\] [lem:compositionality]

I proved the Compositionality lemma by induction on \(\tau\). All of the cases were straightforward except for \(\tau=\forall\tau'\). In that case I used the induction hypothesis to show that \[T [ \tau' ] (V \eta_1 S \eta_2) = T [ [|V\eta_1|\mapsto \uparrow^1_0(\sigma)] \tau' ] (V\eta_1\eta_2) \text{ where } S = T [ \uparrow^1_0(\sigma) ] (V\eta_1\eta_2)\] and I used Lemma [lem:shift-cons-preserves-T].

Lemma [Iterate cannot go wrong] If

  • \(\mathsf{iterate}(L,\rho,v)\) and

  • for any \(v'\), \(v' \in T[ \sigma\to\tau ] \rho_2\) implies \(L(v'{::}\rho) \subseteq T[ \sigma\to\tau ] \rho_2\),

then \(v \in T [ \sigma \to \tau ] \rho_2\). [lem:iterate-sound]

This was straightforward to prove by induction on the derivation of \(\mathsf{iterate}(L,\rho,v)\). The slightly difficult part was coming up with the definition of \(\mathsf{iterate}\) to begin with and in formulating the second premise.

The Theorem

Theorem [Well-typed programs cannot go wrong] 
If \(\Gamma \vdash e : \tau\) and \(\vdash \rho : \Gamma\), then \(E [ e ] \rho \subseteq T[ \tau ] \rho_2\). [thm:welltyped-dont-go-wrong]

The proof is by induction on the derivation of \(\Gamma \vdash e : \tau\).

  • \(\Gamma \vdash n : \mathtt{nat}\)

    This case is immediate.

  • \(\frac{\mathsf{lookup}(\Gamma,i) = \mathsf{some}(\tau)} {\Gamma \vdash i : \tau}\)

    Lemma [lem:lookup-wfenv] tells us that \(\rho_1[i] = v\) and \(v \in T [ \tau ] \rho_2\) for some \(v\). We conclude by Lemma [lem:T-down-closed].

  • \(\frac{\Gamma_2 \vdash \sigma \quad \sigma{::}\Gamma \vdash e : \tau} {\Gamma \vdash \lambda e : \sigma \to \tau}\)

    After unraveling some definitions, for arbitrary \(f,v_1,v_2,v'_2\) we can assume \(v_1 \in T [ \sigma ] \rho_2\), \(v_2 \in E [ e ](v_1{::}\rho)\), and \(v'_2 \sqsubseteq v_2\). We need to prove that \(v_2 \in T [ \tau ] (v_1{::}\rho)_2\).

    We can show \(\vdash v_1{::}\rho : \sigma{::}\Gamma\) and therefore, by the induction hypothesis, \(E [ e ] (v_1{::}\rho) \subseteq T [ \tau ] (v_1{::}\rho)_2\). So we conclude that \(v_2 \in T [ \tau ] (v_1{::}\rho)_2\).

  • \(\frac{\Gamma \vdash e : \sigma \to \tau \quad \Gamma \vdash e' : \sigma} {\Gamma \vdash e \; e' : \tau}\)

    By the induction hypothesis, we have \(E [ e ] \rho \subseteq T [ \sigma\to\tau ] \rho_2\) and \(E [ e' ] \rho \subseteq T [ \sigma ] \rho_2\). We conclude by Lemma [lem:fun-app].

  • \(\frac{\Gamma_2 \vdash \sigma \to \tau \quad (\sigma\to \tau){::}\Gamma \vdash e : \sigma \to \tau } {\Gamma \vdash \mathtt{fix}\,e : \sigma \to \tau}\)

    For an arbitrary \(v\), we may assume \(\mathsf{iterate}(E[ e ], \rho, v)\) and need to show that \(v \in T [ \sigma\to\tau ]\rho_2\).

    In preparation to apply Lemma [lem:iterate-sound], we first prove that for any \(v'\), \(v' \in T[ \sigma\to\tau ] \rho_2\) implies \(E[ e](v'{::}\rho) \subseteq T[ \sigma\to\tau ] \rho_2\). Assume \(v'' \in E[ e](v'{::}\rho)\). We need to show that \(v'' \in T[ \sigma\to\tau ] \rho_2\). We have \(\vdash v'{::}\rho : (\sigma\to\tau){::}\Gamma\), so by the induction hypothesis \(E [ e ](v'{::}\rho) \subseteq T[ \sigma\to\tau ](v'{::}\rho)_2\). From this we conclude that \(v'' \in T[ \sigma\to\tau ] \rho_2\).

    We then apply Lemma [lem:iterate-sound] to conclude this case.

  • \(\frac{*::\Gamma \vdash e : \tau} {\Gamma \vdash \Lambda e : \forall\tau}\)

    After unraveling some definitions, the alternative where the denotation is \(\mathsf{abs}(\mathsf{none})\) is immediate because \(\mathsf{abs}(\mathsf{none}) \in T[ \forall\tau ]\eta\) for any \(\eta\). Otherwise, for an arbitrary \(v'\) and \(V\) we may assume that \(\forall V'.\, v' \in E [ e ](V'{::}\rho)\). We need to show that \(v' \in T [ \tau ] (V{::}\rho_2)\). We have \(\vdash V{::}\rho : *{::}\Gamma\), so by the induction hypothesis \(E[ e ](V{::}\rho) \subseteq T [ \tau ] (V{::}\rho)_2\). Also, instantiating the assumption at \(V\) gives \(v' \in E [ e ](V{::}\rho)\), so we can conclude.

  • \(\frac{\Gamma \vdash e : \forall \tau} {\Gamma \vdash e[\,] : [0\mapsto\sigma]\tau}\)

    Fix a \(v' \in E [ e ] \rho\). We have three cases to consider.

    1. \(v'=\mathsf{abs}(\mathsf{none})\). This case is immediate.

    2. \(v'=\mathsf{abs}(\mathsf{some}(v''))\) for some \(v''\). By the induction hypothesis, \(v' \in T [ \forall\tau ]\rho_2\). So we have \(v'' \in T [ \tau ](V{::}\rho_2)\) where \(V=T[\sigma]\rho_2\). Then by Compositionality (Lemma [lem:compositionality]) we conclude that \(v'' \in T [ [0\mapsto \sigma]\tau]\rho_2\).

    3. \(v'\) is some other kind of value. This can’t happen because, by the induction hypothesis, \(v' \in T [ \forall\tau ]\rho_2\).

References

Milner, Robin. 1978. “A Theory of Type Polymorphism in Programming.” Journal of Computer and System Sciences 17 (3): 348–75.

Pierce, Benjamin C. 2002. Types and Programming Languages. MIT Press.

Reynolds, John C. 1974. “Towards a Theory of Type Structure.” In Programming Symposium: Proceedings, Colloque Sur La Programmation, 19:408–25. LNCS. Springer-Verlag.

Friday, March 24, 2017

Consolidation of the Denotational Semantics and an Application to Compiler Correctness

This is a two part post. The second part depends on the first.

Part 1. Consolidation of the Denotational Semantics

As a matter of expediency, I've been working with two different versions of the intersection type system upon which the denotational semantics is based, one version with subsumption and one without. I had used the one with subsumption to prove completeness with respect to the reduction semantics whereas I had used the one without subsumption to prove soundness (for both whole programs and parts of programs, that is, contextual equivalence). The two versions of the intersection type system are equivalent. However, it would be nice to simplify the story and just have one version. Also, while the correspondence to intersection types has been enormously helpful in working out the theory, it would be nice to have a presentation of the semantics that doesn't talk about them and instead talks about functions as tables.

Towards these goals, I went back to the proof of completeness with respect to the reduction semantics and swapped in the "take 3" semantics. While working on that I realized that the subsumption rule was almost admissible in the "take 3" semantics, just the variable and application equations needed more uses of \(\sqsubseteq\). With those changes in place, the proof of completeness went through without a hitch. So here's the updated definition of the denotational semantics of the untyped lambda calculus.

The definition of values remains the same as last time: \[ \begin{array}{lrcl} \text{function tables} & T & ::= & \{ v_1\mapsto v'_1,\ldots,v_n\mapsto v'_n \} \\ \text{values} & v & ::= & n \mid T \end{array} \] as does the \(\sqsubseteq\) operator. \begin{gather*} \frac{}{n \sqsubseteq n} \qquad \frac{T_1 \subseteq T_2}{T_1 \sqsubseteq T_2} \end{gather*} For the denotation function \(E\), we add uses of \(\sqsubseteq\) to the equations for variables (\(v \sqsubseteq \rho(x)\)) and function application (\(v_3 \sqsubseteq v_3'\)). (I've also added the conditional expression \(\mathbf{if}\,e_1\,e_2\,e_3\) and primitive operations on numbers \(f(e_1,e_2)\), where \(f\) ranges over binary functions on numbers.) \begin{align*} E[\!| n |\!](\rho) &= \{ n \} \\ E[\!| x |\!](\rho) &= \{ v \mid v \sqsubseteq \rho(x) \} \\ E[\!| \lambda x.\, e |\!](\rho) &= \left\{ T \middle| \begin{array}{l} \forall v_1 v_2'. \, v_1\mapsto v_2' \in T \Rightarrow\\ \exists v_2.\, v_2 \in E[\!| e |\!](\rho(x{:=}v_1)) \land v_2' \sqsubseteq v_2 \end{array} \right\} \\ E[\!| e_1\;e_2 |\!](\rho) &= \left\{ v_3 \middle| \begin{array}{l} \exists T v_2 v_2' v_3'.\, T {\in} E[\!| e_1 |\!](\rho) \land v_2 {\in} E[\!| e_2 |\!](\rho) \\ \land\, v'_2\mapsto v_3' \in T \land v'_2 \sqsubseteq v_2 \land v_3 \sqsubseteq v_3' \end{array} \right\} \\ E[\!| f(e_1, e_2) |\!](\rho) &= \{ f(n_1,n_2) \mid n_1 \in E[\!| e_1 |\!](\rho) \land n_2 \in E[\!| e_2 |\!](\rho) \} \\ E[\!| \mathbf{if}\,e_1\,e_2\,e_3 |\!](\rho) &= \left\{ v \, \middle| \begin{array}{l} \exists n.\, n \in E[\!| e_1 |\!](\rho) \,\land \\ (n \neq 0 \Rightarrow v \in E[\!| e_2 |\!](\rho)) \\ \land\, (n = 0 \Rightarrow v \in E[\!| e_3 |\!](\rho)) \end{array} \right\} \end{align*}
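As an aside before the results, the values and the \(\sqsubseteq\) ordering of this untyped setting are easy to transcribe into Haskell. This little sketch, with names of my own choosing, is only meant to make the finite-table reading of function values concrete.

```haskell
-- Values for the untyped lambda calculus: numbers and finite tables.
data Value
  = Num Int
  | Table [(Value, Value)]   -- a finite set of input-output entries
  deriving (Eq, Show)

-- The information ordering: equality on numbers, inclusion on tables.
below :: Value -> Value -> Bool
below (Num n)    (Num m)    = n == m
below (Table t1) (Table t2) = all (`elem` t2) t1
below _          _          = False
```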

Here are the highlights of the results for this definition.

Proposition (Admissibility of Subsumption)
If \(v \in E[\!| e |\!] \) and \(v' \sqsubseteq v\), then \(v' \in E[\!| e |\!] \).

Theorem (Reduction implies Denotational Equality)

  1. If \(e \longrightarrow e'\), then \(E[\!| e |\!] = E[\!| e' |\!]\).
  2. If \(e \longrightarrow^{*} e'\), then \(E[\!| e |\!] = E[\!| e' |\!]\).

Theorem (Whole-program Soundness and Completeness)

  1. If \(v' \in E[\!| e |\!](\emptyset)\), then \(e \longrightarrow^{*} v\) and \(v' \in E[\!| v |\!](\emptyset)\) for some \(v\).
  2. If \(e \longrightarrow^{*} v\), then \(v' \in E[\!| e |\!](\emptyset) \) and \(v' \in E[\!| v |\!](\emptyset) \) for some \(v'\).

Proposition (Denotational Equality is a Congruence)
For any context \(C\), if \(E[\!| e |\!] = E[\!| e' |\!]\), then \(E[\!| C[e] |\!] = E[\!| C[e'] |\!]\).

Theorem (Soundness wrt. Contextual Equivalence)
If \(E[\!| e |\!] = E[\!| e' |\!]\), then \(e \simeq e'\).

Part 2. An Application to Compiler Correctness

Towards finding out how useful this denotational semantics is, I've begun looking at using it to prove compiler correctness. I'm not sure exactly which compiler I want to target yet, but as a first step, I wrote a simple source-to-source optimizer \(\mathcal{O}\) for the lambda calculus. It performs inlining and constant folding and simplifies conditionals. The optimizer is parameterized over the inlining depth to ensure termination. We perform optimization on the body of a function after inlining, so this is a polyvariant optimizer. Here's the definition. \begin{align*} \mathcal{O}[\!| x |\!](k) &= x \\ \mathcal{O}[\!| n |\!](k) &= n \\ \mathcal{O}[\!| \lambda x.\, e |\!](k) &= \lambda x.\, \mathcal{O}[\!| e |\!](k) \\ \mathcal{O}[\!| e_1\,e_2 |\!](k) &= \begin{array}{l} \begin{cases} \mathcal{O}[\!| [x{:=}e_2'] e |\!] (k{-}1) & \text{if } k \geq 1 \text{ and } e_1' = \lambda x.\, e \\ & \text{and } e_2' \text{ is a value} \\ e_1' \, e_2' & \text{otherwise} \end{cases}\\ \text{where } e_1' = \mathcal{O}[\!|e_1 |\!](k) \text{ and } e_2' = \mathcal{O}[\!|e_2 |\!](k) \end{array} \\ \mathcal{O}[\!| f(e_1,e_2) |\!](k) &= \begin{array}{l} \begin{cases} f(n_1,n_2) & \text{if } e_1' = n_1 \text{ and } e_2' = n_2 \\ f(e_1',e_2') & \text{otherwise} \end{cases}\\ \text{where } e_1' = \mathcal{O}[\!|e_1 |\!](k) \text{ and } e_2' = \mathcal{O}[\!|e_2 |\!](k) \end{array} \\ \mathcal{O}[\!| \mathbf{if}\,e_1\,e_2\,e_3 |\!](k) &= \begin{array}{l} \begin{cases} e_2' & \text{if } e_1' = n \text{ and } n \neq 0 \\ e_3' & \text{if } e_1' = n \text{ and } n = 0 \\ \mathbf{if}\,e_1'\, e_2'\,e_3' & \text{otherwise} \end{cases}\\ \text{where } e_1' = \mathcal{O}[\!|e_1 |\!](k) \text{ and } e_2' = \mathcal{O}[\!|e_2 |\!](k)\\ \text{ and } e_3' = \mathcal{O}[\!|e_3 |\!](k) \end{array} \end{align*}
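To make the definition concrete, here is a sketch of the optimizer in Haskell. It assumes a named-variable syntax and a naive substitution that relies on bound variables being distinct from the free variables of inlined values (the Barendregt convention); the data type and function names are my own, so this is only an illustration, not the Isabelle development.

```haskell
data Expr
  = Var String
  | Num Int
  | Lam String Expr
  | App Expr Expr
  | Prim (Int -> Int -> Int) Expr Expr   -- a binary operation on numbers
  | If Expr Expr Expr

isValue :: Expr -> Bool
isValue (Num _)   = True
isValue (Lam _ _) = True
isValue _         = False

-- Naive substitution e[x := v]; assumes no variable capture
-- (bound variables are distinct from the free variables of v).
subst :: String -> Expr -> Expr -> Expr
subst x v (Var y)        = if x == y then v else Var y
subst _ _ (Num n)        = Num n
subst x v (Lam y e)
  | x == y               = Lam y e
  | otherwise            = Lam y (subst x v e)
subst x v (App e1 e2)    = App (subst x v e1) (subst x v e2)
subst x v (Prim f e1 e2) = Prim f (subst x v e1) (subst x v e2)
subst x v (If e1 e2 e3)  = If (subst x v e1) (subst x v e2) (subst x v e3)

-- Inlining, constant folding, and conditional simplification,
-- bounded by the inlining depth k.
optimize :: Int -> Expr -> Expr
optimize _ (Var x)   = Var x
optimize _ (Num n)   = Num n
optimize k (Lam x e) = Lam x (optimize k e)
optimize k (App e1 e2) =
  case (optimize k e1, optimize k e2) of
    (Lam x e, v) | k >= 1 && isValue v -> optimize (k - 1) (subst x v e)
    (e1', e2')                         -> App e1' e2'
optimize k (Prim f e1 e2) =
  case (optimize k e1, optimize k e2) of
    (Num n1, Num n2) -> Num (f n1 n2)
    (e1', e2')       -> Prim f e1' e2'
optimize k (If e1 e2 e3) =
  case (optimize k e1, optimize k e2, optimize k e3) of
    (Num n, e2', e3') -> if n /= 0 then e2' else e3'
    (e1', e2', e3')   -> If e1' e2' e3'
```

For example, `optimize 1 (App (Lam "x" (Prim (+) (Var "x") (Num 1))) (Num 2))` simplifies to `Num 3`: the application is inlined and the resulting primitive operation is constant-folded.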

I've proved that this optimizer is correct. The first step was proving that it preserves denotational equality.

Lemma (Optimizer Preserves Denotations)
\(E[\!| \mathcal{O}[\!| e|\!](k) |\!] = E[\!|e|\!] \)
Proof
The proof is by induction on the termination metric for \(\mathcal{O}\), which is the lexicographic ordering of \(k\) then the size of \(e\). All the cases are straightforward to prove because Reduction implies Denotational Equality and because Denotational Equality is a Congruence. QED

Theorem (Correctness of the Optimizer)
\(\mathcal{O}[\!| e|\!](k) \simeq e\)
Proof
The proof is a direct result of the above Lemma and Soundness wrt. Contextual Equivalence. QED

Of course, all of this is proved in Isabelle. Here is the tar ball. I was surprised that this proof of correctness for the optimizer was about the same length as the definition of the optimizer!

Friday, March 10, 2017

The Take 3 Semantics, Revisited

In my post about intersection types as denotations, I conjectured that the simple "take 3" denotational semantics is equivalent to an intersection type system. I haven't settled that question per se, but I've done something just as good, which is to show that everything that I've done with the intersection type system can also be done with the "take 3" semantics (with a minor modification).

Recall that the main difference between the "take 3" semantics and the intersection type system is how subsumption of functions is handled. The "take 3" semantics defined function application as follows, using the subset operator \(\sqsubseteq\) to require the argument \(v_2\) to include all the entries in the parameter \(v'_2\), while allowing \(v_2\) to have possibly more entries. \begin{align*} E[\!| e_1\;e_2 |\!](\rho) &= \left\{ v_3 \middle| \begin{array}{l} \exists v_1 v_2 v'_2.\, v_1 {\in} E[\!| e_1 |\!](\rho) \land v_2 {\in} E[\!| e_2 |\!](\rho) \\ \land\, \{ v'_2\mapsto v_3 \} \sqsubseteq v_1 \land v'_2 \sqsubseteq v_2 \end{array} \right\} \end{align*} Values are either numbers or functions. Functions are represented as finite tables mapping values to values. \[ \begin{array}{lrcl} \text{tables} & T & ::= & \{ v_1\mapsto v'_1,\ldots,v_n\mapsto v'_n \} \\ \text{values} & v & ::= & n \mid T \end{array} \] and \(\sqsubseteq\) is defined as equality on numbers and subset for function tables: \begin{gather*} \frac{}{n \sqsubseteq n} \qquad \frac{T_1 \subseteq T_2}{T_1 \sqsubseteq T_2} \end{gather*} Recall that \(\subseteq\) is defined in terms of equality on elements.

In an intersection type system (without subsumption), function application uses subtyping. Here's one way to formulate the typing rule for application: \[ \frac{\Gamma \vdash_2 e_1: C \quad \Gamma \vdash_2 e_2 : A \quad \quad C <: A' \to B \quad A <: A'} {\Gamma \vdash_2 e_1 \; e_2 : B} \] Types are defined as follows: \[ \begin{array}{lrcl} \text{types} & A,B,C & ::= & n \mid A \to B \mid A \land B \mid \top \end{array} \] and the subtyping relation is given below. \begin{gather*} \frac{}{n <: n}(a) \quad \frac{}{\top <: \top}(b) \quad \frac{}{A \to B <: \top}(c) \quad \frac{A' <: A \quad B <: B'} {A \to B <: A' \to B'}(d) \\[2ex] \frac{C <: A \quad C <: B}{C <: A \wedge B}(e) \quad \frac{}{A \wedge B <: A}(f) \quad \frac{}{A \wedge B <: B}(g) \\[2ex] \frac{}{(C\to A) \wedge (C \to B) <: C \to (A \wedge B)}(h) \end{gather*} Recall that values and types are isomorphic (and dual) to each other in this setting. Here are the functions \(\mathcal{T}\) and \(\mathcal{V}\) that map back and forth between values and types. \begin{align*} \mathcal{T}(n) &= n \\ \mathcal{T}( \{ v_1 \mapsto v'_1, \ldots, v_n \mapsto v'_n \} ) &= \mathcal{T}(v_1) {\to} \mathcal{T}(v'_1) \land \cdots \land \mathcal{T}(v_n) {\to} \mathcal{T}(v'_n) \\[2ex] \mathcal{V}(n) &= n \\ \mathcal{V}(A \to B) &= \{ \mathcal{V}(A)\mapsto\mathcal{V}(B) \} \\ \mathcal{V}(A \land B) &= \mathcal{V}(A) \cup \mathcal{V}(B)\\ \mathcal{V}(\top) &= \emptyset \end{align*}
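Here is a hypothetical Haskell transcription of \(\mathcal{T}\) and \(\mathcal{V}\). The data types are re-declared so that the sketch is self-contained, and the names toType and toValue are mine; intersections that mix a singleton number with anything else do not arise as images of values, so that case is handled with a fallback.

```haskell
-- Values: numbers and finite tables, as in the previous posts.
data Value = Num Int | Table [(Value, Value)]  deriving (Eq, Show)

-- Intersection types: singletons, arrows, intersections, and top.
data IType = TNum Int | TArr IType IType | TAnd IType IType | TTop
  deriving (Eq, Show)

-- T : values to types.  A table becomes an intersection of arrows
-- (top for the empty table).
toType :: Value -> IType
toType (Num n)    = TNum n
toType (Table []) = TTop
toType (Table es) = foldr1 TAnd [ TArr (toType v) (toType v') | (v, v') <- es ]

-- V : types to values.  Intersections become unions of table entries.
toValue :: IType -> Value
toValue (TNum n)   = Num n
toValue (TArr a b) = Table [(toValue a, toValue b)]
toValue (TAnd a b) = case (toValue a, toValue b) of
                       (Table t1, Table t2) -> Table (t1 ++ t2)
                       (v, _)               -> v  -- ill-formed intersection: keep the left part
toValue TTop       = Table []
```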

Given that values and types are really the same, the typing rule for application is almost the same as the equation for the denotation of \(E[\!| e_1\;e_2 |\!](\rho)\). The only real difference is the use of \(<:\) versus \(\sqsubseteq\). However, subtyping is a larger relation than \(\sqsubseteq\), i.e., \(v_1 \sqsubseteq v_2\) implies \(\mathcal{T}(v_1) <: \mathcal{T}(v_2)\) but it is not the case that \(A <: B\) implies \(\mathcal{V}(A) \sqsubseteq \mathcal{V}(B)\). Subtyping is larger because of rules \((d)\) and \((h)\). The other rules just express the dual of \(\subseteq\).

So the natural question is whether subtyping needs to be bigger than \(\sqsubseteq\), or whether we could get by with just \(\sqsubseteq\). In my last post, I mentioned that rule \((h)\) was not necessary. Indeed, I removed it from the Isabelle formalization without disturbing the proofs of whole-program soundness and completeness wrt. operational semantics, and was able to carry on and prove soundness wrt. contextual equivalence. This morning I also replaced rule \((d)\) with a rule that only allows equal function types to be subtypes. \[ \frac{}{A \to B <: A \to B}(d') \] The proofs went through again, though I did have to make two minor changes in the type system without subsumption to ensure that it stays equivalent to the version of the type system with subsumption. I used the rule given above for function application instead of \[ \frac{\Gamma \vdash_2 e_1: C \quad \Gamma \vdash_2 e_2 : A \quad \quad C <: A \to B} {\Gamma \vdash_2 e_1 \; e_2 : B} \] Also, I had to change the typing rule for \(\lambda\) to use subtyping to relate the body's type to the return type. \[ \frac{\Gamma,x:A \vdash_2 e : B' \qquad B' <: B} {\Gamma \vdash_2 \lambda x.\, e : A \to B} \] Transposing this back into the land of denotational semantics and values, we get the following equation for the meaning of \(\lambda\), in which everything in the return specification \(v_2\) must be contained in the value \(v'_2\) produced by the body. \[ E[\!| \lambda x.\; e |\!] (\rho) = \left\{ v \middle| \begin{array}{l}\forall v_1 v_2. \{v_1\mapsto v_2\} \sqsubseteq v \implies \\ \exists v'_2.\; v'_2 \in E[\!| e |\!] (\rho(x{:=}v_1)) \,\land\, v_2 \sqsubseteq v'_2 \end{array} \right\} \]

So with this little change, the "take 3" semantics is a great semantics for the call-by-value untyped lambda calculus! For whole programs, it's sound and complete with respect to the standard operational semantics, and it is also sound with respect to contextual equivalence.

Wednesday, March 08, 2017

Sound wrt. Contextual Equivalence

The ICFP paper submission deadline kept me busy for much of February, but now I'm back to thinking about the simple denotational semantics of the lambda calculus. In previous posts I showed that this semantics is equivalent to standard operational semantics when considering the behavior of whole programs. However, sometimes it is necessary to reason about the behavior of program fragments and we would like to use the denotational semantics for this as well. For example, an optimizing compiler might want to exchange one expression for another less-costly expression that does the same job.

The formal notion of two such "exchangeable" expressions is contextual equivalence (Morris 1968). It says that two expressions are equivalent if plugging them into an arbitrary context produces programs that behave the same.

Definition (Contextual Equivalence)
Two expressions \(e_1\) and \(e_2\) are contextually equivalent, written \(e_1 \simeq e_2\), iff for any closing context \(C\), \[ \mathsf{eval}(C[e_1]) = \mathsf{eval}(C[e_2]). \]

We would like to know that when two expressions are denotationally equal, then they are also contextually equivalent.

Theorem (Sound wrt. Contextual Equivalence)
If \(E[e_1]\Gamma = E[e_2]\Gamma\) for any \(\Gamma\), then \(e_1 \simeq e_2\).

The rest of the blog post gives an overview of the proof (except for the discussion of related work at the very end). The details of the proof are in the Isabelle mechanization. But first we need to define the terms used in the above statements.

Definitions

Recall that our denotational semantics is defined in terms of an intersection type system. The meaning of an expression is the set of all types assigned to it by the type system. \[ E[e]\Gamma \equiv \{ A \mid \Gamma \vdash_2 e : A \} \] Recall that the types include singletons, functions, intersections, and a top type: \[ A,B,C ::= n \mid A \to B \mid A \land B \mid \top \] I prefer to think of these types as values, where the function, intersection, and top types are used to represent finite tables that record the input-output values of a function.

The intersection type system that we use here differs from the one in the previous post in that we remove the subsumption rule and sprinkle uses of subtyping elsewhere in a standard fashion (Pierce 2002).

\begin{gather*} \frac{}{\Gamma \vdash_2 n : n} \\[2ex] \frac{} {\Gamma \vdash_2 \lambda x.\, e : \top} \quad \frac{\Gamma \vdash_2 \lambda x.\, e : A \quad \Gamma \vdash_2 \lambda x.\, e : B} {\Gamma \vdash_2 \lambda x.\, e : A \wedge B} \\[2ex] \frac{x:A \in \Gamma}{\Gamma \vdash_2 x : A} \quad \frac{\Gamma,x:A \vdash_2 e : B} {\Gamma \vdash_2 \lambda x.\, e : A \to B} \\[2ex] \frac{\Gamma \vdash_2 e_1: C \quad C <: A \to B \quad \Gamma \vdash_2 e_2 : A} {\Gamma \vdash_2 e_1 \; e_2 : B} \\[2ex] \frac{\begin{array}{l}\Gamma \vdash_2 e_1 : A \quad A <: n_1 \\ \Gamma \vdash_2 e_2 : B \quad B <: n_2 \end{array} \quad [\!|\mathit{op}|\!](n_1,n_2) = n_3} {\Gamma \vdash_2 \mathit{op}(e_1,e_2) : n_3} \\[2ex] \frac{\Gamma \vdash_2 e_1 : A \quad A <: 0 \quad \Gamma \vdash_2 e_3 : B} {\Gamma \vdash_2 \mathrm{if}\,e_1\,\mathrm{then}\,e_2\,\mathrm{else}\,e_3 : B} \\[2ex] \frac{\Gamma \vdash_2 e_1 : A \quad A <: n \quad n \neq 0 \quad \Gamma \vdash_2 e_2 : B} {\Gamma \vdash_2 \mathrm{if}\,e_1\,\mathrm{then}\,e_2\,\mathrm{else}\,e_3 : B} \end{gather*}

Regarding subtyping, we make a minor change and leave out the rule \[ \frac{}{(C\to A) \wedge (C \to B) <: C \to (A \wedge B)} \] because I had a hunch that it wasn't needed to prove Completeness with respect to the small step semantics, and indeed it was not. So the subtyping relation is defined as follows.

\begin{gather*} \frac{}{n <: n} \quad \frac{}{\top <: \top} \quad \frac{}{A \to B <: \top} \quad \frac{A' <: A \quad B <: B'} {A \to B <: A' \to B'} \\[2ex] \frac{C <: A \quad C <: B}{C <: A \wedge B} \quad \frac{}{A \wedge B <: A} \quad \frac{}{A \wedge B <: B} \end{gather*}

This type system is equivalent to the one with subsumption in the following sense.

Theorem (Equivalent Type Systems)

  1. If \(\Gamma \vdash e : A\), then \(\Gamma \vdash_2 e : A'\) and \(A' <: A\) for some \(A'\).
  2. If \(\Gamma \vdash_2 e : A\), then \(\Gamma \vdash e : A\).
Proof
The proofs of the two parts are straightforward inductions on the derivations of the typing judgments. QED

This type system satisfies the usual progress and preservation properties.

Theorem (Preservation)
If \(\Gamma \vdash_2 e : A\) and \(e \longrightarrow e'\), then \(\Gamma \vdash_2 e' : A'\) and \(A' <: A\) for some \(A'\).
Proof
The proof of preservation is by induction on the derivation of the reduction. The case for \(\beta\) reduction relies on lemmas about substitution and type environments. QED

Theorem (Progress)
If \(\emptyset \vdash_2 e : A\) and \(\mathrm{FV}(e) = \emptyset\), then \(e\) is a value or \(e \longrightarrow e'\) for some \(e'\).
Proof
The proof of progress is by induction on the typing derivation. As usual it relies on a canonical forms lemma. QED

Lemma (Canonical forms)
Suppose \(\emptyset \vdash_2 v : A\).

  1. If \(A <: n\), then \(v = n\).
  2. If \(A <: B \to C\), then \(v = \lambda x.\, e\) for some \(x,e\).

Next we turn to the definition of \(\mathit{eval}\). As usual, we shall define the behavior of a program in terms of the operational (small-step) semantics and an \(\mathit{observe}\) function. \begin{align*} \mathit{eval}(e) &= \begin{cases} \mathit{observe}(v) & \text{if } e \longrightarrow^{*} v \\ \mathtt{bad} & \text{otherwise} \end{cases}\\ \mathit{observe}(n) &= n \\ \mathit{observe}(\lambda x.\, e) &= \mathtt{fun} \end{align*} In the above we categorize programs as \(\mathtt{bad}\) if they do not produce a value. Thus, we are glossing over the distinction between programs that diverge and programs that go wrong (e.g., segmentation fault). We do this because our denotational semantics does not make such a distinction. However, I plan to circle back to this issue in the future and develop a version of the semantics that does.

Soundness wrt. Contextual Equivalence

We assume that \(E[e_1]\Gamma = E[e_2]\Gamma\) for any \(\Gamma\) and need to show that \(e_1 \simeq e_2\). That is, we need to show that \(\mathsf{eval}(C[e_1]) = \mathsf{eval}(C[e_2]) \) for any closing context \(C\). We shall prove Congruence which lets us lift the denotational equality of \(e_1\) and \(e_2\) through any context, so we have \begin{equation} E[C[e_1]]\emptyset = E[C[e_2]]\emptyset \qquad\qquad (1) \end{equation} Now let us consider the cases for \(\mathsf{eval}(C[e_1])\).

  • Case \(\mathsf{eval}(C[e_1]) = \mathit{observe}(v)\) and \(C[e_1] \longrightarrow^{*} v\):
    By Completeness of the intersection type system we have \(\emptyset \vdash_2 C[e_1] : A\) and \(\emptyset \vdash_2 v : A'\) for some \(A,A'\) such that \(A' <: A\). Then with (1) we have \begin{equation} \emptyset \vdash_2 C[e_2] : A \qquad\qquad (2) \end{equation} The type system is sound wrt. the big-step semantics, so \(\emptyset \vdash C[e_2] \Downarrow v'\) for some \(v'\). Therefore \(C[e_2] \longrightarrow^{*} v''\) because the big-step semantics is sound wrt. the small-step semantics. It remains to show that \(\mathit{observe}(v'') = \mathit{observe}(v)\). From (2) we have \(\emptyset \vdash_2 v'' : A''\) for some \(A''\) where \(A'' <: A\), by Preservation. Noting that we already have \(\emptyset \vdash_2 v : A'\), \(\emptyset \vdash_2 v'' : A''\), \(A' <: A\), and \(A'' <: A\), we conclude that \(\mathit{observe}(v) = \mathit{observe}(v'')\) by the Lemma Observing values of subtypes.
  • Case \(\mathsf{eval}(C[e_1]) = \mathtt{bad}\):
    So \(C[e_1]\) either diverges or gets stuck. In either case, we have \(E[C[e_1]]\emptyset = \emptyset \) (Lemmas Diverging programs have no meaning and Programs that get stuck have no meaning). So by (1) we have \(E[C[e_2]]\emptyset = \emptyset\). We conclude that \(C[e_2]\) either diverges or gets stuck by Lemma (Programs with no meaning diverge or get stuck). Thus, \(\mathsf{eval}(C[e_2]) = \mathtt{bad}\).
QED

Lemma (Congruence)
Let \(C\) be an arbitrary context. If \(E[e_1]\Gamma' = E[e_2]\Gamma'\) for any \(\Gamma'\), then \(E[C[e_1]]\Gamma = E[C[e_2]]\Gamma\).
Proof
We prove congruence by structural induction on the context \(C\), using the induction hypothesis and the appropriate Compatibility lemma for each kind of expression. QED

Most of the Compatibility lemmas are straightforward, though the one for abstraction is worth discussing.

Lemma (Compatibility for abstraction)
If \(E[e_1]\Gamma' = E[e_2]\Gamma'\) for any \(\Gamma'\), then \(E[\lambda x.\, e_1]\Gamma = E[\lambda x.\, e_2]\Gamma\).
Proof
To prove compatibility for abstractions, we first prove that

If \(\Gamma' \vdash_2 e_1 : B\) implies \(\Gamma' \vdash_2 e_2 : B\) for any \(\Gamma',B\), then \(\Gamma \vdash_2 \lambda x.\, e_1 : C\) implies \(\Gamma \vdash_2 \lambda x.\, e_2 : C\).
This is a straightforward induction on the type \(C\). Compatibility follows by two uses of this fact. QED

Theorem (Completeness wrt. small-step semantics) If \(e \longrightarrow^{*} v\) then \(\emptyset \vdash_2 e : A\) and \(\emptyset \vdash_2 v : A'\) for some \(A,A'\) such that \(A' <: A\).
Proof
We have \(\emptyset \vdash e : B\) and \(\emptyset \vdash v : B\) by Completeness of the type system with subsumption. Therefore \(\emptyset \vdash_2 e : A\) and \(A <: B\) by Theorem Equivalent Type Systems. By preservation we conclude that \(\emptyset \vdash_2 v : A'\) and \(A' <: A\). QED

In a previous blog post, we proved soundness with respect to big-step semantics for a slightly different denotational semantics. So we update that proof for the denotational semantics defined above. We shall make use of the following logical relation \(\mathcal{G}\) in this proof. \begin{align*} G[n] &= \{ n \} \\ G[A \to B] &= \{ \langle \lambda x.\, e, \rho \rangle \mid \forall v \in G[A]. \; \exists v'.\, \rho(x{:=}v) \vdash e \Downarrow v' \text{ and } v' \in G[B] \} \\ G[A \land B] &= G[A] \cap G[B] \\ G[\top] &= \{ v \mid v \in \mathrm{Values} \} \\ \\ G[\emptyset] &= \{ \emptyset \} \\ G[\Gamma,x:A] &= \{ \rho(x{:=}v) \mid v \in G[A] \text{ and } \rho \in G[\Gamma] \} \end{align*}

We shall need two lemmas about this logical relation.

Lemma (Lookup in \(\mathcal{G}\))
If \(x:A \in \Gamma\) and \(\rho \in G[\Gamma]\), then \(\rho(x) = v\) and \(v \in G[A]\).

Lemma (\(\mathcal{G}\) preserves subtyping )
If \(A <: B\) and \(v \in G[A]\), then \(v \in G[B]\).

Theorem (Soundness wrt. big-step semantics)
If \(\Gamma \vdash_2 e : A\) and \(\rho \in G[\Gamma]\), then \(\rho \vdash e \Downarrow v\) and \(v \in G[A]\).
Proof
The proof is by induction on the typing derivation. The case for variables uses the Lookup Lemma and all of the elimination forms use the above Subtyping Lemma (because their typing rules use subtyping). QED

Lemma (Observing values of subtypes)
If \(\emptyset \vdash_2 v : A\), \(\emptyset \vdash_2 v' : B\), \(A <: C\), and \(B <: C\), then \(\mathit{observe}(v) = \mathit{observe}(v')\).
Proof
The proof is by cases of \(v\) and \(v'\). We use Lemmas about the symmetry of subtyping for singletons, an inversion lemma for functions, and that subtyping preserves function types. QED

Lemma (Subtyping symmetry for singletons) If \(n <: A\), then \(A <: n\).

For the next lemma we need to characterize the types for functions. \begin{gather*} \frac{}{\mathit{fun}(A \to B)} \quad \frac{\mathit{fun}(A) \qquad \mathit{fun}(B)} {\mathit{fun}(A \land B)} \quad \frac{}{\mathit{fun}(\top)} \end{gather*}

Lemma (Inversion on Functions)
If \(\Gamma \vdash_2 \lambda x.\, e : A\), then \(\mathit{fun}(A)\).

Lemma (Subtyping preserves functions)
If \(A <: B\) and \(\mathit{fun}(A)\), then \(\mathit{fun}(B)\).

Lemma (Diverging Programs have no meaning)
If \(e\) diverges, then \(E[e]\emptyset = \emptyset\).
Proof
Towards a contradiction, suppose \(E[e]\emptyset \neq \emptyset\). Then we have \(\emptyset \vdash_2 e : A\) for some \(A\). Then by soundness wrt. big-step semantics, we have \(\emptyset \vdash e \Downarrow v\) and so also \(e \longrightarrow^{*} v'\). But this contradicts the premise that \(e\) diverges. QED

Lemma (Programs that get stuck have no meaning)
Suppose that \(e \longrightarrow^{*} e'\) and \(e'\) is stuck (and not a value). Then \(E[e]\emptyset = \emptyset\).
Proof
Towards a contradiction, suppose \(E[e]\emptyset \neq \emptyset\). Then we have \(\emptyset \vdash_2 e : A\) for some \(A\). Therefore \(\emptyset \vdash_2 e' : A'\) for some \(A' <: A\). By Progress, either \(e'\) is a value or it can take a step. But that contradicts the premise. QED

Lemma (Programs with no meaning diverge or get stuck)
If \(E[e]\emptyset = \emptyset\), then \(e\) diverges or reduces to a stuck non-value.
Proof
Towards a contradiction, suppose that \(e\) does not diverge and does not reduce to a stuck non-value. So \(e \longrightarrow^{*} v\) for some \(v\). But then by Completeness wrt. the small-step semantics, we have \(\emptyset \vdash_2 e : A\) for some \(A\), which contradicts the premise \(E[e]\emptyset = \emptyset\). QED

Related Work

The proof method used here, of proving Compatibility and Congruence lemmas to show soundness wrt. contextual equivalence, is adapted from Gunter's book (1992), where he proves that the standard model for PCF (CPO's and continuous functions) is sound. This approach is also commonly used to show that logical relations are sound wrt. contextual equivalence (Pitts 2005).

The problem of full abstraction is to show that denotational equivalence is both sound (aka. correct): \[ E[e_1] = E[e_2] \qquad \text{implies} \qquad e_1 \simeq e_2 \] and complete: \[ e_1 \simeq e_2 \qquad \text{implies} \qquad E[e_1] = E[e_2] \] with respect to contextual equivalence (Milner 1975). Here we showed that the simple denotational semantics is sound. I do not know whether it is complete wrt. contextual equivalence.

There are famous examples of denotational semantics that are not complete. For example, the standard model for PCF is not complete. There are two expressions in PCF that are contextually equivalent but not denotationally equivalent (Plotkin 1977). The idea behind the counter-example is that parallel-or cannot be defined in PCF, but it can be expressed in the standard model. The two expressions are higher-order functions constructed to behave differently only when applied to parallel-or.

Rocca and Paolini (2004) define a filter model \(\mathcal{V}\) for the call-by-value lambda calculus, similar to our simple denotational semantics, and prove that it is sound wrt. contextual equivalence (Theorem 12.1.18). Their type system and subtyping relation differ from ours in several ways. Their \(\land\,\mathrm{intro}\) rule is not restricted to \(\lambda\), they include subsumption, their \(\top\) type is a super-type of all types (not just function types), they include the distributivity rule discussed at the beginning of this post, and they include a couple other rules (labeled \((g)\) and \((v)\) in Fig. 12.1). I'm not sure whether any of these differences really matter; the two systems might be equivalent. Their proof is quite different from ours and more involved; it is based on the notion of approximants. They also show that \(\mathcal{V}\) is incomplete wrt. contextual equivalence, but go on to create another model based on \(\mathcal{V}\) that is. The fact that \(\mathcal{V}\) is incomplete leads me to suspect that \(\mathcal{E}\) is also incomplete. This is certainly worth looking into.

Abramsky (1990) introduced a domain logic whose formulas are intersection types: \[ \phi ::= \top \mid \phi \land \phi \mid \phi \to \phi \] and whose proof theory is an intersection type system designed to capture the semantics of the lazy lambda calculus. Abramsky proves that it is sound with respect to contextual equivalence. As far as I can tell, the proof is different from the approach used here, as it shows that the domain logic is sound with respect to a denotational semantics that solves the domain equation \(D = (D \to D)_\bot\), then shows that this denotational semantics is sound wrt. contextual equivalence. (See also Alan Jeffrey (1994).)

Sunday, February 05, 2017

On the Meaning of Casts and Blame for Gradual Typing

Gradually typed languages enable programmers to choose which parts of their programs are statically typed and which parts are dynamically typed. Thus, gradually typed languages perform some type checking at compile time and some type checking at run time. When specifying the semantics of a gradually typed language, we usually express the run time checking in terms of casts. Thus, the semantics of a gradually typed language depends crucially on the semantics of casts. This blog post tries to answer the question: "What is a cast?"

Syntax and Static Semantics of Casts

Syntactically, a cast is an expression of the form \[ e : A \Rightarrow^{\ell} B \] where \(e\) is a subexpression; \(A\) and \(B\) are the source and target types, respectively. The \(\ell\) is what we call a blame label, which records the location of the cast (e.g. line number and character position).

Regarding the static semantics (compile-time type checking), a cast enables the sub-expression \(e\) of static type \(A\) to be used in a context expecting a different type \(B\). \[ \frac{\Gamma \vdash e : A} {\Gamma \vdash (e : A \Rightarrow^{\ell} B) : B} \] In gradual typing, \(A\) and \(B\) typically differ in how "dynamic" they are but are otherwise similar to each other. So we often restrict the typing rule for casts to only allow source and target types that have some values in common, that is, when \(A\) and \(B\) are consistent. \[ \frac{\Gamma \vdash e : A \quad A \sim B} {\Gamma \vdash (e : A \Rightarrow^{\ell} B) : B} \] For example, if we let \(\star\) be the unknown type (aka. \(\mathtt{dynamic}\)), then we have \(\mathtt{Int} \sim \star\) and \(\star \sim \mathtt{Int}\) but \(\mathtt{Int} \not\sim \mathtt{Int}\to\mathtt{Int}\). Here are the rules for consistency with integers, functions, and the dynamic type. \begin{gather*} \mathtt{Int} \sim \mathtt{Int} \qquad \frac{A \sim B \qquad A' \sim B'} {A \to A' \sim B \to B'} \qquad A \sim \star \qquad \star \sim B \end{gather*}
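As a quick sanity check, the consistency relation can be written as a small Haskell function; the GTy data type and its constructor names are my own.

```haskell
-- Gradual types: Int, function types, and the unknown type (star).
data GTy = GInt | GFun GTy GTy | GDyn
  deriving (Eq, Show)

-- The consistency relation A ~ B.
consistent :: GTy -> GTy -> Bool
consistent GDyn _                  = True
consistent _    GDyn               = True
consistent GInt GInt               = True
consistent (GFun a a') (GFun b b') = consistent a b && consistent a' b'
consistent _    _                  = False
```

With this definition, `consistent GInt GDyn` and `consistent GDyn GInt` both hold while `consistent GInt (GFun GInt GInt)` does not, matching the examples above.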

Dynamic Semantics of Casts

The dynamic semantics of a cast is to check whether the value produced by subexpression \(e\) is of the target type \(B\) and if so, return the value; otherwise signal an error. The following is a strawman denotational semantics that expresses this basic intuition about casts. Suppose we have already defined the meaning of types, so \(\mathcal{T}[\!| A |\!]\) is the set of values of type \(A\). The meaning function \(\mathcal{E}[\!| e |\!]\) maps an expression to a result (either a value \(v\) or error \(\mathsf{blame}\,\ell\)). \begin{align*} \mathcal{E} [\!| e : A \Rightarrow^{\ell} B |\!] &= \begin{cases} v & \text{if } v \in \mathcal{T}[\!| B |\!] \\ \mathsf{blame}\,\ell & \text{if } v \notin \mathcal{T}[\!| B |\!] \end{cases} \\ & \text{where } v = \mathcal{E} [\!| e |\!] \end{align*}

If we restrict ourselves to first-order types such as \(\mathtt{Int}\), it is straightforward to define \(\mathcal{T}\) and check whether a value is in the set. \begin{align*} \mathcal{T}[\!| \mathtt{Int} |\!] &= \mathbb{Z} \end{align*} The story for function types, that is, for \(A \to B\), is more complicated. In a denotational setting, it traditionally takes sophisticated mathematics to come up with mathematical entities that can serve as function values when the \(\star\) type is involved (Scott 1970, 1976). The primary challenge is that one cannot simply use the usual notion of a mathematical function to represent function values because of a cardinality problem. Suppose that \(D\) is the set of all values. The set of mathematical functions whose domain and codomain is \(D\) is necessarily larger than \(D\), so the mathematical functions cannot fit into the set of all values. There is nothing wrong with sophisticated mathematics per se, but when it comes to using a specification for communication (e.g. between language designers and compiler writers), it is less desirable to require readers of the specification to fully understand a large number of auxiliary definitions and decide whether those definitions match their intuitions.

Competing Operational Semantics for Casts

We'll come back to denotational semantics in a bit, but first let's turn to operational semantics, in particular reduction semantics, which is what the recent literature uses to explain casts and the type \(\star\) (Gronski 2006, Siek 2007, Wadler 2009). In a reduction semantics, we give rewrite rules to say what happens when a syntactic value flows into a cast, that is, we say what expression the cast reduces to. Recall that a syntactic value is just an expression that cannot be further reduced. We can proceed by cases on the consistency of the source type \(A\) and target type \(B\).

  • Case \((v : \mathtt{Int} \Rightarrow^{\ell} \mathtt{Int})\). This one is easy, the static type system ensures that \(v\) has type \(\mathtt{Int}\), so there is nothing to check and we can rewrite to \(v\). \[ v : \mathtt{Int} \Rightarrow^{\ell} \mathtt{Int} \longrightarrow v \]
  • Case \((v : \star \Rightarrow^{\ell} \star)\). This one is also easy. \[ v : \star \Rightarrow^{\ell} \star \longrightarrow v \]
  • Case \((v : A \to A' \Rightarrow^{\ell} B \to B')\). This one is more complicated. We'd like to check that the function \(v\) has type \(B \to B'\). Suppose \(B'=\mathtt{Int}\). How can we determine whether a function returns an integer? In general, that's just as hard as the halting problem, which is undecidable. So instead of checking now, we'll delay the checking until when the function is called. We can accomplish this by rewriting to a lambda expression that casts the input, calls \(v\), and then casts the output. \[ v : A \to A' \Rightarrow^{\ell} B \to B' \longrightarrow \lambda x{:}B. (v \; (x : B \Rightarrow^{\ell} A)) : A' \Rightarrow^{\ell} B' \] Here we see the importance of attaching blame labels to casts. Because of the delayed checking, the point of error can be far removed from the original source code location, but thanks to the blame label we can point back to the source location of the cast that ultimately failed (Findler and Felleisen 2002).
  • Case \((v : A \Rightarrow^{\ell} \star)\). For this one there are multiple options in the literature. One option is to declare this a syntactic value (Siek 2009), so no rewrite rule is necessary. Another option is to factor all casts to \(\star\) through the ground types \(G\) (see the sketch of the ground-type helper after this list): \[ G ::= \mathtt{Int} \mid \star \to \star \] Then we expand the cast from \(A\) to \(\star\) into two casts that go through the unique ground type for \(A\). \begin{align*} v : A \Rightarrow^{\ell} \star &\longrightarrow (v : A \Rightarrow^{\ell} G) : G \Rightarrow^{\ell} \star\\ & \text{where } A \sim G, A \neq G, A \neq \star \end{align*} and then declare that expressions of the form \((v : G \Rightarrow^{\ell} \star)\) are values (Wadler 2009).
  • Case \((v : \star \Rightarrow^{\ell} B)\). There are multiple options here as well, but the choice is linked to the above choice regarding casting from \(A\) to \(\star\). If \(v = (v' : A \Rightarrow^{\ell'} \star)\), then we need the following rewrite rules \begin{align*} (v' : A \Rightarrow^{\ell'} \star) : \star \Rightarrow^{\ell} B &\longrightarrow v' : A \Rightarrow^{\ell} B & \text{if } A \sim B \\[2ex] (v' : A \Rightarrow^{\ell'} \star) : \star \Rightarrow^{\ell} B &\longrightarrow \mathsf{blame}\,\ell & \text{if } A \not\sim B \end{align*} On the other hand, if we want to factor through the ground types, we have the following reduction rules. \begin{align*} v : \star \Rightarrow^{\ell} B &\longrightarrow v : \star \Rightarrow^{\ell} G \Rightarrow^{\ell} B \\ & \text{if } B \sim G, B \neq G, B \neq \star \\[2ex] (v : G \Rightarrow^{\ell'} \star) : \star \Rightarrow^{\ell} G &\longrightarrow v \\[2ex] (v : G \Rightarrow^{\ell'} \star) : \star \Rightarrow^{\ell} G' &\longrightarrow \mathsf{blame}\,\ell\\ & \text{if } G \neq G' \end{align*}
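Here is the promised sketch of the ground-type helper used by the factoring option, reusing the hypothetical `Type` datatype from the earlier consistency sketch.

```haskell
-- The unique ground type that a non-Star type factors through when it
-- is cast to or from Star: Int is already ground, and every function
-- type collapses to Star -> Star.  Star itself has no ground type.
ground :: Type -> Maybe Type
ground TInt       = Just TInt
ground (TFun _ _) = Just (TFun Star Star)
ground Star       = Nothing
```

The side condition \(A \sim G, A \neq G, A \neq \star\) then corresponds to `ground a == Just g` together with `a /= g`.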

Given that we have multiple options regarding the reduction semantics, an immediate question is whether it matters, that is, can we actually observe different behaviors for some program? Yes, in the following example we cast the identity function on integers to an incorrect type. \begin{equation} \begin{array}{l} \mathtt{let}\, id = (\lambda x{:}\mathtt{Int}. x)\, \mathtt{in}\\ \mathtt{let}\, f = (id : \mathtt{Int}\to \mathtt{Int} \Rightarrow^{\ell_1} \star) \, \mathtt{in} \\ \mathtt{let}\, g = (f : \star \Rightarrow^{\ell_2} (\mathtt{Int}\to \mathtt{Int}) \to \mathtt{Int})\,\mathtt{in} \\ \quad g \; id \end{array} \tag{P0}\label{P0} \end{equation} If we choose the semantics that factors through ground types, the above program reduces to \(\mathsf{blame}\, \ell_1\). If we choose the other semantics, the above program reduces to \(\mathsf{blame}\, \ell_2\). Ever since around 2008 I've been wondering which of these is correct, though for the purposes of full disclosure, I've always felt that \(\mathsf{blame}\,\ell_2\) was the better choice for this program. I've also been thinking for a long time that it would be nice to have some alternative, hopefully more intuitive, way to specify the semantics of casts, with which we could then compare the above two alternatives.
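To see where the two semantics diverge on (P0), here is a rough sketch of the key steps, eliding the let-reductions and congruence steps. Under the semantics in which \((v : A \Rightarrow^{\ell} \star)\) is a value, the cast labeled \(\ell_2\) fires as soon as it receives that value: \[ (\mathit{id} : \mathtt{Int}\to\mathtt{Int} \Rightarrow^{\ell_1} \star) : \star \Rightarrow^{\ell_2} (\mathtt{Int}\to \mathtt{Int}) \to \mathtt{Int} \longrightarrow \mathsf{blame}\,\ell_2 \] because \(\mathtt{Int}\to\mathtt{Int} \not\sim (\mathtt{Int}\to \mathtt{Int}) \to \mathtt{Int}\). Under the factoring semantics, both casts go through the ground type \(\star\to\star\), so the projection at \(\ell_2\) succeeds and \(g\) becomes a function proxy; only when \(g\) is applied to \(\mathit{id}\) does the proxy for the \(\ell_1\) cast try to cast its argument from \(\star\) to \(\mathtt{Int}\), and since that argument is a function injected at \(\star\to\star\), the ground types disagree and the result is \(\mathsf{blame}\,\ell_1\).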

A Denotational Semantics of Functions and Casts

I've recently found out that there is a simple way to represent function values in a denotational semantics. The intuition is that, although a function may be able to deal with an infinite number of different inputs, the function only has to deal with a finite number of inputs on any one execution of the program. Thus, we can represent functions with finite tables of input-output pairs. An empty table is written \(\emptyset\), a single-entry table has the form \(v \mapsto v'\) where \(v\) is the input and \(v'\) is the corresponding output. We build a larger table out of two smaller tables \(v_1\) and \(v_2\) with the notation \(v_1 \sqcup v_2\). So, with the addition of integer values \(n \in \mathbb{Z}\), the following grammar specifies the values. \[ v ::= n \mid \emptyset \mid v \mapsto v \mid v \sqcup v \]
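For readers who like to prototype, the value grammar transcribes directly into a datatype; here is a sketch (the constructor names are mine, and the \(\mathsf{blame}\) case that is added below is omitted for now).

```haskell
-- A sketch of the value grammar: integers plus finite function tables
-- built from the empty table, single input/output entries, and joins.
data Val = VInt Integer    -- n
         | VEmpty          -- the empty table
         | VArrow Val Val  -- v |-> v'
         | VJoin Val Val   -- v1 ⊔ v2
  deriving (Eq, Show)
```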

Of course, we can't use just one fixed-size table as the denotation of a lambda expression. Depending on the context of the lambda, we may need a bigger table that handles more inputs. Therefore we map each lambda expression to the set of all finite tables that jive with that lambda. To be more precise, we shall define a meaning function \(\mathcal{E}\) that maps an expression and an environment to a set of values, and an auxiliary function \(\mathcal{F}\) that determines whether a table jives with a lambda expression in a given environment. Here's a first try at defining \(\mathcal{F}\). \begin{align*} \mathcal{F}(n, \lambda x{:}A. e, \rho) &= \mathsf{false} \\ \mathcal{F}(\emptyset, \lambda x{:}A. e, \rho) &= \mathsf{true} \\ \mathcal{F}(v \mapsto v', \lambda x{:}A. e, \rho) &= \mathcal{T}(A,v) \text{ and } v' \in \mathcal{E}[\!| e |\!]\rho(x{:=}v) \\ \mathcal{F}(v_1 \sqcup v_2, \lambda x{:}A. e, \rho) &= \mathcal{F}(v_1, \lambda x{:}A. e, \rho) \text{ and } \mathcal{F}(v_2, \lambda x{:}A. e, \rho) \end{align*} (We shall define \(\mathcal{T}(A,v)\) shortly.) We then define the semantics of a lambda-expression in terms of \(\mathcal{F}\). \[ \mathcal{E}[\!| \lambda x{:}A.\, e|\!]\rho = \{ v \mid \mathcal{F}(v, \lambda x{:}A. e, \rho) \} \] The semantics of function application is essentially that of table lookup. We write \((v_2 \mapsto v) \sqsubseteq v_1\) to say, roughly, that \(v_2 \mapsto v\) is an entry in the table \(v_1\). (We give the full definition of \(\sqsubseteq\) in the Appendix.) \[ \mathcal{E}[\!| e_1 \, e_2 |\!]\rho = \left\{ v \middle| \begin{array}{l} \exists v_1 v_2.\; v_1 \in \mathcal{E}[\!| e_1 |\!]\rho \text{ and } v_2 \in \mathcal{E}[\!| e_2 |\!]\rho \\ \text{ and } (v_2 \mapsto v) \sqsubseteq v_1 \end{array} \right\} \] Finally, to give meaning to lambda-bound variables, we simply look them up in the environment. \[ \mathcal{E}[\!| x |\!]\rho = \{ \rho(x) \} \]

Now that we have a good representation for function values, we can talk about giving meaning to higher-order casts, that is, casts from one function type to another. Recall that in our strawman semantics, we got stuck when trying to define the meaning of types in the form of a map \(\mathcal{T}\) from a type to a set of values. Now we can proceed based on the above definition of values \(v\). (To make the termination of \(\mathcal{T}\) more obvious, we'll instead define \(\mathcal{T}\) as a map from a type and a value to a Boolean. The measure is a lexicographic ordering on the size of the type and then the size of the value.) \begin{align*} \mathcal{T}(\mathtt{Int}, v) &= (\exists n. \; v = n) \\ \mathcal{T}(\star, v) &= \mathsf{true} \\ \mathcal{T}(A \to B, n) &= \mathsf{false} \\ \mathcal{T}(A \to B, \emptyset) &= \mathsf{true} \\ \mathcal{T}(A \to B, v \mapsto v') &= \mathcal{T}(A, v) \text{ and } \mathcal{T}(B, v') \\ \mathcal{T}(A \to B, v_1 \sqcup v_2) &= \mathcal{T}(A \to B, v_1) \text{ and } \mathcal{T}(A \to B, v_2) \end{align*} With \(\mathcal{T}\) defined, we define the meaning of casts as follows. \begin{align*} \mathcal{E} [\!| e : A \Rightarrow^{\ell} B |\!]\rho &= \{ v \mid v \in \mathcal{E} [\!| e |\!]\rho \text{ and } \mathcal{T}(B, v) \}\\ & \quad\; \cup \left\{ \mathsf{blame}\,\ell \middle| \begin{array}{l} \exists v.\; v \in \mathcal{E} [\!| e |\!]\rho \text{ and } \neg \mathcal{T}(B, v)\\ \text{and } (\forall \ell'. v \neq \mathsf{blame}\,\ell') \end{array}\right\}\\ & \quad\; \cup \{ \mathsf{blame}\,\ell' \mid \mathsf{blame}\,\ell' \in \mathcal{E} [\!| e |\!]\rho \} \end{align*} This version says that the result of the cast should only be those values of \(e\) that also have type \(B\). It also says that we signal an error when a value of \(e\) does not have type \(B\). Also, if there was an error in \(e\) then we propagate it. The really interesting thing about this semantics is that, unlike the reduction semantics, we actually check functions at the moment they go through the cast, instead of delaying the check to when they are called. We immediately determine whether the function is of the target type. If the function is not of the target type, we can immediately attribute blame to this cast, so there is no need for complex blame tracking rules.
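Since \(\mathcal{T}\) is defined by structural recursion on the type and then the value, it transcribes directly into a terminating function; here is a sketch over the hypothetical `Type` and `Val` datatypes from the earlier sketches (still without the \(\mathsf{blame}\) case).

```haskell
-- T(A, v): is the value v a member of the type A?  Mirrors the six
-- equations above; each call decreases the type or the value.
hasType :: Type -> Val -> Bool
hasType TInt  (VInt _)           = True
hasType TInt  _                  = False
hasType Star  _                  = True
hasType (TFun _ _) (VInt _)      = False
hasType (TFun _ _) VEmpty        = True
hasType (TFun a b) (VArrow v v') = hasType a v && hasType b v'
hasType (TFun a b) (VJoin v1 v2) = hasType (TFun a b) v1 && hasType (TFun a b) v2
```

The cast semantics then keeps exactly those values \(v \in \mathcal{E}[\!|e|\!]\rho\) for which `hasType b v` holds, and produces \(\mathsf{blame}\,\ell\) whenever some value fails the check.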

Of course, we need to extend values to include blame: \[ v ::= n \mid \emptyset \mid v \mapsto v \mid v \sqcup v \mid \mathsf{blame}\,\ell \] and augment \(\mathcal{T}\) and \(\mathcal{F}\) to handle \(\mathsf{blame}\,\ell\). \begin{align*} \mathcal{T}(A\to B, \mathsf{blame}\,\ell) &= \mathsf{false} \\ \mathcal{F}(\mathsf{blame}\,\ell, \lambda x{:}A.e, \rho) &= \mathsf{false} \end{align*} To propagate errors to the meaning of the entire program, we augment the meaning of other language forms, such as function application to pass along blame. \begin{align*} \mathcal{E}[\!| e_1 \, e_2 |\!]\rho &= \left\{ v \middle| \begin{array}{l} \exists v_1 v_2.\; v_1 \in \mathcal{E}[\!| e_1 |\!]\rho \text{ and } v_2 \in \mathcal{E}[\!| e_2 |\!]\rho \\ \text{and } (v_2 \mapsto v) \sqsubseteq v_1 \end{array} \right\} \\ & \quad\; \cup \{ \mathsf{blame}\, \ell \mid \mathsf{blame}\, \ell \in \mathcal{E}[\!| e_1 |\!]\rho \text{ or } \mathsf{blame}\,\ell \in \mathcal{E}[\!| e_2 |\!]\rho\} \end{align*}

Two Examples

Let us consider the ramifications of this semantics. The following example program creates a function \(f\) that returns \(0\) on non-zero input and returns the identity function when applied to \(0\). We cast this function to the type \(\mathtt{Int}\to\mathtt{Int}\) on two separate occasions, cast \(\ell_3\) and cast \(\ell_4\), to create \(g\) and \(h\). We apply \(g\) to \(1\) and \(h\) to its result. \[ \begin{array}{l} \mathtt{let}\,f = \left(\lambda x:\mathtt{Int}.\; \begin{array}{l} \mathtt{if}\, x \,\mathtt{then}\, (0: \mathtt{Int}\Rightarrow^{\ell_1}\,\star)\\ \mathtt{else}\, ((\lambda y:\mathtt{Int}.\; y) : \mathtt{Int}\to\mathtt{Int}\Rightarrow^{\ell_2} \, \star) \end{array} \right) \; \mathtt{in} \\ \mathtt{let}\,g = (f : \mathtt{Int}\to\star \Rightarrow^{\ell_3} \mathtt{Int}\to\mathtt{Int})\, \mathtt{in} \\ \mathtt{let}\,h = (f : \mathtt{Int}\to\star \Rightarrow^{\ell_4} \mathtt{Int}\to\mathtt{Int})\, \mathtt{in} \\ \mathtt{let}\,z = (g \; 1)\, \mathtt{in} \\ \quad (h\; z) \end{array} \] The meaning of this program is \(\{ \mathsf{blame}\,\ell_3, \mathsf{blame}\,\ell_4\}\). To understand this outcome, we can analyze the meaning of the various parts of the program. (The semantics is compositional!) Toward writing down the denotation of \(f\), let's define auxiliary functions \(id\) and \(F\). \begin{align*} id(n) &= \mathsf{false} \\ id(\emptyset) &= \mathsf{true} \\ id(v \mapsto v') &= (v = v') \\ id(v_1 \sqcup v_2) &= id(v_1) \text{ and } id(v_2) \\ id(\mathsf{blame}\,\ell) &= \mathsf{false} \\ \\ F(n) &= \textsf{false} \\ F(\emptyset) &= \textsf{true} \\ F(0 \mapsto v) &= \mathit{id}(v) \\ F(n \mapsto 0) &= (n \neq 0)\\ F(v_1 \sqcup v_2) &= F(v_1) \text{ and } F(v_2) \\ F(\mathsf{blame}\,\ell) &= \mathsf{false} \end{align*} The denotation of \(f\) is \[ \mathcal{E}[\!| f |\!] = \{ v \mid F(v) \} \] To express the denotation of \(g\), we define \(G\): \begin{align*} G(n) &= \textsf{false} \\ G(\emptyset) &= \textsf{true} \\ G(n \mapsto 0) &= (n \neq 0) \\ G(v_1 \sqcup v_2) &= G(v_1) \text{ and } G(v_2) \\ G(\mathsf{blame}\,\ell) &= \mathsf{false} \end{align*} The meaning of \(g\) is all the values that satisfy \(G\) and also \(\mathsf{blame}\,\ell_3\). \[ \mathcal{E}[\!| g |\!] = \{ v \mid G(v) \} \cup \{ \mathsf{blame}\, \ell_3 \} \] The meaning of \(h\) is similar, but with different blame. \[ \mathcal{E}[\!| h |\!] = \{ v \mid G(v) \} \cup \{ \mathsf{blame}\, \ell_4 \} \] The function \(g\) applied to \(1\) produces \(\{ 0, \mathsf{blame}\, \ell_3\}\), whereas \(h\) applied to \(0\) produces \(\{ \mathsf{blame}\, \ell_4\}\). Thus, the meaning of the whole program is \(\{ \mathsf{blame}\,\ell_3, \mathsf{blame}\,\ell_4\}\).

Because cast \(\ell_3\) signals an error, one might be tempted to have the meaning of \(g\) be just \(\{ \mathsf{blame}\,\ell_3\}\). However, we want to allow implementations of this language that do not blame \(\ell_3\) (\(g\) is never applied to \(0\) after all, so its guilt was not directly observable) and instead blame \(\ell_4\), which was caught red-handed. So it is important for the meaning of \(g\) to include the subset of values from \(f\) that have type \(\mathtt{Int}\to\mathtt{Int}\) so that we can carry on and find other errors as well. We shall expect implementations of this language to be sound with respect to blame, that is, if execution results in blame, it should blame one of the labels that is in the denotation of the program (and not some other innocent cast).

Let us return to the example (P0). The denotation of that program is \(\{\mathsf{blame}\,\ell_2\}\) because the cast at \(\ell_2\) is a cast to \((\mathtt{Int}\to \mathtt{Int}) \to \mathtt{Int}\) and the identity function is not of that type. The other cast at \(\ell_1\) is innocent because it is a cast to \(\star\) and all values are of that type, including the identity function.

Discussion

By giving the cast calculus a denotational semantics in terms of finite function tables, it became straightforward to define whether a function value is of a given type. This in turn made it easy to define the meaning of casts, even casts at function type. A cast succeeds if the input value is of the target type and it fails otherwise. With this semantics we assign blame to a cast in an eager fashion, without the need for the blame tracking machinery that is present in the operational semantics.

We saw an example program where the reduction semantics that factors through ground types attributes blame to a cast that the denotational semantics says is innocent. This provides some evidence that the factoring semantics is the less desirable of the two.

I plan to investigate whether the alternative reduction semantics is sound with respect to the denotational semantics in the sense that the reduction semantics only blames a cast if the denotational semantics says it is guilty.

Appendix

We give the full definition of the cast calculus here in the appendix. The relation \(\sqsubseteq\) that we used to define table lookup is the dual of the subtyping relation for intersection types. The denotational semantics is a mild reformulation of the intersection type system that I discussed in previous blog posts.

Syntax \[ \begin{array}{lcl} A &::=& \mathtt{Int} \mid A \to B \mid \star \\ e &::= &n \mid \mathit{op}(e,e) \mid \mathtt{if}\, e\, \mathtt{then}\, e \,\mathtt{else}\, e \mid x \mid \lambda x{:}A.\; e \mid e \; e \mid e : A \Rightarrow^\ell B \end{array} \] Consistency \begin{gather*} \mathtt{Int} \sim \mathtt{Int} \qquad \frac{A \sim B \qquad A' \sim B'} {A \to A' \sim B \to B'} \qquad A \sim \star \qquad \star \sim B \end{gather*} Type System \begin{gather*} \frac{}{\Gamma \vdash n : \mathtt{Int}} \quad \frac{\Gamma \vdash e_1 : \mathtt{Int} \quad \Gamma \vdash e_2 : \mathtt{Int}} {\Gamma \vdash \mathit{op}(e_1,e_2) : \mathtt{Int}} \\[2ex] \frac{\Gamma \vdash e_1 : \mathtt{Int} \quad \Gamma \vdash e_2 : A \quad \Gamma \vdash e_3 : A} {\Gamma \vdash \mathtt{if}\, e_1\, \mathtt{then}\, e_2 \,\mathtt{else}\, e_3 : A} \\[2ex] \frac{x{:}A \in \Gamma}{\Gamma \vdash x : A} \quad \frac{\Gamma,x{:}A \vdash e : B}{\Gamma \vdash \lambda x{:}A.\; e : A \to B} \quad \frac{\Gamma \vdash e_1 : A \to B \quad \Gamma \vdash e_2 : A} {\Gamma \vdash e_1 \; e_2 : B} \\[2ex] \frac{\Gamma \vdash e : A \quad A \sim B} {\Gamma \vdash (e : A \Rightarrow^\ell B) : B} \end{gather*} Values \[ v ::= n \mid \emptyset \mid v \mapsto v \mid v \sqcup v \mid \mathsf{blame}\,\ell \] Table Lookup (Value Information Ordering) \begin{gather*} \frac{}{n \sqsubseteq n} \quad \frac{v'_1 \sqsubseteq v_1 \quad v_2 \sqsubseteq v'_2} {v_1 \mapsto v_2 \sqsubseteq v'_1 \mapsto v'_2} \quad \frac{}{\mathsf{blame}\,\ell \sqsubseteq \mathsf{blame}\,\ell} \\[2ex] \frac{}{v_1 \sqsubseteq v_1 \sqcup v_2} \quad \frac{}{v_2 \sqsubseteq v_1 \sqcup v_2} \quad \frac{v_1 \sqsubseteq v_3 \quad v_2 \sqsubseteq v_3} {v_1 \sqcup v_2 \sqsubseteq v_3} \\[2ex] \frac{}{v_1 \mapsto (v_2 \sqcup v_3) \sqsubseteq (v_1 \mapsto v_2) \sqcup (v_1 \mapsto v_3)} \quad \frac{}{\emptyset \sqsubseteq v_1 \mapsto v_2} \quad \frac{}{\emptyset \sqsubseteq \emptyset} \end{gather*} Semantics of Types \begin{align*} \mathcal{T}(\mathtt{Int}, v) &= (\exists n. \; v = n) \\ \mathcal{T}(\star, v) &= \mathsf{true} \\ \mathcal{T}(A \to B, n) &= \mathsf{false} \\ \mathcal{T}(A \to B, \emptyset) &= \mathsf{true} \\ \mathcal{T}(A \to B, v \mapsto v') &= \mathcal{T}(A, v) \text{ and } \mathcal{T}(B, v') \\ \mathcal{T}(A \to B, v_1 \sqcup v_2) &= \mathcal{T}(A \to B, v_1) \text{ and } \mathcal{T}(A \to B, v_2) \\ \mathcal{T}(A\to B, \mathsf{blame}\,\ell) &= \mathsf{false} \end{align*} Denotational Semantics \begin{align*} \mathcal{E}[\!| n |\!]\rho &= \{ n \}\\ \mathcal{E}[\!| \mathit{op}(e_1,e_2) |\!]\rho &= \left\{ v \middle| \begin{array}{l} \exists v_1 v_2 n_1 n_2.\; v_1 \in \mathcal{E}[\!| e_1 |\!]\rho \land v_2 \in \mathcal{E}[\!| e_2 |\!]\rho \\ \land\; n_1 \sqsubseteq v_1 \land n_2 \sqsubseteq v_2 \land v = [\!| \mathit{op} |\!](n_1,n_2) \end{array} \right\}\\ & \quad\; \cup \{ \mathsf{blame}\,\ell' \mid \mathsf{blame}\,\ell' \in (\mathcal{E} [\!| e_1 |\!]\rho \cup \mathcal{E} [\!| e_2 |\!]\rho) \} \\ \mathcal{E}[\!| \mathtt{if}\, e_1\, \mathtt{then}\, e_2 \,\mathtt{else}\, e_3 |\!]\rho &= \left\{ v \middle| \begin{array}{l} \exists v_1 n. 
v_1 \in \mathcal{E}[\!| e_1 |\!]\rho \land n \sqsubseteq v_1 \\ \land\; (n = 0 \Longrightarrow v \in \mathcal{E}[\!| e_3 |\!]\rho) \\ \land\; (n \neq 0 \Longrightarrow v \in \mathcal{E}[\!| e_2 |\!]\rho) \end{array} \right\}\\ & \quad\; \cup \{ \mathsf{blame}\,\ell' \mid \mathsf{blame}\,\ell' \in (\mathcal{E} [\!| e_1 |\!]\rho \cup \mathcal{E} [\!| e_2 |\!]\rho \cup \mathcal{E} [\!| e_3 |\!]\rho ) \} \\ \mathcal{E}[\!| x |\!]\rho &= \{ \rho(x) \}\\ \mathcal{E}[\!| \lambda x{:}A.\, e|\!]\rho &= \{ v \mid \mathcal{F}(v, \lambda x{:}A.\, e, \rho) \} \\ \mathcal{E}[\!| e_1 \, e_2 |\!]\rho &= \left\{ v \middle| \begin{array}{l} \exists v_1 v_2.\; v_1 \in \mathcal{E}[\!| e_1 |\!]\rho \land v_2 \in \mathcal{E}[\!| e_2 |\!]\rho \\ \land\; (v_2 \mapsto v) \sqsubseteq v_1 \end{array} \right\} \\ & \quad\; \cup \{ \mathsf{blame}\,\ell' \mid \mathsf{blame}\,\ell' \in (\mathcal{E} [\!| e_1 |\!]\rho \cup \mathcal{E} [\!| e_2 |\!]\rho) \} \\ \mathcal{E} [\!| e : A \Rightarrow^{\ell} B |\!]\rho &= \{ v \mid v \in \mathcal{E} [\!| e |\!]\rho \text{ and } \mathcal{T}(B, v) \}\\ & \quad\; \cup \left\{ \mathsf{blame}\,\ell \middle| \begin{array}{l}\exists v.\; v \in \mathcal{E} [\!| e |\!] \rho \text{ and } \neg \mathcal{T}(B, v)\\ \text{and } (\forall \ell'. v \neq \mathsf{blame}\,\ell') \end{array} \right\} \\ & \quad\; \cup \{ \mathsf{blame}\,\ell' \mid \mathsf{blame}\,\ell' \in \mathcal{E} [\!| e |\!]\rho \} \\ \mathcal{F}(n, \lambda x{:}A.\, e, \rho) &= \mathsf{false} \\ \mathcal{F}(\emptyset, \lambda x{:}A.\, e, \rho) &= \mathsf{true} \\ \mathcal{F}(v \mapsto v', \lambda x{:}A.\, e, \rho) &= \mathcal{T}(A,v) \text{ and } v' \in \mathcal{E}[\!| e |\!]\rho(x{:=}v) \\ \mathcal{F}(v_1 \sqcup v_2, \lambda x{:}A.\, e, \rho) &= \mathcal{F}(v_1, \lambda x{:}A.\, e, \rho) \text{ and } \mathcal{F}(v_2, \lambda x{:}A.\, e, \rho) \\ \mathcal{F}(\mathsf{blame}\,\ell, \lambda x{:}A.e, \rho) &= \mathsf{false} \end{align*}

References

  • (Findler 2002) Contracts for higher-order functions. R. B. Findler and M. Felleisen. International Conference on Functional Programming. 2002.
  • (Gronski 2006) Sage: Hybrid Checking for Flexible Specifications. Jessica Gronski and Kenneth Knowles and Aaron Tomb and Stephen N. Freund and Cormac Flanagan. Scheme and Functional Programming Workshop, 2006.
  • (Scott 1970) Outline of a Mathematical Theory of Computation. Dana Scott. Oxford University. 1970. Technical report PRG-2.
  • (Scott 1976) Data Types as Lattices. Dana Scott. SIAM Journal on Computing. 1976. Volume 5, Number 3.
  • (Siek 2009) Exploring the Design Space of Higher-Order Casts. Jeremy G. Siek and Ronald Garcia and Walid Taha. European Symposium on Programming. 2009.
  • (Wadler 2009) Well-typed programs can't be blamed. Philip Wadler and Robert Bruce Findler. European Symposium on Programming. 2009.

Monday, January 30, 2017

Completeness of Intersection Types wrt. an Applied CBV Lambda Calculus

I'm still quite excited about the simple denotational semantics and looking forward to applying it to the semantics of gradually typed languages. However, before building on it I'd like to make sure it's correct. Recall that I proved soundness of the simple semantics with respect to a standard big-step operational semantics, but I did not prove completeness. Completeness says that if the operational semantics says that the program reduces to a particular value, then the denotational semantics does too. Recall that the first version of the simple semantics that I gave was not complete! It couldn't handle applying a function to itself, which is needed for the \(Y\) combinator and recursion. I've written down a fix, but the question remains whether the fix is good enough, that is, can we prove completeness? In the meantime, I learned that the simple semantics is closely related to filter models based on type systems with intersection types. This is quite helpful because that literature includes many completeness results for pure lambda calculi; see, for example, Intersection Types and Computational Rules by Alessi, Barbanera, and Dezani-Ciancaglini (2003).

In this blog post I prove completeness for an intersection type system with respect to a call-by-value lambda calculus augmented with numbers, primitive operators (addition, multiplication, etc.), and a conditional if-expression. The main outline of the proof is adapted from the above-cited paper, in which completeness is proved with respect to small-step operational semantics, though you'll find more details (i.e. lemmas) here because I've mechanized the proof in Isabelle and can't help but share in my suffering ;) (Lambda.thy, SmallStepLam.thy, IntersectComplete.thy) Ultimately I would like to prove completeness for the simple denotational semantics, but a good first step is doing the proof for a system that is in between the simple semantics and the intersection type systems in the literature.

The intersection type system I use here differs from ones in the literature in that I restrict the \(\wedge\) introduction rule to \(\lambda\)'s instead of applying it to any expression, as shown below. I recently realized that this change does not disturb the proof of Completeness because we're dealing with a call-by-value language. \[ \frac{\Gamma \vdash \lambda x.\, e : A \quad \Gamma \vdash \lambda x.\, e : B} {\Gamma \vdash \lambda x.\, e : A \wedge B}(\wedge\,\mathrm{intro}) \] I would like to remove the subsumption rule \[ \frac{\Gamma \vdash e : A \quad A <: B} {\Gamma \vdash e : B}(\mathrm{Sub}) \] but doing so was increasing the complexity of the proof of Completeness. Instead I plan to separately prove that the version without subsumption is equivalent to the version with subsumption. One might also consider doing the same regarding our above change to the \(\wedge\) introduction rule. I have also been working on that approach, but proving the admissibility of the standard \(\wedge\) introduction rule has turned out to be rather difficult (but interesting!).

Definition of an Applied CBV Lambda Calculus

Let us dive into the formalities and define the language that we're interested in. Here are the types, which include function types, intersection types, the top function type (written \(\top\)), and singleton numbers. Our \(\top\) corresponds to the type \(\nu\) from Egidi, Honsell, and Rocca (1992). See also Alessi et al. (2003). \[ A,B,C ::= A \to B \mid A \wedge B \mid \top \mid n \] and here are the expressions: \[ e ::= n \mid \mathit{op}(e,e) \mid \mathrm{if}\,e\,\mathrm{then}\,e\,\mathrm{else}\,e \mid x \mid \lambda x.\, e \mid e\,e \] where \(n\) ranges over numbers and \(\mathit{op}\) ranges over arithmetic operators such as addition.
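As a quick sanity check on the grammar, here is a transcription into datatypes; the names, the use of `String`-valued variables, and the particular set of operators are my choices, not part of the development.

```haskell
-- A sketch of the types and expressions of the applied CBV calculus.
data Ty = TArrow Ty Ty   -- A -> B
        | TAnd Ty Ty     -- A /\ B  (intersection)
        | Top            -- the top function type
        | TSing Integer  -- singleton number type n
  deriving (Eq, Show)

data Op = Add | Mul
  deriving (Eq, Show)

data Expr = ENum Integer
          | EOp Op Expr Expr
          | EIf Expr Expr Expr
          | EVar String
          | ELam String Expr
          | EApp Expr Expr
  deriving (Eq, Show)
```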

We define type environments as an association list mapping variables to types. \[ \Gamma ::= \emptyset \mid \Gamma,x:A \]

The type system, defined below, is unusual in that it is highly precise. Note that the rule for arithmetic operators produces a precise singleton result and that the rules for if-expressions require the condition to be a singleton number (zero or non-zero) so that it knows which branch is taken. Thus, this type system is really a kind of dynamic semantics.

\begin{gather*} \frac{}{\Gamma \vdash n : n} \\[2ex] \frac{} {\Gamma \vdash \lambda x.\, e : \top}(\top\,\mathrm{intro}) \quad \frac{\Gamma \vdash \lambda x.\, e : A \quad \Gamma \vdash \lambda x.\, e : B} {\Gamma \vdash \lambda x.\, e : A \wedge B}(\wedge\,\mathrm{intro}) \\[2ex] \frac{\Gamma \vdash e : A \quad A <: B} {\Gamma \vdash e : B}(\mathrm{Sub}) \\[2ex] \frac{x:A \in \Gamma}{\Gamma \vdash x : A} \quad \frac{\Gamma,x:A \vdash e : B} {\Gamma \vdash \lambda x.\, e : A \to B}(\to\mathrm{intro}) \quad \frac{\Gamma \vdash e_1: A \to B \quad \Gamma \vdash e_2 : A} {\Gamma \vdash e_1 \; e_2 : B}(\to\mathrm{elim}) \\[2ex] \frac{\Gamma \vdash e_1 : n_1 \quad \Gamma \vdash e_2 : n_2 \quad [\!|\mathit{op}|\!](n_1,n_2) = n_3} {\Gamma \vdash \mathit{op}(e_1,e_2) : n_3} \\[2ex] \frac{\Gamma \vdash e_1 : 0 \quad \Gamma \vdash e_3 : B} {\Gamma \vdash \mathrm{if}\,e_1\,\mathrm{then}\,e_2\,\mathrm{else}\,e_3 : B} \quad \frac{\Gamma \vdash e_1 : n \quad n \neq 0 \quad \Gamma \vdash e_2 : A} {\Gamma \vdash \mathrm{if}\,e_1\,\mathrm{then}\,e_2\,\mathrm{else}\,e_3 : A} \end{gather*}

The rules for subtyping come from the literature.

\begin{gather*} \frac{}{n <: n} \quad \frac{}{\top <: \top} \quad \frac{}{A \to B <: \top} \quad \frac{A' <: A \quad B <: B'} {A \to B <: A' \to B'}(<:\to) \\[2ex] \frac{C <: A \quad C <: B}{C <: A \wedge B} \quad \frac{}{A \wedge B <: A} \quad \frac{}{A \wedge B <: B} \\[2ex] \frac{}{(C\to A) \wedge (C \to B) <: C \to (A \wedge B)} \end{gather*}

We shall be working with values that are well typed in an empty type environment. This usually implies that the values have no free variables. However, that is not true of the current type system because of the \(\top\) introduction rule. So we add a side condition for \(\lambda\) in our definition of values. (In retrospect, I should have instead included a statement about free variables in the main Completeness theorem and then propagated that information to where it is needed.) \[ v ::= n \mid \lambda x.\, e \quad \text{where } FV(e) \subseteq \{x\} \]

We use a naive notion of substitution (not capture avoiding) because the \(v\)'s have no free variables to capture. \begin{align*} [x:=v] y &= \begin{cases} v & \text{if } x = y \\ y & \text{if } x \neq y \end{cases} \\ [x:=v] n &= n \\ [x:=v] (\lambda y.\, e) &= \begin{cases} \lambda y.\, e & \text{if } x = y \\ \lambda y.\, [x:=v] e & \text{if } x \neq y \end{cases} \\ [x:=v](e_1\, e_2) &= ([x:=v]e_1\, [x:=v]e_2) \\ [x:=v]\mathit{op}(e_1, e_2) &= \mathit{op}([x:=v]e_1, [x:=v]e_2) \\ [x:=v](\mathrm{if}\,e_1\,\mathrm{then}\,e_2\,\mathrm{else}\,e_3) &= \mathrm{if}\,[x:=v]e_1\,\mathrm{then}\,[x:=v]e_2\,\mathrm{else}\,[x:=v]e_3 \end{align*}
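The substitution function transcribes directly; here is a sketch over the hypothetical `Expr` datatype from the syntax sketch above.

```haskell
-- Naive (non-capture-avoiding) substitution [x := v]e; this is safe
-- here because the substituted value v is closed.
subst :: String -> Expr -> Expr -> Expr
subst x v (EVar y)
  | x == y               = v
  | otherwise            = EVar y
subst _ _ (ENum n)       = ENum n
subst x v (ELam y e)
  | x == y               = ELam y e
  | otherwise            = ELam y (subst x v e)
subst x v (EApp e1 e2)   = EApp (subst x v e1) (subst x v e2)
subst x v (EOp op e1 e2) = EOp op (subst x v e1) (subst x v e2)
subst x v (EIf e1 e2 e3) = EIf (subst x v e1) (subst x v e2) (subst x v e3)
```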

The small-step operational semantics is defined by the following reduction rules. I'm not sure why I chose to use SOS-style rules instead of evaluation contexts. \begin{gather*} \frac{}{(\lambda x.\,e) \; v \longrightarrow [x:=v]e} \quad \frac{e_1 \longrightarrow e'_1}{e_1\,e_2 \longrightarrow e'_1 \, e_2} \quad \frac{e_2 \longrightarrow e'_2}{e_1\,e_2 \longrightarrow e_1 \, e'_2} \\[2ex] \frac{}{\mathit{op}(n_1,n_2) \longrightarrow [\!|\mathit{op}|\!](n_1,n_2)} \quad \frac{e_1 \longrightarrow e'_1} {\mathit{op}(e_1,e_2) \longrightarrow \mathit{op}(e'_1,e_2)} \quad \frac{e_2 \longrightarrow e'_2} {\mathit{op}(e_1,e_2) \longrightarrow \mathit{op}(e_1,e'_2)} \\[2ex] \frac{}{\mathrm{if}\,0\,\mathrm{then}\,e_2\,\mathrm{else}\,e_3 \longrightarrow e_3} \quad \frac{n \neq 0} {\mathrm{if}\,n\,\mathrm{then}\,e_2\,\mathrm{else}\,e_3 \longrightarrow e_2} \\[2ex] \frac{e_1 \longrightarrow e'_1} {\mathrm{if}\,e_1\,\mathrm{then}\,e_2\,\mathrm{else}\,e_3 \longrightarrow \mathrm{if}\,e'_1\,\mathrm{then}\,e_2\,\mathrm{else}\,e_3} \end{gather*} \[ \frac{}{e \longrightarrow^{*} e} \qquad \frac{e_1 \longrightarrow e_2 \quad e_2 \longrightarrow^{*} e_3} {e_1 \longrightarrow^{*} e_3} \]
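The reduction relation is nondeterministic (either side of an application or primitive operation may step first); the following sketch commits to one left-to-right strategy that the rules allow, reusing `Expr`, `Op`, and `subst` from the earlier sketches.

```haskell
-- One deterministic (left-to-right, call-by-value) strategy compatible
-- with the reduction rules; the rules also permit stepping e2 first.
isValue :: Expr -> Bool
isValue (ENum _)   = True
isValue (ELam _ _) = True
isValue _          = False

delta :: Op -> Integer -> Integer -> Integer
delta Add = (+)
delta Mul = (*)

step :: Expr -> Maybe Expr
step (EApp (ELam x e) v)
  | isValue v                     = Just (subst x v e)
step (EApp e1 e2)
  | not (isValue e1)              = (\e1' -> EApp e1' e2) <$> step e1
  | otherwise                     = EApp e1 <$> step e2
step (EOp op (ENum n1) (ENum n2)) = Just (ENum (delta op n1 n2))
step (EOp op e1 e2)
  | not (isValue e1)              = (\e1' -> EOp op e1' e2) <$> step e1
  | otherwise                     = EOp op e1 <$> step e2
step (EIf (ENum n) e2 e3)         = Just (if n == 0 then e3 else e2)
step (EIf e1 e2 e3)               = (\e1' -> EIf e1' e2 e3) <$> step e1
step _                            = Nothing
```

Iterating `step` until it returns `Nothing` computes one path of \(\longrightarrow^{*}\).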

Proof of Completeness

The theorem that we aim to prove is that if the operational semantics says that a program reduces to a value, then the program is typable in the intersection type system and that the result type precisely describes the result value. I'm going to present the proof in a top-down style, so the proof of each lemma that I use is found further along in this blog post.

Theorem (Completeness)
If \(e \longrightarrow^{*} v\), then \(\emptyset \vdash e : A\) and \(\emptyset \vdash v : A\) for some type \(A\).
Proof
Every value is typable (use the \(\top\) introduction rule for \(\lambda\)), so we have some \(A\) such that \(\emptyset \vdash v : A\). We shall show that typing is preserved by reverse reduction, which will give us \(\emptyset \vdash e : A\). QED

Lemma (Reverse Multi-Step Preserves Types)
If \(e \longrightarrow^{*} e'\) and \(\emptyset \vdash e' : A\), then \(\emptyset \vdash e : A\).
Proof
The proof is by induction on the derivation of \(e \longrightarrow^{*} e'\). The base case is trivial. The induction case requires that typing be preserved by a single step of reduction, which we prove next. QED

Lemma (Reverse Single-Step Preserves Types)
If \(e \longrightarrow e'\) and \(\emptyset \vdash e' : A\), then \(\emptyset \vdash e : A\).
Proof
The proof is by induction on the derivation of \(e \longrightarrow e'\). The most important case is for function application: \[ (\lambda x.\,e) \; v \longrightarrow [x:=v]e \] We have that \(\emptyset \vdash [x:=v]e : A\) and need to show that \(\emptyset \vdash (\lambda x.\,e) \; v : A\). That is, we need to show that call-by-value \(\beta\)-expansion preserves types. So we need \(x:B \vdash e : A\) and \(\emptyset \vdash v : B\) for some type \(B\). The proof of this was the crux and required some generalization; I found it difficult to find the right statement of the lemma. It is proved below under the name Reverse Substitution Preserves Types. The other cases of this proof are straightforward except for one hiccup. They all require inversion lemmas (aka. generation lemmas) to unpack the information from \(\emptyset \vdash e' : A\). However, as is usual for languages with subsumption, the inversion lemmas are not simply proved by case analysis on typing rules, but must instead be proved by induction on the typing derivations. QED

Lemma (Inversion)

  1. If \(\Gamma \vdash n : A \), then \(n <: A\).
  2. If \(\Gamma \vdash e_1\,e_2 : A \), then \( \Gamma \vdash e_1 : B \to A' \), \( A' <: A \), and \( \Gamma \vdash e_2 : B \) for some \(A'\) and \(B\).
  3. If \(\Gamma \vdash \mathit{op}(e_1,e_2) : A\), then \(\Gamma \vdash e_1 : n_1\), \(\Gamma \vdash e_2 : n_2 \), \(\Gamma \vdash \mathit{op}(e_1,e_2) : [\!|\mathit{op}|\!](n_1,n_2) \), and \([\!|\mathit{op}|\!](n_1,n_2) <: A\) for some \(n_1\) and \(n_2\).
  4. If \(\Gamma \vdash \mathrm{if}\,e_1\,\mathrm{then}\,e_2\,\mathrm{else}\,e_3 : A\), then either
    • \(\Gamma \vdash e_1 : 0\), \(\Gamma\vdash e_3 : B\), and \(B <: A\), for some \(B\).
    • \(\Gamma \vdash e_1 : n\), \(n \neq 0\), \(\Gamma\vdash e_2 : A'\), and \(A' <: A\), for some \(A'\).
Proof The proofs are by induction on the derivation of typing. QED

To state the reverse substitution lemma in a way that provides a useful induction hypothesis in the case for \(\lambda\), we introduce a notion of equivalence of type environments: \[ \Gamma \approx \Gamma' = (x : A \in \Gamma \text{ iff } x : A \in \Gamma') \] The reverse substitution lemma will show that if \([y:=v]e\) is well typed, then so is \(e\) in the environment extended with \(y:B\), for some appropriate choice of \(B\). Now, the value \(v\) may appear in multiple places within \([y:=v]e\) and in each place, \(v\) may have been assigned a different type. For example, \(v\) could be \(\lambda x.\, {+}(x,1)\) and it could have the type \(0\to 1\) in one place and \(1\to 2\) in another place. However, we must choose a single type \(B\) for \(y\). But thanks to intersection types, we can choose \(B\) to be the intersection of all the types assigned to \(v\).

Lemma (Reverse Substitution Preserves Types)
If \(\Gamma \vdash [y:=v]e : A\) and \(y \notin \mathrm{dom}(\Gamma)\), then \( \emptyset \vdash v : B \), \( \Gamma' \vdash e : A\), and \(\Gamma' \approx \Gamma,y:B\) for some \(\Gamma'\) and \(B\).
Proof The proof is by induction on the derivation of \(\Gamma \vdash [y:=v]e : A\). (I wonder if the proof would have been easier if done by induction on \(e\).) The proof is rather long, so I'll just highlight the lemmas that were needed here. The full details are in the Isabelle mechanization.

  • The cases for variables and numbers are relatively straightforward.
  • The case for \(\lambda\) requires lemmas regarding Environment Strengthening and Environment Lowering and their corollaries.
  • The case for subsumption is relatively easy.
  • The case for function application is interesting. We have \( (e_1 \, e_2) = [y:=v]e \), so \(e = e'_1 \, e'_2\) where \(e_1 = [y:=v]e'_1\) and \(e_2 = [y:=v]e'_2\). From the induction hypotheses for \(e_1\) and \(e_2\), we have \(\emptyset \vdash v : B_1\) and \(\emptyset \vdash v : B_2\). The lemma Combine Values gives us some \(B_3\) such that \(\emptyset \vdash v : B_3\) and \(B_3 <: B_1\) and \(B_3 <: B_2\). We choose \(\Gamma' = \Gamma,y:B_3\). To show \(\Gamma' \vdash e'_1\, e'_2 : A\) we use the induction hypotheses for \(e_1\) and \(e_2\), along with the lemmas Equivalent Environments and Environment Lowering.
  • The case for \(\top\) introduction is straightforward.
  • The case for \(\wedge\) introduction uses the lemmas Well-typed with No Free Variables, Environment Strengthening, Combine Values, Equivalent Environments, and Environment Lowering.
  • The cases for arithmetic operators and if-expressions follow a pattern similar to that of function application.
QED

Lemma (Environment Strengthening)
If \(\Gamma \vdash e : A\) and, for every free variable \(x\) of \(e\) and every type \(B\), \( x:B \in \Gamma \text{ iff } x:B \in \Gamma' \), then \(\Gamma' \vdash e : A\).
Proof The proof is by induction on the derivation of \(\Gamma \vdash e : A\). QED

Corollary (Well-typed with No Free Variables)
If \(\Gamma \vdash e : A\) and \(\mathit{FV}(e) = \emptyset\), then \(\emptyset \vdash e : A\).

We define the following ordering relation on environments: \[ \Gamma \sqsupseteq \Gamma' = (x:A \in \Gamma \Longrightarrow x:A' \in \Gamma' \text{ and } A' <: A \text{ for some } A') \]

Lemma (Environment Lowering)
If \(\Gamma \vdash e : A\) and \(\Gamma \sqsupseteq \Gamma'\), then \(\Gamma' \vdash e : A\).
Proof The proof is by induction on the derivation of \(\Gamma \vdash e : A\). QED

Corollary (Equivalent Environments)
If \(\Gamma \vdash e : A\) and \(\Gamma \approx \Gamma'\), then \(\Gamma' \vdash e : A\).
Proof If \(\Gamma \approx \Gamma'\) then we also have \(\Gamma \sqsupseteq \Gamma'\), so we conclude by applying Environment Lowering. QED

Lemma (Combine Values)
If \(\Gamma \vdash v : B_1\) and \(\Gamma \vdash v : B_2\), then \(\Gamma \vdash v : B_3\), \(B_3 <: B_1 \wedge B_2\), and \(B_1 \wedge B_2 <: B_3\) for some \(B_3\).
Proof The proof is by cases on \(v\). It uses the Inversion lemma for numbers and the \(\wedge\) introduction rule for \(\lambda\)'s. QED