Sunday, February 05, 2017

On the Meaning of Casts and Blame for Gradual Typing

Gradually typed languages enable programmers to choose which parts of their programs are statically typed and which parts are dynamically typed. Thus, gradually typed languages perform some type checking at compile time and some type checking at run time. When specifying the semantics of a gradually typed language, we usually express the run-time checking in terms of casts, so the semantics of a gradually typed language depends crucially on the semantics of casts. This blog post tries to answer the question: "What is a cast?"

Syntax and Static Semantics of Casts

Syntactically, a cast is an expression of the form \[ e : A \Rightarrow^{\ell} B \] where \(e\) is a subexpression; \(A\) and \(B\) are the source and target types, respectively. The \(\ell\) is what we call a blame label, which records the location of the cast (e.g. line number and character position).

Regarding the static semantics (compile-time type checking), a cast enables the sub-expression \(e\) of static type \(A\) to be used in a context expecting a different type \(B\). \[ \frac{\Gamma \vdash e : A} {\Gamma \vdash (e : A \Rightarrow^{\ell} B) : B} \] In gradual typing, \(A\) and \(B\) typically differ in how "dynamic" they are but are otherwise similar to each other. So we often restrict the typing rule for casts to only allow source and target types that have some values in common, that is, when \(A\) and \(B\) are consistent. \[ \frac{\Gamma \vdash e : A \quad A \sim B} {\Gamma \vdash (e : A \Rightarrow^{\ell} B) : B} \] For example, if we let \(\star\) be the unknown type (aka. \(\mathtt{dynamic}\)), then we have \(\mathtt{Int} \sim \star\) and \(\star \sim \mathtt{Int}\) but \(\mathtt{Int} \not\sim \mathtt{Int}\to\mathtt{Int}\). Here are the rules for consistency with integers, functions, and the dynamic type. \begin{gather*} \mathtt{Int} \sim \mathtt{Int} \qquad \frac{A \sim B \qquad A' \sim B'} {A \to A' \sim B \to B'} \qquad A \sim \star \qquad \star \sim B \end{gather*}
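To make this concrete, here is a small Haskell sketch of the type grammar and the consistency relation (the names `Ty` and `consistent` are mine, not from the post or any library):

```haskell
-- Types A ::= Int | A -> A' | * and the consistency relation from above.
data Ty = TInt | TFun Ty Ty | TDyn
  deriving (Eq, Show)

consistent :: Ty -> Ty -> Bool
consistent _    TDyn = True                -- A ~ *
consistent TDyn _    = True                -- * ~ B
consistent TInt TInt = True                -- Int ~ Int
consistent (TFun a a') (TFun b b') =       -- A -> A' ~ B -> B'
  consistent a b && consistent a' b'
consistent _    _    = False
```

For instance, `consistent TInt TDyn` is `True` while `consistent TInt (TFun TInt TInt)` is `False`, matching the examples above.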

Dynamic Semantics of Casts

The dynamic semantics of a cast is to check whether the value produced by subexpression \(e\) is of the target type \(B\) and if so, return the value; otherwise signal an error. The following is a strawman denotational semantics that expresses this basic intuition about casts. Suppose we have already defined the meaning of types, so \(\mathcal{T}[\!| A |\!]\) is the set of values of type \(A\). The meaning function \(\mathcal{E}[\!| e |\!]\) maps an expression to a result (either a value \(v\) or error \(\mathsf{blame}\,\ell\)). \begin{align*} \mathcal{E} [\!| e : A \Rightarrow^{\ell} B |\!] &= \begin{cases} v & \text{if } v \in \mathcal{T}[\!| B |\!] \\ \mathsf{blame}\,\ell & \text{if } v \notin \mathcal{T}[\!| B |\!] \end{cases} \\ & \text{where } v = \mathcal{E} [\!| e |\!] \end{align*}
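For first-order types, the strawman is directly executable. Here is a hedged sketch, reusing `Ty` from the previous snippet, in which values are just integers and `Either` plays the role of the error result (all names are mine):

```haskell
type Label = String
newtype FOVal = FONum Integer deriving (Eq, Show)  -- first-order values only

-- the strawman cast: check membership in T[[B]] and blame l on failure
strawmanCast :: Label -> Ty -> FOVal -> Either Label FOVal
strawmanCast _ TInt v@(FONum _) = Right v   -- v ∈ T[[Int]] = Z
strawmanCast _ TDyn v           = Right v   -- every value is in T[[*]]
strawmanCast l _    _           = Left l    -- otherwise, blame l
```

The function case is precisely what this sketch cannot handle, for the reasons discussed next.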

If we restrict ourselves to first-order types such as \(\mathtt{Int}\), it is straightforward to define \(\mathcal{T}\) and check whether a value is in the set. \begin{align*} \mathcal{T}[\!| \mathtt{Int} |\!] &= \mathbb{Z} \end{align*} The story for function types, that is, for \(A \to B\), is more complicated. In a denotational setting, it traditionally takes sophisticated mathematics to come up with mathematical entities that can serve as function values when the \(\star\) type is involved (Scott 1970, 1976). The primary challenge is that one cannot simply use the usual notion of a mathematical function to represent function values because of a cardinality problem. Suppose that \(D\) is the set of all values. The set of mathematical functions whose domain and codomain is \(D\) is necessarily larger than \(D\), so the mathematical functions cannot fit into the set of all values. There is nothing wrong with sophisticated mathematics per se, but when it comes to using a specification for communication (e.g. between language designers and compiler writers), it is less desirable to require readers of the specification to fully understand a large number of auxiliary definitions and decide whether those definitions match their intuitions.

Competing Operational Semantics for Casts

We'll come back to denotational semantics in a bit, but first let's turn to operational semantics, in particular reduction semantics, which is what the recent literature uses to explain casts and the type \(\star\) (Gronski 2006, Siek 2009, Wadler 2009). In a reduction semantics, we give rewrite rules to say what happens when a syntactic value flows into a cast, that is, we say what expression the cast reduces to. Recall that a syntactic value is just an expression that cannot be further reduced. We can proceed by cases on the consistency of the source type \(A\) and target type \(B\).

  • Case \((v : \mathtt{Int} \Rightarrow^{\ell} \mathtt{Int})\). This one is easy: the static type system ensures that \(v\) has type \(\mathtt{Int}\), so there is nothing to check and we can rewrite to \(v\). \[ v : \mathtt{Int} \Rightarrow^{\ell} \mathtt{Int} \longrightarrow v \]
  • Case \((v : \star \Rightarrow^{\ell} \star)\). This one is also easy. \[ v : \star \Rightarrow^{\ell} \star \longrightarrow v \]
  • Case \((v : A \to A' \Rightarrow^{\ell} B \to B')\). This one is more complicated. We'd like to check that the function \(v\) has type \(B \to B'\). Suppose \(B'=\mathtt{Int}\). How can we determine whether a function returns an integer? In general, that's just as hard as the halting problem, which is undecidable. So instead of checking now, we'll delay the checking until the function is called. We can accomplish this by rewriting to a lambda expression that casts the input, calls \(v\), and then casts the output. \[ v : A \to A' \Rightarrow^{\ell} B \to B' \longrightarrow \lambda x{:}B. (v \; (x : B \Rightarrow^{\ell} A)) : A' \Rightarrow^{\ell} B' \] Here we see the importance of attaching blame labels to casts. Because of the delayed checking, the point of error can be far removed from the original source code location, but thanks to the blame label we can point back to the source location of the cast that ultimately failed (Findler and Felleisen 2002). An implementation typically realizes this rewrite by wrapping \(v\) in a new closure; see the sketch after this list.
  • Case \((v : A \Rightarrow^{\ell} \star)\). For this one there are multiple options in the literature. One option is to declare this expression a syntactic value (Siek 2009), so no rewrite rule is necessary. Another option is to factor all casts to \(\star\) through the ground types \(G\): \[ G ::= \mathtt{Int} \mid \star \to \star \] Then we expand the cast from \(A\) to \(\star\) into two casts that go through the unique ground type for \(A\). \begin{align*} v : A \Rightarrow^{\ell} \star &\longrightarrow (v : A \Rightarrow^{\ell} G) : G \Rightarrow^{\ell} \star\\ & \text{where } A \sim G, A \neq G, A \neq \star \end{align*} and then declare that expressions of the form \((v : G \Rightarrow^{\ell} \star)\) are values (Wadler 2009).
  • Case \((v : \star \Rightarrow^{\ell} B)\). There are multiple options here as well, but the choice is linked to the above choice regarding casting from \(A\) to \(\star\). If \(v = (v' : A \Rightarrow^{\ell'} \star)\), then we need the following rewrite rules \begin{align*} (v' : A \Rightarrow^{\ell'} \star) : \star \Rightarrow^{\ell} B &\longrightarrow v' : A \Rightarrow^{\ell} B & \text{if } A \sim B \\[2ex] (v' : A \Rightarrow^{\ell'} \star) : \star \Rightarrow^{\ell} B &\longrightarrow \mathsf{blame}\,\ell & \text{if } A \not\sim B \end{align*} On the other hand, if we want to factor through the ground types, we have the following reduction rules. \begin{align*} v : \star \Rightarrow^{\ell} B &\longrightarrow v : \star \Rightarrow^{\ell} G \Rightarrow^{\ell} B \\ & \text{if } B \sim G, B \neq G, B \neq \star \\[2ex] (v : G \Rightarrow^{\ell'} \star) : \star \Rightarrow^{\ell} G &\longrightarrow v \\[2ex] (v : G \Rightarrow^{\ell'} \star) : \star \Rightarrow^{\ell} G' &\longrightarrow \mathsf{blame}\,\ell\\ & \text{if } G \neq G' \end{align*}
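To see how these rewrite rules play out in an implementation, here is a Haskell sketch in which function values are native closures and `Either Label` models blame (reusing `Ty` and `Label` from the earlier snippets; all other names are mine). Only the cases needed for the higher-order rule are shown; the projections out of \(\star\), where the competing options above diverge, are collapsed into the blame catch-all.

```haskell
data Val = VN Integer | VF (Val -> Either Label Val)

-- cast l a b v  attempts  v : a =>^l b
cast :: Label -> Ty -> Ty -> Val -> Either Label Val
cast _ _    TDyn v        = Right v      -- casts into * always succeed
cast _ TInt TInt v        = Right v      -- nothing to check
cast _ TDyn TInt v@(VN _) = Right v      -- the value really is an Int
cast l (TFun a a') (TFun b b') (VF f) =
  -- the higher-order rule: delay the check by wrapping f so that the
  -- argument is cast B =>^l A and the result is cast A' =>^l B'
  Right (VF (\x -> cast l b a x >>= f >>= cast l a' b'))
cast l _ _ _ = Left l                    -- otherwise, blame l
```

Note how the wrapper closure is exactly the lambda expression in the rewrite rule for function casts.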

Given that we have multiple options regarding the reduction semantics, an immediate question is whether it matters, that is, can we actually observe different behaviors for some program? Yes, in the following example we cast the identity function on integers to an incorrect type. \begin{equation} \begin{array}{l} \mathtt{let}\, id = (\lambda x{:}\mathtt{Int}.\; x)\, \mathtt{in}\\ \mathtt{let}\, f = (id : \mathtt{Int}\to \mathtt{Int} \Rightarrow^{\ell_1} \star) \, \mathtt{in} \\ \mathtt{let}\, g = (f : \star \Rightarrow^{\ell_2} (\mathtt{Int}\to \mathtt{Int}) \to \mathtt{Int})\,\mathtt{in} \\ \quad g \; id \end{array} \tag{P0}\label{P0} \end{equation} If we choose the semantics that factors through ground types, the above program reduces to \(\mathsf{blame}\, \ell_1\). If we choose the other semantics, the above program reduces to \(\mathsf{blame}\, \ell_2\). Ever since around 2008 I've been wondering which of these is correct, though in the interest of full disclosure, I've always felt that \(\mathsf{blame}\,\ell_2\) was the better choice for this program. I've also been thinking for a long time that it would be nice to have some alternative, hopefully more intuitive, way to specify the semantics of casts, with which we could then compare the above two alternatives.

A Denotational Semantics of Functions and Casts

I've recently found out that there is a simple way to represent function values in a denotational semantics. The intuition is that, although a function may be able to deal with an infinite number of different inputs, the function only has to deal with a finite number of inputs on any one execution of the program. Thus, we can represent functions with finite tables of input-output pairs. An empty table is written \(\emptyset\); a single-entry table has the form \(v \mapsto v'\), where \(v\) is the input and \(v'\) is the corresponding output. We build a larger table out of two smaller tables \(v_1\) and \(v_2\) with the notation \(v_1 \sqcup v_2\). So, with the addition of integer values \(n \in \mathbb{Z}\), the following grammar specifies the values. \[ v ::= n \mid \emptyset \mid v \mapsto v \mid v \sqcup v \]
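Here is the value grammar as a Haskell datatype (a sketch; the constructor names and infix operators are my own):

```haskell
-- v ::= n | ∅ | v ↦ v' | v1 ⊔ v2
data V = Num Integer    -- a number n
       | Empty          -- ∅, the empty table
       | V :-> V        -- a single-entry table, input ↦ output
       | V :|: V        -- v1 ⊔ v2, the join of two tables
  deriving (Eq, Show)

infixr 6 :->
infixr 5 :|:

-- one of the finite tables in the meaning of λx. x + 2
example :: V
example = (Num 0 :-> Num 2) :|: (Num 1 :-> Num 3)
```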

Of course, we can't use just one fixed-size table as the denotation of a lambda expression. Depending on the context of the lambda, we may need a bigger table that handles more inputs. Therefore we map each lambda expression to the set of all finite tables that jive with that lambda. To be more precise, we shall define a meaning function \(\mathcal{E}\) that maps an expression and an environment to a set of values, and an auxiliary function \(\mathcal{F}\) that determines whether a table jives with a lambda expression in a given environment. Here's a first try at defining \(\mathcal{F}\). \begin{align*} \mathcal{F}(n, \lambda x{:}A.\, e, \rho) &= \mathsf{false} \\ \mathcal{F}(\emptyset, \lambda x{:}A.\, e, \rho) &= \mathsf{true} \\ \mathcal{F}(v \mapsto v', \lambda x{:}A.\, e, \rho) &= \mathcal{T}(A,v) \text{ and } v' \in \mathcal{E}[\!| e |\!]\rho(x{:=}v) \\ \mathcal{F}(v_1 \sqcup v_2, \lambda x{:}A.\, e, \rho) &= \mathcal{F}(v_1, \lambda x{:}A.\, e, \rho) \text{ and } \mathcal{F}(v_2, \lambda x{:}A.\, e, \rho) \end{align*} (We shall define \(\mathcal{T}(A,v)\) shortly.) We then define the semantics of a lambda-expression in terms of \(\mathcal{F}\). \[ \mathcal{E}[\!| \lambda x{:}A.\, e|\!]\rho = \{ v \mid \mathcal{F}(v, \lambda x{:}A.\, e, \rho) \} \] The semantics of function application is essentially that of table lookup. We write \((v_2 \mapsto v) \sqsubseteq v_1\) to say, roughly, that \(v_2 \mapsto v\) is an entry in the table \(v_1\). (We give the full definition of \(\sqsubseteq\) in the Appendix.) \[ \mathcal{E}[\!| e_1 \, e_2 |\!]\rho = \left\{ v \middle| \begin{array}{l} \exists v_1 v_2.\; v_1 \in \mathcal{E}[\!| e_1 |\!]\rho \text{ and } v_2 \in \mathcal{E}[\!| e_2 |\!]\rho \\ \text{ and } (v_2 \mapsto v) \sqsubseteq v_1 \end{array} \right\} \] Finally, to give meaning to lambda-bound variables, we simply look them up in the environment. \[ \mathcal{E}[\!| x |\!]\rho = \{ \rho(x) \} \]
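Although \(\mathcal{E}\) denotes an infinite set of tables, the application case can be read operationally as table lookup. Here is a crude sketch of that reading (my names; it flattens a table into its entries and ignores the distributivity rule of \(\sqsubseteq\) from the Appendix, so it is an approximation of the real relation):

```haskell
-- Flatten a table into its entry list; "(v2 ↦ v) ⊑ v1" holds, roughly,
-- when (v2, v) appears among the entries of v1.
entries :: V -> [(V, V)]
entries (v :-> v')  = [(v, v')]
entries (v1 :|: v2) = entries v1 ++ entries v2
entries _           = []

-- the application case of E as lookup: all outputs v for which
-- some entry v2 ↦ v appears in the table v1
apply :: V -> V -> [V]
apply v1 v2 = [ v | (u, v) <- entries v1, u == v2 ]
```

For example, `apply example (Num 1)` yields `[Num 3]`, the result of looking up input \(1\) in the two-entry table above.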

Now that we have a good representation for function values, we can talk about giving meaning to higher-order casts, that is, casts from one function type to another. Recall that in our strawman semantics, we got stuck when trying to define the meaning of types in the form of a map \(\mathcal{T}\) from a type to a set of values. Now we can proceed based on the above definition of values \(v\). (To make the termination of \(\mathcal{T}\) more obvious, we'll instead define \(\mathcal{T}\) as a map from a type and a value to a Boolean. The measure is a lexicographic ordering on the size of the type and then the size of the value.) \begin{align*} \mathcal{T}(\mathtt{Int}, v) &= (\exists n. \; v = n) \\ \mathcal{T}(\star, v) &= \mathsf{true} \\ \mathcal{T}(A \to B, n) &= \mathsf{false} \\ \mathcal{T}(A \to B, \emptyset) &= \mathsf{true} \\ \mathcal{T}(A \to B, v \mapsto v') &= \mathcal{T}(A, v) \text{ and } \mathcal{T}(B, v') \\ \mathcal{T}(A \to B, v_1 \sqcup v_2) &= \mathcal{T}(A \to B, v_1) \text{ and } \mathcal{T}(A \to B, v_2) \end{align*} With \(\mathcal{T}\) defined, we define the meaning of casts as follows. \begin{align*} \mathcal{E} [\!| e : A \Rightarrow^{\ell} B |\!]\rho &= \{ v \mid v \in \mathcal{E} [\!| e |\!]\rho \text{ and } \mathcal{T}(B, v) \}\\ & \quad\; \cup \left\{ \mathsf{blame}\,\ell \middle| \begin{array}{l} \exists v.\; v \in \mathcal{E} [\!| e |\!]\rho \text{ and } \neg \mathcal{T}(B, v)\\ \text{and } (\forall \ell'. v \neq \mathsf{blame}\,\ell') \end{array}\right\}\\ & \quad\; \cup \{ \mathsf{blame}\,\ell' \mid \mathsf{blame}\,\ell' \in \mathcal{E} [\!| e |\!]\rho \} \end{align*} This version says that the result of the cast should only be those values of \(e\) that also have type \(B\). It also says that we signal an error when a value of \(e\) does not have type \(B\), and if there was an error in \(e\) then we propagate it. The really interesting thing about this semantics is that, unlike the reduction semantics, we actually check functions at the moment they go through the cast, instead of delaying the check to when they are called. We immediately determine whether the function is of the target type. If the function is not of the target type, we can immediately attribute blame to this cast, so there is no need for complex blame tracking rules.
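Because tables are first-class data, \(\mathcal{T}\) is an ordinary recursive function. Here is a sketch (reusing `Ty` and `V` from above; the blame cases, added in the next paragraph, are omitted):

```haskell
-- T(A, v): is the value v of type A? A direct transcription of the
-- equations above.
hasType :: Ty -> V -> Bool
hasType TInt (Num _)             = True
hasType TInt _                   = False
hasType TDyn _                   = True
hasType (TFun _ _) (Num _)       = False
hasType (TFun _ _) Empty         = True
hasType (TFun a b) (v :-> v')    = hasType a v && hasType b v'
hasType t@(TFun _ _) (v1 :|: v2) = hasType t v1 && hasType t v2

-- the eager cast check: from the values of the source expression,
-- keep those of the target type; any other value would yield blame
castKeeps :: Ty -> [V] -> [V]
castKeeps b = filter (hasType b)
```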

Of course, we need to extend values to include blame: \[ v ::= n \mid \emptyset \mid v \mapsto v \mid v \sqcup v \mid \mathsf{blame}\,\ell \] and augment \(\mathcal{T}\) and \(\mathcal{F}\) to handle \(\mathsf{blame}\,\ell\). \begin{align*} \mathcal{T}(A\to B, \mathsf{blame}\,\ell) &= \mathsf{false} \\ \mathcal{F}(\mathsf{blame}\,\ell, \lambda x{:}A.e, \rho) &= \mathsf{false} \end{align*} To propagate errors to the meaning of the entire program, we augment the meaning of the other language forms, such as function application, to pass along blame. \begin{align*} \mathcal{E}[\!| e_1 \, e_2 |\!]\rho &= \left\{ v \middle| \begin{array}{l} \exists v_1 v_2.\; v_1 \in \mathcal{E}[\!| e_1 |\!]\rho \text{ and } v_2 \in \mathcal{E}[\!| e_2 |\!]\rho \\ \text{and } (v_2 \mapsto v) \sqsubseteq v_1 \end{array} \right\} \\ & \quad\; \cup \{ \mathsf{blame}\, \ell \mid \mathsf{blame}\, \ell \in \mathcal{E}[\!| e_1 |\!]\rho \text{ or } \mathsf{blame}\,\ell \in \mathcal{E}[\!| e_2 |\!]\rho\} \end{align*}

Two Examples

Let us consider the ramifications of this semantics. The following example program creates a function \(f\) that returns \(0\) on non-zero input and returns the identity function when applied to \(0\). We cast this function to the type \(\mathtt{Int}\to\mathtt{Int}\) on two separate occasions, cast \(\ell_3\) and cast \(\ell_4\), to create \(g\) and \(h\). We apply \(g\) to \(1\) and \(h\) to its result. \[ \begin{array}{l} \mathtt{let}\,f = \left(\lambda x:\mathtt{Int}.\; \begin{array}{l} \mathtt{if}\, x \,\mathtt{then}\, (0: \mathtt{Int}\Rightarrow^{\ell_1}\,\star)\\ \mathtt{else}\, ((\lambda y:\mathtt{Int}.\; y) : \mathtt{Int}\to\mathtt{Int}\Rightarrow^{\ell_2} \, \star) \end{array} \right) \; \mathtt{in} \\ \mathtt{let}\,g = (f : \mathtt{Int}\to\star \Rightarrow^{\ell_3} \mathtt{Int}\to\mathtt{Int})\, \mathtt{in} \\ \mathtt{let}\,h = (f : \mathtt{Int}\to\star \Rightarrow^{\ell_4} \mathtt{Int}\to\mathtt{Int})\, \mathtt{in} \\ \mathtt{let}\,z = (g \; 1)\, \mathtt{in} \\ \quad (h\; z) \end{array} \] The meaning of this program is \(\{ \mathsf{blame}\,\ell_3, \mathsf{blame}\,\ell_4\}\). To understand this outcome, we can analyze the meaning of the various parts of the program. (The semantics is compositional!) Toward writing down the denotation of \(f\), let's define auxiliary functions \(id\) and \(F\). \begin{align*} id(n) &= \mathsf{false} \\ id(\emptyset) &= \mathsf{true} \\ id(v \mapsto v') &= (v = v') \\ id(v_1 \sqcup v_2) &= id(v_1) \text{ and } id(v_2) \\ id(\mathsf{blame}\,\ell) &= \mathsf{false} \\ \\ F(n) &= \textsf{false} \\ F(\emptyset) &= \textsf{true} \\ F(0 \mapsto v) &= \mathit{id}(v) \\ F(n \mapsto 0) &= (n \neq 0)\\ F(v_1 \sqcup v_2) &= F(v_1) \text{ and } F(v_2) \\ F(\mathsf{blame}\,\ell) &= \mathsf{false} \end{align*} The denotation of \(f\) is \[ \mathcal{E}[\!| f |\!] = \{ v \mid F(v) \} \] To express the denotation of \(g\), we define \(G\). \begin{align*} G(n) &= \textsf{false} \\ G(\emptyset) &= \textsf{true} \\ G(n \mapsto 0) &= (n \neq 0) \\ G(v_1 \sqcup v_2) &= G(v_1) \text{ and } G(v_2) \\ G(\mathsf{blame}\,\ell) &= \mathsf{false} \end{align*} The meaning of \(g\) is all the values that satisfy \(G\), together with \(\mathsf{blame}\,\ell_3\). \[ \mathcal{E}[\!| g |\!] = \{ v \mid G(v) \} \cup \{ \mathsf{blame}\, \ell_3 \} \] The meaning of \(h\) is similar, but with different blame. \[ \mathcal{E}[\!| h |\!] = \{ v \mid G(v) \} \cup \{ \mathsf{blame}\, \ell_4 \} \] The function \(g\) applied to \(1\) produces \(\{ 0, \mathsf{blame}\, \ell_3\}\), whereas \(h\) applied to \(0\) produces \(\{ \mathsf{blame}\, \ell_4\}\). Thus, the meaning of the whole program is \(\{ \mathsf{blame}\,\ell_3, \mathsf{blame}\,\ell_4\}\).

Because cast \(\ell_3\) signals an error, one might be tempted to have the meaning of \(g\) be just \(\{ \mathsf{blame}\,\ell_3\}\). However, we want to allow implementations of this language that do not blame \(\ell_3\) (\(g\) is never applied to \(0\), after all, so its guilt was not directly observable) and instead blame \(\ell_4\), which was caught red-handed. So it is important for the meaning of \(g\) to include the subset of values from \(f\) that have type \(\mathtt{Int}\to\mathtt{Int}\) so that we can carry on and find other errors as well. We shall expect implementations of this language to be sound with respect to blame, that is, if execution results in blame, it should blame one of the labels that is in the denotation of the program (and not some other innocent cast).

Let us return to the example (P0). The denotation of that program is \(\{\mathsf{blame}\,\ell_2\}\) because the cast at \(\ell_2\) is a cast to \((\mathtt{Int}\to \mathtt{Int}) \to \mathtt{Int}\) and the identity function is not of that type. The other cast, at \(\ell_1\), is innocent because it is a cast to \(\star\) and all values are of that type, including the identity function.

Discussion

By giving the cast calculus a denotational semantics in terms of finite function tables, it became straightforward to define whether a function value is of a given type. This in turn made it easy to define the meaning of casts, even casts at function type. A cast succeeds if the input value is of the target type and it fails otherwise. With this semantics we assign blame to a cast in an eager fashion, without the need for the blame tracking machinery that is present in the operational semantics.

We saw an example program where the reduction semantics that factors through ground types attributes blame to a cast that the denotational semantics says is innocent. This lends some evidence against that semantics.

I plan to investigate whether the alternative reduction semantics is sound with respect to the denotational semantics in the sense that the reduction semantics only blames a cast if the denotational semantics says it is guilty.

Appendix

We give the full definition of the cast calculus here in the appendix. The relation \(\sqsubseteq\) that we used to define table lookup is the dual of the subtyping relation for intersection types. The denotational semantics is a mild reformulation of the intersection type system that I discussed in previous blog posts.

Syntax \[ \begin{array}{lcl} A &::=& \mathtt{Int} \mid A \to B \mid \star \\ e &::= &n \mid \mathit{op}(e,e) \mid \mathtt{if}\, e\, \mathtt{then}\, e \,\mathtt{else}\, e \mid x \mid \lambda x{:}A.\; e \mid e \; e \mid e : A \Rightarrow^\ell B \end{array} \] Consistency \begin{gather*} \mathtt{Int} \sim \mathtt{Int} \qquad \frac{A \sim B \qquad A' \sim B'} {A \to A' \sim B \to B'} \qquad A \sim \star \qquad \star \sim B \end{gather*} Type System \begin{gather*} \frac{}{\Gamma \vdash n : \mathtt{Int}} \quad \frac{\Gamma \vdash e_1 : \mathtt{Int} \quad \Gamma \vdash e_2 : \mathtt{Int}} {\Gamma \vdash \mathit{op}(e_1,e_2) : \mathtt{Int}} \\[2ex] \frac{\Gamma \vdash e_1 : \mathtt{Int} \quad \Gamma \vdash e_2 : A \quad \Gamma \vdash e_3 : A} {\Gamma \vdash \mathtt{if}\, e_1\, \mathtt{then}\, e_2 \,\mathtt{else}\, e_3 : A} \\[2ex] \frac{x{:}A \in \Gamma}{\Gamma \vdash x : A} \quad \frac{\Gamma,x{:}A \vdash e : B}{\Gamma \vdash \lambda x{:}A.\; e : A \to B} \quad \frac{\Gamma \vdash e_1 : A \to B \quad \Gamma \vdash e_2 : A} {\Gamma \vdash e_1 \; e_2 : B} \\[2ex] \frac{\Gamma \vdash e : A \quad A \sim B} {\Gamma \vdash (e : A \Rightarrow^\ell B) : B} \end{gather*} Values \[ v ::= n \mid \emptyset \mid v \mapsto v \mid v \sqcup v \mid \mathsf{blame}\,\ell \] Table Lookup (Value Information Ordering) \begin{gather*} \frac{}{n \sqsubseteq n} \quad \frac{v'_1 \sqsubseteq v_1 \quad v_2 \sqsubseteq v'_2} {v_1 \mapsto v_2 \sqsubseteq v'_1 \mapsto v'_2} \quad \frac{}{\mathsf{blame}\,\ell \sqsubseteq \mathsf{blame}\,\ell} \\[2ex] \frac{}{v_1 \sqsubseteq v_1 \sqcup v_2} \quad \frac{}{v_2 \sqsubseteq v_1 \sqcup v_2} \quad \frac{v_1 \sqsubseteq v_3 \quad v_2 \sqsubseteq v_3} {v_1 \sqcup v_2 \sqsubseteq v_3} \\[2ex] \frac{}{v_1 \mapsto (v_2 \sqcup v_3) \sqsubseteq (v_1 \mapsto v_2) \sqcup (v_1 \mapsto v_3)} \quad \frac{}{\emptyset \sqsubseteq v_1 \mapsto v_2} \quad \frac{}{\emptyset \sqsubseteq \emptyset} \end{gather*} Semantics of Types \begin{align*} \mathcal{T}(\mathtt{Int}, v) &= (\exists n. \; v = n) \\ \mathcal{T}(\star, v) &= \mathsf{true} \\ \mathcal{T}(A \to B, n) &= \mathsf{false} \\ \mathcal{T}(A \to B, \emptyset) &= \mathsf{true} \\ \mathcal{T}(A \to B, v \mapsto v') &= \mathcal{T}(A, v) \text{ and } \mathcal{T}(B, v') \\ \mathcal{T}(A \to B, v_1 \sqcup v_2) &= \mathcal{T}(A \to B, v_1) \text{ and } \mathcal{T}(A \to B, v_2) \\ \mathcal{T}(A\to B, \mathsf{blame}\,\ell) &= \mathsf{false} \end{align*} Denotational Semantics \begin{align*} \mathcal{E}[\!| n |\!]\rho &= \{ n \}\\ \mathcal{E}[\!| \mathit{op}(e_1,e_2) |\!]\rho &= \left\{ v \middle| \begin{array}{l} \exists v_1 v_2 n_1 n_2.\; v_1 \in \mathcal{E}[\!| e_1 |\!]\rho \land v_2 \in \mathcal{E}[\!| e_2 |\!]\rho \\ \land\; n_1 \sqsubseteq v_1 \land n_2 \sqsubseteq v_2 \land v = [\!| \mathit{op} |\!](n_1,n_2) \end{array} \right\}\\ & \quad\; \cup \{ \mathsf{blame}\,\ell' \mid \mathsf{blame}\,\ell' \in (\mathcal{E} [\!| e_1 |\!]\rho \cup \mathcal{E} [\!| e_2 |\!]\rho) \} \\ \mathcal{E}[\!| \mathtt{if}\, e_1\, \mathtt{then}\, e_2 \,\mathtt{else}\, e_3 |\!]\rho &= \left\{ v \middle| \begin{array}{l} \exists v_1 n.\; v_1 \in \mathcal{E}[\!| e_1 |\!]\rho \land n \sqsubseteq v_1 \\ \land\; (n = 0 \Longrightarrow v \in \mathcal{E}[\!| e_3 |\!]\rho) \\ \land\; (n \neq 0 \Longrightarrow v \in \mathcal{E}[\!| e_2 |\!]\rho) \end{array} \right\}\\ & \quad\; \cup \{ \mathsf{blame}\,\ell' \mid \mathsf{blame}\,\ell' \in (\mathcal{E} [\!| e_1 |\!]\rho \cup \mathcal{E} [\!| e_2 |\!]\rho \cup \mathcal{E} [\!| e_3 |\!]\rho ) \} \\ \mathcal{E}[\!| x |\!]\rho &= \{ \rho(x) \}\\ \mathcal{E}[\!| \lambda x{:}A.\, e|\!]\rho &= \{ v \mid \mathcal{F}(v, \lambda x{:}A.\, e, \rho) \} \\ \mathcal{E}[\!| e_1 \, e_2 |\!]\rho &= \left\{ v \middle| \begin{array}{l} \exists v_1 v_2.\; v_1 \in \mathcal{E}[\!| e_1 |\!]\rho \land v_2 \in \mathcal{E}[\!| e_2 |\!]\rho \\ \land\; (v_2 \mapsto v) \sqsubseteq v_1 \end{array} \right\} \\ & \quad\; \cup \{ \mathsf{blame}\,\ell' \mid \mathsf{blame}\,\ell' \in (\mathcal{E} [\!| e_1 |\!]\rho \cup \mathcal{E} [\!| e_2 |\!]\rho) \} \\ \mathcal{E} [\!| e : A \Rightarrow^{\ell} B |\!]\rho &= \{ v \mid v \in \mathcal{E} [\!| e |\!]\rho \text{ and } \mathcal{T}(B, v) \}\\ & \quad\; \cup \left\{ \mathsf{blame}\,\ell \middle| \begin{array}{l}\exists v.\; v \in \mathcal{E} [\!| e |\!] \rho \text{ and } \neg \mathcal{T}(B, v)\\ \text{and } (\forall \ell'. v \neq \mathsf{blame}\,\ell') \end{array} \right\} \\ & \quad\; \cup \{ \mathsf{blame}\,\ell' \mid \mathsf{blame}\,\ell' \in \mathcal{E} [\!| e |\!]\rho \} \\ \mathcal{F}(n, \lambda x{:}A.\, e, \rho) &= \mathsf{false} \\ \mathcal{F}(\emptyset, \lambda x{:}A.\, e, \rho) &= \mathsf{true} \\ \mathcal{F}(v \mapsto v', \lambda x{:}A.\, e, \rho) &= \mathcal{T}(A,v) \text{ and } v' \in \mathcal{E}[\!| e |\!]\rho(x{:=}v) \\ \mathcal{F}(v_1 \sqcup v_2, \lambda x{:}A.\, e, \rho) &= \mathcal{F}(v_1, \lambda x{:}A.\, e, \rho) \text{ and } \mathcal{F}(v_2, \lambda x{:}A.\, e, \rho) \\ \mathcal{F}(\mathsf{blame}\,\ell, \lambda x{:}A.e, \rho) &= \mathsf{false} \end{align*}

References

  • (Findler 2002) Contracts for higher-order functions. R. B. Findler and M. Felleisen. International Conference on Functional Programming. 2002.
  • (Gronski 2006) Sage: Hybrid Checking for Flexible Specifications. Jessica Gronski and Kenneth Knowles and Aaron Tomb and Stephen N. Freund and Cormac Flanagan. Scheme and Functional Programming Workshop, 2006.
  • (Scott 1970) Outline of a Mathematical Theory of Computation. Dana Scott. Oxford University. 1970. Technical report PRG-2.
  • (Scott 1976) Data Types as Lattices. Dana Scott. SIAM Journal on Computing. 1976. Volume 5, Number 3.
  • (Siek 2009) Exploring the Design Space of Higher-Order Casts. Jeremy G. Siek and Ronald Garcia and Walid Taha. European Symposium on Programming. 2009.
  • (Wadler 2009) Well-typed programs can't be blamed. Philip Wadler and Robert Bruce Findler. European Symposium on Programming. 2009.

Monday, January 30, 2017

Completeness of Intersection Types wrt. an Applied CBV Lambda Calculus

I'm still quite excited about the simple denotational semantics and looking forward to applying it to the semantics of gradually typed languages. However, before building on it I'd like to make sure it's correct. Recall that I proved soundness of the simple semantics with respect to a standard big-step operational semantics, but I did not prove completeness. Completeness says that if the operational semantics says that the program reduces to a particular value, then the denotational semantics does too. Recall that the first version of the simple semantics that I gave was not complete! It couldn't handle applying a function to itself, which is needed for the \(Y\) combinator and recursion. I've written down a fix, but the question remains whether the fix is good enough, that is, can we prove completeness? In the meantime, I learned that the simple semantics is closely related to filter models based on type systems with intersection types. This is quite helpful because that literature includes many completeness results for pure lambda calculi, see for example Intersection Types and Computational Rules by Alessi, Barbanera, and Dezani-Ciancaglini (2003).

In this blog post I prove completeness for an intersection type system with respect to a call-by-value lambda calculus augmented with numbers, primitive operators (addition, multiplication, etc.), and a conditional if-expression. The main outline of the proof is adapted from the above-cited paper, in which completeness is proved with respect to small-step operational semantics, though you'll find more details (i.e. lemmas) here because I've mechanized the proof in Isabelle and can't help but share in my suffering ;) (Lambda.thy, SmallStepLam.thy, IntersectComplete.thy) Ultimately I would like to prove completeness for the simple denotational semantics, but a good first step is doing the proof for a system that is in between the simple semantics and the intersection type systems in the literature.

The intersection type system I use here differs from ones in the literature in that I restrict the \(\wedge\) introduction rule to \(\lambda\)'s instead of applying it to any expression, as shown below. I recently realized that this change does not disturb the proof of Completeness because we're dealing with a call-by-value language. \[ \frac{\Gamma \vdash \lambda x.\, e : A \quad \Gamma \vdash \lambda x.\, e : B} {\Gamma \vdash \lambda x.\, e : A \wedge B}(\wedge\,\mathrm{intro}) \] I would like to remove the subsumption rule \[ \frac{\Gamma \vdash e : A \quad A <: B} {\Gamma \vdash e : B}(\mathrm{Sub}) \] but doing so increased the complexity of the proof of Completeness. Instead I plan to separately prove that the version without subsumption is equivalent to the version with subsumption. One might also consider doing the same regarding our above change to the \(\wedge\) introduction rule. I have also been working on that approach, but proving the admissibility of the standard \(\wedge\) introduction rule has turned out to be rather difficult (but interesting!).

Definition of an Applied CBV Lambda Calculus

Let us dive into the formalities and define the language that we're interested in. Here are the types, which include function types, intersection types, the top function type (written \(\top\)), and singleton numbers. Our \(\top\) corresponds to the type \(\nu\) from Egidi, Honsell, and Rocca (1992). See also Alessi et al. (2003). \[ A,B,C ::= A \to B \mid A \wedge B \mid \top \mid n \] And here are the expressions: \[ e ::= n \mid \mathit{op}(e,e) \mid \mathrm{if}\,e\,\mathrm{then}\,e\,\mathrm{else}\,e \mid x \mid \lambda x.\, e \mid e\,e \] where \(n\) ranges over numbers and \(\mathit{op}\) ranges over arithmetic operators such as addition.

We define type environments as an association list mapping variables to types. \[ \Gamma ::= \emptyset \mid \Gamma,x:A \]

The type system, defined below, is unusual in that it is highly precise. Note that the rule for arithmetic operators produces a precise singleton result and that the rules for if-expressions require the condition to be a singleton number (zero or non-zero) so that it knows which branch is taken. Thus, this type system is really a kind of dynamic semantics.

\begin{gather*} \frac{}{\Gamma \vdash n : n} \\[2ex] \frac{} {\Gamma \vdash \lambda x.\, e : \top}(\top\,\mathrm{intro}) \quad \frac{\Gamma \vdash \lambda x.\, e : A \quad \Gamma \vdash \lambda x.\, e : B} {\Gamma \vdash \lambda x.\, e : A \wedge B}(\wedge\,\mathrm{intro}) \\[2ex] \frac{\Gamma \vdash e : A \quad A <: B} {\Gamma \vdash e : B}(\mathrm{Sub}) \\[2ex] \frac{x:A \in \Gamma}{\Gamma \vdash x : A} \quad \frac{\Gamma,x:A \vdash e : B} {\Gamma \vdash \lambda x.\, e : A \to B} \quad \frac{\Gamma \vdash e_1: A \to B \quad \Gamma \vdash e_2 : A} {\Gamma \vdash e_1 \; e_2 : B}(\to\mathrm{elim}) \\[2ex] \frac{\Gamma \vdash e_1 : n_1 \quad \Gamma \vdash e_2 : n_2 \quad [\!|\mathit{op}|\!](n_1,n_2) = n_3} {\Gamma \vdash \mathit{op}(e_1,e_2) : n_3} \\[2ex] \frac{\Gamma \vdash e_1 : 0 \quad \Gamma \vdash e_3 : B} {\Gamma \vdash \mathrm{if}\,e_1\,\mathrm{then}\,e_2\,\mathrm{else}\,e_3 : B} \quad \frac{\Gamma \vdash e_1 : n \quad n \neq 0 \quad \Gamma \vdash e_2 : A} {\Gamma \vdash \mathrm{if}\,e_1\,\mathrm{then}\,e_2\,\mathrm{else}\,e_3 : A} \end{gather*}
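For example (a worked derivation of mine, not from the post), the program \((\lambda x.\, {+}(x,1))\; 2\) receives the singleton type \(3\) under these rules: \[ \frac{ \dfrac{\dfrac{x:2 \vdash x : 2 \quad x:2 \vdash 1 : 1 \quad [\!|{+}|\!](2,1)=3} {x:2 \vdash {+}(x,1) : 3}} {\emptyset \vdash \lambda x.\, {+}(x,1) : 2 \to 3} \quad \emptyset \vdash 2 : 2 }{\emptyset \vdash (\lambda x.\, {+}(x,1))\; 2 : 3} \] so the type of the program is literally the result of running it.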

The rules for subtyping come from the literature.

\begin{gather*} \frac{}{n <: n} \quad \frac{}{\top <: \top} \quad \frac{}{A \to B <: \top} \quad \frac{A' <: A \quad B <: B'} {A \to B <: A' \to B'}(<:\to) \\[2ex] \frac{C <: A \quad C <: B}{C <: A \wedge B} \quad \frac{}{A \wedge B <: A} \quad \frac{}{A \wedge B <: B} \\[2ex] \frac{}{(C\to A) \wedge (C \to B) <: C \to (A \wedge B)} \end{gather*}

We shall be working with values that are well typed in an empty type environment. This usually implies that the values have no free variables. However, that is not true of the current type system because of the \(\top\) introduction rule. So we add a side condition for \(\lambda\) in our definition of values. (In retrospect, I should have instead included a statement about free variables in the main Completeness theorem and then propagated that information to where it is needed.) \[ v ::= n \mid \lambda x.\, e \quad \text{where } FV(e) \subseteq \{x\} \]

We use a naive notion of substitution (not capture avoiding) because the \(v\)'s have no free variables to capture. \begin{align*} [x:=v] y &= \begin{cases} v & \text{if } x = y \\ y & \text{if } x \neq y \end{cases} \\ [x:=v] n &= n \\ [x:=v] (\lambda y.\, e) &= \begin{cases} \lambda y.\, e & \text{if } x = y \\ \lambda y.\, [x:=v] e & \text{if } x \neq y \end{cases} \\ [x:=v](e_1\, e_2) &= ([x:=v]e_1\, [x:=v]e_2) \\ [x:=v]\mathit{op}(e_1, e_2) &= \mathit{op}([x:=v]e_1, [x:=v]e_2) \\ [x:=v](\mathrm{if}\,e_1\,\mathrm{then}\,e_2\,\mathrm{else}\,e_3) &= \mathrm{if}\,[x:=v]e_1\,\mathrm{then}\,[x:=v]e_2\,\mathrm{else}\,[x:=v]e_3 \end{align*}
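This substitution function transcribes directly to Haskell. Here is a sketch (the datatype and all names are mine):

```haskell
-- e ::= n | op(e,e) | if e then e else e | x | λx.e | e e
data Tm = TNum Integer | TOp Op Tm Tm | TIf Tm Tm Tm
        | TVar String | TLam String Tm | TApp Tm Tm
  deriving (Eq, Show)

data Op = Add | Mul deriving (Eq, Show)

-- [x:=v]e, naive (not capture avoiding): safe because v is closed
subst :: String -> Tm -> Tm -> Tm
subst x v e = case e of
  TVar y | x == y    -> v
         | otherwise -> TVar y
  TNum n             -> TNum n
  TLam y b | x == y    -> TLam y b              -- x is shadowed; stop
           | otherwise -> TLam y (subst x v b)
  TApp e1 e2         -> TApp (subst x v e1) (subst x v e2)
  TOp o e1 e2        -> TOp o (subst x v e1) (subst x v e2)
  TIf c t f          -> TIf (subst x v c) (subst x v t) (subst x v f)
```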

The small-step operational semantics is defined by the following reduction rules. I'm not sure why I chose to use SOS-style rules instead of evaluation contexts. \begin{gather*} \frac{}{(\lambda x.\,e) \; v \longrightarrow [x:=v]e} \quad \frac{e_1 \longrightarrow e'_1}{e_1\,e_2 \longrightarrow e'_1 \, e_2} \quad \frac{e_2 \longrightarrow e'_2}{e_1\,e_2 \longrightarrow e_1 \, e'_2} \\[2ex] \frac{}{\mathit{op}(n_1,n_2) \longrightarrow [\!|\mathit{op}|\!](n_1,n_2)} \quad \frac{e_1 \longrightarrow e'_1} {\mathit{op}(e_1,e_2) \longrightarrow \mathit{op}(e'_1,e_2)} \quad \frac{e_2 \longrightarrow e'_2} {\mathit{op}(e_1,e_2) \longrightarrow \mathit{op}(e_1,e'_2)} \\[2ex] \frac{}{\mathrm{if}\,0\,\mathrm{then}\,e_2\,\mathrm{else}\,e_3 \longrightarrow e_3} \quad \frac{n \neq 0} {\mathrm{if}\,n\,\mathrm{then}\,e_2\,\mathrm{else}\,e_3 \longrightarrow e_2} \\[2ex] \frac{e_1 \longrightarrow e'_1} {\mathrm{if}\,e_1\,\mathrm{then}\,e_2\,\mathrm{else}\,e_3 \longrightarrow \mathrm{if}\,e'_1\,\mathrm{then}\,e_2\,\mathrm{else}\,e_3} \end{gather*} \[ \frac{}{e \longrightarrow^{*} e} \qquad \frac{e_1 \longrightarrow e_2 \quad e_2 \longrightarrow^{*} e_3} {e_1 \longrightarrow^{*} e_3} \]
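As a sketch, here is a deterministic single-step function corresponding to these rules (my names, reusing `Tm` and `subst` from above; it always reduces the leftmost redex, whereas the SOS rules permit either order in applications and operators):

```haskell
isVal :: Tm -> Bool
isVal (TNum _)   = True
isVal (TLam _ _) = True
isVal _          = False

opDenot :: Op -> Integer -> Integer -> Integer
opDenot Add = (+)
opDenot Mul = (*)

-- one step of reduction, or Nothing if the term is a value or stuck
step :: Tm -> Maybe Tm
step (TApp (TLam x e) v) | isVal v = Just (subst x v e)     -- CBV beta
step (TApp e1 e2)
  | Just e1' <- step e1 = Just (TApp e1' e2)
  | Just e2' <- step e2 = Just (TApp e1 e2')
step (TOp o (TNum n1) (TNum n2)) = Just (TNum (opDenot o n1 n2))
step (TOp o e1 e2)
  | Just e1' <- step e1 = Just (TOp o e1' e2)
  | Just e2' <- step e2 = Just (TOp o e1 e2')
step (TIf (TNum n) e2 e3) = Just (if n == 0 then e3 else e2)
step (TIf e1 e2 e3)
  | Just e1' <- step e1 = Just (TIf e1' e2 e3)
step _ = Nothing
```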

Proof of Completeness

The theorem that we aim to prove is that if the operational semantics says that a program reduces to a value, then the program is typable in the intersection type system and that the result type precisely describes the result value. I'm going to present the proof in a top-down style, so the proof of each lemma that I use is found further along in this blog post.

Theorem (Completeness)
If \(e \longrightarrow^{*} v\), then \(\emptyset \vdash e : A\) and \(\emptyset \vdash v : A\) for some type \(A\).
Proof
Every value is typable (use the \(\top\) introduction rule for \(\lambda\)), so we have some \(A\) such that \(\emptyset \vdash v : A\). We shall show that typing is preserved by reverse reduction, which will give us \(\emptyset \vdash e : A\). QED

Lemma (Reverse Multi-Step Preserves Types)
If \(e \longrightarrow^{*} e'\) and \(\emptyset \vdash e' : A\), then \(\emptyset \vdash e : A\).
Proof
The proof is by induction on the derivation of \(e \longrightarrow^{*} e'\). The base case is trivial. The induction case requires that typing be preserved for a single-step of reduction, which we prove next. QED

Lemma (Reverse Single-Step Preserves Types)
If \(e \longrightarrow e'\) and \(\emptyset \vdash e' : A\), then \(\emptyset \vdash e : A\).
Proof
The proof is by induction on the derivation of \(e \longrightarrow e'\). The most important case is for function application: \[ (\lambda x.\,e) \; v \longrightarrow [x:=v]e \] We have that \(\emptyset \vdash [x:=v]e : A\) and need to show that \(\emptyset \vdash (\lambda x.\,e) \; v : A\). That is, we need to show that call-by-value \(\beta\)-expansion preserves types. So we need \(x:B \vdash e : A\) and \(\emptyset \vdash v : B\) for some type \(B\). The proof of this was the crux and required some generalization; I found it difficult to find the right statement of the lemma. It is proved below under the name Reverse Substitution Preserves Types. The other cases of this proof are straightforward except for one hiccup. They all require inversion lemmas (aka. generation lemmas) to unpack the information from \(\emptyset \vdash e' : A\). However, as is usual for languages with subsumption, the inversion lemmas are not simply proved by case analysis on typing rules, but must instead be proved by induction on the typing derivations. QED

Lemma (Inversion)

  1. If \(\Gamma \vdash n : A \), then \(n <: A\).
  2. If \(\Gamma \vdash e_1\,e_2 : A \), then \( \Gamma \vdash e_1 : B \to A' \), \( A' <: A \), and \( \Gamma \vdash e_2 : B \) for some \(A'\) and \(B\).
  3. If \(\Gamma \vdash \mathit{op}(e_1,e_2) : A\), then \(\Gamma \vdash e_1 : n_1\), \(\Gamma \vdash e_2 : n_2 \), \(\Gamma \vdash \mathit{op}(e_1,e_2) : [\!|\mathit{op}|\!](n_1,n_2) \), and \([\!|\mathit{op}|\!](n_1,n_2) <: A\) for some \(n_1\) and \(n_2\).
  4. If \(\Gamma \vdash \mathrm{if}\,e_1\,\mathrm{then}\,e_2\,\mathrm{else}\,e_3 : A\), then either
    • \(\Gamma \vdash e_1 : 0\), \(\Gamma\vdash e_3 : B\), and \(B <: A\), for some \(B\).
    • \(\Gamma \vdash e_1 : n\), \(n \neq 0\), \(\Gamma\vdash e_2 : A'\), and \(A' <: A\), for some \(A'\).
Proof The proofs are by induction on the derivation of typing. QED

To state the reverse substitution lemma in a way that provides a useful induction hypothesis in the case for \(\lambda\), we introduce a notion of equivalence of type environments: \[ \Gamma \approx \Gamma' = (x : A \in \Gamma \text{ iff } x : A \in \Gamma') \] The reverse substitution lemma will show that if \([y:=v]e\) is well typed, then so is \(e\) in the environment extended with \(y:B\), for some appropriate choice of \(B\). Now, the value \(v\) may appear in multiple places within \([y:=v]e\) and in each place, \(v\) may have been assigned a different type. For example, \(v\) could be \(\lambda x.\, {+}(x,1)\) and it could have the type \(0\to 1\) in one place and \(1\to 2\) in another place. However, we must choose a single type \(B\) for \(y\). But thanks to intersection types, we can choose \(B\) to be the intersection of all the types assigned to \(v\).

Lemma (Reverse Substitution Preserves Types)
If \(\Gamma \vdash [y:=v]e : A\) and \(y \notin \mathrm{dom}(\Gamma)\), then \( \emptyset \vdash v : B \), \( \Gamma' \vdash e : A\), and \(\Gamma' \approx \Gamma,y:B\) for some \(\Gamma'\) and \(B\).
Proof The proof is by induction on the derivation of \(\Gamma \vdash [y:=v]e : A\). (I wonder if the proof would have been easier if done by induction on \(e\).) The proof is rather long, so I'll just highlight the lemmas that were needed here. The full details are in the Isabelle mechanization.

  • The cases for variables and numbers are relatively straightforward.
  • The case for \(\lambda\) requires lemmas regarding Environment Strengthening and Environment Lowering and their corollaries.
  • The case for subsumption is relatively easy.
  • The case for function application is interesting. We have \( (e_1 \, e_2) = [y:=v]e \), so \(e = e'_1 \, e'_2\) where \(e_1 = [y:=v]e'_1\) and \(e_2 = [y:=v]e'_2\). From the induction hypotheses for \(e_1\) and \(e_2\), we have \(\emptyset \vdash v : B_1\) and \(\emptyset \vdash v : B_2\). The lemma Combine Values gives us some \(B_3\) such that \(\emptyset \vdash v : B_3\) and \(B_3 <: B_1\) and \(B_3 <: B_2\). We choose \(\Gamma' = \Gamma,y:B_3\). To show \(\Gamma' \vdash e'_1\, e'_2 : A\) we use the induction hypotheses for \(e_1\) and \(e_2\), along with the lemmas Equivalent Environments and Environment Lowering.
  • The case for \(\top\) introduction is straightforward.
  • The case for \(\wedge\) introduction uses the lemmas Well-typed with No Free Variables, Environment Strengthening, Combine Values, Equivalent Environments, and Environment Lowering.
  • The cases for arithmetic operators and if-expressions follow a pattern similar to that of function application.
QED

Lemma (Environment Strengthening)
If \(\Gamma \vdash e : A\) and, for every free variable \(x\) of \(e\) and every type \(B\), \(x:B \in \Gamma\) iff \(x:B \in \Gamma'\), then \(\Gamma' \vdash e : A\).
Proof The proof is by induction on the derivation of \(\Gamma \vdash e : A\). QED

Corollary (Well-typed with No Free Variables)
If \(\Gamma \vdash e : A\) and \(\mathit{FV}(e) = \emptyset\), then \(\emptyset \vdash e : A\).

We define the following ordering relation on environments: \[ \Gamma \sqsupseteq \Gamma' = (x:A \in \Gamma \Longrightarrow x:A' \in \Gamma' \text{ and } A' <: A \text{ for some } A') \]

Lemma (Environment Lowering)
If \(\Gamma \vdash e : A\) and \(\Gamma \sqsupseteq \Gamma'\), then \(\Gamma' \vdash e : A\).
Proof The proof is by induction on the derivation of \(\Gamma \vdash e : A\). QED

Corollary (Equivalent Environments)
If \(\Gamma \vdash e : A\) and \(\Gamma \approx \Gamma'\), then \(\Gamma' \vdash e : A\).
Proof If \(\Gamma \approx \Gamma'\) then we also have \(\Gamma \sqsupseteq \Gamma'\), so we conclude by applying Environment Lowering. QED

Lemma (Combine Values)
If \(\Gamma \vdash v : B_1\) and \(\Gamma \vdash v : B_2\), then \(\Gamma \vdash v : B_3\), \(B_3 <: B_1 \wedge B_2\), and \(B_1 \wedge B_2 <: B_3\) for some \(B_3\).
Proof The proof is by cases on \(v\). It uses the Inversion lemma for numbers and the \(\wedge\) introduction rule for \(\lambda\)'s. QED

Saturday, January 14, 2017

Intersection Types as Denotations

In my previous post I described a simple denotational semantics for the CBV lambda calculus in which the meaning of a \(\lambda\) function is a set of tables. For example, here is a glimpse at some of the tables in the meaning of \(\lambda x. x+2\).

\[ E[\!| (\lambda x. x+2) |\!](\emptyset) = \left\{ \begin{array}{l} \emptyset, \\ \{ 5\mapsto 7 \},\\ \{ 0\mapsto 2, 1 \mapsto 3 \},\\ \{ 0\mapsto 2, 1\mapsto 3, 5 \mapsto 7 \}, \\ \vdots \end{array} \right\} \]

Since then I've been reading the literature starting from an observation by Alan Jeffrey that this semantics seems similar to the domain logic in Abramsky's Ph.D. thesis (1987). That in turn pointed me to the early literature on intersection types, which were invented in the late 1970's by Coppo, Dezani-Ciancaglini, Salle, and Pottinger. It turns out that one of the motivations for intersection types was to create a denotational semantics for the lambda calculus. Furthermore, it seems that intersection types are closely related to my simple denotational semantics!

The intersection types for the pure lambda calculus included function types, intersections, and a top type: \[ A,B,C ::= A \to B \mid A \wedge B \mid \top \] For our purposes we shall also add singleton types for numbers. \[ A,B,C ::= A \to B \mid A \wedge B \mid \top \mid n \] So the number \(2\) has the singleton type \(2\) and any function that maps \(0\) to \(2\) will have the type \(0 \to 2\). Any function that maps \(0\) to \(2\) and also maps \(1\) to \(3\) has the intersection type \[ (0 \to 2) \wedge (1 \to 3) \] These types are starting to look a lot like the tables above! Indeed, even the empty table \(\emptyset\) corresponds to the top type \(\top\), they both can be associated with any \(\lambda\) function.

The addition of the singleton number types introduces a choice regarding the top type \(\top\). Does it include the numbers and functions or just functions? We shall go with the latter, which corresponds to the \(\nu\) type in the literature (Egidi, Honsell, Rocca 1992).

Now that we have glimpsed the correspondence between tables and intersection types, let's review the typing rules for the implicitly typed lambda calculus with singletons, intersections, and \(\top\).

\begin{gather*} \frac{}{\Gamma \vdash n : n} \\[2ex] \frac{}{\Gamma \vdash \lambda x.\,e : \top}(\top\,\mathrm{intro}) \quad \frac{\Gamma \vdash e : A \quad \Gamma \vdash e : B} {\Gamma \vdash e : A \wedge B}(\wedge\,\mathrm{intro}) \\[2ex] \frac{\Gamma \vdash e : A \quad A <: B} {\Gamma \vdash e : B}(\mathrm{Sub}) \quad \frac{x:A \in \Gamma}{\Gamma \vdash x : A} \\[2ex] \frac{\Gamma,x:A \vdash e : B} {\Gamma \vdash \lambda x.\, e : A \to B} \quad \frac{\Gamma \vdash e_1: A \to B \quad \Gamma \vdash e_2 : A} {\Gamma \vdash e_1 \; e_2 : B}(\to\mathrm{elim}) \end{gather*}

With intersection types, one can write the same type in many different ways. For example, the type \(5\) is the same as \(5 \wedge 5\). One common way to define such equalities is in terms of subtyping: \(A = B\) iff \(A <: B\) and \(B <: A\).

So how does one define a semantics using intersection types? Barendregt, Coppo, Dezani-Ciancaglini (1983) (BCD) define the meaning of an expression \(e\) to be the set of types for which it is typable, something like \[ [\!| e |\!](\Gamma) = \{ A \mid \Gamma \vdash e : A \} \] For a simple type system (without intersection), such a semantics would not be useful. Any term with self application (needed for recursion) would not type check and therefore its meaning would be the empty set. But with intersection types, the semantics gives a non-empty meaning to all terminating programs!

The next question is, how does the BCD semantics relate to my simple table-based semantics? One difference is that the intersection type system has two rules that are not syntax directed: \((\wedge\,\mathrm{intro})\) and (Sub). However, we can get rid of these rules. The \((\wedge\,\mathrm{intro})\) rule is not needed for numbers, only for functions. So one should be able to move all uses of the \((\wedge\,\mathrm{intro})\) rule to \(\lambda\)'s. \[ \frac{\Gamma \vdash \lambda x.\, e : A \quad \Gamma \vdash \lambda x.\; e : B} {\Gamma \vdash \lambda x.\, e : A \wedge B} \] To get rid of (Sub), we need to modify \((\to\mathrm{elim})\) to allow for the possibility that \(e_1\) is not literally of function type. \[ \frac{\Gamma \vdash e_1 : C \quad C <: A \to B \quad \Gamma \vdash e_2 : A} {\Gamma \vdash e_1 \; e_2 : B} \]

All of the rules are now syntax directed, though we now have three rules for \(\lambda\); those rules handle the three different possible types for a \(\lambda\) function: \(A \to B\), \(A \wedge B\), and \(\top\). Next we observe that a relation is isomorphic to a function that produces a set. So we change from \(\Gamma \vdash e : A\) to \(E[\!| e |\!](\Gamma) = \mathcal{A}\) where \(\mathcal{A}\) ranges over sets of types, i.e., \(\mathcal{A} \in \mathcal{P}(A)\). We make use of an auxiliary function \(F\) to define the meaning of \(\lambda\) functions. \begin{align*} E[\!| n |\!](\Gamma) & = \{ n \} \\ E[\!| x |\!](\Gamma) & = \{ \Gamma(x) \} \\ E[\!| \lambda x.\, e |\!](\Gamma) & = \{ A \mid F(A,x,e,\Gamma) \} \\ E[\!| e_1 \; e_2 |\!](\Gamma) & = \left\{ B \middle| \begin{array}{l} C \in E[\!| e_1 |\!](\Gamma) \\ \land\; A \in E[\!| e_2 |\!](\Gamma) \\ \land\; C <: A \to B \end{array} \right\} \\ \\ F(A \to B, x,e,\Gamma) &= B \in E[\!| e |\!](\Gamma(x:=A)) \\ F(A \wedge B, x,e,\Gamma) &= F(A, x,e,\Gamma) \land F(B, x,e,\Gamma) \\ F(\top, x,e,\Gamma) &= \mathrm{true} \end{align*}
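The rules for numbers, variables, and \(\lambda\) now give a checkable membership reading: given a type, we can decide whether it is in \(E[\!| e |\!](\Gamma)\) for those forms. Here is a sketch (my names, reusing `Tm` from the sketches in the January 30 post above; the application case needs a search for \(C\) and a subtyping decision \(C <: A \to B\), so I conservatively return false there):

```haskell
-- A, B, C ::= A -> B | A ∧ B | ⊤ | n
data ITy = ITy :->: ITy | ITy :/\: ITy | Top | Sing Integer
  deriving (Eq, Show)

type TEnv = [(String, ITy)]

-- is the type a in E[[e]](g)?  (numbers, variables, lambdas only)
inDen :: Tm -> TEnv -> ITy -> Bool
inDen (TNum n) _ a   = a == Sing n
inDen (TVar x) g a   = lookup x g == Just a
inDen (TLam x e) g a = f a
  where
    -- the function F from above, one case per possible type of a λ
    f (b :->: c) = inDen e ((x, b) : g) c
    f (b :/\: c) = f b && f c
    f Top        = True
    f (Sing _)   = False    -- a λ never has a singleton number type
inDen _ _ _ = False         -- application etc.: omitted (needs search)
```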

I conjecture that this semantics is equivalent to the "take 3" semantics. There are a couple of remaining differences and here's why I don't think they matter. Regarding the case for \(\lambda\) in \(E\), the type \(A\) can be viewed as an alternative representation for a table. The function \(F\) essentially checks that all entries in the table jive with the meaning of the \(\lambda\)'s body, which is what the clause for \(\lambda\) does in the "take 3" semantics. Regarding the case for application in \(E\), the \(C\) is a table and \(C <: A \to B\) means that there is some entry \(A' \to B'\) in the table \(C\) such that \(A' \to B' <: A \to B\), which means \(A <: A'\) and \(B' <: B\). The \(A <: A'\) corresponds to our use of \(\sqsubseteq\) in the "take 3" semantics. The \(B' <: B\) doesn't matter.

There's an interesting duality and change of viewpoint going on here between the table-based semantics and the intersection types. The table-based semantics is concerned with what values are produced by a program whereas the intersection type system is concerned with specifying what kind of values are allowed, but the types are so precise that it becomes dual in a strong sense to the values themselves. To make this precise, we can talk about tables in terms of their finite graphs (sets of pairs), and create them using \(\emptyset\), union, and a singleton input-output pair \(\{(v_1,v_2)\}\). With this formulation, tables are literally dual to types, with \(\{(v_1,v_2)\}\) corresponding to \(v_1 \to v_2\), union corresponding to intersection, empty set corresponding to \(\top\), and \(T_1 \subseteq T_2\) corresponding to \(T_2 <: T_1\).

Wednesday, December 21, 2016

Take 3: Application with Subsumption for Den. Semantics of Lambda Calculus

Alan Jeffrey tweeted the following in reply to the previous post:

@jeremysiek wouldn't it be easier to change the defn of application to be
⟦MN⟧σ = { W | T ∈ ⟦M⟧σ, V ∈ ⟦N⟧σ, (V′,W) ∈ T, V′ ⊆ V }?

The idea is that, for higher order functions, if the function \(M\) is expecting to ask all the questions in the table \(V'\), then it is OK to apply \(M\) to a table \(V\) that answers more questions than \(V'\). This idea is quite natural; it is like Liskov's subsumption principle but for functions instead of objects. If this change can help us with the self application problem, then it will be preferable to the graph-o-tables approach described in the previous post because it retains the simple inductive definition of values. So let's see where this takes us!

We have the original definition of values

\[ \begin{array}{lrcl} \text{values} & v & ::= & n \mid \{ (v_1,v'_1),\ldots,(v_n,v'_n) \} \end{array} \]

and here is the denotational semantics, updated with Alan's suggestion to include the clause \(v'_2 \sqsubseteq v_2\) in the case for application.

\begin{align*} E[\!| n |\!](\rho) &= \{ n \} \\ E[\!| x |\!](\rho) &= \{ \rho(x) \} \\ E[\!| \lambda x.\; e |\!](\rho) &= \{ T \mid \forall v v'. (v,v') \in T \Rightarrow v' \in E[\!|e|\!](\rho(x:=v)) \} \\ E[\!| e_1\;e_2 |\!](\rho) &= \left\{ v \middle| \begin{array}{l} \exists T v_2 v'_2. T {\in} E[\!| e_1 |\!](\rho) \land v_2 {\in} E[\!| e_2 |\!](\rho) \\ \land v'_2 \sqsubseteq v_2 \land (v'_2,v) {\in} T \end{array} \right\} \end{align*}

The ordering on values \(\sqsubseteq\) used above is just equality on numbers and subset on function tables.
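Concretely, here is a sketch of these values and the ordering in Haskell (constructor names are mine; tables are literal finite sets of pairs):

```haskell
import qualified Data.Set as Set
import Data.Set (Set)

-- v ::= n | { (v1,v1'), ..., (vn,vn') }
data W = WNum Integer | WTab (Set (W, W))
  deriving (Eq, Ord, Show)

-- ⊑ : equality on numbers, subset on function tables
below :: W -> W -> Bool
below (WNum m) (WNum n) = m == n
below (WTab s) (WTab t) = s `Set.isSubsetOf` t
below _        _        = False
```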

The first thing to check is whether this semantics can handle self application at all, such as \[ (\lambda f. f \; f) \; (\lambda g. \; 42) \]

Example 1. \( 42 \in E[\!| (\lambda f. f \; f) \; (\lambda g. \; 42) |\!](\emptyset) \)
The main work is figuring out witnesses for the function tables. We're going to need the following tables: \begin{align*} T_0 &= \emptyset \\ T_1 &= \{ (\emptyset, 42)\} \\ T_2 &= \{ (T_1, 42) \} \end{align*} Here's the proof, working top-down, or goal driven. The important use of subsumption is the \( \emptyset \sqsubseteq T_1 \) below.

  • \( T_2 \in E[\!| (\lambda f. f \; f)|\!](\emptyset)\)
    So we need to show: \( 42 \in E[\!| f \; f|\!](f:=T_1) \)
    • \( T_1 \in E[\!| f |\!](f:=T_1) \)
    • \( T_1 \in E[\!| f |\!](f:=T_1) \)
    • \( \emptyset \sqsubseteq T_1 \)
    • \( (\emptyset, 42) \in T_1 \)
  • \( T_1 \in E[\!| (\lambda g. \; 42) |\!](\emptyset)\)
    So we need to show \( 42 \in E[\!| 42 |\!](g:=\emptyset)\), which is immediate.
  • \( T_1 \sqsubseteq T_1 \)
  • \( (T_1,42) \in T_2 \)

Good, so this semantics can handle a simple use of self application. How about factorial? Instead of considering factorial of 3, as in the previous post, we'll go further this time and consider factorial of an arbitrary number \(n\).

Example 2. We shall compute the factorial of \(n\) using the strict version of the Y combinator, that is, the Z combinator. \begin{align*} M & \equiv \lambda x. f \; (\lambda v. (x\; x) \; v) \\ Z & \equiv \lambda f. M \; M \\ F & \equiv \lambda n. \mathbf{if}\,n=0\,\mathbf{then}\, 1\,\mathbf{else}\, n \times r \,(n-1)\\ H & \equiv \lambda r. F \\ \mathit{fact} & \equiv Z\, H \end{align*} We shall show that \[ n! \in E[\!|\mathit{fact}\;n|\!](\emptyset) \] For this example we need very many tables, but fortunately there are just a few patterns. To capture these patterns, we define the following table-producing functions. \begin{align*} T_F(n) &= \{ (n,n!) \} \\ T_H(n) &= \{ (\emptyset,T_F(0)), (T_F(0), T_F(1)), \ldots ,(T_F(n-1), T_F(n)) \} \\ T_M(n) &= \begin{cases} \emptyset & \text{if } n = 0 \\ \{ (T_M(n'), T_F(n')) \} \cup T_M(n') & \text{if } n = 1+n' \end{cases} \\ T_Z(n) &= \{ (T_H(n), T_F(n) )\} \end{align*} \(T_F(n)\) is a fragment of the factorial function, for the one input \(n\). \(T_H(n)\) maps each \(T_F(i)\) to \(T_F(i+1)\), for \(i+1\) up to \(n\). \(T_M(n)\) is the heart of the matter, and what makes the self application work. It maps successively larger versions of itself to fragments of the factorial function, that is, \[ T_M(n) = \left\{ \begin{array}{l} T_M(0) \mapsto T_F(0) \\ T_M(1) \mapsto T_F(1) \\ \vdots \\ T_M(n-1) \mapsto T_F(n-1) \end{array} \right\} \] For example, \(T_M(4)\) contains exactly the entries \(T_M(0) \mapsto T_F(0)\), \(T_M(1) \mapsto T_F(1)\), \(T_M(2) \mapsto T_F(2)\), and \(T_M(3) \mapsto T_F(3)\).
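These table families can be generated mechanically. Here is a sketch using the `W` values from the previous snippet (`fact`, `tF`, `tH`, and `tM` are my names); evaluating `tM 4` prints the table \(T_M(4)\) just described, and the two crucial properties listed next hold by construction:

```haskell
fact :: Integer -> Integer
fact n = product [1 .. n]

tF, tH, tM :: Integer -> W
-- T_F(n) = { (n, n!) }
tF n = WTab (Set.singleton (WNum n, WNum (fact n)))
-- T_H(n) = { (∅, T_F(0)), (T_F(0), T_F(1)), ..., (T_F(n-1), T_F(n)) }
tH n = WTab (Set.fromList
         [ (if i == 0 then WTab Set.empty else tF (i - 1), tF i)
         | i <- [0 .. n] ])
-- T_M(0) = ∅ and T_M(1+n') = { (T_M(n'), T_F(n')) } ∪ T_M(n')
tM 0 = WTab Set.empty
tM n = let WTab s = tM (n - 1)   -- tM always builds a table
       in WTab (Set.insert (tM (n - 1), tF (n - 1)) s)
```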

The tables \( T_M \) enable self application because we have the following two crucial properties:
  1. \( T_M(n) \sqsubseteq T_M(1+n) \)
  2. \( (T_M(n), T_F(n)) \in T_M(1+n) \)
The main lemmas that we prove are

Lemma 1. If \(n \le k\), then \(T_M(1+n) \in E[\!| M |\!](f:=T_H(k)) \).

Lemma 2. \( T_Z(n) \in E[\!| Z |\!](\emptyset) \)

If you're curious about the details for the complete proof of \( n! \in E[\!|\mathit{fact}\;n|\!](\emptyset) \) you can take a look at the proof in Isabelle that I've written here.

This is all quite promising! Next we look at the proof of soundness with respect to the big step semantics.

Soundness with Respect to the Big-Step Semantics

The proof of soundness is quite similar to that of the first version, as the relation \(\approx\) between the denotational and big-step values remains the same. However, the following two technical lemmas are needed to handle subsumption.

Lemma (Related Table Subsumption) If \(T' \subseteq T\) and \(T \approx \langle \lambda x.e, \rho \rangle\), then \(T' \approx \langle \lambda x.e, \rho \rangle\).
The proof is by induction on \(T'\).

Lemma (Related Value Subsumption) If \(v_1 \approx v'\) and \(v_2 \sqsubseteq v'\), then \(v_2 \approx v'\).
The proof is by case analysis, using the previous lemma when the values are function tables.

Theorem (Soundness).
If \(v \in E[\!| e |\!](\rho) \) and \( \rho \approx \rho' \), then \( \rho' \vdash e \Downarrow v' \) and \(v \approx v'\) for some \(v'\).

The mechanization of soundness in Isabelle is here.

Monday, December 19, 2016

Take 2: Graph of Tables for the Denotational Semantics of the Lambda Calculus

In the previous post, the denotational semantics I gave for the lambda calculus could not deal with self application, such as the program \[ (\lambda f. f\;f)\;(\lambda g. 42) \] whose result should be \(42\). The problem was that I defined function values to be tables of pairs of values, using a datatype definition, which rules out the possibility of cycles. In the above program, the table for \( \lambda g. 42 \) needs to include itself so that the application \(f \; f \) makes sense.

A straightforward way to solve this problem is to allow cycles by representing all the functions created by the program as a graph of tables. A function value will just contain an index, i.e. it will have the form \(\mathsf{fun}[i]\) with index \(i\), and the graph will map the index to a table. So we define values as follows. \[ \begin{array}{lccl} \text{numbers} & n \in \mathbb{N} \\ \text{values} & v \in \mathbb{V} & ::= & n \mid \mathsf{fun}[i] \\ & \mathit{graph} & = & \mathbb{N} \to (\mathbb{V} \times \mathbb{V})\,\mathit{list} \end{array} \] Given a function value \(v\) and graph \(G\), we write \(\mathit{tab}(v,G)\) for the table \(t\) when \(v= \mathsf{fun}[i]\) and \(G(i)=t\) for some index \(i\). So we modify the semantic function \(E\) to be defined as follows.
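As a sketch, the graph representation transcribes to Haskell like this (names are mine; I use an association list for the graph rather than a total function on \(\mathbb{N}\)):

```haskell
-- values are numbers or indices into a graph of tables
type Index = Int
data GV = GNum Integer | Fun Index
  deriving (Eq, Show)

type Table = [(GV, GV)]
type Graph = [(Index, Table)]   -- a finite map from indices to tables

-- tab(v, G): the table behind a function value
tab :: GV -> Graph -> Table
tab (Fun i)  g = maybe [] id (lookup i g)
tab (GNum _) _ = []             -- numbers have no table
```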

\begin{align*} E & : \mathit{exp} \to \mathit{env} \to (\mathbb{V} \times \mathit{graph})\,\mathit{set} \\ E[\!| n |\!](\rho) &= \{ (v,G) \mid v = n \} \\ E[\!| x |\!](\rho) &= \{ (v,G) \mid v = \rho(x) \} \\ E[\!| \lambda x.\; e |\!](\rho) &= \left\{ (v,G) \middle| \begin{array}{l} \forall v_1 v_2. (v_1,v_2) \in \mathit{tab}(v,G) \\ \Rightarrow v_2 \in E[\!|e|\!](\rho(x:=v_1)) \end{array} \right\} \\ E[\!| e_1\;e_2 |\!](\rho) &= \left\{ (v,G) \middle| \begin{array}{l}\exists v_1 v_2. v_1 {\in} E[\!| e_1 |\!](\rho) \land v_2 {\in} E[\!| e_2 |\!](\rho) \\ \land (v_2,v) {\in} \mathit{tab}(v_1,G) \end{array} \right\} \end{align*}

Example 1. Let's consider again the program \[ (\lambda f. f\;f)\;(\lambda g. 42) \] We'll define a graph \(G\) as follows. \[ G(0) = [(\mathsf{fun}[1], 42)] \qquad G(1) = [(\mathsf{fun}[1], 42)] \] To show \[ (42,G) \in E[\!|(\lambda f. f\;f)\;(\lambda g. 42)|\!]\emptyset \] we need to show

  1. \( (\mathsf{fun}[0],G) \in E[\!|(\lambda f. f\;f)|\!]\emptyset \). So we need to show \[ (42,G) \in E[\!|f\;f|\!](f:=\mathsf{fun}[1]) \] which we have because \((\mathsf{fun}[1], 42) \in \mathit{tab}(\mathsf{fun}[1],G)\).
  2. \( (\mathsf{fun}[1],G) \in E[\!|(\lambda g. 42)|\!]\emptyset \). We need \[ (42,G) \in E[\!|42|\!](f:=\mathsf{fun}[1]) \] which is immediate.
  3. \( (\mathsf{fun}[1], 42) \in \mathit{tab}(\mathsf{fun}[0],G) \). This is immediate.
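Checks like these can be mechanized. Here is a minimal sketch in Python (mine; not the program promised below for Example 2) of a membership checker for this semantics: expressions are tagged tuples, function values are pairs ('fun', i) indexing into the graph, and the existentials in the application case are witnessed by searching the finite set of values mentioned in the graph.

    # Sketch of a checker for: (v, G) in E[e](rho).
    # Expressions: ('num', n) | ('var', x) | ('lam', x, body) | ('app', e1, e2)
    # Values: an int, or ('fun', i) where i indexes into the graph G.

    def is_fun(v):
        return isinstance(v, tuple) and v[0] == 'fun'

    def tab(v, G):
        return G[v[1]]  # the table of a function value

    def candidates(G):
        # A finite universe of values to witness the existentials.
        vals = [('fun', i) for i in G]
        for t in G.values():
            for (v1, v2) in t:
                vals.extend([v1, v2])
        return vals

    def member(e, rho, v, G):
        tag = e[0]
        if tag == 'num':
            return v == e[1]
        if tag == 'var':
            return v == rho[e[1]]
        if tag == 'lam':
            _, x, body = e
            return is_fun(v) and all(member(body, {**rho, x: v1}, v2, G)
                                     for (v1, v2) in tab(v, G))
        if tag == 'app':
            _, e1, e2 = e
            return any(member(e1, rho, v1, G)
                       and any(v3 == v and member(e2, rho, v2, G)
                               for (v2, v3) in tab(v1, G))
                       for v1 in candidates(G) if is_fun(v1))

    # Example 1:
    G = {0: [(('fun', 1), 42)], 1: [(('fun', 1), 42)]}
    prog = ('app', ('lam', 'f', ('app', ('var', 'f'), ('var', 'f'))),
                   ('lam', 'g', ('num', 42)))
    assert member(prog, {}, 42, G)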

Example 2. We shall compute the factorial of 3 using the strict version of the Y combinator, that is, the Z combinator. \begin{align*} R & \equiv \lambda v. (x\; x) \; v \\ M & \equiv \lambda x. f \; R \\ Z & \equiv \lambda f. M \; M \\ F & \equiv \lambda n. \mathbf{if}\,n=0\,\mathbf{then}\, 1\,\mathbf{else}\, n \times r \,(n-1)\\ H & \equiv \lambda r. F \\ \mathit{fact} & \equiv Z\, H \end{align*} We shall show that \[ (6,G) \in E[\!|\mathit{fact}\;3|\!]\emptyset \] for some graph \(G\) that we need to construct. By instrumenting an interpreter for the lambda calculus and running \(\mathit{fact}\,3\), we observe the following graph. \begin{align*} G(0) &= [(\mathsf{fun}[1],\mathsf{fun}[4])] & \text{for } Z \\ G(1) &= [(\mathsf{fun}[3],\mathsf{fun}[4])] & \text{for } H \\ G(2) &= [(\mathsf{fun}[2],\mathsf{fun}[4])]& \text{for } M \\ G(3) &= [(0,1),(1,1),(2,2)] & \text{for } R \\ G(4) &= [(0,1),(1,1),(2,2),(3,6)] & \text{for } F \end{align*} We check all of the following (Tedious! I'm going to write a program to do this next time.):

  • \((\mathsf{fun}[4],G) \in E[\!|Z\,H|\!]\emptyset \)
  • \((3,6) \in G(4) \)
  • \((\mathsf{fun}[0],G) \in E[\!|Z|\!]\emptyset\)
  • \( (\mathsf{fun}[4],G) \in E[\!|M\;M|\!](f:=\mathsf{fun}[1])\)
  • \( (\mathsf{fun}[2],G) \in E[\!|M|\!](f:=\mathsf{fun}[1]) \)
  • \( (\mathsf{fun}[4],G) \in E[\!|f \; R|\!](f:=\mathsf{fun}[1],x:=\mathsf{fun}[2]) \)
  • \( (\mathsf{fun}[3],G) \in E[\!|R|\!](f:=\mathsf{fun}[1],x:=\mathsf{fun}[2]) \)
  • \( (1,G) \in E[\!|(x\;x)\;v|\!](f:=\mathsf{fun}[1],x:=\mathsf{fun}[2],v:=0) \) and \( (0,1) \in G(4) \)
  • \( (1,G) \in E[\!|(x\;x)\;v|\!](f:=\mathsf{fun}[1],x:=\mathsf{fun}[2],v:=1) \) and \( (1,1) \in G(4) \)
  • \( (2,G) \in E[\!|(x\;x)\;v|\!](f:=\mathsf{fun}[1],x:=\mathsf{fun}[2],v:=2) \) and \( (2,2) \in G(4) \)
  • \( (\mathsf{fun}[2], \mathsf{fun}[4]) \in G(2) \)
  • \((\mathsf{fun}[1],G) \in E[\!|H|\!]\emptyset\)
  • \((\mathsf{fun}[4],G) \in E[\!|F|\!](r:=\mathsf{fun}[3])\)
  • \((1,G) \in E[\!|\mathbf{if}\,n=0\,\mathbf{then}\, 1\,\mathbf{else}\, n \times r \,(n-1)|\!](r:=\mathsf{fun}[3],n:=0)\)
  • \((1,G) \in E[\!|\mathbf{if}\,n=0\,\mathbf{then}\, 1\,\mathbf{else}\, n \times r \,(n-1)|\!](r:=\mathsf{fun}[3],n:=1)\)
  • \((2,G) \in E[\!|\mathbf{if}\,n=0\,\mathbf{then}\, 1\,\mathbf{else}\, n \times r \,(n-1)|\!](r:=\mathsf{fun}[3],n:=2)\)
  • \((6,G) \in E[\!|\mathbf{if}\,n=0\,\mathbf{then}\, 1\,\mathbf{else}\, n \times r \,(n-1)|\!](r:=\mathsf{fun}[3],n:=3)\)

The next step is to update the proof of soundness with respect to the big-step semantics. The graphs will make it a bit more challenging. But hopefully they will make it possible to also prove completeness!

Thursday, December 15, 2016

Simple Denotational Semantics for the Lambda Calculus, Pω Revisited?

I've been trying to understand Dana Scott's \(P_{\omega}\) and \(D_{\infty}\) models of the lambda calculus, as well as a couple large Coq formalizations of domain theory, and in the process have come up with an extraordinarily simple denotational semantics for the call-by-value lambda calculus. It borrows some of the main ideas from \(P_{\omega}\) but doesn't encode everything into numbers and puts infinity in a different place. (More discussion about this near the bottom of the post.) That being said, I still don't 100% understand \(P_{\omega}\), so there may be other subtle differences. In any event, what follows is so simple that it's either wrong or amazing that it's not already a well-known semantics.

UPDATE: It's wrong. See the section titled Counterexample to Completeness below. Now I need to go and read more literature.
UPDATE: It seems that there is an easy fix to the problem! See the subsequent post.

To get started, here's the syntax of the lambda calculus.

\[ \begin{array}{lrcl} \text{variables} & x \\ \text{numbers} & n & \in & \mathbb{N} \\ \text{expressions} & e & ::= & n \mid x \mid \lambda x.e \mid e\;e \end{array} \]

So we've got a language with numbers and first-class functions. I'm going to represent functions as data in the simplest way possible, as a lookup table, i.e., a list of pairs, where each pair is an input value \(v_i\) and its corresponding output \(v'_i\). Of course, functions can have an infinite number of inputs, so how will a finite-length list suffice?

\[ \begin{array}{lrcl} \text{values} & v & ::= & n \mid [(v_1,v'_1),\ldots,(v_n,v'_n)] \end{array} \]

The answer is that we're going to write the denotational semantics as a logic program, that is, as a relation instead of a function. (We'll turn it back into a function later.) Instead of producing the result value, the semantics will ask for a result value and then say whether it is correct or not. So when it comes to a program that produces a function, the semantics will ask for a list of input-output values and say whether they agree with the function. A finite list suffices because, after all, you can't actually build an infinite list to pass into the semantics.

So here we go, our first version of the semantics, in relational style, or equivalently, as a function with three parameters that returns a Boolean. We name the function denoto, with an o at the end as a nod to The Reasoned Schemer. The meaning of a \(\lambda x.e\) is any table \(T\) that agrees with the semantics of the body \(e\). (This allows the table to be empty!) The meaning of an application \(e_1 e_2\) is simply to do a lookup in a table for \(e_1\). (Requiring a non-empty table.) We write \((v,v') \in T\) when \((v,v')\) is one of the pairs in the table \(T\).

\begin{align*} \mathit{denoto}(n, \rho, n) &= \mathit{true} \\ \mathit{denoto}(x, \rho, \rho(x)) &= \mathit{true} \\ \mathit{denoto}(\lambda x. e, \rho, T) &= \forall v v'.\, (v,v') \in T \Rightarrow \mathit{denoto}(e,\rho(x:=v),v') \\ \mathit{denoto}(e_1 \; e_2, \rho, v) &= \left(\begin{array}{l} \exists T v_2.\; \mathit{denoto}(e_1, \rho, T) \land \mathit{denoto}(e_2, \rho, v_2) \\ \qquad \land (v_2,v) \in T \end{array} \right) \end{align*}

The existential quantifier in the line for application is powerful. It enables the semantics to guess a sufficiently large table \(T\), so long as the table agrees with the semantics of \(e_1\) and the later uses of the result value. Because the execution of a terminating program can only call the function with a finite number of arguments, and the execution can only use the results in a finite number of ways, there is a sufficiently large table \(T\) to cover all its uses in the execution of the whole program. Also, note that \(T\) can be large in a couple dimensions: it may handle a large number of inputs, but also, it can specify large outputs (in the case when the outputs are functions).
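To make the role of this existential concrete, here is a small sketch in Python (mine, not from the post) of denoto as a predicate, where the caller supplies a finite universe of candidate tables to witness the existential; a table is represented as a tuple of input/output pairs.

    # Sketch: denoto(e, rho, v) with the exists-a-table search made explicit.
    def denoto(e, rho, v, universe):
        tag = e[0]
        if tag == 'num':
            return v == e[1]
        if tag == 'var':
            return v == rho[e[1]]
        if tag == 'lam':
            _, x, body = e
            # Any table whose entries all agree with the body (may be empty).
            return isinstance(v, tuple) and all(
                denoto(body, {**rho, x: v1}, v2, universe) for (v1, v2) in v)
        if tag == 'app':
            _, e1, e2 = e
            # Guess a table T for e1 from the universe, then look up v in T.
            return any(denoto(e1, rho, T, universe)
                       and any(denoto(e2, rho, v2, universe) and v == v3
                               for (v2, v3) in T)
                       for T in universe)

    # ((lambda x. x) 7) means 7, witnessed by the one-entry table ((7, 7),):
    ident_app = ('app', ('lam', 'x', ('var', 'x')), ('num', 7))
    assert denoto(ident_app, {}, 7, universe=[((7, 7),)])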

Denotational semantics are usually written as functions, not relations, so we remove the third parameter and instead return a set of values. This will bring the semantics more in line with \(P_{\omega}\). Also, for this version we'll use the name \(E\) for the semantic function instead of denoto.

\begin{align*} E[\!| n |\!](\rho) &= \{ n \} \\ E[\!| x |\!](\rho) &= \{ \rho(x) \} \\ E[\!| \lambda x.\; e |\!](\rho) &= \{ T \mid \forall v v'. (v,v') \in T \Rightarrow v' \in E[\!|e|\!](\rho(x:=v)) \} \\ E[\!| e_1\;e_2 |\!](\rho) &= \{ v \mid \exists T v_2. T {\in} E[\!| e_1 |\!](\rho) \land v_2 {\in} E[\!| e_2 |\!](\rho) \land (v_2,v) {\in} T \} \end{align*}

With the semantics written this way, it is clear that the meaning of a \(\lambda\) is not just a finite table, but instead it is typically an infinite set of tables, each of which is an approximation of the actual infinite graph of the function.

Is this semantics correct? I'm not entirely sure yet, but I have proved that it is sound with respect to the big-step semantics.

Theorem (Soundness).
If \(v \in E[\!| e |\!](\rho) \) and \( \rho \approx \rho' \), then \( \rho' \vdash e \Downarrow v' \) and \(v \approx v'\) for some \(v'\).

The two semantics have different notions of values, so the relation \(\approx\) is defined to bridge the two worlds.

\begin{gather*} n \approx n \\[2ex] \frac{\forall v_1 v_2 v'_1. (v_1,v_2) \in T \land v_1 {\approx} v'_1 \Rightarrow \exists v'_2. \rho(x:=v'_1) \vdash e \Downarrow v'_2 \land v_2 {\approx} v'_2} {T \approx \langle \lambda x.e, \rho \rangle} \end{gather*} The definition of \(\approx\) extends to environments in the natural way.

The semantics and proof of soundness in Isabelle are here.

Counterexample to Completeness

This semantics does not handle self application properly. Consider the program \[ (\lambda f. f \; f) \; (\lambda g. 1) \] The operational semantics says the answer is \(1\). The denotational semantics requires us to find some tables \(T \in E[\!|\lambda f. f\;f|\!](\emptyset)\) and \(T' \in E[\!|\lambda g.1|\!](\emptyset)\). We need \( (T',1) \in T\), so we need \(1 \in E[\!|f\; f|\!](f:=T') \). That requires \((T', 1) \in T'\), but that's impossible given that we've defined the values and tables in a way that does not allow cycles.

Relationship with \(P_{\omega}\)

The idea of representing functions as data, and as a lookup table, comes from \(P_{\omega}\), as does having the denotation's result be a set of values. As mentioned above, one (minor) difference is that \(P_{\omega}\) encodes everything into numbers, whereas here we've used a datatype definition for the values. However, the most important difference (if I understand \(P_{\omega}\) correctly) is that its functions are infinite in a first-class sense. That is, \(P_{\omega}\) is a solution to \[ D = \mathbb{N} + (D \to D) \] and the things in \(D \to D\) are functions with potentially infinite graphs. In contrast, I've taken a stratified approach in which I've defined the values \(V\) to include only finite representations of functions \[ V = \mathbb{N} + (V \times V) \, \mathit{list} \] and then, only at the top level, I've allowed for infinity by making the denotation of an expression be a (potentially infinite) set of values. \[ E : \mathit{exp} \to \mathit{env} \to V\, \mathit{set} \]

Thursday, December 08, 2016

Denotational Semantics of IMP without the Least Fixed Point

It has been too long since I wrote a blog post! Needless to say, parenthood, teaching, research, service, and my wonderful graduate students and post-docs have been keeping me busy. But with the Fall 2016 semester winding down I can sneak away for a bit and think.

But first, I have something to admit. My education in the theory of programming languages has a glaring hole in it. I only have a very basic understanding of Denotational Semantics despite having taken a course on Domain Theory in graduate school. I figured it would be fun to remedy this situation, and it might even be useful in the future. So my first step was to understand the denotational semantics of the IMP language, which is the starter language for most textbooks on denotational semantics. IMP is simply an imperative language with assignment statements, while loops, if statements, and arithmetic on integers. The IMP language has two syntactic categories, expressions and commands. The following is the syntax of IMP.

\[ \begin{array}{lrcl} \text{variables} & x \\ \text{numbers} & n & \in & \mathbb{N} \\ \text{unary operators}&u & ::= & \neg \\ \text{binary operators}&b & ::= & + \mid - \mid \times \mid \,=\, \\ \text{expressions} & e & ::= & n \mid x \mid u(e) \mid b(e,e)\\ \text{commands} & c & ::= & \mathtt{skip} \mid x := e \mid c ; c \mid \mathtt{if}\;e\;\mathtt{then}\;c\;\mathtt{else}\;c \\ & & \mid & \mathtt{while}\;e\;\mathtt{do}\;c \end{array} \]

As far as I can tell, for a semantics to be a denotational semantics it has to satisfy two properties.

  • It is a mapping from abstract syntax (the program) to a mathematical object, which is just to say some precisely defined entity, that describes the observable behavior of the program. For example, the mathematical object could be a relation between a program's inputs and outputs.
  • It is compositional, which means that the denotation of a particular language construct is defined in terms of the denotation of the syntactic sub-parts of the construct. For example, the meaning of a while loop is defined in terms of the meaning of its conditional and the meaning of its body.
For the expressions of IMP, it is straightforward to write down a denotational semantics, the following function \(E\). This \(E\) is no different from a recursively defined interpreter. In the following, we map expressions to natural numbers. Following custom, we use the brackets \([\!| e |\!]\) as a kind of quote to distinguish abstract syntax from the surrounding mathematics. To handle variables, we also pass in a function \(\sigma\) from variables to numbers, which we call a state.

\begin{align*} E[\!| n |\!](\sigma) &= n \\ E[\!| x |\!](\sigma) &= \sigma(x) \\ E[\!| u(e) |\!](\sigma) &= E[\!|u|\!]( E[\!|e|\!](\sigma) ) \\ E[\!| b(e_1,e_2) |\!](\sigma) &= E[\!|b|\!]( E[\!|e_1|\!](\sigma), E[\!|e_2|\!](\sigma)) \\ \\ E[\!| \neg |\!](n) &= \begin{cases} 1 & \text{if } n = 0 \\ 0 & \text{if } n \neq 0\end{cases} \\ E[\!| + |\!](n_1,n_2) &= n_1 + n_2 \\ E[\!| - |\!](n_1,n_2) &= n_1 - n_2 \\ E[\!| \times |\!](n_1,n_2) &= n_1 \times n_2 \\ E[\!| = |\!](n_1,n_2) &= \begin{cases} 1 & \text{if } n_1 = n_2 \\ 0 & \text{if } n_1 \neq n_2 \end{cases} \end{align*}
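Since \(E\) really is just a recursively defined interpreter, it transcribes directly into code. A minimal sketch in Python (the representation is mine): expressions are tagged tuples and a state \(\sigma\) is a dictionary from variable names to numbers.

    # Sketch: E as a recursive interpreter over tagged tuples.
    def E(e, sigma):
        tag = e[0]
        if tag == 'num': return e[1]
        if tag == 'var': return sigma[e[1]]
        if tag == 'not': return 1 if E(e[1], sigma) == 0 else 0
        if tag == '+':   return E(e[1], sigma) + E(e[2], sigma)
        if tag == '-':   return E(e[1], sigma) - E(e[2], sigma)
        if tag == '*':   return E(e[1], sigma) * E(e[2], sigma)
        if tag == '=':   return 1 if E(e[1], sigma) == E(e[2], sigma) else 0

    # E[ not (x = 0) ] in a state where x is 3:
    assert E(('not', ('=', ('var', 'x'), ('num', 0))), {'x': 3}) == 1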

What do the commands of IMP have in common regarding what they do? They change a state to a new state. For example, if \(\sigma\) is the incoming state, then the assignment command \( x := 42 \) outputs a new state \(\sigma'\) which is the same as \(\sigma\) except that \(\sigma'(x) = 42\). In general, the denotation of a command, \(C[\!|c|\!]\), will be a relation on states, that is, a set of pairs that match up input states with their corresponding output states. We shall give the denotational semantics of commands, one construct at a time.

The meaning of the skip command is that it doesn't change the state, so it relates each state to itself.

\[ C[\!| \mathtt{skip} |\!] = \{ (\sigma,\sigma) \mid \sigma \in \mathit{state} \} \]

The meaning of the assignment statement is to update the state to map the left-hand side variable to the result of the right-hand side expression. So the new state is a function that takes in a variable named \(y\) and returns \( E[\!|e|\!](\sigma) \) if \(y=x\) and otherwise returns the same thing as \(\sigma\).

\begin{align*} C[\!| x := e |\!] &= \{ (\sigma, \sigma') \mid \sigma \in \mathit{state} \} \\ & \text{where } \sigma'(y) = \begin{cases} E[\!|e|\!](\sigma) & \text{if}\, y = x\\ \sigma(y) & \text{if}\, y \neq x \end{cases} \end{align*}

The meaning of two commands in sequence is just the meaning of the first followed by the meaning of the second.

\[ C[\!| c_1; c_2 |\!] = \{ (\sigma,\sigma'') \mid \exists \sigma'. (\sigma,\sigma') \in C[\!| c_1 |\!] \land (\sigma',\sigma'') \in C[\!| c_2 |\!] \} \]

The meaning of an if command depends on the conditional expression \(e\). If the \(e\) evaluates to 0, then the meaning of if is given by the else branch \(c_2\). Otherwise, the meaning of if is given by the then branch \(c_1\).

\[ C[\!| \mathtt{if}\, e \,\mathtt{then}\, c_1 \,\mathtt{else}\, c_2 |\!] = \left\{ (\sigma,\sigma') \middle| \begin{array}{ll} (\sigma,\sigma') \in C[\!| c_2 |\!] & \text{if}\, E[\!|e|\!](\sigma) = 0 \\ (\sigma,\sigma') \in C[\!| c_1 |\!] & \text{if}\, E[\!|e|\!](\sigma) \neq 0 \end{array} \right\} \]
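Because IMP commands are deterministic, each of the relations above is the graph of a function on states, which gives a compact way to transcribe them. A sketch, reusing the expression interpreter E from the sketch above (all names are mine):

    # Sketch: the denotations of skip, assignment, sequencing, and if as
    # state-to-state functions (each is the graph of the relation above).
    def C_skip(sigma):                 # relates each state to itself
        return sigma

    def C_assign(x, e):                # update x with the value of e
        return lambda sigma: {**sigma, x: E(e, sigma)}

    def C_seq(dc1, dc2):               # run dc1, then dc2
        return lambda sigma: dc2(dc1(sigma))

    def C_if(e, dc1, dc2):             # branch on the condition expression
        return lambda sigma: dc1(sigma) if E(e, sigma) != 0 else dc2(sigma)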

The meaning of the while command is the crux of the matter. This is normally where a textbook includes several pages of beautiful mathematics about monotone and continuous functions, complete partial orders, and least fixed points. We're going to bypass all of that.

The meaning of a while command is to map each starting state \(\sigma_0\) to an ending state \(\sigma_n\) obtained by iterating its body command so long as the condition is non-zero. Pictorially, we have the following: \[ \sigma_0 \xrightarrow{c} \sigma_1 \xrightarrow{c} \cdots \xrightarrow{c} \sigma_n \qquad \text{where } E[\!|e|\!](\sigma_i) \neq 0 \text{ for } i < n \text{ and } E[\!|e|\!](\sigma_n) = 0 \]
We define an auxiliary function named \(L\) to express the iterating of the loop. It takes as input the number of iterations of the loop, the denotation of the condition expression \(de\), and the denotation of the command \(dc\).

\begin{align*} L(0, de, dc) &= \{ (\sigma,\sigma) \mid de(\sigma) = 0 \} \\ L(1+n, de, dc) &= \{ (\sigma,\sigma'') \mid de(\sigma) \neq 0 \land \exists \sigma'. (\sigma,\sigma') \in dc \land (\sigma',\sigma'') \in L(n, de, dc) \} \end{align*}

The meaning of the while command is to relate any state \(\sigma\) to state \(\sigma'\) if \(L(n,E[\!|e|\!],C[\!|c|\!])\) relates \(\sigma\) to \(\sigma'\) for some \(n\).

\[ C[\!| \mathtt{while}\, e \,\mathtt{do}\, c |\!] = \{ (\sigma,\sigma') \mid \exists n.\, (\sigma,\sigma') \in L(n,E[\!|e|\!],C[\!|c|\!]) \} \]
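Continuing the sketch, \(L\) and the meaning of while transcribe as follows. The existential over \(n\) becomes a bounded search; the fuel bound is an artifact of the sketch, not of the mathematical definition.

    def L_iter(n, e, dc, sigma):
        # Returns sigma' with (sigma, sigma') in L(n, E[e], dc), or None.
        if E(e, sigma) == 0:
            return sigma if n == 0 else None
        return L_iter(n - 1, e, dc, dc(sigma)) if n > 0 else None

    def C_while(e, dc, fuel=1000):
        def run(sigma):
            for n in range(fuel):          # search for a witness n
                result = L_iter(n, e, dc, sigma)
                if result is not None:
                    return result
            return None                    # no witness found within the bound
        return run

    # while not(x = 0) do x := x - 1, started with x = 3, ends with x = 0:
    cond = ('not', ('=', ('var', 'x'), ('num', 0)))
    body = C_assign('x', ('-', ('var', 'x'), ('num', 1)))
    assert C_while(cond, body)({'x': 3}) == {'x': 0}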

At this point I'm worried that this is so simple that it couldn't possibly be correct. A good way to check is to prove that it is equivalent to the standard big-step semantics for IMP, which we shall do now.

Equivalence to the Standard Big-Step Semantics

The big-step semantics for the IMP language is a three-place relation on a command, a starting state, and the final state, which we shall write \(c\mid\sigma\Downarrow\sigma'\). It is defined inductively by the following rules.

\begin{gather*} \frac{}{\mathtt{skip} \mid \sigma \Downarrow \sigma} \\[2ex] \frac{\sigma'(y) = \begin{cases} E[\!|e|\!](\sigma) & \text{if}\, y = x\\ \sigma(y) & \text{if}\, y \neq x \end{cases}}{ x := e \mid \sigma \Downarrow \sigma'} \qquad \frac{c_1 \mid \sigma \Downarrow \sigma' \quad c_2 \mid \sigma' \Downarrow \sigma''} {c_1 ; c_2 \mid \sigma \Downarrow \sigma''} \\[2ex] \frac{E[\!|e|\!](\sigma) = 0 \quad c_2 \mid \sigma \Downarrow \sigma' } {\mathtt{if}\,e\,\mathtt{then}\,c_1\,\mathtt{else}\,c_2 \mid \sigma \Downarrow \sigma'}\qquad \frac{E[\!|e|\!](\sigma) \neq 0 \quad c_1 \mid \sigma \Downarrow \sigma' } {\mathtt{if}\,e\,\mathtt{then}\,c_1\,\mathtt{else}\,c_2 \mid \sigma \Downarrow \sigma'}\\[2ex] \frac{E[\!|e|\!](\sigma) = 0} {\mathtt{while}\, e \,\mathtt{do}\, c \mid \sigma \Downarrow \sigma} \qquad \frac{E[\!|e|\!](\sigma) \neq 0 \quad c \mid \sigma \Downarrow \sigma' \quad \mathtt{while}\, e \,\mathtt{do}\, c \mid \sigma' \Downarrow \sigma''} {\mathtt{while}\, e \,\mathtt{do}\, c \mid \sigma \Downarrow \sigma''} \end{gather*}

(The big-step semantics is not denotational because the second rule for while is not compositional: the recursion is not on a proper sub-part but instead on the entire while command.)

We shall prove that the denotational semantics is equivalent to the big-step semantics in two steps.

  1. The big-step semantics implies the denotational semantics. (completeness)
  2. The denotational semantics implies the big-step semantics. (soundness)

Theorem (Completeness). If \(c \mid \sigma \Downarrow \sigma'\), then \((\sigma,\sigma') \in C[\!|c|\!]\).
Proof. We proceed by induction on the derivation of \(c \mid \sigma \Downarrow \sigma'\). We have one case to consider per rule in the big-step semantics. (For the reader in a hurry: the case for while at the end is the only interesting one.)
Case: \[ \frac{}{\mathtt{skip} \mid \sigma \Downarrow \sigma} \] We need to show that \((\sigma,\sigma) \in C[\!|\mathtt{skip}|\!]\), which is immediate from the definition of the denotational semantics.
Case: \[ \frac{\sigma'(y) = \begin{cases} E[\!|e|\!](\sigma) & \text{if}\, y = x\\ \sigma(y) & \text{if}\, y \neq x \end{cases}}{ x := e \mid \sigma \Downarrow \sigma'} \] We need to show that \((\sigma,\sigma') \in C[\!|x := e|\!]\). Again this is immediate.
Case: \[ \frac{c_1 \mid \sigma \Downarrow \sigma' \quad c_2 \mid \sigma' \Downarrow \sigma''} {c_1 ; c_2 \mid \sigma \Downarrow \sigma''} \] We have two induction hypotheses: \((\sigma,\sigma') \in C[\!|c_1|\!]\) and \((\sigma',\sigma'') \in C[\!|c_2|\!]\). It follows (by definition) that \((\sigma,\sigma'') \in C[\!|c_1 ; c_2|\!]\).
Case: \[ \frac{E[\!|e|\!](\sigma) = 0 \quad c_2 \mid \sigma \Downarrow \sigma' } {\mathtt{if}\,e\,\mathtt{then}\,c_1\,\mathtt{else}\,c_2 \mid \sigma \Downarrow \sigma'} \] We have the induction hypothesis \((\sigma,\sigma') \in C[\!|c_2|\!]\). Together with the condition expression evaluating to 0, we have \((\sigma,\sigma') \in C[\!|\mathtt{if}\,e\,\mathtt{then}\,c_1\,\mathtt{else}\,c_2|\!]\).
Case: \[ \frac{E[\!|e|\!](\sigma) \neq 0 \quad c_1 \mid \sigma \Downarrow \sigma' } {\mathtt{if}\,e\,\mathtt{then}\,c_1\,\mathtt{else}\,c_2 \mid \sigma \Downarrow \sigma'} \] We have the induction hypothesis \((\sigma,\sigma') \in C[\!|c_1|\!]\). Together with the condition being non-zero, we have \((\sigma,\sigma') \in C[\!|\mathtt{if}\,e\,\mathtt{then}\,c_1\,\mathtt{else}\,c_2|\!]\).
Case: \[ \frac{E[\!|e|\!](\sigma) = 0} {\mathtt{while}\, e \,\mathtt{do}\, c \mid \sigma \Downarrow \sigma} \] From \(E[\!|e|\!](\sigma) = 0\) we have \((\sigma,\sigma) \in L(0,E[\!|e|\!],C[\!|c|\!]) \). Therefore \((\sigma,\sigma) \in C[\!|\mathtt{while}\, e \,\mathtt{do}\, c|\!]\).
Case: \[ \frac{E[\!|e|\!](\sigma) \neq 0 \quad c \mid \sigma \Downarrow \sigma' \quad \mathtt{while}\, e \,\mathtt{do}\, c \mid \sigma' \Downarrow \sigma''} {\mathtt{while}\, e \,\mathtt{do}\, c \mid \sigma \Downarrow \sigma''} \] We have the induction hypotheses \((\sigma,\sigma') \in C[\!|c|\!]\) and \((\sigma',\sigma'') \in C[\!|\mathtt{while}\, e \,\mathtt{do}\, c|\!]\). Unpacking the definition of the latter, we have \( (\sigma',\sigma'') \in L(n,E[\!|e|\!],C[\!|c|\!]) \) for some \(n\). Therefore we have \( (\sigma,\sigma'') \in L(1+n,E[\!|e|\!],C[\!|c|\!]) \). So we conclude that \((\sigma,\sigma'') \in C[\!|\mathtt{while}\, e \,\mathtt{do}\, c|\!]\).
QED.

The other direction, that if the denotation of \(c\) relates \(\sigma\) to \(\sigma'\), then so does the big-step semantics, takes a bit more work. The proof will be by induction on the structure of \(c\). In the case for while we need to reason about the \(L\) function. We get to assume that \((\sigma,\sigma') \in L(n,E[\!|e|\!],C[\!|c|\!])\) for some \(n\) and we have the induction hypothesis that \(\forall \sigma \sigma'.\, (\sigma,\sigma') \in C[\!|c|\!] \to c \mid \sigma \Downarrow \sigma' \). Because \(L\) is recursive, we're going to need a lemma about \(L\) and prove it by induction on the number of iterations.

Lemma If \((\sigma,\sigma') \in L(n,E[\!|e|\!],C[\!|c|\!])\) and \(\forall \sigma \sigma'.\, (\sigma,\sigma') \in C[\!|c|\!] \to c \mid \sigma \Downarrow \sigma' \), then \(\mathtt{while}\, e \,\mathtt{do}\, c \mid \sigma \Downarrow \sigma' \).
Proof. The proof is by induction on \(n\).
Case \(n=0\). We have \((\sigma,\sigma') \in L(0,E[\!|e|\!],C[\!|c|\!])\), so \(\sigma = \sigma'\) and \(E[\!|e|\!](\sigma) = 0\). Therefore we can conclude that \(\mathtt{while}\, e \,\mathtt{do}\, c \mid \sigma \Downarrow \sigma'\).
Case \(n=1 + n'\). We have \((\sigma,\sigma') \in L(1+n',E[\!|e|\!],C[\!|c|\!])\), so \(E[\!|e|\!](\sigma) \neq 0\) and \( (\sigma,\sigma_1) \in C[\!|c|\!] \) and \( (\sigma_1,\sigma') \in L(n',E[\!|e|\!],C[\!|c|\!]) \) for some \(\sigma_1\). From the premise about \(c\), we have \(c \mid \sigma \Downarrow \sigma_1\). From the induction hypothesis, we have \(\mathtt{while}\, e \,\mathtt{do}\, c \mid \sigma_1 \Downarrow \sigma'\). Putting all of these pieces together, we conclude that \(\mathtt{while}\, e \,\mathtt{do}\, c \mid \sigma \Downarrow \sigma'\).
QED.

Theorem (Soundness). For any \(\sigma\) and \(\sigma'\), if \((\sigma,\sigma') \in C[\!|c|\!]\), then \(c \mid \sigma \Downarrow \sigma'\).
Proof. The proof is by induction on the structure of \(c\).
Case \(\mathtt{skip}\). From \((\sigma,\sigma') \in C[\!|\mathtt{skip}|\!]\) we have \(\sigma = \sigma'\) and therefore \(\mathtt{skip} \mid \sigma \Downarrow \sigma'\).
Case \(x:=e\). We have \[ \sigma'(y) = \begin{cases} E[\!|e|\!](\sigma) & \text{if}\, y = x\\ \sigma(y) & \text{if}\, y \neq x \end{cases} \] and therefore \(x := e \mid \sigma \Downarrow \sigma'\).
Case \(c_1 ; c_2\). We have \( (\sigma, \sigma_1) \in C[\!|c_1|\!]\) and \( (\sigma_1, \sigma') \in C[\!|c_2|\!]\) for some \(\sigma_1\). So by the induction hypothesis, we have \(c_1 \mid \sigma \Downarrow \sigma_1\) and \(c_2 \mid \sigma_1 \Downarrow \sigma'\), from which we conclude that \( c_1 ; c_2 \mid \sigma \Downarrow \sigma'\).
Case \(\mathtt{if}\,e\,\mathtt{then}\,c_1\,\mathtt{else}\,c_2\). We have two cases to consider, whether \(E[\!|e|\!](\sigma) = 0\) or not.

  • Suppose \(E[\!|e|\!](\sigma) = 0\). Then \( (\sigma,\sigma') \in C[\!|c_2|\!] \) and by the induction hypothesis, \( c_2 \mid \sigma \Downarrow \sigma' \). We conclude that \( \mathtt{if}\,e\,\mathtt{then}\,c_1\,\mathtt{else}\,c_2 \mid \sigma \Downarrow \sigma' \).
  • Suppose \(E[\!|e|\!](\sigma) \neq 0\). Then \( (\sigma,\sigma') \in C[\!|c_1|\!] \) and by the induction hypothesis, \( c_1 \mid \sigma \Downarrow \sigma' \). We conclude that \( \mathtt{if}\,e\,\mathtt{then}\,c_1\,\mathtt{else}\,c_2 \mid \sigma \Downarrow \sigma' \).

Case \(\mathtt{while}\, e \,\mathtt{do}\, c\). From \((\sigma,\sigma') \in C[\!|\mathtt{while}\, e \,\mathtt{do}\, c|\!]\) we have \( (\sigma,\sigma') \in L(n, E[\!|e|\!], C[\!|c|\!]) \) for some \(n\). Also, by the induction hypothesis, we have that \( \forall \sigma \sigma'. \; (\sigma,\sigma') \in C[\!|c|\!] \to c \mid \sigma \Downarrow \sigma' \). By the Lemma about \(L\), we conclude that \(\mathtt{while}\, e \,\mathtt{do}\, c \mid \sigma \Downarrow \sigma'\).
QED.

Wow, the simple denotational semantics of IMP is correct!

The mechanization of all this in Coq is available here.

What about infinite loops?

Does this denotational semantics give meaning to programs with infinite loops, such as the following? \[ \mathtt{while}\, 1 \,\mathtt{do}\, \mathtt{skip} \] The answer is yes: the semantics defines a total function from commands to relations, so every program gets a meaning. The next question is which relation is the denotation of an infinite loop. Just like in the fixed-point semantics, the answer is the empty relation. \[ C[\!|\mathtt{while}\, 1 \,\mathtt{do}\, \mathtt{skip} |\!] = \{ (\sigma,\sigma') \mid \exists n.\; (\sigma,\sigma') \in L(n, E[\!|1|\!], C[\!|\mathtt{skip}|\!]) \} = \emptyset \]
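In the sketch above, this shows up as the bounded search failing to find any witness \(n\):

    # while 1 do skip: no n satisfies L(n, ...), so no pair enters the
    # relation; the bounded search gives up and reports that the pair is
    # simply not in the denotation.
    loop = C_while(('num', 1), C_skip, fuel=100)
    assert loop({'x': 0}) is None   # the denotation is the empty relation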

Comparison to using the least fixed point semantics

The standard denotational semantics for IMP defines the meaning of while in terms of the least fixed point of the following functional.

\[ W(de, dc)(dw) = \left\{ (\sigma,\sigma') \middle| \begin{array}{ll} \sigma = \sigma' & \text{if } de(\sigma) = 0 \\ \exists \sigma_1.\; (\sigma,\sigma_1) \in dc \land (\sigma_1,\sigma') \in dw & \text{if } de(\sigma) \neq 0 \end{array} \right\} \]

One of the standard ways to compute the least fixed point of a functional \(F\) is from Kleene's fixpoint theorem, which says that the least fixed point of \(F\) is \[ \bigcup_{k=0}^{\infty} F^k(\emptyset) \] where \begin{align*} F^0(x) & = x \\ F^{k+1}(x) & = F (F^k(x)) \end{align*} So the traditional denotation of while is: \[ C[\!| \mathtt{while}\,e\,\mathtt{do}\,c |\!] = \bigcup_{k=0}^{\infty} W(E[\!|e|\!],C[\!|c|\!])^k(\emptyset) \] Applying the definition of infinitary union, we have \[ C[\!| \mathtt{while}\,e\,\mathtt{do}\,c |\!] = \{ (\sigma,\sigma') \mid \exists k.\; (\sigma,\sigma') \in W(E[\!|e|\!],C[\!|c|\!])^k(\emptyset) \} \] which starts to look similar to our definition. But they are not trivially equivalent.

Consider the following loop that counts down to zero. \[ \mathtt{while}\,\neg (x=0)\, \mathtt{do}\, x := x - 1 \] To talk about the semantics of this loop, we create the following abbreviations for some relations on states. \begin{align*} R_0 &= \{ (\sigma, \sigma) \mid \sigma(x) = 0 \} \\ R_1 &= \{ (\sigma, \sigma') \mid \sigma(x) = 1 \land \sigma'(x) = 0 \} \\ R_2 &= \{ (\sigma, \sigma') \mid \sigma(x) = 2 \land \sigma'(x) = 0 \} \\ R_3 &= \{ (\sigma, \sigma') \mid \sigma(x) = 3 \land \sigma'(x) = 0 \} \\ & \vdots \end{align*}

  • If \(x=0\) in the initial state, the loop immediately terminates, so the final state is the same as the input state. This is \(R_0\).
  • If \(x=1\) in the initial state, the loop executes one iteration, so the final state has \(x=0\). This is \(R_1\).
  • If \(x=2\) in the initial state, the loop executes two iterations, so the final state has \(x=0\). This is \(R_2\).
  • and so on.
The \(L\) function computes exactly these \(R\)'s. \begin{align*} L(0, E[\!| \neg (x = 0) |\!], C[\!| x := x - 1 |\!]) &= R_0 \\ L(1, E[\!| \neg (x = 0) |\!], C[\!| x := x - 1 |\!]) &= R_1 \\ L(2, E[\!| \neg (x = 0) |\!], C[\!| x := x - 1 |\!]) &= R_2 \\ L(3, E[\!| \neg (x = 0) |\!], C[\!| x := x - 1 |\!]) &= R_3 \\ & \vdots \end{align*} The semantics of while given by \(L\) says that an initial state is related to a final state if it is possible to guess the iteration count \(n\) to pick out the appropriate line of \(L\) that relates the two states. \[ C[\!| \mathtt{while}\,e\,\mathtt{do}\,c |\!] = \{ (\sigma,\sigma') \mid \exists n.\, (\sigma,\sigma') \in L(n, E[\!|e|\!],C[\!|c|\!]) \} \] In contrast, Kleene's iteration incrementally builds up the union of all the \(R\)'s: \begin{align*} W(E[\!| \neg (x = 0) |\!], C[\!| x := x - 1 |\!])^0(\emptyset) &= \emptyset \\ W(E[\!| \neg (x = 0) |\!], C[\!| x := x - 1 |\!])^1(\emptyset) &= R_0\\ W(E[\!| \neg (x = 0) |\!], C[\!| x := x - 1 |\!])^2(\emptyset) &= R_0 \cup R_1 \\ W(E[\!| \neg (x = 0) |\!], C[\!| x := x - 1 |\!])^3(\emptyset) &= R_0 \cup R_1 \cup R_2 \\ & \vdots \end{align*} The semantics of while given by the least fixed point of \(W\) says that an initial state is related to a final state if, after a sufficient number of applications of \(W\), say \(k\), the two states are in the resulting union \(R_0 \cup \cdots \cup R_{k-1}\). \[ C[\!| \mathtt{while}\,e\,\mathtt{do}\,c |\!] = \{ (\sigma,\sigma') \mid \exists k.\; (\sigma,\sigma') \in W(E[\!|e|\!],C[\!|c|\!])^k(\emptyset) \} \]

In general, \( L(n,E[\!|e|\!],C[\!|c|\!]) \) gives the denotation of the loop exactly when \(n\) is the number of iterations executed by the loop in a given initial state. In contrast, \( W(E[\!|e|\!],C[\!|c|\!])^k(\emptyset) \) produces the \(k\)th approximation of the loop's meaning, providing the appropriate initial/final states for up to \(k-1\) iterations of the loop.
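The two ways of organizing the \(R\)'s can be compared concretely. Here is a sketch for the countdown loop, restricting \(x\) to \(0,\ldots,4\) so that every relation is a finite set of pairs of \(x\)-values (the restriction is mine, just to keep the sets finite):

    STATES = range(5)
    de = lambda x: 1 if x != 0 else 0            # E[ not (x = 0) ]
    dc = {(x, x - 1) for x in STATES if x > 0}   # C[ x := x - 1 ]

    def L(n):
        if n == 0:
            return {(x, x) for x in STATES if de(x) == 0}
        return {(x, x2) for (x, x1) in dc if de(x) != 0
                        for (y, x2) in L(n - 1) if y == x1}

    def W(dw):  # one application of the functional W(de, dc)
        return ({(x, x) for x in STATES if de(x) == 0}
                | {(x, x2) for (x, x1) in dc if de(x) != 0
                           for (y, x2) in dw if y == x1})

    def kleene(k):  # the k-th Kleene iterate, W^k(empty)
        r = set()
        for _ in range(k):
            r = W(r)
        return r

    assert L(2) == {(2, 0)}                       # exactly R_2
    assert kleene(3) == {(0, 0), (1, 0), (2, 0)}  # R_0, R_1, and R_2 together

The asserts make the difference visible: \(L(n)\) is the single fragment \(R_n\), while the \(k\)-th Kleene iterate accumulates the union of all the fragments below \(k\).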

However, these two algorithms are equivalent in the following sense.

Theorem (Equivalence to LFP semantics)

  1. If \( (\sigma,\sigma') \in L(n,E[\!|e|\!],C[\!|c|\!]) \), then \( \exists k.\; (\sigma,\sigma') \in W(E[\!|e|\!],C[\!|c|\!])^k(\emptyset) \).
  2. If \( (\sigma,\sigma') \in W(E[\!|e|\!],C[\!|c|\!])^k(\emptyset) \), then \( \exists n.\; (\sigma,\sigma') \in L(n,E[\!|e|\!],C[\!|c|\!]) \).
The first part is proved by induction on \(n\). The second part is proved by induction on \(k\). The full proofs are in the Coq development linked to above.

Parting Thoughts

  • The traditional least fixed-point semantics is overkill for IMP.
  • What makes this simple version work is the combination of choosing the denotation of commands to be relations on states (instead of functions on states) and the use of an existential for the number of loop iterations in the meaning of while.
  • The IMP language is just a toy. As we consider more powerful language constructs such as functions, pointers, etc., at what point will we be forced to use the heavier-duty math of least fixed-points, continuous functions, etc.? I'm looking forward to finding out!