Tuesday, April 24, 2018

What do real numbers have in common with lambdas? and what does continuity have to do with it?

Continuous functions over the real numbers

As a high school student and undergraduate I learned in Calculus that
  1. real numbers involve infinite precision, e.g. some have no finite decimal representation, and
  2. a continuous function forms an unbroken line, a necessary condition to be differentiable.
For an example, the decimal representation of \(\sqrt 2\) goes on forever: \[1.41421 \ldots\] Later on, in a course on Real Analysis, I learned that one way to define the real numbers is to declare them to be Cauchy sequences, that is, infinite sequences of rational numbers that get closer and closer together. So, for example, \(\sqrt 2\) is declared to be the sequence \(1, \frac{3}{2}, \frac{17}{12}, \frac{577}{408}, \ldots\) described by the following recursive formulas.
\[A_0 = 1 \qquad A_{n+1} = \frac{A_n}{2} + \frac{1}{A_n} \hspace{1in} (1)  \label{eq:caucy-sqrt-2}\]
Depending on how close an approximation to \(\sqrt 2\) you need, you can go further out in this sequence. (Alternatively, one can represent \(\sqrt 2\) by its sequence of continued-fraction convergents.)
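We can run recurrence (1) in Python with exact rational arithmetic. This is just a sketch; the tolerance-based stopping rule is my own choice, not part of the definition.

```python
from fractions import Fraction

def sqrt2_cauchy(tol=Fraction(1, 10**6)):
    """Iterate A_{n+1} = A_n/2 + 1/A_n, starting from A_0 = 1,
    until two successive terms are within tol of each other."""
    a = Fraction(1)
    while True:
        a_next = a / 2 + 1 / a
        if abs(a_next - a) < tol:
            return a_next
        a = a_next

approx = sqrt2_cauchy()
# the square of the approximation is close to 2
assert abs(approx * approx - 2) < Fraction(1, 10**5)
```

Using `Fraction` keeps every term a rational number, exactly as in the Cauchy-sequence view of \(\sqrt 2\).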
For an example of a continuous function, Figure 1 depicts \(x^3 - x^2 - 4x\). On the other hand, Figures 2 and 3 depict functions that are not continuous. The function \(1/\mathrm{abs}(x-\sqrt 2)^{1/4}\) in Figure 2 is not continuous because it goes to infinity as it approaches \(\sqrt 2\). The function \((x+1)\,\mathrm{sign}(x)\) in Figure 3 is not continuous because it jumps from \(-1\) to \(1\) at \(0\).

Figure 1. The function \(x^3 - x^2 - 4x\) is continuous.
Figure 2. The function \(1/\mathrm{abs}(x-\sqrt 2)^{1/4}\) is not continuous at \(\sqrt 2\).
Figure 3. The function \((x+1)\,\mathrm{sign}(x)\) is not continuous at \(0\).

You may recall the \(\epsilon\)-\(\delta\) definition of continuity, stated below and depicted in Figure 4.
A function \(f\) is continuous at a point \(x\) if for any \(\epsilon > 0\) there exists a \(\delta > 0\) such that for any \(x'\) in the interval \((x - \delta,x+\delta)\), \(f(x')\) is in \((f(x) -\epsilon, f(x) + \epsilon)\).
In other words, when a function is continuous, if you want to determine its result with an accuracy of \(\epsilon\), it suffices to measure the input with an accuracy of \(\delta\).

Figure 4. The \(\epsilon\)-\(\delta\) definition of continuity.
One connection between the infinite nature of real numbers and continuity that only recently sank in for me is that continuous functions are the ones that can be reasonably approximated by applying them to approximate, finitely-represented inputs. For example, suppose you wish to compute \(f(\sqrt 2)\) for some continuous function \(f\). You can accomplish this by applying \(f\) to each rational number in the Cauchy sequence for \(\sqrt 2\) until two successive results are closer than your desired accuracy. On the other hand, consider trying to approximate the function from Figure 2 by applying it to rational numbers in the Cauchy sequence for \(\sqrt 2\). No matter how far down the sequence you go, you’ll still get a result that is wrong by an infinite margin!
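Here is a small Python sketch of this approximation scheme, with \(f(x) = x^2 + x\) standing in for an arbitrary continuous function (both the choice of \(f\) and the stopping rule are mine, for illustration only):

```python
from fractions import Fraction

def approx_apply(f, tol=Fraction(1, 10**6)):
    """Apply f to successive terms of the Cauchy sequence for
    sqrt(2) until two successive results are within tol."""
    a = Fraction(1)
    prev = f(a)
    while True:
        a = a / 2 + 1 / a          # next Cauchy approximation
        cur = f(a)
        if abs(cur - prev) < tol:  # results have stabilized
            return cur
        prev = cur

result = approx_apply(lambda x: x * x + x)
# the exact answer would be 2 + sqrt(2)
assert abs(float(result) - (2 + 2 ** 0.5)) < 1e-3
```

For a discontinuous \(f\) like the one in Figure 2, this loop would never stabilize near the discontinuity, which is exactly the point of the paragraph above.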

The \(\lambda\)-calculus and continuous functions

In graduate school I studied programming languages and learned that
  1. the \(\lambda\)-calculus is a little language for creating and applying functions, and
  2. Dana S. Scott’s semantics of the \(\lambda\)-calculus interprets \(\lambda\)’s as continuous functions.
For example, the \(\lambda\) expression \[\lambda x.\; x + 1\] creates an anonymous function that maps its input \(x\), say a natural number, to the next greatest one. The graph of this function is \[\left\{ \begin{array}{l} 0\mapsto 1, \\ 1\mapsto 2, \\ 2\mapsto 3, \\ \quad\,\vdots \end{array} \right\}\] which is infinite. So we have our first similarity between the real numbers and \(\lambda\)’s, both involve infinity.
A key characteristic of the \(\lambda\)-calculus is that functions can take functions as input. Thus, the semantics of the \(\lambda\)-calculus is also concerned with functions over infinite entities (just like functions over the real numbers). For example, here is a \(\lambda\) expression that takes a function \(f\) and produces a function that applies \(f\) twice in succession to its input \(x\). \[\lambda f.\; \lambda x.\; f(f(x))\] The graph of this function is especially difficult to write down. Not only does it have an infinite domain and range, but each element in the domain and range is an infinite entity. \[\left\{ \begin{array}{l} \{ 0\mapsto 1, 1\mapsto 2, 2\mapsto 3, \ldots \} \mapsto \{ 0\mapsto 2, 1\mapsto 3, 2\mapsto 4, \ldots \},\\ \{ 0\mapsto 0, 1\mapsto 2, 2\mapsto 4, \ldots \} \mapsto \{ 0\mapsto 0, 1\mapsto 4, 2\mapsto 8, \ldots \},\\ \ldots \end{array} \right\}\]
Denotational semantics for the \(\lambda\)-calculus interpret \(\lambda\)’s as continuous functions, so just based on the terminology there should be another similarity with real numbers! However, these continuous functions are over special sets called domains, not real numbers, and the definition of continuity in this setting bears little resemblance to the \(\epsilon\)-\(\delta\) definition. For example, in Dana S. Scott’s classic paper Data Types as Lattices, the domain is the powerset of the natural numbers, \(\mathcal{P}(\mathbb{N})\). This domain can be used to represent a function's graph by encoding pairs of naturals as naturals and finite sets of naturals as naturals, that is, by creating a bijection for each. The following are the easier-to-specify directions of the two bijections: the mapping from pairs to naturals and the mapping from naturals to sets of naturals.
\[\begin{aligned} \langle n, m \rangle &= 2^n (2m+1) - 1 \\ \mathsf{set}(0) &= \emptyset \\ \mathsf{set}(1+k) &= \{ m \} \cup \mathsf{set}(n) & \text{if } \langle n, m \rangle = k\end{aligned}\]
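The pair encoding is easy to experiment with. Here is a Python sketch of it together with its inverse; the decoding direction is my own reconstruction (count the factors of two in \(k+1\)), not spelled out in the text above.

```python
def pair(n, m):
    """The encoding <n, m> = 2^n (2m + 1) - 1."""
    return 2 ** n * (2 * m + 1) - 1

def unpair(k):
    """The inverse: n is the number of factors of two in k + 1,
    and the remaining odd part of k + 1 is 2m + 1."""
    j = k + 1
    n = 0
    while j % 2 == 0:
        j //= 2
        n += 1
    return n, (j - 1) // 2

# every natural decodes to exactly one pair, and back again
for k in range(100):
    assert pair(*unpair(k)) == k
```

The round trip works because every positive integer factors uniquely as a power of two times an odd number.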
Scott defines the continuous functions on \(\mathcal{P}(\mathbb{N})\) as those functions \(h\) that satisfy
\[h(f) = \bigcup \{ h(g) \mid g \subseteq_{\mathit{fin}} f \} \hspace{1in} (2) \label{eq:cont-pn}\]
In other words, the value of a continuous function \(h\) on some function \(f \in \mathcal{P}(\mathbb{N})\) must be the same as the union of applying \(h\) to all the finite subgraphs of \(f\). One immediately wonders, why are the \(\lambda\)-definable functions continuous in this sense? Consider some \(\lambda\) expression \(h\) that takes as input a function \(f\).
But \(f\) is a function; an infinite object. What does it mean to “compute” with an “infinite” argument? In this case it means most simply that \(h(f)\) is determined by asking of \(f\) finitely many questions: \(f(m_0), f(m_1), ..., f(m_{k-1})\). —Dana S. Scott, A type-theoretical alternative to ISWIM, CUCH, OWHY, 1969.
Put another way, if \(h\) terminates and returns a result, then it will only have had a chance to call \(f\) finitely many times. So it suffices to apply \(h\) instead to a finite subset of the graph of \(f\). However, we do not know up-front which subset of \(f\) to use, but it certainly suffices to try all of them!

Relating the two kinds of continuity

But what does equation (2) have to do with continuous functions over the real numbers? What does it have to do with the \(\epsilon\)-\(\delta\) definition? This question has been in the back of my mind for some time, but only recently have I had the opportunity to learn the answer.
To understand how these two kinds of continuity are related, it helps to focus on the way that infinite entities can be approximated with finite ones in the two settings. We can approximate a real number with a rational interval. For example, referring back to the Cauchy sequence for \(\sqrt 2\), equation (1), its first two terms bracket \(\sqrt 2\): \[\sqrt 2 \in \left(1, \frac{3}{2}\right)\] Of course, an approximation does not uniquely identify the thing it approximates, so there are other real numbers in this interval, such as \(\sqrt{2.1}\). \[\sqrt{2.1} \in \left(1, \frac{3}{2}\right)\]
Likewise we can approximate the infinite graph of a function with a finite part of its graph. For example, let \(G\) be a graph with just one input-output entry. \[G=\{ 1 \mapsto 2 \}\] Then we consider \(G\) to be an approximation of any function that agrees with \(G\) (maps \(1\) to \(2\)), which is to say its graph is a superset of \(G\). So the set of all functions that are approximated by \(G\) can be expressed with a set comprehension as follows: \(\{ f \mid G \subseteq f\}\). In particular, the function \(+1\) that adds one to its input is approximated by \(G\). \[\left\{ \begin{array}{l} 0\mapsto 1, \\ 1\mapsto 2, \\ 2\mapsto 3, \\ \quad\,\vdots \end{array} \right\} \in \{ f \mid G \subseteq f\}\] But also the function \(\times 2\) that doubles its input is approximated by \(G\). \[\left\{ \begin{array}{l} 0\mapsto 0, \\ 1\mapsto 2, \\ 2\mapsto 4, \\ \quad\,\vdots \end{array} \right\} \in \{ f \mid G \subseteq f\}\] Of course, a better approximation such as \(G'=\{1\mapsto 2, 2\mapsto 3\}\) is able to tell these two functions apart.
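We can play with these approximations in Python, representing a finite graph as a set of pairs. The helper `approximates` is just the condition \(G \subseteq f\), checked entry by entry (`G2` plays the role of \(G'\), since a prime is not a valid identifier):

```python
def approximates(G, f):
    """G approximates f when f agrees with every entry of G,
    i.e. G is a subset of the graph of f."""
    return all(f(x) == y for (x, y) in G)

G  = {(1, 2)}            # the graph G from the text
G2 = {(1, 2), (2, 3)}    # the better approximation G'

add_one = lambda x: x + 1
double  = lambda x: 2 * x

# G approximates both functions ...
assert approximates(G, add_one) and approximates(G, double)
# ... but G' tells them apart
assert approximates(G2, add_one)
assert not approximates(G2, double)
```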
The interval \((1, 3/2)\) and the set \(\{f\mid G \subseteq f\}\) are both examples of neighborhoods (a.k.a. base elements) in a topological space. The field of Topology was created to study the essence of continuous functions, capturing the similarities and abstracting away the differences regarding how such functions work in different settings. A topological space is just some set \(X\) together with a collection \(B\) of neighborhoods, called a base, that must satisfy a few conditions that we won’t get into. We’ve already seen two topological spaces.
  1. The real numbers form a topological space where each neighborhood consists of all the real numbers in a rational interval.
  2. The powerset \(\mathcal{P}(\mathbb{N})\) forms a topological space where each neighborhood consists of all the functions approximated by a finite graph.
The \(\epsilon\)-\(\delta\) definition of continuity generalizes to topological spaces: instead of talking about intervals, it talks generically about neighborhoods. In the following, the interval \((f(x) -\epsilon, f(x) + \epsilon)\) is replaced by neighborhood \(E\) and the interval \((x - \delta,x+\delta)\) is replaced by neighborhood \(D\).
A function \(f\) is continuous at a point \(x\) if for any neighborhood \(E\) that contains \(f(x)\), there exists a neighborhood \(D\) that contains \(x\) such that for any \(y\) in \(D\), \(f(y)\) is in \(E\).
Now let us instantiate this topological definition of continuity into \(\mathcal{P}(\mathbb{N})\).
A function \(f\) over \(\mathcal{P}(\mathbb{N})\) is continuous at \(X\) if for any finite set \(E\) such that \(E \subseteq f(X)\), there exists a finite set \(D\) with \(D \subseteq X\) such that for any \(Y\), \(D \subseteq Y\) implies \(E \subseteq f(Y)\).
Hmm, this still doesn’t match up with the definition of continuity in equation (2), but perhaps they are equivalent. Let us take the above as the definition and try to prove equation (2).
First we show that \[h(f) \subseteq \bigcup \{ h(g) \mid g \subseteq_{\mathit{fin}} f \}\] Let \(x'\) be an arbitrary element of \(h(f)\). To show that \(x'\) is in the right-hand side we need to identify some finite \(g\) such that \(g \subseteq f\) and \(x' \in h(g)\), that is, \(\{x'\} \subseteq h(g)\). But this is just what continuity gives us, instantiating the definition's \(f\) with \(h\), \(X\) with \(f\), \(E\) with \(\{x'\}\), and both \(D\) and \(Y\) with \(g\). Second, we need to show that \[\bigcup \{ h(g) \mid g \subseteq_{\mathit{fin}} f \} \subseteq h(f)\] This time let \(x'\) be an element of \(\bigcup \{ h(g) \mid g \subseteq_{\mathit{fin}} f \}\), so we know there is some finite set \(g\) such that \(x' \in h(g)\) and \(g \subseteq f\). Of course \(\{x'\}\) is a finite set and \(\{x'\} \subseteq h(g)\), so we can apply the definition of continuity (at \(g\)) to obtain a finite set \(D\) such that \(D \subseteq g\) and, for all \(Y\), \(D \subseteq Y\) implies \(\{x'\} \subseteq h(Y)\). From \(D \subseteq g\) and \(g \subseteq f\) we have \(D \subseteq f\) by transitivity. So instantiating \(Y\) with \(f\) we have \(\{x'\} \subseteq h(f)\) and therefore \(x' \in h(f)\).
We have shown that the topologically-derived definition of continuity for \(\mathcal{P}(\mathbb{N})\) implies the definition used in the semantics of the \(\lambda\)-calculus, i.e., equation (2). It is also straightforward to prove the other direction, taking equation (2) as given and proving that the topologically-derived definition holds. Thus, continuity for functions over real numbers really is similar to continuity for \(\lambda\) functions: both are instances of continuous functions in a topological space.

Continuous functions over partial orders

In the context of Denotational Semantics, domains are often viewed as partial orders where the ordering \(g \sqsubseteq f\) means that \(g\) approximates \(f\), or \(f\) is more informative than \(g\). The domain \(\mathcal{P}(\mathbb{N})\) with set containment \(\subseteq\) forms a partial order. Referring back to the earlier examples, with \(G=\{ 1 \mapsto 2 \}\) and \(G'=\{1\mapsto 2, 2\mapsto 3\}\), we have \(G \sqsubseteq G'\), \(G' \sqsubseteq +1\), and \(G \sqsubseteq \times 2\). In a partial order, the join \(x \sqcup y\) of \(x\) and \(y\) is the least element that is greater than both \(x\) and \(y\). For the partial order on \(\mathcal{P}(\mathbb{N})\), join corresponds to set union.
In the context of partial orders, continuity is defined with respect to infinite sequences of ever-better approximations: \[f_0 \sqsubseteq f_1 \sqsubseteq f_2 \sqsubseteq \cdots\] A function \(h\) is continuous if applying it to the join of the sequence is the same as applying it to each element of the sequence and then taking the join.
\[h\left(\bigsqcup_{n\in\mathbb{N}} f_n\right) = \bigsqcup_{n\in\mathbb{N}} h(f_n) \hspace{1in} (3) \label{eq:cont-cpo}\]
But this equation is not so different from equation (2), which expresses continuity on \(\mathcal{P}(\mathbb{N})\). For any function \(f\) (with infinite domain) we can find a sequence \((f_n)_{n=0}^{\infty}\) of ever-better but still finite approximations of \(f\) such that \[f = \bigsqcup_{n\in\mathbb{N}} f_n\] Then both equations (2) and (3) tell us that \(h(f)\) is equal to the union of applying \(h\) to each \(f_n\).
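Here is a tiny finite illustration of equation (3) in Python. It uses the "range of the graph" operator as an example of a continuous \(h\), and a truncated chain of approximations of the \(+1\) function as a finite stand-in for the infinite join; the names and truncation are mine.

```python
def restrict(n):
    """The graph of +1 restricted to inputs 0..n-1, giving an
    ever-better chain of finite approximations as n grows."""
    return frozenset((i, i + 1) for i in range(n))

def h(g):
    """The 'range of the graph' operator, a continuous function:
    each element of the output comes from one entry of the input."""
    return frozenset(out for (_, out) in g)

chain = [restrict(n) for n in range(10)]
join = frozenset().union(*chain)   # the join of the finite chain

# equation (3), checked on the truncated chain
assert h(join) == frozenset().union(*(h(f) for f in chain))
```

A discontinuous operator, such as one that tests whether its input is the *whole* graph of \(+1\), could not pass this check for every truncation.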

Further Reading

The following is the list of resources that I found helpful in trying to understand the relationship between real numbers, \(\lambda\)’s, and the role of continuity.
  • Data Types as Lattices by Dana S. Scott.
  • A type-theoretical alternative to ISWIM, CUCH, OWHY by Dana S. Scott.
  • The Formal Semantics of Programming Languages by Glynn Winskel.
  • Topology via Logic by Steven Vickers.
  • Topology (2nd Edition) by James R. Munkres.
  • Introduction to Lattices and Order by B.A. Davey and H.A. Priestley.
  • The Wikipedia articles on

Saturday, December 23, 2017

Putting the Function back in Lambda

Happy holidays! There’s nothing quite like curling up in a comfy chair on a rainy day and proving a theorem in your favorite proof assistant.

Lately I’ve been interested in graph models of the \(\lambda\)-calculus, that is, models that represent a \(\lambda\) with relations from inputs to outputs. The use of relations instead of functions is not a problem when reasoning about expressions that produce numbers, but it does introduce problems when reasoning about expressions that produce higher-order functions. Some of these expressions are contextually equivalent but not denotationally equivalent. For example, consider the following two expressions. \[{\lambda f.\,} (f {\;}0) + (f {\;}0) =_{\mathrm{ctx}} {\lambda f.\,} ({\lambda x.\,} x + x) {\;}(f {\;}0) \qquad\qquad (1)\] The expression on the left-hand side has two copies of a common subexpression \((f {\;}0)\). The expression on the right-hand side is optimized to have just a single copy of \((f {\;}0)\). The left and right-hand expressions in equation (1) are contextually equivalent because the \(\lambda\)-calculus is a pure language (no side effects), so whether we call \(f\) once or twice does not matter, and it always returns the same result given the same input. Unfortunately, the two expressions in equation (1) are not denotationally equivalent. \[{\mathcal{E}[\![ {\lambda f.\,} (f {\;}0) + (f {\;}0) ]\!]}\emptyset \neq {\mathcal{E}[\![ {\lambda f.\,} ({\lambda x.\,} x + x) {\;}(f {\;}0) ]\!]}\emptyset \qquad\qquad (2)\] Recall that my semantics \(\mathcal{E}\) maps an expression and environment to a set of values. The sets are not there because an expression produces multiple conceptually-different values; they are needed because we represent an (infinite) function as an infinite set of finite relations. So to prove the above inequality (2) we simply need to find a value that is in the set on the left-hand side but not in the set on the right-hand side. The idea is that we consider the behavior when the parameter \(f\) is bound to a relation that is not a function.
In particular, let \(f\) be bound to the relation \[R = \{ (0,1), (0,2) \}\] Now when we consider the application \((f {\;}0)\), the semantics of function application given by \(\mathcal{E}\) can choose the result to be either \(1\) or \(2\). Furthermore, for the left-hand side of equation (2), it can choose \(1\) for the first \((f {\;}0)\) and \(2\) for the second \((f {\;}0)\). Thus, the result of the function can be \(3\). \[\{ (R,3) \} \in {\mathcal{E}[\![ {\lambda f.\,} (f {\;}0) + (f {\;}0) ]\!]}\emptyset\] Of course, this function could never actually produce \(3\) because \(R\) does not correspond to any \(\lambda\)’s. In other words, garbage-in garbage-out. Turning to the right-hand side of equation (2), there is only one \((f{\;}0)\), which can produce either \(1\) or \(2\), so the result of the outer function can be \(2\) or \(4\), but not \(3\).

\[\begin{aligned} \{ (R,2) \} &\in {\mathcal{E}[\![ {\lambda f.\,} ({\lambda x.\,} x + x) {\;}(f {\;}0) ]\!]}\emptyset\\ \{ (R,3) \} &\notin {\mathcal{E}[\![ {\lambda f.\,} ({\lambda x.\,} x + x) {\;}(f {\;}0) ]\!]}\emptyset\\ \{ (R,4) \} &\in {\mathcal{E}[\![ {\lambda f.\,} ({\lambda x.\,} x + x) {\;}(f {\;}0) ]\!]}\emptyset\end{aligned}\]
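The possible results are easy to enumerate in Python. This sketch only mimics the choices the semantics makes for \((f\;0)\); it is not the full definition of \(\mathcal{E}\).

```python
R = {(0, 1), (0, 2)}   # a relation that is not a function

def outputs(rel, arg):
    """All results the semantics may choose for (f arg)
    when f is bound to rel."""
    return {out for (inp, out) in rel if inp == arg}

# left-hand side: two independent choices for (f 0)
lhs = {a + b for a in outputs(R, 0) for b in outputs(R, 0)}
# right-hand side: one choice, then doubled by (λx. x + x)
rhs = {a + a for a in outputs(R, 0)}

assert lhs == {2, 3, 4}   # 3 arises from mixing 1 and 2
assert rhs == {2, 4}      # 3 is impossible on the right
```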

So we need to put the function back in \(\lambda\)! That is, we need to restrict the notion of values so that all the relations are also functions. Recall the definition: a function \(f\) is a relation on two sets \(A\) and \(B\) such that for all \(a \in A\) there exists a unique \(b \in B\) such that \((a,b) \in f\). In other words, if \((a,b) \in f\) and \((a,b') \in f\), then necessarily \(b = b'\). Can we simply add this restriction to our notion of value? Not quite. If we literally applied this definition, we could still get graphs such as the following one, which maps two different approximations of the add-one function to different outputs. This graph does not correspond to any \(\lambda\). \[\{ (\{(0,1)\}, 2), (\{(0,1),(5,6) \}, 3) \}\]

So we need to generalize the notion of function to allow for differing approximations. We shall do this by generalizing from equality to consistency, written \(\sim\). Two integers are consistent when they are equal. Two graphs are consistent when they map consistent inputs to consistent outputs. We are also forced to explicitly define inconsistency, for reasons we explain below.

\[\begin{gathered} \frac{}{n \sim n} \qquad \frac{\begin{array}{l}\forall v_1 v'_1 v_2 v'_2, (v_1,v'_1) \in t_1 \land (v_2,v'_2) \in t_2 \\ \implies (v_1 \sim v_2 \land v'_1 \sim v'_2) \lor v_1 \not\sim v_2 \end{array}} {t_1 \sim t_2} \\[2ex] \frac{n_1 \neq n_2}{n_1 \not\sim n_2} \qquad \frac{(v_1,v'_1) \in t_1 \quad (v_2,v'_2) \in t_2 \quad v_1 \sim v_2 \quad v'_1 \not\sim v'_2} {t_1 \not\sim t_2} \\[2ex] \frac{}{n \not\sim t} \qquad \frac{}{t \not\sim n}\end{gathered}\]

The definition of consistency is made a bit more complicated than I expected because the rules of an inductive definition must be monotonic, so we can’t negate a recursive application or put it on the left of an implication. In the above definition of consistency for graphs \(t_1 \sim t_2\), it would have been more natural to say \(v_1 \sim v_2 \implies v'_1 \sim v'_2\) in the premise, but then \(v_1 \sim v_2\) is on the left of an implication. The above inductive definition works around this problem by mutually defining consistency and inconsistency. We then prove that inconsistency is the negation of consistency.

Proposition 1 (Inconsistency) \(v_1 \not\sim v_2 = \neg (v_1 \sim v_2)\)
Proof. We first establish by mutual induction that \(v_1 \sim v_2 \implies \neg (v_1 \not\sim v_2)\) and \(v_1 \not\sim v_2 \implies \neg (v_1 \sim v_2)\). We then show that \((v_1 \sim v_2) \lor (v_1 \not\sim v_2)\) by induction on \(v_1\) and case analysis on \(v_2\). Therefore \(\neg (v_1 \not\sim v_2) \implies v_1 \sim v_2\), so we have proved both directions of the desired equality. \(\Box\)

Armed with this definition of consistency, we can define a generalized notion of function; let’s call it \(\mathsf{is\_fun}\). \[\mathsf{is\_fun}\;t \equiv \forall v_1 v_2 v'_1 v'_2, (v_1,v'_1) \in t \land (v_2,v'_2) \in t \land v_1 \sim v_2 \implies v'_1 \sim v'_2\] Next we restrict the notion of value to require the graphs to satisfy \(\mathsf{is\_fun}\). Recall that we defined values by the following grammar. \[\begin{array}{lrcl} \text{numbers} & n & \in & \mathbb{Z} \\ \text{graphs} & t & ::= & \{ (v_1,v'_1), \ldots, (v_n,v'_n) \}\\ \text{values} & v & ::= & n \mid t \end{array}\] We keep this definition but add an inductive definition of a more refined notion of value, namely \(\mathsf{is\_val}\). Numbers are values, and graphs are values so long as they satisfy \(\mathsf{is\_fun}\) and only map values to values.

\[\begin{gathered} \frac{}{\mathsf{is\_val}\,n} \qquad \frac{\mathsf{is\_fun}\;t \quad \forall v v', (v,v') \in t \implies \mathsf{is\_val}\,v \land \mathsf{is\_val}\,v'} {\mathsf{is\_val}\,t}\end{gathered}\]
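Here is a Python sketch of consistency, \(\mathsf{is\_fun}\), and \(\mathsf{is\_val}\), with values represented as integers or frozensets of pairs. Python imposes no monotonicity restriction, so (by Proposition 1) we can use the negation of the recursive call directly instead of defining inconsistency separately.

```python
def consistent(v1, v2):
    """v1 ~ v2: numbers are consistent when equal; graphs are
    consistent when consistent inputs yield consistent outputs;
    a number is never consistent with a graph."""
    if isinstance(v1, int) and isinstance(v2, int):
        return v1 == v2
    if isinstance(v1, int) or isinstance(v2, int):
        return False
    return all(consistent(o1, o2)
               for (i1, o1) in v1 for (i2, o2) in v2
               if consistent(i1, i2))

def is_fun(t):
    """Generalized function-ness: within one graph, consistent
    inputs must map to consistent outputs."""
    return all(consistent(o1, o2)
               for (i1, o1) in t for (i2, o2) in t
               if consistent(i1, i2))

def is_val(v):
    """Numbers are values; a graph is a value when it satisfies
    is_fun and maps values to values."""
    if isinstance(v, int):
        return True
    return is_fun(v) and all(is_val(i) and is_val(o) for (i, o) in v)

# the problematic graph from above fails is_fun (and so is_val):
bad = frozenset({(frozenset({(0, 1)}), 2),
                 (frozenset({(0, 1), (5, 6)}), 3)})
assert not is_val(bad)
```

Its two inputs are consistent approximations of the add-one function, yet the outputs \(2\) and \(3\) are inconsistent, which is exactly what `is_fun` rejects.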

We are now ready to update our semantic function \(\mathcal{E}\). The one change that we make is to require that each graph \(t\) satisfies \(\mathsf{is\_val}\) in the meaning of a \(\lambda\). \[{\mathcal{E}[\![ {\lambda x.\,} e ]\!]}\rho = \{ t \mid \mathsf{is\_val}\;t \land \forall (v,v')\in t, v' \in {\mathcal{E}[\![ e ]\!]}\rho(x{:=}v) \}\] Hopefully this change to the semantics enables a proof that \(\mathcal{E}\) is deterministic. Indeed, we shall show that if \(v \in {\mathcal{E}[\![ e ]\!]}\rho\) and \(v' \in {\mathcal{E}[\![ e ]\!]}\rho'\) for any suitably related \(\rho\) and \(\rho'\), then \(v \sim v'\).

To relate \(\rho\) and \(\rho'\), we extend the definitions of consistency and \(\mathsf{is\_val}\) to environments.

\[\begin{gathered} \emptyset \sim \emptyset \qquad \frac{v \sim v' \quad \rho \sim \rho'} {\rho(x{:=}v) \sim \rho'(x{:=}v')} \\[2ex] \mathsf{val\_env}\;\emptyset \qquad \frac{\mathsf{is\_val}\; v \quad \mathsf{val\_env}\;\rho} {\mathsf{val\_env}\;\rho(x{:=}v)}\end{gathered}\]

We will need a few small lemmas concerning these definitions and their relationship with the \(\sqsubseteq\) ordering on values.

Proposition 2 

  1. If \(\mathsf{val\_env}\;\rho\) and \(\rho(x) = v\), then \(\mathsf{is\_val}\; v\).

  2. If \(\rho \sim \rho'\), \(\rho(x) = v\), \(\rho'(x) = v'\), then \(v \sim v'\).

Proposition 3 

  1. If \(\mathsf{is\_val}\;v'\) and \(v \sqsubseteq v'\), then \(\mathsf{is\_val}\; v\).

  2. If \(v_1 \sqsubseteq v'_1\), \(v_2 \sqsubseteq v'_2\), and \(v'_1 \sim v'_2\), then \(v_1 \sim v_2\).

We now come to the main theorem, which is proved by induction on \(e\), using the above three propositions.

Theorem (Determinism of \(\mathcal{E}\)) If \(v \in {\mathcal{E}[\![ e ]\!]}\rho\), \(v' \in {\mathcal{E}[\![ e ]\!]}\rho'\), \(\mathsf{val\_env}\;\rho\), \(\mathsf{val\_env}\;\rho'\), and \(\rho \sim \rho'\), then \(\mathsf{is\_val}\;v\), \(\mathsf{is\_val}\;v'\), and \(v \sim v'\).

Sunday, October 15, 2017

New revision of the semantics paper (POPL rejection, ESOP submission)

My submission about declarative semantics to POPL was rejected. It's been a few weeks now, so I'm not so angry about it anymore. I've revised the paper and will be submitting it to ESOP this week.

The main reason for rejection according to the reviewers was a lack of technical novelty, but I think the real reasons were that 1) the paper came across as too grandiose and as a result, it accidentally annoyed the reviewer who is an expert in denotational semantics, and 2) the paper did not do a good job of comparing to the related set-theoretic models of Plotkin and Engeler.

Regarding 1), in the paper I use the term "declarative semantics" to try and distance this new semantics from the standard lattice-based denotational semantics. However, the reviewer took it to claim that the new semantics is not a denotational semantics, which is clearly false. In the new version of the paper I've removed the term "declarative semantics" and instead refer to the new semantics as a denotational semantics of the "elementary" variety. Also, I've toned down the sales pitch to better acknowledge that this new semantics is not the first elementary denotational semantics.

Regarding 2), I've revised the paper to include a new section at the beginning that gives background on the elementary semantics of Plotkin, Engeler, and Coppo et al. This should help put the contributions of the paper in context.

Other than that, I've added a section with a counterexample to full abstraction. A big thanks to the POPL reviewers for the counterexample! (Also thanks to Max New, who sent me the counterexample a couple of months ago.)

Unfortunately, the ESOP page limit is a bit shorter, so I removed the relational version of the semantics and also the part about mutable references.

A draft of the revision is available on arXiv. Feedback is most welcome, especially from experts in denotational semantics! I really hope that this version is no longer annoying, but if it is, please tell me!

Tuesday, October 03, 2017

Comparing to Plotkin and Engeler's Set-theoretic Models of the Lambda Calculus

On the plane ride back from ICFP last month I had a chance to re-read and better understand Plotkin’s Set-theoretical and other elementary models of the \(\lambda\)-calculus (Technical Report 1972, Theoretical Computer Science 1993) and to read, for the first time, Engeler’s Algebras and combinators (Algebra Universalis 1981). As I wrote in my draft paper Declarative semantics for functional languages: compositional, extensional, and elementary, the main intuitions behind my simple semantics are present in these earlier papers, but until now I did not understand these other semantics deeply enough to give a crisp explanation of the similarities and differences. (The main intuitions are also present in the early work on intersection type systems, and my semantics is more closely related to those systems. A detailed explanation of that relationship is given in the draft paper.)

I should note that Engeler’s work was in the context of combinators (S and K), not the \(\lambda\)-calculus, but of course the \(\lambda\)-calculus can be encoded into combinators. I’ve ported his definitions to the \(\lambda\)-calculus, along the lines suggested by Plotkin (1993), to make for easier comparison. In addition, I’ll extend both Engeler and Plotkin’s semantics to include integers and integer arithmetic in addition to the \(\lambda\)-calculus. Here’s the syntax for the \(\lambda\)-calculus that we consider here: \[\begin{array}{rcl} && n \in \mathbb{Z} \qquad x \in \mathbb{X} \;\;\text{(program variables)}\\ \oplus & ::= & + \mid - \mid \times \mid \div \\ \mathbb{E} \ni e & ::= & n \mid e \oplus e \mid x \mid {\lambda x.\,} e \mid e \; e \mid {\textbf{if}\,}e {\,\textbf{then}\,}e {\,\textbf{else}\,}e \end{array}\]


Perhaps the best place to start the comparison is in the definition of what I’ll call values. All three semantics give an inductive definition of values and all three involve finite sets, but in different ways. I’ll write \(\mathbb{V}_S\) for my definition, \(\mathbb{V}_P\) for Plotkin’s, and \(\mathbb{V}_E\) for Engeler’s. \[\begin{aligned} \mathbb{V}_S &= \mathbb{Z} + \mathcal{P}_f(\mathbb{V}_S \times \mathbb{V}_S) \\ \mathbb{V}_P &= \mathbb{Z} + \mathcal{P}_f(\mathbb{V}_P) \times \mathcal{P}_f(\mathbb{V}_P) \\ \mathbb{V}_E &= \mathbb{Z} + \mathcal{P}_f(\mathbb{V}_E) \times \mathbb{V}_E\end{aligned}\] In \(\mathbb{V}_S\), a function is represented as a finite graph, that is, a finite set of input-output pairs. For example, the graph \(\{ (0,1), (1,2), (2,3) \}\) is one of the meanings for the term \((\lambda x.\, x + 1)\).

Plotkin’s values \(\mathbb{V}_P\) include only a single input-output pair from a function’s graph. For example, \((\{0\}, \{1\})\) is one of the meanings for the term \((\lambda x.\, x + 1)\). Engeler’s values also include just a single entry. For example, \((\{0\}, 1)\) is one of the meanings for the term \((\lambda x.\, x + 1)\). In this example we have not made use of the finite sets in the input and output of Plotkin’s values. To do so, let us consider a higher-order example, such as the term \((\lambda f.\, f\,1 + f\,2)\). For Plotkin, the following value is one of its meanings: \[(\{ (\{1\}, \{3\}), (\{2\}, \{4\}) \}, \{7\})\] That is, in case \(f\) is the function that adds \(2\) to its input, the result is \(7\). We see that the presence of finite sets in the input is needed to accommodate functions-as-input. The corresponding value in \(\mathbb{V}_S\) is \[\{ (\{ (1, 3), (2, 4) \}, 7) \}\]

The difference between Plotkin and Engeler’s values can be seen in functions that return functions. Consider the \(K\) combinator \((\lambda x.\,\lambda y.\, x)\). For Plotkin, the following value is one of its meanings: \[(\{1\}, \{ (\{0\},\{1\}), (\{2\},\{1\}) \})\] That is, when applied to \(1\) it returns a function that returns \(1\) when applied to either \(0\) or \(2\). The corresponding value in \(\mathbb{V}_S\) is \[\{ (1, \{ (0,1), (2,1) \}) \}\] For Engeler, there is not a single value corresponding to the above value. Instead it requires two values to represent the same information. \[(\{1\}, (\{0\},1)) \quad\text{and}\quad (\{1\}, (\{2\},1))\] We’ll see later that it doesn’t matter that Engeler requires more values to represent the same information.

The Domains

The semantics of Plotkin, Engeler, and myself do not use values as their domains, but rather sets of values. That is \[\mathcal{P}(\mathbb{V}_S) \qquad \mathcal{P}(\mathbb{V}_P) \qquad \mathcal{P}(\mathbb{V}_E)\]

The role of the outer \(\mathcal{P}\) is intimately tied to the meaning of functions in Plotkin and Engeler’s semantics because the values themselves only record a single input-output pair. The outer \(\mathcal{P}\) is needed to represent all of the input-output pairs for a given function. While the \(\mathcal{P}\) is also necessary for functions in my semantics, one can view it generically as providing non-determinism, and therefore as somewhat orthogonal to the meaning of functions per se. Next let’s take a look at the semantics.

Comparing the Semantics

Here is Plotkin’s semantics \(\mathcal{E}_P\). Let \(V,V'\) range over finite sets of values. \[\begin{aligned} {\mathcal{E}_P[\![ n ]\!]}\rho &= \{ n \} \\ {\mathcal{E}_P[\![ e_1 \oplus e_2 ]\!]}\rho &= \{ n_1 \oplus n_2 \mid n_1 \in {\mathcal{E}_P[\![ e_1 ]\!]}\rho \land n_2 \in {\mathcal{E}_P[\![ e_2 ]\!]}\rho \} \\ {\mathcal{E}_P[\![ x ]\!]}\rho &= \rho(x) \\ {\mathcal{E}_P[\![ {\lambda x.\,} e ]\!]}\rho &= \{ (V,V') \mid V' \subseteq {\mathcal{E}_P[\![ e ]\!]}\rho(x{:=}V) \} \\ {\mathcal{E}_P[\![ e_1\;e_2 ]\!]}\rho &= \bigcup \left\{ V' \, \middle| \begin{array}{l} \exists V.\, (V,V') {\in} {\mathcal{E}_P[\![ e_1 ]\!]}\rho \land V {\subseteq} {\mathcal{E}_P[\![ e_2 ]\!]}\rho \end{array} \right\} \\ {\mathcal{E}_P[\![ {\textbf{if}\,}e_1 {\,\textbf{then}\,}e_2 {\,\textbf{else}\,}e_3 ]\!]}\rho &= \left\{ v\, \middle|\, \begin{array}{l} \exists n.\, n \in {\mathcal{E}_P[\![ e_1 ]\!]}\rho \\ \land\, (n\neq 0 \implies v \in {\mathcal{E}_P[\![ e_2 ]\!]}\rho)\\ \land\, (n=0 \implies v \in {\mathcal{E}_P[\![ e_3 ]\!]}\rho) \end{array} \right\}\end{aligned}\] For Plotkin, the environment \(\rho\) maps variables to finite sets of values. In the case for application, the input set \(V\) must be a subset of the meaning of the argument, which is critical for enabling self application and, using the \(Y\) combinator, general recursion. The \(\bigcup\) flattens the set-of-finite-sets into a set.
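The application case is easy to sketch in Python, given finite fragments of the two meanings as ordinary sets. The fragment of the meaning of \((\lambda x.\, x + 1)\) below is hypothetical but consistent with the definition of \(\mathcal{E}_P\).

```python
def apply_P(fun_meaning, arg_meaning):
    """Plotkin-style application: collect V' from every entry
    (V, V') whose input set V is contained in the meaning of
    the argument, then flatten the results with a union."""
    result = set()
    for (V, Vp) in fun_meaning:
        if V <= arg_meaning:       # V ⊆ E_P[[e2]]ρ
            result |= Vp           # the outer union
    return result

# a finite fragment of the meaning of (λx. x + 1)
succ = {(frozenset({1}), frozenset({2})),
        (frozenset({2}), frozenset({3}))}

assert apply_P(succ, {1, 2}) == {2, 3}
assert apply_P(succ, {1}) == {2}
```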

Next we consider Engeler’s semantics \(\mathcal{E}_E\). \[\begin{aligned} {\mathcal{E}_E[\![ n ]\!]}\rho &= \{ n \} \\ {\mathcal{E}_E[\![ e_1 \oplus e_2 ]\!]}\rho &= \{ n_1 \oplus n_2 \mid n_1 \in {\mathcal{E}_E[\![ e_1 ]\!]}\rho \land n_2 \in {\mathcal{E}_E[\![ e_2 ]\!]}\rho \} \\ {\mathcal{E}_E[\![ x ]\!]}\rho &= \rho(x) \\ {\mathcal{E}_E[\![ {\lambda x.\,} e ]\!]}\rho &= \{ (V,v') \mid v' \in {\mathcal{E}_E[\![ e ]\!]}\rho(x{:=}V) \} \\ {\mathcal{E}_E[\![ e_1\;e_2 ]\!]}\rho &= \left\{ v' \, \middle| \begin{array}{l} \exists V.\, (V,v') {\in} {\mathcal{E}_E[\![ e_1 ]\!]}\rho \land V {\subseteq} {\mathcal{E}_E[\![ e_2 ]\!]}\rho \end{array} \right\} \\ {\mathcal{E}_E[\![ {\textbf{if}\,}e_1 {\,\textbf{then}\,}e_2 {\,\textbf{else}\,}e_3 ]\!]}\rho &= \left\{ v\, \middle|\, \begin{array}{l} \exists n.\, n \in {\mathcal{E}_E[\![ e_1 ]\!]}\rho \\ \land\, (n\neq 0 \implies v \in {\mathcal{E}_E[\![ e_2 ]\!]}\rho)\\ \land\, (n=0 \implies v \in {\mathcal{E}_E[\![ e_3 ]\!]}\rho) \end{array} \right\}\end{aligned}\] The semantics is quite similar to Plotkin’s, as again we see the use of \(\subseteq\) in the case for application. Because the output \(v'\) is just a value, and not a finite set of values as for Plotkin, there is no need for the \(\bigcup\).

Finally we review my semantics \(\mathcal{E}_S\). For it we need to define an ordering on values that is just equality for integers and \(\subseteq\) on function graphs. Let \(t\) range over \(\mathcal{P}_{f}(\mathbb{V} \times \mathbb{V})\). \[\frac{}{n \sqsubseteq n} \qquad \frac{t_1 \subseteq t_2}{t_1 \sqsubseteq t_2}\] Then we define \(\mathcal{E}_S\) as follows. \[\begin{aligned} {\mathcal{E}_S[\![ n ]\!]}\rho &= \{ n \} \\ {\mathcal{E}_S[\![ e_1 \oplus e_2 ]\!]}\rho &= \{ n_1 \oplus n_2 \mid n_1 \in {\mathcal{E}_S[\![ e_1 ]\!]}\rho \land n_2 \in {\mathcal{E}_S[\![ e_2 ]\!]}\rho \} \\ {\mathcal{E}_S[\![ x ]\!]}\rho &= \{ v \mid v \sqsubseteq \rho(x) \} \\ {\mathcal{E}_S[\![ {\lambda x.\,} e ]\!]}\rho &= \{ t \mid \forall (v,v')\in t.\, v' \in {\mathcal{E}_S[\![ e ]\!]}\rho(x{:=}v) \} \\ {\mathcal{E}_S[\![ e_1\;e_2 ]\!]}\rho &= \left\{ v \, \middle| \begin{array}{l} \exists t\, v_2\, v_3\, v_3'.\, t {\in} {\mathcal{E}_S[\![ e_1 ]\!]}\rho \land v_2 {\in} {\mathcal{E}_S[\![ e_2 ]\!]}\rho \\ \land\, (v_3, v_3') \in t \land v_3 \sqsubseteq v_2 \land v \sqsubseteq v_3' \end{array} \right\} \\ {\mathcal{E}_S[\![ {\textbf{if}\,}e_1 {\,\textbf{then}\,}e_2 {\,\textbf{else}\,}e_3 ]\!]}\rho &= \left\{ v\, \middle|\, \begin{array}{l} \exists n.\, n \in {\mathcal{E}_S[\![ e_1 ]\!]}\rho \\ \land\, (n\neq 0 \implies v \in {\mathcal{E}_S[\![ e_2 ]\!]}\rho)\\ \land\, (n=0 \implies v \in {\mathcal{E}_S[\![ e_3 ]\!]}\rho) \end{array} \right\}\end{aligned}\] In my semantics, \(\rho\) maps a variable to a single value. The \(v_3 \sqsubseteq v_2\) in my semantics corresponds to the uses of \(\subseteq\) in Plotkin and Engeler’s. One can view this as a kind of subsumption, allowing the use of a larger approximation of a function in places where a smaller approximation is needed. I’m not sure whether all the other uses of \(\sqsubseteq\) are necessary, but the semantics needs to be downward closed, and the above placement of \(\sqsubseteq\)’s makes this easy to prove.
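To make the ordering concrete, here is a small Python sketch (my own encoding, not from any of the papers): a number is an `int` and a function graph is a `frozenset` of input-output pairs.

```python
def sqsubseteq(v1, v2):
    """v1 ⊑ v2: equality on numbers, graph inclusion on finite function graphs."""
    if isinstance(v1, int) and isinstance(v2, int):
        return v1 == v2
    if isinstance(v1, frozenset) and isinstance(v2, frozenset):
        return v1 <= v2  # t1 ⊆ t2
    return False
```

For example, `sqsubseteq(frozenset({(1, 2)}), frozenset({(1, 2), (3, 4)}))` holds: a smaller graph is an approximation of a larger one.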

Relational Semantics

For people like myself with a background in operational semantics, there is another view of the semantics that is helpful to look at. We can turn the above denotational semantics into a relational semantics (like a big-step semantics) that hides the \(\mathcal{P}\) by making use of the following isomorphism (where \(\mathbb{V}\) is one of \(\mathbb{V}_S\), \(\mathbb{V}_P\), or \(\mathbb{V}_E\)). \[\mathbb{E} \to (\mathbb{X}\rightharpoonup \mathbb{V}) \to {\mathcal{P}(\mathbb{V})} \quad\cong\quad \mathbb{E} \times (\mathbb{X}\rightharpoonup \mathbb{V}) \times \mathbb{V}\] Let \(v\) range over \(\mathbb{V}\). We can define the semantic relation \(\rho \vdash_S e \Rightarrow v\) that corresponds to \(\mathcal{E}_S\) as follows. Note that in the rule for lambda abstraction, the table \(t\) comes out of thin air (it is existentially quantified), and that there is one premise in the rule per entry in the table, that is, we have the quantification \(\forall(v,v') \in t\). \[\begin{gathered} \frac{}{\rho \vdash_S n \Rightarrow n} \quad \frac {\rho \vdash_S e_1 \Rightarrow n_1 \quad \rho \vdash_S e_2 \Rightarrow n_2} {\rho \vdash_S e_1 \oplus e_2 \Rightarrow n_1 \oplus n_2} \quad \frac {v \sqsubseteq \rho(x)} {\rho \vdash_S x \Rightarrow v} \\[3ex] \frac{\forall (v,v'){\in} t.\; \rho(x{:=}v) \vdash_S e \Rightarrow v'} {\rho \vdash_S {\lambda x.\,}e \Rightarrow t} \quad \frac{\begin{array}{c}\rho \vdash_S e_1 \Rightarrow t \quad \rho \vdash_S e_2 \Rightarrow v_2 \\ (v_3,v'_3) \in t \quad v_3 \sqsubseteq v_2 \quad v \sqsubseteq v'_3 \end{array} } {\rho \vdash_S (e_1{\;}e_2) \Rightarrow v} \\[3ex] \frac{\rho \vdash_S e_1 \Rightarrow n \quad n \neq 0 \quad \rho \vdash_S e_2 \Rightarrow v} {\rho \vdash_S {\textbf{if}\,}e_1 {\,\textbf{then}\,}e_2 {\,\textbf{else}\,}e_3 \Rightarrow v} \quad \frac{\rho \vdash_S e_1 \Rightarrow 0 \quad \rho \vdash_S e_3 \Rightarrow v} {\rho \vdash_S {\textbf{if}\,}e_1 {\,\textbf{then}\,}e_2 {\,\textbf{else}\,}e_3 \Rightarrow v}\end{gathered}\]
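Because the lambda rule checks its table entry by entry, membership in the relation is directly decidable for the application-free fragment (application would require guessing the witness table \(t\)). The following Python sketch is my own encoding, not from the paper — expressions as tagged tuples, function graphs as frozensets of pairs — and is only an illustration of the rules above.

```python
def sub(v1, v2):
    # v1 ⊑ v2: graph inclusion on tables, equality on numbers
    if isinstance(v1, frozenset) and isinstance(v2, frozenset):
        return v1 <= v2
    return v1 == v2

def nums(e, rho):
    # The numeric values in the denotation of e, for this fragment
    if e[0] == 'num':
        return {e[1]}
    if e[0] == 'var':
        return {rho[e[1]]} if isinstance(rho[e[1]], int) else set()
    if e[0] == 'add':
        return {a + b for a in nums(e[1], rho) for b in nums(e[2], rho)}
    return set()

def in_denot(e, rho, v):
    """Check ρ ⊢ e ⇒ v for numbers, addition, variables, and lambdas."""
    tag = e[0]
    if tag == 'num':
        return v == e[1]
    if tag == 'var':
        return sub(v, rho[e[1]])          # v ⊑ ρ(x)
    if tag == 'add':
        return v in nums(e, rho)
    if tag == 'lam':                      # one premise per entry in the table
        x, body = e[1], e[2]
        return isinstance(v, frozenset) and all(
            in_denot(body, {**rho, x: a}, b) for (a, b) in v)
    raise ValueError(tag)

# λx. x + 1, with a hypothetical named-variable encoding for readability
inc = ('lam', 'x', ('add', ('var', 'x'), ('num', 1)))
```

Here `in_denot(inc, {}, frozenset({(1, 2), (5, 6)}))` holds, while the bad table `frozenset({(1, 3)})` is rejected.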

For comparison, let us also turn Plotkin’s semantics into a relation. \[\begin{gathered} \frac{}{\rho \vdash_P n \Rightarrow n} \quad \frac {\rho \vdash_P e_1 \Rightarrow n_1 \quad \rho \vdash_P e_2 \Rightarrow n_2} {\rho \vdash_P e_1 \oplus e_2 \Rightarrow n_1 \oplus n_2} \quad \frac {v \in \rho(x)} {\rho \vdash_P x \Rightarrow v} \\[3ex] \frac{\forall v' \in V'.\, \rho(x{:=}V) \vdash_P e \Rightarrow v'} {\rho \vdash_P {\lambda x.\,}e \Rightarrow (V,V')} \quad \frac{\begin{array}{c}\rho \vdash_P e_1 \Rightarrow (V,V') \quad \forall v_2 \in V.\, \rho \vdash_P e_2 \Rightarrow v_2 \\ v' \in V' \end{array} } {\rho \vdash_P (e_1{\;}e_2) \Rightarrow v'} \\[3ex] \frac{\rho \vdash_P e_1 \Rightarrow n \quad n \neq 0 \quad \rho \vdash_P e_2 \Rightarrow v} {\rho \vdash_P {\textbf{if}\,}e_1 {\,\textbf{then}\,}e_2 {\,\textbf{else}\,}e_3 \Rightarrow v} \quad \frac{\rho \vdash_P e_1 \Rightarrow 0 \quad \rho \vdash_P e_3 \Rightarrow v} {\rho \vdash_P {\textbf{if}\,}e_1 {\,\textbf{then}\,}e_2 {\,\textbf{else}\,}e_3 \Rightarrow v}\end{gathered}\] Recall that in Plotkin’s semantics, the environment maps variables to finite sets of values. The “set” is needed to handle the case of a function bound to a variable, but is just extra baggage when we have an integer bound to a variable. So in the variable rule we have \(v \in \rho(x)\), which either extracts a singleton integer from \(\rho(x)\), or extracts one input-output entry from a function’s graph. Moving on to the lambda rule, it only produces one input-output entry, but to handle the case when the output \(V'\) is representing a function, we must build it up one entry at a time with the quantification \(\forall v'\in V'\) and a finite but arbitrary number of premises. In the application rule we again have a finite number of premises, with \(\forall v_2\in V\), and also the premise \(v' \in V'\).

The relational version of Engeler’s semantics removes the need for quantification in the lambda rule, but the application rule still has \(\forall v_2 \in V\). \[\begin{gathered} \frac{\rho(x{:=}V) \vdash_E e \Rightarrow v'} {\rho \vdash_E {\lambda x.\,}e \Rightarrow (V,v')} \quad \frac{\begin{array}{c}\rho \vdash_E e_1 \Rightarrow (V,v') \quad \forall v_2 \in V.\, \rho \vdash_E e_2 \Rightarrow v_2 \end{array} } {\rho \vdash_E (e_1{\;}e_2) \Rightarrow v'}\end{gathered}\]


My semantics is similar to Plotkin and Engeler’s in that

  1. The domain is a set of values, and values are inductively defined and involve finite sets.

  2. Self application is enabled by allowing a kind of subsumption on functions.

The really nice thing about all three semantics is that they are simple; very little mathematics is necessary to understand them, which is important pedagogically, practically (easier for practitioners to apply such semantics), and aesthetically (Occam’s razor!).

My semantics differs from Plotkin and Engeler’s in that

  1. the definition of values places \(\mathcal{P}_f\) so that functions are literally represented by finite graphs, and

  2. environments map each variable to a single value, and

  3. \(\sqsubseteq\) is used instead of \(\subseteq\) to enable self application.

The upshot of these (relatively minor) differences is that my semantics may be easier to understand.

Thursday, July 13, 2017

POPL submission, pulling together these blog posts on semantics!

Last week I submitted a paper to POPL 2018 about the new kind of denotational semantics that I've been writing about in this blog, which I am now calling declarative semantics. I think this approach to semantics has the potential to replace operational semantics for the purposes of language specification. The declarative semantics has the advantage of being compositional and extensional while, like operational semantics, using only elementary mathematics. Thus, the declarative semantics should be better than operational semantics for reasoning about programs and for reasoning about the language as a whole (i.e. its meta-theory). The paper pulls together many of the blog posts, updates them, and adds a semantics for mutable references. The paper is available now on arXiv and the Isabelle mechanization is available here. I hope you enjoy it and I welcome your feedback!

Wednesday, June 07, 2017

Revisiting "well-typed programs cannot go wrong"

Robin Milner proved that well-typed programs cannot go wrong in his 1978 paper A Theory of Type Polymorphism in Programming (Milner 1978). That is, he defined a type system and denotational semantics for the Exp language (a subset of ML) and then proved that the denotation of a well-typed program in Exp is not the “wrong” value. The “wrong” denotation signifies that a runtime type error occurred, so Milner’s theorem proves that the type system is strong enough to prevent all the runtime type errors that could occur in an Exp program. The denotational semantics used by Milner (1978) was based on the standard domain theory for an explicitly typed language with higher-order functions.

I have been exploring, over the last month, whether I can prove a similar theorem but using my new denotational semantics, and mechanize the proof in the Isabelle proof assistant. At first I tried to stay as close to Milner’s proof as possible, but in the process I learned that Milner’s proof is rather syntactic and largely consists of proving lemmas about how substitution interacts with the type system, which does not shed much light on the semantics of polymorphism.

Last week I decided to take a step back and try a more semantic approach and switch to a cleaner but more expressive setting, one with first-class polymorphism. So I wrote down a denotational semantics for System F (Reynolds 1974) extended with support for general recursion. The proof that well-typed programs cannot go wrong came together rather quickly. Today I finished the mechanization in Isabelle and it came in at just 539 lines for all the definitions, lemmas, and main proof. I’m excited to share the details of how it went! Spoiler: the heart of the proof turned out to be a lemma I call Compositionality because it looks a lot like the similarly-named lemma that shows up in proofs of parametricity.


The types in the language include natural numbers, function types, universal types, and type variables. Regarding the variables, after some experimentation with names and locally nameless, I settled on good old DeBruijn indices to represent both free and bound type variables. \[\begin{array}{rcl} i,j & \in & \mathbb{N} \\ \sigma,\tau & ::= & \mathtt{nat} \mid \tau \to \tau \mid \forall\,\tau \mid i \end{array}\] So the type of the polymorphic identity function, normally written \(\forall \alpha.\, \alpha \to \alpha\), is instead written \(\forall \left(0 \to 0\right)\).
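To make the DeBruijn encoding concrete, here is a tiny Python sketch (my own representation, not the one in the Isabelle mechanization): a type is `('nat',)`, a bare int index, `('arrow', s, t)`, or `('forall', t)`.

```python
def show(t):
    """Pretty-print a DeBruijn-indexed type."""
    if t == ('nat',):
        return 'nat'
    if isinstance(t, int):
        return str(t)           # a type variable, counted outward to its binder
    if t[0] == 'arrow':
        return f'({show(t[1])} -> {show(t[2])})'
    if t[0] == 'forall':
        return f'(forall {show(t[1])})'
    raise ValueError(t)

# ∀α. α→α is written ∀(0→0):
poly_id_type = ('forall', ('arrow', 0, 0))
```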

The syntax of expressions is as follows. I chose to use DeBruijn indices for term variables as well, and left off all type annotations, but I don’t think that matters for our purposes here. \[\begin{array}{rcl} n & \in & \mathbb{N} \\ e & ::= & n \mid i \mid \lambda e \mid e\; e \mid \Lambda e \mid e [\,] \mid \mathtt{fix}\, e \end{array}\]

Denotational Semantics

The values in this language, described by the below grammar, include natural numbers, functions represented by finite lookup tables, type abstractions, and \(\mathsf{wrong}\) to represent a runtime type error. \[\begin{array}{rcl} f & ::= & \{ (v_1,v'_1), \ldots, (v_n,v'_n) \} \\ o & ::= & \mathsf{none} \mid \mathsf{some}(v) \\ v & ::= & n \mid \mathsf{fun}(f) \mid \mathsf{abs}(o) \mid \mathsf{wrong} \end{array}\] A type abstraction \(\mathsf{abs}(o)\) consists of an optional value, and not simply a value, because the body of a type abstraction might be a non-terminating computation.

We define the following information ordering on values so that we can reason about one lookup table being more or less-defined than another lookup table. We define \(v \sqsubseteq v'\) inductively as follows.

\[\begin{gathered} n \sqsubseteq n \quad \frac{f_1 \subseteq f_2} {\mathsf{fun}(f_1) \sqsubseteq \mathsf{fun}(f_2)} \quad \mathsf{wrong} \sqsubseteq\mathsf{wrong} \\ \mathsf{abs}(\mathsf{none}) \sqsubseteq\mathsf{abs}(\mathsf{none}) \quad \frac{v \sqsubseteq v'} {\mathsf{abs}(\mathsf{some}(v)) \sqsubseteq\mathsf{abs}(\mathsf{some}(v'))}\end{gathered}\]
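This ordering can be transcribed directly. The following Python sketch uses my own hypothetical encoding: \(n\) as an int, \(\mathsf{fun}(f)\) as `('fun', frozenset)`, \(\mathsf{abs}(o)\) as `('abs', None)` or `('abs', v)`, and \(\mathsf{wrong}\) as the string `'wrong'`.

```python
def below(v1, v2):
    """v1 ⊑ v2 for the System F value grammar."""
    if v1 == 'wrong' or isinstance(v1, int):
        return v1 == v2                       # n ⊑ n and wrong ⊑ wrong
    tag1, x1 = v1
    tag2, x2 = v2 if isinstance(v2, tuple) else (None, None)
    if tag1 == 'fun' and tag2 == 'fun':
        return x1 <= x2                       # f1 ⊆ f2
    if tag1 == 'abs' and tag2 == 'abs':
        if x1 is None or x2 is None:
            return x1 is None and x2 is None  # abs(none) ⊑ abs(none)
        return below(x1, x2)                  # abs(some v) ⊑ abs(some v')
    return False
```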

The denotational semantics maps an expression to a set of values. Why a set and not just a single value? A single finite lookup table is not enough to capture the meaning of a lambda, but an infinite set of finite tables is. However, dealing with sets is somewhat inconvenient, so we mitigate this issue by working in a set monad. Also, to deal with \(\mathsf{wrong}\) we need an error monad, so we use a combined set-and-error monad.

\[\begin{aligned} X := E_1 ; E_2 &\equiv \{ v \mid \exists v'. \, v' \in E_1, v' \neq \mathsf{wrong}, v \in E_2[v'/X] \} \\ & \quad \cup \{ v \mid v = \mathsf{wrong}, \mathsf{wrong} \in E_1 \} \\ \mathsf{return}(E) & \equiv \{ v \mid v \sqsubseteq E \} \\ X \leftarrow E_1; E_2 & \equiv \{ v \mid \exists v'.\, v' \in E_1, v \in E_2[v'/X]\}\end{aligned}\]

The use of \(\sqsubseteq\) in \(\mathsf{return}\) is to help ensure that the meaning of an expression is downward-closed with respect to \(\sqsubseteq\). (The need for which is explained in prior blog posts.)
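As a sanity check, the three monad operations can be sketched as ordinary set operations in Python (my encoding, not from the paper: `'wrong'` for \(\mathsf{wrong}\), a frozenset of pairs for a function graph; the bound variable of each bind becomes a function argument).

```python
from itertools import combinations

WRONG = 'wrong'

def seq(E1, k):
    """X := E1; k(X) — the error-monad bind: wrong short-circuits k."""
    out = {v for x in E1 if x != WRONG for v in k(x)}
    if WRONG in E1:
        out.add(WRONG)
    return out

def ret(v):
    """return(v): the downward closure of v under ⊑ (all subgraphs of a table)."""
    if isinstance(v, frozenset):
        return {frozenset(c) for n in range(len(v) + 1)
                for c in combinations(v, n)}
    return {v}

def pick(E1, k):
    """X ← E1; k(X) — the plain set-monad bind."""
    return {v for x in E1 for v in k(x)}
```

Note how `ret` on a table yields every smaller table, which is what keeps denotations downward closed.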

Our semantics will make use of a runtime environment \(\rho\) that includes two parts, \(\rho_1\) and \(\rho_2\). The first part gives meaning to the term variables, for which we use a list of values (indexed by their DeBruijn number). The second part, for the type variables, is a list containing sets of values, as the meaning of a type will be a set of values. We define the following notation for dealing with runtime environments.

\[\begin{aligned} v{::}\rho \equiv (v{::}\rho_1, \rho_2) \\ V{::}\rho \equiv (\rho_1, V{::}\rho_2)\end{aligned}\]

We write \(\rho[i]\) to mean either \(\rho_1[i]\) or \(\rho_2[i]\), which can be disambiguated based on the context of use.

To help define the meaning of \(\mathtt{fix}\,e\), we inductively define a predicate named \(\mathsf{iterate}\). Its first parameter is the meaning \(L\) of the expression \(e\), which is a mapping from an environment to a set of values. The second parameter is a runtime environment \(\rho\) and the third parameter is a value that is the result of iteration.

\[\begin{gathered} \mathsf{iterate}(L, \rho, \mathsf{fun}(\emptyset)) \quad \frac{\mathsf{iterate}(L, \rho, v) \quad v' \in L(v{::}\rho)} {\mathsf{iterate}(L, \rho, v')}\end{gathered}\]
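To see the iteration at work, here is a toy Python instance (entirely my own construction, not from the mechanization): rather than the full set of tables, `L_fact` returns one representative element of the meaning of a factorial body, namely the best table on inputs \(0..N\) computable from the current approximation `v` of the recursive binding. Each round of \(\mathsf{iterate}\), starting from \(\mathsf{fun}(\emptyset)\), then produces a better finite approximation of factorial's graph.

```python
def L_fact(v, N=5):
    # One element of the meaning of λn. if n=0 then 1 else n * f(n-1),
    # where v approximates the binding for f.
    t = dict(v)
    out = {}
    for n in range(N + 1):
        if n == 0:
            out[n] = 1
        elif n - 1 in t:            # only defined where the approximation helps
            out[n] = n * t[n - 1]
    return frozenset(out.items())

def iterate(depth, N=5):
    # Chain the iterate rule `depth` times, starting from fun(∅).
    v = frozenset()
    for _ in range(depth):
        v = L_fact(v, N)
    return v
```

After two rounds the table knows factorial on inputs 0 and 1; after seven rounds it covers all of \(0..5\).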

To help define the meaning of function application, we define the following \(\mathsf{apply}\) function. \[\mathsf{apply}(V_1,V_2) \equiv \begin{array}{l} x_1 := V_1; \\ x_2 := V_2; \\ \mathsf{case}\,x_1\,\textsf{of}\\ \;\; \mathsf{fun}(f) \Rightarrow (x'_2,x'_3) \leftarrow f; \mathsf{if}\, x'_2 \sqsubseteq x_2 \, \mathsf{then}\, x'_3 \,\mathsf{else}\, \emptyset \\ \mid \_ \Rightarrow \mathsf{return}(\mathsf{wrong}) \end{array}\]
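Inlining the binds, \(\mathsf{apply}\) becomes a pair of nested loops. This Python sketch uses the same hypothetical encoding as in the earlier sketches (`'wrong'` for errors, `('fun', graph)` for functions) and, for brevity, returns \(x'_3\) itself rather than its downward closure.

```python
WRONG = 'wrong'

def below(v1, v2):
    # v1 ⊑ v2: graph inclusion on functions, equality otherwise
    if isinstance(v1, tuple) and isinstance(v2, tuple) \
       and v1[0] == 'fun' and v2[0] == 'fun':
        return v1[1] <= v2[1]
    return v1 == v2

def apply(V1, V2):
    """apply(V1, V2): look up the argument in each table; non-functions go wrong."""
    out = set()
    if WRONG in V1:
        out.add(WRONG)                  # x1 := V1 propagates wrong
    for x1 in V1:
        if x1 == WRONG:
            continue
        if WRONG in V2:
            out.add(WRONG)              # x2 := V2 propagates wrong
        if isinstance(x1, tuple) and x1[0] == 'fun':
            for x2 in V2:
                if x2 != WRONG:
                    for (a, b) in x1[1]:    # (x'2, x'3) ← f
                        if below(a, x2):    # if x'2 ⊑ x2 then x'3
                            out.add(b)
        else:
            out.add(WRONG)              # case _ ⇒ return(wrong)
    return out
```

For instance, applying the table for an increment function to an argument outside its graph yields the empty set, while applying a number yields \(\mathsf{wrong}\).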

The denotational semantics is given by the following function \(E\) that maps an expression and environment to a set of values.

\[\begin{aligned} E[ n ]\rho &= \mathsf{return}(n) \\[1ex] E[ i ]\rho &= \mathsf{return}(\rho[i]) \\[1ex] E[ \lambda e ]\rho &= \{ v \mid \exists f.\, v = \mathsf{fun}(f), \forall v_1 v'_2.\, (v_1,v'_2) \in f \Rightarrow \\ & \qquad\qquad \exists v_2.\, v_2 \in E[ e ] (v_1{::}\rho), v'_2 \sqsubseteq v_2\} \\[1ex] E[ e_1\; e_2 ] \rho &= \mathsf{apply}(E[ e_1 ]\rho, E[ e_2 ]\rho) \\[1ex] E[ \mathtt{fix}\,e ] \rho &= \{ v \mid \mathsf{iterate}(E[ e ], \rho, v) \} \\[1ex] E[ \Lambda e ] \rho &= \{ v \mid \exists v'.\, v = \mathsf{abs}(\mathsf{some}(v')), \forall V. v' \in E[ e ] (V{::}\rho) \} \\ & \quad\; \cup \{ v \mid v = \mathsf{abs}(\mathsf{none}), \forall V. E[ e ](V{::}\rho) = \emptyset \} \\[1ex] E[ e [\,] ] \rho &= \begin{array}{l} x := E [ e ] \rho;\\ \mathsf{case}\,x\,\mathsf{of} \\ \;\; \mathsf{abs}(\mathsf{none}) \Rightarrow \emptyset \\ \mid \mathsf{abs}(\mathsf{some}(v')) \Rightarrow \mathsf{return}(v') \\ \mid \_ \Rightarrow \mathsf{return}(\mathsf{wrong}) \end{array}\end{aligned}\]

We give meaning to types with the function \(T\), which maps a type and an environment to a set of values. For this purpose, we only need the second part of the runtime environment, which gives meaning to type variables. Instead of writing \(\rho_2\) everywhere, we’ll use the letter \(\eta\). It is important to ensure that \(T\) is downward closed, which requires some care either in the definition of \(T[ \forall \tau ]\eta\) or in the definition of \(T[ i ]\eta\). We have chosen to do this work in the definition of \(T[ i ]\eta\), and let the definition of \(T[ \forall \tau ]\eta\) quantify over any set of values \(V\) to give meaning to its bound type variable.

\[\begin{aligned} T[ \mathtt{nat} ] \eta &= \mathbb{N} \\ T[ i ] \eta &= \begin{cases} \{ v \mid \exists v'.\, v' \in \eta[i], v \sqsubseteq v',v \neq \mathsf{wrong} \} &\text{if } i < |\eta| \\ \emptyset & \text{otherwise} \end{cases} \\ T[ \sigma\to\tau ] \eta &= \{ v\mid \exists f. \,v=\mathsf{fun}(f), \forall v_1 v'_2.\, (v_1,v'_2) \in f, v_1 \in T[\sigma]\eta \\ & \hspace{1.5in} \Rightarrow \exists v_2.\, v_2 \in T[\tau]\eta, v'_2 \sqsubseteq v_2 \} \\ T[ \forall\tau ] \eta &= \{ v \mid \exists v'.\, v = \mathsf{abs}(\mathsf{some}(v')), \forall V.\, v' \in T[\tau ] (V{::}\eta) \} \cup \{ \mathsf{abs}(\mathsf{none}) \} \end{aligned}\]

Type System

Regarding the type system, it is standard except perhaps how we deal with the DeBruijn representation of type variables. We begin with the definition of well-formed types. A type is well formed if all the type variables in it are properly scoped, which is captured by their indices being below a given threshold (the number of enclosing type variable binders, that is, \(\Lambda\)’s and \(\forall\)’s). More formally, we write \(j \vdash \tau\) to say that type \(\tau\) is well-formed under threshold \(j\), and give the following inductive definition.

\[\begin{gathered} j \vdash \mathtt{nat} \quad \frac{j \vdash \sigma \quad j \vdash \tau}{j \vdash \sigma \to \tau} \quad \frac{j+1 \vdash \tau }{j \vdash \forall \tau} \quad \frac{i < j}{j \vdash i}\end{gathered}\]
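The threshold discipline is easy to transcribe. This Python sketch reuses my hypothetical tuple encoding of types (`('nat',)`, a bare int index, `('arrow', s, t)`, `('forall', t)`).

```python
def wf(j, t):
    """j ⊢ τ: every type-variable index is below the threshold j."""
    if t == ('nat',):
        return True
    if isinstance(t, int):
        return t < j                 # i < j  ⟹  j ⊢ i
    if t[0] == 'arrow':
        return wf(j, t[1]) and wf(j, t[2])
    if t[0] == 'forall':
        return wf(j + 1, t[1])       # the binder raises the threshold
    raise ValueError(t)
```

So \(\forall(0\to 0)\) is closed (well-formed at threshold 0), while a bare index 0 is not.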

Our representation of the type environment is somewhat unusual. Because term variables are just DeBruijn indices, we can use a list of types (instead of a mapping from names to types). However, to keep track of the type-variable scoping, we also include with each type the threshold from its point of definition. Also, we need to keep track of the current threshold, so when we write \(\Gamma\), we mean a pair where \(\Gamma_1\) is a list and \(\Gamma_2\) is a number. The list consists of pairs of types and numbers, so for example, \(\Gamma_1[i]_1\) is a type and \(\Gamma_1[i]_2\) is a number whenever \(i\) is less than the length of \(\Gamma_1\). We use the following notation for extending the type environment:

\[\begin{aligned} \tau :: \Gamma &\equiv ((\tau,\Gamma_2){::}\Gamma_1, \Gamma_2) \\ * :: \Gamma & \equiv (\Gamma_1, \Gamma_2 + 1)\end{aligned}\]

We write \(\vdash \rho : \Gamma\) to say that environment \(\rho\) is well-typed according to \(\Gamma\) and define it inductively as follows.

\[\begin{gathered} \vdash ([],[]) : ([], 0) \quad \frac{\vdash \rho : \Gamma \quad v \in T[ \tau ] \rho_2} {\vdash v{::}\rho : \tau{::}\Gamma} \quad \frac{\vdash \rho : \Gamma} {\vdash V{::}\rho : *{::}\Gamma}\end{gathered}\]

The primary operation that we perform on a type environment is looking up the type associated with a term variable, for which we define the following function \(\mathsf{lookup}\) that maps a type environment and DeBruijn index to a type. To make sure that the resulting type is well-formed in the current environment, we must increase all of its free type variables by the difference of the current threshold \(\Gamma_2\) and the threshold at its point of definition, \(\Gamma_1[i]_2\), which is accomplished by the shift operator \(\uparrow^k_c(\tau)\) (Pierce 2002). \[\mathsf{lookup}(\Gamma,i) \equiv \begin{cases} \mathsf{some}(\uparrow^{k}_{0}(\Gamma_1[i]_1)) & \text{if } i < |\Gamma_1| \\ & \text{where } k = \Gamma_2 - \Gamma_1[i]_2 \\ \mathsf{none} & \text{otherwise} \end{cases}\]

To review, the shift operator is defined as follows.

\[\begin{aligned} \uparrow^{k}_{c}(\mathtt{nat}) &= \mathtt{nat} \\ \uparrow^{k}_{c}(i) &= \begin{cases} i + k & \text{if } c \leq i \\ i & \text{otherwise} \end{cases} \\ \uparrow^{k}_{c}(\sigma \to \tau) &= \uparrow^{k}_{c}(\sigma) \to \uparrow^{k}_{c}(\tau) \\ \uparrow^{k}_{c}(\forall \tau) &= \forall\, \uparrow^{k}_{c+1}(\tau)\end{aligned}\]

Last but not least, we need to define type substitution so that we can use it in the typing rule for instantiation (type application). We write \([j\mapsto \tau]\sigma\) for the substitution of type \(\tau\) for DeBruijn index \(j\) within type \(\sigma\) (Pierce 2002).

\[\begin{aligned} [j\mapsto \tau]\mathtt{nat} &= \mathtt{nat} \\ [j\mapsto\tau]i &= \begin{cases} \tau & \text{if } j = i \\ i - 1 & \text{if } j < i \\ i & \text{otherwise} \end{cases}\\ [j\mapsto\tau](\sigma\to\sigma') &= [j\mapsto\tau]\sigma \to [j\mapsto \tau]\sigma' \\ [j\mapsto \tau]\forall\sigma &= \forall\, [j+1 \mapsto \uparrow^{1}_{0}(\tau)]\sigma\end{aligned}\]
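The shift and substitution operators can be transcribed together. As before, this Python sketch assumes my tuple encoding of types and is only an illustration of the two definitions above.

```python
def shift(k, c, t):
    """↑ᵏ_c(τ): add k to every index at or above the cutoff c."""
    if t == ('nat',):
        return t
    if isinstance(t, int):
        return t + k if c <= t else t
    if t[0] == 'arrow':
        return ('arrow', shift(k, c, t[1]), shift(k, c, t[2]))
    if t[0] == 'forall':
        return ('forall', shift(k, c + 1, t[1]))   # cutoff grows under a binder
    raise ValueError(t)

def subst(j, s, t):
    """[j ↦ σ]τ: replace index j by s, decrementing the indices above j."""
    if t == ('nat',):
        return t
    if isinstance(t, int):
        if t == j:
            return s
        return t - 1 if j < t else t
    if t[0] == 'arrow':
        return ('arrow', subst(j, s, t[1]), subst(j, s, t[2]))
    if t[0] == 'forall':
        # under a binder: bump the target index and shift the payload
        return ('forall', subst(j + 1, shift(1, 0, s), t[1]))
    raise ValueError(t)
```

For instance, instantiating \(\forall(0\to 0)\) at \(\mathtt{nat}\) substitutes into the body: \([0\mapsto\mathtt{nat}](0\to 0) = \mathtt{nat}\to\mathtt{nat}\).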

Here is the type system for System F extended with \(\mathtt{fix}\).

\[\begin{gathered} \Gamma \vdash n : \mathtt{nat} \qquad \frac{\mathsf{lookup}(\Gamma,i) = \mathsf{some}(\tau)} {\Gamma \vdash i : \tau} \\[2ex] \frac{\Gamma_2 \vdash \sigma \quad \sigma{::}\Gamma \vdash e : \tau} {\Gamma \vdash \lambda e : \sigma \to \tau} \qquad \frac{\Gamma \vdash e : \sigma \to \tau \quad \Gamma \vdash e' : \sigma} {\Gamma \vdash e \; e' : \tau} \\[2ex] \frac{\Gamma_2 \vdash \sigma \to \tau \quad (\sigma\to \tau){::}\Gamma \vdash e : \sigma \to \tau } {\Gamma \vdash \mathtt{fix}\,e : \sigma \to \tau} \\[2ex] \frac{*::\Gamma \vdash e : \tau} {\Gamma \vdash \Lambda e : \forall\tau} \qquad \frac{\Gamma \vdash e : \forall \tau} {\Gamma \vdash e[\,] : [0\mapsto\sigma]\tau}\end{gathered}\]

We say that a type environment \(\Gamma\) is well-formed if \(\Gamma_2\) is greater or equal to every threshold in \(\Gamma_1\), that is \(\Gamma_1[i]_2 \leq \Gamma_2\) for all \(i < |\Gamma_1|\).

Proof of well-typed programs cannot go wrong

The proof required 6 little lemmas and 4 big lemmas. (There were some itsy bitsy lemmas too that I’m not counting.)

Little Lemmas

Lemma [\(\sqsubseteq\) is a preorder]  

  • \(v \sqsubseteq v\)

  • If \(v_1 \sqsubseteq v_2\) and \(v_2 \sqsubseteq v_3\), then \(v_1 \sqsubseteq v_3\).

[lem:less-refl] [lem:less-trans]

I proved transitivity by induction on \(v_2\).

Lemma [\(T\) is downward closed] If \(v \in T [ \tau ] \eta\) and \(v' \sqsubseteq v\), then \(v' \in T [ \tau ] \eta\). [lem:T-down-closed]

The above is a straightforward induction on \(\tau\).

Lemma [\(\mathsf{wrong}\) not in \(T\)] For any \(\tau\) and \(\eta\), \(\mathsf{wrong} \notin T [ \tau ] \eta\). [lem:wrong-not-in-T]

The above is another straightforward induction on \(\tau\).

Lemma If \(\vdash \rho : \Gamma\), then \(\Gamma\) is a well-formed type environment. [lem:wfenv-good-ctx]

The above is proved by induction on the derivation of \(\vdash \rho : \Gamma\).

Lemma \[T [ \tau ] (\eta_1 \eta_3) = T [ \uparrow^{|\eta_2|}_{ |\eta_1|}(\tau) ] (\eta_1\eta_2\eta_3)\]

The above lemma is proved by induction on \(\tau\). It took me a little while to figure out the right strengthening of the statement of this lemma to get the induction to go through. The motivations for this lemma were the following corollaries.

Corollary [Lift/Append Preserves \(T\)] \[T [ \tau ](\eta_2) = T [ \uparrow^{|\eta_1|}_{0}(\tau) ] (\eta_1\eta_2)\] [lem:lift-append-preserves-T]

Corollary[Lift/Cons Preserves \(T\)] \[T [ \tau ] (\eta) = T [ \uparrow^{1}_{0}(\tau) ] (V{::}\eta)\] [lem:shift-cons-preserves-T]

Of course, two shifts can be composed into a single shift by adding the amounts.

Lemma [Compose Shift] \[\uparrow^{j+k}_{c}(\tau) = \uparrow^{j}_{c}( \uparrow^{k}_{c}(\tau))\] [lem:compose-shift]

The proof is a straightforward induction on \(\tau\).

Big Lemmas

There are one or two big lemmas for each of the “features” in this variant of System F.

The first lemma shows that well-typed occurrences of term variables cannot go wrong.

Lemma [Lookup in Well-typed Environment] 
If \(\vdash \rho : \Gamma\) and \(\mathsf{lookup}(\Gamma,i) = \mathsf{some}(\tau)\), then \(\exists v.\, \rho_1[i] = v\) and \(v \in T [ \tau ] \rho_2\). [lem:lookup-wfenv]

The proof is by induction on the derivation of \(\vdash \rho : \Gamma\). The first two cases were straightforward but the third case required some work and used lemmas [lem:wfenv-good-ctx], [lem:shift-cons-preserves-T], and [lem:compose-shift].

Lemma [Application cannot go wrong] If \(V \subseteq T [ \sigma \to \tau ] \eta\) and \(V' \subseteq T [ \sigma ] \eta\), then \(\mathsf{apply}(V,V') \subseteq T [ \tau ] \eta\). [lem:fun-app]

The proof of this lemma is direct and does not use induction. However, it does use lemmas [lem:wrong-not-in-T] and [lem:T-down-closed].

Lemma [Compositionality] Let \(V = T [ \sigma ] (\eta_1\eta_2)\). \[T [ \tau ] (\eta_1 V \eta_2) = T [ [\,|\eta_1| \mapsto \sigma\,]\tau ] (\eta_1 \eta_2)\] [lem:compositionality]

I proved the Compositionality lemma by induction on \(\tau\). All of the cases were straightforward except for \(\tau=\forall\tau'\). In that case I used the induction hypothesis to show that \[T [ \tau' ] (V \eta_1 S \eta_2) = T [ [\,|V\eta_1|\mapsto \uparrow^1_0(\sigma)\,]\tau' ] (V\eta_1\eta_2) \text{ where } S = T [ \uparrow^1_0(\sigma) ] (V\eta_1\eta_2)\] and I used Lemma [lem:shift-cons-preserves-T].

Lemma [Iterate cannot go wrong] If

  • \(\mathsf{iterate}(L,\rho,v)\) and

  • for any \(v'\), \(v' \in T[ \sigma\to\tau ] \rho_2\) implies \(L(v'{::}\rho) \subseteq T[ \sigma\to\tau ] \rho_2\),

then \(v \in T [ \sigma \to \tau ] \rho_2\). [lem:iterate-sound]

This was straightforward to prove by induction on the derivation of \(\mathsf{iterate}(L,\rho,v)\). The slightly difficult part was coming up with the definition of \(\mathsf{iterate}\) to begin with and formulating the second premise.

The Theorem

Theorem [Well-typed programs cannot go wrong] 
If \(\Gamma \vdash e : \tau\) and \(\vdash \rho : \Gamma\), then \(E [ e ] \rho \subseteq T[ \tau ] \rho_2\). [thm:welltyped-dont-go-wrong]

The proof is by induction on the derivation of \(\Gamma \vdash e : \tau\).

  • \(\Gamma \vdash n : \mathtt{nat}\)

    This case is immediate.

  • \(\frac{\mathsf{lookup}(\Gamma,i) = \mathsf{some}(\tau)} {\Gamma \vdash i : \tau}\)

    Lemma [lem:lookup-wfenv] tells us that \(\rho_1[i] = v\) and \(v \in T [ \tau ] \rho_2\) for some \(v\). We conclude by Lemma [lem:T-down-closed].

  • \(\frac{\Gamma_2 \vdash \sigma \quad \sigma{::}\Gamma \vdash e : \tau} {\Gamma \vdash \lambda e : \sigma \to \tau}\)

    After unraveling some definitions, for arbitrary \(f,v_1,v_2,v'_2\) we can assume \(v_1 \in T [ \sigma ] \rho_2\), \(v_2 \in E [ e ](v_1{::}\rho)\), and \(v'_2 \sqsubseteq v_2\). We need to prove that \(v_2 \in T [ \tau ] (v_1{::}\rho)_2\).

    We can show \(\vdash v_1{::}\rho : \sigma{::}\Gamma\) and therefore, by the induction hypothesis, \(E [ e ] (v_1{::}\rho) \subseteq T [ \tau ] (v_1{::}\rho)_2\). So we conclude that \(v_2 \in T [ \tau ] (v_1{::}\rho)_2\).

  • \(\frac{\Gamma \vdash e : \sigma \to \tau \quad \Gamma \vdash e' : \sigma} {\Gamma \vdash e \; e' : \tau}\)

    By the induction hypothesis, we have \(E [ e ] \rho \subseteq T [ \sigma\to\tau ] \rho_2\) and \(E [ e' ] \rho \subseteq T [ \sigma ] \rho_2\). We conclude by Lemma [lem:fun-app].

  • \(\frac{\Gamma_2 \vdash \sigma \to \tau \quad (\sigma\to \tau){::}\Gamma \vdash e : \sigma \to \tau } {\Gamma \vdash \mathtt{fix}\,e : \sigma \to \tau}\)

    For an arbitrary \(v\), we may assume \(\mathsf{iterate}(E[ e ], \rho, v)\) and need to show that \(v \in T [ \sigma\to\tau ]\rho_2\).

    In preparation to apply Lemma [lem:iterate-sound], we first prove that for any \(v'\), \(v' \in T[ \sigma\to\tau ] \rho_2\) implies \(E[ e](v'{::}\rho) \subseteq T[ \sigma\to\tau ] \rho_2\). Assume \(v'' \in E[ e](v'{::}\rho)\). We need to show that \(v'' \in T[ \sigma\to\tau ] \rho_2\). We have \(\vdash v'{::}\rho : (\sigma\to\tau){::}\Gamma\), so by the induction hypothesis \(E [ e ](v'{::}\rho) \subseteq T[ \sigma\to\tau ](v'{::}\rho)_2\). Because \((v'{::}\rho)_2 = \rho_2\), we conclude that \(v'' \in T[ \sigma\to\tau ] \rho_2\).

    We then apply Lemma [lem:iterate-sound] to conclude this case.

  • \(\frac{*::\Gamma \vdash e : \tau} {\Gamma \vdash \Lambda e : \forall\tau}\)

    After unraveling some definitions, for an arbitrary \(v'\) and \(V\) we may assume that \(v' \in E [ e ](V'{::}\rho)\) for every \(V'\). We need to show that \(v' \in T [ \tau ] (V{::}\rho_2)\). We have \(\vdash V{::}\rho : *{::}\Gamma\), so by the induction hypothesis \(E[ e ](V{::}\rho) \subseteq T [ \tau ] (V{::}\rho_2)\). Instantiating the assumption at \(V\) gives \(v' \in E [ e ](V{::}\rho)\), so we can conclude.

  • \(\frac{\Gamma \vdash e : \forall \tau} {\Gamma \vdash e[\,] : [0\mapsto\sigma]\tau}\)

    Fix a \(v' \in E [ e ] \rho\). We have three cases to consider.

    1. \(v'=\mathsf{abs}(\mathsf{none})\). This case is immediate.

    2. \(v'=\mathsf{abs}(\mathsf{some}(v''))\) for some \(v''\). By the induction hypothesis, \(v' \in T [ \forall\tau ]\rho_2\). So we have \(v'' \in T [ \tau ](V{::}\rho_2)\) where \(V=T[\sigma]\rho_2\). Then by Compositionality (Lemma [lem:compositionality]) we conclude that \(v'' \in T [ [0\mapsto \sigma]\tau]\rho_2\).

    3. \(v'\) is some other kind of value. This can’t happen because, by the induction hypothesis, \(v' \in T [ \forall\tau ]\rho_2\).


Milner, Robin. 1978. “A Theory of Type Polymorphism in Programming.” Journal of Computer and System Sciences 17 (3): 348–75.

Pierce, Benjamin C. 2002. Types and Programming Languages. MIT Press.

Reynolds, John C. 1974. “Towards a Theory of Type Structure.” In Programming Symposium: Proceedings, Colloque Sur La Programmation, 19:408–25. LNCS. Springer-Verlag.

Friday, March 24, 2017

Consolidation of the Denotational Semantics and an Application to Compiler Correctness

This is a two part post. The second part depends on the first.

Part 1. Consolidation of the Denotational Semantics

As a matter of expediency, I've been working with two different versions of the intersection type system upon which the denotational semantics is based, one version with subsumption and one without. I had used the one with subsumption to prove completeness with respect to the reduction semantics whereas I had used the one without subsumption to prove soundness (for both whole programs and parts of programs, that is, contextual equivalence). The two versions of the intersection type system are equivalent. However, it would be nice to simplify the story and just have one version. Also, while the correspondence to intersection types has been enormously helpful in working out the theory, it would be nice to have a presentation of the semantics that doesn't talk about them and instead talks about functions as tables.

Towards these goals, I went back to the proof of completeness with respect to the reduction semantics and swapped in the "take 3" semantics. While working on that I realized that the subsumption rule was almost admissible in the "take 3" semantics, just the variable and application equations needed more uses of \(\sqsubseteq\). With those changes in place, the proof of completeness went through without a hitch. So here's the updated definition of the denotational semantics of the untyped lambda calculus.

The definition of values remains the same as last time: \[ \begin{array}{lrcl} \text{function tables} & T & ::= & \{ v_1\mapsto v'_1,\ldots,v_n\mapsto v'_n \} \\ \text{values} & v & ::= & n \mid T \end{array} \] as does the \(\sqsubseteq\) operator. \begin{gather*} \frac{}{n \sqsubseteq n} \qquad \frac{T_1 \subseteq T_2}{T_1 \sqsubseteq T_2} \end{gather*} For the denotation function \(E\), we add uses of \(\sqsubseteq\) to the equations for variables (\(v \sqsubseteq \rho(x)\)) and function application (\(v_3 \sqsubseteq v_3'\)). (I've also added the conditional expression \(\mathbf{if}\,e_1\,e_2\,e_3\) and primitive operations on numbers \(f(e_1,e_2)\), where \(f\) ranges over binary functions on numbers.) \begin{align*} E[\!| n |\!](\rho) &= \{ n \} \\ E[\!| x |\!](\rho) &= \{ v \mid v \sqsubseteq \rho(x) \} \\ E[\!| \lambda x.\, e |\!](\rho) &= \left\{ T \middle| \begin{array}{l} \forall v_1 v_2'. \, v_1\mapsto v_2' \in T \Rightarrow\\ \exists v_2.\, v_2 \in E[\!| e |\!](\rho(x{:=}v_1)) \land v_2' \sqsubseteq v_2 \end{array} \right\} \\ E[\!| e_1\;e_2 |\!](\rho) &= \left\{ v_3 \middle| \begin{array}{l} \exists T v_2 v_2' v_3'.\, T {\in} E[\!| e_1 |\!](\rho) \land v_2 {\in} E[\!| e_2 |\!](\rho) \\ \land\, v'_2\mapsto v_3' \in T \land v'_2 \sqsubseteq v_2 \land v_3 \sqsubseteq v_3' \end{array} \right\} \\ E[\!| f(e_1, e_2) |\!](\rho) &= \{ f(n_1,n_2) \mid n_1 \in E[\!| e_1 |\!](\rho) \land n_2 \in E[\!| e_2 |\!](\rho) \} \\ E[\!| \mathbf{if}\,e_1\,e_2\,e_3 |\!](\rho) &= \left\{ v \, \middle| \begin{array}{l} \exists n.\, n \in E[\!| e_1 |\!](\rho) \,\land \\ ((n \neq 0 \land v \in E[\!| e_2 |\!](\rho)) \lor (n = 0 \land v \in E[\!| e_3 |\!](\rho))) \end{array} \right\} \end{align*}
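To make the value ordering concrete, here is a small executable sketch of \(\sqsubseteq\) in Python. The encoding is my own, not part of the Isabelle development: a value is either an `int` or a function table, represented as a `frozenset` of (input, output) pairs so that tables are hashable and can themselves appear as inputs or outputs of other tables.

```python
def table(*entries):
    """Build a function table from (input, output) pairs."""
    return frozenset(entries)

def below(v1, v2):
    """v1 ⊑ v2: numbers must be equal; tables are ordered by subset."""
    if isinstance(v1, int) and isinstance(v2, int):
        return v1 == v2
    if isinstance(v1, frozenset) and isinstance(v2, frozenset):
        return v1 <= v2  # frozenset subset test implements T1 ⊆ T2
    return False

# A table with fewer entries approximates one with more entries:
t1 = table((0, 0))
t2 = table((0, 0), (1, 1))
```

Here `below(t1, t2)` holds but `below(t2, t1)` does not: a smaller table carries less information about the function, which is exactly the sense in which the denotation of a \(\lambda\) is a set of finite approximations.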

Here are the highlights of the results for this definition.

Proposition (Admissibility of Subsumption)
If \(v \in E[\!| e |\!](\rho) \) and \(v' \sqsubseteq v\), then \(v' \in E[\!| e |\!](\rho) \).

Theorem (Reduction implies Denotational Equality)

  1. If \(e \longrightarrow e'\), then \(E[\!| e |\!] = E[\!| e' |\!]\).
  2. If \(e \longrightarrow^{*} e'\), then \(E[\!| e |\!] = E[\!| e' |\!]\).
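As a quick sanity check of item 1, here is a worked example of mine (not from the post's Isabelle development) for the reduction \((\lambda x.\, x)\,5 \longrightarrow 5\). By the application equation, \(v_3 \in E[\!| (\lambda x.\,x)\,5 |\!](\emptyset)\) requires some \(T \in E[\!| \lambda x.\,x |\!](\emptyset)\) containing an entry \(v_2' \mapsto v_3'\) with \(v_2' \sqsubseteq 5\) and \(v_3 \sqsubseteq v_3'\). Since \(E[\!| x |\!](x{:=}v_2') = \{ v \mid v \sqsubseteq v_2' \}\), any such entry satisfies \(v_3' \sqsubseteq v_2'\). Now \(v_2' \sqsubseteq 5\) forces \(v_2' = 5\), hence \(v_3' = 5\) and \(v_3 = 5\), so \[ E[\!| (\lambda x.\,x)\,5 |\!](\emptyset) = \{ 5 \} = E[\!| 5 |\!](\emptyset). \]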

Theorem (Whole-program Soundness and Completeness)

  1. If \(v' \in E[\!| e |\!](\emptyset)\), then \(e \longrightarrow^{*} v\) and \(v' \in E[\!| v |\!](\emptyset)\) for some value \(v\).
  2. If \(e \longrightarrow^{*} v\), then \(v' \in E[\!| e |\!](\emptyset) \) and \(v' \in E[\!| v |\!](\emptyset) \) for some \(v'\).

Proposition (Denotational Equality is a Congruence)
For any context \(C\), if \(E[\!| e |\!] = E[\!| e' |\!]\), then \(E[\!| C[e] |\!] = E[\!| C[e'] |\!]\).

Theorem (Soundness wrt. Contextual Equivalence)
If \(E[\!| e |\!] = E[\!| e' |\!]\), then \(e \simeq e'\).

Part 2. An Application to Compiler Correctness

Towards finding out how useful this denotational semantics is, I've begun looking at using it to prove compiler correctness. I'm not sure exactly which compiler I want to target yet, but as a first step, I wrote a simple source-to-source optimizer \(\mathcal{O}\) for the lambda calculus. It performs inlining, constant folding, and simplification of conditionals. The optimizer is parameterized over the inlining depth to ensure termination. We perform optimization on the body of a function after inlining, so this is a polyvariant optimizer. Here's the definition. \begin{align*} \mathcal{O}[\!| x |\!](k) &= x \\ \mathcal{O}[\!| n |\!](k) &= n \\ \mathcal{O}[\!| \lambda x.\, e |\!](k) &= \lambda x.\, \mathcal{O}[\!| e |\!](k) \\ \mathcal{O}[\!| e_1\,e_2 |\!](k) &= \begin{array}{l} \begin{cases} \mathcal{O}[\!| [x{:=}e_2'] e |\!] (k{-}1) & \text{if } k \geq 1 \text{ and } e_1' = \lambda x.\, e \\ & \text{and } e_2' \text{ is a value} \\ e_1' \, e_2' & \text{otherwise} \end{cases}\\ \text{where } e_1' = \mathcal{O}[\!|e_1 |\!](k) \text{ and } e_2' = \mathcal{O}[\!|e_2 |\!](k) \end{array} \\ \mathcal{O}[\!| f(e_1,e_2) |\!](k) &= \begin{array}{l} \begin{cases} f(n_1,n_2) & \text{if } e_1' = n_1 \text{ and } e_2' = n_2 \\ f(e_1',e_2') & \text{otherwise} \end{cases}\\ \text{where } e_1' = \mathcal{O}[\!|e_1 |\!](k) \text{ and } e_2' = \mathcal{O}[\!|e_2 |\!](k) \end{array} \\ \mathcal{O}[\!| \mathbf{if}\,e_1\,e_2\,e_3 |\!](k) &= \begin{array}{l} \begin{cases} e_2' & \text{if } e_1' = n \text{ and } n \neq 0 \\ e_3' & \text{if } e_1' = n \text{ and } n = 0 \\ \mathbf{if}\,e_1'\, e_2'\,e_3' & \text{otherwise} \end{cases}\\ \text{where } e_1' = \mathcal{O}[\!|e_1 |\!](k) \text{ and } e_2' = \mathcal{O}[\!|e_2 |\!](k)\\ \text{ and } e_3' = \mathcal{O}[\!|e_3 |\!](k) \end{array} \end{align*}

I've proved that this optimizer is correct. The first step was proving that it preserves denotational equality.

Lemma (Optimizer Preserves Denotations)
\(E[\!| \mathcal{O}[\!| e |\!](k) |\!] = E[\!| e |\!] \)
The proof is by induction on the termination metric for \(\mathcal{O}\), which is the lexicographic ordering of \(k\) then the size of \(e\). All the cases are straightforward to prove because Reduction implies Denotational Equality and because Denotational Equality is a Congruence. QED

Theorem (Correctness of the Optimizer)
\(\mathcal{O}[\!| e|\!](k) \simeq e\)
The proof is a direct result of the above Lemma and Soundness wrt. Contextual Equivalence. QED

Of course, all of this is proved in Isabelle. Here is the tarball. I was surprised that this proof of correctness for the optimizer was about the same length as the definition of the optimizer!