
Wednesday, December 21, 2016

Take 3: Application with Subsumption for Den. Semantics of Lambda Calculus

Alan Jeffrey tweeted the following in reply to the previous post:

@jeremysiek wouldn't it be easier to change the defn of application to be
⟦MN⟧σ = { W | T ∈ ⟦M⟧σ, V ∈ ⟦N⟧σ, (V′,W) ∈ T, V′ ⊆ V }?

The idea is that, for higher-order functions, if the function M expects to ask only the questions in the table V′, then it is OK to apply M to a table V that answers more questions than V′. This idea is quite natural: it is like Liskov's substitution principle, but for functions instead of objects. If this change can help us with the self-application problem, then it will be preferable to the graph-of-tables approach described in the previous post, because it retains the simple inductive definition of values. So let's see where this takes us!

We have the original definition of values

values   v ::= n ∣ {(v1,v1′), …, (vn,vn′)}

and here is the denotational semantics, updated with Alan's suggestion to include the clause v2′ ⊑ v2 in the case for application.

E[|n|](ρ) = {n}
E[|x|](ρ) = {ρ(x)}
E[|λx.e|](ρ) = { T ∣ ∀v v′. (v,v′) ∈ T → v′ ∈ E[|e|](ρ(x:=v)) }
E[|e1 e2|](ρ) = { v ∣ ∃T v2 v2′. T ∈ E[|e1|](ρ) ∧ v2 ∈ E[|e2|](ρ) ∧ v2′ ⊑ v2 ∧ (v2′,v) ∈ T }

The ordering on values ⊑ used above is just equality on numbers and subset on function tables.

The first thing to check is whether this semantics can handle self application at all, such as (λf.ff)(λg.42).

Example 1. 42 ∈ E[|(λf.ff)(λg.42)|](∅)

The main work is figuring out witnesses for the function tables. We're going to need the following tables: T0 = ∅, T1 = {(T0, 42)}, T2 = {(T1, 42)}. Here's the proof, working top-down, or goal-driven. The important use of subsumption is the T0 ⊑ T1 below.

  • T2 ∈ E[|λf.ff|](∅)
    So we need to show: 42 ∈ E[|ff|](f:=T1)
    • T1 ∈ E[|f|](f:=T1)
    • T1 ∈ E[|f|](f:=T1)
    • T0 ⊑ T1
    • (T0, 42) ∈ T1
  • T1 ∈ E[|λg.42|](∅)
    So we need to show 42 ∈ E[|42|](g:=T0), which is immediate.
  • T1 ⊑ T1
  • (T1, 42) ∈ T2
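A proof like this can be checked mechanically. Below is a small Python sketch of the semantics as a membership checker; the encoding is mine (tuples for terms, frozensets of pairs for tables), and the existentials in the application case are resolved by drawing witnesses from an explicit candidate list rather than by true guessing.

```python
# An executable sketch of the table semantics with subsumption
# (encoding mine: tuples for terms, frozensets of pairs for tables).

def below(v1, v2):
    # v1 ⊑ v2: equality on numbers, subset on function tables
    if isinstance(v1, frozenset) and isinstance(v2, frozenset):
        return v1 <= v2
    return v1 == v2

def member(e, rho, v, cands):
    """Check whether v ∈ E[e](rho), guessing witnesses from cands."""
    tag = e[0]
    if tag == 'num':
        return v == e[1]
    if tag == 'var':
        return v == rho[e[1]]
    if tag == 'lam':
        # v must be a table whose every pair agrees with the body
        return isinstance(v, frozenset) and all(
            member(e[2], {**rho, e[1]: v1}, v2, cands) for (v1, v2) in v)
    if tag == 'app':
        # some T ∈ E[e1], v2 ∈ E[e2], v2' ⊑ v2 with (v2', v) ∈ T
        return any(
            member(e[1], rho, T, cands) and
            any(member(e[2], rho, v2, cands) and
                any(w == v and below(v2p, v2) for (v2p, w) in T)
                for v2 in cands)
            for T in cands if isinstance(T, frozenset))

# Example 1: (λf. f f)(λg. 42) contains 42, witnessed by T0, T1, T2.
T0 = frozenset()
T1 = frozenset({(T0, 42)})
T2 = frozenset({(T1, 42)})
prog = ('app', ('lam', 'f', ('app', ('var', 'f'), ('var', 'f'))),
        ('lam', 'g', ('num', 42)))
assert member(prog, {}, 42, [T0, T1, T2])
```

With candidates T0, T1, T2 the checker validates Example 1, exercising the subsumption test below(v2′, v2) at the same point as the hand proof.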

Good, so this semantics can handle a simple use of self application. How about factorial? Instead of considering factorial of 3, as in the previous post, we'll go further this time and consider factorial of an arbitrary number n.

Example 2. We shall compute the factorial of n using the strict version of the Y combinator, that is, the Z combinator.

M ≡ λx. f (λv. (x x) v)
Z ≡ λf. M M
F ≡ λn. if n=0 then 1 else n × r(n−1)
H ≡ λr. F
fact ≡ Z H

We shall show that n! ∈ E[|fact n|](∅). For this example we need very many tables, but fortunately there are just a few patterns. To capture these patterns, we define the following table-producing functions.

TF(n) = {(n, n!)}
TH(n) = {(∅, TF(0)), (TF(0), TF(1)), …, (TF(n−1), TF(n))}
TM(n) = ∅ if n = 0;  TM(n) = {(TM(n′), TF(n′))} ∪ TM(n′) if n = 1+n′
TZ(n) = {(TH(n), TF(n))}

TF(n) is a fragment of the factorial function, for the one input n. TH(n) maps each TF(i) to TF(i+1) for up to i+1=n. TM(n) is the heart of the matter, and what makes the self application work. It maps successively larger versions of itself to fragments of the factorial function, that is,

TM(n) = {(TM(0), TF(0)), (TM(1), TF(1)), …, (TM(n−1), TF(n−1))}

For example, here is TM(4):

TM(4) = {(TM(0), TF(0)), (TM(1), TF(1)), (TM(2), TF(2)), (TM(3), TF(3))}

The tables TM enable self application because we have the following two crucial properties:
  1. TM(n) ⊑ TM(1+n)
  2. (TM(n), TF(n)) ∈ TM(1+n)
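These two properties can be sanity-checked for small n by computing the tables directly. In the Python encoding below (names TF and TM are mine) a table is a frozenset of pairs, so the ordering ⊑ on tables is just the subset test:

```python
# Sanity-check the two crucial properties of T_M for small n.
# Encoding mine: a table is a frozenset of (input, output) pairs.
from math import factorial

def TF(n):
    # TF(n) = {(n, n!)}: one fragment of the factorial function
    return frozenset({(n, factorial(n))})

def TM(n):
    # TM(0) = ∅;  TM(1+n') = {(TM(n'), TF(n'))} ∪ TM(n')
    if n == 0:
        return frozenset()
    return frozenset({(TM(n - 1), TF(n - 1))}) | TM(n - 1)

for n in range(5):
    assert TM(n) <= TM(n + 1)              # 1. TM(n) ⊑ TM(1+n)
    assert (TM(n), TF(n)) in TM(n + 1)     # 2. (TM(n), TF(n)) ∈ TM(1+n)
```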
The main lemmas that we prove are

Lemma 1. If n ≤ k, then TM(1+n) ∈ E[|M|](f:=TH(k)).

Lemma 2. TZ(n) ∈ E[|Z|](∅)

If you're curious about the details of the complete proof of n! ∈ E[|fact n|](∅), you can take a look at the proof in Isabelle that I've written here.

This is all quite promising! Next we look at the proof of soundness with respect to the big step semantics.

Soundness with Respect to the Big-Step Semantics

The proof of soundness is quite similar to that of the first version, as the relation between the denotational and big-step values remains the same. However, the following two technical lemmas are needed to handle subsumption.

Lemma (Related Table Subsumption) If T′ ⊆ T and T ∼ ⟨λx.e, ρ⟩, then T′ ∼ ⟨λx.e, ρ⟩.
The proof is by induction on T.

Lemma (Related Value Subsumption) If v1 ∼ v and v2 ⊑ v1, then v2 ∼ v.
The proof is by case analysis, using the previous lemma when the values are function tables.

Theorem (Soundness).
If v ∈ E[|e|](ρ) and ρ ∼ ρ′, then ρ′ ⊢ e ⇓ v′ and v ∼ v′ for some v′.

The mechanization of soundness in Isabelle is here.

Monday, December 19, 2016

Take 2: Graph of Tables for the Denotational Semantics of the Lambda Calculus

In the previous post, the denotational semantics I gave for the lambda calculus could not deal with self application, such as the program (λf.ff)(λg.42) whose result should be 42. The problem was that I defined function values to be tables of pairs of values, using a datatype definition, which rules out the possibility of cycles. In the above program, the table for λg.42 needs to include itself so that the application ff makes sense.

A straightforward way to solve this problem is to allow cycles by representing all the functions created by the program as a graph of tables. A function value will just contain an index, i.e. it will have the form fun[i] with index i, and the graph will map the index to a table. So we define values as follows.

numbers   n ∈ ℕ
values    v ∈ V ::= n ∣ fun[i]
graph     = ℕ → (V × V) list

Given a function value v and graph G, we write tab(v,G) for the table t when v = fun[i] and G(i) = t for some index i. So we modify the semantic function E to be defined as follows.

E : exp → env → (V × graph) set
E[|n|](ρ) = {(v,G) ∣ v = n}
E[|x|](ρ) = {(v,G) ∣ v = ρ(x)}
E[|λx.e|](ρ) = {(v,G) ∣ ∀v1 v2. (v1,v2) ∈ tab(v,G) → (v2,G) ∈ E[|e|](ρ(x:=v1))}
E[|e1 e2|](ρ) = {(v,G) ∣ ∃v1 v2. (v1,G) ∈ E[|e1|](ρ) ∧ (v2,G) ∈ E[|e2|](ρ) ∧ (v2,v) ∈ tab(v1,G)}

Example 1. Let's consider again the program (λf.ff)(λg.42). We'll define a graph G as follows.

G(0) = [(fun[1], 42)]
G(1) = [(fun[1], 42)]

To show (42,G) ∈ E[|(λf.ff)(λg.42)|](∅) we need to show

  1. (fun[0],G) ∈ E[|λf.ff|](∅). So we need to show (42,G) ∈ E[|ff|](f:=fun[1]), which we have because (fun[1],42) ∈ tab(fun[1],G).
  2. (fun[1],G) ∈ E[|λg.42|](∅). We need (42,G) ∈ E[|42|](g:=fun[1]), which is immediate.
  3. (fun[1],42) ∈ tab(fun[0],G). This is immediate.
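These obligations can also be discharged by a program. Here is a Python sketch of the graph-of-tables semantics as a checker (the encoding is mine; application witnesses are guessed only among function values, which suffices for this example):

```python
# Sketch of the graph-of-tables semantics as a membership checker.
# Encoding mine: a function value is ('fun', i); the graph G maps
# indices to tables (lists of pairs); witnesses come from `funs`.

def tab(v, G):
    # tab(fun[i], G) = G(i)
    return G[v[1]]

def member(e, env, v, G, funs):
    """Check whether (v, G) ∈ E[e](env)."""
    tag = e[0]
    if tag == 'num':
        return v == e[1]
    if tag == 'var':
        return v == env[e[1]]
    if tag == 'lam':
        return isinstance(v, tuple) and all(
            member(e[2], {**env, e[1]: v1}, v2, G, funs)
            for (v1, v2) in tab(v, G))
    if tag == 'app':
        return any(member(e[1], env, f, G, funs) and
                   any(member(e[2], env, v2, G, funs) and
                       (v2, v) in tab(f, G)
                       for v2 in funs)
                   for f in funs)

# Example 1: (λf. f f)(λg. 42) with the cyclic table G(1).
fun0, fun1 = ('fun', 0), ('fun', 1)
G = {0: [(fun1, 42)], 1: [(fun1, 42)]}
prog = ('app', ('lam', 'f', ('app', ('var', 'f'), ('var', 'f'))),
        ('lam', 'g', ('num', 42)))
assert member(prog, {}, 42, G, [fun0, fun1])
```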

Example 2. We shall compute the factorial of 3 using the strict version of the Y combinator, that is, the Z combinator.

R ≡ λv. (x x) v
M ≡ λx. f R
Z ≡ λf. M M
F ≡ λn. if n=0 then 1 else n × r(n−1)
H ≡ λr. F
fact ≡ Z H

We shall show that (6,G) ∈ E[|fact 3|](∅) for some graph G that we need to construct. By instrumenting an interpreter for the lambda calculus and running fact 3, we observe the following graph.

G(0) = [(fun[1], fun[4])]   for Z
G(1) = [(fun[3], fun[4])]   for H
G(2) = [(fun[2], fun[4])]   for M
G(3) = [(0,1), (1,1), (2,2)]   for R
G(4) = [(0,1), (1,1), (2,2), (3,6)]   for F

We check all of the following. (Tedious! I'm going to write a program to do this next time.)

  • (fun[4],G) ∈ E[|Z H|](∅)
  • (3,6) ∈ G(4)
  • (fun[0],G) ∈ E[|Z|](∅)
  • (fun[4],G) ∈ E[|M M|](f:=fun[1])
  • (fun[2],G) ∈ E[|M|](f:=fun[1])
  • (fun[4],G) ∈ E[|f R|](f:=fun[1], x:=fun[2])
  • (fun[3],G) ∈ E[|R|](f:=fun[1], x:=fun[2])
  • (1,G) ∈ E[|(x x) v|](f:=fun[1], x:=fun[2], v:=0) and (0,1) ∈ G(4)
  • (1,G) ∈ E[|(x x) v|](f:=fun[1], x:=fun[2], v:=1) and (1,1) ∈ G(4)
  • (2,G) ∈ E[|(x x) v|](f:=fun[1], x:=fun[2], v:=2) and (2,2) ∈ G(4)
  • (fun[2],fun[4]) ∈ G(2)
  • (fun[1],G) ∈ E[|H|](∅)
  • (fun[4],G) ∈ E[|F|](r:=fun[3])
  • (1,G) ∈ E[|if n=0 then 1 else n×r(n−1)|](r:=fun[3], n:=0)
  • (1,G) ∈ E[|if n=0 then 1 else n×r(n−1)|](r:=fun[3], n:=1)
  • (2,G) ∈ E[|if n=0 then 1 else n×r(n−1)|](r:=fun[3], n:=2)
  • (6,G) ∈ E[|if n=0 then 1 else n×r(n−1)|](r:=fun[3], n:=3)

The next step is to update the proof of soundness wrt. the big-step semantics. The graphs will make it a bit more challenging. But hopefully they will make it possible to also prove completeness!

Thursday, December 15, 2016

Simple Denotational Semantics for the Lambda Calculus, Pω Revisited?

I've been trying to understand Dana Scott's Pω and D∞ models of the lambda calculus, as well as a couple of large Coq formalizations of domain theory, and in the process have come up with an extraordinarily simple denotational semantics for the call-by-value lambda calculus. It borrows some of the main ideas from Pω but doesn't encode everything into numbers and puts infinity in a different place. (More discussion about this near the bottom of the post.) That being said, I still don't 100% understand Pω, so there may be other subtle differences. In any event, what follows is so simple that it's either wrong or amazing that it's not already a well-known semantics.

UPDATE: It's wrong. See the section titled Counterexample to Completeness below. Now I need to go and read more literature.
UPDATE: It seems that there is an easy fix to the problem! See the subsequent post.

To get started, here's the syntax of the lambda calculus.

variables    x
numbers      n ∈ ℕ
expressions  e ::= n ∣ x ∣ λx.e ∣ e e

So we've got a language with numbers and first-class functions. I'm going to represent functions as data in the most simple way possible, as a lookup table, i.e., a list of pairs, where each pair is an input value vi and its corresponding output vi′. Of course, functions can have an infinite number of inputs, so how will a finite-length list suffice?

values   v ::= n ∣ [(v1,v1′), …, (vn,vn′)]

The answer is that we're going to write the denotational semantics as a logic program, that is, as a relation instead of a function. (We'll turn it back into a function later.) Instead of producing the result value, the semantics will ask for a result value and then say whether it is correct or not. So when it comes to a program that produces a function, the semantics will ask for a list of input-output values and say whether they agree with the function. A finite list suffices because, after all, you can't actually build an infinite list to pass into the semantics.

So here we go, our first version of the semantics, in relational style, or equivalently, as a function with three parameters that returns a Boolean. We name the function denoto, with an o at the end as a nod to The Reasoned Schemer. The meaning of a λx.e is any table T that agrees with the semantics of the body e. (This allows the table to be empty!) The meaning of an application e1 e2 is simply to do a lookup in a table for e1. (Requiring a non-empty table.) We write (v,v′) ∈ T when (v,v′) is one of the pairs in the table T.

denoto(n, ρ, n) = true
denoto(x, ρ, ρ(x)) = true
denoto(λx.e, ρ, T) = ∀v v′. (v,v′) ∈ T → denoto(e, ρ(x:=v), v′)
denoto(e1 e2, ρ, v) = (∃T v2. denoto(e1, ρ, T) ∧ denoto(e2, ρ, v2) ∧ (v2,v) ∈ T)

The existential quantifier in the line for application is powerful. It enables the semantics to guess a sufficiently large table T, so long as the table agrees with the semantics of e1 and the later uses of the result value. Because the execution of a terminating program can only call the function with a finite number of arguments, and the execution can only use the results in a finite number of ways, there is a sufficiently large table T to cover all its uses in the execution of the whole program. Also, note that T can be large in a couple dimensions: it may handle a large number of inputs, but also, it can specify large outputs (in the case when the outputs are functions).
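One way to get a feel for denoto is to run it. The Python sketch below is a literal transcription (the tuple encoding of terms is mine); since we cannot quantify over all tables, the existential in the application case is resolved by drawing candidates from an explicit list:

```python
# A literal Python reading of denoto (tuple encoding of terms is mine).
# The existential ∃T v2 in the application case is resolved by drawing
# candidates from the explicit list `cands` rather than by guessing.

def denoto(e, rho, v, cands):
    tag = e[0]
    if tag == 'num':
        return v == e[1]
    if tag == 'var':
        return v == rho[e[1]]
    if tag == 'lam':
        # any table agreeing with the body (the empty table qualifies)
        return isinstance(v, frozenset) and all(
            denoto(e[2], {**rho, e[1]: v1}, v2, cands) for (v1, v2) in v)
    if tag == 'app':
        # ∃T v2. denoto(e1,ρ,T) ∧ denoto(e2,ρ,v2) ∧ (v2,v) ∈ T
        return any(denoto(e[1], rho, T, cands) and
                   any(denoto(e[2], rho, v2, cands) and (v2, v) in T
                       for v2 in cands)
                   for T in cands if isinstance(T, frozenset))

# (λx. x) 1 means 1, witnessed by the one-entry table {(1, 1)}:
T = frozenset({(1, 1)})
prog = ('app', ('lam', 'x', ('var', 'x')), ('num', 1))
assert denoto(prog, {}, 1, [T, 1])
```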

Denotational semantics are usually written as functions, not relations, so we remove the third parameter and instead return a set of values. This will bring the semantics more in line with Pω. Also, for this version we'll use the name E for the semantic function instead of denoto.

E[|n|](ρ) = {n}
E[|x|](ρ) = {ρ(x)}
E[|λx.e|](ρ) = { T ∣ ∀v v′. (v,v′) ∈ T → v′ ∈ E[|e|](ρ(x:=v)) }
E[|e1 e2|](ρ) = { v ∣ ∃T v2. T ∈ E[|e1|](ρ) ∧ v2 ∈ E[|e2|](ρ) ∧ (v2,v) ∈ T }

With the semantics written this way, it is clear that the meaning of a λ is not just a finite table, but instead it is typically an infinite set of tables, each of which is an approximation of the actual infinite graph of the function.

Is this semantics correct? I'm not entirely sure yet, but I have proved that it is sound with respect to the big-step semantics.

Theorem (Soundness).
If v ∈ E[|e|](ρ) and ρ ∼ ρ′, then ρ′ ⊢ e ⇓ v′ and v ∼ v′ for some v′.

The two semantics have different notions of values, so a relation ∼ is defined to bridge the two worlds.

n ∼ n

T ∼ ⟨λx.e, ρ⟩  iff  ∀v1 v2 v1′. (v1,v2) ∈ T → v1 ∼ v1′ → ∃v2′. ρ(x:=v1′) ⊢ e ⇓ v2′ ∧ v2 ∼ v2′

The definition of ∼ extends to environments in the natural way.

The semantics and proof of soundness in Isabelle is here.

Counterexample to Completeness

This semantics does not handle self application properly. Consider the program (λf.ff)(λg.1). The operational semantics says the answer is 1. The denotational semantics requires us to find some tables T ∈ E[|λf.ff|](∅) and T′ ∈ E[|λg.1|](∅). We need (T′,1) ∈ T, so we need 1 ∈ E[|ff|](f:=T′). That requires (T′,1) ∈ T′, but that's impossible given that we've defined the values and tables in a way that does not allow cycles.

Relationship with Pω

The idea of representing functions as data, and as a lookup table, comes from Pω, as does having the denotation's result be a set of values. As mentioned above, one (minor) difference is that Pω encodes everything into numbers, whereas here we've used a datatype definition for the values. However, the most important difference (if I understand Pω correctly) is that its functions are infinite in a first-class sense. That is, Pω is a solution to D = ℕ + (D → D) and the things in D → D are functions with potentially infinite graphs. In contrast, I've taken a stratified approach in which I've defined the values V to include only finite representations of functions, V = ℕ + (V × V) list, and then, only at the top level, I've allowed for infinity by making the denotation of an expression be a (potentially infinite) set of values: E : exp → env → V set.

Thursday, December 08, 2016

Denotational Semantics of IMP without the Least Fixed Point

It has been too long since I wrote a blog post! Needless to say, parenthood, teaching, research, service, and my wonderful graduate students and post-docs have been keeping me busy. But with the Fall 2016 semester winding down I can sneak away for a bit and think.

But first, I have something to admit. My education in the theory of programming languages has a glaring hole in it. I only have a very basic understanding of Denotational Semantics despite having taken a course on Domain Theory in graduate school. I figured it would be fun to remedy this situation, and it might even be useful in the future. So my first step was to understand the denotational semantics of the IMP language, which is the starter language for most textbooks on denotational semantics. IMP is simply an imperative language with assignment statements, while loops, if statements, and arithmetic on integers. The IMP language has two syntactic categories, expressions and commands. The following is the syntax of IMP.

variables         x
numbers           n ∈ ℕ
unary operators   u ::= ¬
binary operators  b ::= + ∣ − ∣ × ∣ =
expressions       e ::= n ∣ x ∣ u(e) ∣ b(e,e)
commands          c ::= skip ∣ x:=e ∣ c;c ∣ if e then c else c ∣ while e do c

As far as I can tell, for a semantics to be a denotational semantics it has to satisfy two properties.

  • It is a mapping from abstract syntax (the program) to a mathematical object, which is just to say some precisely defined entity, that describes the observable behavior of the program. For example, the mathematical object could be a relation between a program's inputs and outputs.
  • It is compositional, which means that the denotation of a particular language construct is defined in terms of the denotation of the syntactic sub-parts of the construct. For example, the meaning of a while loop is defined in terms of the meaning of its conditional and the meaning of its body.
For the expressions of IMP, it is straightforward to write down a denotational semantics, the following function E. This E is no different from a recursively defined interpreter. In the following, we map expressions to natural numbers. Following custom, we use the brackets [|e|] as a kind of quote to distinguish abstract syntax from the surrounding mathematics. To handle variables, we also pass in a function σ from variables to numbers, which we call a state.

E[|n|](σ) = n
E[|x|](σ) = σ(x)
E[|u(e)|](σ) = [|u|](E[|e|](σ))
E[|b(e1,e2)|](σ) = [|b|](E[|e1|](σ), E[|e2|](σ))
[|¬|](n) = 1 if n = 0, and 0 if n ≠ 0
[|+|](n1,n2) = n1 + n2
[|−|](n1,n2) = n1 − n2
[|×|](n1,n2) = n1 × n2
[|=|](n1,n2) = 1 if n1 = n2, and 0 if n1 ≠ n2
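Read as code, E for expressions really is just a recursive interpreter. A minimal Python transcription (the tuple encoding of syntax is mine, with a dict for the state σ):

```python
# E transcribed as a recursive evaluator (tuple encoding of syntax is
# mine; a state σ is a dict from variable names to numbers).

def E(e, s):
    tag = e[0]
    if tag == 'num':
        return e[1]
    if tag == 'var':
        return s[e[1]]
    if tag == 'not':
        return 1 if E(e[1], s) == 0 else 0
    n1, n2 = E(e[1], s), E(e[2], s)
    if tag == '+':
        return n1 + n2
    if tag == '-':
        return n1 - n2
    if tag == '*':
        return n1 * n2
    if tag == '=':
        return 1 if n1 == n2 else 0

# ¬(x = 0) in a state where x = 3 evaluates to 1 (true):
assert E(('not', ('=', ('var', 'x'), ('num', 0))), {'x': 3}) == 1
```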

What do the commands of IMP have in common regarding what they do? They change a state to a new state. For example, if σ is the incoming state, then the assignment command x := 42 outputs a new state σ′ which is the same as σ except that σ′(x) = 42. In general, the denotation of a command, C[|c|], will be a relation on states, that is, a set of pairs that match up input states with their corresponding output states. We shall give the denotational semantics of commands, one construct at a time.

The meaning of the skip command is that it doesn't change the state, so it relates each state to itself.

C[|skip|] = {(σ,σ) ∣ σ ∈ state}

The meaning of the assignment statement is to update the state to map the left-hand side variable to the result of the right-hand side expression. So the new state is a function that takes in a variable named y and returns [|e|](σ) if y=x and otherwise returns the same thing as σ.

C[|x:=e|] = {(σ,σ′) ∣ σ ∈ state}  where σ′(y) = E[|e|](σ) if y = x, and σ′(y) = σ(y) if y ≠ x

The meaning of two commands in sequence is just the meaning of the first followed by the meaning of the second.

C[|c1;c2|] = {(σ,σ″) ∣ ∃σ′. (σ,σ′) ∈ C[|c1|] ∧ (σ′,σ″) ∈ C[|c2|]}

The meaning of an if command depends on the conditional expression e. If e evaluates to 0, then the meaning of the if is given by the else branch c_2. Otherwise, the meaning of the if is given by the then branch c_1.

C[\!| \mathtt{if}\, e \,\mathtt{then}\, c_1 \,\mathtt{else}\, c_2 |\!] = \left\{ (\sigma,\sigma') \middle| \begin{array}{l} (\sigma,\sigma') \in C[\!| c_2 |\!] & \text{if}\, E[\!|e|\!](\sigma) = 0 \\ (\sigma,\sigma') \in C[\!| c_1 |\!] & \text{if}\, E[\!|e|\!](\sigma) \neq 0 \end{array} \right\}

The meaning of the while command is the crux of the matter. This is normally where a textbook includes several pages of beautiful mathematics about monotone and continuous functions, complete partial orders, and least fixed points. We're going to bypass all of that.

The meaning of a while command is to map each starting state \sigma_0 to an ending state \sigma_n obtained by iterating its body command so long as the condition is non-zero. Pictorially, we have the following:

We define an auxiliary function named L to express the iterating of the loop. It takes as input the number of iterations of the loop, the denotation of the condition expression de, and the denotation of the command dc.

\begin{align*} L(0, de, dc) &= \{ (\sigma,\sigma) \mid de(\sigma) = 0 \} \\ L(1+n, de, dc) &= \{ (\sigma,\sigma'') \mid de(\sigma) \neq 0 \land \exists \sigma'. (\sigma,\sigma') \in dc \land (\sigma',\sigma'') \in L(n, de, dc) \} \end{align*}

The meaning of the while command is to relate any state \sigma to state \sigma' if L(n,E[\!|e|\!],C[\!|c|\!]) relates \sigma to \sigma' for some n.

C[\!| \mathtt{while}\, e \,\mathtt{do}\, c |\!] = \{ (\sigma,\sigma') \mid \exists n.\, (\sigma,\sigma') \in L(n,E[\!|e|\!],C[\!|c|\!]) \}
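Since IMP commands are deterministic, this definition runs directly: L(n, de, dc) either relates a start state to the unique state reached after exactly n guard-passing iterations, or relates it to nothing, and the while clause searches for a witness n. A small Python sketch (the encoding, and the fuel bound standing in for the unbounded existential, are mine):

```python
# L and the while clause, read as a bounded search (encoding mine).
# Commands are modeled as deterministic state transformers, which
# suffices for IMP; the fuel bound stands in for the unbounded ∃n.

def L_run(n, de, dc, s):
    """Return s' such that (s, s') ∈ L(n, de, dc), or None if none."""
    for _ in range(n):
        if de(s) == 0:
            return None      # guard already false: not n iterations
        s = dc(s)
    return s if de(s) == 0 else None

def while_denot(de, dc, s, fuel=10000):
    # ∃n. (s, s') ∈ L(n, de, dc) — search n up to the fuel bound
    for n in range(fuel):
        s2 = L_run(n, de, dc, s)
        if s2 is not None:
            return s2
    return None              # no witness found within the bound

# while ¬(x = 0) do x := x - 1, started with x = 3:
de = lambda s: 1 if s['x'] != 0 else 0   # denotation of the guard
dc = lambda s: {**s, 'x': s['x'] - 1}    # denotation of the body
assert while_denot(de, dc, {'x': 3}) == {'x': 0}
```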

At this point I'm worried that this is so simple that it couldn't possibly be correct. A good way to check is to prove that it is equivalent to the standard big-step semantics for IMP, which we shall do now.

Equivalence to the Standard Big-Step Semantics

The big-step semantics for the IMP language is a three-place relation on a command, a starting state, and the final state, which we shall write c\mid\sigma\Downarrow\sigma'. It is defined inductively by the following rules.

\begin{gather*} \frac{}{\mathtt{skip} \mid \sigma \Downarrow \sigma} \\[2ex] \frac{\sigma'(y) = \begin{cases} E[\!|e|\!](\sigma) & \text{if}\, y = x\\ \sigma(y) & \text{if}\, y \neq x \end{cases}}{ x := e \mid \sigma \Downarrow \sigma'} \qquad \frac{c_1 \mid \sigma \Downarrow \sigma' \quad c_2 \mid \sigma' \Downarrow \sigma''} {c_1 ; c_2 \mid \sigma \Downarrow \sigma''} \\[2ex] \frac{E[\!|e|\!](\sigma) = 0 \quad c_2 \mid \sigma \Downarrow \sigma' } {\mathtt{if}\,e\,\mathtt{then}\,c_1\,\mathtt{else}\,c_2 \mid \sigma \Downarrow \sigma'}\qquad \frac{E[\!|e|\!](\sigma) \neq 0 \quad c_1 \mid \sigma \Downarrow \sigma' } {\mathtt{if}\,e\,\mathtt{then}\,c_1\,\mathtt{else}\,c_2 \mid \sigma \Downarrow \sigma'}\\[2ex] \frac{E[\!|e|\!](\sigma) = 0} {\mathtt{while}\, e \,\mathtt{do}\, c \mid \sigma \Downarrow \sigma} \qquad \frac{E[\!|e|\!](\sigma) \neq 0 \quad c \mid \sigma \Downarrow \sigma' \quad \mathtt{while}\, e \,\mathtt{do}\, c \mid \sigma' \Downarrow \sigma''} {\mathtt{while}\, e \,\mathtt{do}\, c \mid \sigma \Downarrow \sigma''} \end{gather*}

(The big-step semantics is not denotational because the second rule for while is not compositional: the recursion is not on a proper sub-part but instead on the entire while command.)

We shall prove that the denotational semantics is equivalent to the big-step semantics in two steps.

  1. The big-step semantics implies the denotational semantics. (completeness)
  2. The denotational semantics implies the big-step semantics. (soundness)

Theorem (Completeness). If c \mid \sigma \Downarrow \sigma', then (\sigma,\sigma') \in C[\!|c|\!].
Proof. We proceed by induction on the derivation of c \mid \sigma \Downarrow \sigma'. We have one case to consider per rule in the big-step semantics. (For the reader in a hurry: the case for while at the end is the only interesting one.)
Case: \frac{}{\mathtt{skip} \mid \sigma \Downarrow \sigma} We need to show that (\sigma,\sigma) \in [\!|\mathtt{skip}|\!], which is immediate from the definition of the denotational semantics.
Case: \frac{\sigma'(y) = \begin{cases} E[\!|e|\!](\sigma) & \text{if}\, y = x\\ \sigma(y) & \text{if}\, y \neq x \end{cases}}{ x := e \mid \sigma \Downarrow \sigma'} We need to show that (\sigma,\sigma') \in C[\!|x := e|\!]. Again this is immediate.
Case: \frac{c_1 \mid \sigma \Downarrow \sigma' \quad c_2 \mid \sigma' \Downarrow \sigma''} {c_1 ; c_2 \mid \sigma \Downarrow \sigma''} We have two induction hypotheses: (\sigma,\sigma') \in C[\!|c_1|\!] and (\sigma',\sigma'') \in C[\!|c_2|\!]. It follows (by definition) that (\sigma,\sigma'') \in C[\!|c_1 ; c_2|\!].
Case: \frac{E[\!|e|\!](\sigma) = 0 \quad c_2 \mid \sigma \Downarrow \sigma' } {\mathtt{if}\,e\,\mathtt{then}\,c_1\,\mathtt{else}\,c_2 \mid \sigma \Downarrow \sigma'} We have the induction hypothesis (\sigma,\sigma') \in C[\!|c_2|\!]. Together with the condition expression evaluating to 0, we have (\sigma,\sigma') \in C[\!|\mathtt{if}\,e\,\mathtt{then}\,c_1\,\mathtt{else}\,c_2|\!].
Case: \frac{E[\!|e|\!](\sigma) \neq 0 \quad c_1 \mid \sigma \Downarrow \sigma' } {\mathtt{if}\,e\,\mathtt{then}\,c_1\,\mathtt{else}\,c_2 \mid \sigma \Downarrow \sigma'} We have the induction hypothesis (\sigma,\sigma') \in C[\!|c_1|\!]. Together with the condition being non-zero, we have (\sigma,\sigma') \in C[\!|\mathtt{if}\,e\,\mathtt{then}\,c_1\,\mathtt{else}\,c_2|\!].
Case: \frac{E[\!|e|\!](\sigma) = 0} {\mathtt{while}\, e \,\mathtt{do}\, c \mid \sigma \Downarrow \sigma} From E[\!|e|\!](\sigma) = 0 we have (\sigma,\sigma) \in L(0,E[\!|e|\!],C[\!|c|\!]). Therefore (\sigma,\sigma) \in C[\!|\mathtt{while}\, e \,\mathtt{do}\, c|\!].
Case: \frac{E[\!|e|\!](\sigma) \neq 0 \quad c \mid \sigma \Downarrow \sigma' \quad \mathtt{while}\, e \,\mathtt{do}\, c \mid \sigma' \Downarrow \sigma''} {\mathtt{while}\, e \,\mathtt{do}\, c \mid \sigma \Downarrow \sigma''} We have the induction hypotheses (\sigma,\sigma') \in C[\!|c|\!] and (\sigma',\sigma'') \in C[\!|\mathtt{while}\, e \,\mathtt{do}\, c|\!]. Unpacking the definition of the latter, we have (\sigma',\sigma'') \in L(n,E[\!|e|\!],C[\!|c|\!]) for some n. Therefore we have (\sigma,\sigma'') \in L(1+n,E[\!|e|\!],C[\!|c|\!]). So we conclude that (\sigma,\sigma'') \in C[\!|\mathtt{while}\, e \,\mathtt{do}\, c|\!].
QED.

The other direction, that if the denotation of c relates \sigma to \sigma', then so does the big-step semantics, takes a bit more work. The proof will be by induction on the structure of c. In the case for while we need to reason about the L function. We get to assume that (\sigma,\sigma') \in L(n,E[\!|e|\!],C[\!|c|\!]) for some n and we have the induction hypothesis that \forall \sigma \sigma'.\, (\sigma,\sigma') \in C[\!|c|\!] \to c \mid \sigma \Downarrow \sigma' . Because L is recursive, we're going to need a lemma about L and prove it by induction on the number of iterations.

Lemma If (\sigma,\sigma') \in L(n,E[\!|e|\!],C[\!|c|\!]) and \forall \sigma \sigma'.\, (\sigma,\sigma') \in C[\!|c|\!] \to c \mid \sigma \Downarrow \sigma' , then \mathtt{while}\, e \,\mathtt{do}\, c \mid \sigma \Downarrow \sigma' .
Proof. The proof is by induction on n.
Case n=0. We have (\sigma,\sigma') \in L(0,E[\!|e|\!],C[\!|c|\!]), so \sigma = \sigma' and E[\!|e|\!](\sigma) = 0. Therefore we can conclude that \mathtt{while}\, e \,\mathtt{do}\, c \mid \sigma \Downarrow \sigma'.
Case n=1 + n'. We have (\sigma,\sigma') \in L(1+n',E[\!|e|\!],C[\!|c|\!]), so E[\!|e|\!](\sigma) \neq 0 and (\sigma,\sigma_1) \in C[\!|c|\!] and (\sigma_1,\sigma') \in L(n',E[\!|e|\!],C[\!|c|\!]) for some \sigma_1. From the premise about c, we have c \mid \sigma \Downarrow \sigma_1. From the induction hypothesis, we have \mathtt{while}\, e \,\mathtt{do}\, c \mid \sigma_1 \Downarrow \sigma'. Putting all of these pieces together, we conclude that \mathtt{while}\, e \,\mathtt{do}\, c \mid \sigma \Downarrow \sigma'.
QED.

Theorem (Soundness). For any \sigma and \sigma', if (\sigma,\sigma') \in C[\!|c|\!], then c \mid \sigma \Downarrow \sigma'.
Proof. The proof is by induction on the structure of c.
Case \mathtt{skip}. From (\sigma,\sigma') \in C[\!|\mathtt{skip}|\!] we have \sigma = \sigma' and therefore \mathtt{skip} \mid \sigma \Downarrow \sigma'.
Case x:=e. We have \sigma'(y) = \begin{cases} E[\!|e|\!](\sigma) & \text{if}\, y = x\\ \sigma(y) & \text{if}\, y \neq x \end{cases} and therefore x := e \mid \sigma \Downarrow \sigma'.
Case c_1 ; c_2. We have (\sigma, \sigma_1) \in C[\!|c_1|\!] and (\sigma_1, \sigma') \in C[\!|c_2|\!] for some \sigma_1. So by the induction hypothesis, we have c_1 \mid \sigma \Downarrow \sigma_1 and c_2 \mid \sigma_1 \Downarrow \sigma', from which we conclude that c_1 ; c_2 \mid \sigma \Downarrow \sigma'.
Case \mathtt{if}\,e\,\mathtt{then}\,c_1\,\mathtt{else}\,c_2. We have two cases to consider, whether E[\!|e|\!](\sigma) = 0 or not.

  • Suppose E[\!|e|\!](\sigma) = 0. Then (\sigma,\sigma') \in C[\!|c_2|\!] and by the induction hypothesis, c_2 \mid \sigma \Downarrow \sigma' . We conclude that \mathtt{if}\,e\,\mathtt{then}\,c_1\,\mathtt{else}\,c_2 \mid \sigma \Downarrow \sigma' .
  • Suppose E[\!|e|\!](\sigma) \neq 0. Then (\sigma,\sigma') \in C[\!|c_1|\!] and by the induction hypothesis, c_1 \mid \sigma \Downarrow \sigma' . We conclude that \mathtt{if}\,e\,\mathtt{then}\,c_1\,\mathtt{else}\,c_2 \mid \sigma \Downarrow \sigma' .

Case \mathtt{while}\, e \,\mathtt{do}\, c. From (\sigma,\sigma') \in C[\!|\mathtt{while}\, e \,\mathtt{do}\, c|\!] we have (\sigma,\sigma') \in L(n, E[\!|e|\!], C[\!|c|\!]) for some n. Also, by the induction hypothesis, we have that \forall \sigma \sigma'. \; (\sigma,\sigma') \in C[\!|c|\!] \to c \mid \sigma \Downarrow \sigma' . By the Lemma about L, we conclude that \mathtt{while}\, e \,\mathtt{do}\, c \mid \sigma \Downarrow \sigma'.
QED.

Wow, the simple denotational semantics of IMP is correct!

The mechanization of all this in Coq is available here.

What about infinite loops?

Does this denotational semantics give meaning to programs with infinite loops, such as \mathtt{while}\, 1 \,\mathtt{do}\, \mathtt{skip}? The answer is yes: the semantics defines a total function from commands to relations, so every program gets a meaning. So the next question is: which relation is the denotation of an infinite loop? Just like the least fixed-point semantics, the answer is the empty relation. C[\!|\mathtt{while}\, 1 \,\mathtt{do}\, \mathtt{skip} |\!] = \{ (\sigma,\sigma') \mid \exists n.\; (\sigma,\sigma') \in L(n, E[\!|1|\!], C[\!|\mathtt{skip}|\!]) \} = \emptyset

Comparison to using the least fixed point semantics

The standard denotational semantics for IMP defines the meaning of while in terms of the least fixed point of the following functional.

W(de, dc)(dw) = \left\{ (\sigma,\sigma') \middle| \begin{array}{l} \sigma = \sigma' & \text{if } de(\sigma) = 0 \\ \exists \sigma_1, (\sigma,\sigma_1) \in dc \land (\sigma_1,\sigma') \in dw & \text{if } de(\sigma) \neq 0 \end{array} \right\}

One of the standard ways to compute the least fixed point of a functional F is from Kleene's fixpoint theorem, which says that the least fixed point of F is \bigcup_{k=0}^{\infty} F^k(\emptyset) where \begin{align*} F^0(x) & = x \\ F^{k+1}(x) & = F (F^k(x)) \end{align*} So the traditional denotation of while is: C[\!| \mathtt{while}\,e\,\mathtt{do}\,c |\!] = \bigcup_{k=0}^{\infty} W(E[\!|e|\!],C[\!|c|\!])^k(\emptyset) Applying the definition of infinitary union, we have C[\!| \mathtt{while}\,e\,\mathtt{do}\,c |\!] = \{ (\sigma,\sigma') \mid \exists k.\; (\sigma,\sigma') \in W(E[\!|e|\!],C[\!|c|\!])^k(\emptyset) \} which starts to look similar to our definition. But they are not trivially equivalent.

Consider the following loop that counts down to zero. \mathtt{while}\,\neg (x=0)\, \mathtt{do}\, x := x - 1 To talk about the semantics of this loop, we create the following abbreviations for some relations on states. \begin{align*} R_0 &= \{ (\sigma, \sigma) \mid \sigma(x) = 0 \} \\ R_1 &= \{ (\sigma, \sigma') \mid \sigma(x) = 1 \land \sigma'(x) = 0 \} \\ R_2 &= \{ (\sigma, \sigma') \mid \sigma(x) = 2 \land \sigma'(x) = 0 \} \\ R_3 &= \{ (\sigma, \sigma') \mid \sigma(x) = 3 \land \sigma'(x) = 0 \} \\ & \vdots \end{align*}

  • If x=0 in the initial state, the loop immediately terminates, so the final state is the same as the input state. This is R_0.
  • If x=1 in the initial state, the loop executes one iteration, so the final state has x=0. This is R_1.
  • If x=2 in the initial state, the loop executes two iterations, so the final state has x=0. This is R_2.
  • and so on.
The L function computes exactly these R's. \begin{align*} L(0, E[\!| \neg (x = 0) |\!], C[\!| x := x - 1 |\!]) &= R_0 \\ L(1, E[\!| \neg (x = 0) |\!], C[\!| x := x - 1 |\!]) &= R_1 \\ L(2, E[\!| \neg (x = 0) |\!], C[\!| x := x - 1 |\!]) &= R_2 \\ L(3, E[\!| \neg (x = 0) |\!], C[\!| x := x - 1 |\!]) &= R_3\\ & \vdots \end{align*} The semantics of while given by L says that an initial state is related to a final state if it is possible to guess the iteration count n to pick out the appropriate line of L that relates the two states. C[\!| \mathtt{while}\,e\,\mathtt{do}\,c |\!] = \{ (\sigma,\sigma') \mid \exists n.\, (\sigma,\sigma') \in L(n, E[\!|e|\!],C[\!|c|\!]) \} In contrast, Kleene's iteration incrementally builds up the union of all the R's: \begin{align*} W(E[\!| \neg (x = 0) |\!], C[\!| x := x - 1 |\!])^0(\emptyset) &= \emptyset \\ W(E[\!| \neg (x = 0) |\!], C[\!| x := x - 1 |\!])^1(\emptyset) &= R_0\\ W(E[\!| \neg (x = 0) |\!], C[\!| x := x - 1 |\!])^2(\emptyset) &= R_0 \cup R_1 \\ W(E[\!| \neg (x = 0) |\!], C[\!| x := x - 1 |\!])^3(\emptyset) &= R_0 \cup R_1 \cup R_2 \\ & \vdots \end{align*} The semantics of while given by the least fixed point of W says that an initial state is related to a final state if, after a sufficient number of applications of W, say k, the two states are in the resulting union R_0 \cup \cdots \cup R_{k-1}. C[\!| \mathtt{while}\,e\,\mathtt{do}\,c |\!] = \{ (\sigma,\sigma') \mid \exists k.\; (\sigma,\sigma') \in W(E[\!|e|\!],C[\!|c|\!])^k(\emptyset) \}

In general, L(n,E[\!|e|\!],C[\!|c|\!]) gives the denotation of the loop exactly when n is the number of iterations executed by the loop in a given initial state. In contrast, W(E[\!|e|\!],C[\!|c|\!])^k(\emptyset) produces the kth approximation of the loop's meaning, providing the appropriate initial/final states for up to k-1 iterations of the loop.

However, these two algorithms are equivalent in the following sense.

Theorem (Equivalence to LFP semantics)

  1. If (\sigma,\sigma') \in L(n,E[\!|e|\!],C[\!|c|\!]) , then \exists k.\; (\sigma,\sigma') \in W(E[\!|e|\!],C[\!|c|\!])^k(\emptyset) .
  2. If (\sigma,\sigma') \in W(E[\!|e|\!],C[\!|c|\!])^k(\emptyset) , then \exists n.\; (\sigma,\sigma') \in L(n,E[\!|e|\!],C[\!|c|\!]) .
The first part is proved by induction on n. The second part is proved by induction on k. The full proofs are in the Coq development linked to above.
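As a concrete (non-proof) sanity check of this theorem, both constructions can be computed for the countdown loop over a small finite slice of states. The encoding below is mine: a state is just the value of x, and a relation is a Python set of pairs.

```python
# Compare the two constructions on the countdown loop over the finite
# state slice x ∈ {0,…,5} (encoding mine: a state is the value of x).

states = range(6)
de = lambda x: 1 if x != 0 else 0        # E[¬(x = 0)]
dc = {(x, x - 1) for x in range(1, 6)}   # C[x := x - 1] on this slice

def L(n):
    # L(0) relates guard-false states to themselves; L(1+n') composes
    # one body step with L(n').
    if n == 0:
        return {(x, x) for x in states if de(x) == 0}
    return {(x, x2) for (x, x1) in dc if de(x) != 0
            for (y, x2) in L(n - 1) if y == x1}

def W(dw):
    # one application of the functional W(de, dc)
    return ({(x, x) for x in states if de(x) == 0} |
            {(x, x2) for (x, x1) in dc if de(x) != 0
             for (y, x2) in dw if y == x1})

union_of_L = set().union(*(L(n) for n in range(6)))
kleene = set()
for _ in range(7):
    kleene = W(kleene)                   # after the loop: W^7(∅)
assert union_of_L == kleene == {(x, 0) for x in states}
```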

Parting Thoughts

  • The traditional least fixed-point semantics is overkill for IMP.
  • What makes this simple version work is the combination of choosing the denotation of commands to be relations on states (instead of functions on states) and the use of an existential for the number of loop iterations in the meaning of while.
  • The IMP language is just a toy. As we consider more powerful language constructs such as functions, pointers, etc., at what point will we be forced to use the heavier-duty math of least fixed-points, continuous functions, etc.? I'm looking forward to finding out!