
Friday, March 24, 2017

Consolidation of the Denotational Semantics and an Application to Compiler Correctness

This is a two part post. The second part depends on the first.

Part 1. Consolidation of the Denotational Semantics

As a matter of expediency, I've been working with two different versions of the intersection type system upon which the denotational semantics is based, one version with subsumption and one without. I had used the one with subsumption to prove completeness with respect to the reduction semantics whereas I had used the one without subsumption to prove soundness (for both whole programs and parts of programs, that is, contextual equivalence). The two versions of the intersection type system are equivalent. However, it would be nice to simplify the story and just have one version. Also, while the correspondence to intersection types has been enormously helpful in working out the theory, it would be nice to have a presentation of the semantics that doesn't talk about them and instead talks about functions as tables.

Towards these goals, I went back to the proof of completeness with respect to the reduction semantics and swapped in the "take 3" semantics. While working on that I realized that the subsumption rule was almost admissible in the "take 3" semantics; only the variable and application equations needed more uses of ⊑. With those changes in place, the proof of completeness went through without a hitch. So here's the updated definition of the denotational semantics of the untyped lambda calculus.

The definition of values remains the same as last time: function tables and values are given by

    tables    T ::= {v1↦v1′, …, vn↦vn′}
    values    v ::= n ∣ T

as does the ⊑ operator:

    n ⊑ n        T1 ⊆ T2 ⟹ T1 ⊑ T2

For the denotation function E, we add uses of ⊑ to the equations for variables (v ⊑ ρ(x)) and function application (v3′ ⊑ v3). (I've also added the conditional expression if e1 e2 e3 and primitive operations on numbers f(e1,e2), where f ranges over binary functions on numbers.)

    E[|n|](ρ)        = {n}
    E[|x|](ρ)        = {v | v ⊑ ρ(x)}
    E[|λx.e|](ρ)     = {T | ∀ v1 v2′. v1↦v2′ ∈ T ⟹ ∃ v2. v2 ∈ E[|e|](ρ(x:=v1)) ∧ v2′ ⊑ v2}
    E[|e1 e2|](ρ)    = {v3′ | ∃ T v2 v2′ v3. T ∈ E[|e1|](ρ) ∧ v2 ∈ E[|e2|](ρ) ∧ v2′↦v3 ∈ T ∧ v2′ ⊑ v2 ∧ v3′ ⊑ v3}
    E[|f(e1,e2)|](ρ) = {f(n1,n2) | ∃ n1 n2. n1 ∈ E[|e1|](ρ) ∧ n2 ∈ E[|e2|](ρ)}
    E[|if e1 e2 e3|](ρ) = {v | ∃ n. n ∈ E[|e1|](ρ) ∧ (n ≠ 0 ⟹ v ∈ E[|e2|](ρ)) ∧ (n = 0 ⟹ v ∈ E[|e3|](ρ))}
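To make the table representation concrete, here is a small Haskell sketch of the value domain and the ⊑ operator. The datatype and function names (Value, VNum, VFun, below) are my own and not from the Isabelle development.

    -- Values: numbers and finite function tables of input/output entries.
    data Value = VNum Int
               | VFun [(Value, Value)]
      deriving (Eq, Show)

    -- v ⊑ v': equality on numbers, table containment on function tables.
    below :: Value -> Value -> Bool
    below (VNum n)  (VNum m)  = n == m
    below (VFun t1) (VFun t2) = all (`elem` t2) t1   -- T1 ⊆ T2, comparing entries for equality
    below _         _         = False

For example, below (VFun []) (VFun [(VNum 0, VNum 1)]) is True, while below (VNum 0) (VNum 1) is False.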

Here are the highlights of the results for this definition.

Proposition (Admissibility of Subsumption)
If v ∈ E[|e|] and v′ ⊑ v, then v′ ∈ E[|e|].

Theorem (Reduction implies Denotational Equality)

  1. If e ⟶ e′, then E[|e|] = E[|e′|].
  2. If e ⟶* e′, then E[|e|] = E[|e′|].

Theorem (Whole-program Soundness and Completeness)

  1. If v ∈ E[|e|](∅), then e ⟶* v′ and v ∈ E[|v′|](∅) for some v′.
  2. If e ⟶* v, then v′ ∈ E[|e|](∅) and v′ ∈ E[|v|](∅) for some v′.

Proposition (Denotational Equality is a Congruence)
For any context C, if E[|e|] = E[|e′|], then E[|C[e]|] = E[|C[e′]|].

Theorem (Soundness wrt. Contextual Equivalence)
If E[|e|] = E[|e′|], then e ≃ e′.

Part 2. An Application to Compiler Correctness

Towards finding out how useful this denotational semantics is, I've begun looking at using it to prove compiler correctness. I'm not sure exactly which compiler I want to target yet, but as a first step, I wrote a simple source-to-source optimizer O for the lambda calculus. It performs inlining and constant folding and simplifies conditionals. The optimizer is parameterized over the inlining depth to ensure termination. We perform optimization on the body of a function after inlining, so this is a polyvariant optimizer. Here's the definition.

    O[|x|](k)    = x
    O[|n|](k)    = n
    O[|λx.e|](k) = λx. O[|e|](k)

    O[|e1 e2|](k) = O[|[x:=e2′]e|](k−1)    if k ≥ 1, e1′ = λx.e, and e2′ is a value
                  = e1′ e2′                otherwise
                  where e1′ = O[|e1|](k) and e2′ = O[|e2|](k)

    O[|f(e1,e2)|](k) = f(n1,n2)      if e1′ = n1 and e2′ = n2
                     = f(e1′,e2′)    otherwise
                     where e1′ = O[|e1|](k) and e2′ = O[|e2|](k)

    O[|if e1 e2 e3|](k) = e2′               if e1′ = n and n ≠ 0
                        = e3′               if e1′ = n and n = 0
                        = if e1′ e2′ e3′    otherwise
                        where e1′ = O[|e1|](k), e2′ = O[|e2|](k), and e3′ = O[|e3|](k)
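For concreteness, here is a Haskell sketch of the optimizer. The syntax datatype, the treatment of primitive operators, and the naive substitution (which assumes bound variables are renamed apart) are my own choices for the sketch; the structure follows the equations above.

    data Expr = Var String
              | Num Int
              | Lam String Expr
              | App Expr Expr
              | Prim String Expr Expr   -- f(e1,e2), with the operator f named by a string
              | If Expr Expr Expr
      deriving (Show)

    -- Interpretation of the primitive operator names (an assumption of this sketch).
    delta :: String -> Int -> Int -> Int
    delta "+" = (+)
    delta "-" = (-)
    delta "*" = (*)
    delta f   = \_ _ -> error ("unknown primitive: " ++ f)

    isValue :: Expr -> Bool
    isValue (Num _)   = True
    isValue (Lam _ _) = True
    isValue _         = False

    -- Naive substitution [x:=v]e; adequate when bound variables are distinct
    -- from the free variables of v.
    subst :: String -> Expr -> Expr -> Expr
    subst x v expr = case expr of
      Var y        -> if y == x then v else expr
      Num n        -> Num n
      Lam y b      -> if y == x then expr else Lam y (subst x v b)
      App e1 e2    -> App (subst x v e1) (subst x v e2)
      Prim f e1 e2 -> Prim f (subst x v e1) (subst x v e2)
      If e1 e2 e3  -> If (subst x v e1) (subst x v e2) (subst x v e3)

    -- The optimizer, parameterized by the inlining depth k.
    optimize :: Int -> Expr -> Expr
    optimize k expr = case expr of
      Var x   -> Var x
      Num n   -> Num n
      Lam x b -> Lam x (optimize k b)
      App e1 e2 ->
        let e1' = optimize k e1
            e2' = optimize k e2
        in case e1' of
             Lam x b | k >= 1 && isValue e2' -> optimize (k - 1) (subst x e2' b)
             _                               -> App e1' e2'
      Prim f e1 e2 ->
        let e1' = optimize k e1
            e2' = optimize k e2
        in case (e1', e2') of
             (Num n1, Num n2) -> Num (delta f n1 n2)
             _                -> Prim f e1' e2'
      If e1 e2 e3 ->
        let e1' = optimize k e1
            e2' = optimize k e2
            e3' = optimize k e3
        in case e1' of
             Num n -> if n /= 0 then e2' else e3'
             _     -> If e1' e2' e3'

For example, optimize 1 (App (Lam "x" (Prim "+" (Var "x") (Num 1))) (Num 2)) yields Num 3.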

I've proved that this optimizer is correct. The first step was proving that it preserves denotational equality.

Lemma (Optimizer Preserves Denotations)
E[|O[|e|](k)|] = E[|e|]
Proof
The proof is by induction on the termination metric for O, which is the lexicographic ordering of k then the size of e. All the cases are straightforward to prove because Reduction implies Denotational Equality and because Denotational Equality is a Congruence. QED

Theorem (Correctness of the Optimizer)
O[|e|](k) ≃ e
Proof
The proof is a direct result of the above Lemma and Soundness wrt. Contextual Equivalence. QED

Of course, all of this is proved in Isabelle. Here is the tar ball. I was surprised that this proof of correctness for the optimizer was about the same length as the definition of the optimizer!

Friday, March 10, 2017

The Take 3 Semantics, Revisited

In my post about intersection types as denotations, I conjectured that the simple "take 3" denotational semantics is equivalent to an intersection type system. I haven't settled that question per se, but I've done something just as good, which is to show that everything that I've done with the intersection type system can also be done with the "take 3" semantics (with a minor modification).

Recall that the main difference between the "take 3" semantics and the intersection type system is how subsumption of functions is handled. The "take 3" semantics defined function application as follows, using the ⊑ operator to require the argument v2 to include all the entries in the parameter v2′, while allowing v2 to have possibly more entries.

    E[|e1 e2|](ρ) = {v3 | ∃ v1 v2 v2′. v1 ∈ E[|e1|](ρ) ∧ v2 ∈ E[|e2|](ρ) ∧ {v2′↦v3} ⊑ v1 ∧ v2′ ⊑ v2}

Values are either numbers or functions. Functions are represented as finite tables mapping values to values.

    tables    T ::= {v1↦v1′, …, vn↦vn′}
    values    v ::= n ∣ T

and ⊑ is defined as equality on numbers and subset for function tables:

    n ⊑ n        T1 ⊆ T2 ⟹ T1 ⊑ T2

Recall that ⊆ is defined in terms of equality on elements.

In an intersection type system (without subsumption), function application uses subtyping. Here's one way to formulate the typing rule for application:

    Γ ⊢2 e1 : C    Γ ⊢2 e2 : A′    C <: A→B    A′ <: A
    -----------------------------------------------------
    Γ ⊢2 e1 e2 : B

Types are defined as follows

    types    A, B, C ::= n ∣ A→B ∣ A∩B ∣ ⊤

and the subtyping relation is given below.

    n <: n   (a)        ⊤ <: ⊤   (b)        A→B <: ⊤   (c)

    A′ <: A    B <: B′
    --------------------  (d)
    A→B <: A′→B′

    C <: A    C <: B
    ------------------  (e)
    C <: A∩B

    A∩B <: A   (f)        A∩B <: B   (g)

    (C→A)∩(C→B) <: C→(A∩B)   (h)

Recall that values and types are isomorphic (and dual) to each other in this setting. Here are the functions T and V that map back and forth between values and types.

    T(n) = n
    T({v1↦v1′, …, vn↦vn′}) = (T(v1)→T(v1′)) ∩ ⋯ ∩ (T(vn)→T(vn′))

    V(n) = n
    V(A→B) = {V(A)↦V(B)}
    V(A∩B) = V(A) ∪ V(B)
    V(⊤) = ∅
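As a concrete illustration of the isomorphism, here is a Haskell sketch of T and V. The names (Value, Type, toType, toValue) are mine, and toValue assumes that ∩ and ⊤ are only used at function types, as in the type system above.

    data Value = VNum Int | VFun [(Value, Value)]
      deriving (Eq, Show)

    data Type = TNum Int | TArrow Type Type | TInter Type Type | TTop
      deriving (Eq, Show)

    -- T: values to types (the empty table corresponds to ⊤, since V(⊤) = {}).
    toType :: Value -> Type
    toType (VNum n)       = TNum n
    toType (VFun [])      = TTop
    toType (VFun entries) =
      foldr1 TInter [ TArrow (toType v) (toType v') | (v, v') <- entries ]

    -- V: types to values.
    toValue :: Type -> Value
    toValue (TNum n)     = VNum n
    toValue (TArrow a b) = VFun [(toValue a, toValue b)]
    toValue TTop         = VFun []
    toValue (TInter a b) = case (toValue a, toValue b) of
                             (VFun t1, VFun t2) -> VFun (t1 ++ t2)   -- union of tables
                             _ -> error "intersection of non-function types"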

Given that values and types are really the same, the typing rule for application is almost the same as the equation for the denotation E[|e1 e2|](ρ). The only real difference is the use of <: versus ⊑. However, subtyping is a larger relation than ⊑, i.e., v1 ⊑ v2 implies T(v2) <: T(v1), but it is not the case that A <: B implies V(B) ⊑ V(A). Subtyping is larger because of rules (d) and (h). The other rules just express the dual of ⊑.

So the natural question is whether subtyping needs to be bigger than ⊑, or whether we could get by with just ⊑. In my last post, I mentioned that rule (h) was not necessary. Indeed, I removed it from the Isabelle formalization without disturbing the proofs of whole-program soundness and completeness wrt. the operational semantics, and was able to carry on and prove soundness wrt. contextual equivalence. This morning I also replaced rule (d) with a rule that only allows equal function types to be subtypes.

    A→B <: A→B   (d′)

The proofs went through again! Though I did have to make two minor changes in the type system without subsumption to ensure that it stays equivalent to the version of the type system with subsumption. I used the rule given above for function application instead of

    Γ ⊢2 e1 : C    Γ ⊢2 e2 : A    C <: A→B
    -----------------------------------------
    Γ ⊢2 e1 e2 : B

Also, I had to change the typing rule for λ to use subtyping to relate the body's type to the return type.

    Γ, x:A ⊢2 e : B′    B′ <: B
    -----------------------------
    Γ ⊢2 λx.e : A→B

Transposing this back into the land of denotational semantics and values, we get the following equation for the meaning of λ, in which everything in the return specification v2′ must be contained in the value v2 produced by the body.

    E[|λx.e|](ρ) = {v | ∀ v1 v2′. {v1↦v2′} ⊑ v ⟹ ∃ v2. v2 ∈ E[|e|](ρ(x:=v1)) ∧ v2′ ⊑ v2}

So with this little change, the "take 3" semantics is a great semantics for the call-by-value untyped lambda calculus! For whole programs, it's sound and complete with respect to the standard operational semantics, and it is also sound with respect to contextual equivalence.

Wednesday, March 08, 2017

Sound wrt. Contextual Equivalence

The ICFP paper submission deadline kept me busy for much of February, but now I'm back to thinking about the simple denotational semantics of the lambda calculus. In previous posts I showed that this semantics is equivalent to standard operational semantics when considering the behavior of whole programs. However, sometimes it is necessary to reason about the behavior of program fragments and we would like to use the denotational semantics for this as well. For example, an optimizing compiler might want to exchange one expression for another less-costly expression that does the same job.

The formal notion of two such "exchangeable" expressions is contextual equivalence (Morris 1968). It says that two expressions are equivalent if plugging them into an arbitrary context produces programs that behave the same.

Definition (Contextual Equivalence)
Two expressions e1 and e2 are contextually equivalent, written e1 ≃ e2, iff for any closing context C, eval(C[e1]) = eval(C[e2]).
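For concreteness, here is a minimal Haskell sketch of one-hole contexts and the plugging operation C[e], restricted to variables, numbers, abstraction, and application for brevity; the names (Ctx, plug, and so on) are mine.

    data Expr = Var String | Num Int | Lam String Expr | App Expr Expr

    data Ctx = Hole
             | CLam String Ctx
             | CAppL Ctx Expr
             | CAppR Expr Ctx

    -- plug c e fills the hole in c with e; as usual for contexts,
    -- plugging may capture free variables of e.
    plug :: Ctx -> Expr -> Expr
    plug Hole         e = e
    plug (CLam x c)   e = Lam x (plug c e)
    plug (CAppL c e2) e = App (plug c e) e2
    plug (CAppR e1 c) e = App e1 (plug c e)

For instance, the context CAppL (CLam "x" Hole) (Num 0) is closing for an expression whose only free variable is x: plugging Var "x" into it gives App (Lam "x" (Var "x")) (Num 0).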

We would like to know that when two expressions are denotationally equal, then they are also contextually equivalent.

Theorem (Sound wrt. Contextual Equivalence)
If E[e1]Γ = E[e2]Γ for any Γ, then e1 ≃ e2.

The rest of the blog post gives an overview of the proof (except for the discussion of related work at the very end). The details of the proof are in the Isabelle mechanization. But first we need to define the terms used in the above statements.

Definitions

Recall that our denotational semantics is defined in terms of an intersection type system. The meaning of an expression is the set of all types assigned to it by the type system.

    E[e]Γ = { A | Γ ⊢2 e : A }

Recall that the types include singletons, functions, intersections, and a top type:

    A, B, C ::= n ∣ A→B ∣ A∩B ∣ ⊤

I prefer to think of these types as values, where the function, intersection, and top types are used to represent finite tables that record the input-output values of a function.

The intersection type system that we use here differs from the one in the previous post in that we remove the subsumption rule and sprinkle uses of subtyping elsewhere in a standard fashion (Pierce 2002).

    Γ ⊢2 n : n

    Γ ⊢2 λx.e : ⊤

    Γ ⊢2 λx.e : A    Γ ⊢2 λx.e : B
    --------------------------------
    Γ ⊢2 λx.e : A∩B

    x:A ∈ Γ
    -----------
    Γ ⊢2 x : A

    Γ, x:A ⊢2 e : B
    ------------------
    Γ ⊢2 λx.e : A→B

    Γ ⊢2 e1 : C    C <: A→B    Γ ⊢2 e2 : A
    -----------------------------------------
    Γ ⊢2 e1 e2 : B

    Γ ⊢2 e1 : A    A <: n1    Γ ⊢2 e2 : B    B <: n2    [|op|](n1,n2) = n3
    ------------------------------------------------------------------------
    Γ ⊢2 op(e1,e2) : n3

    Γ ⊢2 e1 : A    A <: 0    Γ ⊢2 e3 : B
    --------------------------------------
    Γ ⊢2 if e1 then e2 else e3 : B

    Γ ⊢2 e1 : A    A <: n    n ≠ 0    Γ ⊢2 e2 : B
    -----------------------------------------------
    Γ ⊢2 if e1 then e2 else e3 : B

Regarding subtyping, we make a minor change and leave out the rule (C→A)∩(C→B) <: C→(A∩B) because I had a hunch that it wasn't needed to prove Completeness with respect to the small-step semantics, and indeed it was not. So the subtyping relation is defined as follows.

    n <: n        ⊤ <: ⊤        A→B <: ⊤

    A′ <: A    B <: B′
    --------------------
    A→B <: A′→B′

    C <: A    C <: B
    ------------------
    C <: A∩B

    A∩B <: A        A∩B <: B
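Since every premise in these rules involves structurally smaller types than its conclusion, the relation can be decided by a simple backtracking check. Here is a Haskell sketch (the name subtype is mine); it tries the rule for C <: A∩B before the A∩B <: C rules, which for this particular rule set does not appear to lose any derivations.

    data Type = TNum Int | TArrow Type Type | TInter Type Type | TTop
      deriving (Eq, Show)

    subtype :: Type -> Type -> Bool
    subtype (TNum n)     (TNum m)       = n == m                          -- n <: n
    subtype TTop         TTop           = True                            -- ⊤ <: ⊤
    subtype (TArrow _ _) TTop           = True                            -- A→B <: ⊤
    subtype (TArrow a b) (TArrow a' b') = subtype a' a && subtype b b'    -- contravariant arrows
    subtype c            (TInter a b)   = subtype c a && subtype c b      -- C <: A∩B
    subtype (TInter a b) c              = subtype a c || subtype b c      -- A∩B <: A and A∩B <: B
    subtype _            _              = False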

This type system is equivalent to the one with subsumption in the following sense.

Theorem (Equivalent Type Systems)

  1. If Γ ⊢ e : A, then Γ ⊢2 e : A′ and A′ <: A for some A′.
  2. If Γ ⊢2 e : A, then Γ ⊢ e : A.
Proof
The proofs of the two parts are straightforward inductions on the derivations of the typing judgments. QED

This type system satisfies the usual progress and preservation properties.

Theorem (Preservation)
If Γ ⊢2 e : A and e ⟶ e′, then Γ ⊢2 e′ : A′ and A′ <: A for some A′.
Proof
The proof of preservation is by induction on the derivation of the reduction. The case for β reduction relies on lemmas about substitution and type environments. QED

Theorem (Progress)
If ⊢2 e : A and FV(e) = ∅, then e is a value or e ⟶ e′ for some e′.
Proof
The proof of progress is by induction on the typing derivation. As usual it relies on a canonical forms lemma. QED

Lemma (Canonical forms)
Suppose ⊢2 v : A.

  1. If A <: n, then v = n.
  2. If A <: B→C, then v = λx.e for some x and e.

Next we turn to the definition of eval. As usual, we shall define the behavior of a program in terms of the operational (small-step) semantics and an observe function.

    eval(e) = observe(v)    if e ⟶* v
            = bad           otherwise

    observe(n) = n
    observe(λx.e) = fun

In the above we categorize programs as bad if they do not produce a value. Thus, we are glossing over the distinction between programs that diverge and programs that go wrong (e.g., segmentation fault). We do this because our denotational semantics does not make such a distinction. However, I plan to circle back to this issue in the future and develop a version of the semantics that does.
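As a concrete (if indirect) reading of eval, here is a Haskell sketch that replaces the small-step relation with an environment-based interpreter and a fuel counter; running out of fuel or getting stuck both come out as bad, matching the coarse observation above. The datatypes and names are mine, not the Isabelle development's.

    import qualified Data.Map as M

    data Expr = Var String | Num Int | Lam String Expr | App Expr Expr
              | Prim String Expr Expr | If Expr Expr Expr

    data Val = VNum Int | VClos String Expr (M.Map String Val)

    -- Interpretation of the primitive operator names (an assumption of this sketch).
    delta :: String -> Int -> Int -> Maybe Int
    delta "+" a b = Just (a + b)
    delta "*" a b = Just (a * b)
    delta _   _ _ = Nothing

    -- Nothing stands for 'bad': the program ran out of fuel (diverges) or got stuck.
    interp :: Int -> M.Map String Val -> Expr -> Maybe Val
    interp 0    _   _ = Nothing
    interp fuel rho e = case e of
      Var x   -> M.lookup x rho
      Num n   -> Just (VNum n)
      Lam x b -> Just (VClos x b rho)
      App e1 e2 -> do
        VClos x b rho' <- interp (fuel - 1) rho e1
        v2             <- interp (fuel - 1) rho e2
        interp (fuel - 1) (M.insert x v2 rho') b
      Prim f e1 e2 -> do
        VNum n1 <- interp (fuel - 1) rho e1
        VNum n2 <- interp (fuel - 1) rho e2
        n3      <- delta f n1 n2
        Just (VNum n3)
      If e1 e2 e3 -> do
        VNum n <- interp (fuel - 1) rho e1
        if n /= 0 then interp (fuel - 1) rho e2 else interp (fuel - 1) rho e3

    -- observe, rendered as a string: numbers print as themselves, closures as "fun".
    observe :: Val -> String
    observe (VNum n)      = show n
    observe (VClos _ _ _) = "fun"

    eval :: Int -> Expr -> String
    eval fuel e = maybe "bad" observe (interp fuel M.empty e)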

Soundness wrt. Contextual Equivalence

We assume that E[e1]Γ = E[e2]Γ for any Γ and need to show that e1 ≃ e2. That is, we need to show that eval(C[e1]) = eval(C[e2]) for any closing context C. We shall prove Congruence, which lets us lift the denotational equality of e1 and e2 through any context, so we have

    E[C[e1]] = E[C[e2]]        (1)

Now let us consider the cases for eval(C[e1]).

  • Case eval(C[e1]) = observe(v) and C[e1] ⟶* v:
    By Completeness of the intersection type system we have ⊢2 C[e1] : A and ⊢2 v : A′ for some A, A′ such that A′ <: A. Then with (1) we have ⊢2 C[e2] : A (2). The type system is sound wrt. the big-step semantics, so C[e2] ⇓ v′ for some v′. Therefore C[e2] ⟶* v′ because the big-step semantics is sound wrt. the small-step semantics. It remains to show that observe(v) = observe(v′). From (2) we have ⊢2 v′ : A″ for some A″ where A″ <: A, by Preservation. Noting that we already have ⊢2 v : A′, ⊢2 v′ : A″, A′ <: A, and A″ <: A, we conclude that observe(v) = observe(v′) by the Lemma Observing values of subtypes.
  • Case eval(C[e1])=bad:
    So C[e1] either diverges or gets stuck. In either case, we have E[C[e1]] = ∅ (Lemmas Diverging programs have no meaning and Programs that get stuck have no meaning). So by (1) we have E[C[e2]] = ∅. We conclude that C[e2] either diverges or gets stuck by Lemma (Programs with no meaning diverge or get stuck). Thus, eval(C[e2]) = bad.
QED

Lemma (Congruence)
Let C be an arbitrary context. If E[e1]Γ=E[e2]Γ for any Γ, then E[C[e1]]Γ=E[C[e2]]Γ.
Proof
We prove congruence by structural induction on the context C, using the induction hypothesis and the appropriate Compatibility lemma for each kind of expression. QED

Most of the Compatibility lemmas are straightforward, though the one for abstraction is worth discussing.

Lemma (Compatibility for abstraction)
If E[e1]Γ=E[e2]Γ for any Γ, then E[λx.e1]Γ=E[λx.e2]Γ.
Proof
To prove compatibility for abstractions, we first prove that

If Γ ⊢2 e1 : B implies Γ ⊢2 e2 : B for any Γ and B, then Γ ⊢2 λx.e1 : C implies Γ ⊢2 λx.e2 : C.
This is a straightforward induction on the type C. Compatibility follows by two uses of this fact. QED

Theorem (Completeness wrt. small-step semantics) If e ⟶* v, then ⊢2 e : A and ⊢2 v : A′ for some A, A′ such that A′ <: A.
Proof
We have ⊢ e : B and ⊢ v : B by Completeness of the type system with subsumption. Therefore ⊢2 e : A and A <: B by Theorem Equivalent Type Systems. By Preservation we conclude that ⊢2 v : A′ and A′ <: A. QED

In a previous blog post, we proved soundness with respect to big-step semantics for a slightly different denotational semantics. So we update that proof for the denotational semantics defined above. We shall make use of the following logical relation G in this proof.

    G[n] = {n}
    G[A→B] = { ⟨λx.e, ρ⟩ | ∀ v ∈ G[A]. ∃ v′. ρ(x:=v) ⊢ e ⇓ v′ ∧ v′ ∈ G[B] }
    G[A∩B] = G[A] ∩ G[B]
    G[⊤] = { v | v ∈ Values }

    G[∅] = {∅}
    G[Γ, x:A] = { ρ(x:=v) | v ∈ G[A] and ρ ∈ G[Γ] }

We shall need two lemmas about this logical relation.

Lemma (Lookup in G)
If x:A ∈ Γ and ρ ∈ G[Γ], then ρ(x) = v and v ∈ G[A] for some v.

Lemma (G preserves subtyping )
If A <: B and v ∈ G[A], then v ∈ G[B].

Theorem (Soundness wrt. big-step semantics)
If Γ ⊢2 e : A and ρ ∈ G[Γ], then ρ ⊢ e ⇓ v and v ∈ G[A] for some v.
Proof
The proof is by induction on the typing derivation. The case for variables uses the Lookup Lemma and all of the elimination forms use the above Subtyping Lemma (because their typing rules use subtyping). QED

Lemma (Observing values of subtypes)
If ⊢2 v : A, ⊢2 v′ : B, A <: C, and B <: C, then observe(v) = observe(v′).
Proof
The proof is by cases on v and v′. We use lemmas about the symmetry of subtyping for singletons, an inversion lemma for functions, and the fact that subtyping preserves function types. QED

Lemma (Subtyping symmetry for singletons) If n <: A, then A <: n.

For the next lemma we need to characterize the types for functions.

    fun(A→B)

    fun(A)    fun(B)
    ------------------
    fun(A∩B)

    fun(⊤)
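In Haskell, over the Type datatype used in the earlier sketches, this characterization is a small predicate (isFunType is my name for it).

    data Type = TNum Int | TArrow Type Type | TInter Type Type | TTop

    -- fun(A): A describes a function.
    isFunType :: Type -> Bool
    isFunType (TArrow _ _) = True
    isFunType (TInter a b) = isFunType a && isFunType b
    isFunType TTop         = True
    isFunType (TNum _)     = False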

Lemma (Inversion on Functions)
If Γ ⊢2 λx.e : A, then fun(A).

Lemma (Subtyping preserves functions)
If A <: B and fun(A), then fun(B).

Lemma (Diverging Programs have no meaning)
If e diverges, then E[e] = ∅.
Proof
Towards a contradiction, suppose E[e] ≠ ∅. Then we have ⊢2 e : A for some A. Then by soundness wrt. the big-step semantics, we have e ⇓ v, and so also e ⟶* v. But this contradicts the premise that e diverges. QED

Lemma (Programs that get stuck have no meaning)
Suppose that e ⟶* e′ and e′ is stuck (and not a value). Then E[e] = ∅.
Proof
Towards a contradiction, suppose E[e] ≠ ∅. Then we have ⊢2 e : A for some A. Therefore, by Preservation, ⊢2 e′ : A′ for some A′ <: A. By Progress, either e′ is a value or it can take a step. But that contradicts the premise. QED

Lemma (Programs with no meaning diverge or get stuck)
If E[e] = ∅, then e diverges or reduces to a stuck non-value.
Proof
Towards a contradiction, suppose that e does not diverge and does not reduce to a stuck non-value. So e ⟶* v for some v. But then by Completeness wrt. the small-step semantics, we have ⊢2 e : A for some A, which contradicts the premise E[e] = ∅. QED

Related Work

The proof method used here, of proving Compatibility and Congruence lemmas to show soundness wrt. contextual equivalence, is adapted from Gunter's book (1992), where he proves that the standard model for PCF (CPO's and continuous functions) is sound. This approach is also commonly used to show that logical relations are sound wrt. contextual equivalence (Pitts 2005).

The problem of full abstraction is to show that denotational equivalence is both sound (aka. correct), E[e1] = E[e2] implies e1 ≃ e2, and complete, e1 ≃ e2 implies E[e1] = E[e2], with respect to contextual equivalence (Milner 1975). Here we showed that the simple denotational semantics is sound. I do not know whether it is complete wrt. contextual equivalence.

There are famous examples of denotational semantics that are not complete. For example, the standard model for PCF is not complete. There are two expressions in PCF that are contextually equivalent but not denotationally equivalent (Plotkin 1977). The idea behind the counter-example is that parallel-or cannot be defined in PCF, but it can be expressed in the standard model. The two expressions are higher-order functions constructed to behave differently only when applied to parallel-or.

Ronchi Della Rocca and Paolini (2004) define a filter model V for the call-by-value lambda calculus, similar to our simple denotational semantics, and prove that it is sound wrt. contextual equivalence (Theorem 12.1.18). Their type system and subtyping relation differ from ours in several ways. Their ⊤ introduction rule is not restricted to λ, they include subsumption, their ⊤ type is a super-type of all types (not just function types), they include the distributivity rule discussed at the beginning of this post, and they include a couple of other rules (labeled (g) and (v) in Fig. 12.1). I'm not sure whether any of these differences really matter; the two systems might be equivalent. Their proof is quite different from ours and more involved; it is based on the notion of approximants. They also show that V is incomplete wrt. contextual equivalence, but go on to create another model based on V that is complete. The fact that V is incomplete leads me to suspect that E is also incomplete. This is certainly worth looking into.

Abramsky (1990) introduced a domain logic whose formulas are intersection types, φ ::= ⊤ ∣ φ∧φ ∣ φ→φ, and whose proof theory is an intersection type system designed to capture the semantics of the lazy lambda calculus. Abramsky proves that it is sound with respect to contextual equivalence. As far as I can tell, the proof is different from the approach used here, as it shows that the domain logic is sound with respect to a denotational semantics that solves the domain equation D = (D→D)⊥, and then shows that this denotational semantics is sound wrt. contextual equivalence. (See also Alan Jeffrey (1994).)