A couple of weeks ago I wrote about what separate compilation means for the lambda calculus. To capture the observable behavior of the compilation units, I defined an abstract machine that produced a trace of the calls between them. The idea was that an optimizing compiler has free rein within a compilation unit, but that it must make and respond to the appropriate external function calls. I noted that while the abstract machine approach got the job done, it would be preferable to map directly from the syntax of the lambda calculus to a trace instead of indirectly via the abstract machine. Further, we would like to know what a single compilation unit means in isolation, but the abstract machine only gives a semantics to whole programs.
A semantics that can give meaning to parts of a program is, roughly speaking, called a compositional semantics. A compositional semantics defines the meaning of each construct of the language in terms of the meaning of the sub-parts of the construct. For example, a conditional 'if' is defined in terms of the meaning of the condition and the two branches. One of the main styles of semantics that is compositional is denotational semantics. Such a semantics maps from the syntax of the program to some mathematical constructs that specify the behavior of the program (typically a function from inputs to outputs). A big-step semantics also maps from syntax to mathematical constructs, but big-step semantics are often not compositional. For example, the meaning of a lambda term is not defined in terms of the meaning of its body. Instead, a big-step semantics maps a lambda term to a closure, which is still syntax. (For reference, see Big-step, diverging or stuck?.)
Here I'm going to give a denotational semantics for the separately compiled lambda calculus, mapping from the syntax of a compilation unit to a set of all possible execution traces.
The following will serve as the syntax for the separately-compiled lambda calculus. The main addition is the notation for referring to functions in other compilation units. Also, each compilation unit consists of a function table that maps function names to lambdas and a whole program is a mapping from compilation unit names to function tables.
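To make the syntax concrete, here is a sketch of it as Python dataclasses. The names (`Var`, `Lam`, `App`, `Ext`) are my own choices, not notation from the post:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:      # variable reference
    name: str

@dataclass(frozen=True)
class Lam:      # lambda abstraction
    param: str
    body: object

@dataclass(frozen=True)
class App:      # function application
    fun: object
    arg: object

@dataclass(frozen=True)
class Ext:      # reference to a function in another compilation unit
    unit: str
    fun: str

# A compilation unit is a function table mapping names to lambdas;
# a whole program maps unit names to function tables.
unit = {"id": Lam("x", Var("x"))}
program = {"u1": unit}
```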
A trace is a sequence of actions. In our setting, actions are function calls or producing a result value. For a function call, we record the name of the function, the argument, and the return value. We'll need three kinds of values: integers, external functions, and functions. From within one compilation unit, we don't know what the external functions do, so we treat them symbolically. On the other hand, functions from within the current unit are known and we represent them directly in terms of their observable behavior, that is, as a mapping from values to a set of traces. The definitions of the sets of values, actions, traces, and behaviors are mutually recursive. One doesn't do that with run-of-the-mill set theory. Here we must invoke the inverse limit construction to obtain sets that solve the above equations. Thanks Dana Scott! (If you want to read more about this, I recommend starting with Chapter 11 of David A. Schmidt's book on denotational semantics.)
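In a programming language (as opposed to naive set theory), the mutual recursion is unproblematic, so we can write the definitions down directly. A sketch in Python, with my own names; an internal function carries its observable behavior as a Python function from values to sets of traces:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class IntV:          # integer value
    n: int

@dataclass(frozen=True)
class ExtFun:        # external function, known only symbolically by name
    name: str

@dataclass(frozen=True)
class FunV:          # internal function: a mapping from values to behaviors
    apply: Callable  # value -> set of traces

@dataclass(frozen=True)
class Call:          # action: a call, recording function name, argument, result
    fun: str
    arg: object
    ret: object

@dataclass(frozen=True)
class Result:        # action: producing a result value
    value: object

# A trace is a tuple of actions; a behavior is a set of traces.
trace = (Call("f", IntV(1), IntV(2)), Result(IntV(2)))
```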
We'll represent function tables with the following function maps, and programs with the following unit maps.
Preview of the Semantics
Before giving the general definition of the semantics, let's consider a couple of examples. We'll write ⟦e⟧ρ for the behavior (set of traces) of an expression e in an environment ρ. (Environments map variables to values.)
First, let's consider the semantics of the identity function. Its behavior is captured by a single trace, which consists of the single action of producing a function value. Of course, this function maps every value to itself.
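As a sketch, using my own lightweight encoding in which an action is a tagged tuple and a behavior is a set of traces, the identity function's denotation might look like:

```python
def identity_fun(v):
    # the observable behavior of the identity function: for any argument v,
    # a single trace whose only action is producing v itself
    return {(("result", v),)}

def denote_identity(env):
    # the behavior of (lambda x. x): a single trace whose only action
    # is producing the function value above
    return {(("result", identity_fun),)}

traces = identity_fun(42)
```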
Next let's consider a call to an external function. We don't know anything about the other compilation units, so we'll include all possible return values as possible behavior: the set of traces contains one trace for each value the call might return.
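The true set of traces is infinite, one per possible return value; this sketch restricts attention to a finite universe of values (my own simplification, to keep it runnable):

```python
def external_call(fname, arg, universe):
    """All possible traces of calling external function fname on arg:
    for each candidate return value, record the call action and then
    produce that value as the result."""
    return {(("call", fname, arg, ret), ("result", ret)) for ret in universe}

traces = external_call("g", 1, {0, 1, 2})
```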
Semantics as Traces
The interesting part of the semantics is in function application. The result of evaluating the operator may be either an external function or an internal function, so a helper function dispatches between the two cases, handling the external calls and the internal calls. With these pieces in place, we can define the meaning of a compilation unit, that is, the meaning of a function table.
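A sketch of that dispatch in Python, under my own encoding: internal functions are Python callables from a value to a behavior, external functions are `("ext", name)` tags, and `UNIVERSE` is an assumed finite set standing in for the infinite set of possible return values:

```python
UNIVERSE = {0, 1, 2}  # assumed finite stand-in for all possible values

def apply_value(fval, arg):
    """Traces of applying fval to arg. Internal functions are applied
    directly to obtain their behavior; external functions contribute one
    trace per possible return value, recording the call."""
    if callable(fval):                       # internal function
        return fval(arg)
    _tag, name = fval                        # external: ("ext", name)
    return {(("call", name, arg, ret), ("result", ret)) for ret in UNIVERSE}

internal = lambda v: {(("result", v),)}      # e.g. the identity's behavior
```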
Meaning of Whole Programs via Composition
That was the main event, but let's go a bit further and see how we can give the meaning of a program in terms of the meanings of the compilation units. To do this, we need a way of filtering out traces that include calls to external functions that couldn't happen, that is, calls whose argument and return values don't match what you'd actually get from the given function in the other compilation unit.

Filtering requires passing values back and forth between units. If the values are just numbers, this is easy, but for functions things are a bit more complex because we need to convert back and forth between the symbolic (hidden) representation of a function and the value-to-behavior mapping. First, we define a reg function for registering a function under a fresh name. We also need a way to convert a symbolic function back to a value-to-behavior mapping, which we accomplish with an up-arrow function.

With those in hand, we define predicates that say when an action, and when a sequence of actions, could really happen. Then we define a filter that takes a behavior (set of traces) and keeps only those traces that can really happen. Filtering extends naturally to functions and unit maps.
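A sketch of the filtering step, simplifying away the registration/up-arrow machinery (my own simplification: here each external function is modeled directly as a Python function from an argument to the set of results it can actually return):

```python
def trace_ok(trace, externals):
    """A trace can really happen if every call action in it records a
    return value that the named external function actually produces
    for the recorded argument."""
    return all(
        ret in externals[name](arg)
        for (tag, name, arg, ret) in (a for a in trace if a[0] == "call")
    )

def filter_behavior(traces, externals):
    """Keep only the traces that are consistent with the other units."""
    return {t for t in traces if trace_ok(t, externals)}

externals = {"inc": lambda v: {v + 1}}        # hypothetical external function
behavior = {
    (("call", "inc", 1, 2), ("result", 2)),   # consistent: inc(1) = 2
    (("call", "inc", 1, 3), ("result", 3)),   # impossible: inc(1) is never 3
}
```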
With filtering in place, we are now ready to give the meaning of a whole program in terms of the meaning of each compilation unit.
I'm not completely happy with the above semantics. The reason is that the behavior of an expression includes receiving internal functions that it never created. Hopefully I'll read about or discover a solution to this!