A language is type safe if running a program in the language cannot result in an untrapped error. A trapped error is something like an exception or a message-not-found error; that is, an error that is part of the defined behavior of the language. An untrapped error is something like a segmentation fault; that is, the program has reached a state for which the language does not define what should happen. A segmentation fault is the underlying operating system catching the error, not the language itself. It is the untrapped errors that hackers take advantage of to break into computer systems. If you want to run untrusted code without getting into trouble, then it's a good idea to only run code written in a type safe language!
Wright and Felleisen pioneered what has become the most flexible approach to proving that a language is type safe in their 1992 paper A Syntactic Approach to Type Soundness. The general idea is to define a small-step operational semantics for the language and show that if a program is well typed, then it is either done reducing or it can reduce to another well-typed program. The safety of an entire sequence of reductions can then be proved by induction. It is now common practice for language designers to prove type safety for new language designs, or at least for interesting subsets of them. Proving type safety does not require advanced mathematics and isn't particularly challenging, except that it is often rather tedious, requiring many technical lemmas in which it is easy to make a mistake.
It turns out that the choice of formulation for the operational semantics can make a significant difference in how many lemmas are needed, and how tedious the proof of type safety becomes. In this blog post, I present an operational semantics that removes the need for many of the standard lemmas. The semantics is based on an abstract machine. The language will be the simply-typed lambda calculus in A-normal form, extended with recursive functions and a few primitive operators. Despite its small size, this language is Turing complete. This language is roughly equivalent to the target language discussed in my previous posts about Structural Induction and the ECD Machine. In the following I present the syntax, operational semantics, and type system of this language. Then I give the proof of type safety. The proof relies on five lemmas, one for each function or relation used in the operational semantics: the variable lookup partial function, the δ partial function that defines the behavior of the primitive operators, the V partial function for evaluating expressions, the transition relation ⟼ for the abstract machine, and the multi-transition relation ⟼∗. The lemma for the transition relation is the main event.
Syntax
types        T ::= Int ∣ Bool ∣ T → T
variables    x, y, z
integers     n
operators    o ::= + ∣ − ∣ =
constants    c ::= n ∣ true ∣ false
expressions  e ::= c ∣ x ∣ fun f(x:T) s
definitions  d ::= x = e(e) ∣ x = o(¯e)
statements   s ::= d; s ∣ return e
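To make the grammar concrete, here is one way the abstract syntax could be rendered as OCaml datatypes. This is a sketch of my own, not part of the original development; the type and constructor names are hypothetical, and the definition form d is folded directly into the statement type.

```ocaml
(* A sketch of the abstract syntax as OCaml datatypes (hypothetical names). *)
type ty = TInt | TBool | TArrow of ty * ty      (* Int, Bool, T → T *)

type op = Add | Neg | Eq                        (* +, −, = *)

type const = Int of int | Bool of bool          (* n, true, false *)

type exp =
  | Const of const                              (* c *)
  | Var of string                               (* x *)
  | Fun of string * string * ty * stmt          (* fun f(x:T) s *)

and stmt =
  | Call of string * exp * exp * stmt           (* x = e1(e2); s *)
  | Prim of string * op * exp list * stmt       (* x = o(¯e); s *)
  | Return of exp                               (* return e *)
```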
Operational Semantics
We use the notation ϵ for the empty list. Given a list L, the notation a⋅L is a larger list with a as the first element and the rest of the elements the same as L. We use lists of key-value pairs (association lists) to represent mappings from variables to types (type environments) and from variables to values (environments). The following lookup (partial) function finds the value associated with a given key in an association list.

lookup(K1, (K2,V)⋅L) = V              if K1 = K2
lookup(K1, (K2,V)⋅L) = lookup(K1, L)  otherwise

Next we define the δ function, which gives meaning to the primitive operators.

δ(+, n1, n2) = n1 + n2
δ(−, n) = −n
δ(=, n1, n2) = (n1 = n2)

The function V maps expressions to values, using environment ρ. The values of this language are constants and closures, as defined below.

values   v ::= c ∣ ⟨fun f(x:T) s, ρ⟩

The definition of V uses the lookup function for variables and turns functions into closures. (The δ function is not needed here because primitive operations appear only in definitions; it is used by the transition rules below.)

V(c, ρ) = c
V(x, ρ) = lookup(x, ρ)
V(fun f(x:T) s, ρ) = ⟨fun f(x:T) s, ρ⟩

A stack κ is a list of statement-environment pairs. The state σ of the machine is simply a stack. The top of the stack contains the actively-executing statement and its environment. The relation ⟼ defines transitions between states. There are only three transition rules: for primitive operators, for calling functions, and for returning from functions.

(x = o(¯e); s, ρ)⋅κ ⟼ (s, (x,v)⋅ρ)⋅κ
    where v = δ(o, ¯V(e,ρ))

(x = e1(e2); s, ρ)⋅κ ⟼ (s′, (y,v2)⋅(f,v1)⋅ρ′)⋅(x = e1(e2); s, ρ)⋅κ
    where v1 = V(e1,ρ) = ⟨fun f(y:T) s′, ρ′⟩ and v2 = V(e2,ρ)

(return e, ρ)⋅(x = e1(e2); s, ρ′)⋅κ ⟼ (s, (x,v)⋅ρ′)⋅κ
    where v = V(e,ρ)

We define ⟼∗ in the usual way, as follows.

σ ⟼∗ σ

σ1 ⟼ σ2    σ2 ⟼∗ σ3
──────────────────────
      σ1 ⟼∗ σ3

The semantics of this language is given by the following eval function. A state is final if it is of the form (return e, ρ)⋅ϵ and V(e,ρ) = v for some v.

eval(s) = c      if (s,ϵ)⋅ϵ ⟼∗ (return e, ρ)⋅ϵ and V(e,ρ) = c
eval(s) = fun    if (s,ϵ)⋅ϵ ⟼∗ (return e, ρ)⋅ϵ and V(e,ρ) = ⟨fun f(x:T) s′, ρ′⟩
eval(s) = stuck  if (s,ϵ)⋅ϵ ⟼∗ σ, σ is not final, and ∄σ′. σ ⟼ σ′
eval(s) = ⊥      if ∀σ. (s,ϵ)⋅ϵ ⟼∗ σ implies ∃σ′. σ ⟼ σ′
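As a concrete companion to the definitions above, here is a sketch of the machine in OCaml, reusing the datatypes from the earlier sketch. This is my own illustration rather than an implementation from the post; the names (eval_exp for V, step, run, and so on) are invented, and situations the semantics leaves undefined simply raise exceptions here.

```ocaml
(* values: constants and closures ⟨fun f(x:T) s, ρ⟩ *)
type value =
  | VConst of const
  | VClosure of string * string * ty * stmt * env
and env = (string * value) list

(* lookup: find the value bound to a key in an association list *)
let lookup x (rho : env) = List.assoc_opt x rho

(* δ: the meaning of the primitive operators *)
let delta op args =
  match op, args with
  | Add, [VConst (Int n1); VConst (Int n2)] -> VConst (Int (n1 + n2))
  | Neg, [VConst (Int n)]                   -> VConst (Int (- n))
  | Eq,  [VConst (Int n1); VConst (Int n2)] -> VConst (Bool (n1 = n2))
  | _ -> failwith "delta: undefined"

(* V: map an expression to a value in environment rho *)
let eval_exp e rho =
  match e with
  | Const c -> VConst c
  | Var x -> (match lookup x rho with
              | Some v -> v
              | None -> failwith "unbound variable")
  | Fun (f, x, t, s) -> VClosure (f, x, t, s, rho)

(* A state is a stack of (statement, environment) frames. *)
type state = (stmt * env) list

(* One machine transition; None means no rule applies (final or stuck). *)
let step (sigma : state) : state option =
  match sigma with
  | (Prim (x, o, es, s), rho) :: kappa ->                    (* primitive operator *)
      let v = delta o (List.map (fun e -> eval_exp e rho) es) in
      Some ((s, (x, v) :: rho) :: kappa)
  | ((Call (_, e1, e2, _), rho) :: _) as frames ->           (* function call *)
      (match eval_exp e1 rho with
       | VClosure (f, y, _, s', rho') as v1 ->
           let v2 = eval_exp e2 rho in
           Some ((s', (y, v2) :: (f, v1) :: rho') :: frames)
       | _ -> None)
  | (Return e, rho) :: (Call (x, _, _, s), rho') :: kappa -> (* return *)
      Some ((s, (x, eval_exp e rho) :: rho') :: kappa)
  | _ -> None

(* Drive the machine until it reaches a final state (or gets stuck). *)
let rec run (sigma : state) : value =
  match sigma with
  | [ (Return e, rho) ] -> eval_exp e rho                    (* final state *)
  | _ -> (match step sigma with
          | Some sigma' -> run sigma'
          | None -> failwith "stuck")

let eval (s : stmt) : value = run [ (s, []) ]
```

Keeping step as a partial function mirrors the fact that ⟼ is a relation with no applicable rule in a stuck state. For example, encoding the program x = (fun f(y:Int) (z = +(y,1); return z))(2); return x as a stmt value and passing it to eval yields VConst (Int 3); this little program is reused as a running example below.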
Type System
The types for the constants are given by the typeof function.

typeof(n) = Int
typeof(true) = Bool
typeof(false) = Bool

The Δ function maps a primitive operator and its argument types to the return type.

Δ(+, Int Int) = Int
Δ(−, Int) = Int
Δ(=, Int Int) = Bool

The following presents the type rules for expressions, definitions, and statements.

lookup(x,Γ) = T
───────────────
   Γ ⊢ x : T

typeof(c) = T
─────────────
  Γ ⊢ c : T

(x,T1)⋅(f,T1→T2)⋅Γ ⊢ s : T2
─────────────────────────────
 Γ ⊢ fun f(x:T1) s : T1 → T2

Γ ⊢ e1 : T1 → T2    Γ ⊢ e2 : T1
────────────────────────────────
    Γ ⊢ x = e1(e2) : (x,T2)

Δ(o, ¯T) = T    Γ ⊢ ei : Ti for i ∈ {1,…,n}
────────────────────────────────────────────
            Γ ⊢ x = o(¯e) : (x,T)

Γ ⊢ d : (x,T1)    (x,T1)⋅Γ ⊢ s : T2
────────────────────────────────────
           Γ ⊢ d; s : T2

   Γ ⊢ e : T
────────────────
Γ ⊢ return e : T

Our proof of type safety will require that we define notions of well-typed values, well-typed environments, well-typed stacks, and well-typed states.

typeof(c) = T
─────────────
   ⊢ c : T

Γ ⊢ ρ    (x,T1)⋅(f,T1→T2)⋅Γ ⊢ s : T2
──────────────────────────────────────
    ⊢ ⟨fun f(x:T1) s, ρ⟩ : T1 → T2

ϵ ⊢ ϵ

⊢ v : T    Γ ⊢ ρ
───────────────────
(x,T)⋅Γ ⊢ (x,v)⋅ρ

⊢ ϵ : T ⇒ T

Γ ⊢ ρ    (x,T1)⋅Γ ⊢ s : T2    ⊢ κ : T2 ⇒ T3
─────────────────────────────────────────────
      ⊢ (x = e1(e2); s, ρ)⋅κ : T1 ⇒ T3

Γ ⊢ ρ    Γ ⊢ s : T1    ⊢ κ : T1 ⇒ T2
──────────────────────────────────────
          ⊢ (s, ρ)⋅κ : T2
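As a concrete illustration (a worked example of my own, not from the original post), here is how the rules assign a type to the running example x = (fun f(y:Int) (z = +(y,1); return z))(2); return x. Write Γ0 for (y,Int)⋅(f,Int→Int)⋅ϵ.
- For the function body: Δ(+, Int Int)=Int, Γ0⊢y:Int, and Γ0⊢1:Int, so Γ0⊢z=+(y,1):(z,Int); with (z,Int)⋅Γ0⊢return z:Int, the sequence rule gives Γ0⊢(z=+(y,1); return z):Int.
- Hence ϵ⊢fun f(y:Int) (z=+(y,1); return z):Int→Int, and with ϵ⊢2:Int the call rule gives ϵ⊢x=(fun f(y:Int) …)(2):(x,Int).
- Finally, (x,Int)⋅ϵ⊢return x:Int, so the whole program has type Int in the empty type environment.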
Proof of Type Safety
The first lemma proves that when an operator is applied to
values of the expected type, the result is a value whose type
matches the return type of the operator.
Lemma (δ is safe)
If Δ(o,¯T)=T and ⊢vi:Ti for i∈{1,…,n}, then δ(o,¯v)=v and ⊢v:T, for some v.
Proof. We proceed by cases on the operator o.
- If the operator o is +, then we have T1=T2=Int and T=Int. Then because ⊢vi:Int, we know that vi=ni for i∈{1,2}. Then δ(+,n1,n2)=n1+n2 and we have ⊢(n1+n2):Int.
- If the operator o is −, then we have T1=Int and T=Int. Then because ⊢v1:Int, we know that v1=n for some n. Then δ(−,n)=−n and we have ⊢−n:Int.
- If the operator o is =, then we have T1=T2=Int and T=Bool. Then because ⊢vi:Int, we know that vi=ni for i∈{1,2}. Then δ(=,n1,n2)=(n1=n2) and we have ⊢(n1=n2):Bool.
The second lemma says that if you have an environment that is well-typed with respect to the type environment, and if a variable x is associated with type T in the type environment, then looking up x in the environment produces a value that has type T.
Lemma (lookup is safe)
If Γ⊢ρ and lookup(x,Γ)=T, then lookup(x,ρ)=v and ⊢v:T for some v.
Proof. We proceed by induction on Γ⊢ρ.
- Case ϵ⊢ϵ (so Γ=ϵ and ρ=ϵ): But then we have a contradiction with the premise lookup(x,Γ)=T, so this case is vacuously true.
- Case (x′,T′)⋅Γ′⊢(x′,v)⋅ρ′, derived from ⊢v:T′ and Γ′⊢ρ′ (so Γ=(x′,T′)⋅Γ′ and ρ=(x′,v)⋅ρ′): Next we consider two cases, whether x=x′ or not.
  - Case x=x′: Then lookup(x,ρ)=v and T=T′, so we conclude that ⊢v:T.
  - Case x≠x′: Then lookup(x,ρ)=lookup(x,ρ′) and lookup(x,Γ)=lookup(x,Γ′)=T. By the induction hypothesis, we have lookup(x,ρ′)=v and ⊢v:T for some v, which completes this case.
The next lemma proves that a well-typed expression evaluates to a value of the expected type.
Lemma (V is safe)
If Γ⊢e:T and Γ⊢ρ, then V(e,ρ)=v and ⊢v:T for some v.
Proof. We proceed by cases on the expression e.
- Case e=c: From Γ⊢c:T we have typeof(c)=T and therefore ⊢c:T. Also, we have V(c,ρ)=c, which completes this case.
- Case e=x: From Γ⊢x:T we have lookup(x,Γ)=T. We then apply the lookup is safe lemma to obtain lookup(x,ρ)=v and ⊢v:T for some v. Thus, we have V(x,ρ)=v and this case is complete.
- Case e=fun f(x:T1) s: We have V(fun f(x:T1) s,ρ)=⟨fun f(x:T1) s,ρ⟩. From Γ⊢fun f(x:T1) s:T we have (x,T1)⋅(f,T1→T2)⋅Γ⊢s:T2, with T=T1→T2. Together with Γ⊢ρ, we conclude that ⊢⟨fun f(x:T1) s,ρ⟩:T.
Now for the fourth and most important lemma. This lemma states that if a state is well typed, then either the state is a final state or the state can transition to a new state of the same type. In the literature, this lemma is often split into two lemmas called progress and preservation. In the setting of an abstract machine, it's convenient to merge these two lemmas into one. Note that the conclusion of this lemma includes two alternatives: the state is final or it can take a step. The power of this lemma is that it rules out the third alternative, that the state is not final and can't take a step. Such a situation is referred to as "stuck" and corresponds to untrapped errors.
Lemma (⟼ is safe)
If ⊢σ:T, then either σ is a final state or σ⟼σ′ and ⊢σ′:T.
Proof. Because ⊢σ:T we know that σ=(s,ρ)⋅κ, Γ⊢ρ, Γ⊢s:T1, and ⊢κ:T1⇒T, for some Γ and T1. We proceed by cases on s because the transition rule that applies depends primarily on s.
- Case s=(x=o(¯e);s′):
From Γ⊢x=o(¯e);s′:T1 we have Γ⊢x=o(¯e):(x,T2) and (x,T2)⋅Γ⊢s′:T1. So Δ(o,¯T′)=T2 and Γ⊢ei:Ti′ for i∈{1,…,n}. Because V is safe, we have V(ei,ρ)=vi and ⊢vi:Ti′ for i∈{1,…,n}. Because δ is safe, we have δ(o,¯v)=v and ⊢v:T2 for some v. Thus, the current state takes the following transition.
(x=o(¯e);s′,ρ)⋅κ ⟼ (s′,(x,v)⋅ρ)⋅κ
We have (x,T2)⋅Γ⊢(x,v)⋅ρ, and together with (x,T2)⋅Γ⊢s′:T1 and ⊢κ:T1⇒T, we conclude that ⊢(s′,(x,v)⋅ρ)⋅κ:T.
- Case s=(x=e1(e2);s′):
From Γ⊢x=e1(e2);s′:T1 we have Γ⊢x=e1(e2):(x,T2) and (x,T2)⋅Γ⊢s′:T1. Thus, we also have Γ⊢e1:T3→T2 and Γ⊢e2:T3. Because V is safe, there exist v1 and v2 such that V(e1,ρ)=v1, V(e2,ρ)=v2, ⊢v1:T3→T2, and ⊢v2:T3. The only way for ⊢v1:T3→T2 to hold is for v1 to be a closure, that is, v1=⟨fun f(y:T3) s″,ρ′⟩. This closure is well typed, so we have Γ′⊢ρ′ and (y,T3)⋅(f,T3→T2)⋅Γ′⊢s″:T2. We now have what we need to show that the current state σ=(x=e1(e2);s′,ρ)⋅κ transitions as follows.
σ ⟼ (s″,(y,v2)⋅(f,v1)⋅ρ′)⋅σ
Viewing σ itself as a stack, we can deduce ⊢σ:T2⇒T, and we also have (y,T3)⋅(f,T3→T2)⋅Γ′⊢(y,v2)⋅(f,v1)⋅ρ′, so we have everything necessary to conclude ⊢(s″,(y,v2)⋅(f,v1)⋅ρ′)⋅σ:T.
- Case s=(return e):
If the stack κ is empty, then from Γ⊢return e:T1 we have Γ⊢e:T1, so V(e,ρ)=v for some v (because V is safe), and therefore σ is a final state and this case is complete. If the stack is not empty, we have κ=(s′,ρ′)⋅κ′. Then, because ⊢κ:T1⇒T, we have s′=(x=e1(e2);s″), Γ′⊢ρ′, (x,T1)⋅Γ′⊢s″:T2, and ⊢κ′:T2⇒T. Because Γ⊢return e:T1, we have Γ⊢e:T1 and therefore V(e,ρ)=v and ⊢v:T1 for some v (because V is safe). So the current state σ takes the following transition.
(return e,ρ)⋅(x=e1(e2);s″,ρ′)⋅κ′ ⟼ (s″,(x,v)⋅ρ′)⋅κ′
We have (x,T1)⋅Γ′⊢(x,v)⋅ρ′, which is the last thing we needed to conclude that ⊢(s″,(x,v)⋅ρ′)⋅κ′:T.
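To see the lemma at work on a concrete program, consider again the running example x = (fun f(y:Int) (z = +(y,1); return z))(2); return x, which has type Int in the empty type environment. (This trace is my own illustration, not part of the original proof.) Writing s0 for the whole program and v1 for the closure ⟨fun f(y:Int) (z = +(y,1); return z), ϵ⟩, the machine executes as follows.

(s0, ϵ)⋅ϵ
⟼ (z = +(y,1); return z, (y,2)⋅(f,v1)⋅ϵ)⋅(s0, ϵ)⋅ϵ
⟼ (return z, (z,3)⋅(y,2)⋅(f,v1)⋅ϵ)⋅(s0, ϵ)⋅ϵ
⟼ (return x, (x,3)⋅ϵ)⋅ϵ

Each state in the trace is well typed at type Int, and the last state is final with V(x, (x,3)⋅ϵ)=3 and ⊢3:Int, just as the lemma promises.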
Lemma (⟼∗ is safe)
If ⊢σ:T and σ⟼∗σ′, then ⊢σ′:T.
Proof. The proof is by induction on σ⟼∗σ′.
- Case σ⟼∗σ: We already have ⊢σ:T.
- Case σ⟼σ1 and σ1⟼∗σ′ (so σ⟼∗σ′): Because σ takes a step and no final state can take a step, σ is not final. So because ⟼ is safe, we have σ⟼σ″ and ⊢σ″:T for some σ″. Because ⟼ is deterministic (the three transition rules obviously do not overlap), σ″=σ1 and therefore ⊢σ1:T. Then by the induction hypothesis, we conclude that ⊢σ′:T.
Theorem (Type Safety)
If ϵ⊢s:T, then either
- eval(s)=c and ⊢c:T, or
- eval(s)=fun and T=T1→T2 for some T1,T2, or
- eval(s)=⊥ (the program does not terminate).
Proof. If the program does not terminate, that is, if every state reachable from (s,ϵ)⋅ϵ can take another step, then eval(s)=⊥ and the third alternative holds. So suppose that the program terminates: we have (s,ϵ)⋅ϵ⟼∗σ for some σ and ¬∃σ′.σ⟼σ′. From ϵ⊢s:T, ϵ⊢ϵ, and ⊢ϵ:T⇒T, we have ⊢(s,ϵ)⋅ϵ:T, so because ⟼∗ is safe, we have ⊢σ:T. Then because ⟼ is safe, we know that either σ is final or it can take a step. But we know it can't take a step, so it must be final. So σ=(return e,ρ)⋅ϵ, and from ⊢σ:T we have Γ⊢ρ and Γ⊢e:T for some Γ (the empty stack forces the type of return e to be T). Then because V is safe, we have V(e,ρ)=v and ⊢v:T for some v. If v=c, then we have eval(s)=c and ⊢c:T. If v=⟨fun f(x:T1) s′,ρ′⟩, then from ⊢v:T we have T=T1→T2 for some T2, and eval(s)=fun.
QED.
That's it! Type safety in just five easy lemmas.