Wednesday, October 09, 2024

Take-aways from using Deduce in the classroom

During two weeks of September 2024 I experimented with using the Deduce proof assistant in an honors undergraduate data structures course at Indiana University. The rest of the semester is taught using the Java programming language.

The primary goal of the experiment was to find out whether it would be feasible to teach our undergraduate students how to use a proof assistant and prove correctness of some basic algorithms on linked lists. I gave three lectures about Deduce: (1) how to define functions and data types, (2) how to write proofs, and (3) an application of these ideas to defining insertion sort and proving its correctness. The teaching assistants conducted a lab each week. The first lab helped the students install Deduce and write some functions on linked lists, such as reverse, sum, and cumulative sum. The second lab asked the students to define a tail-recursive variant of insertion sort and prove its correctness, reusing the main lemma from the proof given in lecture. During the second week we also assigned some proof exercises as homework. A week after the second lab, I asked the students to complete a survey regarding the use of Deduce, and the results of the survey are shown below.

The students were able to complete the two labs and the exercises, so the main take-away from the experiment is, yes, it is feasible to teach undergraduate students at IU how to use a proof assistant in a relatively short amount of time! However, I think the survey results tell us that it was just barely feasible and the degree of frustration for the students was still too high. Fortunately, the survey also points to many things that can be improved about Deduce that will help to lower the frustration for beginners.

The number one challenge for students learning Deduce was learning the syntax and dealing with syntax errors. This comes up over and over in the survey. This surprised me because my number one goal when designing Deduce was to make the syntax as unsurprising as possible. However, at the end of the day, Deduce is still a computer language, which means it takes time for students to learn the grammar. Unfortunately, I built the parser for Deduce using a parser generator, and the default error message is just “unexpected token” plus a position in the file, which typically pointed to the token after the incorrect one. So an important take-away is that the syntax error messages need to be top-notch. This past week I rewrote the parser by hand (recursive descent) and greatly improved the error messages.

Another aspect of the syntax challenge was a bit more subtle. The students had difficulty learning which keywords to use in different parts of a proof. In Deduce, there is typically one keyword (and syntactic form) for each logical rule. For example, if pq is a proof of (if P then Q) and p is a proof of P, then apply pq to p is a proof of Q (modus ponens in Deduce). However, in English mathematical prose, there are no specific words that correspond to using modus ponens (AFAIK). Something I’m now experimenting with is allowing the proof rule to be inferred, so one could just write pq, p (the comma being conjunction introduction) and Deduce figures out that it can use modus ponens to conclude Q. However, when that fails, it may be more difficult to provide an informative error message because the syntax does not capture as much of the user’s intent.
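To make the keyword-per-rule design concrete, here is a small sketch of modus ponens inside a Deduce proof (the labels pq and p and the formulas P and Q are hypothetical, standing for whatever givens are in scope):

// Suppose pq is a given proving (if P then Q) and p is a given proving P.
// The apply-to form is Deduce's syntax for modus ponens:
have q: Q by apply pq to p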

On a related note, several students mentioned that they would like a reference manual for Deduce. At the time of the course, the documentation for Deduce consisted of two tutorials, one for functional programming in Deduce and the other for writing proofs. However, those documents are not ideal for answering questions like, “What is the have keyword for and how does it work?” I’m now in the process of adding a reference manual.

Deduce allows students to write incomplete proofs, using ? as a placeholder for parts that still need to be filled in. For each ?, Deduce reports what formula needs to be proved and it gives advice regarding what the next step of the proof could be, based on the top logical connective of the formula. For example, if the formula is an all, then Deduce suggests arbitrary as the next step. Of course, induction is also an alternative choice, and one student would have liked Deduce to also suggest that alternative. An undergraduate research assistant has already added that to Deduce. Inspired by how much this advice helped the students, I’ve added advice for elimination rules. For example, if pq is a proof that P implies Q, then writing help pq will cause Deduce to suggest that you use the apply-to form.
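As a sketch of that placeholder workflow (the theorem here is invented for illustration), a student can leave a ? where the proof should go and let Deduce respond with the goal formula and suggested next steps:

theorem add_zero_example: all n:Nat. n + 0 = n
proof
  // The goal is an 'all' formula, so Deduce suggests 'arbitrary'
  // (and now also 'induction') as the next step.
  ?
end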

Helping students during office hours was also a great way to learn more about student expectations and challenges using Deduce. One thing I noticed is that a student would sometimes refer to an already-proved formula, i.e. a given, by writing the formula itself instead of using the label for the formula. For example, writing apply (if P then Q) to P instead of apply pq to p. I’ve now added the ability to refer to givens by formula. In general, I expect users to prefer the formula when it is small and prefer the label when the formula is large.

Finally, it looks like it would be better to spend 4 weeks introducing students to Deduce instead of 2 weeks. This would allow for a more gentle introduction to Deduce and it would enable more applications of Deduce to proofs of correctness. I’ll give this a try with the 100-student regular section of data structures this Spring!

I’ll be going back over the survey results from time to time, looking for more lessons to be learned and ways to improve Deduce.

Survey Results

The survey consisted of 7 questions and was completed by 21 students in the honors undergraduate data structures course. 4 of the questions were multiple-choice and 3 were open-ended.

4 Multiple-choice Questions

[Bar charts of the responses: DeduceLength, DeduceDiscreteMath, DeduceJava, DeduceCorrectProofs]

3 Open-ended Questions

How did you feel when you completed a proof of a theorem using Deduce?

Extremely satisfied

In all honesty, it felt like I was just trying all possible combinations
of applying the logic rather than figuring out how to prove the questions. 
More time spent on learning what the syntax actually means might help.

I had a hard time proving theorems in Deduce, mostly because of the syntax. 
So, when I finally saw "xyz.pf is valid," the joy I felt was indescribable. 
I could sleep at night peacefully knowing I didn't prove anything wrong.

It felt satisfying when it finally completed, however it was because of consistent
frustration in figuring out how to phrase the proofs in deduce.

I typically felt confused and frustrated.

Seeing the words: "proof is valid", would always bring me satisfaction. However, 
I would frequently feel sad when I struggled more with the implementation of a 
proof in Deduce rather than the proof itself.
Also, I would like to add a comment about question 2. While I think doing proofs 
without a program like Deduce is sufficient for students in C241, it could make 
grading proofs for UIs/TAs easier.

Liberated (I struggled with the syntax of Deduce so it was satisfying when the proof finally ran).

I felt accomplished afterwards, like I had completed a journey of smaller goals to
achieve proving the final theorem.

Completing a proof felt rewarding. Pf. Siek mentioned that it would eventually feel 
like a video game, and I agree. It felt like putting together pieces of a puzzle
until the puzzle was complete as opposed to proof-writing as I'm used to it.

Very satisfied once I understood why it was correct.

It feels a little nice having confirmation of my proof correctness, but I can still
feel confused after completing a proof. Sometimes after completing a proof in Deduce, 
I have trouble reading the proof I just wrote. 

It did feel good because you know what you’ve done is correct.

relief, like a weight has been lifted off my shoulders. 
During the proof: I hate deduce
After completing the proof: I love deduce I did it I can do anything the world is mine

I felt really accomplished because the syntax in Deduce was hard to pick up in the 2 weeks
we learned it. I still think Deduce was helpful in a way even though I struggled, so maybe
spending more time with Deduce would help with syntax knowledge.

Relieved, the process was challenging, but typically rewarding in the end. However, 
I sometimes felt frustrated with Deduce since I sometimes didn't understand the syntax, 
which made it hard to get Deduce to give helpful feedback.

I think proofs that I wrote in deduce were super thorough, your logic needed to be super
strong and for a more difficult proof, you needed to map out the definitions and the 
equations you are trying to prove. I felt more confident about my logic skills and code 
after the proofs. It was definitely a lot of hard work but I think it was worth it to get 
an understanding of the correctness of your programs.

I felt good and rewarded when the valid statement came out, especially after trial and error
with the syntax

Every time I completed a proof of a theorem using Deduce, I felt mostly relieved.  
It was frustrating trying to understand Deduce's syntax or how to apply certain concepts
to complete a proof, but at least when Deduce says that the proof is valid, I know it's 
correct and I do not have to go looking for someone to look over my work. I think, to make 
it easier to learn Deduce, it would be helpful to have more completed step-by-step examples
of proved theorems available to students. I'd also argue that when Deduce did offer advice, 
the advice that I got was usually very helpful and assisted greatly in pushing me towards 
correctness. However, often, Deduce would not give advice at all when I had an error in my 
proof, so I would suggest having a handbook or guide available to students to help decipher
what errors lead to certain, unhelpful, error messages. 

I felt really happy that the theorem worked and was marked as valid since this means 
my logic is actually correct.

I found completing the theorems in Deduce satisfying. The immediate feedback was nice
compared to traditional informal proofs that had delayed feedback.

I felt good. Especially because it was much harder than normal proofs. Nevertheless, 
I felt like I understood the logic behind the functions I programmed.

What changes or improvements to Deduce would you find most helpful? Please be as specific as you can.

The biggest thing that would help would be more diverse/descriptive error messages. 
Most of the error messages I got were something like "unexpected symbol ____". 
Clearly this meant I wasn't using the syntax correctly, but I didn't get any feedback
on what I was getting wrong. So I think if the error messages were somehow able to have a 
bit more information that would be good. Also if you could make the ? work while using the
equations statement, that would help a lot.

Better error messages would probably be the most helpful, followed by an easier process of
applying facts, definitions, and rewrites to the goal.

I'm not sure if we already have it, but a documentation explaining how all the functions work
would be really helpful. For example, how "suffices," "apply," "definition," and others function.

Sometimes the ? would not work as expected - where it would give me an error because it was
thinking that ? was part of the proof rather than to be used as like a "fill in the blank".

Some of the syntax just feels very unintuitive, it seems unreasonable to me to try to teach us the
language in just a few weeks and expect us to be able to use it well. 

In class, we were taught how to use suffices to change the goal when proving equivalence. 
Similarly, when proving things in C241, we only changed the goal when we did logical equivalence
proofs. However, in the Insertion Sort lab, when we had to use suffices in our proofs, the novelty
of suffices/Deduce compounded onto the fact that we had not practiced changing the goal in proofs
that were not about equivalence. This led to me feeling very sad after taking a long time to complete
only the base case of the isort proof.
The addition of syntax highlighting could also be cool, but I do not think it is too important.

Maybe an index for all the syntax (eg. apply, suffice) that was used in the examples on the Github page
(I had to use ctrl+f to find the terms on the page).

I think making the error statements a little more specific in more areas would be helpful for 
troubleshooting when writing the proof.

Having tools to support it such as intelliSense and autocomplete or even a VSCode extension would
improve the learning curve. A lot of times you know where the proof needs to go but you don't know 
the syntax that deduce wants, which is frustrating. Also a frequent error message I would get was
"unexpected token". I think it would be useful if there was a way for deduce to (if possible) 
partially parse the line the unexpected token was on to see what type of command I was trying to
use and give some advice on it.

In the question mark advice section, include induction as a possibility.

A more complete guide to using deduce

I wish we covered it more in class because I struggled with it a lot outside of class.

The "evidence" part of each line could be improved. When we had to reference a function, 
or theorem, or a previously proved fact, we had to use a large variety of different keywords. 
Definition, apply, definition in, rewrite, rewrite in, and maybe more had me spending a lot 
of time figuring out which one I was supposed to use even though I already knew the next step
of the proof. I just didn't know the format for the justification deduce was looking for. 
Maybe it is possible to combine some of these?

I think listing out the proper ways to approach proofs would help. Moreover, there should be
a reference listing when to use "suffices ... with definition ...", "by definition", "by rewrite",
or "by apply". I looked for guidance in GitHub and think that the new syntax learned up to a certain
point should be accessible in an efficient way.

Here are a few things I would suggest:
Clearer feedback on errors: A lot of the time I would get errors/responses from Deduce where 
I had a difficult time interpreting what I was supposed to be doing to fix my issues.
More detailed examples: It would be nice to have a reference for proofs in the homework, 
especially if there were a line-by-line breakdown of how the proofs work.
Better documentation or tutorials: Learning the system was difficult due to a lack of resources,
as the only materials were the two .md files on proofs and Functional Programming.

While the feedback from deduce definitely helped me identify some minor errors, I feel the
error messages sometimes lack clarity. And its more unclear when dealing with type inference
or mismatch errors or issues with multisets. I don't know to what extent it could be possible
in the context of a class, but more detailed error messages with specific suggestions or more
context can help debug proofs more easily. 

Sometimes it got difficult to know when to use which words like "with", "by", "apply." 
If there is some way to be able to use most of the words we used in 241, I think that would 
be very helpful.

I think Deduce would benefit most from better error messages and more forgiving syntax. 
First, it seems that when Deduce recognizes an error in a proof, but doesn't know specifically
how to explain the error, it just defaults to some sort of "parsing error" or reports an
"unexpected token" which I think is sometimes unhelpful. Additionally, I believe that when trying
to use a theorem, if it is used incorrectly, then Deduce should mention every parameter the theorem 
needs in some sort of explicit detail, so I don't have to look through another document to decipher
what the theorem expects. I will also say that it seems like Deduce is only willing to give advice
when I put the "?" symbol in the exact right place in the proof. For example, I've had issues where
the "?" symbol was not formatted properly (such as indentation or location in a line) and I received
one of the two errors I mentioned above. It would also be great to have more overall documentation
about specifically the syntax of Deduce in addition to the existing documentation about how to prove
theorems using features of Deduce.

It would be nice to have better error messages that were a bit clearer or better explain what the
problem was that occurred. For example, I had an issue where I was trying to use a "definition" 
instead of using "apply," and the error messages would just say something like "no proofbinding."

Allow the use of question marks in the equations statements. I also think adding a little statement
when solving for equalities v.s. other goals (ex. Insertion sort theorem) would be nice.

Better IntelliSense, and to make it less confusing, and so many keywords mean the same thing, which
makes this very confusing. So, looking at it at first glance, you won't know that they have different
functionalities. I am trying to make it easier for a first-timer to learn the language without depending
on the TAs or professors to tell them what everything does. Also, when someone hovers over an in-built
function or keyword, it should show a brief definition of what that does.

What about Deduce did you dislike or find frustrating? Please be as specific as you can.

I got better about this as we went on, but at the beginning and even somewhat at the end,
it felt somewhat ambiguous to me when to use which keyword ("suffices," "definition," 
"rewrite," etc.). I'm not sure how to fix this and a large part of this is probably just
because we didn't spend long on Deduce but that was the biggest learning curve for me.

It was often very difficult to apply facts, definitions, or rewrites in the way that I would
expect them to. Maybe this was just because I didn't understand the syntax too well.

Sometimes, even though my proof was logically correct, I had to restructure it because Deduce 
didn’t recognize it as valid.

The syntax really frustrated me. I felt that it would be so much easier to phrase in english
and then I had to work backwards to try to figure out how deduce wanted the proofs to be.
I put 4 weeks as the right amount of time because I felt if I had more time to get used to
deduce I wouldn't be as upset with it, but I was between putting that or "Not using deduce". 

Occasionally, I would receive feedback that was not very helpful. Unfortunately, I do not
remember specific examples of that happening. 
It is also slightly frustrating that after taking the time to learn it, we are not continuing
the use of Deduce. Although I am relieved, I feel like I finally have a grasp on Deduce.

I'm not sure if it's because I have such a low understanding of Deduce, but it was hard to figure
out what syntax to use for which situations (eg. suffice, rewrite, apply).

Honestly, the only thing I can think of is I would occasionally get confused with the question mark
and when or where I would need to put it, but that might have been user error as well.

Basically what I mentioned above. Not knowing the exact syntax to get where you want to go
(usually a trivial theorem or rule) and unexpected token errors not showing much help. 

Learning syntax and translating things I knew to actual proofs.

What I find most frustrating about Deduce is knowing how to do a proof but not being able to figure
out how to write it in Deduce. I also feel like Deduce is a bit restrictive. But I understand both
of these are because it's a computer program. If I had to prove something through a proof 
programming language, I guess Deduce is as good as any. 

I found the syntax very difficult. During lab I didn’t know how to solve a problem in deduce
because I didn’t know how to word it properly.

1. sometimes to see what to do next you can use ?, but other times ? will create an error
2. I know the next step of the proof but I don't know how deduce wants me to prove it

Coding the functions wasn't frustrating since I had taken H211, but despite having previous
experience with similar syntax, I still struggled with Deduce syntax, which was pretty frustrating.

One of the main frustrations I encountered while using Deduce was the feedback system. The error
messages were often unclear, and it was difficult to pinpoint exactly what went wrong in my proofs. 
For example, when I was trying to rewrite an expression using insertion_sort2(xs) = isort(xs, empty) 
by definition insertion_sort2 to prove sorted(insertion_sort2(xs)), Deduce gave me an error message
stating, “no matches found for rewrite with insertion_sort2(xs) = isort(xs, empty) in
sorted(isort(xs, empty)).” This feedback was vague and didn’t help me understand why the rewrite failed
or how to fix it.
Additionally, the learning curve for Deduce felt steep, especially compared to programming in more 
familiar languages like Java. The formal proof system required precise logic, and I often struggled
with applying theorems correctly due to the complex syntax and lack of intuitive error handling.
I also found that Deduce lacked detailed examples or a more comprehensive tutorial system. When I was
trying to apply the isort_sorted theorem, for instance, there wasn’t a clear guide on how to handle
such proofs step-by-step, leaving me unsure whether I was using the right approach.

When I took C241, the way we proved theorems was slightly different. In deduce, the most frustrating
thing was the syntax and flow. More often than not, I had all the logic and steps down, but there 
would be issues with matching the right statements. I usually use have statements a lot and that 
confuses me. And then I tried using the equations feature which was far simpler to complete the same proof. 

The most frustrating part of using Deduce was knowing how to prove a theorem to a person, 
but not knowing how to prove a theorem to Deduce. The problem of trying to prove a specific
theorem to Deduce becomes even more apparent when error messages can be unhelpful. I think some 
Deduce features such as "?" and "sorry" are great steps in the right direction, but I still think
more should be done to help point students to the correct solution when they make a mistake in their proofs.

I found it frustrating how specific you have to be with deduce as compared to real life.
I would understand the logic of a proof but I would not be able to prove it in deduce because
I did not know exactly how to format my reasoning.

I found using the equations implementation an annoyance because you couldn’t get feedback after
every line when a question mark was placed. I would write out everything in steps and that would 
take up a lot of space on the line and become hard to read.

Some syntax was very confusing. Like defining functions. It felt like it was not meant for 
UG students. especially the use of Lambda functions. I don't know how you can improve that 
given that it is a vital part of the language but that is how I felt.

Sunday, August 11, 2024

Binary Search Trees, Correctly!

This is the seventh blog post in a series about developing correct implementations of basic data structures and algorithms using the Deduce language and proof checker.

This post continues on the theme of binary trees, that is, trees in which each node has at most two children. The focus of this post is to implement the Search interface, described next, using binary trees.

The Search Interface

The Search interface includes operations to (1) create an empty data structure, (2) search for a value based on its associated key, and (3) insert a new key-value association.

The Search interface can also be implemented in a simple but less efficient way, using a function to map keys to values. With this approach, the operation to search for a value is just a function call. The Maps.pf file defines the empty_map operation, which returns a function that maps every input to none.

assert @empty_map<Nat,Nat>(5) = none

The Maps.pf file also defines the update(f, k, v) operation, which returns a function that associates the key k with v but otherwise behaves like the given function f. Here is an example use of update.

define m2 = update(@empty_map<Nat,Nat>, 4, just(99))
assert m2(4) = just(99)
assert m2(5) = none

We will use this function implementation of the Search interface to specify the correctness of the binary tree implementation of Search.

We will store the keys and their values in a binary tree and implement BST_search and BST_insert operations. These operations are efficient (logarithmic time) when the binary tree is balanced, but we will save how to balance trees for a later blog post.

The main idea of a binary search tree comes from the notion of binary search on a sequence, that is, keep the sequence in sorted order and, when searching for a key, start in the middle and go left by half of the subsequence if the key you’re looking for is less than the one at the current position; go right by half of the subsequence if the key is greater than the one at the current position. Of course, if they are equal, then you’ve found what you’re looking for. Thus, binary search is just like looking up the definition of a word in a dictionary. The word is your key and the dictionary is sorted alphabetically. You can start in the middle and compare your word to those on the current page, then flip to the left or right depending on whether your word is lower or higher in the alphabet.

The binary search tree adapts the idea of binary search from a sequence to a tree. Each node in the tree stores a key and its value. The left subtree of the node contains keys that are less than the node’s key and the right subtree contains keys that are greater. Thus, when searching for a key, one can compare it to the current node and then go left or right depending on whether the key is less than or greater than the current node.

Consider the following diagram of a binary search tree. For simplicity, we will use numbers for both the keys and the values. In this diagram the key is listed before the colon and the value is after the colon. For example, this tree contains

  • key 10 associated with value 32,
  • key 13 associated with value 63,
  • etc.
Diagram of a Binary Search Tree

The following code builds this binary search tree using the Tree union type defined in the Binary Tree blog post and the Pair type from the Pair.pf file.

define mt = @EmptyTree<Pair<Nat,Nat>>
define BST_1 = TreeNode(mt, pair(1, 53), mt)
define BST_9 = TreeNode(mt, pair(9, 42), mt)
define BST_6 = TreeNode(BST_1, pair(6, 85), BST_9)
define BST_11 = TreeNode(mt, pair(11, 99), mt)
define BST_13 = TreeNode(BST_11, pair(13, 69), mt)
define BST_19 = TreeNode(mt, pair(19, 74), mt)
define BST_14 = TreeNode(BST_13, pair(14, 27), BST_19)
define BST_10 = TreeNode(BST_6, pair(10, 32), BST_14)

There are three operations in the binary search tree interface and here are their specifications.

  • The EmptyTree constructor from the Tree union type, which builds a binary search tree that does not contain any key-value associations.

  • BST_search : fn Tree<Pair<Nat,Nat>> -> (fn Nat -> Option<Nat>)

    The operation BST_search(T) returns a function that maps each key to its associated value.

  • BST_insert : fn Tree<Pair<Nat,Nat>>, Nat, Nat -> Tree<Pair<Nat,Nat>>

    The operation BST_insert(T, k, v) produces a new tree that associates value v with key k and, for all other keys, associates them with values according to tree T. In other words, BST_search(BST_insert(T, k, v)) = update(BST_search(T), k, just(v)).

Write the BST_search and BST_insert functions

The BST_search function is recursive over the Tree parameter. If the tree is empty, the result is none. Otherwise, we compare the key k with the key in the current node x. If they are equal, return the value in the current node; if k is less-than, recursively search the left subtree; if k is greater-than, recursively search the right subtree.
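Following that recipe, BST_search can be written as below (a sketch; it assumes the first and second operations on pairs from Pair.pf, which also appear in the proofs later in this post). The result is a function from a key to an optional value, expressed with a λ:

function BST_search(Tree<Pair<Nat,Nat>>) -> fn Nat -> Option<Nat> {
  BST_search(EmptyTree) = λk{ none }
  BST_search(TreeNode(L, x, R)) = λk{
    if k = first(x) then
      just(second(x))       // found the key; return its value
    else if k < first(x) then
      BST_search(L)(k)      // smaller keys are in the left subtree
    else
      BST_search(R)(k)      // larger keys are in the right subtree
  }
}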

The BST_insert function follows a similar control structure: recursive over the Tree parameter followed by an if-then-else based on the key k and the key of the current node. However, BST_insert returns a new tree that contains the specified key and value. When the key k is already in the tree, BST_insert overrides the current value with the new value, as implied by the specification above.

function BST_insert(Tree<Pair<Nat,Nat>>, Nat, Nat) -> Tree<Pair<Nat,Nat>> {
  BST_insert(EmptyTree, k, v) = TreeNode(EmptyTree, pair(k, v), EmptyTree)
  BST_insert(TreeNode(L, x, R), k, v) =
    if k = first(x) then
      TreeNode(L, pair(k, v), R)
    else if k < first(x) then
      TreeNode(BST_insert(L, k, v), x, R)
    else
      TreeNode(L, x, BST_insert(R, k, v))
}

Test

We test the correctness of the EmptyTree, BST_search, and BST_insert operations by making sure they behave according to their specification. Starting with EmptyTree, the result of BST_search with any key should be none.

assert BST_search(EmptyTree)(5) = none

After inserting key 10 with value 32, the result of BST_search on 10 should be 32. For other keys, such as 5, the result should be the same as for EmptyTree.

define BST_a = BST_insert(EmptyTree, 10, 32)
assert BST_search(BST_a)(10) = just(32)
assert BST_search(BST_a)(5) = none

The story is similar for inserting key 6 with value 85.

define BST_b = BST_insert(BST_a, 6, 85)
assert BST_search(BST_b)(6) = just(85)
assert BST_search(BST_b)(10) = just(32)
assert BST_search(BST_b)(5) = none

If we insert with the same key 6 but a different value 59, the result of BST_search for 6 should be the new value 59. For other keys, the result of BST_search remains the same.

define BST_c = BST_insert(BST_b, 6, 59)
assert BST_search(BST_c)(6) = just(59)
assert BST_search(BST_c)(10) = just(32)
assert BST_search(BST_c)(5) = none

Prove

Starting with EmptyTree, we prove that applying BST_search produces an empty map.

theorem BST_search_EmptyTree: 
  BST_search(EmptyTree) = λk{none}
proof
  extensionality
  arbitrary k:Nat
  conclude BST_search(EmptyTree)(k) = none
      by definition BST_search
end

The main correctness theorem is to show that BST_insert behaves the same as update. That is to say, applying BST_insert to a tree followed by BST_search is the same as first applying BST_search and then applying update.

theorem BST_search_insert_update: all T:Tree<Pair<Nat,Nat>>. all k:Nat, v:Nat.
  BST_search(BST_insert(T, k, v)) = update(BST_search(T), k, just(v))

The proof is by induction on the tree. For the case T = EmptyTree, we start by using extensionality to apply both sides of the equation to an arbitrary number i. We then expand BST_insert(EmptyTree, k, v).

    // <<BST_search_insert_empty_ext>> =
    arbitrary k:Nat, v:Nat
    extensionality
    arbitrary i:Nat
    suffices BST_search(TreeNode(EmptyTree, pair(k, v), EmptyTree))(i)
           = update(BST_search(EmptyTree), k, just(v))(i)   with definition BST_insert

Looking at the definition of BST_search, the left-hand side will either be none or just(v) depending on whether i is less-than, equal to, or greater than k. So we proceed with case analysis using the trichotomy theorem from Nat.pf.

    // <<BST_search_insert_empty_tri>> =
    cases trichotomy[i][k]
    case i_less_k: i < k {
      <<BST_search_insert_empty_less>>
    }
    case i_eq_k: i = k {
      <<BST_search_insert_empty_equal>>
    }
    case i_greater_k: k < i {
      <<BST_search_insert_empty_greater>>
    }

Indeed, when i is less than k, both the left-hand side and the right-hand side are equal to none.

    // <<BST_search_insert_empty_less>> =
    have not_i_eq_k: not (i = k)   by apply less_not_equal to i_less_k
    equations
        BST_search(TreeNode(EmptyTree, pair(k, v), EmptyTree))(i)
         = @none<Nat>
            by definition {BST_search, BST_search, first, second}
               and rewrite not_i_eq_k | i_less_k
     ... = update(BST_search(EmptyTree), k, just(v))(i)
            by definition {BST_search, update} and rewrite not_i_eq_k 

When i is equal to k, both sides are equal to just(v).

    // <<BST_search_insert_empty_equal>> =
    equations
        BST_search(TreeNode(EmptyTree, pair(k, v), EmptyTree))(i)
         = just(v)
            by definition {BST_search, first, second} and rewrite i_eq_k
     ... = update(BST_search(EmptyTree), k, just(v))(i)
            by definition {BST_search, update} and rewrite i_eq_k

When i is greater than k, both sides are equal to none.

    // <<BST_search_insert_empty_greater>> =
    have not_k_eq_i: not (k = i)  by apply less_not_equal to i_greater_k
    have not_i_eq_k: not (i = k)  by suppose ik apply not_k_eq_i to symmetric ik
    have not_i_less_k: not (i < k) 
        by apply less_implies_not_greater to i_greater_k
    equations
        BST_search(TreeNode(EmptyTree, pair(k, v), EmptyTree))(i)
         = @none<Nat>
            by definition {BST_search, BST_search, first, second}
               and rewrite not_i_eq_k | not_i_less_k
     ... = update(BST_search(EmptyTree), k, just(v))(i)
            by definition {BST_search, update}
               and rewrite not_i_eq_k 

Next we consider the case where T = TreeNode(L, x, R). Again we begin with extensionality.

    // <<BST_search_insert_node_ext>> =
    arbitrary k:Nat, v:Nat
    extensionality
    arbitrary i:Nat
    suffices BST_search(BST_insert(TreeNode(L, x, R), k, v))(i) 
           = update(BST_search(TreeNode(L, x, R)), k, just(v))(i)   by .

Looking at BST_insert(TreeNode(L, x, R), k, v), its result will depend on whether k is less than, equal to, or greater than first(x). So we proceed by cases, using the trichotomy theorem from Nat.pf.

    // <<BST_search_insert_node_tri>> =
    cases trichotomy[k][first(x)]
    case k_less_fx: k < first(x) {
      <<BST_search_insert_node_k_less_fx>>
    }
    case k_eq_fx: k = first(x) {
      <<BST_search_insert_node_k_equal_fx>>
    }
    case k_greater_fx: first(x) < k {
      <<BST_search_insert_node_k_greater_fx>>
    }

For the case k < first(x), we have

  BST_insert(TreeNode(L, x, R), k, v) 
= TreeNode(BST_insert(L, k, v), x, R)

so it suffices to prove the following.

    // <<BST_search_insert_node_k_less_fx_suffices>> =
    have not_k_eq_fx: not (k = first(x))   by apply less_not_equal to k_less_fx
    suffices BST_search(TreeNode(BST_insert(L, k, v), x, R))(i)
           = update(BST_search(TreeNode(L, x, R)), k, just(v))(i)
                with definition {BST_insert} and rewrite not_k_eq_fx | k_less_fx

The result of BST_search(TreeNode(BST_insert(L, k, v), x, R))(i) depends on the relationship between i and first(x), so we again proceed by cases using the trichotomy theorem. There sure are a lot of cases in this proof!

    // <<BST_search_insert_node_k_less_fx_tri>> =
    cases trichotomy[i][first(x)]
    case i_less_fx: i < first(x) {
      <<BST_search_insert_node_k_less_fx_i_less_fx>>
    }
    case i_eq_fx: i = first(x) {
      <<BST_search_insert_node_k_less_fx_i_eq_fx>>
    }
    case fx_less_i: first(x) < i {
      <<BST_search_insert_node_k_less_fx_i_greater_fx>>
    }

For the case i < first(x), we proceed with several steps of equational reasoning. The key step is applying the induction hypothesis for the left subtree L.

    // <<BST_search_insert_node_k_less_fx_i_less_fx>> =
    have not_i_eq_fx: not (i = first(x)) by apply less_not_equal to i_less_fx
    equations
          BST_search(TreeNode(BST_insert(L, k, v), x, R))(i) 
        = BST_search(BST_insert(L, k, v))(i)
            by definition{BST_search} and rewrite not_i_eq_fx | i_less_fx
    ... = update(BST_search(L), k, just(v))(i)
            by rewrite IH_L[k,v]
    ... = update(BST_search(TreeNode(L, x, R)), k, just(v))(i) by
            switch i = k {
              case true suppose ik_true {
                definition {BST_search,update} and rewrite ik_true
              }
              case false suppose ik_false {
                definition {BST_search,update}
                and rewrite ik_false | not_i_eq_fx | i_less_fx
              }
            }

For the case i = first(x), both sides simplify to just(second(x)).

    // <<BST_search_insert_node_k_less_fx_i_eq_fx>> =
    have not_fx_eq_k: not (first(x) = k)
      by suppose fx_eq_k
         conclude false by rewrite not_k_eq_fx in symmetric fx_eq_k 
    equations
          BST_search(TreeNode(BST_insert(L, k, v), x, R))(i) 
        = just(second(x))
            by definition {BST_search} and rewrite i_eq_fx
    ... = update(BST_search(TreeNode(L, x, R)), k, just(v))(i)
            by definition {BST_search,update} and rewrite i_eq_fx | not_fx_eq_k

For the case first(x) < i, both sides are equal to BST_search(R)(i), because we know that i ≠ k.

    // <<BST_search_insert_node_k_less_fx_i_greater_fx>> =
    have not_i_eq_fx: not (i = first(x))
      by suppose i_eq_fx
         apply (apply less_not_equal to fx_less_i) to symmetric i_eq_fx
    have not_i_less_fx: not (i < first(x))
      by apply less_implies_not_greater to fx_less_i
    have not_i_eq_k: not (i = k)
      by suppose i_eq_k
         have fx_less_k: first(x) < k   by rewrite i_eq_k in fx_less_i
         have not_k_less_fx: not (k < first(x)) 
             by apply less_implies_not_greater to fx_less_k
          conclude false by apply not_k_less_fx to k_less_fx
    equations
          BST_search(TreeNode(BST_insert(L, k, v), x, R))(i) 
        = BST_search(R)(i)
            by definition BST_search and rewrite not_i_eq_fx | not_i_less_fx
    ... = update(BST_search(TreeNode(L, x, R)), k, just(v))(i)
            by definition {BST_search, update}
               and rewrite not_i_eq_k | not_i_eq_fx | not_i_less_fx

This completes the proof of the case for k < first(x).

    <<BST_search_insert_node_k_less_fx_suffices>>
    <<BST_search_insert_node_k_less_fx_tri>>

Next consider the case for k = first(x). We have

  BST_insert(TreeNode(L, x, R), k, v) 
= TreeNode(L, pair(k, v), R)

so it suffices to prove the following

    // <<BST_search_insert_node_k_equal_fx_suffices>> =
    suffices BST_search(TreeNode(L, pair(k, v), R))(i) 
           = update(BST_search(TreeNode(L, x, R)), k, just(v))(i)
                by definition BST_insert and rewrite k_eq_fx

Looking at the definition of BST_search, the result of BST_search(TreeNode(L, pair(k, v), R))(i) will depend on the relationship between i and k. So we proceed by cases, using the trichotomy theorem.

    // <<BST_search_insert_node_k_equal_fx_tri>> =
    cases trichotomy[i][k]
    case i_less_k: i < k {
      <<BST_search_insert_node_k_equal_fx_i_less_k>>
    }
    case i_eq_k: i = k {
      <<BST_search_insert_node_k_equal_fx_i_eq_k>>
    }
    case k_less_i: k < i {
      <<BST_search_insert_node_k_equal_fx_i_greater_k>>
    }

When i < k, both sides of the equation are equal to BST_search(L)(i).

    // <<BST_search_insert_node_k_equal_fx_i_less_k>> =
    have not_i_eq_k: not (i = k)   by apply less_not_equal to i_less_k
    equations
          BST_search(TreeNode(L, pair(k, v), R))(i) 
        = BST_search(L)(i)
              by definition {BST_search, first}
                 and rewrite not_i_eq_k | i_less_k
    ... = update(BST_search(TreeNode(L, x, R)), k, just(v))(i)
              by definition {update,BST_search}
                 and rewrite symmetric k_eq_fx | not_i_eq_k | i_less_k

When i = k, both sides are equal to just(v).

    // <<BST_search_insert_node_k_equal_fx_i_eq_k>> =
    suffices BST_search(TreeNode(L, pair(k, v), R))(k)
            = update(BST_search(TreeNode(L, x, R)), k, just(v))(k)
            with rewrite i_eq_k
    equations
      BST_search(TreeNode(L, pair(k, v), R))(k)
        = just(v)          by definition {BST_search, first, second}
    ... = update(BST_search(TreeNode(L, x, R)), k, just(v))(k)
                           by definition {BST_search, update}

When i > k, both sides are equal to BST_search(R)(i).

    // <<BST_search_insert_node_k_equal_fx_i_greater_k>> =
    have not_i_eq_k: not (i = k) 
      by have nki: not (k = i) by apply less_not_equal to k_less_i
         suppose i_eq_k apply nki to symmetric i_eq_k
    have not_i_less_k: not (i < k) 
        by apply less_implies_not_greater to k_less_i
    equations
          BST_search(TreeNode(L, pair(k, v), R))(i) 
        = BST_search(R)(i)
            by definition {BST_search, first, second}
               and rewrite not_i_eq_k | not_i_less_k
    ... = update(BST_search(TreeNode(L, x, R)), k, just(v))(i)
            by definition {update, BST_search}
               and rewrite symmetric k_eq_fx | not_i_eq_k | not_i_less_k

This concludes the proof of the case for k = first(x).

    <<BST_search_insert_node_k_equal_fx_suffices>>
    <<BST_search_insert_node_k_equal_fx_tri>>

The last case to prove is for k > first(x). We leave this as an exercise.

The following puts together the pieces of the proof for BST_search_insert_udpate.

theorem BST_search_insert_udpate: all T:Tree<Pair<Nat,Nat>>. all k:Nat, v:Nat.
  BST_search(BST_insert(T, k, v)) = update(BST_search(T), k, just(v))
proof
  induction Tree<Pair<Nat,Nat>>
  case EmptyTree {
    <<BST_search_insert_empty_ext>>
    <<BST_search_insert_empty_tri>>
  }
  case TreeNode(L, x, R) suppose IH_L, IH_R {
    <<BST_search_insert_node_ext>>
    <<BST_search_insert_node_tri>>
  }
end

Saturday, July 20, 2024

Binary Trees with In-order Iterators (Part 2)

This is the sixth blog post in a series about developing correct implementations of basic data structures and algorithms using the Deduce language and proof checker.

This post continues where we left off from the previous post, in which we implemented binary trees and in-order tree iterators.

Our goal in this post is to prove that we correctly implemented the iterator operations:

ti2tree : < E > fn TreeIter<E> -> Tree<E>
ti_first : < E > fn Tree<E>,E,Tree<E> -> TreeIter<E>
ti_get : < E > fn TreeIter<E> -> E
ti_next : < E > fn TreeIter<E> -> TreeIter<E>
ti_index : < E > fn TreeIter<E> -> Nat
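The Deduce definitions of these operations appear in the previous post and are not repeated here. As a rough companion while reading the proofs below, the following is a hypothetical Python transliteration of the iterator: a tree is None or a triple (left, value, right); a direction ('L', y, B) plays the role of LeftD(y, B) and ('R', L, y) of RightD(L, y); an iterator TrItr(path, L, x, R) is the tuple (path, L, x, R), with the path listed innermost first.

```python
# Hypothetical Python model of the tree-iterator operations.
def num_nodes(t):
    return 0 if t is None else 1 + num_nodes(t[0]) + num_nodes(t[2])

def plug_tree(path, t):
    # Rebuild a tree by plugging t into the hole described by path.
    for d in path:
        t = (t, d[1], d[2]) if d[0] == 'L' else (d[1], d[2], t)
    return t

def take_path(path):
    # Keep only the RightD directions: the part of the tree before the hole.
    return [d for d in path if d[0] == 'R']

def first_path(A, y, B, path):
    # Walk to the leftmost node of TreeNode(A, y, B), extending path.
    if A is None:
        return (path, None, y, B)            # TrItr(path, EmptyTree, y, B)
    L, x, R = A
    return first_path(L, x, R, [('L', y, B)] + path)

def ti_first(A, x, B):
    return first_path(A, x, B, [])

def ti_get(it):
    return it[2]

def ti2tree(it):
    path, L, x, R = it
    return plug_tree(path, (L, x, R))

def ti_index(it):
    path, L, x, R = it
    return num_nodes(plug_tree(take_path(path), L))

def next_up(path, L, x, R):
    # Climb until we pass a LeftD direction; None is a sentinel for the end
    # of the traversal (a choice of this sketch, not of the Deduce code).
    if not path:
        return None
    d, rest = path[0], path[1:]
    if d[0] == 'L':
        return (rest, (L, x, R), d[1], d[2])
    return next_up(rest, d[1], d[2], (L, x, R))

def ti_next(it):
    path, L, x, R = it
    if R is None:
        return next_up(path, L, x, R)
    RL, y, RR = R
    return first_path(RL, y, RR, [('R', L, x)] + path)
```

Starting from ti_first and repeatedly applying ti_next visits the node values in in-order, and ti2tree recovers the original tree from any position.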

The first operation, ti2tree, requires us to first obtain a tree iterator, for example with ti_first, so ti2tree does not have a correctness criterion of its own; instead, the proof of its correctness will be part of the correctness of the other operations.

So we skip to the proof of correctness for ti_first.

Correctness of ti_first

Let us make explicit the specification of ti_first:

Specification: The ti_first(A, x, B) function returns an iterator pointing to the first node, with respect to in-order traversal, of the tree TreeNode(A, x, B).

Also, recall that we said the following about ti2tree and ti_first: creating an iterator from a tree using ti_first and then applying ti2tree produces the original tree.

So we have two properties to prove about ti_first. For the first property, we need a way to formalize "the first node with respect to in-order traversal". This is where the ti_index operation comes in. If ti_first returns the first node, then its index should be 0. (One might worry that if ti_index is incorrect, then this property would not force ti_first to be correct. Not to worry, we will prove that ti_index is correct!) So we have the following theorem:

theorem ti_first_index: all E:type, A:Tree<E>, x:E, B:Tree<E>.
  ti_index(ti_first(A, x, B)) = 0
proof
  arbitrary E:type, A:Tree<E>, x:E, B:Tree<E>
  definition ti_first
  ?
end

After expanding the definition of ti_first, we are left with the following goal. So we need to prove a lemma about the first_path auxiliary function.

    ti_index(first_path(A,x,B,empty)) = 0

Here is a first attempt to formulate the lemma.

lemma first_path_index: all E:type. all A:Tree<E>. all y:E, B:Tree<E>.
  ti_index(first_path(A,y,B, empty)) = 0

However, because first_path is recursive, we will need to prove this by induction on A. But looking at the second clause in the definition of first_path, the path argument grows, so our induction hypothesis, which requires the path argument to be empty, will not be applicable. As is often the case, we need to generalize the lemma. Let’s replace empty with an arbitrary path as follows.

lemma first_path_index: all E:type. all A:Tree<E>. all y:E, B:Tree<E>, path:List<Direction<E>>.
  ti_index(first_path(A,y,B, path)) = 0

But now this lemma is false. Consider the following situation in which the current node y is 5 and the path is L,R (going from node 5 up to node 3).

Diagram for lemma first path index

The index of node 5 is not 0, it is 5! In general, the index of a node is equal to the number of nodes that come before it according to in-order traversal. We can obtain that portion of the tree using functions that we have already defined, in particular take_path followed by plug_tree. So we can formulate the lemma as follows.

lemma first_path_index: all E:type. all A:Tree<E>. all y:E, B:Tree<E>, path:List<Direction<E>>.
  ti_index(first_path(A,y,B, path)) = num_nodes(plug_tree(take_path(path), EmptyTree))
proof
  arbitrary E:type
  induction Tree<E>
  case EmptyTree {
    arbitrary y:E, B:Tree<E>, path:List<Direction<E>>
    ?
  }
  case TreeNode(L, x, R) suppose IH {
    arbitrary y:E, B:Tree<E>, path:List<Direction<E>>
    ?
  }
end
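While the Deduce proof is in progress, we can also test the generalized lemma empirically. This Python sketch (a hypothetical model of the Deduce definitions) checks the equation for every decomposition of a sample tree into a focused node and a path.

```python
# Hypothetical Python model for testing first_path_index: trees are None or
# (left, value, right); directions are ('L', y, B) and ('R', L, y).
def num_nodes(t):
    return 0 if t is None else 1 + num_nodes(t[0]) + num_nodes(t[2])

def plug_tree(path, t):
    for d in path:
        t = (t, d[1], d[2]) if d[0] == 'L' else (d[1], d[2], t)
    return t

def take_path(path):
    return [d for d in path if d[0] == 'R']

def first_path(A, y, B, path):
    if A is None:
        return (path, None, y, B)
    L, x, R = A
    return first_path(L, x, R, [('L', y, B)] + path)

def ti_index(it):
    path, L, x, R = it
    return num_nodes(plug_tree(take_path(path), L))

def decomps(t, path=()):
    # Every decomposition of t into a focused TreeNode(A, y, B) and a path.
    if t is None:
        return []
    L, x, R = t
    path = list(path)
    return ([(path, L, x, R)]
            + decomps(L, [('L', x, R)] + path)
            + decomps(R, [('R', L, x)] + path))

def check_first_path_index(t):
    return all(ti_index(first_path(A, y, B, path))
               == num_nodes(plug_tree(take_path(path), None))
               for (path, A, y, B) in decomps(t))
```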

For the case A = EmptyTree, the goal simply follows from the definitions of first_path, ti_index, and ti_take.

    conclude ti_index(first_path(EmptyTree,y,B,path))
           = num_nodes(plug_tree(take_path(path),EmptyTree))
                by definition {first_path, ti_index, ti_take}.

For the case A = TreeNode(L, x, R), after expanding the definition of first_path, we need to prove:

  ti_index(first_path(L,x,R,node(LeftD(y,B),path)))
= num_nodes(plug_tree(take_path(path),EmptyTree))

But that follows from the induction hypothesis and the definition of take_path.

    definition {first_path}
    equations
          ti_index(first_path(L,x,R,node(LeftD(y,B),path)))
        = num_nodes(plug_tree(take_path(node(LeftD(y,B),path)),EmptyTree))
                by IH[x, R, node(LeftD(y,B), path)]
    ... = num_nodes(plug_tree(take_path(path),EmptyTree))
                by definition take_path.

Here is the completed proof of the first_path_index lemma.

lemma first_path_index: all E:type. all A:Tree<E>. all y:E, B:Tree<E>, path:List<Direction<E>>.
  ti_index(first_path(A,y,B, path)) = num_nodes(plug_tree(take_path(path), EmptyTree))
proof
  arbitrary E:type
  induction Tree<E>
  case EmptyTree {
    arbitrary y:E, B:Tree<E>, path:List<Direction<E>>
    conclude ti_index(first_path(EmptyTree,y,B,path))
           = num_nodes(plug_tree(take_path(path),EmptyTree))
                by definition {first_path, ti_index, ti_take}.
  }
  case TreeNode(L, x, R) suppose IH {
    arbitrary y:E, B:Tree<E>, path:List<Direction<E>>
    definition {first_path}
    equations
          ti_index(first_path(L,x,R,node(LeftD(y,B),path)))
        = num_nodes(plug_tree(take_path(node(LeftD(y,B),path)),EmptyTree))
                by IH[x, R, node(LeftD(y,B), path)]
    ... = num_nodes(plug_tree(take_path(path),EmptyTree))
                by definition take_path.
  }
end

Returning to the proof of ti_first_index, we need to prove that ti_index(first_path(A,x,B,empty)) = 0. So we apply the first_path_index lemma and then the definitions of take_path, plug_tree, and num_nodes. Here is the completed proof of ti_first_index.

theorem ti_first_index: all E:type, A:Tree<E>, x:E, B:Tree<E>.
  ti_index(ti_first(A, x, B)) = 0
proof
  arbitrary E:type, A:Tree<E>, x:E, B:Tree<E>
  definition ti_first
  equations  ti_index(first_path(A,x,B,empty))
           = num_nodes(plug_tree(take_path(empty),EmptyTree))
                       by first_path_index[E][A][x,B,empty]
       ... = 0      by definition {take_path, plug_tree, num_nodes}.
end

Our next task is to prove that creating an iterator from a tree using ti_first and then applying ti2tree produces the original tree.

theorem ti_first_stable: all E:type, A:Tree<E>, x:E, B:Tree<E>.
  ti2tree(ti_first(A, x, B)) = TreeNode(A, x, B)
proof
  arbitrary E:type, A:Tree<E>, x:E, B:Tree<E>
  definition ti_first
  ?
end

After expanding the definition of ti_first, we are left to prove that

ti2tree(first_path(A,x,B,empty)) = TreeNode(A,x,B)

So we need to prove another lemma about first_path and again we need to generalize the empty path to an arbitrary path. Let us consider again the situation where the current node x is 5.

Diagram for lemma first path index

The result of first_path(A,x,B,path) will be the path to node 4, and the result of ti2tree will be the whole tree, not just TreeNode(A,x,B) as in the above equation. However, we can construct the whole tree from the path and TreeNode(A,x,B) using the plug_tree function. So we have the following lemma to prove.

lemma first_path_stable:
  all E:type. all A:Tree<E>. all y:E, B:Tree<E>, path:List<Direction<E>>.
  ti2tree(first_path(A, y, B, path)) = plug_tree(path, TreeNode(A, y, B))
proof
  arbitrary E:type
  induction Tree<E>
  case EmptyTree {
    arbitrary y:E, B:Tree<E>, path:List<Direction<E>>
    ?
  }
  case TreeNode(L, x, R) suppose IH_L, IH_R {
    arbitrary y:E, B:Tree<E>, path:List<Direction<E>>
    ?
  }
end
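As with the previous lemma, we can test first_path_stable empirically before finishing the proof, using a hypothetical Python model of the definitions and checking every decomposition of a sample tree.

```python
# Hypothetical Python model for testing first_path_stable.
def plug_tree(path, t):
    for d in path:
        t = (t, d[1], d[2]) if d[0] == 'L' else (d[1], d[2], t)
    return t

def first_path(A, y, B, path):
    if A is None:
        return (path, None, y, B)
    L, x, R = A
    return first_path(L, x, R, [('L', y, B)] + path)

def ti2tree(it):
    path, L, x, R = it
    return plug_tree(path, (L, x, R))

def decomps(t, path=()):
    # Every decomposition of t into a focused TreeNode(A, y, B) and a path.
    if t is None:
        return []
    L, x, R = t
    path = list(path)
    return ([(path, L, x, R)]
            + decomps(L, [('L', x, R)] + path)
            + decomps(R, [('R', L, x)] + path))

def check_first_path_stable(t):
    return all(ti2tree(first_path(A, y, B, path)) == plug_tree(path, (A, y, B))
               for (path, A, y, B) in decomps(t))
```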

In the case A = EmptyTree, we prove the equation using the definitions of first_path and ti2tree.

    equations  ti2tree(first_path(EmptyTree,y,B,path))
             = ti2tree(TrItr(path,EmptyTree,y,B))       by definition first_path.
         ... = plug_tree(path,TreeNode(EmptyTree,y,B))  by definition ti2tree.

In the case A = TreeNode(L, x, R), we need to prove that

  ti2tree(first_path(TreeNode(L,x,R),y,B,path))
= plug_tree(path,TreeNode(TreeNode(L,x,R),y,B))

We probably need to expand the definition of first_path, but doing so in your head is hard. So we can instead ask Deduce to do it. We start by constructing an equation with a bogus right-hand side and apply the definition of first_path.

    equations
          ti2tree(first_path(TreeNode(L,x,R),y,B,path))
        = EmptyTree
             by definition first_path ?
    ... = plug_tree(path,TreeNode(TreeNode(L,x,R),y,B))
             by ?

Deduce responds with

incomplete proof
Goal:
    ti2tree(first_path(L,x,R,node(LeftD(y,B),path))) = EmptyTree

in which the left-hand side has expanded the definition of first_path. So we cut and paste that into our proof and move on to the next step.

    equations
          ti2tree(first_path(TreeNode(L,x,R),y,B,path))
        = ti2tree(first_path(L,x,R,node(LeftD(y,B),path)))
             by definition first_path.
    ... = plug_tree(path,TreeNode(TreeNode(L,x,R),y,B))
             by ?

We now have something that matches the induction hypothesis, so we instantiate it and ask Deduce to tell us the new right-hand side.

    equations
          ti2tree(first_path(TreeNode(L,x,R),y,B,path))
        = ti2tree(first_path(L,x,R,node(LeftD(y,B),path)))
             by definition first_path.
    ... = EmptyTree
             by IH_L[x,R,node(LeftD(y,B),path)]
    ... = plug_tree(path,TreeNode(TreeNode(L,x,R),y,B))
             by ?

Deduce responds with

expected
ti2tree(first_path(L,x,R,node(LeftD(y,B),path))) = EmptyTree
but only have
ti2tree(first_path(L,x,R,node(LeftD(y,B),path))) = plug_tree(node(LeftD(y,B),path),TreeNode(L,x,R))

So we cut and paste the right-hand side of the induction hypothesis to replace EmptyTree.

    equations
          ti2tree(first_path(TreeNode(L,x,R),y,B,path))
        = ti2tree(first_path(L,x,R,node(LeftD(y,B),path)))
             by definition first_path.
    ... = plug_tree(node(LeftD(y,B),path),TreeNode(L,x,R))
             by IH_L[x,R,node(LeftD(y,B),path)]
    ... = plug_tree(path,TreeNode(TreeNode(L,x,R),y,B))
             by ?

The final step of the proof is easy; we just apply the definition of plug_tree. Here is the completed proof of first_path_stable.

lemma first_path_stable:
  all E:type. all A:Tree<E>. all y:E, B:Tree<E>, path:List<Direction<E>>.
  ti2tree(first_path(A, y, B, path)) = plug_tree(path, TreeNode(A, y, B))
proof
  arbitrary E:type
  induction Tree<E>
  case EmptyTree {
    arbitrary y:E, B:Tree<E>, path:List<Direction<E>>
    equations  ti2tree(first_path(EmptyTree,y,B,path))
             = ti2tree(TrItr(path,EmptyTree,y,B))       by definition first_path.
         ... = plug_tree(path,TreeNode(EmptyTree,y,B))  by definition ti2tree.
  }
  case TreeNode(L, x, R) suppose IH_L, IH_R {
    arbitrary y:E, B:Tree<E>, path:List<Direction<E>>
    equations
          ti2tree(first_path(TreeNode(L,x,R),y,B,path))
        = ti2tree(first_path(L,x,R,node(LeftD(y,B),path)))
             by definition first_path.
    ... = plug_tree(node(LeftD(y,B),path),TreeNode(L,x,R))
             by IH_L[x,R,node(LeftD(y,B),path)]
    ... = plug_tree(path,TreeNode(TreeNode(L,x,R),y,B))
             by definition plug_tree.
  }
end

Returning to the ti_first_stable theorem, the equation follows from our first_path_stable lemma and the definition of plug_tree.

theorem ti_first_stable: all E:type, A:Tree<E>, x:E, B:Tree<E>.
  ti2tree(ti_first(A, x, B)) = TreeNode(A, x, B)
proof
  arbitrary E:type, A:Tree<E>, x:E, B:Tree<E>
  definition ti_first
  equations  ti2tree(first_path(A,x,B,empty))
           = plug_tree(empty,TreeNode(A,x,B))  by first_path_stable[E][A][x,B,empty]
       ... = TreeNode(A,x,B)                   by definition plug_tree.
end

Correctness of ti_next

We start by writing down a more careful specification of ti_next.

Specification: The ti_next(iter) operation returns an iterator whose position is one more than the position of iter with respect to in-order traversal, assuming iter is not at the end of the in-order traversal.
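Before formalizing this, the specification can be spot-checked with a Python model of the iterator (a hypothetical transliteration of the Deduce definitions): advancing the iterator should increase its index by one whenever it is not at the end.

```python
# Hypothetical Python model: trees are None or (left, value, right);
# directions are ('L', y, B) and ('R', L, y); an iterator is (path, L, x, R).
def num_nodes(t):
    return 0 if t is None else 1 + num_nodes(t[0]) + num_nodes(t[2])

def plug_tree(path, t):
    for d in path:
        t = (t, d[1], d[2]) if d[0] == 'L' else (d[1], d[2], t)
    return t

def take_path(path):
    return [d for d in path if d[0] == 'R']

def first_path(A, y, B, path):
    if A is None:
        return (path, None, y, B)
    L, x, R = A
    return first_path(L, x, R, [('L', y, B)] + path)

def ti_first(A, x, B):
    return first_path(A, x, B, [])

def ti_index(it):
    path, L, x, R = it
    return num_nodes(plug_tree(take_path(path), L))

def next_up(path, L, x, R):
    if not path:
        return None                          # end of traversal (sentinel)
    d, rest = path[0], path[1:]
    if d[0] == 'L':
        return (rest, (L, x, R), d[1], d[2])
    return next_up(rest, d[1], d[2], (L, x, R))

def ti_next(it):
    path, L, x, R = it
    if R is None:
        return next_up(path, L, x, R)
    RL, y, RR = R
    return first_path(RL, y, RR, [('R', L, x)] + path)

def check_ti_next_index(t):
    # ti_next increments the index whenever the iterator is not at the end.
    it, n = ti_first(t[0], t[1], t[2]), num_nodes(t)
    while it is not None:
        if ti_index(it) + 1 < n and ti_index(ti_next(it)) != ti_index(it) + 1:
            return False
        it = ti_next(it)
    return True
```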

To make this specification formal, we can again use ti_index to talk about the position of the iterator. So we begin to prove the following theorem ti_next_index, taking the usual initial steps in the proof as guided by the formula to be proved and the definition of ti_next, which performs a switch on the right child R of the current node.

theorem ti_next_index: all E:type, iter : TreeIter<E>.
  if suc(ti_index(iter)) < num_nodes(ti2tree(iter))
  then ti_index(ti_next(iter)) = suc(ti_index(iter))
proof
  arbitrary E:type, iter : TreeIter<E>
  suppose prem: suc(ti_index(iter)) < num_nodes(ti2tree(iter))
  switch iter {
    case TrItr(path, L, x, R) suppose iter_eq {
      definition ti_next
      switch R {
        case EmptyTree suppose R_eq {
          ?
        }
        case TreeNode(RL, y, RR) suppose R_eq {
          ?
        }
      }
    }
  }
end

In the case R = EmptyTree, ti_next calls the auxiliary function next_up, so we need to prove:

ti_index(next_up(path,L,x,EmptyTree)) = suc(ti_index(TrItr(path,L,x,EmptyTree)))

As usual, we must create a lemma that generalizes this equation.

Proving the next_up_index lemma

Looking at the definition of next_up, we see that the recursive call grows the fourth argument, so we must replace the EmptyTree in the needed equation with an arbitrary tree R:

ti_index(next_up(path,L,x,R)) = suc(ti_index(TrItr(path,L,x,R)))

But this equation is not true in general. Consider the situation below where the current node x is node 1 in our example tree. The index of the next_up from node 1 is 3, but the index of node 1 is 1 and of course, adding one to that is 2, not 3!

Diagram for path to node 1

So we need to change this equation to account for the situation where R is not empty, but instead an arbitrary subtree. The solution is to add the number of nodes in R to the right-hand side:

ti_index(next_up(path,L,x,R)) = suc(ti_index(TrItr(path,L,x,R))) + num_nodes(R)

One more addition is necessary to formulate the lemma. The above equation is only meaningful when the index on the right-hand side is in bounds. That is, it must be smaller than the number of nodes in the tree. So we formulate the lemma next_up_index as follows and take a few obvious steps into the proof.

lemma next_up_index: all E:type. all path:List<Direction<E>>. all A:Tree<E>, x:E, B:Tree<E>.
  if suc(ti_index(TrItr(path, A, x, B)) + num_nodes(B)) < num_nodes(ti2tree(TrItr(path, A, x, B)))
  then ti_index(next_up(path, A, x, B)) = suc(ti_index(TrItr(path, A,x,B)) + num_nodes(B))
proof
  arbitrary E:type
  induction List<Direction<E>>
  case empty {
    arbitrary A:Tree<E>, x:E, B:Tree<E>
    suppose prem: suc(ti_index(TrItr(empty,A,x,B)) + num_nodes(B)) 
                  < num_nodes(ti2tree(TrItr(empty,A,x,B)))
    ?
  }
  case node(f, path') suppose IH {
    arbitrary A:Tree<E>, x:E, B:Tree<E>
    suppose prem
    switch f {
      case LeftD(y, R) {
        ?
      }
      case RightD(L, y) suppose f_eq {
        ?
      }
    }
  }
end
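As with the earlier lemmas, the statement of next_up_index can be tested empirically before we finish the proof. The following Python sketch (again a hypothetical model of the definitions) checks the equation for every decomposition of a sample tree that satisfies the premise.

```python
# Hypothetical Python model for testing next_up_index.
def num_nodes(t):
    return 0 if t is None else 1 + num_nodes(t[0]) + num_nodes(t[2])

def plug_tree(path, t):
    for d in path:
        t = (t, d[1], d[2]) if d[0] == 'L' else (d[1], d[2], t)
    return t

def take_path(path):
    return [d for d in path if d[0] == 'R']

def ti_index(it):
    path, L, x, R = it
    return num_nodes(plug_tree(take_path(path), L))

def next_up(path, L, x, R):
    if not path:
        return None
    d, rest = path[0], path[1:]
    if d[0] == 'L':
        return (rest, (L, x, R), d[1], d[2])
    return next_up(rest, d[1], d[2], (L, x, R))

def decomps(t, path=()):
    # Every decomposition of t into a focused TreeNode(A, x, B) and a path.
    if t is None:
        return []
    L, x, R = t
    path = list(path)
    return ([(path, L, x, R)]
            + decomps(L, [('L', x, R)] + path)
            + decomps(R, [('R', L, x)] + path))

def check_next_up_index(t):
    n = num_nodes(t)
    for (path, A, x, B) in decomps(t):
        it = (path, A, x, B)
        if 1 + ti_index(it) + num_nodes(B) < n:          # the premise
            if ti_index(next_up(path, A, x, B)) != 1 + ti_index(it) + num_nodes(B):
                return False
    return True
```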

In the case path = empty, the premise is false because there are no nodes that come afterwards in the in-order traversal. In particular, the premise implies the following contradictory inequality.

    have AB_l_AB: suc(num_nodes(A) + num_nodes(B)) < suc(num_nodes(A) + num_nodes(B))
      by definition {ti_index, ti_take, take_path, plug_tree, ti2tree, num_nodes} 
         in prem
    conclude false  by apply less_irreflexive to AB_l_AB

Next consider the case path = node(LeftD(y, R), path'). After expanding all the relevant definitions, we need to prove that

  num_nodes(plug_tree(take_path(path'), TreeNode(A,x,B))) 
= suc(num_nodes(plug_tree(take_path(path'), A)) + num_nodes(B))

We need a lemma that relates num_nodes and plug_tree. So we pause the current proof for the following exercise.

Exercise: prove the num_nodes_plug lemma

lemma num_nodes_plug: all E:type. all path:List<Direction<E>>. all t:Tree<E>.
  num_nodes(plug_tree(path, t)) = num_nodes(plug_tree(path, EmptyTree)) + num_nodes(t)
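Before attempting the proof, the statement can be sanity-checked in a Python model of plug_tree and num_nodes (hypothetical names, mirroring the Deduce definitions).

```python
# Hypothetical Python model for sanity-checking num_nodes_plug.
def num_nodes(t):
    return 0 if t is None else 1 + num_nodes(t[0]) + num_nodes(t[2])

def plug_tree(path, t):
    # Directions are ('L', y, B) for LeftD(y, B) and ('R', L, y) for RightD.
    for d in path:
        t = (t, d[1], d[2]) if d[0] == 'L' else (d[1], d[2], t)
    return t

def check_num_nodes_plug(path, t):
    return (num_nodes(plug_tree(path, t))
            == num_nodes(plug_tree(path, None)) + num_nodes(t))
```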

Back to the next_up_index lemma

We use num_nodes_plug on both the left and right-hand sides of the equation, and apply the definition of num_nodes.

    rewrite num_nodes_plug[E][take_path(path')][TreeNode(A,x,B)]
    rewrite num_nodes_plug[E][take_path(path')][A]
    definition num_nodes

After that it suffices to prove the following.

  num_nodes(plug_tree(take_path(path'),EmptyTree)) + suc(num_nodes(A) + num_nodes(B)) 
= suc((num_nodes(plug_tree(take_path(path'),EmptyTree)) + num_nodes(A)) + num_nodes(B))

This equation is rather big, so let’s squint at it by giving names to its parts. (This is a new version of define that I’m experimenting with.)

    define_ X = num_nodes(plug_tree(take_path(path'),EmptyTree))
    define_ Y = num_nodes(A)
    define_ Z = num_nodes(B)

Now it’s easy to see that our goal is true using some simple arithmetic.

    conclude X + suc(Y + Z) = suc((X + Y) + Z)
        by rewrite add_suc[X][Y+Z] | add_assoc[X][Y,Z].

Finally, consider the case path = node(RightD(L, y), path'). After expanding the definition of next_up, we need to prove

  ti_index(next_up(path',L,y,TreeNode(A,x,B))) 
= suc(ti_index(TrItr(node(RightD(L,y),path'),A,x,B)) + num_nodes(B))

The left-hand side matches the induction hypothesis, so we have

    equations
      ti_index(next_up(path',L,y,TreeNode(A,x,B))) 
        = suc(ti_index(TrItr(path',L,y,TreeNode(A,x,B))) + num_nodes(TreeNode(A,x,B)))
            by apply IH[L,y,TreeNode(A,x,B)] 
               to definition {ti_index, ti_take, num_nodes, ti2tree} ?
    ... = suc(ti_index(TrItr(node(RightD(L,y),path'),A,x,B)) + num_nodes(B))
            by ?

But we need to prove the premise of the induction hypothesis. We can do that as follows, with many uses of num_nodes_plug and some arithmetic that we package up into lemma XYZW_equal.

    have IH_prem: suc(num_nodes(plug_tree(take_path(path'),L)) 
                      + suc(num_nodes(A) + num_nodes(B))) 
                  < num_nodes(plug_tree(path',TreeNode(L,y,TreeNode(A,x,B))))
      by rewrite num_nodes_plug[E][take_path(path')][L]
          | num_nodes_plug[E][path'][TreeNode(L,y,TreeNode(A,x,B))]
         definition {num_nodes, num_nodes}
         define_ X = num_nodes(plug_tree(take_path(path'),EmptyTree))
         define_ Y = num_nodes(L) define_ Z = num_nodes(A) define_ W = num_nodes(B)
         define_ P = num_nodes(plug_tree(path',EmptyTree))
         suffices suc((X + Y) + suc(Z + W)) < P + suc(Y + suc(Z + W))
         have prem2: suc((X + suc(Y + Z)) + W) < P + suc(Y + suc(Z + W))
           by enable {X,Y,Z,W,P}
              definition {num_nodes, num_nodes} in
              rewrite num_nodes_plug[E][take_path(path')][TreeNode(L,y,A)]
                    | num_nodes_plug[E][path'][TreeNode(L,y,TreeNode(A,x,B))] in
              definition {ti_index, ti_take, take_path, ti2tree, plug_tree} in
              rewrite f_eq in prem
         rewrite XYZW_equal[X,Y,Z,W]
         prem2

Here is the proof of XYZW_equal.

lemma XYZW_equal: all X:Nat, Y:Nat, Z:Nat, W:Nat.
  suc((X + Y) + suc(Z + W)) = suc((X + suc(Y + Z)) + W)
proof
  arbitrary X:Nat, Y:Nat, Z:Nat, W:Nat
  enable {operator+}
  equations
        suc((X + Y) + suc(Z + W))
      = suc(suc(X + Y) + (Z + W))      by rewrite add_suc[X+Y][Z+W].
  ... = suc(suc(((X + Y) + Z) + W))    by rewrite add_assoc[X+Y][Z,W].
  ... = suc(suc((X + (Y + Z)) + W))    by rewrite add_assoc[X][Y,Z].
  ... = suc((X + suc(Y + Z)) + W)      by rewrite add_suc[X][Y+Z].
end
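Both sides of XYZW_equal compute X + Y + Z + W + 2, so the lemma is easy to sanity-check by brute force over small naturals (a check, not a proof).

```python
# Brute-force sanity check of XYZW_equal, writing suc(n) as n + 1.
def xyzw_lhs(x, y, z, w):
    return ((x + y) + ((z + w) + 1)) + 1     # suc((X + Y) + suc(Z + W))

def xyzw_rhs(x, y, z, w):
    return ((x + ((y + z) + 1)) + w) + 1     # suc((X + suc(Y + Z)) + W)
```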

Getting back to the equational proof, it remains to prove that

  suc(ti_index(TrItr(path',L,y,TreeNode(A,x,B))) + num_nodes(TreeNode(A,x,B)))
= suc(ti_index(TrItr(node(RightD(L,y),path'),A,x,B)) + num_nodes(B))

which we can do with yet more uses of num_nodes_plug and XYZW_equal.

    ... = suc(num_nodes(plug_tree(take_path(path'),L)) + suc(num_nodes(A) + num_nodes(B)))
          by definition {ti_index, ti_take, num_nodes}.
    ... = suc((num_nodes(plug_tree(take_path(path'),EmptyTree)) + num_nodes(L))
              + suc(num_nodes(A) + num_nodes(B)))
          by rewrite num_nodes_plug[E][take_path(path')][L].
    ... = suc((num_nodes(plug_tree(take_path(path'),EmptyTree)) 
              + suc(num_nodes(L) + num_nodes(A))) + num_nodes(B))
          by define_ X = num_nodes(plug_tree(take_path(path'),EmptyTree))
             define_ Y = num_nodes(L) define_ Z = num_nodes(A) define_ W = num_nodes(B)
             define_ P = num_nodes(plug_tree(path',EmptyTree))
             conclude suc((X + Y) + suc(Z + W)) = suc((X + suc(Y + Z)) + W)
                 by XYZW_equal[X,Y,Z,W]
    ... = suc(num_nodes(plug_tree(take_path(path'),TreeNode(L,y,A))) + num_nodes(B))
          by rewrite num_nodes_plug[E][take_path(path')][TreeNode(L,y,A)]
             definition {num_nodes, num_nodes}.
    ... = suc(ti_index(TrItr(node(RightD(L,y),path'),A,x,B)) + num_nodes(B))
          by definition {ti_index, ti_take, take_path, plug_tree}.

That completes the last case of the proof of next_up_index. Here’s the completed proof.

lemma next_up_index: all E:type. all path:List<Direction<E>>. all A:Tree<E>, x:E, B:Tree<E>.
  if suc(ti_index(TrItr(path, A, x, B)) + num_nodes(B)) < num_nodes(ti2tree(TrItr(path, A, x, B)))
  then ti_index(next_up(path, A, x, B)) = suc(ti_index(TrItr(path, A,x,B)) + num_nodes(B))
proof
  arbitrary E:type
  induction List<Direction<E>>
  case empty {
    arbitrary A:Tree<E>, x:E, B:Tree<E>
    suppose prem: suc(ti_index(TrItr(empty,A,x,B)) + num_nodes(B)) 
                  < num_nodes(ti2tree(TrItr(empty,A,x,B)))
    have AB_l_AB: suc(num_nodes(A) + num_nodes(B)) < suc(num_nodes(A) + num_nodes(B))
      by definition {ti_index, ti_take, take_path, plug_tree, ti2tree, num_nodes} 
         in prem
    conclude false  by apply less_irreflexive to AB_l_AB
  }
  case node(f, path') suppose IH {
    arbitrary A:Tree<E>, x:E, B:Tree<E>
    suppose prem
    switch f {
      case LeftD(y, R) {
        definition {next_up, ti_index, ti_take, take_path}
        rewrite num_nodes_plug[E][take_path(path')][TreeNode(A,x,B)]
        rewrite num_nodes_plug[E][take_path(path')][A]
        definition num_nodes
        define_ X = num_nodes(plug_tree(take_path(path'),EmptyTree))
        define_ Y = num_nodes(A)
        define_ Z = num_nodes(B)
        conclude X + suc(Y + Z) = suc((X + Y) + Z)
            by rewrite add_suc[X][Y+Z] | add_assoc[X][Y,Z].
      }
      case RightD(L, y) suppose f_eq {
        definition {next_up}
        have IH_prem: suc(num_nodes(plug_tree(take_path(path'),L)) 
                          + suc(num_nodes(A) + num_nodes(B))) 
                      < num_nodes(plug_tree(path',TreeNode(L,y,TreeNode(A,x,B))))
          by rewrite num_nodes_plug[E][take_path(path')][L]
              | num_nodes_plug[E][path'][TreeNode(L,y,TreeNode(A,x,B))]
             definition {num_nodes, num_nodes}
             define_ X = num_nodes(plug_tree(take_path(path'),EmptyTree))
             define_ Y = num_nodes(L) define_ Z = num_nodes(A) define_ W = num_nodes(B)
             define_ P = num_nodes(plug_tree(path',EmptyTree))
             suffices suc((X + Y) + suc(Z + W)) < P + suc(Y + suc(Z + W))
             have prem2: suc((X + suc(Y + Z)) + W) < P + suc(Y + suc(Z + W))
               by enable {X,Y,Z,W,P}
                  definition {num_nodes, num_nodes} in
                  rewrite num_nodes_plug[E][take_path(path')][TreeNode(L,y,A)]
                        | num_nodes_plug[E][path'][TreeNode(L,y,TreeNode(A,x,B))] in
                  definition {ti_index, ti_take, take_path, ti2tree, plug_tree} in
                  rewrite f_eq in prem
             rewrite XYZW_equal[X,Y,Z,W]
             prem2
        equations
              ti_index(next_up(path',L,y,TreeNode(A,x,B))) 
            = suc(ti_index(TrItr(path',L,y,TreeNode(A,x,B))) + num_nodes(TreeNode(A,x,B)))
                by apply IH[L,y,TreeNode(A,x,B)] 
                   to definition {ti_index, ti_take, num_nodes, ti2tree} IH_prem
        ... = suc(num_nodes(plug_tree(take_path(path'),L)) + suc(num_nodes(A) + num_nodes(B)))
              by definition {ti_index, ti_take, num_nodes}.
        ... = suc((num_nodes(plug_tree(take_path(path'),EmptyTree)) + num_nodes(L))
                  + suc(num_nodes(A) + num_nodes(B)))
              by rewrite num_nodes_plug[E][take_path(path')][L].
        ... = suc((num_nodes(plug_tree(take_path(path'),EmptyTree)) 
                  + suc(num_nodes(L) + num_nodes(A))) + num_nodes(B))
              by define_ X = num_nodes(plug_tree(take_path(path'),EmptyTree))
                 define_ Y = num_nodes(L) define_ Z = num_nodes(A) define_ W = num_nodes(B)
                 define_ P = num_nodes(plug_tree(path',EmptyTree))
                 conclude suc((X + Y) + suc(Z + W)) = suc((X + suc(Y + Z)) + W)
                     by XYZW_equal[X,Y,Z,W]
        ... = suc(num_nodes(plug_tree(take_path(path'),TreeNode(L,y,A))) + num_nodes(B))
              by rewrite num_nodes_plug[E][take_path(path')][TreeNode(L,y,A)]
                 definition {num_nodes, num_nodes}.
        ... = suc(ti_index(TrItr(node(RightD(L,y),path'),A,x,B)) + num_nodes(B))
              by definition {ti_index, ti_take, take_path, plug_tree}.
      }
    }
  }
end

Back to the proof of ti_next_index

With the next_up_index lemma complete, we can get back to proving the ti_next_index theorem. Recall that we were in the case R = EmptyTree and needed to prove the following.

ti_index(next_up(path,L,x,EmptyTree)) = suc(ti_index(TrItr(path,L,x,EmptyTree)))

To use the next_up_index lemma, we need to prove its premise:

    have next_up_index_prem:
        suc(ti_index(TrItr(path,L,x,EmptyTree)) + num_nodes(EmptyTree))
        < num_nodes(ti2tree(TrItr(path,L,x,EmptyTree)))
      by enable num_nodes
         rewrite add_zero[ti_index(TrItr(path,L,x,EmptyTree))]
         rewrite iter_eq | R_eq in prem

We can finish the proof of the equation using the definition of num_nodes and the add_zero property.

    equations
          ti_index(next_up(path,L,x,EmptyTree))
        = suc(ti_index(TrItr(path,L,x,EmptyTree)) + num_nodes(EmptyTree))
          by apply next_up_index[E][path][L, x, EmptyTree] to next_up_index_prem
    ... = suc(ti_index(TrItr(path,L,x,EmptyTree)))
          by definition num_nodes
             rewrite add_zero[ti_index(TrItr(path,L,x,EmptyTree))].

The next case in the proof of ti_next_index is for R = TreeNode(RL, y, RR). We need to prove

  ti_index(first_path(RL,y,RR,node(RightD(L,x),path))) 
= suc(ti_index(TrItr(path,L,x,TreeNode(RL,y,RR))))

We can start by applying the first_path_index lemma, which gives us

equations
      ti_index(first_path(RL,y,RR,node(RightD(L,x),path))) 
    = num_nodes(plug_tree(take_path(node(RightD(L,x),path)),EmptyTree))

We have opportunities to expand take_path and then plug_tree.

... = num_nodes(plug_tree(take_path(path),TreeNode(L,x,EmptyTree)))
        by definition {take_path,plug_tree}.

We can separate out the TreeNode(L,x,EmptyTree) using num_nodes_plug.

... = num_nodes(plug_tree(take_path(path),EmptyTree)) + suc(num_nodes(L))
        by rewrite num_nodes_plug[E][take_path(path)][TreeNode(L,x,EmptyTree)]
           definition {num_nodes, num_nodes}
           rewrite add_zero[num_nodes(L)].

Then we can move the L back into the plug_tree with num_nodes_plug.

... = suc(num_nodes(plug_tree(take_path(path),L)))
       by rewrite add_suc[num_nodes(plug_tree(take_path(path),EmptyTree))][num_nodes(L)]
          rewrite num_nodes_plug[E][take_path(path)][L].

We conclude the equational reasoning with the definition of ti_index and ti_take.

... = suc(ti_index(TrItr(path,L,x,TreeNode(RL,y,RR))))
        by definition {ti_index, ti_take}.

Here is the complete proof of ti_next_index.

theorem ti_next_index: all E:type, iter : TreeIter<E>.
  if suc(ti_index(iter)) < num_nodes(ti2tree(iter))
  then ti_index(ti_next(iter)) = suc(ti_index(iter))
proof
  arbitrary E:type, iter : TreeIter<E>
  suppose prem: suc(ti_index(iter)) < num_nodes(ti2tree(iter))
  switch iter {
    case TrItr(path, L, x, R) suppose iter_eq {
      definition ti_next
      switch R {
        case EmptyTree suppose R_eq {
          have next_up_index_prem:
              suc(ti_index(TrItr(path,L,x,EmptyTree)) + num_nodes(EmptyTree))
              < num_nodes(ti2tree(TrItr(path,L,x,EmptyTree)))
            by enable num_nodes
               rewrite add_zero[ti_index(TrItr(path,L,x,EmptyTree))]
               rewrite iter_eq | R_eq in prem
          equations
                ti_index(next_up(path,L,x,EmptyTree))
              = suc(ti_index(TrItr(path,L,x,EmptyTree)) + num_nodes(EmptyTree))
                by apply next_up_index[E][path][L, x, EmptyTree] to next_up_index_prem
          ... = suc(ti_index(TrItr(path,L,x,EmptyTree)))
                by definition num_nodes
                   rewrite add_zero[ti_index(TrItr(path,L,x,EmptyTree))].
        }
        case TreeNode(RL, y, RR) suppose R_eq {
          equations
                ti_index(first_path(RL,y,RR,node(RightD(L,x),path))) 
              = num_nodes(plug_tree(take_path(node(RightD(L,x),path)),EmptyTree))
                  by first_path_index[E][RL][y,RR,node(RightD(L,x),path)]
          ... = num_nodes(plug_tree(take_path(path),TreeNode(L,x,EmptyTree)))
                  by definition {take_path,plug_tree}.
          ... = num_nodes(plug_tree(take_path(path),EmptyTree)) + suc(num_nodes(L))
                  by rewrite num_nodes_plug[E][take_path(path)][TreeNode(L,x,EmptyTree)]
                     definition {num_nodes, num_nodes}
                     rewrite add_zero[num_nodes(L)].
          ... = suc(num_nodes(plug_tree(take_path(path),L)))
                 by rewrite add_suc[num_nodes(plug_tree(take_path(path),EmptyTree))][num_nodes(L)]
                    rewrite num_nodes_plug[E][take_path(path)][L].
          ... = suc(ti_index(TrItr(path,L,x,TreeNode(RL,y,RR))))
                  by definition {ti_index, ti_take}.

        }
      }
    }
  }
end

Proof of ti_next_stable

The second correctness condition for ti_next(iter) is that it is stable with respect to ti2tree. Following the definition of ti_next, we switch on the iterator and then on the right child of the current node.

theorem ti_next_stable: all E:type, iter:TreeIter<E>.
  ti2tree(ti_next(iter)) = ti2tree(iter)
proof
  arbitrary E:type, iter:TreeIter<E>
  switch iter {
    case TrItr(path, L, x, R) {
      switch R {
        case EmptyTree {
          definition {ti2tree, ti_next}
          ?
        }
        case TreeNode(RL, y, RR) {
          definition {ti2tree, ti_next}
          ?
        }
      }
    }
  }
end

For the case R = EmptyTree, we need to prove the following, which amounts to proving that next_up is stable.

ti2tree(next_up(path,L,x,EmptyTree)) = plug_tree(path,TreeNode(L,x,EmptyTree))

We’ll pause the current proof to prove the next_up_stable lemma.

Exercise: next_up_stable lemma

lemma next_up_stable: all E:type. all path:List<Direction<E>>. all A:Tree<E>, y:E, B:Tree<E>.
  ti2tree(next_up(path, A, y, B)) = plug_tree(path, TreeNode(A,y,B))

Back to ti_next_stable

Now we conclude the R = EmptyTree case of the ti_next_stable theorem.

    conclude ti2tree(next_up(path,L,x,EmptyTree))
       = plug_tree(path,TreeNode(L,x,EmptyTree))
      by next_up_stable[E][path][L,x,EmptyTree]

In the case R = TreeNode(RL, y, RR), we need to prove the following, which is to say that first_path is stable. Thankfully, we already proved that lemma!

    conclude ti2tree(first_path(RL,y,RR,node(RightD(L,x),path))) 
           = plug_tree(path,TreeNode(L,x,TreeNode(RL,y,RR)))
      by rewrite first_path_stable[E][RL][y,RR,node(RightD(L,x),path)]
         definition {plug_tree}.

Here is the completed proof of ti_next_stable.

theorem ti_next_stable: all E:type, iter:TreeIter<E>.
  ti2tree(ti_next(iter)) = ti2tree(iter)
proof
  arbitrary E:type, iter:TreeIter<E>
  switch iter {
    case TrItr(path, L, x, R) {
      switch R {
        case EmptyTree {
          definition {ti2tree, ti_next}
          conclude ti2tree(next_up(path,L,x,EmptyTree))
             = plug_tree(path,TreeNode(L,x,EmptyTree))
            by next_up_stable[E][path][L,x,EmptyTree]
        }
        case TreeNode(RL, y, RR) {
          definition {ti2tree, ti_next}
          conclude ti2tree(first_path(RL,y,RR,node(RightD(L,x),path))) 
                 = plug_tree(path,TreeNode(L,x,TreeNode(RL,y,RR)))
            by rewrite first_path_stable[E][RL][y,RR,node(RightD(L,x),path)]
               definition {plug_tree}.
        }
      }
    }
  }
end

Correctness of ti_get and ti_index

Recall that ti_get(iter) should return the data in the current node of iter and ti_index should return the position of iter as a natural number with respect to in-order traversal. Thus, if we apply in_order to the tree, the element at position ti_index(iter) should be the same as ti_get(iter). So we have the following theorem to prove.

theorem ti_index_get_in_order: all E:type, iter:TreeIter<E>, a:E.
  ti_get(iter) = nth(in_order(ti2tree(iter)), a)(ti_index(iter))
proof
  arbitrary E:type, iter:TreeIter<E>, a:E
  switch iter {
    case TrItr(path, L, x, R) {
      definition {ti2tree, ti_get, ti_index, ti_take}
      ?
    }
  }
end

After expanding with some definitions, we are left to prove

x = nth(in_order(plug_tree(path,TreeNode(L,x,R))),a)
       (num_nodes(plug_tree(take_path(path),L)))

We see num_nodes applied to plug_tree, so we can use the num_nodes_plug lemma:

      rewrite num_nodes_plug[E][take_path(path)][L]

The goal now is to prove

x = nth(in_order(plug_tree(path, TreeNode(L,x,R))),a)
       (num_nodes(plug_tree(take_path(path), EmptyTree)) + num_nodes(L))

The next step is not so obvious. One hint is that we have the following theorem about nth from List.pf, which also involves addition in the index argument.

theorem nth_append_back: all T:type. all xs:List<T>. all ys:List<T>, i:Nat, d:T.
  nth(append(xs, ys), d)(length(xs) + i) = nth(ys, d)(i)
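
For intuition, here is a small instance of nth_append_back with hypothetical concrete lists, using 0 as the default value d: with xs = node(1, node(2, empty)) and ys = node(3, empty), we have length(xs) = 2, so indexing the appended list at 2 + 0 skips past xs and lands on the first element of ys.

nth(append(node(1, node(2, empty)), node(3, empty)), 0)(2 + 0)
= nth(node(3, empty), 0)(0)
= 3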

So we would need to prove a lemma that relates in_order and plug_tree to append. Now the take_path function returns the part of the tree before the path, so perhaps it can be used to create the xs in nth_append_back. But what about ys? It seems like we need a function that returns the part of the tree after the path. Let us call this function drop_path.

function drop_path<E>(List<Direction<E>>) -> List<Direction<E>> {
  drop_path(empty) = empty
  drop_path(node(f, path')) =
    switch f {
      case RightD(L, x) {
        drop_path(path')
      }
      case LeftD(x, R) {
        node(LeftD(x, R), drop_path(path'))
      }
    }
}
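
As a quick sanity check on this definition, we can trace drop_path on a small hypothetical path: each RightD entry is discarded and each LeftD entry is kept, so the result is the part of the path that contributes nodes after the current one.

drop_path(node(RightD(L,y), node(LeftD(x,R), empty)))
= drop_path(node(LeftD(x,R), empty))
= node(LeftD(x,R), drop_path(empty))
= node(LeftD(x,R), empty)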

So using take_path and drop_path, we should be able to come up with an equation for in_order(plug_tree(path, TreeNode(A, x, B))). The part of the tree before x should be take_path(path) followed by the subtree A. The part of the tree after x should be the subtree B followed by drop_path(path).

lemma in_order_plug_take_drop: all E:type. all path:List<Direction<E>>. all A:Tree<E>, x:E, B:Tree<E>.
  in_order(plug_tree(path, TreeNode(A, x, B)))
  = append(in_order(plug_tree(take_path(path), A)), 
           node(x, in_order(plug_tree(drop_path(path), B))))
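
As a sanity check on this lemma, consider path = empty: assuming plug_tree(empty, t) = t and take_path(empty) = empty (and drop_path(empty) = empty from the definition above), the lemma specializes to the defining equation of in_order on a node.

in_order(TreeNode(A, x, B))
= append(in_order(A), node(x, in_order(B)))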

It turns out that to prove this, we will also need a lemma about the combination of plug_tree and take_path:

lemma in_order_plug_take: all E:type. all path:List<Direction<E>>. all t:Tree<E>.
  in_order(plug_tree(take_path(path), t)) 
  = append( in_order(plug_tree(take_path(path),EmptyTree)), in_order(t))

and a lemma about the combination of plug_tree and drop_path:

lemma in_order_plug_drop: all E:type. all path:List<Direction<E>>. all t:Tree<E>.
  in_order(plug_tree(drop_path(path), t)) 
  = append( in_order(t), in_order(plug_tree(drop_path(path),EmptyTree)))

Exercise: prove the in_order_plug... lemmas

Prove the three lemmas in_order_plug_take_drop, in_order_plug_take, and in_order_plug_drop.

Back to the proof of ti_index_get_in_order

Our goal was to prove

x = nth(in_order(plug_tree(path,TreeNode(L,x,R))), a)
       (num_nodes(plug_tree(take_path(path),EmptyTree)) + num_nodes(L))

So we use lemma in_order_plug_take_drop to get the following

  in_order(plug_tree(path,TreeNode(L,x,R)))
= append(in_order(plug_tree(take_path(path),L)), node(x, in_order(plug_tree(drop_path(path),R))))

and then lemma in_order_plug_take separates out the L.

  in_order(plug_tree(take_path(path), L))
= append(in_order(plug_tree(take_path(path),EmptyTree)), in_order(L))

So rewriting with the above equations

    rewrite in_order_plug_take_drop[E][path][L,x,R]
    rewrite in_order_plug_take[E][path][L]

transforms our goal to

x = nth(append(append(in_order(plug_tree(take_path(path),EmptyTree)), in_order(L)),
               node(x,in_order(plug_tree(drop_path(path),R)))),a)
       (num_nodes(plug_tree(take_path(path),EmptyTree)) + num_nodes(L))

Recall that our plan is to use the nth_append_back lemma, in which the index argument to nth is length(xs), but in the above we have the index expressed in terms of num_nodes. The following exercise proves a theorem that relates length and in_order to num_nodes.

Exercise: prove the length_in_order theorem

theorem length_in_order: all E:type. all t:Tree<E>.
  length(in_order(t)) = num_nodes(t)

Back to ti_index_get_in_order

Now we rewrite with the length_in_order lemma a couple of times, give some short names to these big expressions, and apply length_append from List.pf.

      rewrite symmetric length_in_order[E][L]
            | symmetric length_in_order[E][plug_tree(take_path(path),EmptyTree)]
      define_ X = in_order(plug_tree(take_path(path),EmptyTree))
      define_ Y = in_order(L)
      define_ Z = in_order(plug_tree(drop_path(path),R))
      rewrite symmetric length_append[E][X][Y]

Now we’re in a position to use nth_append_back.

x = nth(append(append(X,Y), node(x, Z)), a)
       (length(append(X,Y)))

In particular, nth_append_back[E][append(X,Y)][node(x,Z), 0, a] gives us

  nth(append(append(X,Y), node(x,Z)),a)(length(append(X,Y)) + 0) 
= nth(node(x,Z),a)(0)

With that we prove the goal using add_zero and the definition of nth.

  conclude x = nth(append(append(X,Y), node(x,Z)), a)(length(append(X,Y)))
    by rewrite (rewrite add_zero[length(append(X,Y))] in
                nth_append_back[E][append(X,Y)][node(x,Z), 0, a])
       definition nth.

Here is the complete proof of ti_index_get_in_order.

theorem ti_index_get_in_order: all E:type, z:TreeIter<E>, a:E.
  ti_get(z) = nth(in_order(ti2tree(z)), a)(ti_index(z))
proof
  arbitrary E:type, z:TreeIter<E>, a:E
  switch z {
    case TrItr(path, L, x, R) {
      definition {ti2tree, ti_get, ti_index, ti_take}
      rewrite num_nodes_plug[E][take_path(path)][L]
      
      suffices x = nth(in_order(plug_tree(path,TreeNode(L,x,R))),a)
                      (num_nodes(plug_tree(take_path(path),EmptyTree)) + num_nodes(L))
      rewrite in_order_plug_take_drop[E][path][L,x,R]
      rewrite in_order_plug_take[E][path][L]
      
      suffices x = nth(append(append(in_order(plug_tree(take_path(path),EmptyTree)),
                                     in_order(L)),
                              node(x,in_order(plug_tree(drop_path(path),R)))),a)
                      (num_nodes(plug_tree(take_path(path),EmptyTree)) + num_nodes(L))
      rewrite symmetric length_in_order[E][L]
            | symmetric length_in_order[E][plug_tree(take_path(path),EmptyTree)]
      define_ X = in_order(plug_tree(take_path(path),EmptyTree))
      define_ Y = in_order(L)
      define_ Z = in_order(plug_tree(drop_path(path),R))
      rewrite symmetric length_append[E][X][Y]
      
      conclude x = nth(append(append(X,Y), node(x,Z)), a)(length(append(X,Y)))
        by rewrite (rewrite add_zero[length(append(X,Y))] in
                    nth_append_back[E][append(X,Y)][node(x,Z), 0, a])
           definition nth.
    }
  }
end

This concludes the proofs of correctness for the in-order iterator and the five operations ti2tree, ti_first, ti_get, ti_next, and ti_index.

Exercise: Prove that ti_prev is correct

In the previous post there was an exercise to implement ti_prev, which moves the iterator backwards one position with respect to in-order traversal. This exercise is to prove that your implementation of ti_prev is correct. There are two theorems to prove. The first one makes sure that ti_prev reduces the index of the iterator by one.

theorem ti_prev_index: all E:type, iter : TreeIter<E>.
  if 0 < ti_index(iter)
  then ti_index(ti_prev(iter)) = pred(ti_index(iter))

The second theorem makes sure that the resulting iterator is still an iterator for the same tree.

theorem ti_prev_stable: all E:type, iter:TreeIter<E>.
  ti2tree(ti_prev(iter)) = ti2tree(iter)