# 16. First-Order Logic in Lean

## 16.1. Quantifiers

## 16.2. Equational reasoning

## 16.3. Structural induction

A feature of working with a system like Lean, which is based on a formal logical foundation, is that you can not only define data types and functions but also prove things about them. The goal of this section is to give you a flavor of using Lean as a proof assistant. It isn’t easy: Lean syntax is finicky and its error messages are often inscrutable. In class, we’ll try to give you some pointers as to how to interact with Lean to construct proofs. The examples in this section will serve as a basis for discussion.

Remember that Lean’s core library defines the List data type and notation for it. In the example below, we import the library, open the namespace, declare some variables, and try out the notation.

```
import Init
open List
variable {α : Type}
variable (as bs cs : List α)
variable (a b c : α)
#check a :: as
#check as ++ bs
example : [] ++ as = as := nil_append as
example : (a :: as) ++ bs = a :: (as ++ bs) := cons_append a as bs
```

The variable command does not do anything substantive.
It tells Lean that when the corresponding identifiers are used
in definitions and theorems that follow,
they should be interpreted as arguments to those theorems and proofs,
with the indicated types.
The curly brackets around the declaration α : Type indicate that that
argument is meant to be *implicit*, which is to say,
users do not have to write it explicitly.
Rather, Lean is expected to infer it from the context.
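For instance, the nil_append theorem used in the example above takes its type argument implicitly. As a small illustration of our own, we can check both forms; the @ prefix tells Lean to display and accept implicit arguments explicitly:

```
-- α is implicit: Lean infers it from `as : List α`.
#check nil_append as
-- The @ prefix makes the implicit type argument explicit:
#check @nil_append α as
```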

The library proves the theorems [] ++ as = as and (a :: as) ++ bs = a :: (as ++ bs) under the names nil_append and cons_append, respectively. You can see them by writing #check nil_append and #check cons_append. Remember that we took these to be the defining equations for the append function in Section 2.3. Although Lean uses a different definition of the append function, for illustrative purposes we will treat them as the defining equations and base our subsequent proofs on that.

Lean’s library also proves as ++ [] = as under the name append_nil, but to illustrate how proofs like this go, we will prove it again under the name append_nil'.

```
theorem append_nil' : as ++ [] = as := by
  induction as with
  | nil => rw [nil_append]
  | cons a as ih => rw [cons_append, ih]
```

In class, we will help you make sense of this.
The by command tells Lean that we are going to write
a *tactic* proof.
In other words, instead of writing the proof as an expression,
we are going to give Lean a list of instructions
that tell it how to prove the theorem.
At the start of the tactic proof, the theorem in question
is our *goal*.
At each step, tactics act on one or more of the remaining goals;
when no more goals remain, the theorem is proved.
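As a small illustration (these examples are ours, not from the library), the same fact can be proved either as an expression or with tactics:

```
-- Term-style proof: the expression `nil_append as` is itself the proof.
example : [] ++ as = as := nil_append as
-- Tactic-style proof: `by` introduces instructions for building the proof.
example : [] ++ as = as := by rw [nil_append]
```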

In this case, only two tactics are needed.
The induction tactic, as the name suggests, sets up a proof
by induction, and the rw tactic *rewrites* the goal
using given equations.
Moving the cursor around in the editor window shows you the
goals at the corresponding point in the proof.
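For example, here is the proof of append_nil' again, under a fresh name of our choosing, annotated with the intermediate goals as comments:

```
theorem append_nil'' : as ++ [] = as := by
  induction as with
  | nil =>
    -- goal: [] ++ [] = []
    rw [nil_append]
  | cons a as ih =>
    -- goal: a :: as ++ [] = a :: as
    -- ih : as ++ [] = as
    rw [cons_append, ih]
```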

```
theorem append_assoc' : as ++ bs ++ cs = as ++ (bs ++ cs) := by
  induction as with
  | nil => rw [nil_append, nil_append]
  | cons a as ih => rw [cons_append, cons_append, ih, ←cons_append]
```

Here is a similar proof of the associativity of the append function. Note that the left arrow in the expression ←cons_append tells Lean that we want to use the equation from right to left instead of from left to right.
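To see the direction reversal in isolation, here is a small example of our own that uses cons_append from right to left:

```
-- `←cons_append` rewrites `a :: (as ++ bs)` back to `(a :: as) ++ bs`,
-- after which the goal is closed by reflexivity.
example : a :: (as ++ bs) = (a :: as) ++ bs := by
  rw [←cons_append]
```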

Now let us consider Lean’s reverse function, which is defined in terms of a tail-recursive helper, reverseAux. The following identities hold by definition:

```
theorem reverse_def : reverse as = reverseAux as [] := rfl
theorem reverseAux_nil : reverseAux [] as = as := rfl
theorem reverseAux_cons : reverseAux (a :: as) bs = reverseAux as (a :: bs) := rfl
```
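To see how the accumulator in reverseAux works, it may help to evaluate it on small inputs (a sanity check of our own):

```
-- reverseAux moves elements from the first list onto the second, one at a time:
#eval reverseAux [1, 2, 3] [0]   -- [3, 2, 1, 0]
#eval reverse [1, 2, 3]          -- [3, 2, 1]
```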

We will use these identities in the proofs that follow. Let’s think about what it would take to prove the identity reverse (as ++ bs) = reverse bs ++ reverse as. Since reverse is defined in terms of reverseAux, we should expect to have to prove something about reverseAux. And since the identity mentions the append function, it is natural to try to characterize the way that reverseAux interacts with append. These are the two identities we need:

```
theorem reverseAux_append : reverseAux (as ++ bs) cs = reverseAux bs (reverseAux as cs) := by
  induction as generalizing cs with
  | nil => rw [nil_append, reverseAux_nil]
  | cons a as ih => rw [cons_append, reverseAux_cons, reverseAux_cons, ih]

theorem reverseAux_append' : reverseAux as (bs ++ cs) = reverseAux as bs ++ cs := by
  induction as generalizing bs with
  | nil => rw [reverseAux_nil, reverseAux_nil]
  | cons a as ih => rw [reverseAux_cons, reverseAux_cons, ←cons_append, ih]
```

Note the generalizing clause in the induction. What it means is that
what we are proving by induction on as is that the identity holds
*for every choice of* the generalized variable
(cs in the first proof and bs in the second).
This means that, when we apply the inductive hypothesis,
we can apply it to any choice of that variable.
You should try deleting the generalizing clause to see
what goes wrong.
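To make this concrete, here is the first proof again, under a fresh name of our choosing, with the universally quantified inductive hypothesis written out as a comment:

```
theorem reverseAux_append'' : reverseAux (as ++ bs) cs = reverseAux bs (reverseAux as cs) := by
  induction as generalizing cs with
  | nil => rw [nil_append, reverseAux_nil]
  | cons a as ih =>
    -- ih : ∀ cs, reverseAux (as ++ bs) cs = reverseAux bs (reverseAux as cs)
    -- The rewrites change the accumulator from cs to a :: cs, so we need
    -- the hypothesis at that new value; hence `generalizing cs`.
    rw [cons_append, reverseAux_cons, reverseAux_cons, ih]
```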

With those facts in hand, we have the identity we are after:

```
theorem reverse_append : reverse (as ++ bs) = reverse bs ++ reverse as := by
  rw [reverse_def, reverseAux_append, reverse_def, ←reverseAux_append', nil_append,
    reverse_def]
```
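As a quick sanity check of our own, the theorem can be instantiated with concrete lists, and #eval agrees:

```
-- Applying reverse_append to explicit arguments:
example : reverse ([1, 2] ++ [3, 4]) = reverse [3, 4] ++ reverse [1, 2] :=
  reverse_append [1, 2] [3, 4]

#eval reverse ([1, 2] ++ [3, 4])   -- [4, 3, 2, 1]
```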