\[\begin{prooftree}
\AXC{$\Gamma, p, \lnot p$}
\end{prooftree}\]
\[\begin{prooftree}
\AXC{$\Gamma, A$}
\AXC{$\Gamma, B$}
\BIC{$\Gamma, A \land B$}
\end{prooftree}\quad\quad
\begin{prooftree}
\AXC{$\Gamma, A, B$}
\UIC{$\Gamma, A \lor B$}
\end{prooftree}
\]

.. raw:: latex
\begin{center}
\AXC{$\Gamma, p, \lnot p$}
\DP \\
\ \\
\AXC{$\Gamma, A$}
\AXC{$\Gamma, B$}
\BIC{$\Gamma, A \land B$}
\DP \quad \quad
\AXC{$\Gamma, A, B$}
\UIC{$\Gamma, A \lor B$}
\DP
\end{center}
The first rule says that either something in :math:`\Gamma` is true,
or :math:`p` is true, or :math:`\lnot p` is true. The second rule says that
if either something in :math:`\Gamma` is true or :math:`A` is true, and
either something in :math:`\Gamma` is true or :math:`B` is true, then
either something in :math:`\Gamma` is true or :math:`A \land B` is true.
The third rule says that if either something in :math:`\Gamma` is true or :math:`A` is true
or :math:`B` is true, then either something in :math:`\Gamma` is true or :math:`A \lor B` is true.
If we take these to be statements about truth values relative to some truth assignment
:math:`\tau`,
these rules are clearly sound with respect to the semantics.
The set :math:`\Gamma` is called a *sequent*. (More specifically, it is called
a *one-sided* sequent; we'll see two-sided sequents below.)
A system of rules like this is therefore called a sequent calculus.
As with axiomatic systems, we can think of a proof as a sequence of lines,
but it is also common to represent proofs diagrammatically,
as trees whose nodes are labeled by sequents. The following example provides
a proof of the NNF equivalent of
:math:`A \land (B \lor C) \limplies (A \land B) \lor (A \land C)`.
.. image:: ../figures/sequentproof.*
:alt: sequentproof
:align: center
It is probably easiest to read this from the bottom up.
Remember that saying that a sequent :math:`\Gamma` is *valid* in our semantics means that
for every truth assignment :math:`\tau`, we have :math:`\tval{A}_\tau = \top` for some
:math:`A` in :math:`\Gamma`.
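To make the notion of validity concrete, here is a small Python sketch that checks a one-sided sequent by brute force over truth assignments. The tuple encoding of NNF formulas (``("lit", var, polarity)``, ``("and", A, B)``, ``("or", A, B)``) and the function names are assumptions for illustration, not notation from the text.

```python
from itertools import product

def variables(fml):
    """Collect the set of variables occurring in an NNF formula."""
    if fml[0] == "lit":
        return {fml[1]}
    return variables(fml[1]) | variables(fml[2])

def eval_fml(fml, tau):
    """Evaluate a formula under a truth assignment (a dict of booleans)."""
    if fml[0] == "lit":
        return tau[fml[1]] == fml[2]   # literal is true iff polarity matches
    if fml[0] == "and":
        return eval_fml(fml[1], tau) and eval_fml(fml[2], tau)
    return eval_fml(fml[1], tau) or eval_fml(fml[2], tau)

def is_valid_sequent(gamma):
    """A sequent is valid iff every assignment makes some formula true."""
    vs = sorted(set().union(*(variables(f) for f in gamma)))
    return all(
        any(eval_fml(f, dict(zip(vs, vals))) for f in gamma)
        for vals in product([True, False], repeat=len(vs))
    )
```

For example, the axiom sequent :math:`p, \lnot p` comes out valid, while the sequent consisting of :math:`p` alone does not.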
.. admonition:: Theorem
The sequent calculus presented above is sound and complete. In other words,
a sequent :math:`\Gamma` is provable if and only if it is valid.
.. admonition:: Proof
The soundness direction is easy. Suppose there is a proof of :math:`\Gamma`.
Let :math:`\tau` be any truth assignment.
We have already noted that each rule is sound, which is to say, if the premise or premises
are true under an assignment, then so is the conclusion.
By induction, we have that every sequent in the proof is true under assignment :math:`\tau`.
Proving completeness is usually trickier. In this case, we can use
the fact that the rules of the calculus are bidirectional:
any truth assignment that refutes the conclusion of a rule has to refute one of the premises.
We can also use the fact that reading each rule backward decreases the number of binary
connectives.
We prove the following by induction on the number of binary connectives: for every
sequent :math:`\Gamma`, either :math:`\Gamma` is provable, or there is a truth
assignment that makes every formula in :math:`\Gamma` false.
If :math:`\Gamma` has no binary connectives, then it is a set of literals.
If :math:`\Gamma` contains a complementary pair of literals :math:`P` and :math:`\lnot P`,
it is an axiom, and otherwise there is an assignment that makes it false.
For the inductive step, :math:`\Gamma` must have a :math:`\land` or a :math:`\lor`.
Applying the corresponding rule backward, the inductive hypothesis tells us that
either every premise is provable or some premise has a counterexample. In the first
case, applying the rule shows that :math:`\Gamma` is provable, and in the second
case, the counterexample to the premise is also a counterexample to :math:`\Gamma`.
Notice that this gives us yet another decision procedure for propositional logic:
start with a sequent :math:`\Gamma` (which can consist of a single formula, if you want),
and apply the rules backward. If you reach an axiom on each leaf, you have a proof.
If one branch fails to terminate with an axiom, reading off a counterexample to the leaf yields
a counterexample to :math:`\Gamma`.
This illustrates an important idea in automated reasoning: it is desirable to search
for a proof in such a way that failure to find one yields a counterexample.
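The backward search just described can be sketched in Python. The tuple encoding of NNF formulas (``("lit", var, polarity)``, ``("and", …)``, ``("or", …)``) and the names ``prove``, ``and_rule``, and so on are illustrative assumptions; the procedure returns either a proof tree or a falsifying assignment, mirroring the fact that a failed search produces a counterexample.

```python
def prove(gamma):
    """Backward proof search for a one-sided sequent of NNF formulas.
    Returns ("proof", tree) or ("counterexample", assignment)."""
    gamma = list(gamma)
    # Look for a formula with a binary connective to apply a rule to.
    for i, fml in enumerate(gamma):
        rest = gamma[:i] + gamma[i + 1:]
        if fml[0] == "and":
            # Read the conjunction rule backward: prove both premises.
            left = prove(rest + [fml[1]])
            if left[0] == "counterexample":
                return left
            right = prove(rest + [fml[2]])
            if right[0] == "counterexample":
                return right
            return ("proof", ("and_rule", gamma, left[1], right[1]))
        if fml[0] == "or":
            # Read the disjunction rule backward: one premise, both disjuncts.
            sub = prove(rest + [fml[1], fml[2]])
            if sub[0] == "counterexample":
                return sub
            return ("proof", ("or_rule", gamma, sub[1]))
    # All formulas are literals: an axiom iff there is a complementary pair.
    lits = {(f[1], f[2]) for f in gamma}
    for (v, pos) in lits:
        if (v, not pos) in lits:
            return ("proof", ("axiom", gamma))
    # Otherwise, falsify every literal to refute the sequent.
    return ("counterexample", {v: not pos for (v, pos) in lits})
```

Each backward rule application removes a binary connective, so the search always terminates.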
What we have described is more precisely a *cut-free* sequent calculus.
It is also sound to add the cut rule:
.. raw:: html
\[\begin{prooftree}
\AXC{$\Gamma, A$}
\AXC{$\Gamma, \mathord{\sim}A$}
\BIC{$\Gamma$}
\end{prooftree}\]

.. raw:: latex
\begin{center}
\AXC{$\Gamma, A$}
\AXC{$\Gamma, \mathord{\sim}A$}
\BIC{$\Gamma$}
\DP
\end{center}
Here :math:`\mathord{\sim}A` is the negation operator for negation-normal form formulas,
which switches :math:`\land` and :math:`\lor` and switches positive and negative literals.
Proofs with cuts can be more efficient than proofs without them, but we have seen that
the calculus is complete without them.
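The :math:`\mathord{\sim}` operator is straightforward to compute. Here is a sketch, under the same assumed tuple encoding of NNF formulas used for illustration (``("lit", var, polarity)``, ``("and", …)``, ``("or", …)``):

```python
def nnf_neg(fml):
    """The ~ operator on NNF formulas: swap conjunctions and
    disjunctions, and flip the polarity of each literal."""
    if fml[0] == "lit":
        return ("lit", fml[1], not fml[2])
    if fml[0] == "and":
        return ("or", nnf_neg(fml[1]), nnf_neg(fml[2]))
    return ("and", nnf_neg(fml[1]), nnf_neg(fml[2]))
```

Note that applying the operator twice gives back the original formula.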
The cut-free sequent calculus is closely related to *tableau* proof systems that are also commonly
used in automated reasoning.
In the sequent proof above, there is a lot of needless repetition of formulas; tableau representations
do a better job of recording only what changes as we go up the tree.
Another difference is that tableau proof systems usually don't require that the formulas are in
negation normal form. Rather, the rules of a tableau system correspond to the sequent rules
for the negation-normal-form equivalents. Since :math:`A \limplies B` is equivalent to
:math:`\lnot A \lor B`,
this requires changing :math:`A` to :math:`\lnot A` as we move up the tree. To avoid introducing
new connectives, tableau systems often annotate formulas with *polarities*,
so that :math:`A^+` represents :math:`A` and :math:`A^-` represents :math:`\lnot A`.
The most jarring difference between sequent calculi and tableau systems is that the latter
are often described in terms of search for a satisfying assignment rather than searching for a proof.
For example, the rule in the sequent calculus that says
to find a proof of :math:`A \land B`, split and find a proof of :math:`A` and a proof of :math:`B`
becomes
to find a satisfying assignment to :math:`A \lor B`, split and find a satisfying assignment
to :math:`A` or a satisfying assignment to :math:`B`.
The automated reasoning community is split between people who like to think in terms
of searching for proofs and people who like to think in terms of searching for models.
It is therefore important to learn how to speak both languages,
and to be able to translate between them on the fly.
.. _section_resolution:
Resolution
----------
Given that the sequent calculus implicitly corresponds to a decision procedure for
propositional logic, it is natural to ask whether there is a proof system that corresponds
more closely to DPLL, the decision procedure that was the focus of
:numref:`Chapter %s
.. raw:: html
\[\begin{prooftree}
\AXC{$C, p$}
\AXC{$D, \lnot p$}
\BIC{$C \lor D$}
\end{prooftree}\]

.. raw:: latex
\begin{prooftree}
\AXC{$C, p$}
\AXC{$D, \lnot p$}
\BIC{$C \lor D$}
\end{prooftree}
The rule says that if either :math:`C` or :math:`p` is true, and either :math:`D` or :math:`\lnot p`
is true, then :math:`C \lor D` has to be true. A *resolution proof* of a clause :math:`C` from
a set of clauses :math:`\Gamma` is a sequence of steps (or a labeled tree) that obtains :math:`C`
from :math:`\Gamma` using instances of the resolution rule. A *resolution refutation* of :math:`\Gamma`
is a resolution proof of the empty clause from :math:`\Gamma`.
.. admonition:: Theorem
A CNF formula :math:`\Gamma` has a resolution refutation if and only if it is unsatisfiable.
Resolution can therefore be understood as a proof system for propositional logic in the following way:
given any formula :math:`A`, put :math:`\lnot A` in CNF, and look for a resolution refutation.
Such a refutation is a proof of :math:`A`. The theorem above says that this system is sound
and complete: :math:`A` is valid if and only if :math:`\lnot A` is unsatisfiable, which
happens if and only if there is a refutation of :math:`\lnot A`.
.. admonition:: Proof
Soundness follows straightforwardly from the fact that the resolution rule preserves
truth under any truth assignment, while the empty clause is unsatisfiable.
To prove completeness, we use induction on the number of propositional variables
to show that if :math:`\Gamma` is unsatisfiable, there is a resolution refutation
of :math:`\Gamma`.
If there are no variables, the fact that :math:`\Gamma` is unsatisfiable means that
it must be the set consisting of the empty clause, and we are done.
In the induction step, let :math:`P` be any variable that occurs in :math:`\Gamma`.
If :math:`\Gamma` is unsatisfiable, then so are :math:`\tval{\Gamma}_{[P \mapsto \top]}`
and :math:`\tval{\Gamma}_{[P \mapsto \bot]}`. By the inductive hypothesis,
both of these are refutable.
Remember the relationship between :math:`\tval{\Gamma}_{[P \mapsto \top]}` and :math:`\Gamma`:
in the former, we remove all the clauses that include :math:`P` and delete :math:`\lnot P`
from the remaining clauses.
So a resolution refutation of the empty clause from :math:`\tval{\Gamma}_{[P \mapsto \top]}`
uses only the clauses of :math:`\Gamma` that don't contain :math:`P`,
possibly with :math:`\lnot P` removed.
Restoring :math:`\lnot P` to all the initial clauses yields either a proof of the
empty clause or a proof of :math:`\lnot P`.
In the first case, we have a proof of the empty clause from :math:`\Gamma`, and we are done.
Otherwise, starting with the refutation of :math:`\tval{\Gamma}_{[P \mapsto \bot]}`
and repeating the previous argument,
we obtain either a proof of the empty clause or a proof of :math:`P`.
Once again, in the first case, we are done. Otherwise, we apply the resolution rule to the
proof of :math:`P` and the proof of :math:`\lnot P`, and we have a proof of the empty clause.
We can once again view the completeness proof as a decision procedure in disguise.
In fact, the strategy of picking a variable and trying to refute
:math:`\tval{\Gamma}_{[P \mapsto \top]}` and :math:`\tval{\Gamma}_{[P \mapsto \bot]}`
simultaneously is exactly the splitting rule of DPLL,
formulated in terms of demonstrating unsatisfiability rather than searching for a satisfying
assignment. We can formulate the theorem above in more constructive terms as follows:
.. admonition:: Theorem
For any CNF formula :math:`\Gamma`, either :math:`\Gamma` is satisfiable or there is a
resolution refutation.
Remember that at any point in the DPLL search, we have a partial assignment :math:`\tau`
that we are trying to extend to a satisfying assignment to :math:`\tval{\Gamma}_\tau`.
If we fail, we then have to give up on :math:`\tau`, backtrack, and try another path.
To extract either a satisfying assignment or a resolution proof from the result,
it suffices to show the following constructively:
For any partial truth assignment :math:`\tau`,
either :math:`\tval{\Gamma}_\tau` is satisfiable
or there is a resolution proof of a clause :math:`C` from :math:`\Gamma` such that
:math:`\tval{C}_\tau = \bot`.
This yields the desired conclusion when :math:`\tau` is the empty assignment,
since the only clause that evaluates to :math:`\bot` under the empty assignment is the empty clause.
We will sketch an explanation of how to read off the information above from the DPLL search,
and we will leave it as an exercise for you to fill in the details.
Remember that there are three steps that are interleaved in DPLL:
#. splitting on a variable :math:`p`
#. unit propagation
#. removing pure literals
Unit propagation can be viewed as a special case of the splitting rule:
If a CNF formula contains a unit clause with literal :math:`\ell`,
then splitting on :math:`\ell` fails immediately on one branch,
and the other branch corresponds to the result of applying unit propagation.
Reasoning about the pure literal rule is more subtle. Suppose :math:`\ell` is pure in
:math:`\tval{\Gamma}_\tau`, and consider the DPLL search
starting from :math:`\tval{\Gamma}_{\tau[\ell \mapsto \top]}`,
which is a subset of :math:`\tval{\Gamma}_\tau`. If DPLL finds a satisfying assignment
to :math:`\tval{\Gamma}_{\tau[\ell \mapsto \top]}`, it can be extended to a satisfying
assignment to :math:`\tval{\Gamma}_\tau` by mapping :math:`\ell` to :math:`\top`.
On the other hand, for any clause :math:`C` such that :math:`\tval{C}_{\tau[\ell \mapsto \top]} = \bot`,
we have :math:`\tval{C}_\tau = \bot`, because :math:`\ell` is pure in all the clauses of
:math:`\tval{\Gamma}_\tau`.
So we only have to deal with the splitting rule. A satisfying assignment for either branch
results in a satisfying assignment for :math:`\Gamma`, we only only have to prove the following:
If there are a resolution proof of a clause :math:`C` from :math:`\Gamma` such that
:math:`\tval{C}_{\tau[P \mapsto \top]} = \bot`
and a resolution proof of a clause :math:`D` from :math:`\Gamma` such that
:math:`\tval{D}_{\tau[P \mapsto \bot]} = \bot`,
then there is a resolution proof of a clause :math:`E` from :math:`\Gamma` such that
:math:`\tval{E}_\tau = \bot`.
We leave the proof of this fact to you.
.. _section_natural_deduction:
Natural deduction
-----------------
We now consider a type of deductive system that was introduced by Gerhard Gentzen
in the 1930s, known as *natural deduction*. As the name suggests,
the system was designed to model the way someone might carry out an informal
logical argument.
As a result, the system is not particularly good for automated reasoning and
proof search,
though it might be a good choice if the goal is to find a human-readable proof in the end.
The main interest, rather, is that it provides a nice framework for representing
informal arguments.
Of all the systems we consider in this section, this one is the closest to the
foundation system of logic that is built in to Lean.
In natural deduction, the goal is to derive sequents of the form
:math:`\Gamma \fCenter A`,
where :math:`\Gamma` is a finite set of formulas and :math:`A` is a formula.
The interpretation of such a sequent is, of course, that :math:`A` follows
from the hypotheses in :math:`\Gamma`.
Notice that we have overloaded the symbol :math:`\proves`, which is ordinarily
used to express the provability relation.
The two are clearly related: :math:`\Gamma` proves :math:`A` in the ordinary
sense is now interpreted as saying that there is a finite subset :math:`\Gamma' \subseteq \Gamma` such that the sequent :math:`\Gamma \fCenter A` is derivable in natural deduction.
Sometimes people use other notation for sequents, like :math:`\Gamma \Rightarrow A`.
But Lean also uses the :math:`\proves` symbol for sequents,
so we will stick with that.
We write ":math:`\Gamma, A`" for :math:`\Gamma \cup \{ A \}` to represent the hypotheses
in :math:`\Gamma` together with the additional hypothesis :math:`A`.
The first rule is trivial: we always have
.. raw:: html
\[\begin{prooftree}
\AXC{$\Gamma, A \fCenter A$.}
\end{prooftree}\]

.. raw:: latex
\begin{prooftree}
\Axiom$\Gamma, A \fCenter A$.
\end{prooftree}
This says that :math:`A` follows from any list of assumptions that includes :math:`A`.
Most of the other connectives include *introduction rules*,
which allow us to *introduce* the connective into a proof,
and *elimination rules*, that tell us how to use them.
For example, the rules for :math:`\land` are as follows:
.. raw:: html
\[\begin{prooftree}
\AXC{$\Gamma \fCenter A$}
\AXC{$\Gamma \fCenter B$}
\BIC{$\Gamma \fCenter A \land B$}
\end{prooftree}
\quad \quad
\begin{prooftree}
\AXC{$\Gamma \fCenter A_0 \land A_1$}
\UIC{$\Gamma \fCenter A_i$}
\end{prooftree}
\]

.. raw:: latex
\begin{center}
\AX$\Gamma \fCenter A$
\AX$\Gamma \fCenter B$
\BI$\Gamma \fCenter A \land B$
\DP
\quad \quad
\AX$\Gamma \fCenter A_0 \land A_1$
\UI$\Gamma \fCenter A_i$
\DP
\end{center}
The rules for implication are as follows:
.. raw:: html
\[\begin{prooftree}
\AXC{$\Gamma, A \fCenter B$}
\UIC{$\Gamma \fCenter A \limplies B$}
\end{prooftree}
\quad \quad
\begin{prooftree}
\AXC{$\Gamma \fCenter A \limplies B$}
\AXC{$\Gamma \fCenter A$}
\BIC{$\Gamma \fCenter B$}
\end{prooftree}
\]

.. raw:: latex
\begin{center}
\AX$\Gamma, A \fCenter B$
\UI$\Gamma \fCenter A \limplies B$
\DP
\quad \quad
\AX$\Gamma \fCenter A \limplies B$
\AX$\Gamma \fCenter A$
\BI$\Gamma \fCenter B$
\DP
\end{center}
Notice that in the introduction rule, to prove :math:`A \limplies B`, we
temporarily assume :math:`A` and show that :math:`B` follows.
The rules for disjunction, which codify proof by cases, are as follows:
.. raw:: html
\[\begin{prooftree}
\AXC{$\Gamma \fCenter A_i$}
\UIC{$\Gamma \fCenter A_0 \lor A_1$}
\end{prooftree}
\quad \quad
\begin{prooftree}
\AXC{$\Gamma \fCenter A \lor B$}
\AXC{$\Gamma, A \fCenter C$}
\AXC{$\Gamma, B \fCenter C$}
\TIC{$\Gamma \fCenter C$}
\end{prooftree}
\]

.. raw:: latex
\begin{center}
\AX$\Gamma \fCenter A_i$
\UI$\Gamma \fCenter A_0 \lor A_1$
\DP
\quad \quad
\AX$\Gamma \fCenter A \lor B$
\AX$\Gamma, A \fCenter C$
\AX$\Gamma, B \fCenter C$
\TI$\Gamma \fCenter C$
\DP
\end{center}
For classical logic, we also add the following principle of proof by contradiction:
.. raw:: html
\[\begin{prooftree}
\AXC{$\Gamma, \lnot A \fCenter \bot$}
\UIC{$\Gamma \fCenter A$}
\end{prooftree}
\]

.. raw:: latex
\begin{center}
\AX$\Gamma, \lnot A \fCenter \bot$
\UI$\Gamma \fCenter A$
\DP
\end{center}
These rules cover a complete set of connectives, since we can define :math:`\lnot A`
to be :math:`A \limplies \bot`, define :math:`\top` to be :math:`\lnot \bot`, and
define :math:`A \liff B` to be :math:`(A \limplies B) \land (B \limplies A)`.
You should think about what the natural rules for these connectives would be if we were
to include them in the calculus.
As an example, here is a proof of :math:`A \limplies (B \limplies A \land B)`:
.. raw:: html
\[\begin{prooftree}
\AXC{$A, B \fCenter A$}
\AXC{$A, B \fCenter B$}
\BIC{$A, B \fCenter A \land B$}
\UIC{$A \fCenter B \limplies A \land B$}
\UIC{$\fCenter A \limplies (B \limplies A \land B)$}
\end{prooftree}
\]

.. raw:: latex
\begin{center}
\AX$A, B \fCenter A$
\AX$A, B \fCenter B$
\BI$A, B \fCenter A \land B$
\UI$A \fCenter B \limplies A \land B$
\UI$\fCenter A \limplies (B \limplies A \land B)$
\DP
\end{center}
And here is a proof of :math:`A \land B \limplies B \land A`:
.. raw:: html
\[\begin{prooftree}
\AXC{$A \land B \fCenter A \land B$}
\UIC{$A \land B \fCenter B$}
\AXC{$A \land B \fCenter A \land B$}
\UIC{$A \land B \fCenter A$}
\BIC{$A \land B \fCenter B \land A$}
\UIC{$\fCenter A \land B \limplies B \land A$}
\end{prooftree}
\]

.. raw:: latex
\begin{center}
\AX$A \land B \fCenter A \land B$
\UI$A \land B \fCenter B$
\AX$A \land B \fCenter A \land B$
\UI$A \land B \fCenter A$
\BI$A \land B \fCenter B \land A$
\UI$\fCenter A \land B \limplies B \land A$
\DP
\end{center}
As with the sequent calculus, there are more efficient ways of representing
natural deduction proofs that only show the conclusion at each node and
leave the hypotheses implicit.
This avoids having to repeat a long list of hypotheses at every node.
There are also presentations of sequent calculi for classical logic that use
two-sided sequents, as we did for natural deduction.
The most effective approach is to use sequents of the
form :math:`\Gamma \fCenter \Delta`, where :math:`\Gamma` and :math:`\Delta`
are finite sets and we interpret the sequent as saying that if all the hypotheses
in :math:`\Gamma` are true, then at least one of the formulas in :math:`\Delta` is true.
This is aligned with the annotations of positive and negative formulas that one
sometimes finds in tableau calculi.
.. Admonition:: Theorem
Natural deduction is sound and complete for classical propositional logic.
In other words, a sequent :math:`\Gamma \fCenter A` is derivable if and only
if :math:`\Gamma \models A`.
We will not take the time to prove this here. One way to prove it is to
show that natural deduction can simulate another calculus, like the sequent calculus,
for which a completeness proof is easier.
.. _section_compactness:
Compactness
-----------
In automated reasoning, when we write :math:`\Gamma \models A` to express
that :math:`A` is entailed by hypotheses in :math:`\Gamma`,
we generally have the in mind case where :math:`\Gamma` is finite.
But the definition makes sense when :math:`\Gamma` is infinite.
The same is true in the when we talk about provability of :math:`A`
from a set of hypotheses :math:`\Gamma`.
Now, in the case of provability, it is clear that for any set :math:`\Gamma`,
finite or infinite, :math:`\Gamma \proves A` if and only if there is a finite
subset :math:`\Gamma' \subseteq \Gamma` such that :math:`\Gamma' \proves A`.
The corresponding fact about the entailment relation is also true:
.. Admonition:: Theorem (the compactness theorem for propositional logic)
For any set of propositional formulas :math:`\Gamma`,
:math:`\Gamma \models A` if and only if there is a finite
subset :math:`\Gamma' \subseteq \Gamma` such that :math:`\Gamma' \models A`.
In this chapter, we focused on the soundness and completeness theorems in the
case where :math:`\Gamma` is finite.
But soundness carries over straightforwardly to the infinite case,
and it is possible to prove completeness for arbitrary :math:`\Gamma` as well.
You can check that compactness follows from these stronger versions of soundness
and completeness, given the fact that any proof can use only a finite number of hypotheses.
Conversely, the stronger versions of the completeness theorem
follows from the weaker one using the compactness theorem.
As an example of an application of the compactness theorem, imagine a
finite set of oriented square tiles :math:`S`, all the same size, where "oriented" means that
each has is a distinguished top edge.
Suppose each edge of each tile is labeled with finitely many colors.
A *tiling of the plane* with tiles from :math:`S` is an ordinary arrangement of (copies of) tiles in :math:`S` in an infinite square grid,
like the squares on an infinite sheet of graph paper,
such that adjacent edges have the same color.
.. Admonition:: Theorem
Suppose that for every natural number :math:`n`, there is an :math:`n \times n`
tiling with tiles from :math:`S`. Then there is a tiling of the entire
plane with tiles from :math:`S`.
You should think about how this follows from the compactness theorem.