14. Proof Systems for First-Order Logic

The fundamental difference between propositional logic and first-order logic is that in first-order logic there are variables and terms that stand for objects, and we can form atomic propositions that depend on those objects. Beyond that, the two key additions to the language of propositional logic are the equality symbol and the quantifiers. Proof systems for first-order logic have to extend those for propositional logic by providing rules for these.

When we studied propositional logic, we explored decision procedures for the validity question first, and only later determined that it is possible to extract proofs of propositional validity when, for example, DPLL search fails to find a countermodel. When it comes to first-order logic, however, the natural order is inverted. The validity problem is, in general, undecidable, so the best we can do is search for proofs.

Remember that when we have a proof system in mind, we write \(\Gamma \vdash A\) to mean that there is a proof of \(A\) from the hypotheses in \(\Gamma\). A proof system is sound if whenever \(\Gamma \vdash A\), we have \(\Gamma \models A\). It is complete if whenever \(\Gamma \models A\), we have \(\Gamma \vdash A\). It is refutationally complete if whenever \(\Gamma \models \bot\), we have \(\Gamma \vdash \bot\). Remember that the resolution proof system for propositional logic is refutationally complete; in general, to establish \(\Gamma \models A\), we put the formulas in \(\Gamma \cup \{ \lnot A \}\) in conjunctive normal form, view the result as a set of clauses, and search for a refutation. The same is true of the resolution calculus for first-order logic, though we need an extra step, Skolemization, to put formulas into normal form. Instead of a set of clauses, the result is a set of universally quantified clauses, as we will explain below. We will show that the resolution calculus is refutationally complete for such sets.

In Chapter 12, we considered proof systems and a decision procedure for equational reasoning. In this chapter, we will start with first-order logic without equality, and then consider how to bring equational reasoning back in. In fact, we will start with an additional restriction. Remember that a first-order formula is universal if it consists of any number of universal quantifiers followed by a quantifier-free formula, and it is existential if it consists of any number of existential quantifiers followed by a quantifier-free formula. In Chapter 12, we saw that the question as to the validity of a universal formula is decidable. In sharp contrast, the question as to the validity of an existential formula is undecidable. Since a formula \(\ex {x_1, x_2, \ldots, x_n} A\) is valid if and only if \(\fa {x_1, x_2, \ldots, x_n} \lnot A\) is unsatisfiable, we can design a search procedure for establishing the validity of an existential formula by designing a proof system that is refutationally complete for universal formulas and systematically searching for a refutation.

In summary, we will obtain a search procedure for establishing the validity of first-order sentences in three steps:

  1. We will design a proof system that is refutationally complete for universal formulas in first-order logic without equality.

  2. We will show how to convert any first-order sentence \(A\) into an equisatisfiable universal sentence \(A'\), so we can show that \(A\) is unsatisfiable by refuting \(A'\).

  3. We will explain what to add to make resolution refutationally complete for first-order logic with equality. (The reduction in step 2 will still work in this setting.)

We can then establish the validity of a first-order sentence \(A\) by putting its negation in the appropriate form and searching for a refutation. Let us start with the first step.

14.1. Resolution

Suppose \(\fa {\vec x} A\) is a universally quantified sentence in first-order logic, where \(A\) is quantifier free. We can put \(A\) in conjunctive normal form and write it as \(A_1 \land \ldots \land A_n\), where each \(A_i\) is a clause. In that case, \(\fa {\vec x} A\) is equivalent to

\[(\fa {\vec x} A_1) \land \ldots \land (\fa {\vec x} A_n).\]

You should think about why this is true, that is, why any model of the first is a model of the second and vice-versa. As a result, to show that a set \(\Gamma\) of universally quantified sentences is unsatisfiable, it suffices to have a refutationally complete procedure for sets of universally quantified clauses.

To get a sense of how the resolution rule plays out in first-order logic, consider the following two universally quantified clauses:

  • \(\fa {x, y} P(f(x), y)\)

  • \(\fa {w, z} \lnot P(w, g(z)) \lor Q(w, z)\)

We can do the following:

  • instantiate \(x\) to \(u\) and \(y\) to \(g(v)\) in the first to get \(P(f(u),g(v))\),

  • instantiate \(w\) to \(f(u)\) and \(z\) to \(v\) in the second to get \(\lnot P(f(u), g(v)) \lor Q(f(u), v)\),

  • resolve, to get \(Q(f(u), v)\), and then

  • generalize, to get \(\fa {u, v} Q(f(u), v)\).

We then say that \(\fa {u, v} Q(f(u), v)\) has been obtained by resolution from the two hypotheses. Notice that the substitution \(\sigma\) given by \(\{x \mapsto u, y \mapsto g(v), w \mapsto f(u), z \mapsto v\}\) is a most general unifier of \(P(f(x), y)\) and \(P(w, g(z))\). (We have renamed \(x\) to \(u\) and \(z\) to \(v\) just for clarity, to make the universal variables in the conclusion fresh; but we could just as well have left them alone.) We could have carried out the procedure with a more specific unifier, but that would have yielded a less general conclusion. To obtain the strongest possible conclusion, it is always best to use a most general unifier, as described in Section 11.4.
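To make the role of unification concrete, here is a minimal sketch of syntactic unification in Python. The representation of terms as nested tuples and the function names are our own choices for illustration, not taken from any particular library; applied to the two atoms above, the sketch produces a most general unifier that agrees with \(\sigma\) up to the renaming of variables.

```python
# Illustrative sketch only: terms are nested tuples of our own devising.
# A variable is ("var", name); an application is (f, t1, ..., tn); a
# constant is an application with no arguments, e.g. ("a",).

def substitute(t, sub):
    """Apply a substitution (a dict from variable names to terms) to a term."""
    if t[0] == "var":
        return substitute(sub[t[1]], sub) if t[1] in sub else t
    return (t[0],) + tuple(substitute(arg, sub) for arg in t[1:])

def occurs(x, t, sub):
    """Check whether the variable named x occurs in t under sub."""
    t = substitute(t, sub)
    if t[0] == "var":
        return t[1] == x
    return any(occurs(x, arg, sub) for arg in t[1:])

def unify(s, t, sub=None):
    """Return a most general unifier of s and t extending sub, or None."""
    sub = dict(sub or {})
    s, t = substitute(s, sub), substitute(t, sub)
    if s[0] == "var":
        if s == t:
            return sub
        if occurs(s[1], t, sub):
            return None
        sub[s[1]] = t
        return sub
    if t[0] == "var":
        return unify(t, s, sub)
    if s[0] != t[0] or len(s) != len(t):
        return None
    for a, b in zip(s[1:], t[1:]):
        sub = unify(a, b, sub)
        if sub is None:
            return None
    return sub

# The atoms P(f(x), y) and P(w, g(z)) from the example above:
x, y, w, z = ("var", "x"), ("var", "y"), ("var", "w"), ("var", "z")
print(unify(("P", ("f", x), y), ("P", w, ("g", z))))
# {'w': ('f', ('var', 'x')), 'y': ('g', ('var', 'z'))}
```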

In the first-order setting, it is common to call a universally quantified clause simply a “clause.” In the literature, the universal quantifiers are often left out, but you should keep in mind that in first-order resolution proofs, the clauses are universally quantified over their variables.

In general, the resolution rule works as follows. Suppose we have two clauses

\[\begin{split}C_1 & = \fa {x_1, \ldots, x_i} \ell \lor \ell_1 \lor \ldots \lor \ell_m \\ C_2 & = \fa {y_1, \ldots, y_j} \ell' \lor \ell'_1 \lor \ldots \lor \ell'_n\end{split}\]

where the literals \(\ell\) and \(\lnot \ell'\) have a most general unifier \(\sigma\). Then resolving \(C_1\) and \(C_2\) on \(\ell\) and \(\ell'\) yields

\[\fa {z_1, \ldots, z_k} \sigma(\ell_1 \lor \ldots \lor \ell_m \lor \ell'_1 \lor \ldots \lor \ell'_n)\]

where \(z_1, \ldots, z_k\) are the variables that remain after the substitution.
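Continuing the sketch above, and reusing `unify` and `substitute` from it, a single binary resolution step can be written as follows. The clause representation, with a literal as a pair of a polarity and an atom, is again just one possible encoding; a real implementation would also rename the two premises apart so that they share no variables.

```python
# A literal is (polarity, atom); a clause is a frozenset of literals, with its
# variables read as implicitly universally quantified.  We assume here that
# the two clauses have no variables in common.

def apply_to_literal(lit, sub):
    pol, atom = lit
    return (pol, substitute(atom, sub))

def resolvents(c1, c2):
    """All clauses obtainable from c1 and c2 by one binary resolution step."""
    out = set()
    for pol1, atom1 in c1:
        for pol2, atom2 in c2:
            if pol1 == pol2:
                continue
            sub = unify(atom1, atom2)
            if sub is None:
                continue
            rest = (c1 - {(pol1, atom1)}) | (c2 - {(pol2, atom2)})
            out.add(frozenset(apply_to_literal(l, sub) for l in rest))
    return out

# The clauses from the first example: forall x y. P(f(x), y)  and
# forall w z. not P(w, g(z)) or Q(w, z).
x, y, w, z = ("var", "x"), ("var", "y"), ("var", "w"), ("var", "z")
c1 = frozenset({(True, ("P", ("f", x), y))})
c2 = frozenset({(False, ("P", w, ("g", z))), (True, ("Q", w, z))})
print(resolvents(c1, c2))
# {frozenset({(True, ('Q', ('f', ('var', 'x')), ('var', 'z')))})},
# that is, forall x z. Q(f(x), z), as in the example up to renaming.
```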

As with propositional logic, a first-order resolution refutation is a proof of the empty clause using the resolution rule. For example, suppose we have the following hypotheses:

  • Every smart and motivated student likes logic.

  • Every student at Carnegie Mellon is smart.

  • At least one student at Carnegie Mellon is motivated.

Let us show that someone likes logic. We introduce predicates \(S(x)\), \(M(x)\), \(L(x)\), and \(C(x)\) for “smart,” “motivated,” “likes logic”, and “is at Carnegie Mellon.” Negating the conclusion and expressing the statements as universally quantified clauses, we get:

  1. \(\fa x \lnot S(x) \lor \lnot M(x) \lor L(x)\).

  2. \(\fa x \lnot C(x) \lor S(x)\).

  3. \(C(a)\).

  4. \(M(a)\).

  5. \(\fa x \lnot L(x)\).

Notice that we have replaced the existential quantifier in the third hypothesis by the constant \(a\); that last step is a special case of Skolemization, which is discussed below. We then split the resulting conjunction into two singleton clauses. We now refute the hypotheses as follows:

  6. \(\fa x \lnot C(x) \lor \lnot M(x) \lor L(x)\), from 1 and 2.

  7. \(\lnot M(a) \lor L(a)\), from 6 and 3.

  8. \(L(a)\), from 7 and 4.

  9. \(\bot\), from 8 and 5.

One thing to keep in mind is that, in general, different choices of literals to resolve on can produce different results. This doesn’t happen in propositional logic; for example, we can resolve \(P \lor Q \lor C\) and \(\lnot P \lor \lnot Q \lor D\) on either \(P\) or \(Q\), but both choices result in a tautology. In contrast, consider the two clauses

\[\begin{split}C_1 & = \fa {x, y} P(f(x), y) \lor Q(x, f(y)) \\ C_2 & = \fa {w, z} \lnot P(w, g(z)) \lor \neg Q(g(w), z).\end{split}\]

Resolving on the first literal gives \(\fa {u,v} Q(u, f(g(v))) \lor \lnot Q(g(f(u)), v)\) and resolving on the second gives \(\fa {u,v} P(f(g(u)), v) \lor \lnot P(u, g(f(v)))\).

Another example (stolen from John Harrison’s book) is the barber paradox. Suppose we are told that in a given town there is a male barber who shaves all and only those men who don’t shave themselves. We obtain a contradiction by asking who shaves the barber. Formally, this amounts to a refutation of the sentence \(\ex x \fa y (S(x, y) \liff \lnot S(y, y))\). To show that this has no model, we can once again replace \(x\) by a constant, \(a\), and refute \(\fa y (S(a, y) \liff \lnot S(y, y))\). Expressing the sentence as a conjunction of clauses, we get:

  • \(\fa y \lnot S(a, y) \lor \lnot S(y,y)\).

  • \(\fa z S(z, z) \lor S(a, z)\).

Instantiating \(y\) and \(z\) to \(a\) in both cases and resolving yields the empty clause.

This example illustrates that when looking for resolvents, it’s not always sufficient to cancel a single literal from each clause; sometimes a resolution step unifies two or more literals in one clause with the negation of a literal in the other. In the example of the barber paradox, if you remove only one disjunct from each clause you are left with a tautology. Thus a proper statement of the resolution rule has us consider clauses \(C_1 = D_1 \lor E_1\) and \(C_2 = D_2 \lor E_2\) where there is a most general unifier for the literals in \(D_1\) and the negations of the literals in \(D_2\). The result of the resolution step applies that unifier to \(E_1 \lor E_2\).
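In the literature, this extra step is usually split off as a separate rule called factoring, which unifies two or more literals of the same polarity within a single clause. Here is a small sketch in the same representation as before, reusing `unify`, `substitute`, and `apply_to_literal` from the earlier sketches:

```python
def factors(clause):
    """The clause itself, plus the clauses obtained by unifying two of its
    literals of the same polarity and applying the unifier."""
    out = {clause}
    lits = list(clause)
    for i in range(len(lits)):
        for j in range(i + 1, len(lits)):
            (pol1, atom1), (pol2, atom2) = lits[i], lits[j]
            if pol1 != pol2:
                continue
            sub = unify(atom1, atom2)
            if sub is not None:
                out.add(frozenset(apply_to_literal(l, sub) for l in clause))
    return out

# The first clause of the barber example: forall y. not S(a, y) or not S(y, y).
y = ("var", "y")
barber = frozenset({(False, ("S", ("a",), y)), (False, ("S", y, y))})
print(factors(barber))
# includes frozenset({(False, ('S', ('a',), ('a',)))}), i.e. not S(a, a)
```

Factoring each of the two barber clauses yields the unit clauses \(\lnot S(a, a)\) and \(S(a, a)\), and a single resolution step on these yields the empty clause.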

14.2. Completeness of resolution

We need to show that the resolution calculus is refutationally complete, which is to say, if a set \(\Gamma\) of universally quantified clauses is unsatisfiable, then there is a proof of the empty clause from the hypotheses in \(\Gamma\). This provides a complete search procedure for establishing the unsatisfiability of \(\Gamma\): we systematically choose pairs of clauses and apply the resolution rule to derive new ones until we find a proof of the empty clause.

It’s important to keep in mind that when \(\Gamma\) is satisfiable, the search can go on forever without terminating. For example, you should convince yourself that if we start with the clause \(\fa x P(x) \lor \lnot P(f(x))\), we can obtain infinitely many clauses

\[\fa y P(y) \lor \lnot P(f(f(y))), \quad \fa z P(z) \lor \lnot P(f(f(f(z)))), \quad \ldots\]

without ever reaching a contradiction. In this case, it’s not hard to see that the original sentence is satisfiable, but in general we can’t avoid some sort of search. The question as to whether a particular Turing machine halts can be encoded as a question about the unsatisfiability of a universal formula, so an algorithm that could determine whether a resolution refutation exists would amount to a solution to the halting problem. At best, we can take comfort in the fact that the procedure is complete: if the hypotheses are unsatisfiable, we will eventually find a refutation.
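To make the shape of the search concrete, here is a naive saturation loop built on the `resolvents` and `factors` functions sketched earlier. It is only an illustration: real provers rename and index clauses much more carefully, discard redundant clauses, and use refined strategies for choosing which pairs to resolve. The round limit reflects the fact, noted above, that the search need not terminate.

```python
import itertools

def rename_apart(clause, suffix):
    """Append suffix to every variable name in the clause."""
    def ren(t):
        if t[0] == "var":
            return ("var", t[1] + suffix)
        return (t[0],) + tuple(ren(arg) for arg in t[1:])
    return frozenset((pol, ren(atom)) for pol, atom in clause)

def saturate(clauses, max_rounds=10):
    """Naive resolution search.  True: the empty clause was derived.
    False: no new clauses can be generated.  None: gave up after max_rounds."""
    known = set()
    for c in clauses:
        known |= factors(c)
    for _ in range(max_rounds):
        new = set()
        for c1, c2 in itertools.product(known, repeat=2):
            # make the variables of the two premises disjoint before resolving
            for r in resolvents(rename_apart(c1, "_1"), rename_apart(c2, "_2")):
                for f in factors(r):
                    if f == frozenset():
                        return True
                    if f not in known:
                        new.add(f)
        if not new:
            return False
        known |= new
    return None
```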

The proof of completeness we are about to give is sketchy, but more detail can be found in John Harrison’s book, Handbook of Practical Logic and Automated Reasoning, on which much of our presentation is based. Given a universally quantified formula \(\fa {x_1, \ldots, x_n} A(x_1, \ldots, x_n)\), a substitution instance, or instance for short, is any formula of the form \(A(t_1, \ldots, t_n)\) where \(t_1, \ldots, t_n\) are terms. Remember that a sentence is a formula without free variables; in the context of resolution, a clause without free variables (that is, a universally quantified clause without any universal quantifiers in front) is said to be ground.

We will assume that the underlying language has at least one constant symbol. (If it doesn’t, just add one.) Our first step is to relate the first-order resolution calculus to the propositional resolution calculus as follows.

Lemma

Let \(\Gamma\) be a set of universally quantified clauses. Suppose there is a propositional resolution proof of a ground clause \(C\) from ground instances of the clauses in \(\Gamma\). Then there is a first-order resolution proof of a universally quantified clause \(C'\) from \(\Gamma\) such that \(C\) is an instance of \(C'\).

In other words, a propositional resolution proof from instances of \(\Gamma\) can be “lifted” to a first-order resolution proof from \(\Gamma\). We prove the lemma by induction on the length of the propositional resolution proof. The base case is when \(C\) is an instance of a formula in \(\Gamma\), in which case the conclusion is immediate. In the induction step, we need to show that if \(C\) is a ground instance of a universally quantified clause \(C'\), \(D\) is a ground instance of a universally quantified clause \(D'\), and \(E\) is obtained from \(C\) and \(D\) by resolution, then there is a universally quantified clause \(E'\) such that \(E\) is an instance of \(E'\) and \(E'\) is obtained from \(C'\) and \(D'\) by the first-order resolution rule. In short, it requires showing that propositional resolution steps on instances can be lifted to first-order resolution steps. The details are a bit technical, but they can be found in Harrison’s book. The argument amounts to little more than writing everything down carefully and reasoning about properties of most general unifiers.

The lemma tells us in particular that if there is a propositional resolution refutation of a set of ground instances of universally quantified clauses in \(\Gamma\), then there is a first-order resolution refutation of \(\Gamma\). Turning it around, if there is no first-order resolution refutation of \(\Gamma\), then there is no propositional resolution refutation of its instances.

Suppose there is no first-order resolution refutation of \(\Gamma\). In general, there may be infinitely many ground instances of formulas in \(\Gamma\), even if \(\Gamma\) is finite. But now suppose we view each atomic formula \(R(t_1, \ldots, t_k)\) as a propositional variable. By the lemma, there is no propositional resolution refutation from any finite set of ground instances of formulas in \(\Gamma\), so, by the refutational completeness of propositional resolution, every such finite set has a satisfying truth assignment. Thus the refutational completeness of first-order resolution follows from the following theorem.

Theorem

Let \(\Gamma\) be a set of universal first-order formulas in a language that contains at least one constant. Suppose every finite set of closed instances of formulas in \(\Gamma\) is satisfiable as a set of propositional formulas. Then there is a model of \(\Gamma\).

This theorem is a version of Herbrand’s theorem. As we have stated it, the theorem says nothing about provability; it is expressed entirely in terms of the semantics of propositional and first-order logic.

Proof

Suppose every finite set of closed instances of the formulas in \(\Gamma\) is satisfiable as a set of propositional formulas. Then, by the compactness theorem for propositional logic, there is a truth assignment \(\tau\) that satisfies all the closed instances of the formulas in \(\Gamma\).

We use this fact to define a first-order model \(\mdl M\) as follows. The universe of \(\mdl M\) is the set of all closed terms in the language of \(\Gamma\). For each constant symbol \(c\), we define \(c^{\mdl M}\) to be the term \(c\). For each function symbol \(f\) of arity \(n\), we define \(f^{\mdl M}(t_1, \ldots, t_n)\) to be the term \(f(t_1, \ldots, t_n)\). In other words, we interpret each function \(f^{\mdl M}\) as the syntactic operation that takes a tuple of terms \(t_1, \ldots, t_n\) and returns the term \(f(t_1, \ldots, t_n)\). Such a model is sometimes called a term model or an Herbrand model. A straightforward induction on terms shows that, in such a model, any closed term \(t\) is interpreted as itself. More generally, if \(\sigma\) is any assignment of terms to variables and \(t\) has the variables \(x_1, \ldots, x_n\) free, then \(\tval{t}_{\mdl M, \sigma}\) is exactly \(t[\sigma(x_1)/x_1, \ldots, \sigma(x_n)/x_n]\).

For each relation symbol \(R\) of arity \(n\), we define \(R^{\mdl M}\) to hold if and only if \(\tau(R(t_1, \ldots, t_n))\) is true. In other words, we look to the truth assignment \(\tau\) to determine the truth of the atomic formulas in the language of \(\Gamma\). Once again, an easy induction shows that for every quantifier-free sentence \(A\), \(\tval{A}_{\mdl M, \sigma} = \tval{A}_\tau\). In other words, first-order evaluation in the model \(\mdl M\) is the same as propositional evaluation under the truth assignment \(\tau\).

We now show that \(\mdl M\) is a model of \(\Gamma\). Suppose \(\fa {x_1, \ldots, x_n} A\) is any formula in \(\Gamma\). Since the universe of \(\mdl M\) consists of closed terms, the semantics of first-order logic tells us that \(\fa {x_1, \ldots, x_n} A\) is true in \(\mdl M\) if and only if for every tuple \(t_1, \ldots, t_n\), \(A\) is true under the assignment \(\{ x_1 \mapsto t_1, \ldots, x_n \mapsto t_n \}\). By the definition of \(\mdl M\), this is the case if and only if \(\tau(A(t_1, \ldots, t_n))\) is true. But this holds, in turn, because \(\tau\) satisfies every closed instance of a formula in \(\Gamma\).
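To get a feel for the universe of such a term model, here is a small sketch that enumerates the closed terms of a signature level by level. The signature in the example is made up for illustration; note that as soon as the language has a function symbol, the set of closed terms, and hence the set of ground instances considered above, is infinite.

```python
from itertools import product

def ground_terms(constants, functions, depth):
    """Closed terms of the signature up to the given nesting depth.
    constants: a list of constant symbols; functions: a dict mapping each
    function symbol to its arity.  Terms are nested tuples, as before."""
    terms = {(c,) for c in constants}
    for _ in range(depth):
        new = set(terms)
        for f, arity in functions.items():
            for args in product(terms, repeat=arity):
                new.add((f,) + args)
        terms = new
    return terms

# A made-up signature with one constant a and one unary function f:
print(sorted(ground_terms(["a"], {"f": 1}, 2)))
# [('a',), ('f', ('a',)), ('f', ('f', ('a',)))]
```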

14.3. Skolemization

We now explain how to apply first-order resolution to arbitrary first-order formulas. We follow a strategy analogous to that used for propositional logic. Remember that the propositional resolution calculus serves to refute sets of propositional clauses, which can be viewed as CNF formulas. To use it as a refutation calculus, we introduced the Tseitin transformation, which converts any propositional formula into an equisatisfiable CNF formula. Our goal here is to show how to convert any first-order formula into an equisatisfiable universal formula. At that point, we can apply the transformation described above to turn it into a set of universally quantified clauses.

Here is the idea. Suppose we are trying to refute a formula of the form \(\fa x \ex y A(x, y)\). From a semantic point of view, we are trying to show that this formula is unsatisfiable. Let \(f(x)\) be a new function symbol. We claim that \(\fa x \ex y A(x, y)\) is equisatisfiable with \(\fa x A(x, f(x))\). As a result, we can refute \(\fa x \ex y A(x, y)\) by refuting \(\fa x A(x, f(x))\). One direction of the claim is easy: if \(\mdl M\) satisfies \(\fa x A(x, f(x))\), then it also satisfies \(\fa x \ex y A(x, y)\), since given any value \(a\) in the universe of \(\mdl M\), if we set \(b\) equal to \(f^{\mdl M}(a)\), then \(A\) holds of \(a\) and \(b\) in \(\mdl M\). For the other direction, suppose \(\mdl M\) is a model of \(\fa x \ex y A(x, y)\). That means that for every value \(a\) in the universe of \(\mdl M\), there is a value \(b\) such that \(A\) holds of \(a\) and \(b\). We can then interpret \(f\) as any function that, for any \(a\), returns such a value \(b\).

More generally, given any formula \(A(x, y)\) and a new function symbol \(f(x)\), we can expand any model for the language of \(A\) with an interpretation of \(f(x)\) satisfying \(\fa x ((\ex y A(x, y)) \liff A(x, f(x)))\). We simply interpret \(f(x)\) as a function that, whenever there is a \(y\) in the model satisfying \(A(x, y)\), returns such a \(y\). Such a function is known as a Skolem function for \(\ex y A(x, y)\), and the process of replacing \(\fa x \ex y A(x, y)\) by \(\fa x A(x, f(x))\) is known as Skolemization. The equivalence \(\fa x ((\ex y A(x, y)) \liff A(x, f(x)))\) is known as the corresponding Skolem axiom. Because we can always choose \(f\) to satisfy the Skolem axiom, we can replace \(\ex y A(x, y)\) with \(A(x, f(x))\) in a set of hypotheses and preserve satisfiability. In general, a formula \(\ex y A\) may have more than one free variable, in which case they should all be arguments to \(f\).

Iteratively eliminating all the existential quantifiers in this way results in a formula in Skolem normal form. Given a formula \(A\), we first put \(A\) in negation normal form, and then replace each existential quantifier by a Skolem function applied to the universally quantified variables that it depends on. For example, suppose we are given the sentence

\[\fa {u, x} \ex y (\fa v Q(x, y, u, v)) \to \fa z \ex w R(x, y, z, w)\]

where \(Q\) and \(R\) are relation symbols. First, we rewrite the sentence as

\[\fa {u, x} \ex y (\ex v \lnot Q(x, y, u, v)) \lor \fa z \ex w R(x, y, z, w).\]

Next, we replace \(y\) by a Skolem function that depends on \(u\) and \(x\):

\[\fa {u, x} (\ex v \lnot Q(x, f(u, x), u, v)) \lor \fa z \ex w R(x, f(u, x), z, w).\]

Then we replace \(v\):

\[\fa {u, x} \lnot Q(x, f(u, x), u, g(u, x)) \lor \fa z \ex w R(x, f(u, x), z, w).\]

Finally, we replace \(w\):

\[\fa {u, x} \lnot Q(x, f(u, x), u, g(u, x)) \lor \fa z R(x, f(u, x), z, h(u, x, z)).\]

We are now left with only universal quantifiers.

In this example, if we bring \(z\) to the front, we get a universally quantified clause:

\[\fa {u, x, z} \lnot Q(x, f(u, x), u, g(u, x)) \lor R(x, f(u, x), z, h(u, x, z)).\]

In general, we may have a more complicated propositional structure, but we can always bring universal quantifiers to the front and put the rest of the formula in conjunctive normal form. Using the identity \((\fa x (A \land B)) \liff (\fa x A) \land (\fa x B)\), we can then reduce the original sentence to a conjunction of universally quantified clauses.
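Here is a compact sketch of the Skolemization step for formulas that are already in negation normal form. The formula encoding and helper names are our own, `substitute` is reused from the unification sketch, and corner cases such as clashes between the generated Skolem names and existing symbols are ignored; the further conversion to universally quantified clauses described in the previous paragraph is not included.

```python
import itertools

fresh = itertools.count()   # source of fresh Skolem names (our own convention)

def skolemize(fm, univ=()):
    """Replace each existential quantifier in an NNF formula by a Skolem term.
    A formula is ("lit", polarity, atom), ("and"/"or", A, B), or
    ("forall"/"exists", x, A); univ lists the universal variables in scope."""
    op = fm[0]
    if op == "lit":
        return fm
    if op in ("and", "or"):
        return (op, skolemize(fm[1], univ), skolemize(fm[2], univ))
    if op == "forall":
        return ("forall", fm[1], skolemize(fm[2], univ + (fm[1],)))
    if op == "exists":
        x, body = fm[1], fm[2]
        sk = ("sk%d" % next(fresh),) + tuple(("var", v) for v in univ)
        return skolemize(subst_var(body, x, sk), univ)
    raise ValueError(op)

def subst_var(fm, x, t):
    """Replace the free variable x by the term t throughout the formula fm."""
    op = fm[0]
    if op == "lit":
        return ("lit", fm[1], substitute(fm[2], {x: t}))
    if op in ("and", "or"):
        return (op, subst_var(fm[1], x, t), subst_var(fm[2], x, t))
    return fm if fm[1] == x else (op, fm[1], subst_var(fm[2], x, t))

# forall x. exists y. P(x, y)   becomes   forall x. P(x, sk0(x))
fm = ("forall", "x", ("exists", "y", ("lit", True, ("P", ("var", "x"), ("var", "y")))))
print(skolemize(fm))
# ('forall', 'x', ('lit', True, ('P', ('var', 'x'), ('sk0', ('var', 'x')))))
```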

As an example of how this plays out in first-order resolution, consider the following hypotheses:

  1. Every student owns either an iphone or a laptop.

  2. Every student who owns a laptop also owns a pair of bluetooth headphones.

  3. Every iphone is buggy.

  4. Every pair of bluetooth headphones is buggy.

  5. Every student who owns something buggy is sad.

From these we should be able to conclude that every student is sad. We can formalize the hypotheses as follows:

  1. \(\fa x \fn{Student}(x) \to \ex y \fn{Owns}(x, y) \land (\fn{Iphone}(y) \lor \fn{Laptop}(y))\)

  2. \(\fa {x, y} \fn{Student}(x) \land \fn{Owns}(x,y) \land \fn{Laptop}(y) \to \ex z \fn{Owns}(x, z) \land \fn{Headphones}(z)\)

  3. \(\fa y \fn{Iphone}(y) \to \fn{Buggy}(y)\)

  4. \(\fa y \fn{Headphones}(y) \to \fn{Buggy}(y)\)

  5. \(\fa {x, y} \fn{Student}(x) \land \fn{Owns}(x, y) \land \fn{Buggy}(y) \to \fn{Sad}(x)\)

The desired conclusion is \(\fa x \fn{Student}(x) \to \fn{Sad}(x)\). You can check that if we negate the conclusion, Skolemize, put the results in CNF, and distribute the universal quantifiers over the resulting conjunctions, we get the following:

  1. \(\fa x \lnot \fn{Student}(x) \lor \fn{Owns}(x, f(x))\)

  2. \(\fa x \lnot \fn{Student}(x) \lor \fn{Iphone}(f(x)) \lor \fn{Laptop}(f(x))\)

  3. \(\fa {x, y} \lnot \fn{Student}(x) \lor \lnot \fn{Owns}(x, y) \lor \lnot \fn{Laptop}(y) \lor \fn{Headphones}(g(x,y))\)

  4. \(\fa {x, y} \lnot \fn{Student}(x) \lor \lnot \fn{Owns}(x, y) \lor \lnot \fn{Laptop}(y) \lor \fn{Owns}(x,g(x,y))\)

  5. \(\fa y \lnot \fn{Iphone}(y) \lor \fn{Buggy}(y)\)

  6. \(\fa y \lnot \fn{Headphones}(y) \lor \fn{Buggy}(y)\)

  7. \(\fa {x, y} \lnot \fn{Student}(x) \lor \lnot \fn{Owns}(x, y) \lor \lnot \fn{Buggy}(y) \lor \fn{Sad}(x)\)

  8. \(\fn{Student}(a)\)

  9. \(\lnot \fn{Sad}(a)\)

We leave it to you to construct a resolution proof of the empty clause from these.

14.4. Adding equality

We have already discussed equational reasoning in Section 12.1. We have seen that the natural rules for equality are given by reflexivity, symmetry, transitivity, and congruence with respect to functions and relations. These can be expressed as rules, but also as first-order axioms:

  • \(\fa x x = x\)

  • \(\fa {x, y} x = y \limplies y = x\)

  • \(\fa {x, y, z} x = y \land y = z \limplies x = z\)

  • \(\fa {x_1, \ldots, x_n, y_1, \ldots, y_n} x_1 = y_1 \land \cdots \land x_n = y_n \limplies f(x_1, \ldots, x_n) = f(y_1, \ldots, y_n)\)

  • \(\fa {x_1, \ldots, x_n, y_1, \ldots, y_n} x_1 = y_1 \land \cdots \land x_n = y_n \land R(x_1, \ldots, x_n) \limplies R(y_1, \ldots, y_n)\).

From these, we can derive substitution for terms and formulas:

  • \(\fa {x, y} x = y \limplies t(x) = t(y)\)

  • \(\fa {x, y} x = y \land A(x) \limplies A(y)\)

Sometimes these are taken as axioms instead of the congruence axioms. Remember that we are adopting the convention that when we use notation like \(t(x)\), we have in mind a certain variable \(z\) that \(t\) might depend on, and we take \(t(x)\) to stand for \(t[x/z]\) and \(t(y)\) to stand for \(t[y/z]\). Similar conventions hold for the formula \(A(x)\).

One way to restore equality to first-order logic is to treat it like any other binary relation symbol, subject to the axioms above. In other words, we take first-order logic with equality to be first-order logic with a binary relation symbol, \(=\), and the axioms above. The refutational completeness of first-order resolution tells us that if we fail to refute a set of first-order formulas together with the equality axioms, there is a model in which the equality symbol is interpreted as an equivalence relation that respects all the function and relation symbols. A method similar to that used in our analysis of congruence closure, namely, replacing terms by equivalence classes, can be used to turn this into a model in which the equality symbol is interpreted as actual equality.
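If we take this route, the axioms can be generated mechanically from the signature as universally quantified clauses and simply added to the clause set before searching for a refutation. Here is a sketch in the clause representation used earlier; the example signature at the end is made up:

```python
def equality_clauses(functions, relations):
    """Equality axioms as clauses: reflexivity, symmetry, transitivity, and a
    congruence clause for each function and relation symbol.  The arguments
    map symbols to arities; clauses are frozensets of (polarity, atom) pairs."""
    def var(name):
        return ("var", name)
    def eq(s, t):
        return ("=", s, t)
    x, y, z = var("x"), var("y"), var("z")
    clauses = [
        frozenset({(True, eq(x, x))}),                                        # x = x
        frozenset({(False, eq(x, y)), (True, eq(y, x))}),                     # symmetry
        frozenset({(False, eq(x, y)), (False, eq(y, z)), (True, eq(x, z))}),  # transitivity
    ]
    for f, n in functions.items():
        xs = [var("x%d" % i) for i in range(n)]
        ys = [var("y%d" % i) for i in range(n)]
        lits = {(False, eq(a, b)) for a, b in zip(xs, ys)}
        lits.add((True, eq((f,) + tuple(xs), (f,) + tuple(ys))))
        clauses.append(frozenset(lits))
    for r, n in relations.items():
        xs = [var("x%d" % i) for i in range(n)]
        ys = [var("y%d" % i) for i in range(n)]
        lits = {(False, eq(a, b)) for a, b in zip(xs, ys)}
        lits.add((False, (r,) + tuple(xs)))
        lits.add((True, (r,) + tuple(ys)))
        clauses.append(frozenset(lits))
    return clauses

# A made-up signature with one unary function f and one binary relation R:
for c in equality_clauses({"f": 1}, {"R": 2}):
    print(c)
```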

For efficiency, however, it is better to add rules to the resolution calculus that are specific to equality. Modern first-order provers use techniques based on paramodulation and superposition. You can learn more about these in a number of online sources.

14.5. Axiomatic proof systems

One can also obtain axiomatic proof systems for first-order logic that extend axiomatic systems for propositional logic with quantifiers and equality. We have already seen suitable axioms for equality. The axioms for the quantifiers follow naturally from their meanings:

  • \((\fa x A) \limplies A[t/x]\)

  • \(A[t/x] \limplies \ex x A\).

The first says that if \(A\) holds of everything, then it holds of any particular thing, and the second says that if \(A\) holds of any particular thing, then it holds of something.

These axioms are only half the story, though. The first axiom tells us how to use a statement that starts with a universal quantifier but not how to prove it, and the second one tells us how to prove a statement with an existential quantifier but not how to use it. How do we prove \(\fa x A\)? Establishing \(\fa x A\) involves showing that \(A\) holds of an arbitrary value of \(x\). To do that, we let \(x\) be arbitrary, and prove \(A\). This suggests the following rule of generalization:

From \(A\), conclude \(\fa x A\).

The actual rule we use is a generalization of this: if we have shown that \(A\) follows from \(B\), and \(B\) doesn’t say anything about \(x\), then we have shown that \(B\) implies that \(A\) holds for any \(x\) at all.

  • From \(B \limplies A\) conclude \(B \limplies \fa x A\), assuming \(x\) is not free in \(B\).

The dual rule for the existential quantifier says that if \(B\) follows from the assumption that \(A\) holds of some \(x\), then \(B\) follows from the assumption that there exists an \(x\) satisfying \(A\).

  • From \(A \limplies B\) conclude \((\ex x A) \limplies B\), assuming \(x\) is not free in \(B\).

14.6. A sequent calculus

We can also extend the cut-free sequent calculus described in Section 8.5 to first-order logic. Remember that the system derives sets of formulas \(\Gamma\) in negation-normal form. We interpret a proof as telling us that in any model, and with any assignment to the free variables, at least one of the formulas in \(\Gamma\) is true. In the case of propositional logic, we took the axioms to be finite sets \(\Gamma\) that contain a complementary pair of atoms \(P\) and \(\lnot P\). In the presence of equality, we now take the axioms to be sets \(\Gamma\) such that the set of negations of those formulas can be refuted using equational reasoning. For example, \(\lnot P(a), a \ne b, P(b)\) is an axiom, because equational reasoning refutes the set \(P(a), a = b, \lnot P(b)\). We can determine whether a finite set of formulas is an axiom using congruence closure.

The other rules of the system are as follows:

\[\begin{prooftree} \AXC{$\Gamma, A$} \AXC{$\Gamma, B$} \BIC{$\Gamma, A \land B$} \end{prooftree} \quad\quad \begin{prooftree} \AXC{$\Gamma, A, B$} \UIC{$\Gamma, A \lor B$} \end{prooftree} \] \[\begin{prooftree} \AXC{$\Gamma, A$} \UIC{$\Gamma, \fa x A$} \end{prooftree} \quad\quad \begin{prooftree} \AXC{$\Gamma, A[t/x]$} \UIC{$\Gamma, \ex x A$} \end{prooftree} \]

In the rule for the universal quantifier, we require that \(x\) is not free in any formula in \(\Gamma\). This is analogous to the requirement in the previous section that \(x\) is not free in the formula \(B\) in the generalization rule. You can think of it as saying that \(x\) is really arbitrary.

As in Section 8.5, we can add the cut rule, which represents a form of modus ponens. In class, we will sketch a proof that this system is complete even without the cut rule.

14.7. Natural deduction

Finally, we can extend the system of natural deduction of Section 8.6. The quantifier rules are as follows:

\[\begin{prooftree} \AXC{$\Gamma, A[t/x] \fCenter B$} \UIC{$\Gamma, \fa x A \fCenter B$} \end{prooftree} \quad \quad \begin{prooftree} \AXC{$\Gamma \fCenter A$} \UIC{$\Gamma \fCenter \fa x A$} \end{prooftree} \] \[\begin{prooftree} \AXC{$\Gamma, A \fCenter B$} \UIC{$\Gamma, \ex x A \fCenter B$} \end{prooftree} \quad \quad \begin{prooftree} \AXC{$\Gamma \fCenter A[t/x]$} \UIC{$\Gamma \fCenter \ex x A$} \end{prooftree} \]

In the right rule for \(\forall\) and the left rule for \(\exists\), the eigenvariable condition amounts to the requirement that \(x\) is not free in any formula other than \(A\). The rules for equality are the same as in Section 12.1, except that in all the rules, we allow a set \(\Gamma\) of formulas on the left side of the sequent.