Formalized mathematics is presented in terms of axiomatic theories and valid deductions therein. The best developed, most precise vehicle for handling axiomatic theories is predicate or first-order logic, as explicated by Frege more than a century ago. It is the baseline language for exact reasoning in mathematics, and a universally accepted currency for exchanging proofs between mathematicians.
Logic comes in various degrees of expressive and deductive strength, but in its most common form, classical predicate logic means
The principle of excluded middle is accepted;
Quantification takes place over elements of basic types (making it first-order; higher-order logic permits quantification over relations between basic types, relations between higher types of relations, etc.);
Provision is made for an equality predicate, which in the common currency is treated as “extensional equality” between terms.
This article aims to describe finitary classical first-order logic with equality from a conceptual point of view. Many textbooks, e.g., Hodges’ A Shorter Model Theory, tend to pass through this material in a hurry, perhaps under the (basically correct) assumption that mathematicians already know how to reason correctly. However, the mathematical structure of predicate logic is rich in categorical significance and has many pleasant offshoots, and shouldn’t be relegated to a hurried recitation of the basic vocabulary of terms, symbols, well-formed formulas and so on.
For those who like condensed summaries, much of what we describe below amounts to a detailed description of the free Boolean hyperdoctrine generated from a signature. We also outline Gentzen sequent calculus as a formal structuring of deductions.
There is a frequently voiced discomfort about logical foundations: that it calls upon the mathematics that it is supposed to be prior to. One is dealing with such mathematical items as the set of words or lists over an alphabet, sets of well-formed expressions, trees which exhibit the structure of a deduction, and so on. Thus, logical foundations appears to assume the prior existence of certain infinite sets, whose manipulation is the business of a set theory, or at the very least a complete formalization of logic would appear to involve some substrate of Peano arithmetic. But wouldn’t a background metatheory of sets or arithmetic then presuppose a logic to handle it correctly? Isn’t logical foundations hopelessly circular?
In spirit, this is like what might be called the “paradox of the dictionary”. A dictionary, real or ideal, defines words in terms of other words. So either a dictionary is hopelessly circular, or some words must be left undefined (permitting some inexactness to creep in). Similarly, it would seem that the structure of logic itself is circular, or needs “undefined” terms, or … perhaps calls on an infinite regress of metatheories?
Our own view is that logical foundations avoids this paradox ultimately by being relentlessly concrete. We may put it this way: logic at the primary level consists of instructions for dealing with formal linguistic items, but the concrete actions for executing those instructions (electrons moving through logic gates, a person unconsciously making an inference in response to a situation) are not themselves linguistic items, not of the language. They are nevertheless as precise and exact as one could wish.
We emphasize this point because in our descriptions below, we obviously must use language to describe logic, and some of this language indeed looks just like the formal mathematics that logic is supposed to be prior to. Nevertheless, the apparent circularity should be considered spurious: it is assumed that the programmer who reads an instruction such as “concatenate a list of lists into a master list” does not need full-blown mathematical language to formalize this, but will be able to translate this directly into actions performed in the real world. However, at the same time, the mathematically literate reader may like having a mathematical meta-layer in which to understand the instructions. The presence of this meta-level should not be a source of confusion, leading one to think we are pulling a circular “fast one”.
Overall, the most useful metaphor might be a computer that can recognize and construct valid deductions in a finitary set theory, such as ETCS. This doesn’t presuppose some infinite sets that somehow reside in the computer. The computer only deals with finite fragments of a theory at a time, using hardware and software of determinate size. The ultimate manifestation of logic is therefore machine-level and extra-linguistic (where the “machine” is a computer or a human who acquires the skill of exact reasoning). Our aim here is not to describe this ultimate concrete manifestation, but rather to present a conceptual overview for mathematically sophisticated readers.
Predicate logic comes in various layers, each layer given in terms of basic data or “generators”, and basic rules or operations for constructing more complicated expressions. There are four layers of data and constructions we deal with, each layer depending on the previous:
Sorts, and rules for constructing types;
Function symbols, and rules for constructing terms;
Relation symbols, and rules for constructing well-formed formulas or predicates;
Axioms, and rules for constructing valid deductions.
The stuff of logic per se is in these rules of construction. An axiomatic theory, on the other hand, is given by a collection of basic data that generates it. In other words, a theory consists of:
An underlying language generated by a signature, i.e., a collection of sorts, function symbols, and (non-equality) relation symbols;
Axioms of the theory, i.e., a collection of sequents $\Gamma \vdash \phi$, where $\phi$ is a formula in the language and $\Gamma$ is a finite list of formulas of the same type as $\phi$.
This follows a familiar generators-and-relations pattern: the signature itself can be viewed as generating a “free theory”, and then one mods out by the filter of deductions that can be derived from the axioms, which is like modding out by relations.
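For concreteness, here is a minimal sketch of this data in Haskell (the names are ours and merely illustrative, not a standard library): a signature together with sequent-shaped axioms.

```haskell
-- A minimal sketch of the data of a theory (hypothetical names throughout).
type Sort = String

-- A function symbol with its arity: a list of input sorts and one output sort.
data FunSym = FunSym { funName :: String, funDom :: [Sort], funCod :: Sort }

-- A (non-equality) relation symbol with its list of argument sorts.
data RelSym = RelSym { relName :: String, relArgs :: [Sort] }

-- A signature: sorts, function symbols, and relation symbols.
data Signature = Signature
  { sigSorts :: [Sort]
  , sigFuns  :: [FunSym]
  , sigRels  :: [RelSym]
  }

-- A sequent Γ ⊢ φ: finitely many premise formulas and one conclusion.
data Sequent formula = Sequent { premises :: [formula], conclusion :: formula }

-- A theory: a signature together with axioms given as sequents.
data Theory formula = Theory { signature :: Signature, axioms :: [Sequent formula] }
```

We continue this running sketch below as the corresponding layers are described.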
Sometimes theories are further classified. For example, a functional theory is one whose signature includes no basic (non-equality) predicates. An equational theory is a functional theory whose only axioms are of the form $\vdash (s = t)$, where $(s = t)$ is an equality predicate between terms. There are Horn theories, geometric theories, and so on, depending on the structure of formulas given in the axioms.
We now discuss each of these layers in turn. We suppose in what follows that the data of a theory (sorts, function symbols, relation symbols, and axioms) has been given.
A basic type is one of a (for us, finite) collection of items called sorts. The basic rule of type construction is that given two types $X, Y$, we may construct a new type $X \times Y$, and there is an empty type $1$. (Thus, we are dealing only with product types.)
The best way to consider types is that they are the objects of the free category with finite cartesian products generated from a discrete category whose objects are the sorts $s \in S$. This may be constructed quite syntactically (and fussily), but however it is constructed, it is equivalent to the category $$(\mathrm{Fin}/S)^{op}$$
where $\mathrm{Fin}$ is the category of finite sets, and $S$ is the finite collection of sorts. For our purposes, it is convenient to take the objects of $\mathrm{Fin}$ to be finite cardinals $[n] = \{1, \ldots, n\}$, so that an object of $\mathrm{Fin}/S$ is a function $w \colon [n] \to S$, or an element of the free monoid $S^*$. Thus, a product type is identified with a word in $S^*$. The category $(\mathrm{Fin}/S)^{op}$ may be regarded as a monoid in the bicategory of spans whose relevant 1-cells are spans from $S^*$ to $S^*$, and we sometimes commit an abuse of language, writing
$$S^* \leftarrow \Delta \rightarrow S^*$$
for the underlying span (the apex $\Delta$ is of course actually the set of morphisms of $(\mathrm{Fin}/S)^{op}$), and at other times writing $\Delta$ for the category $(\mathrm{Fin}/S)^{op}$ itself.
A morphism $w \to v$ in $\Delta$, where $w \colon [n] \to S$ and $v \colon [m] \to S$ are words in $S^*$, is described by a function $\phi \colon [m] \to [n]$ making the diagram
$$\begin{array}{ccccc} [m] & & \overset{\phi}{\longrightarrow} & & [n] \\ & {}_{v}\!\searrow & & \swarrow_{w} & \\ & & S & & \end{array}$$
commute. Alternatively, a morphism is effectively an $S$-indexed collection of functions between finite sets, $\phi_s \colon v^{-1}(s) \to w^{-1}(s)$.
Two words are isomorphic if they are equal when regarded as commutative words, and in this sense we informally (and harmlessly) write $s_1 \times \cdots \times s_n$ for any such product type together with its product structure (the projection maps to individual sorts). From this point of view, a morphism between product types is a product of diagonal maps
$$\prod_{i \in [n]} w(i) \;\longrightarrow\; \prod_{i \in [n]} w(i)^{\phi^{-1}(i)}$$
where for each single-sort factor $s = w(i)$ in the domain, there is a diagonal map $s \to s^{\phi^{-1}(i)}$. (If $\phi^{-1}(i)$ is empty, this diagonal map is actually a projection map $s \to 1$.) In this way, each morphism in $\Delta$ is a “generalized diagonal map”. Such generalized diagonal maps may be considered as operations between types which come “for free”, by virtue of the cartesian structure of product types. These operations are basically what is needed to construct and manipulate equality predicates, as we will discuss below.
However such generalized diagonal maps (i.e., $S$-indexed collections of finite functions) may be registered in a machine, a good modern notation for visualizing them is in terms of string diagrams.
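Continuing the running Haskell sketch (representations are our own choice), a morphism $w \to v$ of $\Delta$ can be stored as the reindexing function $\phi$: each position of the codomain word points back at a sort-matching position of the domain word.

```haskell
-- Generalized diagonal maps, continuing the running sketch (Sort as before).
type Word' = [Sort]   -- a product type: a word in S* ("Word" clashes with Prelude)

-- reindex !! j = i means: the j-th factor of the codomain is copied from
-- the i-th factor of the domain.
data Diag = Diag { domD :: Word', codD :: Word', reindex :: [Int] }

-- Sort compatibility: codD !! j must equal domD !! (reindex !! j) for all j.
wellTyped :: Diag -> Bool
wellTyped (Diag w v phi) =
  length phi == length v &&
  and [ i >= 0 && i < length w && w !! i == s | (s, i) <- zip v phi ]

-- Composition of generalized diagonals is composition of reindexings.
composeD :: Diag -> Diag -> Maybe Diag   -- g . f, defined when cod f == dom g
composeD (Diag v u psi) (Diag w v' phi)
  | v == v'   = Just (Diag w u [ phi !! j | j <- psi ])
  | otherwise = Nothing

-- The diagonal s -> (s, s) and the projection (s, t) -> s, as examples.
diagonalMap :: Sort -> Diag
diagonalMap s = Diag [s] [s, s] [0, 0]

projectionMap :: Sort -> Sort -> Diag
projectionMap s t = Diag [s, t] [s] [0]
```

Under this representation, the diagonal and projection examples correspond to the familiar string-diagram pictures of copying and deleting wires.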
We now describe the terms generated by function symbols. Each function symbol $f$ has an arity $(w, s)$, where $w$ is a type and $s$ is a sort; we write $f \colon w \to s$ to indicate the arity. (The semantics in terms of sets is that an “element” of a product type $w$ is a tuple of multiple inputs of $f$, and an “element” of $s$ is a single output.) In other words, the collection of function symbols forms an $S$-multigraph:
Let $S$ be a set of sorts, and $S^*$ the free monoid on $S$. A multigraph over $S$ is a set $\Sigma$ together with a span $S^* \leftarrow \Sigma \rightarrow S$.
If the domain of a function symbol $c$ is the identity element $e \in S^*$ (the empty word), then we call $c$ a constant.
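In the running sketch, the two legs of the span $S^* \leftarrow \Sigma \rightarrow S$ are just the two arity fields of a function symbol:

```haskell
-- The span S* <- Σ -> S of the multigraph, in the running sketch:
-- the left leg sends a function symbol to its word of input sorts,
-- the right leg to its output sort.
leftLeg :: FunSym -> [Sort]
leftLeg = funDom

rightLeg :: FunSym -> Sort
rightLeg = funCod
```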
Roughly speaking, the terms generated from a multigraph of function symbols are built up by formally plugging outputs of function symbols into inputs of others, and iterating. Allowances are made so that some of the inputs used might be equal, or might not be used at all. We describe this more precisely in two ways, first from the more or less traditional approach of term syntax, and then from a more conceptual categorical point of view.
The usual syntax (which we will only sketch) is via terms $t$, each of a specified sort $s$, which we indicate by writing $t \colon s$. Intuitively, terms of type $s$ are considered “elements of type $s$”. To get started, one supposes given for each sort $s$ an infinite stock of “variables” of that sort, which are to be used as input placeholders in terms. Then general terms are built by syntactically replacing variables with other terms, which themselves have their own variables.
In the traditional approach, terms (relative to a multigraph $\Sigma$ of function symbols over a set of sorts $S$) are introduced in a recursive manner:
Each variable $x$ of sort $s$ is a term $x \colon s$.
Given a function symbol $f \colon s_1 \cdots s_n \to s$, and given terms $t_i \colon s_i$ for $i = 1, \ldots, n$, there is a term $f(t_1, \ldots, t_n) \colon s$.
The last rule includes the case where the domain of $f$ is empty. Thus, if $c$ is a constant function symbol with codomain $s$, there is a corresponding term $c \colon s$, called a constant term of type $s$.
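The two formation rules translate directly into a recursive datatype, continuing the running sketch (Sort and FunSym as before):

```haskell
-- The recursive term syntax, continuing the running sketch.
data Term
  = Var String Sort     -- a variable x : s is a term
  | App FunSym [Term]   -- f(t1, ..., tn); a constant c : s is App c []

-- The sort of a term, read off from the two formation rules.
sortOf :: Term -> Sort
sortOf (Var _ s) = s
sortOf (App f _) = funCod f

-- Well-formedness: the sorts of the arguments must match the arity of f.
wellFormed :: Term -> Bool
wellFormed (Var _ _)  = True
wellFormed (App f ts) = map sortOf ts == funDom f && all wellFormed ts
```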
So a typical term might look like this:
$$f(g(x, y),\; c,\; x) \colon s$$
where $x, y$ denote variables of appropriate sorts and $c$ is a constant. In particular, a variable might appear more than once in a term $t$. In the older logical approaches, one would read off the set of variables of $t$ as the set of variables that appear “inside”; in this example, $\{x, y\}$. In more expressive typed theories, which allow for “function-space” types, one moreover distinguishes between free variables and bound variables inside terms, and one would analogously read off the set of free variables inside a term. Even though we won’t consider theories with function-types here, we will refer to “free” variables even in the “products-only” case.
In more recent approaches to term syntax, one considers that terms might have free variables which aren’t explicitly given within the body of the term. (Intuitively, variables which don’t appear can be considered to have been “projected out”, by means of precomposing a function with a projection map like $X \times Y \to X$.)
An approach that works for “products-only” typed theories and for theories with function-types alike proceeds by giving “terms-in-context”. A term-in-context is an expression
$$x_1 \colon s_1, \ldots, x_n \colon s_n \vdash t \colon s$$
where the context, appearing before the turnstile, lists distinct variable terms and their types that are considered to be “free variables” of $t$. The context in effect prescribes a domain for the term: a term-in-context as above has an arity $s_1 \cdots s_n \to s$; in a set-theoretic interpretation, it gets interpreted as a function of the form
$$X_{s_1} \times \cdots \times X_{s_n} \to X_s$$
(where $X_s$ denotes the set interpreting a sort $s$).
Each variable appearing in $t$ appears within the context, but there may be extra variables in the context which don’t appear in $t$ (ones that are “projected out”). Note that the context is a listing of the free variables in a specific order, so that
$$x \colon s_1, y \colon s_2 \vdash f(x, y) \colon s$$
is considered different to $y \colon s_2, x \colon s_1 \vdash f(x, y) \colon s$ (but each is considered as valid as the other).
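In the running sketch, a term-in-context simply pairs an ordered context with a term, and its arity is read off as just described:

```haskell
-- Terms-in-context, continuing the running sketch (Term, sortOf as before).
data TermInContext = TermInContext
  { context :: [(String, Sort)]   -- x1 : s1, ..., xn : sn (order matters)
  , body    :: Term
  }

-- The arity s1 ... sn -> s prescribed by the context and the term's sort.
arityOf :: TermInContext -> ([Sort], Sort)
arityOf (TermInContext ctx t) = (map snd ctx, sortOf t)
```

Contexts listing the same variables in different orders give different (but equally valid) terms-in-context, exactly as noted above.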
Basic to the enterprise is the notion of substituting terms (in context) for free variables. The idea is that given a term
$$x_1 \colon s_1, \ldots, x_n \colon s_n \vdash t \colon s$$
and given terms $\Gamma_i \vdash t_i \colon s_i$, whose contexts $\Gamma_i$ are abbreviated by capital Greek letters, for $i = 1, \ldots, n$, one may define by recursion a term
$$\Gamma_1, \ldots, \Gamma_n \vdash t[t_1/x_1, \ldots, t_n/x_n] \colon s$$
where the idea is that one substitutes the term $t_i$ for each instance of $x_i$, and then concatenates the contexts $\Gamma_i$ of the $t_i$ into a single context.
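The recursion is short enough to write out, continuing the running sketch. Note that in the products-only fragment there are no binders inside terms, so no variable capture can occur and no renaming is needed here; the concatenation of the contexts $\Gamma_1, \ldots, \Gamma_n$ is left implicit.

```haskell
-- Simultaneous substitution t[t1/x1, ..., tn/xn], continuing the sketch.
substitute :: [(String, Term)] -> Term -> Term
substitute sub (Var x s)  = maybe (Var x s) id (lookup x sub)
substitute sub (App f ts) = App f (map (substitute sub) ts)
```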
Finally there is a global rule of so-called “$\alpha$-conversion”, by which we identify a term with another term obtained by substituting a fresh variable $y$ in for a context variable $x$. (In more formal terms, by performing a substitution in which the term (in context) being substituted is $y \colon s \vdash y \colon s$, provided that $y$ does not already occur in the given term or its context.) This is reasonable, since it is intuitively clear that the semantics of both terms are exactly the same.
In summary, “normal terms” are $\alpha$-equivalence classes of terms-in-context as sketched above.
This concludes our informal description of the term syntax. It should be clear that although the rules of term formation, substitutions, etc. are not hard to understand, the fully formal set-up is a bit fussy, and somewhat heavy on syntax.
There are a number of ways of explaining the same material from a more categorical point of view. Whatever the explanation is, the point is to describe the free category with finite products generated from a multigraph $\Sigma$ over $S$. We denote this free category with products as $\mathbf{Term}(\Sigma)$.
As before, the objects of $\mathbf{Term}(\Sigma)$ are product types: elements of $S^*$. Morphisms from a type $w$ to a type $s_1 \cdots s_m$ are, in the language of the preceding section, $m$-tuples of normal terms $(t_1, \ldots, t_m)$ where each $t_j$ has arity $w \to s_j$. Composition is effected by term substitution. The cartesian product structure is essentially given by concatenating (juxtaposing) lists of sorts and terms.
It is hopefully more or less clear how all of this works in the term syntax approach. Despite this, we believe that the bureaucracy of handling variables in the term syntax is something of a hack; from one point of view (closely related to string diagrams), some of it is actually unnecessary.
For example, the input strings of a string diagram play the role of variables declared in the context, but the difference is that they do not need “variable names” – they only need to be labeled by an appropriate sort, for type-checking purposes. This trick effectively eliminates the need for rules of $\alpha$-conversion. As we will see, one can also effect a neat division of labor between the business of variable declarations and the business of “pure substitutions”; moreover, this division clarifies the precise entry point of the particular doctrine we are working in (the doctrine of finite product categories).
To begin, recall the following abstract definition:
Let $T$ be a cartesian monad acting on a finitely complete category $E$. A $T$-span in $E$ from $b$ to $c$ is a span of the form
$$T b \leftarrow x \rightarrow c.$$
$T$-spans are composed by consideration of a pullback
$$T x \times_{T c} y$$
(of $T x \to T c$, obtained by applying $T$ to the right leg of the first span, against the left leg $y \to T c$ of the second): the left leg of the composite is $T x \times_{T c} y \to T x \to T T b \xrightarrow{\mu b} T b$ and the right leg is $T x \times_{T c} y \to y \to d$, where $\mu$ is the multiplication on the monad $T$. Under this composition, the $T$-spans are 1-cells of a bicategory $T\text{-}\mathrm{Span}$.
In the case where $T$ is the free monoid monad acting on $\mathrm{Set}$, a $T$-span from a set $S$ to $S$ is the same as a multigraph over $S$. A monad on $S$ in the bicategory $T\text{-}\mathrm{Span}$ is a multicategory over $S$. We are especially interested in the free multicategory generated from a multigraph $\Sigma$ over $S$.
The free multicategory construction has other names and descriptions. We could also call it the free nonpermutative $S$-sorted operad generated by a set of $S$-typed function symbols. The apex of its underlying span, together with its map to $S$, can also be referred to as the initial algebra for the polynomial endofunctor on $\mathrm{Set}/S$ which takes an $S$-indexed set $X$ to
$$\Big( 1 + \coprod_{f \colon w \to s \text{ in } \Sigma} X_{w(1)} \times \cdots \times X_{w(n)} \Big)_{s \in S}$$
($n$ being the length of $w$; the summand $1$ accounts for the trivial trees below). However it is named, the underlying multigraph of the free multicategory generated by a multigraph $\Sigma$ can be described as
$$S^* \leftarrow \mathrm{Tree}(\Sigma) \rightarrow S$$
where $\mathrm{Tree}(\Sigma)$ is the set of $\Sigma$-labeled planar trees. This means that
Nodes of a planar tree are labeled by elements $f$ of $\Sigma$;
Edges of that planar tree are labeled by sorts $s \in S$, such that if a node is labeled by $f \colon w \to s$, then the list of labels of incoming edges at that node is the word $w$, and the outgoing edge is labeled $s$.
The list of labels of “leaves” of a $\Sigma$-labeled tree (edges that are not outgoing edges of any node) gives an element of $S^*$, and the label of the “root” edge gives an element of $S$. Notice that $\Sigma$-labeled trees have obvious string diagram representations.
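In the running sketch, $\Sigma$-labeled planar trees and the two legs of their span take the following form (we store only the symbol name and output sort at each node):

```haskell
-- Σ-labeled planar trees, continuing the running sketch (Sort as before).
data Tree = Leaf Sort                 -- a bare edge: the trivial (identity) tree
          | Node String Sort [Tree]   -- symbol name, output sort, subtrees

-- The label of the root edge: an element of S.
rootSort :: Tree -> Sort
rootSort (Leaf s)     = s
rootSort (Node _ s _) = s

-- The labels of the leaves, read left to right: an element of S*.
leafSorts :: Tree -> [Sort]
leafSorts (Leaf s)      = [s]
leafSorts (Node _ _ ts) = concatMap leafSorts ts
```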
Next, any multicategory generates a (strict monoidal) category. In the present instance, we denote this category as $Pro(\Sigma)$ (we write “Pro” by analogy with “prop” – whereas props are used for the doctrine of symmetric monoidal categories, pros are used for monoidal categories). The objects of $Pro(\Sigma)$ are elements of $S^*$. The morphisms of $Pro(\Sigma)$ could be described as “$\Sigma$-labeled forests”. Formally, the underlying span of $Pro(\Sigma)$ is
$$S^* \leftarrow \mathrm{Forest}(\Sigma) \rightarrow S^*$$
where $\mathrm{Forest}(\Sigma)$ is the set of finite lists of $\Sigma$-labeled trees, the left leg giving the concatenation of the lists of leaf labels, and the right leg the list of root labels.
The monoidal product on forests is simply juxtaposition. The composition of forests is the obvious one, plugging in roots of one forest for leaves of another.
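A sketch of forest composition, continuing the Tree sketch above: plugging the roots of one forest into the leaves of another, in order, is a simple recursion that threads a supply of subtrees.

```haskell
-- Σ-labeled forests as morphisms of Pro(Σ), continuing the running sketch.
type Forest = [Tree]

domF, codF :: Forest -> [Sort]
domF = concatMap leafSorts   -- domain: concatenated leaf words
codF = map rootSort          -- codomain: word of root sorts

-- Composition g . f: plug the roots of f into the leaves of g, left to right.
-- A sketch; we assume the composable case codF f == domF g.
graft :: Forest -> Forest -> Forest
graft g f = fst (go g f)
  where
    go []       supply = ([], supply)
    go (t : ts) supply = let (t', supply')   = plug t supply
                             (ts', supply'') = go ts supply'
                         in (t' : ts', supply'')
    -- plug consumes one tree from the supply at each leaf of g
    plug (Leaf _)       (u : rest) = (u, rest)
    plug (Leaf s)       []         = (Leaf s, [])  -- ill-typed input; degenerate
    plug (Node f' s ts) rest       = let (ts', rest') = go ts rest
                                     in (Node f' s ts', rest')
```

The monoidal product is list concatenation, matching the juxtaposition of forests described above.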
The construction $Pro(\Sigma)$ formalizes what we meant earlier by “pure substitutions of terms”, and lives in the doctrine of (strict) monoidal categories. To switch over to the doctrine of categories with finite cartesian products, we apply a simple trick. Let $\mathbf{Term}(\Sigma) = Pro(\Sigma) \circ \Delta$ be the composite of the two spans
$$S^* \leftarrow \Delta \rightarrow S^* \qquad\text{and}\qquad S^* \leftarrow Pro(\Sigma) \rightarrow S^*.$$
Then, in the first place, $\mathbf{Term}(\Sigma)$ carries a canonical category structure. The reason is that, viewing $\Delta$ and $Pro(\Sigma)$ as monads in the bicategory of spans, there is a canonical distributive law
$$\theta \colon \Delta \circ Pro(\Sigma) \to Pro(\Sigma) \circ \Delta$$
(here $Pro(\Sigma)$ can be replaced by any pro over $S$). The idea is that the distributive law maps an element of $\Delta \circ Pro(\Sigma)$, which is a pair of arrows
$$u \xrightarrow{p} v \xrightarrow{d} w$$
with $p$ an arrow of $Pro(\Sigma)$ and $d$ an arrow of $\Delta$, to a pair of arrows in $Pro(\Sigma) \circ \Delta$ whose precise form is dictated by a naturality requirement in $p$. For example, if $d = \delta \colon s \to s s$ is a diagonal and $p \colon u \to s$, then
$$\theta(\delta \circ p) = (p \otimes p) \circ \delta_u$$
where $\delta_u \colon u \to u u$ is the diagonal on the word $u$.
Using this distributive law, the composite $Pro(\Sigma) \circ \Delta$ of the monads $\Delta$ and $Pro(\Sigma)$ is another monad in the bicategory of spans, and therefore a category with set of objects $S^*$.
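The distributive law itself is directly computable in the running sketch, under our chosen representations (a forest as a list of trees, one per letter of the middle word; a generalized diagonal as a reindexing list, as earlier):

```haskell
-- A sketch of the distributive law θ, continuing the running sketch: it takes
-- a forest p : u -> v followed by a generalized diagonal d : v -> w, and
-- returns a generalized diagonal d' : u -> u' followed by a forest p' : u' -> w.
theta :: [Tree]           -- p, one tree per letter of v
      -> [Int]            -- d, sending each position of w to a position of v
      -> ([Int], [Tree])  -- (d', p')
theta p d = (d', p')
  where
    p' = [ p !! i | i <- d ]                  -- copy trees as d directs
    -- offsets of each tree's block of leaves within the word u
    blockSizes = map (length . leafSorts) p
    offsets    = scanl (+) 0 blockSizes
    -- for each copied tree, point its leaves back at the original block in u
    d' = concat [ [ offsets !! i + k | k <- [0 .. blockSizes !! i - 1] ]
                | i <- d ]
```

For instance, with $p$ a single tree on inputs $u$ and $d$ the diagonal $[0,0]$, this returns the pair (diagonal on $u$, two copies of the tree), matching the example $\theta(\delta \circ p) = (p \otimes p) \circ \delta_u$ above.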
The same trick works for other doctrines over the doctrine of monoidal categories. For example, if $Prop(\Sigma)$ is the free symmetric (strict) monoidal category generated by $\Sigma$, regarded as a span from $S^*$ to $S^*$, then there is a distributive law $\Delta \circ Prop(\Sigma) \to Prop(\Sigma) \circ \Delta$. Similarly with “symmetric monoidal” replaced by “braided monoidal”.
$\mathbf{Term}(\Sigma) = Pro(\Sigma) \circ \Delta$ is a cartesian category, with product structure inherited from $\Delta$, meaning that
$$\Delta \cong 1 \circ \Delta \xrightarrow{\eta \circ \Delta} Pro(\Sigma) \circ \Delta = \mathbf{Term}(\Sigma)$$
is a product-preserving functor, where $\eta$ is the unit of the monad $Pro(\Sigma)$.
More explicitly, the tensor product on $\mathbf{Term}(\Sigma)$ is a cartesian product provided that there are natural transformations $\delta_X \colon X \to X \otimes X$ and $\varepsilon_X \colon X \to 1$ which endow each object $X$ with a cocommutative comonoid structure. But the naturality follows from the definition of the distributive law, and the cocommutative comonoid axioms already hold in $\Delta$.
Note that this abstract description of $\mathbf{Term}(\Sigma)$ is identical to that given by the syntax of normal terms. The “bureaucracy of variables” is here organized into two departments, $\Delta$ and $Pro(\Sigma)$, each having its own individual categorical structure, which interact via the distributive law. (This is the “division of labor” we were talking about, where each arrow of $\mathbf{Term}(\Sigma)$ is factorized into a generalized diagonal map followed by a $\Sigma$-labeled forest.)
Another name for $\mathbf{Term}(\Sigma)$ is “term algebra”, and yet another name for it is “the free $S$-sorted Lawvere theory generated by a set of $S$-sorted operation symbols $\Sigma$”.
$\mathbf{Term}(\Sigma)$ is the free category with products generated by the multigraph $\Sigma$ over $S$.
The multicategory of $\Sigma$-labeled trees is the free multicategory generated by the multigraph $\Sigma$ over $S$, and $Pro(\Sigma)$ is the free monoidal category generated by this multicategory. Therefore $Pro(\Sigma)$ is the free monoidal category generated by the multigraph $\Sigma$. It remains to show that the monoidal inclusion
$$Pro(\Sigma) \hookrightarrow \mathbf{Term}(\Sigma)$$
is universal among monoidal functors from $Pro(\Sigma)$ to cartesian monoidal categories $C$.
There is of course a product-preserving functor $G \colon \Delta \to C$ compatible with the restriction of a given monoidal functor $F \colon Pro(\Sigma) \to C$ to objects, since $\Delta$ is the free category with finite products on $S$. At the level of spans, this gives a composable pair of span morphisms
$$Pro(\Sigma) \circ \Delta \xrightarrow{F \circ G} C \circ C$$
which we then compose with the span morphism given by composition in $C$:
$$C \circ C \to C.$$
This gives a morphism between underlying spans, $\hat{F} \colon \mathbf{Term}(\Sigma) \to C$. This is functorial (i.e., is a morphism of monads in $\mathrm{Span}$) because both $F$ and $G$ are functorial, and also because the compositional equalities enforced by the distributive law are preserved: they are taken to equalities expressed by naturality of diagonal maps and projection maps in $C$. The functor $\hat{F}$ is product-preserving, because $G$ is product-preserving. The uniqueness of the product-preserving extension is clear since the subcategories $\Delta$ and $Pro(\Sigma)$ together generate $\mathbf{Term}(\Sigma)$.
This section contains some technical material which will be important for the upcoming discussion of the Beck-Chevalley condition. The point is that the Beck-Chevalley condition is a very powerful principle, and should be applied only to the “right” sorts of pullbacks. The pullbacks we are interested in here can be roughly described as those which are “based on products”; more precisely, as pullbacks that are absolute with respect to the doctrine of finite-product categories, in the sense of the following definition.
Let $C$ be a category with finite products, and let $Q$ be a pullback square in $C$. We say that $Q$ is a (semantically) productive pullback if, for every category $D$ with finite products and every finite-product preserving functor $M \colon C \to D$, the square $M(Q)$ is a pullback square in $D$.
It suffices to test this condition for the case $D = \mathrm{Set}$:
For a pullback square $Q$ in $C$ to be productive, it is necessary and sufficient that every product-preserving functor $M \colon C \to \mathrm{Set}$ take $Q$ to a pullback in $\mathrm{Set}$.
Since the Yoneda embedding $D \to \mathrm{Set}^{D^{op}}$ preserves and reflects pullbacks, and since limits in $\mathrm{Set}^{D^{op}}$ are computed objectwise, the representable functors $D(d, -) \colon D \to \mathrm{Set}$ jointly preserve and reflect pullbacks in $D$. Hence $M(Q)$ is a pullback in $D$ just in case each composite $D(d, -) \circ M \colon C \to \mathrm{Set}$ – which is again product-preserving – takes $Q$ to a pullback in $\mathrm{Set}$.
Example: Every pullback in $\Delta$ is productive. This is intuitively obvious since all morphisms in $\Delta$ are built from product data. More formally, every finite-product preserving functor
$$M \colon \Delta \to \mathrm{Set}$$
is isomorphic to one of the form $\tilde{X} \colon w \mapsto \prod_{j=1}^{n} X(w(j))$, where $\iota \colon S \to \Delta$ is the inclusion of the sorts and $X$ is the restriction $M \circ \iota$. For there is, up to isomorphism, only one extension of $X$ to a product-preserving functor $\Delta \to \mathrm{Set}$, and both $M$ and $\tilde{X}$ are such extensions. On the other hand, notice that $\tilde{X}$ preserves not just finite products but all finite limits: it is the composite of the inclusion $(\mathrm{Fin}/S)^{op} \hookrightarrow (\mathrm{Set}/S)^{op}$ with the representable functor $(\mathrm{Set}/S)(-, X) \colon (\mathrm{Set}/S)^{op} \to \mathrm{Set}$ (regarding $X$ as an object of $\mathrm{Set}/S$), and both of these preserve finite limits. Therefore an arbitrary product-preserving functor $\Delta \to \mathrm{Set}$ preserves any pullback square in $\Delta$.
Example: On the other hand, not every pullback in a term algebra need be productive. For an especially simple example, take a single-sorted theory with a single unary function symbol $f$. The Lawvere theory for a single unary function, whose models are sets equipped with an $\mathbb{N}$-action (i.e., with an endofunction), is the category opposite to the full subcategory of $\mathbb{N}$-sets whose objects are finite coproducts of copies of the representable $\mathbb{N}$, or equivalently opposite to the finite coproduct cocompletion of the one-object category $\mathbb{N}$. Now, writing $x$ for the generating object, the following square is a pushout in the finite coproduct cocompletion:
$$\begin{array}{ccc} x & \overset{f}{\longrightarrow} & x \\ f \downarrow & & \downarrow 1 \\ x & \underset{1}{\longrightarrow} & x \end{array}$$
For, any square in the finite coproduct cocompletion of the form
$$\begin{array}{ccc} x & \overset{f}{\longrightarrow} & x \\ f \downarrow & & \downarrow g \\ x & \underset{h}{\longrightarrow} & \sum_j x \end{array}$$
factors through one of the summands of the lower-right corner (by extensivity of $\mathrm{Set}$, say), so we are reduced to checking that the first square is a pushout in the image of the Yoneda embedding – and it is, because $f$ is cancellable. It follows that the dual of the first square – call this dual square $D$ – is a pullback in the Lawvere theory, i.e., in the opposite category. However, $D$ cannot be preserved by all product-preserving functors $M$ into $\mathrm{Set}$, because there are plenty of models where $f$ is non-injective, i.e., where the square
$$\begin{array}{ccc} M(x) & \overset{1}{\longrightarrow} & M(x) \\ 1 \downarrow & & \downarrow M(f) \\ M(x) & \underset{M(f)}{\longrightarrow} & M(x) \end{array}$$
is not a pullback.
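To see the failure concretely, here is a toy check in the running Haskell sketch (names of our choosing): in the model $M(x) = \mathrm{Bool}$ with $M(f)$ the constant-true endofunction, the kernel pair of $M(f)$ is strictly larger than the diagonal, so the image of the square $D$ is not a pullback in $\mathrm{Set}$.

```haskell
-- The kernel pair {(a, b) | f a == f b} of an endofunction on Bool.
kernelPair :: (Bool -> Bool) -> [(Bool, Bool)]
kernelPair f = [ (a, b) | a <- [False, True], b <- [False, True], f a == f b ]

-- The diagonal, which the square D would force the kernel pair to be.
diagonalPairs :: [(Bool, Bool)]
diagonalPairs = [ (a, a) | a <- [False, True] ]

main :: IO ()
main = do
  print (kernelPair (const True))  -- four pairs
  print diagonalPairs              -- only two: the square is not a pullback
```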
One way of thinking about the last example is that, externally speaking, we can see that $f$ is cancellable (monic) in the term category, but the term category cannot “see” this fact “internally” in the doctrine of finite product categories; otherwise the monicity would be preserved under any interpretation (i.e., in any model $M$).
There is a more verbose (but straightforward) syntactic notion of productive pullback, i.e., a pullback which comes “for free” in any category with finite products. They include the following:
(Compare Seely, section 3.) Let $C$ be a category with finite products. A pullback square in $C$ is syntactically productive if it belongs to the smallest class $\mathcal{P}$ of squares containing squares of form (1)-(4) above and closed under the following operations:
If two composable squares (sharing a common edge, pasted horizontally or vertically) belong to $\mathcal{P}$, then so does their composite;
If a square belongs to $\mathcal{P}$, then so does the square obtained by applying a functor of the form $A \times -$ or $- \times A$ to this square.
It is clear by induction that if $M \colon C \to D$ is a finite-product preserving functor and $Q$ is a syntactically productive pullback in $C$, then $M(Q)$ is a pullback square in $D$. Thus, a syntactically productive square is semantically productive. This may be regarded as a “soundness theorem”.
We aim to prove a “completeness theorem” as well: that every semantically productive pullback square in a term algebra $\mathbf{Term}(\Sigma)$ is syntactically productive.
is clearly a semantically productive pullback. It is syntactically productive because it is a composite of squares of types (1) and (2):
where $!$ denotes the unique map to the terminal object, and we tacitly condense product expressions by erasing empty factors $1$.
Regard each function symbol $f \colon w \to s$ of $\Sigma$ as a morphism of $\mathbf{Term}(\Sigma)$ via the canonical inclusion. Then each morphism in the image of this inclusion is monic in $\mathbf{Term}(\Sigma)$.
Suppose $f \circ g = f \circ h$ for two morphisms $g, h$ of $\mathbf{Term}(\Sigma)$. Each morphism of $\mathbf{Term}(\Sigma)$ has a unique factorization as $p \circ d$, where $p$ belongs to $Pro(\Sigma)$ and $d$ to $\Delta$; we uniquely factorize $g$ as $p \circ d$, and similarly $h$ as $q \circ e$. From $f \circ g = f \circ h$, we derive $d = e$ and $f \circ p = f \circ q$, again by the unique factorization. From $f \circ p = f \circ q$ we derive $p = q$, according to the recursive definition of $\Sigma$-labeled forest. Therefore $g = h$, as desired.
Let $i \colon Pro(\Sigma) \to \mathbf{Term}(\Sigma)$ be the canonical inclusion. Then each morphism in the image of $i$ is monic in $\mathbf{Term}(\Sigma)$.
This follows easily by induction. Each morphism in the image of $i$ is a cartesian product of $\Sigma$-labeled trees, and each non-trivial tree is of the form $f \circ p$ where $f$ is a function symbol and $p$ is a $\Sigma$-labeled forest. Then $f$ is monic by the preceding lemma, and $p$ is monic by induction. Thus $f \circ p$ is monic, and since a cartesian product of monos is monic, this completes the proof.