| logic | category theory | type theory |
|---|---|---|
| true | terminal object/(-2)-truncated object | h-level 0-type/unit type |
| false | initial object | empty type |
| proposition | (-1)-truncated object | h-proposition, mere proposition |
| cut elimination for implication | counit for hom-tensor adjunction | beta reduction |
| introduction rule for implication | unit for hom-tensor adjunction | eta conversion |
| disjunction | coproduct ((-1)-truncation of) | sum type (bracket type of) |
| implication | internal hom | function type |
| negation | internal hom into initial object | function type into empty type |
| universal quantification | dependent product | dependent product type |
| existential quantification | dependent sum ((-1)-truncation of) | dependent sum type (bracket type of) |
| equivalence | path space object | identity type |
| equivalence class | quotient | quotient type |
| induction | colimit | inductive type, W-type, M-type |
| higher induction | higher colimit | higher inductive type |
| completely presented set | discrete object/0-truncated object | h-level 2-type/preset/h-set |
| set | internal 0-groupoid | Bishop set/setoid |
| universe | object classifier | type of types |
| modality | closure operator, monad | modal type theory, monad (in computer science) |
| linear logic | (symmetric, closed) monoidal category | linear type theory/quantum computation |
| proof net | string diagram | quantum circuit |
| (absence of) contraction rule | (absence of) diagonal | no-cloning theorem |
expressing the idea that some statement must be true if every statement is true. The original sequent calculi were developed by Gerhard Gentzen in 1933.
A central property of sequent calculi, which distinguishes them from systems of natural deduction, is that they allow an easier analysis of proofs: the subformula property that they enjoy allows easy induction over proof steps. The reason is roughly that, in the language of natural deduction, in sequent calculus “every rule is an introduction rule”, introducing a term on either side of a sequent with no elimination rules. This means that, working backward, every “un-application” of such a rule makes the sequent necessarily simpler.
We will start with an arbitrary signature $\Sigma$; the simplest case (for propositional logic) uses the empty signature (with no types). We will assume unlimited variables for propositions and unlimited variables for each type.
We will be defining several terms below; but keep in mind that these are definitions for sequent calculus specifically, as these terms may also be used more generally.
For the empty signature, the only context is empty; if $\Sigma$ consists of a single type, then a context is effectively a list of variables.
Given a context $\Gamma$ over $\Sigma$, a cedent in $\Gamma$ over $\Sigma$ is simply a list
$$ p_1, \ldots, p_n $$
of (formulas for) propositions in $\Gamma$ over $\Sigma$.

(Recall that a proposition in the context $\Gamma$ allows the variables introduced by $\Gamma$, but no others, to be free variables. Such a proposition in context is also called a predicate.)

A sequent over $\Sigma$ is then written
$$ p_1, \ldots, p_m \vdash_\Gamma q_1, \ldots, q_n ,$$
where $\Gamma$ is the context, with the antecedent $p_1, \ldots, p_m$ on the left and the succedent $q_1, \ldots, q_n$ on the right.
If $\Sigma$ has a single type (a so-called ‘untyped’ signature) and one tacitly assumes that the single type is inhabited, then one may (up to equivalence in a certain sense) reconstruct the context of any sequent from the cedents, by examining which variables appear; this is even possible if $\Sigma$ has more than one type (all assumed inhabited), although more difficult if certain abuses of notation are used. Then the context can be ignored. This is not really the right way to do things, but (especially for untyped signatures) it is traditional.
Confusingly, the cedents are sometimes also called ‘contexts’; then one speaks of the left context (the antecedent) and the right context (the succedent). This is most easily done when ignoring what we have been calling the context; I'm not even sure what term would then be used for that context (maybe the type context?). See also the interpretation of sequents as hypothetical judgements (which I hope to write soon), which shows how the type context, antecedent, and succedent are all used to form the context of a related hypothetical judgement.
One also sometimes uses ‘antecedent’ or ‘succedent’ for an individual proposition on the relevant side of the sequent; see also the use of ‘consequent’ in minimal sequents below.
We will explain below how the sequent calculus may be used to prove theorems from the axioms; each theorem will also be a sequent.
We may also consider particular kinds of sequents or theories, by form or content.
A closed sequent is one in which the context is empty; a closed theory is one in which every sequent is closed.
An intuitionistic sequent is one in which the succedent consists of at most one proposition; an intuitionistic theory is one in which every sequent is intuitionistic.
A minimal sequent is one in which the succedent consists of exactly one proposition (then called the consequent); a minimal theory is one in which every sequent is minimal.
A dual-intuitionistic sequent is one in which the antecedent consists of at most one proposition; a dual-intuitionistic theory is one in which every sequent is dual-intuitionistic.
A dual-minimal sequent is one in which the antecedent consists of exactly one proposition; a dual-minimal theory is one in which every sequent is dual-minimal.
A classical sequent is simply a sequent, emphasising that we are not restricting to (dual)-intuitionistic or (dual)-minimal sequents; similarly, a classical theory is any theory.
These (except the first) are so-called because of their relationship to intuitionistic logic, minimal logic, classical logic, etc. Any of these logics may be presented using any sort of sequent, but Gentzen's original sequent calculi presented each logic using only corresponding sequents.
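The shape restrictions above are purely structural: they depend only on the lengths of the two cedents. As an illustration, here is a minimal Python sketch (the function name and representation are assumptions, not a standard API) that classifies a sequent given its antecedent and succedent as lists of propositions:

```python
def classify(antecedent, succedent):
    """Return the shape-based labels that apply to a sequent,
    given its antecedent and succedent as lists of propositions."""
    labels = {"classical"}  # every sequent is a classical sequent
    if len(succedent) <= 1:
        labels.add("intuitionistic")
    if len(succedent) == 1:
        labels.add("minimal")
    if len(antecedent) <= 1:
        labels.add("dual-intuitionistic")
    if len(antecedent) == 1:
        labels.add("dual-minimal")
    return labels

# Example: p, q |- r is intuitionistic and minimal, but not dual-minimal,
# since its antecedent has two propositions.
print(classify(["p", "q"], ["r"]))
```

Note that a minimal sequent is automatically intuitionistic, and a dual-minimal sequent automatically dual-intuitionistic, exactly as in the definitions above.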
A regular sequent is one in which every proposition is given by a formula in regular logic; a regular theory is one in which every sequent is regular.
A coherent sequent is one in which every proposition is given by a formula in coherent logic; a coherent theory is one in which every sequent is coherent.
Obviously, this list could be continued.
The basic building blocks of a proof in sequent calculus are inference rules or rewrite rules, which are rules for inferring the validity of certain sequents from other sequents. One writes
$$ \frac{J_1 \quad \cdots \quad J_n}{J} ,$$
where $J_1, \ldots, J_n$ is a (possibly empty) list of sequents and $J$ is a single sequent, to indicate that from the validity of the sequents $J_1, \ldots, J_n$ that of $J$ may be inferred, or that $J_1, \ldots, J_n$ may be rewritten to $J$.
A derivation or proof tree is a compound of rewrite rules, such as
$$ \frac{\dfrac{\dfrac{}{p \vdash p}}{p \wedge q \vdash p}}{\vdash (p \wedge q) \Rightarrow p} $$
(here an identity rule, a conjunction rule, and a conditional rule, all given below, with the context suppressed).
We will give the rules in several classes.
The identity rule is
$$ \frac{}{p \vdash_\Gamma p} ,$$
stating that, in any context $\Gamma$, any proposition $p$ in $\Gamma$ follows from itself, always.
The substitution rule is
$$ \frac{\Phi \vdash_\Gamma \Psi}{\Phi[\vec{t}/\vec{x}] \vdash_\Delta \Psi[\vec{t}/\vec{x}]} ,$$
where $\vec{t}$ is any interpretation of $\Gamma$ in $\Delta$. Explicitly, such an interpretation is a list of terms $t_1, \ldots, t_n$ (of the same length as the list $x_1, \ldots, x_n$ which is the context $\Gamma$), where each term $t_i$ is a term over $\Sigma$ of type $A_i$ in the context $\Delta$. Then $p[\vec{t}/\vec{x}]$ is obtained from $p$ by substituting each $t_i$ for the corresponding $x_i$, and $\Phi[\vec{t}/\vec{x}]$ (and $\Psi[\vec{t}/\vec{x}]$) are obtained by applying this substitution to every proposition in the list. Of course, this rule is vacuous if $\Sigma$ has no terms.
The cut rule is
$$ \frac{\Phi \vdash_\Gamma \Psi, p \qquad p, \Phi \vdash_\Gamma \Psi}{\Phi \vdash_\Gamma \Psi} ;$$
the proposition $p$ has been ‘cut’.
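As a sanity check on the bookkeeping, the cut rule can be sketched as an operation on sequents represented as pairs of lists. The function below (all names are illustrative assumptions) implements the multiplicative variant of cut, in which the side propositions of the two premises are concatenated rather than required to coincide:

```python
def cut(seq1, seq2, p):
    """Given sequents Phi |- Psi, p  and  p, Phi' |- Psi'
    (as (antecedent, succedent) pairs of lists), cut the
    proposition p and return Phi, Phi' |- Psi, Psi'."""
    (phi, psi_p), (p_phi2, psi2) = seq1, seq2
    assert psi_p and psi_p[-1] == p, "p must end the first succedent"
    assert p_phi2 and p_phi2[0] == p, "p must begin the second antecedent"
    return (phi + p_phi2[1:], psi_p[:-1] + psi2)

# Cutting q from (p |- q) and (q |- r) yields (p |- r),
# the composition of the two proofs.
print(cut((["p"], ["q"]), (["q"], ["r"]), "q"))  # (['p'], ['r'])
```

On intuitionistic sequents this is exactly composition: a proof of $q$ from $p$ composed with a proof of $r$ from $q$ gives a proof of $r$ from $p$.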
The exchange rules are
$$ \frac{\Phi, p, q, \Phi' \vdash_\Gamma \Psi}{\Phi, q, p, \Phi' \vdash_\Gamma \Psi} \qquad \frac{\Phi \vdash_\Gamma \Psi, p, q, \Psi'}{\Phi \vdash_\Gamma \Psi, q, p, \Psi'} ;$$
adjacent propositions may be swapped on either side.
The weakening rules are
$$ \frac{\Phi \vdash_\Gamma \Psi}{p, \Phi \vdash_\Gamma \Psi} \qquad \frac{\Phi \vdash_\Gamma \Psi}{\Phi \vdash_\Gamma \Psi, p} ;$$
more generally, we may insert any sequence of propositions anywhere in the antecedent and succedent.
The contraction rules are
$$ \frac{p, p, \Phi \vdash_\Gamma \Psi}{p, \Phi \vdash_\Gamma \Psi} \qquad \frac{\Phi \vdash_\Gamma \Psi, p, p}{\Phi \vdash_\Gamma \Psi, p} ;$$
in the absence of the exchange rule, it is important that we can remove duplications only one proposition at a time (as shown here) and only if they are adjacent (as shown here).
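Since exchange, weakening, and contraction act only on the positions of propositions within a single cedent, they can be sketched as plain list operations. The following is a hypothetical illustration (not a standard API), with each function acting on one cedent:

```python
def exchange(cedent, i):
    """Swap the adjacent propositions at positions i and i+1."""
    c = list(cedent)
    c[i], c[i + 1] = c[i + 1], c[i]
    return c

def weaken(cedent, i, p):
    """Insert the proposition p at position i."""
    return cedent[:i] + [p] + cedent[i:]

def contract(cedent, i):
    """Remove one of two adjacent duplicate propositions at i, i+1."""
    assert cedent[i] == cedent[i + 1], "contraction needs adjacent duplicates"
    return cedent[:i] + cedent[i + 1:]
```

The `assert` in `contract` reflects the remark above: without exchange, contraction applies only to adjacent duplicates.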
Some or all of these rules may be dropped in a substructural logic. However, even so, the first three rules can generally be proved (except for the identity rule for atomic propositions); see cut elimination.
The truth rule is
$$ \frac{}{\Phi \vdash_\Gamma \Psi, \top, \Psi'} ;$$
that is, any sequent is valid if the necessarily true statement $\top$ is in the succedent.

Dually, the falsehood rule is
$$ \frac{}{\Phi, \bot, \Phi' \vdash_\Gamma \Psi} ;$$
that is, any sequent is valid if the necessarily false statement $\bot$ is in the antecedent.
The binary conjunction rules are
$$ \frac{\Phi, p, \Phi' \vdash_\Gamma \Psi}{\Phi, p \wedge q, \Phi' \vdash_\Gamma \Psi} \qquad \frac{\Phi, q, \Phi' \vdash_\Gamma \Psi}{\Phi, p \wedge q, \Phi' \vdash_\Gamma \Psi} $$
on the left and
$$ \frac{\Phi \vdash_\Gamma \Psi, p, \Psi' \qquad \Phi \vdash_\Gamma \Psi, q, \Psi'}{\Phi \vdash_\Gamma \Psi, p \wedge q, \Psi'} $$
on the right.
Dually, the binary disjunction rules are
$$ \frac{\Phi, p, \Phi' \vdash_\Gamma \Psi \qquad \Phi, q, \Phi' \vdash_\Gamma \Psi}{\Phi, p \vee q, \Phi' \vdash_\Gamma \Psi} $$
on the left and
$$ \frac{\Phi \vdash_\Gamma \Psi, p, \Psi'}{\Phi \vdash_\Gamma \Psi, p \vee q, \Psi'} \qquad \frac{\Phi \vdash_\Gamma \Psi, q, \Psi'}{\Phi \vdash_\Gamma \Psi, p \vee q, \Psi'} $$
on the right.
The negation rules are
$$ \frac{\Phi \vdash_\Gamma \Psi, p}{\neg{p}, \Phi \vdash_\Gamma \Psi} \qquad \frac{p, \Phi \vdash_\Gamma \Psi}{\Phi \vdash_\Gamma \Psi, \neg{p}} ;$$
in other words, a proposition may be moved to the other side if it is replaced by its negation.
The conditional rules are
$$ \frac{\Phi \vdash_\Gamma \Psi, p \qquad q, \Phi \vdash_\Gamma \Psi}{p \Rightarrow q, \Phi \vdash_\Gamma \Psi} \qquad \frac{p, \Phi \vdash_\Gamma \Psi, q}{\Phi \vdash_\Gamma \Psi, p \Rightarrow q} .$$
Dually, the relative complement rules are
$$ \frac{p, \Phi \vdash_\Gamma \Psi, q}{p \setminus q, \Phi \vdash_\Gamma \Psi} \qquad \frac{\Phi \vdash_\Gamma \Psi, p \qquad q, \Phi \vdash_\Gamma \Psi}{\Phi \vdash_\Gamma \Psi, p \setminus q} .$$
The equality rules are
$$ \frac{}{\vdash_\Gamma t = t} \qquad \frac{\Phi[s/x] \vdash_\Gamma \Psi[s/x]}{s = t, \Phi[t/x] \vdash_\Gamma \Psi[t/x]} ,$$
where $t$ is any term of any type in the context $\Gamma$.
One could (but rarely does) introduce dual apartness rules.
The universal quantification rules are
$$ \frac{p[t/x], \Phi \vdash_\Gamma \Psi}{(\forall x\colon A.\; p), \Phi \vdash_\Gamma \Psi} ,$$
where $t$ is a term of type $A$ in the context $\Gamma$ and $p$ is a proposition in the extended context $\Gamma, x\colon A$, and
$$ \frac{\hat{\Phi} \vdash_{\Gamma, x\colon A} \hat{\Psi}, p}{\Phi \vdash_\Gamma \Psi, \forall x\colon A.\; p} ,$$
where $\hat{p}$ is the proposition in the context $\Gamma, x\colon A$ produced by weakening (in the type context) the proposition $p$ in the context $\Gamma$ (and similarly for a list $\hat{\Phi}$ of propositions).
Dually, the existential quantification rules are
$$ \frac{\hat{\Phi}, p \vdash_{\Gamma, x\colon A} \hat{\Psi}}{\Phi, (\exists x\colon A.\; p) \vdash_\Gamma \Psi} \qquad \frac{\Phi \vdash_\Gamma \Psi, p[t/x]}{\Phi \vdash_\Gamma \Psi, \exists x\colon A.\; p} .$$
We have written the rules additively, which works best when using (dual)-intuitionistic or (dual)-minimal sequents, but it is also possible to write them in a slightly different multiplicative manner. This makes a difference if the weakening rule or the contraction rule is abandoned; linear logic uses both forms, but for different connectives. Similarly, we have written the rules to apply as much as possible without the exchange rule, but the remaining asymmetries (in the rules for negation, the conditional, the relative complement, and the quantifiers) mean that these diverge into left and right operations in noncommutative logic. (Compare the difference between left duals and right duals in a non-symmetric monoidal category.)
Gentzen originally did not include $\top$ or $\bot$, but if $\top$ is defined as $p \Rightarrow p$ for any atomic $p$ and $\bot$ is defined as $\neg{\top}$, then their rules can be proved. Similarly, he did not use $\setminus$; we may define $p \setminus q$ to mean $p \wedge \neg{q}$. It would also be possible to leave out $\Rightarrow$, defining $p \Rightarrow q$ as $\neg(p \setminus q)$. With or without these optional operations and rules, the resulting logic is classical.
If we use only those rules that can be stated using intuitionistic sequents, then the result is intuitionistic logic; this is again true with or without $\top$, $\bot$, or $\setminus$. However, if we leave out $\Rightarrow$, then we cannot reconstruct it; the definition of $\Rightarrow$ using $\setminus$ and $\neg$ is not valid intuitionistically. On the other hand, if we include $\top$ and $\bot$ but leave out $\neg$, $\Rightarrow$, and $\setminus$, then we get intuitionistic logic using all (classical) sequents. (Conversely, we could get classical logic using only intuitionistic sequents and adding the law of excluded middle as an axiom.)
If we use only those rules that can be stated using minimal sequents (so necessarily leaving out $\neg$ and $\setminus$), then the result is still intuitionistic logic; but if we also leave out $\bot$, then the result is minimal logic. Dual results hold for dual-intuitionistic and dual-minimal sequents.
The cut rule expresses the composition of proofs. Gentzen’s main result (Gentzen’s Hauptsatz) is that any derivation that uses the cut rule can be transformed into one that doesn’t – the cut-elimination theorem. This yields a normalization algorithm for proofs, which provided much of the inspiration behind Lambek’s approach to categorical logic. Similarly, any derivation that uses the identity rule can be transformed into one that uses it only for atomic propositions (those provided by the signature and equality).
The most important property of cut-free proofs is that every formula occurring anywhere in a proof is a subformula of a formula contained in the conclusion of the proof (the subformula property). This makes induction over proof-trees much more straightforward than with natural deduction or other systems.
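The subformula property itself is straightforward to state as a check on proof data. In the sketch below (the names and the tuple encoding of formulas are assumptions for illustration), a formula is either an atom or a tuple whose head is a connective and whose tail lists its immediate subformulas:

```python
def subformulas(f):
    """Yield all subformulas of f, where f is an atom (a string)
    or a tuple (connective, arg1, ..., argn)."""
    yield f
    if isinstance(f, tuple):
        for arg in f[1:]:
            yield from subformulas(arg)

def has_subformula_property(proof_formulas, conclusion_formulas):
    """Check that every formula occurring in the proof is a
    subformula of some formula in the concluding sequent."""
    allowed = set()
    for f in conclusion_formulas:
        allowed.update(subformulas(f))
    return all(f in allowed for f in proof_formulas)

# A cut-free proof of p => q may mention only p, q, and p => q itself.
conclusion = [("imp", "p", "q")]
print(has_subformula_property(["p", "q", ("imp", "p", "q")], conclusion))  # True
```

A cut would break this check: the cut formula $p$ appears in the premises but can vanish from the conclusion, which is exactly why cut-free proofs are the ones amenable to induction.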
See cut elimination (if we ever write it).
For the basic definition of bi-minimal sequents see for instance def. D1.1.5 of

* Peter Johnstone, *Sketches of an Elephant: A Topos Theory Compendium*, Oxford University Press (2002)

and section D1.3 for sequent calculus.
Sequent calculus was introduced by Gerhard Gentzen in his 1933 thesis, published as *Untersuchungen über das logische Schließen* (1934–35), as a formal system for studying the properties of proof systems presented in the style of natural deduction (also introduced by Gentzen). It has since been widely adopted over natural deduction in proof theory.