nLab sequent calculus

Sequent calculus


A sequent calculus is a deductive system for predicate logic (or something similar) in which the basic judgment is a sequent

$$\phi_1, \phi_2, \ldots \vdash \psi_1, \psi_2, \ldots \,,$$

expressing the idea that at least one of the statements $\psi_j$ must be true whenever every statement $\phi_i$ is true. The original sequent calculi were developed by Gerhard Gentzen in 1933, building on anticipatory ideas by Paul Hertz.

A central property of sequent calculi, which distinguishes them from systems of natural deduction, is that they allow an easier analysis of proofs: the subformula property that they enjoy allows easy induction over proof-steps. The reason is, roughly, that (in the language of natural deduction) in sequent calculus “every rule is an introduction rule”, introducing a term on one side or the other of a sequent, and there are no elimination rules. This means that, working backward, every “un-application” of such a rule makes the sequent necessarily simpler.


We will start with an arbitrary signature $\Sigma$; the simplest case (for propositional logic) uses the empty signature (with no types). We will assume unlimited variables for propositions and unlimited variables for each type.


We will be defining several terms below; but keep in mind that these are the definitions used in sequent calculus, since the same terms may also be used more generally.


A context over $\Sigma$ is a (finite) list $\vec{x}\colon \vec{T}$, or

$$x_1\colon T_1, x_2\colon T_2, \ldots \,,$$

where the $x_i$ are variables and the $T_i$ are (formulas for) types over $\Sigma$.

For the empty signature, the only context is empty; if $\Sigma$ consists of a single type, then a context is effectively a list of variables.


Given a context $\Gamma$ over $\Sigma$, a cedent in $\Gamma$ over $\Sigma$ is simply a list $\vec{\phi}$, or

$$\phi_1, \phi_2, \ldots$$

of (formulas for) propositions in $\Gamma$ over $\Sigma$.

(Recall that a proposition in the context $\Gamma$ allows the variables introduced by $\Gamma$, but no others, to appear as free variables. Such a proposition in context is also called a predicate.)


A sequent over $\Sigma$ consists of a context $\Gamma$ over $\Sigma$ and two cedents in $\Gamma$, called the antecedent and the succedent. This is written

$$\vec{\phi} \vdash_{\vec{x}\colon \vec{T}} \vec{\psi} \,,$$

where $\vec{x}\colon \vec{T}$ is the context $\Gamma$, with the antecedent on the left and the succedent on the right.


If $\Sigma$ has a single type (a so-called ‘untyped’ signature) and one tacitly assumes that the single type is inhabited, then one may (up to equivalence in a certain sense) reconstruct the context of any sequent from the cedents, by examining which variables appear; this is even possible if $\Sigma$ has more than one type (all assumed inhabited), although more difficult if certain abuses of notation are used. Then the context can be ignored. This is not really the right way to do things, but (especially for untyped signatures) it is traditional.

Confusingly, the cedents are sometimes also called ‘contexts’; then one speaks of the left context (the antecedent) and the right context (the succedent). This is most easily done when ignoring what we have been calling the context; I'm not even sure what term would then be used for that context (maybe the type context?). See also the interpretation of sequents as hypothetical judgements (which I hope to write soon), which shows how the type context, antecedent, and succedent are all used to form the context of a related hypothetical judgement.

One also sometimes uses ‘antecedent’ or ‘succedent’ for an individual proposition on the relevant side of the sequent; see also the use of ‘consequent’ in minimal sequents below.


A theory over $\Sigma$ is an arbitrary set of sequents over $\Sigma$, called the axioms of the theory.

We will explain below how the sequent calculus may be used to prove theorems from the axioms; each theorem will also be a sequent.

We may also consider particular kinds of sequents or theories, by form or content.

Definitions (by form)

A closed sequent is one in which the context is empty; a closed theory is one in which every sequent is closed.

An intuitionistic sequent is one in which the succedent consists of at most one proposition; an intuitionistic theory is one in which every sequent is intuitionistic.

A minimal sequent is one in which the succedent consists of exactly one proposition (then called the consequent); a minimal theory is one in which every sequent is minimal.

A dual-intuitionistic sequent is one in which the antecedent consists of at most one proposition; a dual-intuitionistic theory is one in which every sequent is dual-intuitionistic.

A dual-minimal sequent is one in which the antecedent consists of exactly one proposition; a dual-minimal theory is one in which every sequent is dual-minimal.

A classical sequent is simply a sequent, emphasising that we are not restricting to (dual)-intuitionistic or (dual)-minimal sequents; similarly, a classical theory is any theory.

These (except the first) are so-called because of their relationship to intuitionistic logic, minimal logic, classical logic, etc. Any of these logics may be presented using any sort of sequent, but Gentzen's original sequent calculi presented each logic using only corresponding sequents.

Definitions (by content)

A regular sequent is one in which every proposition is given by a formula in regular logic; a regular theory is one in which every sequent is regular.

A coherent sequent is one in which every proposition is given by a formula in coherent logic; a coherent theory is one in which every sequent is coherent.

Obviously, this list could be continued.

Rules of proof

The basic building blocks of a proof in sequent calculus are inference rules or rewrite rules, which are rules for inferring the validity of certain sequents from other sequents. One writes

$$\frac{\vec{\sigma}}{\tau}$$

where $\vec{\sigma}$ is a (possibly empty) list of sequents and $\tau$ is a single sequent, to indicate that from the validity of the sequents in $\vec{\sigma}$ that of $\tau$ may be inferred, or that $\vec{\sigma}$ may be rewritten to $\tau$.

A derivation or proof tree is a compound of rewrite rules, such as

$$\frac{\frac{\sigma_1 \;\; \sigma_2}{\sigma_3} \;\; \sigma_4 }{\sigma_5} \,.$$

We will give the rules in several classes.

Rules (structural rules)

The identity rule is

$$\frac{}{\alpha \vdash_{\vec{x}\colon \vec{T}} \alpha} \,,$$

stating that, in any context $\Gamma$, any proposition in $\Gamma$ follows from itself, always.

The substitution rule is

$$\frac{ \vec{\phi} \vdash_{\vec{x}\colon \vec{T}} \vec{\psi} }{ \vec{\phi}[\vec{s}/\vec{x}] \vdash_{\vec{y}\colon \vec{U}} \vec{\psi}[\vec{s}/\vec{x}] } \,,$$

or equivalently

$$\frac{ \vec{\phi} \vdash_{\Gamma} \vec{\psi} }{ a^*\vec{\phi} \vdash_{\Delta} a^*\vec{\psi} } \,,$$

where $a$ is any interpretation of $\Gamma$ in $\Delta$. Explicitly, such an interpretation is a list $\vec{s}$ of terms (of the same length as the list which is the context $\Gamma$), where each term $s_i$ is a term over $\Sigma$ of type $T_i$ in the context $\Delta$. Then $a^*\phi$, or $\phi[\vec{s}/\vec{x}]$, is obtained from $\phi$ by substituting each $s_i$ for the corresponding $x_i$, and $a^*\vec{\phi}$ (and $a^*\vec{\psi}$) are obtained by applying this substitution to every proposition in the list. Of course, this rule is vacuous if $\Sigma$ has no terms.

The cut rule is

$$\frac{ (\vec{\phi} \vdash_{\vec{x}\colon \vec{T}} \vec{\psi}, \alpha) \;\;\; (\alpha, \vec{\chi} \vdash_{\vec{x}\colon \vec{T}} \vec{\omega}) }{ \vec{\phi}, \vec{\chi} \vdash_{\vec{x}\colon \vec{T}} \vec{\psi}, \vec{\omega} } \,;$$

the proposition $\alpha$ has been ‘cut’.

The exchange rules are

$$\frac{ \vec{\phi}, \alpha, \beta, \vec{\psi} \vdash_{\vec{x}\colon \vec{T}} \vec{\chi}, \vec{\omega} }{ \vec{\phi}, \beta, \alpha, \vec{\psi} \vdash_{\vec{x}\colon \vec{T}} \vec{\chi}, \vec{\omega} }$$

and

$$\frac{ \vec{\phi}, \vec{\psi} \vdash_{\vec{x}\colon \vec{T}} \vec{\chi}, \alpha, \beta, \vec{\omega} }{ \vec{\phi}, \vec{\psi} \vdash_{\vec{x}\colon \vec{T}} \vec{\chi}, \beta, \alpha, \vec{\omega} } \,;$$

more generally, we may apply any permutation of the antecedent and succedent (because every permutation is a composite of transpositions).

The weakening rules are

$$\frac{ \vec{\phi}, \vec{\psi} \vdash_{\vec{x}\colon \vec{T}} \vec{\chi} }{ \vec{\phi}, \alpha, \vec{\psi} \vdash_{\vec{x}\colon \vec{T}} \vec{\chi} }$$

and

$$\frac{ \vec{\phi} \vdash_{\vec{x}\colon \vec{T}} \vec{\psi}, \vec{\chi} }{ \vec{\phi} \vdash_{\vec{x}\colon \vec{T}} \vec{\psi}, \alpha, \vec{\chi} } \,;$$

more generally, we may insert any sequence of propositions anywhere in the antecedent and succedent.

The contraction rules are

$$\frac{ \vec{\phi}, \alpha, \alpha, \vec{\psi} \vdash_{\vec{x}\colon \vec{T}} \vec{\chi} }{ \vec{\phi}, \alpha, \vec{\psi} \vdash_{\vec{x}\colon \vec{T}} \vec{\chi} }$$

and

$$\frac{ \vec{\phi} \vdash_{\vec{x}\colon \vec{T}} \vec{\psi}, \alpha, \alpha, \vec{\chi} }{ \vec{\phi} \vdash_{\vec{x}\colon \vec{T}} \vec{\psi}, \alpha, \vec{\chi} } \,;$$

in the absence of the exchange rule, it is important that we can remove duplications only one proposition at a time (as shown here) and only if they are adjacent (as shown here).

Some or all of these rules may be dropped in a substructural logic. However, even so, the first three rules can generally be proved (except for the identity rule for atomic propositions); see cut elimination.
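
For example, the identity, weakening, and exchange rules already combine into small derivations; the following worked proof tree (an illustration, not part of the standard presentation) derives $\beta, \alpha \vdash \alpha$:

```latex
\frac{
  \frac{
    \frac{}{\alpha \vdash_{\vec{x}\colon \vec{T}} \alpha}
    \;\text{(identity)}
  }{\alpha, \beta \vdash_{\vec{x}\colon \vec{T}} \alpha}
  \;\text{(weakening)}
}{\beta, \alpha \vdash_{\vec{x}\colon \vec{T}} \alpha}
\;\text{(exchange)}
```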

Rules (logical rules)

The truth rule is

$$\frac{ }{ \vec{\phi} \vdash_{\vec{x}\colon \vec{T}} \vec{\psi}, \top, \vec{\chi} } \,;$$

that is, any sequent is valid if the necessarily true statement is in the succedent.

Dually, the falsehood rule is

$$\frac{ }{ \vec{\phi}, \bot, \vec{\psi} \vdash_{\vec{x}\colon \vec{T}} \vec{\chi} } \,;$$

that is, any sequent is valid if the necessarily false statement is in the antecedent.

The binary conjunction rules are

$$\frac{ \vec{\phi}, \alpha, \vec{\psi} \vdash_{\vec{x}\colon \vec{T}} \vec{\chi} }{ \vec{\phi}, \alpha \wedge \beta, \vec{\psi} \vdash_{\vec{x}\colon \vec{T}} \vec{\chi} }$$

and

$$\frac{ \vec{\phi}, \beta, \vec{\psi} \vdash_{\vec{x}\colon \vec{T}} \vec{\chi} }{ \vec{\phi}, \alpha \wedge \beta, \vec{\psi} \vdash_{\vec{x}\colon \vec{T}} \vec{\chi} }$$

on the left and

$$\frac{ (\vec{\phi} \vdash_{\vec{x}\colon \vec{T}} \vec{\psi}, \alpha, \vec{\chi}) \;\;\; (\vec{\phi} \vdash_{\vec{x}\colon \vec{T}} \vec{\psi}, \beta, \vec{\chi}) }{ \vec{\phi} \vdash_{\vec{x}\colon \vec{T}} \vec{\psi}, \alpha \wedge \beta, \vec{\chi} }$$

on the right.
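
As a worked example, commutativity of conjunction, $\alpha \wedge \beta \vdash \beta \wedge \alpha$, combines both left rules with the right rule:

```latex
\frac{
  \frac{\frac{}{\beta \vdash_{\vec{x}\colon \vec{T}} \beta}}
       {\alpha \wedge \beta \vdash_{\vec{x}\colon \vec{T}} \beta}
  \;\;\;
  \frac{\frac{}{\alpha \vdash_{\vec{x}\colon \vec{T}} \alpha}}
       {\alpha \wedge \beta \vdash_{\vec{x}\colon \vec{T}} \alpha}
}{\alpha \wedge \beta \vdash_{\vec{x}\colon \vec{T}} \beta \wedge \alpha}
```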

Dually, the binary disjunction rules are

$$\frac{ (\vec{\phi}, \alpha, \vec{\psi} \vdash_{\vec{x}\colon \vec{T}} \vec{\chi}) \;\;\; (\vec{\phi}, \beta, \vec{\psi} \vdash_{\vec{x}\colon \vec{T}} \vec{\chi}) }{ \vec{\phi}, \alpha \vee \beta, \vec{\psi} \vdash_{\vec{x}\colon \vec{T}} \vec{\chi} }$$

on the left and

$$\frac{ \vec{\phi} \vdash_{\vec{x}\colon \vec{T}} \vec{\psi}, \alpha, \vec{\chi} }{ \vec{\phi} \vdash_{\vec{x}\colon \vec{T}} \vec{\psi}, \alpha \vee \beta, \vec{\chi} }$$

and

$$\frac{ \vec{\phi} \vdash_{\vec{x}\colon \vec{T}} \vec{\psi}, \beta, \vec{\chi} }{ \vec{\phi} \vdash_{\vec{x}\colon \vec{T}} \vec{\psi}, \alpha \vee \beta, \vec{\chi} }$$

on the right.

The negation rules are

$$\frac{ \vec{\phi} \vdash_{\vec{x}\colon \vec{T}} \alpha, \vec{\psi} }{ \vec{\phi}, \neg{\alpha} \vdash_{\vec{x}\colon \vec{T}} \vec{\psi} }$$

and

$$\frac{ \vec{\phi}, \alpha \vdash_{\vec{x}\colon \vec{T}} \vec{\psi} }{ \vec{\phi} \vdash_{\vec{x}\colon \vec{T}} \neg{\alpha}, \vec{\psi} } \,;$$

in other words, a proposition may be moved to the other side if it is replaced by its negation.
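
To illustrate how several propositions in the succedent give the calculus its classical strength, here is a worked cut-free derivation of the law of excluded middle, $\vdash \alpha \vee \neg\alpha$ (a standard example; context subscripts are omitted for readability):

```latex
\frac{
 \frac{
  \frac{
   \frac{
    \frac{\frac{}{\alpha \vdash \alpha}\;\text{(identity)}}
         {\vdash \neg\alpha, \alpha}\;\text{(right negation)}
   }{\vdash \neg\alpha, \alpha \vee \neg\alpha}\;\text{(right disjunction)}
  }{\vdash \alpha \vee \neg\alpha, \neg\alpha}\;\text{(right exchange)}
 }{\vdash \alpha \vee \neg\alpha, \alpha \vee \neg\alpha}\;\text{(right disjunction)}
}{\vdash \alpha \vee \neg\alpha}\;\text{(right contraction)}
```

Every intermediate sequent has two propositions in its succedent, so this derivation is unavailable if one restricts to intuitionistic sequents.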

The conditional rules are

$$\frac{ (\vec{\phi} \vdash_{\vec{x}\colon \vec{T}} \alpha) \;\;\; (\beta \vdash_{\vec{x}\colon \vec{T}} \vec{\psi}) }{ \vec{\phi}, \alpha \to \beta \vdash_{\vec{x}\colon \vec{T}} \vec{\psi} }$$

and

$$\frac{ \vec{\phi}, \alpha \vdash_{\vec{x}\colon \vec{T}} \beta, \vec{\psi} }{ \vec{\phi} \vdash_{\vec{x}\colon \vec{T}} \alpha \to \beta, \vec{\psi} } \,.$$
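
For instance, modus ponens, $\alpha, \alpha \to \beta \vdash \beta$, is a single application of the left conditional rule (taking $\vec{\phi}$ to be $\alpha$ and $\vec{\psi}$ to be $\beta$) to two identity axioms:

```latex
\frac{
  \frac{}{\alpha \vdash_{\vec{x}\colon \vec{T}} \alpha}
  \;\;\;
  \frac{}{\beta \vdash_{\vec{x}\colon \vec{T}} \beta}
}{\alpha, \alpha \to \beta \vdash_{\vec{x}\colon \vec{T}} \beta}
```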

Dually, the relative complement rules are

$$\frac{ \vec{\phi}, \alpha \vdash_{\vec{x}\colon \vec{T}} \beta, \vec{\psi} }{ \vec{\phi}, \alpha \setminus \beta \vdash_{\vec{x}\colon \vec{T}} \vec{\psi} }$$

and

$$\frac{ (\vec{\phi} \vdash_{\vec{x}\colon \vec{T}} \alpha) \;\;\; (\beta \vdash_{\vec{x}\colon \vec{T}} \vec{\psi}) }{ \vec{\phi} \vdash_{\vec{x}\colon \vec{T}} \alpha \setminus \beta, \vec{\psi} } \,.$$

The equality rules are

$$\frac{ (\vec{\phi} \vdash_{\vec{x}\colon \vec{T}} \alpha[s/y]) \;\;\; (\alpha[t/y] \vdash_{\vec{x}\colon \vec{T}} \vec{\psi}) }{ \vec{\phi}, (s = t) \vdash_{\vec{x}\colon \vec{T}} \vec{\psi} } \,,$$

where $s$ and $t$ are any terms of the same type $U$ in the context $\vec{x}\colon \vec{T}$ and $\alpha$ is a proposition in the context $(\vec{x}\colon \vec{T}, y\colon U)$, and

$$\frac{ }{ \vec{\phi} \vdash_{\vec{x}\colon \vec{T}} \vec{\psi}, (s = s) } \,,$$

where $s$ is any term of any type in the context $\vec{x}\colon \vec{T}$.

One could (but rarely does) introduce dual apartness rules.
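
As a worked example, symmetry of equality is derived by taking $\alpha$ to be $(y = s)$ in the first equality rule, so that $\alpha[s/y]$ is $(s = s)$ and $\alpha[t/y]$ is $(t = s)$:

```latex
\frac{
  \frac{}{\vdash_{\vec{x}\colon \vec{T}} (s = s)}\;\text{(reflexivity)}
  \;\;\;
  \frac{}{(t = s) \vdash_{\vec{x}\colon \vec{T}} (t = s)}\;\text{(identity)}
}{(s = t) \vdash_{\vec{x}\colon \vec{T}} (t = s)}
```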

The universal quantification rules are

$$\frac{ \vec{\phi}, \alpha[s/y], \vec{\psi} \vdash_{\vec{x}\colon \vec{T}} \vec{\chi} }{ \vec{\phi}, \forall(y\colon U)\, \alpha, \vec{\psi} \vdash_{\vec{x}\colon \vec{T}} \vec{\chi} } \,,$$

where $s$ is a term of type $U$ in the context $\vec{x}\colon \vec{T}$ and $\alpha$ is a proposition in the extended context $(\vec{x}\colon \vec{T}, y\colon U)$, and

$$\frac{ \vec{\phi}[\hat{y}] \vdash_{\vec{x}\colon \vec{T}, y\colon U} \vec{\psi}[\hat{y}], \alpha, \vec{\chi}[\hat{y}] }{ \vec{\phi} \vdash_{\vec{x}\colon \vec{T}} \vec{\psi}, \forall(y\colon U)\, \alpha, \vec{\chi} } \,,$$

where $\phi[\hat{y}]$ is the proposition in the context $(\vec{x}\colon \vec{T}, y\colon U)$ produced by weakening (in the type context) the proposition $\phi$ in the context $\vec{x}\colon \vec{T}$ (and similarly for a list of propositions).
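
For example, universal instantiation at a term $s$ of type $U$ is a one-step consequence of the left rule applied to an identity axiom:

```latex
\frac{
  \frac{}{\alpha[s/y] \vdash_{\vec{x}\colon \vec{T}} \alpha[s/y]}
}{\forall(y\colon U)\, \alpha \vdash_{\vec{x}\colon \vec{T}} \alpha[s/y]}
```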

Dually, the existential quantification rules are

$$\frac{ \vec{\phi}[\hat{y}], \alpha, \vec{\psi}[\hat{y}] \vdash_{\vec{x}\colon \vec{T}, y\colon U} \vec{\chi}[\hat{y}] }{ \vec{\phi}, \exists(y\colon U)\, \alpha, \vec{\psi} \vdash_{\vec{x}\colon \vec{T}} \vec{\chi} }$$

and

$$\frac{ \vec{\phi} \vdash_{\vec{x}\colon \vec{T}} \vec{\psi}, \alpha[s/y], \vec{\chi} }{ \vec{\phi} \vdash_{\vec{x}\colon \vec{T}} \vec{\psi}, \exists(y\colon U)\, \alpha, \vec{\chi} } \,.$$

We have written the rules additively, which works best when using (dual)-intuitionistic or (dual)-minimal sequents, but it is also possible to write them in a slightly different multiplicative manner. This makes a difference if some of the weakening and contraction rules are abandoned; linear logic uses both styles, but for different connectives. Similarly, we have written the rules to apply as much as possible without the exchange rule, but the remaining asymmetries (in the rules for $\neg$, $\to$, $\setminus$, and $=$) mean that these diverge into left and right operations in noncommutative logic. (Compare the difference between left duals and right duals in a non-symmetric monoidal category.)

Gentzen originally did not include $\top$ or $\bot$, but if $\top$ is defined as $\alpha \to \alpha$ for any atomic $\alpha$ and $\bot$ is defined as $\neg\top$, then their rules can be proved. Similarly, he did not use $\setminus$; we may define $\alpha \setminus \beta$ to mean $\alpha \wedge \neg\beta$. It would also be possible to leave out $\to$, defining $\alpha \to \beta$ as $\neg\alpha \vee \beta$. With or without these optional operations and rules, the resulting logic is classical.

If we use only those rules that can be stated using intuitionistic sequents, then the result is intuitionistic logic; this is again true with or without $\top$, $\bot$, or $\setminus$. However, if we leave out $\to$, then we cannot reconstruct it; the definition of $\to$ using $\neg$ and $\vee$ is not valid intuitionistically. On the other hand, if we include $\bot$ (and optionally $\top$) but leave out $\neg$ and $\setminus$, then we get intuitionistic logic using all (classical) sequents except the right conditional rule, which must remain intuitionistic. (Conversely, we could get classical logic using only intuitionistic sequents by adding the law of excluded middle as an axiom.)

If we use only those rules that can be stated using minimal sequents (so necessarily leaving out $\setminus$), then the result is still intuitionistic logic; but if we also leave out $\bot$, then the result is minimal logic. Dual results hold for dual-intuitionistic and dual-minimal sequents.

Cut-free proofs

The cut rule expresses the composition of proofs. Gentzen's main result (his Hauptsatz) is that any derivation that uses the cut rule can be transformed into one that doesn't – the cut-elimination theorem. This yields a normalization algorithm for proofs, which provided much of the inspiration behind Lambek's approach to categorical logic. Similarly, any derivation that uses the identity rule can be transformed into one that uses it only for atomic propositions (those provided by the signature $\Sigma$ and equality).

The most important property of cut-free proofs is that every formula occurring anywhere in a proof is a subformula of a formula contained in the conclusion of the proof (the subformula property). This makes induction over proof-trees much more straightforward than with natural deduction or other systems.
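
The subformula property can be seen concretely in a naive decision procedure for the propositional fragment. The following is a minimal sketch (an illustration, not code from the literature) using an invertible “G3”-style formulation of the rules in which weakening and contraction are absorbed into the logical rules; since backward rule application always passes from a formula to its immediate subformulas, the search terminates.

```python
# Backward proof search for classical propositional sequents.
# Formulas are tuples:
#   ('atom', name), ('not', a), ('and', a, b), ('or', a, b), ('imp', a, b)

def provable(ante, succ):
    """Decide whether the sequent  ante |- succ  is derivable."""
    # identity rule, restricted to atomic propositions
    # (cut elimination shows this restriction loses nothing)
    if any(f in succ for f in ante if f[0] == 'atom'):
        return True
    # left rules: replace a formula in the antecedent by its subformulas
    for i, f in enumerate(ante):
        rest = ante[:i] + ante[i+1:]
        if f[0] == 'not':                     # left negation
            return provable(rest, succ + [f[1]])
        if f[0] == 'and':                     # left conjunction
            return provable(rest + [f[1], f[2]], succ)
        if f[0] == 'or':                      # left disjunction (two premises)
            return (provable(rest + [f[1]], succ) and
                    provable(rest + [f[2]], succ))
        if f[0] == 'imp':                     # left conditional (two premises)
            return (provable(rest, succ + [f[1]]) and
                    provable(rest + [f[2]], succ))
    # right rules: replace a formula in the succedent by its subformulas
    for i, f in enumerate(succ):
        rest = succ[:i] + succ[i+1:]
        if f[0] == 'not':                     # right negation
            return provable(ante + [f[1]], rest)
        if f[0] == 'and':                     # right conjunction (two premises)
            return (provable(ante, rest + [f[1]]) and
                    provable(ante, rest + [f[2]]))
        if f[0] == 'or':                      # right disjunction
            return provable(ante, rest + [f[1], f[2]])
        if f[0] == 'imp':                     # right conditional
            return provable(ante + [f[1]], rest + [f[2]])
    return False                              # only atoms remain; no axiom applies

a, b = ('atom', 'a'), ('atom', 'b')
print(provable([], [('or', a, ('not', a))]))  # excluded middle: True
print(provable([a, ('imp', a, b)], [b]))      # modus ponens: True
print(provable([], [a]))                      # False
```

Because every rule here is invertible, the procedure may decompose the first non-atomic formula it finds without backtracking; completeness for classical propositional logic rests on that invertibility.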

See cut elimination.


Sequent calculus was introduced in:

  • Gerhard Gentzen, Untersuchungen über das logische Schließen I, II, Mathematische Zeitschrift 39 (1935) 176–210, 405–431

which appeared in English translation as:

  • Gerhard Gentzen, Investigations into Logical Deduction, in M. E. Szabo (ed.), The Collected Papers of Gerhard Gentzen, Studies in Logic and the Foundations of Mathematics 55, Springer (1969) 68-131 [ISBN:978-0-444-53419-4, pdf]


Textbook accounts:

For the basic definitions in a context of topos theory:

See also:

Discussion for type theory:

  • Stéphane Lengrand, Roy Dyckhoff, James McKinna, A Sequent Calculus for Type Theory (inria, pdf)

Last revised on September 15, 2023 at 08:32:27. See the history of this page for a list of all contributions to it.