nLab differential linear logic


(work in progress)

Idea

Differential linear logic is an extension of linear logic. New inference rules are added which make it possible to differentiate proofs. Proofs must be understood as morphisms, and the new inference rules are there to differentiate them. It was realised that in some denotational models of linear logic the morphisms of the co-Kleisli category are differentiable, and this was then reflected in the syntax by these new inference rules. Thus, one really needs to take the point of view of computational trinitarianism in order to understand the transition from linear logic to differential linear logic. The proofs of differential linear logic are more difficult to understand naively than those of linear logic.

Differential linear logic can be presented in different guises, depending on whether we include the additives or not, and whether we include promotion or not. We start by presenting the version without additives and with promotion. We will refer to this logic as DiLL.

Syntax

There are two ways to present the syntax in sequent calculus: with bilateral sequents or with monolateral sequents. The first is closer to the language of category theory, but the second makes the link with the syntax of proof nets and halves the number of inference rules. Using the monolateral version (or proof nets) requires quotienting the formulas by some equalities related to negation and De Morgan duality.

Formulas - Generalities

There is nothing new about the formulas compared to linear logic. We will work with the formulas of Multiplicative Exponential Linear Logic (MELL). In the next subsection, the difference between MELL and DiLL will appear as additional inference rules.

Formulas are given by induction starting from atoms and applying connectives. In accordance with the tradition, formulas and atoms are denoted in the same way by upper case letters:

  • Atoms are denoted by upper case letters $A, B, \ldots$.
  • If we have two formulas $A, B$, we can form $A \otimes B$.
  • If we have two formulas $A, B$, we can form $A \parr B$.
  • If we have a formula $A$, we can form $!A$.
  • If we have a formula $A$, we can form $?A$.
  • If we have a formula $A$, we can form $A^{\perp}$.
  • We have the formula $1$.
  • We have the formula $\bot$.
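
For concreteness, here is a minimal sketch (ours, not part of the page) of this inductive definition as a Haskell datatype; general negation is kept as a constructor, matching the list above.

```haskell
-- A minimal sketch of MELL formulas as a datatype, following the
-- inductive definition above (constructor names are ours).
data Formula
  = Atom String              -- an atom A, B, ...
  | Tensor Formula Formula   -- A ⊗ B
  | Parr   Formula Formula   -- A ⅋ B
  | OfCourse Formula         -- !A
  | WhyNot   Formula         -- ?A
  | Neg Formula              -- A^⊥
  | One                      -- 1
  | Bot                      -- ⊥
  deriving (Eq, Show)
```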

Formulas - Quotienting by duality in monolateral syntax

If we want to use the monolateral sequent calculus (or proof nets), formulas must be quotiented by the following equalities:

  • If we have a formula $A$, we put $A^{\perp\perp} = A$.
  • If we have two formulas $A, B$, we put $(A \otimes B)^{\perp} = A^{\perp} \parr B^{\perp}$.
  • If we have two formulas $A, B$, we put $(A \parr B)^{\perp} = A^{\perp} \otimes B^{\perp}$.
  • If we have a formula $A$, we put $(!A)^{\perp} = ?(A^{\perp})$.
  • If we have a formula $A$, we put $(?A)^{\perp} = !(A^{\perp})$.
  • We have $1^{\perp} = \bot$.
  • We have $\bot^{\perp} = 1$.

Notice that some of these equalities are implied by the others, so we could give a shorter list. We have preferred to write directly all the equalities of practical use.
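
As an illustration of how this quotient can be computed (a sketch, assuming formulas are represented by the hypothetical Formula datatype sketched above), one may normalise a formula by pushing negation towards the atoms; two formulas are then identified exactly when they have the same normal form.

```haskell
-- Normalisation by the De Morgan equalities above: push Neg to the atoms.
-- Two formulas are identified by the quotient iff their normal forms coincide.
nnf :: Formula -> Formula
nnf (Neg (Neg a))      = nnf a                                   -- A^⊥⊥ = A
nnf (Neg (Tensor a b)) = Parr   (nnf (Neg a)) (nnf (Neg b))      -- (A ⊗ B)^⊥ = A^⊥ ⅋ B^⊥
nnf (Neg (Parr a b))   = Tensor (nnf (Neg a)) (nnf (Neg b))      -- (A ⅋ B)^⊥ = A^⊥ ⊗ B^⊥
nnf (Neg (OfCourse a)) = WhyNot   (nnf (Neg a))                  -- (!A)^⊥ = ?(A^⊥)
nnf (Neg (WhyNot a))   = OfCourse (nnf (Neg a))                  -- (?A)^⊥ = !(A^⊥)
nnf (Neg One)          = Bot                                     -- 1^⊥ = ⊥
nnf (Neg Bot)          = One                                     -- ⊥^⊥ = 1
nnf (Neg (Atom x))     = Neg (Atom x)                            -- negated atoms are normal
nnf (Tensor a b)       = Tensor (nnf a) (nnf b)
nnf (Parr a b)         = Parr   (nnf a) (nnf b)
nnf (OfCourse a)       = OfCourse (nnf a)
nnf (WhyNot a)         = WhyNot   (nnf a)
nnf f                  = f                                       -- Atom, One, Bot
```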

Inference rules - Bilateral syntax

Inference rules allow us to build proofs. Notice that, compared to MELL, we need the ability to sum proofs. We give an explicit rule for this summation of proofs, as well as a rule for null proofs. In the end, the new inference rules are: codereliction, cocontraction, coweakening, sum of proofs, null proof.

  • Axiom: $\frac{}{A \vdash A}^{ax}$
  • Left exchange: $\frac{\Gamma_{1}, A, B, \Gamma_{2} \vdash \Delta}{\Gamma_{1}, B, A, \Gamma_{2} \vdash \Delta}$
  • Right exchange: $\frac{\Gamma \vdash \Delta_{1}, A, B, \Delta_{2}}{\Gamma \vdash \Delta_{1}, B, A, \Delta_{2}}$
  • Left $\otimes$: $\frac{\Gamma_{1}, A, B, \Gamma_{2} \vdash \Delta}{\Gamma_{1}, A \otimes B, \Gamma_{2} \vdash \Delta}$
  • Right $\parr$: $\frac{\Gamma \vdash \Delta_{1}, A, B, \Delta_{2}}{\Gamma \vdash \Delta_{1}, A \parr B, \Delta_{2}}$
  • Right $\otimes$: $\frac{\Gamma_{1} \vdash \Delta_{1}, A, \Delta_{2} \qquad \Gamma_{2} \vdash \Delta_{3}, B, \Delta_{4}}{\Gamma_{1}, \Gamma_{2} \vdash \Delta_{1}, \Delta_{3}, A \otimes B, \Delta_{4}, \Delta_{2}}$
  • Left $\parr$: $\frac{\Gamma_{1}, A, \Gamma_{2} \vdash \Delta_{1} \qquad \Gamma_{3}, B, \Gamma_{4} \vdash \Delta_{2}}{\Gamma_{1}, \Gamma_{3}, A \parr B, \Gamma_{4}, \Gamma_{2} \vdash \Delta_{1}, \Delta_{2}}$
  • Left $1$: $\frac{\Gamma_{1}, \Gamma_{2} \vdash \Delta}{\Gamma_{1}, 1, \Gamma_{2} \vdash \Delta}$
  • Right $\bot$: $\frac{\Gamma \vdash \Delta_{1}, \Delta_{2}}{\Gamma \vdash \Delta_{1}, \bot, \Delta_{2}}$
  • Right $1$: $\frac{}{\vdash 1}$
  • Left $\bot$: $\frac{}{\bot \vdash}$
  • Left negation: $\frac{\Gamma \vdash A, \Delta}{\Gamma, A^{\perp} \vdash \Delta}$
  • Right negation: $\frac{\Gamma, A \vdash \Delta}{\Gamma \vdash A^{\perp}, \Delta}$
  • Left dereliction: $\frac{\Gamma_{1}, A, \Gamma_{2} \vdash \Delta}{\Gamma_{1}, !A, \Gamma_{2} \vdash \Delta}$
  • Right dereliction: $\frac{\Gamma \vdash \Delta_{1}, A, \Delta_{2}}{\Gamma \vdash \Delta_{1}, ?A, \Delta_{2}}$
  • Left weakening: $\frac{\Gamma_{1}, \Gamma_{2} \vdash \Delta}{\Gamma_{1}, !A, \Gamma_{2} \vdash \Delta}$
  • Right weakening: $\frac{\Gamma \vdash \Delta_{1}, \Delta_{2}}{\Gamma \vdash \Delta_{1}, ?A, \Delta_{2}}$
  • Left contraction: $\frac{\Gamma_{1}, !A, !A, \Gamma_{2} \vdash \Delta}{\Gamma_{1}, !A, \Gamma_{2} \vdash \Delta}$
  • Right contraction: $\frac{\Gamma \vdash \Delta_{1}, ?A, ?A, \Delta_{2}}{\Gamma \vdash \Delta_{1}, ?A, \Delta_{2}}$
  • Right promotion: $\frac{!\Gamma \vdash ?\Delta_{1}, A, ?\Delta_{2}}{!\Gamma \vdash ?\Delta_{1}, !A, ?\Delta_{2}}$
  • Left promotion: $\frac{!\Gamma_{1}, A, !\Gamma_{2} \vdash ?\Delta}{!\Gamma_{1}, ?A, !\Gamma_{2} \vdash ?\Delta}$
  • Right codereliction: $\frac{\Gamma \vdash \Delta_{1}, A, \Delta_{2}}{\Gamma \vdash \Delta_{1}, !A, \Delta_{2}}$
  • Left codereliction: $\frac{\Gamma_{1}, A, \Gamma_{2} \vdash \Delta}{\Gamma_{1}, ?A, \Gamma_{2} \vdash \Delta}$
  • Right coweakening: $\frac{}{\vdash !A}$
  • Left coweakening: $\frac{}{?A \vdash}$
  • Right cocontraction: $\frac{\Gamma_{1} \vdash \Delta_{1}, !A, \Delta_{2} \qquad \Gamma_{3} \vdash \Delta_{3}, !A, \Delta_{4}}{\Gamma_{1}, \Gamma_{3} \vdash \Delta_{1}, \Delta_{3}, !A, \Delta_{4}, \Delta_{2}}$
  • Left cocontraction: $\frac{\Gamma_{1}, ?A, \Gamma_{2} \vdash \Delta_{1} \qquad \Gamma_{3}, ?A, \Gamma_{4} \vdash \Delta_{2}}{\Gamma_{1}, \Gamma_{3}, ?A, \Gamma_{4}, \Gamma_{2} \vdash \Delta_{1}, \Delta_{2}}$
  • Null proof: $\frac{}{\Gamma \vdash \Delta}^{0}$
  • Sum of proofs: $\frac{\Gamma \vdash \Delta \qquad \Gamma \vdash \Delta}{\Gamma \vdash \Delta}$
  • Cut: $\frac{\Gamma_{1} \vdash \Delta_{1}, A, \Delta_{2} \qquad \Gamma_{2}, A, \Gamma_{3} \vdash \Delta_{3}}{\Gamma_{2}, \Gamma_{1}, \Gamma_{3} \vdash \Delta_{1}, \Delta_{3}, \Delta_{2}}$
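
As a small worked example of these rules in action (an illustration added here, using only rules already present in linear logic), here is a derivation of $!A \vdash A \otimes A$ by the axiom, right $\otimes$, two left derelictions and a left contraction:

$$\frac{\frac{\frac{\frac{\frac{}{A \vdash A}^{ax} \qquad \frac{}{A \vdash A}^{ax}}{A, A \vdash A \otimes A}}{!A, A \vdash A \otimes A}}{!A, !A \vdash A \otimes A}}{!A \vdash A \otimes A}$$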

Inference rules - Monolateral syntax

Isomorphisms in bilateral syntax

Cut elimination

There is a rewriting system which, given an instance of the cut rule in a proof, reduces the proof by pushing the cut higher up in the proof. There are the key cases, which induce important commuting diagrams in the categorical semantics, and the commuting cases, which are less important. The idea is that when a cut reaches the top of the proof, i.e. the leaves given by axiom rules, we can finally eliminate it by a last rewriting step: it is when a cut is made against an axiom rule that it can be eliminated. We know that this rewriting system is strongly normalizing and that the normal forms are the cut-free proofs. This means that by applying the rewriting rules in any order, one eventually obtains a proof without cut, hence the name cut elimination. It is important to know the rewriting system itself, and not only the fact that cuts can be eliminated, because the cut-elimination procedure is what creates the commuting diagrams of the categorical semantics when we quotient proofs by these cut-elimination steps.
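
For instance, one key case (sketched here only as an illustration) is a cut between a right codereliction and a left dereliction on $!A$: it reduces to a cut on $A$ between the two premises.

$$\frac{\frac{\Gamma \vdash \Delta_{1}, A, \Delta_{2}}{\Gamma \vdash \Delta_{1}, !A, \Delta_{2}} \qquad \frac{\Gamma_{1}, A, \Gamma_{2} \vdash \Delta}{\Gamma_{1}, !A, \Gamma_{2} \vdash \Delta}}{\Gamma_{1}, \Gamma, \Gamma_{2} \vdash \Delta_{1}, \Delta, \Delta_{2}} \quad\rightsquigarrow\quad \frac{\Gamma \vdash \Delta_{1}, A, \Delta_{2} \qquad \Gamma_{1}, A, \Gamma_{2} \vdash \Delta}{\Gamma_{1}, \Gamma, \Gamma_{2} \vdash \Delta_{1}, \Delta, \Delta_{2}}$$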

Cut elimination steps

What is an isomorphism?

Semantics

Categorical semantics

We need a $*$-autonomous category in order to interpret the connectives $\otimes, \parr, 1, \bot, (-)^{\perp}$ and the rules related to them.

This category must be enriched over commutative monoids in order to be able to interpret sums of proofs and null proofs.

Definition

We say that a $*$-autonomous category $\mathcal{C}$ is enriched over commutative monoids if it is enriched over commutative monoids as a symmetric monoidal category, i.e. every hom-set $\mathcal{C}[A,B]$ is a commutative monoid, and both $- \otimes -$ and composition $- ; -$ are bilinear over morphisms, i.e. preserve addition and zero in each variable.

The tricky part is to interpret the exponentials $!$ and $?$. We just need to add enough structure to interpret $!$; the interpretation of $?$ then follows from the $*$-autonomous structure. We need:

  • A natural transformation $!A \otimes !A \rightarrow !A$ to interpret the cocontraction.
  • A natural transformation $!A \rightarrow !A \otimes !A$ to interpret the contraction.
  • A natural transformation $I \rightarrow !A$ to interpret the coweakening.
  • A natural transformation $!A \rightarrow I$ to interpret the weakening.
  • A natural transformation $A \rightarrow !A$ to interpret the codereliction.
  • A natural transformation $!A \rightarrow A$ to interpret the dereliction.

Thus, for every object $A$, we will have that $!A$ is a bimonoid, and $!$ will be a comonad. The codereliction is added on top of this structure.
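
To make the shape of this interface concrete, here is a purely illustrative sketch as a Haskell type class: the class and method names are ours, and pairs with unit only caricature the monoidal structure, so this is not a model, just a transcription of the list of natural transformations above.

```haskell
-- The six natural transformations listed above, written as methods of a
-- type class, with (,) standing in for ⊗ and () for the unit I.
class Functor f => ExponentialSketch f where
  cocontract :: (f a, f a) -> f a   -- cocontraction   !A ⊗ !A → !A
  contract   :: f a -> (f a, f a)   -- contraction     !A → !A ⊗ !A
  coweaken   :: () -> f a           -- coweakening     I → !A
  weaken     :: f a -> ()           -- weakening       !A → I
  coderelict :: a -> f a            -- codereliction   A → !A
  derelict   :: f a -> a            -- dereliction     !A → A
```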

(…)

Concrete models

(…)

The logic at work

The co-Kleisli category

Differentiating a proof

We start from a proof of $!A \vdash B$ and, using only cocontraction and codereliction as exponential rules, we obtain a proof of $!A \vdash A \multimap B$. In the co-Kleisli category we thus go from a morphism $f : A \rightarrow B$, which must be interpreted as a smooth function, to a morphism $d f : A \rightarrow (A \multimap B)$. It is the differential of $f$, which associates to every point of $A$ the linear approximation of $f$ around this point.

$$\frac{\frac{\frac{\frac{\frac{}{!A \vdash !A}^{ax} \qquad \frac{\frac{}{A \vdash A}^{ax}}{A \vdash !A}}{!A, A \vdash !A} \qquad !A \vdash B}{!A, A \vdash B}}{!A \vdash A^{\perp}, B}}{!A \vdash A \multimap B}$$
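
For instance, in models where the morphisms of the co-Kleisli category are interpreted as smooth maps between Euclidean spaces (stated here only as an intuition), $d f$ sends a point $x$ to the usual directional derivative of $f$ at $x$:

$$d f(x)(v) \;=\; \lim_{t \to 0} \frac{f(x + t v) - f(x)}{t}.$$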

Categorical semantics of differential linear logic and differential categories

(…)

Additives and Seely isomorphisms

