Linear logic is a substructural logic in which the contraction rule and the weakening rule are omitted, or at least have their applicability restricted. In the original definition of Girard (1987), linear logic is the internal logic of, or has categorical semantics in, $*$-autonomous categories (Seely 89, prop. 1.5). More generally, however, “linear logic” has come to refer to the internal logic of any possibly-non-cartesian symmetric closed monoidal category, or even polycategory. This general notion still faithfully follows the original motivation for the term “linear” introduced in Girard (1987), since the non-cartesianness of the tensor product means the absence of a diagonal map, and hence the impossibility of a function depending on more than a single (linear) copy of its variables.
Although linear logic is traditionally presented in terms of inference rules, it was apparently discovered by Girard while studying coherent spaces.
Unlike traditional logics, which deal with the truth of propositions, linear logic is often described as dealing with the availability of resources. A proposition, if it is true, remains true no matter how we use that fact in proving other propositions. By contrast, in using a resource $A$ to make available a resource $B$, $A$ itself may be consumed or otherwise modified. Linear logic deals with this by restricting our ability to duplicate or discard resources freely. For example, we have
$$ \text{(slice of cake)} \vdash \text{(have your cake)} \qquad\text{and}\qquad \text{(slice of cake)} \vdash \text{(eat your cake)} $$

from which we can prove

$$ \text{(slice of cake)}, \text{(slice of cake)} \vdash \text{(have your cake)} \otimes \text{(eat your cake)} $$

and hence, by contraction,

$$ \text{(slice of cake)} \vdash \text{(have your cake)} \otimes \text{(eat your cake)}. $$
Linear logic would disallow the contraction step and treat $\text{(slice of cake)}, \text{(slice of cake)} \vdash \text{(have your cake)} \otimes \text{(eat your cake)}$ as explicitly meaning that two slices of cake yield $\text{(have your cake)} \otimes \text{(eat your cake)}$. Disallowing contraction then corresponds to the fact that we can’t turn one slice of cake into two (more’s the pity), so you can’t have your cake and eat it too.
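In sequent-calculus terms (a sketch, using the rules given below), contraction is exactly what would let one squash two copies of a resource into one:

$$ \frac{A \vdash A \qquad A \vdash A}{A, A \vdash A \otimes A}\;(\otimes\text{-right}) \qquad\qquad \frac{A, A \vdash A \otimes A}{A \vdash A \otimes A}\;(\text{contraction, disallowed}) $$

So $A \vdash A \otimes A$ is not derivable in general, although $!A \vdash {!A} \otimes {!A}$ is: one slice of cake is not two slices, but a cake factory can produce both.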
Linear logic can also be interpreted using a semantics of “games” or “interactions”. Under this interpretation, each proposition in a sequent represents a game being played or a transaction protocol being executed. An assertion of, for instance,

$$ A, B \vdash C $$

means roughly that if I am playing three simultaneous games of $A$, $B$, and $C$, in which I am the left player in $A$ and $B$ and the right player in $C$, then I have a strategy which will enable me to win at least one of them. Now the above statements about “resources” translate into saying that I have to play in all the games I am given and can’t invent new ones on the fly.
Linear logic is closely related to notions of relevant logic, which have been studied for much longer. The goal of relevant logic is to disallow statements like “if pigs can fly, then grass is green”, which are true under the usual logical interpretation of implication, but in which the hypothesis has nothing to do with the conclusion. Clearly there is a relationship with the “resource semantics”: if we want to require that all hypotheses are “used” in a proof, then we need to disallow weakening.
Linear logic is usually given in terms of sequent calculus. There is a set of propositions (although, as remarked above, to be thought of more as resources to be acquired than as statements to be proved) which we construct through recursion. Each pair of lists of propositions is a sequent (written as usual with ‘$\vdash$’ between the lists), some of which are valid; we determine which are valid also through recursion. Technically, the propositional calculus of linear logic also requires a set of propositional variables from which to start; this is usually identified with the set of natural numbers (so the variables are $p_0$, $p_1$, etc), although one can also consider the linear logic over any other set $V$ of propositional variables.
Here we define the set of propositions:

* Every propositional variable is a proposition.
* If $A$ is a proposition, then so is its negation $A^\perp$.
* If $A$ and $B$ are propositions, then so are the multiplicative conjunction $A \otimes B$ (“times”), the additive conjunction $A \& B$ (“with”), the multiplicative disjunction $A \parr B$ (“par”), and the additive disjunction $A \oplus B$ (“plus”).
* There are four constants: the multiplicative truth $1$, the additive truth $\top$, the multiplicative falsity $\bot$, and the additive falsity $0$.
* If $A$ is a proposition, then so are the exponentials $!A$ (“of course $A$”) and $?A$ (“why not $A$”).
The terms “exponential”, “multiplicative”, and “additive” come from the fact that “exponentiation converts addition to multiplication”: we have $!(A \& B) \equiv {!A} \otimes {!B}$ and so on (see below).
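Concretely, the analogy runs parallel to the familiar law of exponents for numbers:

$$ x^{a+b} = x^a \cdot x^b \qquad\text{corresponds to}\qquad !(A \& B) \equiv {!A} \otimes {!B}, $$

with $!$ playing the role of exponentiation, the additives the role of addition, and the multiplicatives the role of multiplication.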
However, the connectives and constants can also be grouped in different ways. For instance, the multiplicative conjunction $\otimes$ and additive disjunction $\oplus$ are both positive types, while the additive conjunction $\&$ and multiplicative disjunction $\parr$ are negative types. Similarly, the multiplicative truth $1$ and the additive falsity $0$ are positive, while the additive truth $\top$ and multiplicative falsity $\bot$ are negative. This grouping has the advantage that the similarity of symbols matches the adjective used.
In relevant logic, the terms “conjunction” and “disjunction” are often reserved for the additive versions $\&$ and $\oplus$, which are written with the traditional notations $\wedge$ and $\vee$. In this case, the multiplicative conjunction $\otimes$ is called fusion and denoted $\circ$, while the multiplicative disjunction $\parr$ is called fission and denoted $+$ (or sometimes, confusingly, $\oplus$). In relevant logic the symbol $\bot$ may also be used for the additive falsity, here denoted $0$. Also, sometimes the additive connectives are called extensional and the multiplicatives intensional.
Sometimes one does not define the operation of negation, defining only $p^\perp$ for a propositional variable $p$. It is a theorem that every proposition above is equivalent (in the sense defined below) to a proposition in which negation is applied only to propositional variables.
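For example, using the de Morgan equivalences listed below, negation can be pushed inward step by step:

$$ \big( p \otimes {?q} \big)^\perp \;\equiv\; p^\perp \parr (?q)^\perp \;\equiv\; p^\perp \parr {!(q^\perp)}, $$

after which negation is applied only to the variables $p$ and $q$.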
We now define the valid sequents, where we write $\Gamma \vdash \Delta$ to state the validity of the sequent consisting of the list $\Gamma$ and the list $\Delta$, and use Greek letters for sublists. The structural rules are:

* **identity**: $A \vdash A$ is valid for any proposition $A$;
* **exchange**: if a sequent is valid, then so is any sequent obtained from it by permuting either side;
* **cut**: if $\Gamma \vdash \Phi, A$ and $A, \Psi \vdash \Delta$ are valid, then so is $\Gamma, \Psi \vdash \Phi, \Delta$;
* **weakening**: if $\Gamma \vdash \Delta$ is valid, then so are $\Gamma, !A \vdash \Delta$ and $\Gamma \vdash ?A, \Delta$;
* **contraction**: if $\Gamma, !A, !A \vdash \Delta$ is valid, then so is $\Gamma, !A \vdash \Delta$; and if $\Gamma \vdash ?A, ?A, \Delta$ is valid, then so is $\Gamma \vdash ?A, \Delta$.

These are supplemented by left and right logical rules for each connective and constant; for example, if $\Gamma, A, B \vdash \Delta$ is valid, then so is $\Gamma, A \otimes B \vdash \Delta$, while if $\Gamma \vdash A, \Delta$ and $\Gamma' \vdash B, \Delta'$ are valid, then so is $\Gamma, \Gamma' \vdash A \otimes B, \Delta, \Delta'$.
The main point of linear logic is the restricted use of the weakening and contraction rules; if these were universally valid (applying to any $A$ rather than only to $!A$ or $?A$), then the additive and multiplicative operations would be equivalent (in the sense defined below), and similarly $!A$ and $?A$ would be equivalent to $A$, which would give us classical logic. On the other hand, one can also remove the exchange rule to get a variety of noncommutative logic; one must then be careful about how to write the other rules (as we have been above).
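To see one instance of this collapse concretely (a standard derivation sketch, using the rules stated above): unrestricted weakening makes $A \otimes B \vdash A \& B$ derivable, and unrestricted contraction gives the converse:

$$ \frac{\dfrac{A \vdash A}{A, B \vdash A} \qquad \dfrac{B \vdash B}{A, B \vdash B}}{\dfrac{A, B \vdash A \& B}{A \otimes B \vdash A \& B}} \qquad\qquad \frac{\dfrac{A \& B \vdash A \qquad A \& B \vdash B}{A \& B, A \& B \vdash A \otimes B}}{A \& B \vdash A \otimes B} $$

so that $\otimes$ and $\&$ become equivalent.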
As usual, there is a theorem of cut elimination showing that the cut rule and identity rule follow from all other rules and the special cases of the identity rule of the form $p \vdash p$ for a propositional variable $p$.
The propositions $A$ and $B$ are equivalent if $A \vdash B$ and $B \vdash A$ are both valid. It is then a theorem that either may be swapped for the other anywhere in a sequent without affecting its validity. Up to equivalence, negation is an involution, and the operations $\otimes$, $\&$, $\parr$, and $\oplus$ are all associative, with respective identity elements $1$, $\top$, $\bot$, and $0$. These operations are also commutative (although this fails for the multiplicative connectives if we drop the exchange rule). The additive connectives are also idempotent (but the multiplicative ones are not).
We also have distributive laws that explain the adjectives ‘additive’, ‘multiplicative’, and ‘exponential’:

$$ A \otimes (B \oplus C) \equiv (A \otimes B) \oplus (A \otimes C), \qquad A \otimes 0 \equiv 0, $$
$$ A \parr (B \& C) \equiv (A \parr B) \& (A \parr C), \qquad A \parr \top \equiv \top, $$
$$ !(A \& B) \equiv {!A} \otimes {!B}, \qquad {!\top} \equiv 1, $$
$$ ?(A \oplus B) \equiv {?A} \parr {?B}, \qquad {?0} \equiv \bot. $$
It is also a theorem that negation (except for the negations of propositional variables) can be defined (up to equivalence) recursively as follows:

$$ (A \otimes B)^\perp \equiv A^\perp \parr B^\perp, \qquad (A \parr B)^\perp \equiv A^\perp \otimes B^\perp, $$
$$ (A \& B)^\perp \equiv A^\perp \oplus B^\perp, \qquad (A \oplus B)^\perp \equiv A^\perp \& B^\perp, $$
$$ 1^\perp \equiv \bot, \qquad \bot^\perp \equiv 1, \qquad \top^\perp \equiv 0, \qquad 0^\perp \equiv \top, $$
$$ (!A)^\perp \equiv ?(A^\perp), \qquad (?A)^\perp \equiv !(A^\perp), \qquad (A^\perp)^\perp \equiv A. $$
(In this way, linear logic has a perfect de Morgan duality.) The logical rules for negation can then be proved.
We can also restrict attention to sequents with one term on either side as follows: $A_1, \ldots, A_m \vdash B_1, \ldots, B_n$ is valid if and only if $A_1 \otimes \cdots \otimes A_m \vdash B_1 \parr \cdots \parr B_n$ is valid (using implicitly that $\otimes$ and $\parr$ are associative, with identity elements $1$ and $\bot$ to handle the empty sequences).
We can even restrict attention to sequents with no term on the left side and one term on the right: $A \vdash B$ is valid if and only if $\vdash A \multimap B$ is valid, where $A \multimap B \coloneqq A^\perp \parr B$. In this way, it’s possible to ignore sequents entirely and speak only of propositions and valid propositions, eliminating half of the logical rules in the process. However, this approach is not as beautifully symmetric as the full sequent calculus.
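For example, the identity sequent $A \vdash A$ first becomes the right-sided sequent $\vdash A^\perp, A$, and then the single valid proposition

$$ A^\perp \parr A \;\equiv\; A \multimap A, $$

the linear form of the tautology ‘$A$ implies $A$’.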
The logic described above is full classical linear logic. There are many important fragments of linear logic, such as multiplicative linear logic, intuitionistic linear logic (in which $\multimap$ is a primitive operation), etc.
Firstly, there is a monoidal ‘tensor’ connective $\otimes$. Negation is modelled by the duality involution $(-)^*$, while linear implication $\multimap$ corresponds to the internal hom, which can be defined as $A \multimap B \coloneqq (A \otimes B^*)^*$. There is a de Morgan dual of the tensor called ‘par’, with $A \parr B \coloneqq (A^* \otimes B^*)^*$. Tensor and par are the ‘multiplicative’ connectives, which roughly speaking represent the parallel availability of resources.
The ‘additive’ connectives $\&$ and $\oplus$, which correspond in another way to traditional conjunction and disjunction, are modelled as usual by products and coproducts. Seely (1989) notes that products are sufficient, as $*$-autonomy then guarantees the existence of coproducts; that is, they are also linked by de Morgan duality.
LL recaptures the notion of a resource that can be discarded or copied arbitrarily by the use of the modal operator $!$: the proposition $!A$ denotes an ‘$A$-factory’, a resource that can produce zero or more $A$s on demand. It is modelled using a comonad $!$ on the underlying $*$-autonomous category that is (symmetric) monoidal, in the sense that there is an isomorphism ${!A} \otimes {!B} \cong {!(A \& B)}$. Since every object $A$ is canonically a symmetric $\&$-comonoid (remembering that $\&$ is the product), $!A$ is then a symmetric $\otimes$-comonoid.
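A standard concrete model illustrating all of this structure (offered here only as an example) is the category $Rel$ of sets and relations: $\otimes$ is given by cartesian product of sets, every object is self-dual, $\&$ is given by disjoint union, and $!A$ can be taken to be the set of finite multisets in $A$. The monoidality isomorphism then says that a finite multiset in a disjoint union is the same as a pair of finite multisets:

$$ !(A \& B) \;\cong\; {!A} \otimes {!B}, \qquad\text{i.e.}\qquad \mathcal{M}_{fin}(A \sqcup B) \;\cong\; \mathcal{M}_{fin}(A) \times \mathcal{M}_{fin}(B). $$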
An LL sequent

$$ A_1, \ldots, A_n \vdash B_1, \ldots, B_m $$

is interpreted as a morphism

$$ A_1 \otimes \cdots \otimes A_n \longrightarrow B_1 \parr \cdots \parr B_m . $$
The comonoid structure on $!A$ then yields the weakening map ${!A} \to 1$ and the contraction map ${!A} \to {!A} \otimes {!A}$. The corresponding rules are interpreted by precomposing the interpretation of a sequent with one of these maps.
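For instance, writing $c \colon {!A} \to {!A} \otimes {!A}$ for the comultiplication, the contraction rule taking $\Gamma, !A, !A \vdash \Delta$ to $\Gamma, !A \vdash \Delta$ is interpreted as the composite

$$ \Gamma \otimes {!A} \xrightarrow{\;1 \otimes c\;} \Gamma \otimes {!A} \otimes {!A} \xrightarrow{\;\;f\;\;} \Delta, $$

where $f$ is the interpretation of the premise sequent (and $\Gamma$, $\Delta$ abusively denote the tensor and par of the interpretations of their members).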
A different way to explain linear logic categorically (though equivalent, in the end) is to start with a categorical structure which lacks any of the connectives, but has sufficient structure to enable us to characterize them with universal properties. If we ignore the exponentials for now, such a structure is given by a polycategory. The polymorphisms

$$ (A_1, \ldots, A_m) \to (B_1, \ldots, B_n) $$

in a polycategory correspond to sequents $A_1, \ldots, A_m \vdash B_1, \ldots, B_n$ in linear logic. The multiplicative connectives are then characterized by representability and corepresentability properties:

$$ \mathrm{Hom}(A, B; \Delta) \;\cong\; \mathrm{Hom}(A \otimes B; \Delta) \qquad\text{and}\qquad \mathrm{Hom}(\Gamma; A, B) \;\cong\; \mathrm{Hom}(\Gamma; A \parr B). $$
(Actually, we should allow arbitrarily many unrelated objects to carry through in both cases.) The additives are similarly characterized as categorical products and coproducts, in a polycategorically suitable sense.
Finally, dual objects can be recovered as a sort of “adjoint”:

$$ \mathrm{Hom}(\Gamma, A; \Delta) \;\cong\; \mathrm{Hom}(\Gamma; A^*, \Delta). $$
If all these representing objects exist, then we recover a $*$-autonomous category.
One merit of the polycategory approach is that it makes the role of the structural rules clearer, and also helps explain why $\parr$ sometimes seems like a disjunction and sometimes like a conjunction. Allowing contraction and weakening on the left corresponds to our polycategory being “left cartesian”; that is, we have “diagonal” and “projection” operations such as $\mathrm{Hom}(\Gamma, A, A; \Delta) \to \mathrm{Hom}(\Gamma, A; \Delta)$ and $\mathrm{Hom}(\Gamma; \Delta) \to \mathrm{Hom}(\Gamma, A; \Delta)$. In the presence of these operations, a representing object $A \otimes B$ is automatically a cartesian product; thus $\otimes$ coincides with $\&$. Similarly, allowing contraction and weakening on the right makes the polycategory “right cocartesian”, which causes corepresenting objects $A \parr B$ to be coproducts and thus $\parr$ to coincide with $\oplus$.
On the other hand, if we allow “multi-composition” in our polycategory, i.e. we can compose a morphism $\Gamma \to (A_1, \ldots, A_n)$ with one $(A_1, \ldots, A_n) \to \Delta$ to obtain a morphism $\Gamma \to \Delta$, then our polycategory becomes a PROP, and representing and corepresenting objects must coincide; thus $\otimes$ and $\parr$ become the same. This explains why $\parr$ has both a disjunctive and a conjunctive aspect. Of course, if in addition to multi-composition we have the left and right cartesian properties, then all four connectives coincide (including the categorical product and coproduct) and we have an additive category.
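A familiar special case (an illustration, not needed for the argument): in a compact closed category such as finite-dimensional vector spaces, the duality is monoidal, so

$$ A \parr B \;=\; (A^* \otimes B^*)^* \;\cong\; A^{**} \otimes B^{**} \;\cong\; A \otimes B, $$

and tensor and par indeed coincide.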
We can interpret any proposition in linear logic as a game between two players: we and they. The overall rules are perfectly symmetric between us and them, although no individual game is. At any given moment in a game, exactly one of these four situations obtains: it is our turn, it is their turn, we have won, or they have won; the last two states continue forever afterwards (and the game is over). If it is our turn, then they are winning; if it is their turn, then we are winning. So there are two ways to win: because the game is over (and a winner has been decided), or because it is forever the other player’s turn (either because they have no move or because every move results in its still being their turn).
This is a little complicated, but it’s important in order to be able to distinguish the four constants:

* In $\top$, it is forever their turn: they have no available moves, the game never ends, and so we win.
* Dually, in $0$, it is forever our turn: we have no available moves, the game never ends, and so they win.
* In $1$, the game ends immediately, with no moves having been made, and we have won.
* Dually, in $\bot$, the game ends immediately, and they have won.
The binary operators show how to combine two games into a larger game:

* In $A \& B$, the first move is theirs: they choose one of the games $A$ and $B$, and play continues in the chosen game alone; we must win whichever game they choose.
* Dually, in $A \oplus B$, the first move is ours: we choose one of the games, and play continues in that game alone.
* In $A \otimes B$, both games are played simultaneously; it is our move whenever it is our move in at least one of them, and we must win both games to win overall.
* Dually, in $A \parr B$, both games are again played simultaneously, but now it is their move whenever it is their move in at least one of them, and we need win only one of the two games.
So we can classify things as follows:

* In a conjunction ($\otimes$, $\&$, with units $1$, $\top$), we must win every game that is played; in a disjunction ($\parr$, $\oplus$, with units $\bot$, $0$), we need win only one.
* In an additive connective ($\&$, $\oplus$), only one of the two games is played; in a multiplicative connective ($\otimes$, $\parr$), both are.
* They have control (the relevant choices are theirs) in a conjunction, while we have control in a disjunction.
To further clarify the difference between $\top$ and $1$ (the additive and multiplicative versions of truth, both of which we win), consider $\top \parr 0$ and $1 \parr 0$. In $\top \parr 0$, it is always their move (since it is their move in $\top$, hence their move in at least one game), so we win just as we win $\top$. (In fact, $\top \parr 0 \equiv \top$.) However, in $1 \parr 0$, the game $1$ ends immediately, so play continues as in $0$. We have won $1$, so we only have to end the game to win overall, but there is no guarantee that this will happen. Indeed, in $1 \parr 0$, the game never ends and it is always our turn, so they win. (In $1 \parr 1$, both games end immediately, and we win. In $1 \otimes 1$, we must win both games to win overall, so this reduces to $1$; indeed, $1 \otimes 1 \equiv 1$.)
Negation is easy: in $A^\perp$, the roles of the two players are reversed, so it is our turn in $A^\perp$ exactly when it is their turn in $A$, we win $A^\perp$ exactly when they win $A$, and so on.
There are several ways to think of the exponentials. As before, they have control in a conjunction, while we have control in a disjunction. Whoever has control of $!A$ or $?A$ chooses how many copies of $A$ to play, and the other player must win them all to win overall. There are many variations on whether the player in control can spawn new copies of $A$ or close old copies of $A$ prematurely, and whether the other player can play different moves in different copies (whenever the player in control plays the same moves).
Other than the decisions made by the player in control of a game, all moves are made by transmitting resources. Ultimately, these come down to the propositional variables; in the game $p$, we must transmit a $p$ to them, while they must transmit a $p$ to us in $p^\perp$.
A game $A$ is valid if we have a strategy to win it (whether by putting the game in a state where we have won or by guaranteeing that it is forever their turn). The soundness and completeness of this interpretation is the theorem that $A$ is a valid game if and only if $\vdash A$ is a valid sequent. (Recall that all questions of validity of sequents can be reduced to the validity of single propositions.)
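As a simple illustration of validity (a sketch, using the reading of variables as transmissions above), consider the game $p^\perp \parr p$, corresponding to the identity sequent $p \vdash p$:

$$ \vdash p^\perp \parr p, \qquad\text{equivalently}\qquad \vdash p \multimap p. $$

Here the ‘copycat’ strategy wins for us: we wait for them to transmit a $p$ to us in $p^\perp$, and then transmit that same $p$ back to them in $p$, thereby fulfilling our obligation in one of the two components.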
Much as there are many exponential functions (say from $\mathbb{R}$ to $\mathbb{R}$), even though there is only one addition operation and one multiplication operation, so there can be many versions of the exponential operators $!$ and $?$. (However, there doesn’t seem to be any analogue of the logarithm to convert between them.)
More precisely, if we add to the language of linear logic two more operators, $!'$ and $?'$, and postulate of them the same rules as for $!$ and $?$, we cannot prove that $!A \equiv {!'A}$ and $?A \equiv {?'A}$. In contrast, if we introduce $\otimes'$, $\&'$, etc, we can prove that the new operators are equivalent to the old ones.
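To sketch why the other connectives are determined by their rules (a standard argument): from the identity sequents one derives $A, B \vdash A \otimes B$ by the right rule for $\otimes$, and then the left rule for $\otimes'$ gives

$$ A \otimes' B \vdash A \otimes B, $$

with the converse symmetric, so $\otimes'$ and $\otimes$ are equivalent. For $!$ the analogous argument fails: the promotion rule for $!$ requires the entire context to consist of $!$-propositions, so it can never be applied in a context of $!'$-propositions.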
In terms of the categorial interpretation above, there may be many comonads ; it is not determined by the underlying -autonomous category. In terms of game/resource semantics, there are several slightly different interpretations of the exponentials.
One sometimes thinks of the exponentials as coming from infinitary applications of the other operations. For example:

$$ !A \coloneqq 1 \& A \& (A \otimes A) \& (A \otimes A \otimes A) \& \cdots $$

or dually

$$ ?A \coloneqq \bot \oplus A \oplus (A \parr A) \oplus (A \parr A \parr A) \oplus \cdots $$
All of these justify the rules for the exponentials, so again we see that there may be many ways to satisfy these rules.
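For instance, with the first formula above, the additive projections of $\&$ immediately yield the discarding and dereliction sequents

$$ !A \vdash 1, \qquad !A \vdash A, $$

which are two of the characteristic properties of $!$.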
The original article is

* Jean-Yves Girard, *Linear logic*, Theoretical Computer Science **50**:1 (1987) 1-101
and reviewed/further discussed in

* R. A. G. Seely, *Linear logic, $*$-autonomous categories and cofree coalgebras*, in *Categories in Computer Science and Logic*, Contemporary Mathematics **92**, American Mathematical Society (1989)
Further discussion of linear type theory is for instance in
* Chapter 7, *Linear type theory* (pdf)
* Anders Schack-Nielsen, Carsten Schürmann, *Linear contextual modal type theory* (pdf)