A logical framework is a formal metalanguage for deductive systems, such as logic, natural deduction, type theories, sequent calculus, etc. Of course, like any formal system, these systems can be described in any sufficiently strong metalanguage. However, all logical systems of this type share certain distinguishing features, so it is helpful to have a particular metalanguage which is well-adapted to describing systems with those features.
Much of the description below is taken from (Harper).
The sentences of a logical framework are called judgments. It turns out that in deductive systems, there are two kinds of non-basic forms that arise very commonly: the hypothetical judgment J1 ⊢ J2, which expresses that the judgment J2 is derivable under the assumption of J1, and the generic judgment x:K ⊢ J, which expresses that the judgment J holds generically for a variable x ranging over some syntactic category K.
These two forms turn out to have many parallel features, e.g. reflexivity and transitivity of hypothetical judgments correspond to variable-use and substitution in generic judgments. Appealing to the propositions as types principle, therefore, it is convenient to describe a system in which they are actually both instances of the same thing. That is, we identify the notion of evidence for a judgment with the notion of object of a syntactic category.
This leads to a notion that we will call an LF-type. Thus we will have LF-types such as the type of objects of each syntactic category of the object theory (its types, its terms, and so on), and the type of evidence for each of its judgments. We will also have some general type-forming operations. Perhaps surprisingly, it turns out that dependent product types are all that we need.
There is a potential confusion of terminology, because these LF-types in a logical framework (which is itself a type theory) are distinct from the objects that may be called “types” in any particular logic we might be talking about inside the logical framework. Thus, for instance, when formalizing Martin-Löf type theory in a logical framework, there is an “LF-type” which is the type of objects of the syntactic category of MLTT-types. This is furthermore distinct from a type of types, which is itself an object of the syntactic category of MLTT-types, i.e. a term belonging to the LF-type of such.
The type theory of a logical framework often includes a second layer called LF-kinds, which enables us to classify families of LF-types. For instance, the universe of all LF-types is an LF-kind, as is the collection of all families of LF-types dependent on some LF-type A. The LF-types and LF-kinds together are very similar to a pure type system with two sorts type and kind, with axiom type : kind and rules (type, type) and (type, kind), although there are some minor technical differences, such as the treatment of definitional equality (PTSs generally use untyped conversion, whereas logical frameworks are often formulated in a way so that only canonical forms exist).
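For instance, in Twelf-style notation the two layers appear in declarations along the following lines (here a and b are placeholder names, not part of any particular encoding):

a : type.       % a is an LF-type; its classifier, type, is an LF-kind
b : a -> type.  % b is a family of LF-types indexed by a; its classifier, a -> type, is an LF-kind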
Thus, we might have the following hierarchy of “universes”, which we summarize to fix the notation: LF-terms M belong to LF-types A (written M : A), LF-types belong to the LF-kind type, and LF-kinds (such as type itself, or a -> type) belong to the top sort kind.
Once we have set up the logical framework as a language, there are then two approaches to describing a given logic inside of it. See (Harper), and the other references, for more details.
In a synthetic presentation, we use LF-types to represent the syntactic objects and judgments of the object theory. Thus, if the object theory is a type theory, then in LF we have things like:

tp : type.              % the LF-type of object-theory types
tm : type.              % the LF-type of object-theory terms
of : tm -> tp -> type.  % (of M A) represents the object-theory typing judgment M : A
Note that we do not have to explicitly carry around an ambient context, as we sometimes do when presenting type theories in a more explicit style of a deductive system. This is because the notions of hypothetical and generic judgments are built into the logical framework and handled automatically by its contexts. We will discuss this further below.
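As a sketch of how the framework's contexts do this work, here is one possible encoding of the object-theory typing rule for lambda-abstraction, using of from above together with the constructors arr and lam introduced below (the name of/lam is hypothetical). The object-theory hypothesis x : A becomes an LF-hypothesis of LF-type (of x A), and genericity in x becomes an LF dependent product:

of/lam : {A:tp} {B:tp} {F:tm -> tm}
           ({x:tm} of x A -> of (F x) B)
           -> of (lam F) (arr A B).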
Synthetic presentations are preferred by the school of Harper-Honsell-Plotkin and are generally used with implementations such as Twelf. They are very flexible and can be used to represent many different object-theories. Moreover, they generally support an adequacy theorem, that there is a compositional bijection between the syntactic objects of the object-theory, as usually presented, and the canonical forms of appropriate LF-type in its LF-presentation. Here “compositional” means that the bijection respects substitution.
Note that the adequacy theorem is a correspondence at the level of syntax; it does not even incorporate the object-theory notion of definitional equality! Two object-theory terms such as (λx.x) y and y that are definitionally equal (by beta-reduction) are nevertheless syntactically distinct, and hence also correspond to distinct syntactic entities in the LF-encoding. The definitional equality that relates them is represented by an element of the LF-type encoding the definitional-equality judgment of the object-theory. This is appropriate because such LF-encodings are used, among other things, for the study of the syntax of the object-theory, e.g. for proving properties of its definitional equality.
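For instance (a sketch, using the synthetic constructors lam and app introduced below, with hypothetical names deq and deq/beta), the definitional-equality judgment could be encoded as another type family:

deq : tm -> tm -> type.                                    % M ≡ N definitionally
deq/beta : {F:tm -> tm} {M:tm} deq (app (lam F) M) (F M).  % the beta axiom

An inhabitant of (deq M N) is then itself a syntactic derivation of definitional equality, which is exactly what one studies when proving metatheorems about the object-theory.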
However, synthetic presentations do not make maximal use of the framework when the object-theory is also a type theory whose judgments are “analytic”. Here “synthetic” means roughly that a judgment requires external evidence, whereas “analytic” means roughly that it is evident from its own form.
An analytic presentation is only possible for certain kinds of object-theories, generally those which are type theories similar to LF itself. In this case, we represent object-theory types by LF-types themselves. Thus we still have the LF-type tp of object-theory types, but instead of the LF-type tm of terms and the dependent LF-type of representing the object-theory typing judgment, we have an LF-type family

el : tp -> type
which assigns, to each object-theory type, the LF-type of its elements. In other words, the typing judgment of the object-theory is encoded by the typing judgment of the meta-theory.
Now we have to make a choice about how to represent the definitional equality of the object-theory. A consistent choice is to also represent it by the definitional equality of the meta-theory. That is, in addition to merely giving “axioms” such as lam and app, we must give equations representing the rules of the object-theory as equalities in the logical framework. For instance, we must have a beta-reduction rule such as
app A B (lam A B F) M = F M
If the object-theory is itself a dependent type theory whose only definitional equalities are beta-reductions like this, then if we make the coercion el implicit, we can think of the resulting encoding as analogous to a pure type system with three sorts tp, type, and kind, with tp : type and type : kind.
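To fix ideas, here is a minimal analytic signature for simple (non-dependent) function types, in Twelf-style notation, matching the names in the equation above; note that the equation itself extends the definitional equality of the framework, so it is not expressible as an ordinary constant:

tp  : type.
el  : tp -> type.
arr : tp -> tp -> tp.
lam : {A:tp} {B:tp} (el A -> el B) -> el (arr A B).
app : {A:tp} {B:tp} el (arr A B) -> el A -> el B.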
However, from a practical point-of-view, rather than extending the logical framework with ad hoc definitional equalities to represent a particular object-theory, often what is actually done is that equality is defined as another type family with explicitly-introduced constructors. In other words, we use the analytic representation of types, but the synthetic representation of definitional equality. For example, in Twelf, the above equation could be represented by first assuming an LF-type family
eq : {A:tp} el A -> el A -> type
(Twelf uses braces as notation for the dependent product), and then postulating a constant
beta : {A:tp} {B:tp} eq B (app A B (lam A B F)) (F M)
(where the free variables F and M are implicitly quantified, as is usual in Twelf), in addition to the other axioms of equality.
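Those other axioms would include at least an equivalence relation and congruence structure for eq, sketched as follows (hypothetical constant names):

eq/refl  : {A:tp} {M:el A} eq A M M.
eq/sym   : {A:tp} {M:el A} {N:el A} eq A M N -> eq A N M.
eq/trans : {A:tp} {M:el A} {N:el A} {P:el A} eq A M N -> eq A N P -> eq A M P.
% ...together with congruence rules for app, lam, and so on.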
The analytic encoding is associated with Martin-Löf. While convenient for the description of rules in type theories, it is often less appropriate for the purposes of metatheoretic analysis.
For instance, in the hybrid style with the LF-type family eq, the terms belonging to the LF-types el A will involve explicit coercions along equalities in eq. This destroys the adequacy theorem, since coercions along definitional equalities are generally silent in the usual presentation of a theory. Moreover, one must assert explicitly that dependent types respect definitional equalities, and for multiply-dependent types this requires dependent products in the object-theory.
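Concretely, a hybrid encoding of a dependent object-theory might postulate a type-level equality family and an explicit coercion constant along the following lines (hypothetical names):

tpeq : tp -> tp -> type.                   % equality of object-theory types
coe  : {A:tp} {B:tp} tpeq A B -> el A -> el B.  % explicit coercion along such an equality

Encoded terms then contain occurrences of coe that have no counterpart in the usual presentation of the theory, which is exactly what breaks adequacy.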
On the other hand, the “fully analytic” version, where definitional equalities in the object-theory are also definitional equalities in the meta-theory, involves adding additional equations to the meta-theory, which in general can make the meta-theory impossible to reason about: one needs a decision procedure for these equalities even to be able to check proofs.
Thus, analytic approaches are less general and flexible; they are best adapted to describing the rules and semantics of a dependent type theory, whereas synthetic approaches are better for reasoning about syntax and for studying more general object-theories.
In both synthetic and analytic presentations, we use higher-order abstract syntax (HOAS). Roughly, this means that variables of the object-theory are not terms of some LF-type; rather, they are represented by actual LF-variables. For instance, when describing a type theory containing function types synthetically, we would have

arr : tp -> tp -> tp.
lam : (tm -> tm) -> tm.
app : tm -> tm -> tm.
The point is that the argument of lam (the “body” of the lambda-abstraction) is not a “term containing a free variable x” but rather an LF-function from object-theory terms to object-theory terms. This function is intended to be “substitute into the body”: it knows the body of the lambda-abstraction, and when given an argument it substitutes that argument for the variable in the body and returns the result.
This approach completely avoids dealing with the problems of variable binding and substitution in the object language, by making use of the binding and substitution in the metalanguage LF. One might say that the variables in LF are the “universal notion of variable” which is merely reused by all object-theories.
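For example, the object-theory term λx. λy. x is encoded with Twelf's meta-level abstraction [x] (a sketch in terms of the constructors above):

% the object-term  λx. λy. x  becomes
lam ([x:tm] lam ([y:tm] x))
% and substitution is meta-level application:
% applying ([x:tm] app x x) to a term M yields (app M M)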
It may be tempting to think of LF-types such as tp and tm as inductively defined by their specified constructors (such as arr for tp, and lam and app for tm). However, this is incorrect; LF does not have inductive types. In fact, this weakness is essential in order to guarantee “adequacy” of the HOAS encoding.
Suppose, for instance, that tm were inductively defined inside of LF. Then we could define a function f : tm -> tm by pattern-matching on the structure of its argument, doing one thing if the argument were a lambda-abstraction and another thing if it were a function application. But such a function is definitely not the sort of thing that we want to be able to pass to the LF-constructor lam! By disallowing such matching, we can guarantee that the only functions we can define and pass to lam correspond to “substituting into a fixed term”, as intended.
As an even simpler example, suppose we consider an object-theory of natural numbers, represented by a single LF-type nat together with constructors z : nat and s : nat -> nat. Although we would like to think of nat as representing the natural numbers, because of the lack of an induction principle, the LF-type nat -> nat certainly cannot be shown to contain all the functions from natural numbers to natural numbers (essentially, we can only construct the constant functions and those incrementing their argument by a fixed constant). On the other hand, to some extent it is possible to get around this restriction by taking a relational rather than a functional point of view. For example, addition of natural numbers can be defined as a type family
add : nat -> nat -> nat -> type
together with a pair of constructors
add/z : {N:nat} add z N N.
add/s : {M:nat}{N:nat}{P:nat} add M N P -> add (s M) N (s P).
Now, it is still not possible to prove inside LF that add is a total functional relation (i.e., that for all M:nat and N:nat there exists a unique P:nat such that add M N P). However, in this case that is easy to verify by inspection, and the Twelf proof assistant has facilities for verifying such properties automatically (though in general, checking totality is better supported than checking uniqueness).
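For instance, totality of add in its forward mode can be checked in Twelf with directives along these lines:

%mode add +M +N -P.     % M and N are inputs, P is an output
%worlds () (add _ _ _). % add is only used in the empty context
%total M (add M _ _).   % totality, by induction on the first argument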
There are tricks one can play in order to use a HOAS-like representation even in a framework that has true inductive types, such as Coq. For instance, XORN18 uses “parametric HOAS”, in which the higher-order functions take as domain an arbitrary, unspecified type, thereby eliminating the possibility of defining functions inductively over elements of any specific type. However, this apparently requires parametricity axioms in order to work, which are consistent but false in natural set-theoretic and categorical models.
One of the uses of a logical framework is that as a type theory itself, it can be implemented in a computer. This provides a convenient system in which one can “program” the rules of any other specific type theory or logic which one wants to study.
For a list of logical framework implementations, see specific logical frameworks and implementations below.
Historically, the first logical framework implementation was Automath. The goal of the Automath project was to provide a tool for the formalization of mathematics without foundational prejudice; many modern logical frameworks still show its influence.
The Edinburgh Logical Framework (ELF) was then developed, inspired by Martin-Löf dependent type theory. The logic- and type-theory approaches were later combined in the Elf language, which gave rise to Twelf.
Isabelle is a logical framework that does not have dependent types, but instead works in a “predicate logic” framework with only one layer of dependency: proof-irrelevant propositions over a base simply typed lambda calculus. Unlike Twelf, it is designed for practical work within object-languages rather than for proving meta-theorems about them. Its most common application is to higher-order logic, but there are also Isabelle formulations of first-order logic and ZFC, and a general library for sequent calculus.
The original logical framework using a synthetic approach was introduced in:

Robert Harper, Furio Honsell, Gordon Plotkin, A framework for defining logics, Journal of the ACM 40(1) (1993)

while the analytic version was proposed by Per Martin-Löf; an exposition can be found in:

Bengt Nordström, Kent Petersson, Jan M. Smith, Programming in Martin-Löf's Type Theory, Oxford University Press (1990)
General overview:
Frank Pfenning, Logical frameworks – a brief introduction (2002) [pdf]
Frank Pfenning, Logical frameworks, chapter 17 in: Alan Robinson and Andrei Voronkov (eds.): Handbook of Automated Reasoning (1999) 1063-1147 [ps]
Frank Pfenning, Logical frameworks web site (web), including an extensive bibliography and a list of implementations
Randy Pollack, Some recent logical frameworks (2010) [pdf]
A number of examples of encoding object-theories into LF can be found in (Harper).