Per Martin-Löf’s meaning explanations of type theory provide a way to justify the correctness of the rules of type theory. The meanings of the judgements of type theory are defined by computation of closed terms to canonical form in some untyped notion of computation, such as a lambda calculus extended with constants for the constructors and eliminators of type theory.
From a technical/mathematical point of view, the meaning explanations give rise to a realizability model of type theory, where realizers are terms in some untyped notion of computation, such as an extended lambda calculus. That is, it can be regarded as a description of how types are assigned to untyped computations. However, the term “meaning explanations” also refers to the “philosophical” or “pre-mathematical” component, which regards this as an “explanation” of the “constructive validity” of the judgements of the type theory.
The standard meaning explanations validate not just the rules of intensional type theory but also the stronger rules of extensional type theory. Thus, if one regards it as the “intended semantics” then one may believe in the extensional theory. However, like any particular semantics for a formal theory, the meaning explanation also validates many other “rules” as well which are not usually added to type theories, so it is not so clear that meaning explanations in their own right argue for extensionality. They can also be modified so as to no longer satisfy extensionality.
Martin-Löf has been quoted as saying
Some people feel that when I have presented my meaning explanations I have said nothing. And this is how it should be. The standard semantics should be just that: standard. It should come as no surprise.
Suppose we have given some collection of computational objects which we call “terms”, with an untyped notion of “reduction” or “computation”. These could be terms in a lambda calculus, or elements of a partial combinatory algebra, or even programs in a real-world programming language such as C, Java, or Python. We write $M \Rightarrow V$ to mean that $M$ computes or reduces to $V$ (in some number of steps). We then define simultaneously the following predicates, where $A$, $M$, and $N$ are terms:
“$A$ is a type”
“$M \in A$” ($M$ is an element of type $A$)
“$M = N \in A$” ($M$ and $N$ are equal elements of type $A$)
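To make this concrete, here is a small Python sketch of such a collection of terms and its reduction relation; the tuple encoding and the names `subst` and `reduce` are illustrative choices, not anything fixed by the meaning explanations themselves. Terms are nested tuples, and `reduce` computes the canonical (weak head normal) form of a closed term, when it has one.

```python
# Hypothetical encoding of untyped terms as nested tuples:
#   ('zero',)  ('succ', M)  ('lam', x, M)  ('app', M, N)  ('var', x)  ('nat',)
# Canonical forms are 'zero', 'succ', 'lam' and 'nat'; only 'app' computes.

def subst(term, name, value):
    """Substitute the closed term `value` for the variable `name` in `term`."""
    tag = term[0]
    if tag == 'var':
        return value if term[1] == name else term
    if tag == 'lam':
        _, x, body = term
        return term if x == name else ('lam', x, subst(body, name, value))
    if tag == 'succ':
        return ('succ', subst(term[1], name, value))
    if tag == 'app':
        return ('app', subst(term[1], name, value), subst(term[2], name, value))
    return term  # 'zero', 'nat': no subterms

def reduce(term):
    """M => V: compute the canonical form of a closed term (may fail to halt)."""
    if term[0] == 'app':
        fun = reduce(term[1])
        assert fun[0] == 'lam', "stuck application"
        return reduce(subst(fun[2], fun[1], term[2]))
    return term  # already canonical

# (lambda x. x) zero  =>  zero
assert reduce(('app', ('lam', 'x', ('var', 'x')), ('zero',))) == ('zero',)
```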
The definition is by induction-recursion. We define the property “$A$ is a type” inductively, and simultaneously we define recursively two functions on such $A$: one giving the collection of terms of type $A$, and the other giving the collection of pairs of terms considered equal as elements of $A$. Note that each such collection may itself be given by a prior inductive definition.
The definition is intended to satisfy some properties:
Equality: the relation $M = N \in A$ is symmetric and transitive (a partial equivalence relation), and $M \in A$ holds if and only if $M = M \in A$.
Reverse reduction: if $M \Rightarrow M'$ and $M' \in A$, then $M \in A$; likewise, if $M \Rightarrow M'$ and $M' = N \in A$, then $M = N \in A$ (and similarly for typehood: if $A \Rightarrow A'$ and $A'$ is a type, then $A$ is a type).
By reflexivity, the specification of which terms have a type is technically redundant. It is just reiterating the reflexivity instances of equality. So for this meaning explanation, the recursive part of the definition is assigning types their meaning as partial equivalence relations (PERs) on terms.
We expect $M \in A$ if and only if $M = M \in A$, rather than just the one, usual direction, because we want equality to only be defined on elements. By symmetry and transitivity, $M = N \in A$ implies $M = M \in A$ and $N = N \in A$, so then by the unusual direction, $M \in A$ and $N \in A$.
The reverse reduction properties are saying that the relevant information about a term comes from the value it reduces to. Meaning explanations provide a sense in which type theory is about computational behavior. For this system, the computational behavior of a term is to return a value. Note however that not all terms reduce to a value, for example the self-application combinator $\Omega \coloneqq (\lambda x.\, x\, x)(\lambda x.\, x\, x)$.
The clauses we should include in the inductive-recursive definition depend on what type constructors we want to put into our type theory; in each case the base notion of untyped computation must include corresponding operators. Here are three paradigmatic examples.
Suppose that we have terms $\mathbb{N}$, $0$, and $s(M)$ (for any term $M$), along with a recursor $rec(M, c_0, x.y.\,c_s)$ such that
$rec(0, c_0, x.y.\,c_s)$ computes to whatever $c_0$ computes to,
and
$rec(s(M), c_0, x.y.\,c_s)$ computes to whatever $c_s[M/x,\ rec(M, c_0, x.y.\,c_s)/y]$ computes to.
The term $rec(M, c_0, x.y.\,c_s)$ is intended to denote the value at $M$ of a recursively defined function specified to be $c_0$ at zero and $c_s$ at successors. Here $x$ and $y$ are variables which may occur free in $c_s$ but are bound in $rec(M, c_0, x.y.\,c_s)$, representing the predecessor and the result of the last recursive call, respectively.
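As an aside, the computational behavior of such a recursor can be sketched as follows; numerals are modelled as Python integers for brevity, and the names are again only illustrative.

```python
# Sketch of the recursor: rec(M, c0, x.y.cs) inspects the canonical form of M
# and either returns c0 (at zero) or feeds the predecessor and the result of
# the previous recursive call to cs (at successors).

def rec(m, c0, cs):
    """Value at m of the function specified to be c0 at zero and cs at successors."""
    return c0 if m == 0 else cs(m - 1, rec(m - 1, c0, cs))

# e.g. addition of 3, defined by recursion on the first argument: 4 + 3 = 7
assert rec(4, 3, lambda pred, prev: prev + 1) == 7
```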
Then to explain the natural numbers type, we include a clause in the inductive definition of “$A$ is a type” which says:
if $A \Rightarrow \mathbb{N}$, then $A$ is a type.
The corresponding clause for elements is:
if $A \Rightarrow \mathbb{N}$, then $M \in A$ if either $M \Rightarrow 0$, or $M \Rightarrow s(M')$ for some $M' \in A$.
Note that the recurrence in this inductive definition doesn’t go via the overall recursive definition of $\in$. In other words, to put the above clause more pedantically, we first define, by an auxiliary inductive definition, the collection of terms $M$ such that $M \Rightarrow 0$, or $M \Rightarrow s(M')$ for some $M'$ already in the collection.
Then say: if $A \Rightarrow \mathbb{N}$, then $M \in A$ just when $M$ belongs to this collection.
The clause for equality is:
if $A \Rightarrow \mathbb{N}$, then $M = N \in A$ if either $M \Rightarrow 0$ and $N \Rightarrow 0$, or $M \Rightarrow s(M')$ and $N \Rightarrow s(N')$ with $M' = N' \in A$,
with a similar separation of the inductive definition.
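Read computationally, these clauses amount to the following sketch (Python again, with illustrative names; `reduce` stands in for the untyped computation to canonical form, which is trivial here because numerals are already canonical).

```python
# The auxiliary inductive definition for the natural numbers type: equality of
# elements is decided by the canonical forms that the two terms compute to.

def reduce(term):
    """Stand-in for M => V; zero and successor terms are already canonical here."""
    return term

def eq_nat(m, n):
    """M = N in Nat: both compute to zero, or to successors of equal naturals."""
    m, n = reduce(m), reduce(n)
    if m[0] == 'zero' and n[0] == 'zero':
        return True
    if m[0] == 'succ' and n[0] == 'succ':
        return eq_nat(m[1], n[1])
    return False

def elem_nat(m):
    """M in Nat is just the reflexivity instance M = M in Nat."""
    return eq_nat(m, m)

two = ('succ', ('succ', ('zero',)))
assert elem_nat(two) and not eq_nat(two, ('zero',))
```

Note how membership is literally the diagonal of equality, matching the reading of types as partial equivalence relations above.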
Suppose that we have a term constructor yielding “$A \to B$” for terms $A$ and $B$, and similarly $\lambda x.\, M$ and $M\, N$, such that
$(\lambda x.\, M)\, N$ computes to whatever $M[N/x]$ computes to.
Then to explain the function type, we include a clause in the inductive definition of “$A$ is a type” which says:
if $C \Rightarrow A \to B$, where $A$ and $B$ are types, then $C$ is a type.
The clause for elements is:
if $C \Rightarrow A \to B$ as above, then $M \in C$ just when, for all $N = N' \in A$, we have $M\, N = M\, N' \in B$.
For equality, we have:
if $C \Rightarrow A \to B$ as above, then $M = M' \in C$ just when, for all $N = N' \in A$, we have $M\, N = M'\, N' \in B$.
These are not inductive definitions as we had for the natural numbers; they are merely simple definitions. On the other hand, we are now using the fact that the function mapping types to their collections of element pairs is recursively defined, since in order to make this definition, we must invoke it at $A$ and $B$. The negative occurrence of the element equality of $A$ in this clause (and similar occurrences in other clauses for higher order types) is the reason why the whole explanation cannot just be defined by mutual induction.
This clause for function types is not the one Martin-Löf proposed. Unlike all of Martin-Löf’s clauses, this one gives all terms of a function type directly; it doesn’t just give the canonical terms first, with the rest defined via reduction. This is because it defines the elements in reference to the elimination form, $M\, N$, instead of the introduction form, $\lambda x.\, M$, as Martin-Löf did. Here’s Martin-Löf’s clause for elements (adapting for this non-dependent case):
if $C \Rightarrow A \to B$ as above, then $M \in C$ just when $M \Rightarrow \lambda x.\, M'$ for some $M'$ such that $M'[N/x] = M'[N'/x] \in B$ for all $N = N' \in A$.
The element equality clause gets an analogous change. (Exercise!)
Both styles of function type validate all the usual function rules of extensional type theory.
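The elimination-style clause can be sketched as a predicate built from the element equalities of $A$ and $B$; since the real definition quantifies over all pairs $N = N' \in A$, which is not something a program can check, the sketch below can only spot-check finitely many sample pairs. All names here (`eq_fun`, `apply`, and the use of Python callables as stand-ins for terms) are illustrative.

```python
# Elimination-style clause for A -> B: related arguments go to related results.
# eq_b is the element equality of B, apply is untyped application followed by
# reduction, and eq_a_samples is a finite stock of pairs N = N' in A.

def eq_fun(eq_a_samples, eq_b, apply, m, m_prime):
    """Spot-check of M = M' in A -> B on the given sample pairs."""
    return all(eq_b(apply(m, n), apply(m_prime, n_prime))
               for (n, n_prime) in eq_a_samples)

# Example with A = B = Nat, using Python integers and callables as stand-ins:
eq_nat = lambda x, y: x == y
apply = lambda f, x: f(x)
samples = [(0, 0), (1, 1), (2, 2)]
assert eq_fun(samples, eq_nat, apply, lambda n: n + 1, lambda n: 1 + n)
```

Martin-Löf’s introduction-style clause would instead first require that $M$ compute to a canonical $\lambda$-abstraction before testing its instances.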
Suppose we have a term constructor yielding $\mathrm{Id}_A(M, N)$ for terms $A$, $M$, and $N$, and similarly some term $\mathrm{refl}$. Then to explain the identity type, we add to the inductive definition of “$A$ is a type” the clause:
if $C \Rightarrow \mathrm{Id}_A(M, N)$, where $A$ is a type and $M \in A$ and $N \in A$, then $C$ is a type.
Here we are finally using the fact that typehood and elements are defined by induction-recursion: this clause in the inductive definition of typehood refers to the value of the recursive elementhood function at some smaller type.
The clause for elements is:
if $C \Rightarrow \mathrm{Id}_A(M, N)$ as above, then $P \in C$ just when $P \Rightarrow \mathrm{refl}$ and $M = N \in A$.
The clause for equality is:
if $C \Rightarrow \mathrm{Id}_A(M, N)$ as above, then $P = Q \in C$ just when $P \Rightarrow \mathrm{refl}$ and $Q \Rightarrow \mathrm{refl}$ and $M = N \in A$.
Because we have deliberately only put $\mathrm{refl}$ (and terms reducing to it) into the identity type, this definition of identity types will validate uniqueness of identity proofs (UIP). Moreover, because $\mathrm{Id}_A(M, N)$ is inhabited if and only if $M = N \in A$, it validates the “equality reflection” rule of extensional type theory.
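The identity-type clauses have an equally direct computational reading; in the sketch below (illustrative names again), `eq_a` is the element equality of $A$, and a proof of $\mathrm{Id}_A(M, N)$ is any term computing to $\mathrm{refl}$, admitted only when $M = N \in A$ already holds.

```python
# Identity type clauses: the only canonical proof is refl, and it inhabits
# Id_A(M, N) exactly when M = N in A holds.

def reduce(term):
    """Stand-in for M => V; refl is already canonical here."""
    return term

def elem_id(eq_a, m, n, p):
    """P in Id_A(M, N): P computes to refl and M = N in A."""
    return reduce(p)[0] == 'refl' and eq_a(m, n)

def eq_id(eq_a, m, n, p, q):
    """P = Q in Id_A(M, N): both compute to refl and M = N in A."""
    return elem_id(eq_a, m, n, p) and elem_id(eq_a, m, n, q)

# Any two proofs of the same identity are equal (UIP), and Id_A(M, N) is
# inhabited exactly when M = N in A holds (equality reflection):
eq_nat = lambda x, y: x == y
assert eq_id(eq_nat, 3, 3, ('refl',), ('refl',))
assert not elem_id(eq_nat, 3, 4, ('refl',))
```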
There are simple ways to modify the meaning explanation so as to no longer validate the extensional rules. For instance, if we add to the semantics additional “free variables”, so that there are “neutral terms” which do not compute to any canonical form, and modify the rules of the interpretation so as to say that any normalizable neutral term inhabits every type, then extensionality will no longer hold, since the identity type (like any type) will contain all the variables. This is maybe not a particularly compelling model of intensionality, but there are proposals for better ways to expand the meaning explanation to deal with intensional type theory.
It remains to explain when two types are equal. The easy way is to say that $A = B$ means that $A$ and $B$ are both types and that, for all terms $M$ and $N$, we have $M = N \in A$ if and only if $M = N \in B$.
This validates the conversion rules. It’s extensional in a manner similar to material set theory, which seems appealing, given the rest of the setup. But it turns out to be inconvenient in this case, because for technical reasons, many rules will require more premises in order to be valid. So it’s a trade-off. Nuprl uses a more intensional type equality, inductively defining $A = B$ along with “$A$ is a type”, to simplify the rules.
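For contrast, the “easy” extensional type equality can be sketched by identifying a type with the set of its equal-element pairs, so that two types are equal exactly when those sets coincide; the representation as Python sets and the names below are illustrative only.

```python
# The "easy" extensional type equality: a type is identified with the PER of
# its equal-element pairs, and A = B means the two PERs are literally the same.

def extensional_type_eq(per_a, per_b):
    """A = B: same elements and same equality of elements."""
    return per_a == per_b

# Two types introduced by different clauses but carrying the same pairs count
# as equal, much as two differently-described sets with the same members are
# equal in material set theory:
per_one       = {(('zero',), ('zero',))}
per_other_one = {(('zero',), ('zero',))}
assert extensional_type_eq(per_one, per_other_one)
```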
It remains also to explain the hypothetical judgements, i.e. judgements about open terms with free variables (TODO!).
Intuitively, open terms are multi-argument dependent functions. The hypothetical judgements are defined in terms of the basic judgements on closed terms by substituting appropriate closed terms for the free variables. Respect for equality is enforced in a manner that agrees with the function type constructor.
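This substitution-based reading of hypothetical judgements can be sketched just like the function-type clause; open terms in one variable are modelled below as Python callables, so “substituting a closed term” is just application, and, as before, the real definition ranges over all pairs $N = N' \in A$ while the code can only spot-check finitely many (names illustrative, and the type $B$ is taken non-dependent for simplicity).

```python
# Hypothetical judgement  M = M' in B  [x in A]: all closed instances obtained
# by substituting equal closed terms N = N' in A are equal elements of B.

def hyp_eq(eq_a_samples, eq_b, m, m_prime):
    """Spot-check of the hypothetical judgement on the given sample pairs."""
    return all(eq_b(m(n), m_prime(n_prime)) for (n, n_prime) in eq_a_samples)

eq_nat = lambda a, b: a == b
samples = [(0, 0), (1, 1), (2, 2)]     # finitely many instances of N = N' in Nat
assert hyp_eq(samples, eq_nat, lambda v: v + v, lambda v: 2 * v)
```

The shape is the same as the elimination-style function clause, which is why open terms respect equality in a manner agreeing with the function type constructor.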
The claim is that by inspecting these definitions, one can arrive at the formal rules of type theory for manipulating judgments such as “$A$ is a type”, “$M \in A$”, and “$M = N \in A$”, which no longer refer to any given notion of computation. However, Martin-Löf famously wrote
…there are … certain limits to what verbal explanations can do when it comes to justifying axioms and rules of inference. In the end, everybody must understand for himself.
Moreover, as mentioned above, the meaning explanation (like any particular semantics) validates many other “rules” which are not usually added to type theories, such as rules referring directly to the underlying computation, for instance concluding $M \in A$ from the premises $M \Rightarrow M'$ and $M' \in A$.
(However, in Nuprl, rules such as these are in fact included.) Thus, some discrimination is required, as well as some choices (e.g. whether to add the extensionality rule for identity types).
Per Martin-Löf, Intuitionistic Type Theory (Notes by Giovanni Sambin of a series of lectures given in Padua, June 1980). Napoli, Bibliopolis, 1984. (pdf)
Per Martin-Löf, Constructive mathematics and computer programming (pdf)
Per Martin-Löf. On the Meanings of the Logical Constants and the Justifications of the Logical Laws, Nordic Journal of Philosophical Logic, 1(1): 11–60, 1996 (pdf)
Jonathan Sterling, Type Theory and its Meaning Explanations, arXiv
Proposal for homotopy type theory: