Per Martin-Löf’s meaning explanations of type theory provide a way to justify the correctness of the rules of type theory. The meaning of the judgements of type theory is defined by computation of closed terms to canonical form in some untyped notion of computation, such as a lambda calculus extended with constants for the constructors and eliminators of type theory.
From a technical/mathematical point of view, the meaning explanations give rise to a realizability model of type theory, where realizers are terms in some untyped notion of computation, such as an extended lambda calculus. That is, it can be regarded as a description of how types are assigned to untyped computations. However, the term “meaning explanations” also refers to the “philosophical” or “pre-mathematical” component, which regards this as an “explanation” of the “constructive validity” of the judgements of the type theory.
The standard meaning explanations validate not just the rules of intensional type theory but also the stronger rules of extensional type theory. Thus, if one regards it as the “intended semantics” then one may believe in the extensional theory. However, like any particular semantics for a formal theory, the meaning explanation also validates many other “rules” which are not usually added to type theories, so it is not so clear that meaning explanations in their own right argue for extensionality. They can also be modified so as to no longer satisfy extensionality.
Martin-Löf has been quoted as saying
Some people feel that when I have presented my meaning explanations I have said nothing. And this is how it should be. The standard semantics should be just that: standard. It should come as no surprise.
Suppose we have given some collection of computational objects which we call “terms”, with an untyped notion of “reduction” or “computation”. These could be terms in a lambda calculus, or elements of a partial combinatory algebra, or even programs in a real-world programming language such as C, Java, or Python. We write $x\Rightarrow y$ to mean that $x$ computes or reduces to $y$ (in some number of steps). We then define simultaneously the following predicates, where $a,b,A$ are terms:

* “$A$ is a type”;
* “$a$ is a term of type $A$” (written $a:A$);
* “$a$ and $b$ are equal terms of type $A$” (written $a=b:A$).
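As an illustration of the untyped substrate being assumed here, the following is a minimal sketch in Python of such a collection of “terms” with a computation relation $x \Rightarrow y$. The tuple encoding and the names `step`, `subst`, and `computes_to` are illustrative choices, not part of the text; canonical forms are taken to be `zero`, `succ`, and `lam`.

```python
# A toy untyped term language with a computation relation x => y.
# Terms are nested tuples; zero / succ / lam are canonical forms,
# anything else that does not step is stuck.

def subst(t, x, a):
    """t[a/x], adequate for substituting closed arguments."""
    tag = t[0]
    if tag == "var":
        return a if t[1] == x else t
    if tag == "lam":
        return t if t[1] == x else ("lam", t[1], subst(t[2], x, a))
    if tag == "app":
        return ("app", subst(t[1], x, a), subst(t[2], x, a))
    if tag == "succ":
        return ("succ", subst(t[1], x, a))
    return t                                # zero

def step(t):
    """One computation step, or None if t is canonical or stuck."""
    if t[0] == "app":
        f, a = t[1], t[2]
        if f[0] == "lam":                   # beta: app(lam x.b, a) => b[a/x]
            return subst(f[2], f[1], a)
        s = step(f)
        return ("app", s, a) if s else None
    return None

def computes_to(t):
    """x => y : iterate steps until no step applies."""
    while True:
        s = step(t)
        if s is None:
            return t
        t = s

# app(lam x. succ(x), zero)  =>  succ(zero)
ident_succ = ("lam", "x", ("succ", ("var", "x")))
print(computes_to(("app", ident_succ, ("zero",))))  # ('succ', ('zero',))
```

Note that `succ` is canonical without evaluating its argument: computation goes only to (weak-head) canonical form, which is all the meaning explanations require.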
The definition is by induction-recursion. We define the property “$A$ is a type” inductively, and simultaneously we define recursively two functions assigning to each type $A$ the collection of terms of type $A$ and the collection of pairs of equal terms of type $A$, respectively. Moreover, each such collection may itself be defined inductively.
The clauses which we include in each case depend on what types we want to put into our type theory; in each case the base notion of untyped computation must include a corresponding structure. Here are three paradigmatic examples.
Suppose that we have terms $N$, $0$, and $s(x)$, along with a recursor $rec(n,z_0,x r. z_s)$ such that

$$rec(0,\, z_0,\, x r.\, z_s) \Rightarrow z_0$$

and

$$rec(s(n),\, z_0,\, x r.\, z_s) \Rightarrow z_s[n/x,\; rec(n, z_0, x r.\, z_s)/r].$$
The term $rec(n,z_0,x r. z_s)$ is intended to denote the value at $n$ of a recursively defined function specified to be $z_0$ at zero and $z_s$ at successors. Here $x$ and $r$ are variables which may occur free in $z_s$ but are bound in $rec(n,z_0,x r. z_s)$, representing the current index and the result of the last recursive call, respectively.
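The behaviour of the recursor can be sketched directly. This is an illustrative encoding, not from the text: numerals are the tuples `("zero",)` and `("succ", n)`, and the branch $z_s$ is modelled semantically as a Python function of the bound variables $x$ and $r$.

```python
# Sketch of the recursor's two computation rules:
#   rec(0,    z0, x r.zs)  =>  z0
#   rec(s(n), z0, x r.zs)  =>  zs[n/x, rec(n, z0, x r.zs)/r]

def rec(n, z0, zs):
    if n == ("zero",):
        return z0                      # value at zero
    m = n[1]                           # n = ("succ", m)
    return zs(m, rec(m, z0, zs))       # x := m, r := result of recursive call

def numeral(k):
    t = ("zero",)
    for _ in range(k):
        t = ("succ", t)
    return t

# Example: addition, add(a, b) = rec(a, b, x r. s(r))
def add(a, b):
    return rec(a, b, lambda x, r: ("succ", r))

print(add(numeral(2), numeral(3)) == numeral(5))  # True
```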
Then to explain the natural numbers type, we include a clause in the inductive definition of “$A$ is a type” which says

* $N$ is a type.
The corresponding clause for elements is the inductive definition:

* If $a \Rightarrow 0$, then $a:N$.
* If $a \Rightarrow s(b)$ and $b:N$, then $a:N$.
And for equality:

* If $a \Rightarrow 0$ and $b \Rightarrow 0$, then $a=b:N$.
* If $a \Rightarrow s(a')$ and $b \Rightarrow s(b')$ and $a'=b':N$, then $a=b:N$.
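These inductive clauses for $N$ translate almost literally into decision procedures on closed terms. In this sketch (tuple encoding `("zero",)` / `("succ", n)` and the names `in_N`, `eq_N` are assumptions, not from the text) the terms are taken to be already in canonical form at each layer, standing in for the computations $a \Rightarrow 0$ and $a \Rightarrow s(b)$.

```python
# The inductive membership and equality clauses for the type N.

def in_N(a):
    """a : N  iff  a => 0, or a => s(b) with b : N."""
    if a == ("zero",):
        return True
    if a[0] == "succ":
        return in_N(a[1])
    return False

def eq_N(a, b):
    """a = b : N  iff  both are 0, or both are successors of equals."""
    if a == ("zero",) and b == ("zero",):
        return True
    if a[0] == "succ" and b[0] == "succ":
        return eq_N(a[1], b[1])
    return False

print(in_N(("succ", ("succ", ("zero",)))))     # True
print(eq_N(("succ", ("zero",)), ("zero",)))    # False
```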
Suppose that we have a term constructor yielding “$A\to B$” for terms $A$ and $B$, and similarly $app(f,a)$ and $\lambda x.t$, such that

$$app(\lambda x.t,\; a) \Rightarrow t[a/x].$$
Then to explain the function type, we include a clause in the inductive definition of “$A$ is a type” which says

* If $A$ is a type and $B$ is a type, then $A\to B$ is a type.
The clause for elements is:

* $f : A\to B$ iff $f \Rightarrow \lambda x.t$ for some $t$, and whenever $a:A$ we have $app(f,a):B$, and whenever $a=a':A$ we have $app(f,a)=app(f,a'):B$.
This is not an inductive definition as we had for the natural numbers; it is merely a simple definition. On the other hand, we are now using the fact that the function mapping types to their collections of elements is recursively defined, since in order to make this definition, we must invoke it at $A$ and $B$.
For equality, of course we have:

* $f = g : A\to B$ iff $f:A\to B$ and $g:A\to B$, and whenever $a:A$ we have $app(f,a)=app(g,a):B$.
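The clause for $A\to B$ quantifies over *all* elements of $A$, so membership in a function type is not decidable in general. As an illustrative sketch only (the names `check_fun`, `in_N`, and the tuple encoding of numerals are assumptions, not from the text), one can at best spot-check the clause on finitely many sample elements; here $A = B = N$ and $f$ is modelled semantically as a Python function rather than a term computing to a lambda.

```python
# Spot-checking "f : A -> B  iff  app(f, a) : B whenever a : A"
# on a finite sample of elements of A.  This is an approximation:
# the real clause quantifies over every element of A.

def check_fun(apply_f, in_B, samples_of_A):
    """True if app(f, a) : B for each of the given sample elements a : A."""
    return all(in_B(apply_f(a)) for a in samples_of_A)

def in_N(a):
    """Membership in N, as in the natural-numbers clauses."""
    return a == ("zero",) or (a[0] == "succ" and in_N(a[1]))

# f = lam x. succ(x), modelled as a Python function
f = lambda a: ("succ", a)

samples = [("zero",), ("succ", ("zero",))]
print(check_fun(f, in_N, samples))    # True
```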
Suppose we have a term constructor yielding $Id_A(a,b)$ for terms $A,a,b$, and similarly some term $refl$. Then to explain the identity type, we add to the inductive definition of “$A$ is a type” the clause

* If $A$ is a type and $a:A$ and $b:A$, then $Id_A(a,b)$ is a type.
Here we are finally using the fact that typehood and elements are defined by induction-recursion: this clause in the inductive definition of typehood must refer to the value of the recursive elementhood function at some smaller type.
The clause for elements is:

* $p : Id_A(a,b)$ iff $p \Rightarrow refl$ and $a=b:A$.
Here we use not only the fact that elementhood is recursively defined, but also that it is defined by mutual recursion with equality. The clause for equality is:

* $p = q : Id_A(a,b)$ iff $p \Rightarrow refl$, $q \Rightarrow refl$, and $a=b:A$.
Because we have deliberately only put $refl$ (and terms reducing to it) into the identity type, this definition of identity types will validate the “equality reflection” rule of extensional type theory.
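Since the only canonical inhabitant is $refl$, inhabitation of $Id_A(a,b)$ collapses to the judgemental equality $a=b:A$, which is exactly what equality reflection asserts. A minimal self-contained sketch (tuple encoding of numerals and the names `eq_N`, `in_Id` are illustrative assumptions):

```python
# Identity-type clauses:
#   p : Id_A(a, b)  iff  p => refl  and  a = b : A.
# Here A = N, with eq_N standing in for "a = b : N".

def eq_N(a, b):
    if a == ("zero",) and b == ("zero",):
        return True
    if a[0] == "succ" and b[0] == "succ":
        return eq_N(a[1], b[1])
    return False

def in_Id(eq_A, p, a, b):
    """p inhabits Id_A(a, b) iff p is refl and a = b : A holds."""
    return p == ("refl",) and eq_A(a, b)

# refl inhabits Id_N(s(0), s(0)) but not Id_N(0, s(0)):
print(in_Id(eq_N, ("refl",), ("succ", ("zero",)), ("succ", ("zero",))))  # True
print(in_Id(eq_N, ("refl",), ("zero",), ("succ", ("zero",))))            # False
```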
There are simple ways to modify the meaning explanation so that it no longer validates the extensional rules. For instance, if we add to the semantics additional “free variables”, so that there are “neutral terms” which do not compute to any canonical form, and modify the interpretation so that any normalizable neutral term inhabits every type, then extensionality no longer holds, since the identity type (like any type) will contain all the variables. This is perhaps not a particularly compelling model of intensionality, but there are proposals for better ways to extend the meaning explanation to deal with intensional type theory.
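The effect of that modification can be sketched by adding one clause saying that a neutral term (here, a free variable) inhabits every type: then $Id_N(0, s(0))$ is inhabited even though $0$ and $s(0)$ are not equal, so equality reflection fails. (Tuple encoding and all names below are illustrative assumptions, as before.)

```python
# Meaning explanation modified with neutral terms: any free variable
# inhabits every type, including every identity type.

def is_neutral(t):
    return t[0] == "var"

def eq_N(a, b):
    if a == ("zero",) and b == ("zero",):
        return True
    if a[0] == "succ" and b[0] == "succ":
        return eq_N(a[1], b[1])
    return False

def in_Id(eq_A, p, a, b):
    if is_neutral(p):                  # new clause: neutrals inhabit every type
        return True
    return p == ("refl",) and eq_A(a, b)

# A variable inhabits Id_N(0, s(0)) although 0 and s(0) are unequal,
# so inhabitation of the identity type no longer implies equality:
print(in_Id(eq_N, ("var", "x"), ("zero",), ("succ", ("zero",))))  # True
```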
The claim is that by inspecting these definitions, one can arrive at the formal rules of type theory for manipulating judgements such as “$A$ is a type”, “$a:A$”, and “$a=b:A$”, which no longer refer to any given notion of computation. However, Martin-Löf famously wrote
…there are … certain limits to what verbal explanations can do when it comes to justifying axioms and rules of inference. In the end, everybody must understand for himself.
Moreover, as mentioned above, the meaning explanation (like any particular semantics) validates many other “rules” which are not usually added to type theories, such as untyped “direct computation” rules, for instance that if $a:A$ and $a \Rightarrow a'$, then also $a':A$.
(However, in Nuprl rules such as these are, in fact, included.) Thus, some discrimination is required, as well as some choices (e.g. whether to add the extensionality rule for identity types).
Per Martin-Löf, “Constructive mathematics and computer programming” (pdf)