This page is part of the Initiality Project.
Here is a proposed slate of choices to make regarding type theories, presentations, definitions, and proof structure. (“I” refers to Mike Shulman.) This is only a first proposal and may change; discussion is underway at the nForum here.
Of the various ways to describe a categorical model of dependent types, I suggest to use categories with families, presented in the style called by Awodey a natural model, but fully algebraically. That is, for us a category with families is a (small) category $C$ together with two presheaves $Tm, Ty \in Psh(C)$ and a morphism $\partial : Tm \to Ty$ that is algebraically representable (or, one might say, “represented”): it is equipped with a function assigning to each morphism $A : y(\Gamma) \to Ty$, where $\Gamma \in C$ and $y$ denotes the Yoneda embedding, an object $\Gamma.A \in C$ and a pullback square
$$\begin{array}{ccc} y(\Gamma.A) & \longrightarrow & Tm \\ \downarrow & & \downarrow \partial \\ y(\Gamma) & \underset{A}{\longrightarrow} & Ty. \end{array}$$
This is equivalent to the usual notion of category with families (and thereby also to the usual notion of “category with attributes”, hence also to discrete or full split comprehension categories). But, as noted by Awodey, focusing attention on $Tm$ and $Ty$ as objects of $Psh(C)$ has the advantage that the additional structure corresponding to the rules of a type theory is also naturally formulated in $Psh(C)$. For instance, if $P_\partial(X)$ denotes the domain of the local exponential $(X \times Ty \to Ty)^{\partial}$ in $Psh(C)/Ty$ (the value at $X$ of the polynomial functor associated to $\partial$), then the rules of $\Pi$-formation and $\lambda$-formation correspond semantically to giving a morphism $P_\partial(\partial) \to \partial$ in the arrow category of $Psh(C)$.
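Concretely (a sketch, with $P_\partial$ as ad hoc notation for the polynomial functor just described), such a morphism of arrows is a commutative square
$$\begin{array}{ccc} P_\partial(Tm) & \overset{\lambda}{\longrightarrow} & Tm \\ P_\partial(\partial) \downarrow & & \downarrow \partial \\ P_\partial(Ty) & \underset{\Pi}{\longrightarrow} & Ty \end{array}$$
in $Psh(C)$, and by Awodey's analysis requiring this square to be a pullback corresponds to adding the $\beta$- and $\eta$-rules for $\Pi$-types.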
Note also that $\partial : Tm \to Ty$ can be described in the internal extensional type theory of $Psh(C)$ by the context $(\mathsf{Ty} : \mathsf{Type},\ \mathsf{Tm} : \mathsf{Ty} \to \mathsf{Type})$; thus the internal type theory of $Psh(C)$ is a sort of “semantic logical framework” for structuring the categorical semantics. My inclination right now is that while we can rely on this for intuition and explanation, we should also formulate all this structure precisely in traditional category-theoretic language, to avoid any appearance of circularity in our construction of categorical models of dependent type theory.
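For instance (an illustration only, with ad hoc constant names), in this internal language the $\Pi$-type structure of the previous paragraph amounts to postulating constants
$$\Pi : \prod_{A:\mathsf{Ty}} \big((\mathsf{Tm}(A) \to \mathsf{Ty}) \to \mathsf{Ty}\big), \qquad \lambda : \prod_{A:\mathsf{Ty}} \prod_{B:\mathsf{Tm}(A)\to\mathsf{Ty}} \Big(\big(\textstyle\prod_{a:\mathsf{Tm}(A)} \mathsf{Tm}(B\,a)\big) \to \mathsf{Tm}(\Pi\,A\,B)\Big),$$
with the $\beta$- and $\eta$-rules corresponding to each map $\lambda_{A,B}$ being invertible.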
A morphism of categories with families $(C, \partial_C) \to (D, \partial_D)$ is a functor $F : C \to D$ together with a commutative square in $Psh(C)$ (where $F^*$ denotes restriction of presheaves along $F$):
$$\begin{array}{ccc} Tm_C & \longrightarrow & F^* Tm_D \\ \partial_C \downarrow & & \downarrow F^* \partial_D \\ Ty_C & \longrightarrow & F^* Ty_D \end{array}$$
which strictly respects the chosen pullback squares in an appropriate way. This defines the category $CwF$ in which we hope to construct an initial object.
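Spelled out (a sketch of what “appropriate” should mean here, writing $F A \in Ty_D(F\Gamma)$ for the image of $A \in Ty_C(\Gamma)$ under the bottom map of the square above), strictness asks that
$$F(\Gamma.A) \;=\; F\Gamma.(F A)$$
for all $\Gamma$ and $A$, and that $F$ carry the chosen pullback square for $A$ to the chosen pullback square for $F A$.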
I suggest to start with a type theory containing only $\Pi$-types. This seems the simplest situation where we will have to deal with all the basic issues. We can then add additional structure as desired.
Note that this type theory is completely empty unless we also assert some axioms. Ideally, I would like the “initiality theorem” to really be a “freeness theorem”, saying that the syntactic category of a type theory with axioms is freely generated by those axioms. Of course, particular axioms can just be added as rules to a theory, but such a general freeness theorem should be parametric over all possible axioms.
Since in a dependent type theory, each axiom can involve previous axioms in its type, it seems that the general semantic formulation will have to involve some kind of a “cell complex” in the category $CwF$. However, we can leave this to the future for the moment and concentrate on the inductive clauses for interpreting the basic rules of $\Pi$-types.
I propose to use syntax with named variables, quotiented by $\alpha$-equivalence. My reason is that our goal is expositional and sociological, and named variables keep the syntax as close as possible to what “users” of type theory are familiar with. Since we are humans writing for humans to read, I don’t want to deal explicitly with de Bruijn indices, and neither do I want to pretend that we are using de Bruijn indices when we aren’t really using them.
If this proposal is adopted, then the raw terms for our simple type theory will be inductively generated by:
$$M, N, A, B \;::=\; x \;\mid\; \Pi(x{:}A).\,B \;\mid\; \lambda(x{:}A).\,B.\,M \;\mid\; \mathrm{app}_{(x{:}A).B}(M, N)$$
We define $\alpha$-equivalence, which renames bound variables, as usual. Note that the variable $x$ in $\Pi(x{:}A).B$ scopes only over $B$, in $\lambda(x{:}A).B.M$ it scopes only over $B$ and $M$, and in $\mathrm{app}_{(x{:}A).B}(M,N)$ it scopes only over $B$. Capture-avoiding substitution is likewise defined as usual. Our partial interpretation will be defined recursively over the above inductive definition of terms, and proven to respect $\alpha$-equivalence.
Following Streicher, all our terms are fully (or at least “sufficiently”) annotated. For instance, we write $\mathrm{app}_{(x{:}A).B}(M, N)$ rather than just $M\,N$. (It is, at best, unclear whether any less than fully annotated syntax can be used in an initiality theorem without preprocessing that is tantamount to annotating it.)
In particular, this means that given $\Gamma$ and $M$, if $\Gamma \vdash M : A$ can be derived for any value of $A$, then there is a canonical $A$ such that $\Gamma \vdash M : A$ can be deduced syntactically, or synthesized, from $\Gamma$ and $M$. For instance, if $\mathrm{app}_{(x{:}A).B}(M, N)$ has any type, then it must have the type $B[N/x]$. On the other hand, the conversion rule implies that $\mathrm{app}_{(x{:}A).B}(M, N)$ must also have any type that is judgmentally equal to $B[N/x]$.
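To make this concrete, here is a small sketch in Haskell of the annotated raw syntax and of a function reading the canonically synthesized type off a context and a term. All names are hypothetical, no premises are checked, and nothing here is part of the proposal itself; it only illustrates that the annotations determine the type.

```haskell
-- Hypothetical sketch: fully annotated raw terms, and reading off the
-- canonically synthesized type from a context and the annotations alone.
import Data.List (delete, union)

type Var = String

data Tm
  = V Var
  | Pi  Var Tm Tm       -- Pi (x:A). B            (x bound in B)
  | Lam Var Tm Tm Tm    -- lambda (x:A). B. M     (x bound in B and M)
  | App Var Tm Tm Tm Tm -- app_{(x:A).B}(M, N)    (x bound in B only)
  deriving (Eq, Show)

freeVars :: Tm -> [Var]
freeVars (V x)           = [x]
freeVars (Pi x a b)      = freeVars a `union` delete x (freeVars b)
freeVars (Lam x a b m)   = freeVars a `union` delete x (freeVars b `union` freeVars m)
freeVars (App x a b m n) =
  freeVars a `union` delete x (freeVars b) `union` freeVars m `union` freeVars n

-- A name based on x avoiding everything in `used`.
fresh :: [Var] -> Var -> Var
fresh used x = head [v | v <- x : [x ++ show i | i <- [0 :: Int ..]], v `notElem` used]

-- Capture-avoiding substitution t[s/y].
subst :: Var -> Tm -> Tm -> Tm
subst y s t = case t of
    V x | x == y    -> s
        | otherwise -> V x
    Pi  x a b     -> let (x', [b'])     = avoid x [b]    in Pi  x' (go a) (go b')
    Lam x a b m   -> let (x', [b', m']) = avoid x [b, m] in Lam x' (go a) (go b') (go m')
    App x a b m n -> let (x', [b'])     = avoid x [b]    in App x' (go a) (go b') (go m) (go n)
  where
    go = subst y s
    -- Rename the bound variable if it shadows y or would capture a free variable of s.
    avoid x bodies
      | x == y || x `elem` freeVars s =
          let x' = fresh (y : freeVars s ++ concatMap freeVars bodies) x
          in (x', map (subst x (V x')) bodies)
      | otherwise = (x, bodies)

-- The canonical type synthesized from a context and a term, read off from the
-- annotations; no premises are checked, so this is not a typechecker.
synthType :: [(Var, Tm)] -> Tm -> Maybe Tm
synthType ctx (V x)           = lookup x ctx
synthType _   (Lam x a b _)   = Just (Pi x a b)       -- lambda synthesizes Pi(x:A).B
synthType _   (App x _ b _ n) = Just (subst x n b)    -- app synthesizes B[N/x]
synthType _   (Pi {})         = Nothing               -- Pi(x:A).B is a type, not a term
```

The `App` clause is where the annotations pay off: the canonical type $B[N/x]$ is computed without inspecting $M$ at all. Of course, by the conversion rule a term also has every type judgmentally equal to the synthesized one; this function computes only the canonical representative.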
Type theorists have a standard technique for distinguishing these two situations, known as bidirectional typechecking. Instead of one judgment $\Gamma \vdash M : A$, we have two typing judgments: a synthesizing judgment $\Gamma \vdash M \Rightarrow A$, in which the type $A$ is an output computed from $\Gamma$ and $M$, and a checking judgment $\Gamma \vdash M \Leftarrow A$, in which the type $A$ is an input that $M$ is checked against.
There are various advantages of bidirectional typechecking (experts, feel free to add more here). It is unclear at present whether any of these advantages are relevant to categorical semantics, but by using a bidirectional framework we are better placed to take advantage of them if they do become relevant. More importantly, it seems to me that the natural structure of the semantic interpretation, as suggested by Peter Lumsdaine (see below), matches the syntactic bidirectional picture quite closely, and it may be clarifying to make that analogy more precise.
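The two judgments would be related by the usual mode-switching rule, which also plays the role of the conversion rule; as a sketch:
$$\frac{\Gamma \vdash M \Rightarrow A \qquad \Gamma \vdash A \equiv B}{\Gamma \vdash M \Leftarrow B}$$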
Since all our terms are fully annotated, all of our formation, introduction, and elimination rules can synthesize their types. However, their premises usually involve checking their subterms against appropriate types, so the above mode-switching rule gets applied a lot (it is the only way, in our system, to derive a checking judgment). This might be something one would want to optimize away in an implementation, but for our semantic purposes it seems irrelevant.
For instance, here are the formation, introduction, and elimination rules for $\Pi$-types:
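(What follows is a sketch in the conventions above; the exact distribution of checking and synthesizing premises is open to discussion.)
$$\frac{\Gamma \vdash A \;\mathsf{type} \qquad \Gamma, x{:}A \vdash B \;\mathsf{type}}{\Gamma \vdash \Pi(x{:}A).B \;\mathsf{type}} \qquad\qquad \frac{\Gamma \vdash A \;\mathsf{type} \qquad \Gamma, x{:}A \vdash B \;\mathsf{type} \qquad \Gamma, x{:}A \vdash M \Leftarrow B}{\Gamma \vdash \lambda(x{:}A).B.M \;\Rightarrow\; \Pi(x{:}A).B}$$
$$\frac{\Gamma \vdash A \;\mathsf{type} \qquad \Gamma, x{:}A \vdash B \;\mathsf{type} \qquad \Gamma \vdash M \Leftarrow \Pi(x{:}A).B \qquad \Gamma \vdash N \Leftarrow A}{\Gamma \vdash \mathrm{app}_{(x{:}A).B}(M, N) \;\Rightarrow\; B[N/x]}$$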
(Note that the “is a type” judgment is not bidirectional. With Russell universes we could consider making it so.)
The “hypothesis” rule is that a variable in the context synthesizes its assumed type:
$$\frac{(x{:}A) \in \Gamma}{\Gamma \vdash x \Rightarrow A}$$
In implementations of bidirectional typechecking, the equality judgments $\Gamma \vdash A \equiv B$ and $\Gamma \vdash M \equiv N : A$ are sometimes made bidirectional as well. But at the moment, I don’t think there will be any benefit to us in doing likewise. In particular, we would like our setup to be general enough to encompass type theories whose judgmental equality is “poorly behaved”, e.g. undecidable, such as those with an equality reflection rule. Thus the equality judgments should have primitive rules asserting reflexivity, symmetry, transitivity, and congruence properties.
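Purely as an illustration of the intended shape, for the type equality judgment these would include rules such as
$$\frac{\Gamma \vdash A \;\mathsf{type}}{\Gamma \vdash A \equiv A} \qquad \frac{\Gamma \vdash A \equiv B}{\Gamma \vdash B \equiv A} \qquad \frac{\Gamma \vdash A \equiv B \qquad \Gamma \vdash B \equiv C}{\Gamma \vdash A \equiv C} \qquad \frac{\Gamma \vdash A \equiv A' \qquad \Gamma, x{:}A \vdash B \equiv B'}{\Gamma \vdash \Pi(x{:}A).B \equiv \Pi(x{:}A').B'}$$
together with analogous rules for the term equality judgment and the $\beta$-rule (and, if desired, the $\eta$-rule) for $\Pi$-types.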
Streicher also includes a basic judgment for equality of contexts. Peter suggests that that isn’t necessary, because
any contexts that would be judgementally equal in the system with that judgement still end up canonically judgementally isomorphic, by the substitution that consists of not renaming variables.
I’m inclined to follow Peter’s suggestion here, but am open to counterarguments.
Streicher also includes a number of explicit primitive rules involving things like validity of contexts and type-preservation by substitution. I’m not sure whether these are necessary or not; it seems that maybe by presenting our type theories in “the usual way” we should always be able to ensure that substitution is admissible, even for type theories that are “ill-behaved” in other ways (such as equality reflection). But maybe there are good reasons to include these rules explicitly?
The following proposal for the inductive structure of the argument is based on Streicher’s original approach, with a modification suggested by Peter Lumsdaine, and further tweaks to incorporate bidirectionality and build in naturality using the natural-models technology.
Given a finite set of variables $V$, the presheaf of environments for $V$ is $Tm^V$. We will define, by structural induction on any raw term $M$ and any finite set of variables $V$, the following partial morphisms of presheaves:

* $\llbracket M \rrbracket^{ty}_V : Tm^V \rightharpoonup Ty$, the interpretation of $M$ as a type;
* $\llbracket M \rrbracket^{tm}_V : Tm^V \rightharpoonup Tm$, the interpretation of $M$ as a (synthesizing) term;
* $\llbracket M \rrbracket^{chk}_V : Tm^V \times Ty \rightharpoonup Tm$, the interpretation of $M$ as a term checked against a given semantic type.

In fact, the third of these is not actually inductive at all: it is simply a semantic counterpart of the bidirectional mode-switching judgment. We define $\llbracket M \rrbracket^{chk}_V$ to be the domain restriction of $\llbracket M \rrbracket^{tm}_V \circ \pi_1$ to the equalizer of $\partial \circ \llbracket M \rrbracket^{tm}_V \circ \pi_1$ and the restriction of the second projection $\pi_2 : Tm^V \times Ty \to Ty$ to its domain (that is, to the domain of definition of $\llbracket M \rrbracket^{tm}_V \circ \pi_1$).
The other inductive clauses of the partial interpretation should precisely mirror the bidirectional rules of our type theory, using recursive calls to the above three operations as counterparts of the three syntactic judgments $\Gamma \vdash A \;\mathsf{type}$, $\Gamma \vdash M \Rightarrow A$, and $\Gamma \vdash M \Leftarrow A$. Note, though, that at this point this is only an analogy: we are simply inducting over the raw term structure of $M$, not making any actual reference to these judgments.
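For instance (a sketch in the notation above), one expects the clause for a variable to be the semantic counterpart of the hypothesis rule:
$$\llbracket x \rrbracket^{tm}_V \;=\; \pi_x : Tm^V \longrightarrow Tm \quad \text{(total) when } x \in V,$$
while for $x \notin V$ it is the nowhere-defined partial morphism, and $\llbracket x \rrbracket^{ty}_V$ has empty domain, since in this theory a bare variable is never a type.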
This suggestion incorporates the innovation, proposed by Peter as an improvement over Streicher’s proof, of taking the interpretation of the context as an input to the partial interpretation functions rather than an output (see in particular his point (7)). The elements of the presheaf $Tm^V$ at some object $\Gamma \in C$ are what Peter calls “environments” for $V$. Note that we do not require that $V$ contain all free variables occurring in $M$, but if it omits some then the partial interpretation morphisms should turn out to have empty domain.
I have also reformulated the interpretation function to “put naturality into the goals of the original induction” as suggested by Peter in his point (8), by directly defining partial morphisms in the “semantic logical framework” $Psh(C)$. This has the additional advantage that in writing down the inductive clauses we can refer directly to the natural-models formulation of the corresponding structure on $C$, as sketched above.
Although the partial interpretation functions sketched above incorporate a number of “tweaks” to make them more pleasing, this was already the short part: Streicher does it in only 5 pages. (His pages are very short, so this isn’t actually very much, plus his type theory is more complicated.) The long part is proving that these interpretation functions are total on derivable judgments; this takes Streicher 40 pages.
The outline should be the same as in Streicher’s proof and Peter’s proposal:
Prove by induction on raw terms that the partial interpretation functions take syntactic substitution (including weakening) to semantic substitution (the corresponding operation on the presheaves $Tm^V$ of environments). This takes Streicher 17 pages.
Prove by induction on derivations that the partial interpretation functions are total on derivable judgments. This takes Streicher 20 pages.
To complete the proof of initiality after this, it remains to:
Construct the term model. This takes Streicher 12 pages.
Show that the above interpretation assembles into a morphism in $CwF$ from the term model to $C$. As far as I can tell, Streicher doesn’t actually do this; he just claims that it can be done.
Show that this morphism is unique. Streicher doesn’t actually do this either.