How do you type check MILL, but with projections rather than matching to eliminate ⊗? Mike Shulman gave one such system, but its type-checking algorithm requires two passes (if this algorithm were incorporated into the rules, there would be an extra intermediate judgment with “activeness” annotations on the types), and it doesn’t handle additional type constructors such as linear implication.
This page is intended to give algorithmic rules for a system with tensor projections and linear implication, but at present it is unverified and its interpretation into monoidal categories is unclear.
, , and are inputs; , , and are outputs.
The observation in Allais motivating the use of “” was that a term generally doesn’t use all the resources it’s given, so the rest are passed back out, tensored with the result type.
and are contexts with some natural number of semicolon “scope” delimiters. and are “usage” sets, which are subsets of the (domains of the) respective contexts. In Allais, the usage sets are represented as bit vectors (which is a fine way to implement them). Unlike Allais, the usage sets before and after are over different contexts, because type checking a term generally adds projections to the context as potentially-consumed resources.
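The representation just described can be sketched as follows. This is a hypothetical illustration, not the page's actual implementation: the class and function names are assumptions, contexts are lists of scopes of (term, type) entries, and usage sets are bit vectors as in Allais.

```python
class Context:
    """A context: a list of scopes (outermost first), each a list of (term, type) entries."""
    def __init__(self):
        self.scopes = [[]]

    def push_scope(self):
        """Add a semicolon 'scope' delimiter."""
        self.scopes.append([])

    def pop_scope(self):
        return self.scopes.pop()

    def add(self, term, ty):
        """Extend the innermost scope with a new entry (variable or projection)."""
        self.scopes[-1].append((term, ty))

    def domain_size(self):
        return sum(len(s) for s in self.scopes)

# Usage sets as integer bitmasks over the context's domain.
def mark_used(usage, index):
    """Remove entry `index` from the set of still-available resources."""
    return usage & ~(1 << index)

def is_available(usage, index):
    return bool(usage & (1 << index))

ctx = Context()
ctx.add("x", "A")
ctx.push_scope()
ctx.add("y", "B")
usage = (1 << ctx.domain_size()) - 1   # all resources available
usage = mark_used(usage, 1)            # consume y
assert is_available(usage, 0) and not is_available(usage, 1)
```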
The purpose of delimited scopes in the context is to remember by when we must have consumed tensor projections. Currently, only lambda adds a scope delimiter. The current algorithm puts projections in the outermost scope it can get away with. This approach will not work once additive connectives are added.
In the rules, I implicitly coerce a context to the usage set of its entire domain. The abbreviation “weakens” to include all entries in but not . Set theoretically speaking: . “” and variants are used as metavariables for contexts without any delimiters.
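The weakening operation just described can be sketched as a set operation. This is a hedged illustration with assumed names: a usage set over a smaller context is extended to the larger context by adding every new entry (so the new entries count as still available).

```python
def weaken(usage, old_dom, new_dom):
    """Extend a usage set over old_dom to one over new_dom by including
    all entries of new_dom that are not in old_dom.
    Set-theoretically: usage ∪ (new_dom \\ old_dom)."""
    return usage | (new_dom - old_dom)

old = {"x", "y"}
new = {"x", "y", "fst(f x)", "snd(f x)"}
# x was still available; the two new projections become available too:
assert weaken({"x"}, old, new) == {"x", "fst(f x)", "snd(f x)"}
```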
Some important (intended) properties of the judgment form are that if , then:
Note that the context grows while the usage shrinks. To be precise, these usage sets are the sets of resources not yet used (and thus still available). should be understood as the actual usage indicated by , in light of the additional resources in . So as the type checker proceeds, more resources become known, but fewer remain available.
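These intended invariants can be stated as a runtime check. A minimal sketch, with assumed names, treating domains and usage sets as Python sets:

```python
def check_invariants(dom_before, avail_before, dom_after, avail_after):
    """After checking a term: the context only grows, and among the
    resources already known beforehand, only fewer can remain available."""
    assert dom_before <= dom_after, "context must grow"
    assert avail_after & dom_before <= avail_before, \
        "previously known resources can be consumed, never restored"

# Checking a term added a projection p and consumed x:
check_invariants({"x"}, {"x"}, {"x", "p"}, {"p"})
```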
There are generator morphisms, which can be used any number of times. means you can get whenever you want.
Functions take exactly one argument, and have one or more results. Semantically, the results are tensored () together. Following Shulman, multiple results are accessed using “generalized Sweedler notation”. For example, if has two results, they are typically written as and . Using both results is a single use of linear implication elimination. This is actually the only way to use tensor projections, currently. Combining it with function application was purely a matter of convenience, and is probably a bad idea when it comes to adding more connectives.
Yes, you get terms out of the context, not just variables. For now, the only terms that end up in the context are variables and tensor projections. really means and , since the type is stored in .
Yeah, that’s tensor elim now. Kind of funky. It turns out this rule is redundant: you can also use .
There is a new scope for checking the body, which starts out containing only the bound variable. When done checking, there is generally more in it; currently, that would only be tensor projections. In any case, everything in the new scope must have been consumed.
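The lambda rule just described can be sketched like so. All names and the representation (scopes as sets of term strings, availability as a set) are assumptions for illustration; the point is that the body is checked in a fresh scope holding only the bound variable, and whatever accumulates in that scope must be fully consumed by the end.

```python
def check_lambda(scopes, avail, var, var_ty, body, check_body):
    """Hypothetical lambda rule: check `body` in a new scope containing
    only `var`, then require everything in that scope to be consumed."""
    scopes.append({var})          # new scope with just the bound variable
    avail = avail | {var}         # the bound variable is an available resource
    body_ty, avail = check_body(scopes, avail, body)
    new_scope = scopes.pop()      # by now it may also contain projections
    leftovers = new_scope & avail
    if leftovers:
        raise TypeError(f"unconsumed linear resources: {leftovers}")
    return ("-o", var_ty, body_ty), avail

# Demo: a body that consumes its variable type checks...
def use_var(scopes, avail, body):
    return "B", avail - {body}

ty, avail = check_lambda([set()], set(), "x", "A", "x", use_var)
assert ty == ("-o", "A", "B") and avail == set()
```

...while a body that ignores the bound variable is rejected, since the variable would remain unconsumed in the popped scope.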
That’s the nice application rule, for when there’s one result. Now for the nasty ones…
Whew. These rules add tensor projections of a function application to the context, the first time the application is encountered in a given scope. There are two rules because if the application doesn’t consume any resources, there’s generally an ambiguity between multiple instances of the application, so we add a “label” to disambiguate. is the label metavariable. This labeling idea is from Shulman. See there for a discussion of the problem.
In the rule mentioning , is the set of resources consumed by the application, that is, by both the function and the argument. The premise implies that is nonempty. Meanwhile, the other rule checks that the same computed usage is empty.
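The labeling trick for resource-free applications can be sketched as follows. This is a hypothetical illustration (names and the textual encoding of projections are assumptions): when an application consumes no resources, two occurrences of the same application are indistinguishable, so a fresh label is attached to its projections to tell instances apart.

```python
import itertools

_labels = itertools.count()  # fresh-label supply

def projection_terms(app, consumed):
    """Return the projection terms to add to the context for application
    `app`, given the set of resources it consumed."""
    if consumed:
        # The application consumed resources, so it is already unambiguous.
        return [f"fst({app})", f"snd({app})"]
    # Resource-free: attach a fresh label to disambiguate instances.
    l = next(_labels)
    return [f"fst({app})@{l}", f"snd({app})@{l}"]

a = projection_terms("f x", set())
b = projection_terms("f x", set())
assert a != b                     # distinct labels keep the instances apart
assert projection_terms("g y", {"y"}) == ["fst(g y)", "snd(g y)"]
```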
In both rules, is split to find the scope to which the projections are added. If the application uses resources, that scope is the innermost scope with resources that were used. Otherwise it’s the outermost scope.
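The scope-selection step just described can be sketched directly. A minimal illustration, with assumed names, representing each scope as the set of resource names it holds: the projections go into the innermost scope containing a consumed resource, or the outermost scope when nothing was consumed.

```python
def projection_scope(scopes, consumed):
    """scopes: list of sets of resource names, outermost first.
    consumed: set of resources used by the function application.
    Returns the index of the scope to receive the new projections."""
    for i in range(len(scopes) - 1, -1, -1):   # scan innermost first
        if scopes[i] & consumed:
            return i
    return 0  # application consumed nothing: outermost scope

scopes = [{"x"}, {"y"}, {"z"}]
assert projection_scope(scopes, {"y"}) == 1        # innermost used scope
assert projection_scope(scopes, set()) == 0        # resource-free: outermost
assert projection_scope(scopes, {"x", "z"}) == 2   # z is innermost among uses
```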
The projections cannot go in a scope outside one containing resources that were used, because then the corresponding match in ordinary MILL would have no way to bring the variables into scope there.
On the other hand, it can cause trouble to put the projections in a more deeply nested scope than necessary, since the lambda rule checks that everything in its scope was consumed. The projections only need to be consumed before one of their dependencies goes out of scope. For example, () means (), but would be rejected if we naively put into the scope in which it’s first encountered.
These algorithmic rules are loosely based on:
The idea to allow tensor projections in some sound way, and a lot of the notation, are from:
Last revised on December 17, 2019 at 01:46:21.