The lambda calculus is:
a simple programming language;
a model of computation (akin to Turing machines and recursive functions), through which we can study the computability and complexity of functions and predicates; and
an internal language for cartesian closed categories (for more on this see at relation between type theory and category theory).
It comes in both typed and untyped versions.
The basic constructs of lambda calculus are lambda abstractions and applications. If $t$ is an expression of some sort involving a free variable $x$ (possibly vacuously), then the lambda abstraction
$$\lambda x.\, t$$
is intended to represent the function which takes one input and returns the result of substituting that input for $x$ in $t$. Thus, for instance, $\lambda x.\, x + 1$ is the function which adds one to its argument. Lambda expressions are often convenient, even outside lambda calculus proper, for referring to a function without giving it a name.
Application is how we “undo” abstraction, by applying a function to an argument. The application of the function $f$ to the argument $x$ is generally denoted simply by $f\,x$. Applications can be parenthesized, so for instance $f(x)$ and $(f)x$ and $(f)(x)$ all denote the same thing as $f\,x$.
Application is generally considered to associate to the left. Thus $f\,x\,y$ denotes the application of $f$ to $x$, followed by application of the result (assumed to again be a function) to $y$. This allows the representation of functions of multiple variables in terms of functions of one variable via “currying” (named for Haskell Curry, although it was invented by Moses Schönfinkel): after being applied to the first argument, we return a function which, applied to the next argument, returns a function which, when applied to the next argument, … , returns a value. For instance, the “addition” function of two variables can be denoted $\lambda x.\, \lambda y.\, x + y$: when applied to an argument $x$, it returns a function which, when applied to an argument $y$, returns $x + y$. This is so common that it is generally abbreviated $\lambda x\, y.\, x + y$.
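Currying is pervasive in functional programming languages. As a small illustration in Haskell (the names `add` and `addThree` are chosen only for this example):

```haskell
-- Curried addition: a function of one argument that returns another
-- function of one argument.
add :: Integer -> Integer -> Integer
add = \x -> \y -> x + y

-- Partial application: supplying only the first argument yields a function.
addThree :: Integer -> Integer
addThree = add 3

main :: IO ()
main = print (add 3 4, addThree 4)  -- prints (7,7)
```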
Evaluation or reduction is the process of “computing” the “value” of a lambda term. The most basic operation is called beta reduction and consists in taking a lambda abstraction at its word about what it is supposed to do when applied to an input. For instance, the application $(\lambda x.\, x + 1)\,3$ reduces to $3 + 1$ (and thereby, presuming appropriate rules for $+$, to $4$). In general, the beta reduction of a term $(\lambda x.\, t)\,u$ is defined as the capture-avoiding substitution of $u$ for $x$ in $t$,
$$(\lambda x.\, t)\,u \;\to_\beta\; t[u/x].$$
Terms which can be connected by a zigzag of beta reductions (in either direction) are said to be beta-equivalent.
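One possible concrete rendering of this, sketched in Haskell with names (`Term`, `free`, `fresh`, `subst`, `step`) chosen only for this example, implements untyped terms, capture-avoiding substitution, and a single leftmost-outermost beta-reduction step:

```haskell
import Data.List (delete, nub)

-- Untyped lambda terms over string-named variables.
data Term = Var String | Lam String Term | App Term Term
  deriving (Eq, Show)

-- Free variables of a term.
free :: Term -> [String]
free (Var x)   = [x]
free (Lam x t) = delete x (nub (free t))
free (App t u) = nub (free t ++ free u)

-- A variable name not occurring in the given list.
fresh :: [String] -> String
fresh used = head [v | n <- [0 :: Int ..], let v = "x" ++ show n, v `notElem` used]

-- Capture-avoiding substitution: subst x u t computes t[u/x].
subst :: String -> Term -> Term -> Term
subst x u (Var y)
  | x == y    = u
  | otherwise = Var y
subst x u (App t1 t2) = App (subst x u t1) (subst x u t2)
subst x u (Lam y t)
  | y == x          = Lam y t                  -- x is shadowed; nothing to do
  | y `elem` free u = Lam y' (subst x u t')    -- rename y to avoid capture
  | otherwise       = Lam y (subst x u t)
  where
    y' = fresh (free u ++ free t ++ [x])
    t' = subst y (Var y') t

-- One leftmost-outermost beta-reduction step, if any redex exists.
-- For example:  step (App (Lam "x" (Var "x")) (Var "y"))  ==  Just (Var "y")
step :: Term -> Maybe Term
step (App (Lam x t) u) = Just (subst x u t)
step (App t u)         = case step t of
                           Just t' -> Just (App t' u)
                           Nothing -> App t <$> step u
step (Lam x t)         = Lam x <$> step t
step (Var _)           = Nothing
```

This is only one of many possible reduction strategies; the calculus itself does not prescribe an order of evaluation.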
Another basic operation often assumed in the lambda calculus is eta reduction/expansion, which consists of identifying a function $f$ with the lambda abstraction $\lambda x.\, f\,x$ which does nothing other than apply $f$ to its argument. (It is called “reduction” or “expansion” depending on which “direction” it goes in, from $\lambda x.\, f\,x$ to $f$ or vice versa.)
A more basic operation than either of these, which is often not even mentioned at all, is alpha equivalence; this consists of the renaming of bound variables, e.g. identifying $\lambda x.\, x + 1$ with $\lambda y.\, y + 1$.
More complicated systems that build on the lambda calculus, such as various type theories, will often have other rules of evaluation as well.
In good situations, lambda calculus reduction is confluent (the Church-Rosser theorem), so that every term has at most one normal form, and two terms are equivalent precisely when they have the same normal form. A term is said to be normalizing if its reduction is also terminating. More precisely, a term $t$ is said to be weakly normalizing if there exists a finite sequence of reductions
$$t \to t_1 \to \cdots \to t_n$$
terminating in a normal form $t_n$, and strongly normalizing if all sequences of reductions starting from $t$ lead to a (by confluence, necessarily unique) normal form. In general, there exist terms which do not normalize by any reduction sequence, although simply typed lambda calculus and many other typed variants of lambda calculus are strongly normalizing.
In the untyped (or “pure”) lambda calculus, every term can be applied to any other term. As an example of the sort of freedom this allows, we can form the term $\omega = \lambda x.\, x\,x$ which applies its argument to itself, and then define
$$\Omega = \omega\,\omega$$
as the self-application of $\omega$. Performing beta-reduction on $\Omega$ yields
$$\Omega = (\lambda x.\, x\,x)\,\omega \;\to_\beta\; \omega\,\omega = \Omega,$$
thus giving the classic example of a non-terminating program.
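The term $\Omega$ also furnishes the standard example (recalled here for concreteness) separating the two notions of normalization above: the term
$$(\lambda x.\, y)\,\Omega$$
reduces in one step to the normal form $y$, but also admits an infinite reduction sequence that keeps contracting the redex inside $\Omega$, so it is weakly but not strongly normalizing.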
In pure untyped lambda calculus, we can define natural numbers using the Church numerals: the number $n$ is represented by the operation of $n$-fold iteration. Thus for instance we have $2 = \lambda f.\, \lambda x.\, f\,(f\,x)$, the function which takes a function $f$ as input and returns a function that applies $f$ twice. Similarly $1 = \lambda f.\, \lambda x.\, f\,x$ is the identity on functions, while $0 = \lambda f.\, \lambda x.\, x$ takes any function to the identity function (the 0th iterate of $f$). We can then construct (very inefficiently) all of arithmetic, and prove that the arithmetic functions expressible by lambda terms are exactly the same as those computable by Turing machines or (total) recursive functions.
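As a supplementary sketch in Haskell (the names `Church`, `suc`, `add`, and `toInt` are chosen for this example, and the polymorphic type merely annotates what are really untyped terms), the first few Church numerals and their arithmetic look like this:

```haskell
{-# LANGUAGE RankNTypes #-}

-- A Church numeral is the operation of n-fold iteration.
type Church = forall a. (a -> a) -> a -> a

zero, one, two :: Church
zero = \_ x -> x          -- the 0th iterate: takes f to the identity
one  = \f x -> f x        -- the identity on functions
two  = \f x -> f (f x)    -- applies its argument twice

-- Successor: iterate one extra time.
suc :: Church -> Church
suc n = \f x -> f (n f x)

-- Addition: iterate f "n times", then "m more times".
add :: Church -> Church -> Church
add m n = \f x -> m f (n f x)

-- Convert to an ordinary integer by iterating (+ 1) starting from 0.
toInt :: Church -> Integer
toInt n = n (+ 1) 0

main :: IO ()
main = print (toInt (add two (suc two)))  -- prints 5
```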
The most natural sort of model of pure lambda calculus is a reflexive object in a cartesian closed category, that is, an object $U$ together with a pair of maps
$$app \colon U \to U^U, \qquad lam \colon U^U \to U,$$
representing application and lambda abstraction, such that $app \circ lam = id_{U^U}$ (to validate the beta equation). Such data thereby witnesses the exponential object $U^U$ as a retract of $U$. Of course there are no nontrivial such models in sets, but they do exist in other categories, such as domains. It is worth remarking that a necessary condition on such a $U$ is that every term have a fixed-point; see fixed-point combinator.
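For concreteness, one standard witness of this fixed-point property (recalled here as a supplement) is Curry's combinator
$$Y = \lambda f.\, (\lambda x.\, f\,(x\,x))\,(\lambda x.\, f\,(x\,x)),$$
which satisfies $Y\,f =_\beta f\,(Y\,f)$ for every term $f$.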
From the point of view of type theory and in particular the “types à la Church” perspective, such a reflexive object can also be seen as the intrinsic type of every lambda term, and so the untyped lambda calculus is sometimes referred to (a bit cheekily) as really being “uni-typed”. On the other hand, from the “types à la Curry” perspective, it is also possible to begin with the pure lambda calculus as a foundation, and then try to ascribe individual terms more precise types. For example, the normalizing terms of pure lambda calculus can be characterized precisely as those which are typable in an intersection type system.
In simply typed lambda calculus, each variable and term has a type, and we can only form the application $f\,x$ if $x$ is of some type $A$ while $f$ is of a function type $A \to B$ whose domain is $A$; the type of $f\,x$ is then $B$. Similarly, if $x$ is a variable of type $A$ and $t$ is a term of type $B$ involving $x$, then $\lambda x.\, t$ has type $A \to B$.
Without some further type and term constructors, there is not much that can be done, but if we add a natural numbers object (that is, a type $N$ with constants $0$ of type $N$ and $s$ of type $N \to N$, along with a “definition-by-recursion” operator), then we can express many recursive functions. (We cannot by this means express all computable functions, although we can go beyond primitive recursive functions; for instance we can define the Ackermann function, as sketched below. One way to increase the expressiveness to all partial recursive functions is to add a fixpoint combinator, or an unbounded search operator).
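As an illustrative sketch of this point in Haskell (the names `iter` and `ack` are chosen for this example, and the built-in `Integer` with its pattern matching merely simulates the natural numbers type and its definition-by-recursion operator), the Ackermann function can be defined using only iteration at higher type, with no other recursion in `ack` itself:

```haskell
-- Iterator for the natural numbers type: iter z s n = s^n z  (assumes n >= 0).
-- This plays the role of the "definition-by-recursion" operator assumed above.
iter :: a -> (a -> a) -> Integer -> a
iter z _ 0 = z
iter z s n = s (iter z s (n - 1))

-- Ackermann defined purely by iteration, where the outer iteration produces
-- functions (Integer -> Integer) rather than numbers:
--   ack 0       n = n + 1
--   ack (m + 1) n = (ack m) applied (n + 1) times to 1
ack :: Integer -> Integer -> Integer
ack = iter (+ 1) (\g n -> iter 1 g (n + 1))

main :: IO ()
main = print (ack 2 3)  -- A(2,3) = 9
```

The use of a higher result type for the iteration is exactly what takes this beyond the primitive recursive functions.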
Simply typed lambda calculus is the natural internal language of cartesian closed categories. This means that
Every cartesian closed category gives rise to a simply typed lambda calculus whose basic types are its objects, and whose basic terms are its morphisms, while
Every simply typed lambda calculus “generates” a cartesian closed category whose objects are its types and whose morphisms are its equivalence classes of terms.
These two operations are adjoint in an appropriate sense.
Most functional programming languages, such as Lisp, ML, and Haskell, are at least loosely based on lambda calculus.
Textbook accounts:
Introduction to simply typed λ-calculus:
The idea that untyped lambda calculus can be modeled internally to a cartesian closed category as a reflexive object (an object $U$ such that the exponential $U^U$ is a retract of $U$) was formulated explicitly by Dana Scott in
This followed his earlier work constructing an explicit such model in the category of domains:
Dana S. Scott, Outline of a mathematical theory of computation, in: Proceedings of the Fourth Annual Princeton Conference on Information Sciences and Systems (1970) 169–176.
Dana Scott, Data types as lattices, SIAM Journal on Computing 5:3 (1976) 522–587 [doi:10.1137/0205037].
Another, more recent take on pure lambda calculus as a certain kind of algebraic theory (called a “lambda theory”) can be found in