In computer science, polymorphism refers to situations either where the same name is used to refer to more than one function, or where the same function is used at more than one type. One usually distinguishes these two kinds of polymorphism as ad hoc polymorphism and parametric polymorphism, respectively.
In ad hoc polymorphism, one simply defines multiple functions with the same name and different types, relying on the compiler (or, in some cases, the run-time system) to determine the correct function to call based on the types of its arguments and return value. This is also called overloading. For instance, using a mathematical notation, one might define functions
$$ add \colon \mathbb{N} \times \mathbb{N} \to \mathbb{N} \qquad\text{and}\qquad add \colon \mathbb{R} \times \mathbb{R} \to \mathbb{R} $$
and then when $add(3,2)$ is invoked, the compiler knows to call the first function since $3$ and $2$ are natural numbers, whereas when $add(4.2,\pi)$ is invoked it calls the second function since $4.2$ and $\pi$ are real numbers.
Note that there is nothing which stipulates that the behaviors of a class of ad hoc polymorphic functions with the same name should be at all similar. Nothing prevents us from defining $add : \mathbb{N} \times \mathbb{N} \to \mathbb{N}$ to add its arguments but $add : \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ to subtract its arguments. Of course, it is good programming practice to make overloaded functions similar in their behavior.
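In a language with type classes, ad hoc polymorphism can be sketched along the following lines. The class name `MyAdd` and its instances are illustrative, not taken from the text above; note how nothing forces the two instances to behave similarly:

```haskell
-- Ad hoc polymorphism via a Haskell type class (an illustrative sketch;
-- MyAdd is not a standard library class).
class MyAdd a where
  add :: a -> a -> a

-- One instance adds its arguments ...
instance MyAdd Integer where
  add x y = x + y

-- ... and nothing stops another instance from subtracting instead.
instance MyAdd Double where
  add x y = x - y

main :: IO ()
main = do
  print (add (3 :: Integer) 2)     -- dispatches on Integer: prints 5
  print (add (4.5 :: Double) 0.5)  -- dispatches on Double (subtracts): prints 4.0
```

Type classes are sometimes described as a principled form of overloading: the class declaration at least fixes a common type scheme for all the same-named functions, even though it cannot constrain their behavior.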
In the example above, there might even be a coercion function $c : \mathbb{N} \to \mathbb{R}$, to be invoked whenever a natural number appears where the compiler expects a real number, giving a commutative diagram
$$\begin{array}{ccc}
\mathbb{N} \times \mathbb{N} & \overset{add}{\longrightarrow} & \mathbb{N} \\
{}^{c \times c}\big\downarrow & & \big\downarrow{}^{c} \\
\mathbb{R} \times \mathbb{R} & \underset{add}{\longrightarrow} & \mathbb{R}
\end{array}$$
But things don't always work out this way.
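One way the square can fail to commute in practice is floating-point precision loss. A small sketch, taking $c$ to be Haskell's `fromIntegral` from arbitrary-precision integers to `Double` (an illustrative choice of coercion):

```haskell
-- Checking whether the coercion square commutes:
-- does c (add m n) equal add (c m) (c n)?
-- Here c = fromIntegral, an illustrative choice of coercion.
c :: Integer -> Double
c = fromIntegral

commutes :: Integer -> Integer -> Bool
commutes m n = c (m + n) == c m + c n

main :: IO ()
main = do
  print (commutes 3 2)                        -- True: small values coerce exactly
  print (commutes (2 ^ 53 + 1) (2 ^ 53 + 1))  -- False: 2^53 + 1 is not exactly
                                              -- representable as a Double
```

In the failing case, coercing first rounds each argument down to $2^{53}$, so the two paths around the square give $2^{54} + 2$ and $2^{54}$ respectively.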
In parametric polymorphism, one writes code to define a function once, which contains a “type variable” that can be instantiated at many different types to produce different functions. For instance, we can define a function
$$ id \colon A \to A $$
where $A$ is a type variable (or parameter), by
$$ id(a) = a . $$
Now the compiler automatically instantiates a copy of this function, with identical code, for any type at which it is called. Thus we can behave as if we had functions
$$ id_{\mathbb{N}} \colon \mathbb{N} \to \mathbb{N}, \qquad id_{\mathbb{R}} \colon \mathbb{R} \to \mathbb{R}, $$
and so on, for any types we wish. In contrast to ad hoc polymorphism, in this case we do have a guarantee that all these same-named functions are doing “the same thing”, because they are all instantiated by the same original polymorphic code.
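As a concrete sketch in Haskell, a polymorphic definition and a few of its instantiations might look as follows; the function `dup` is an illustrative example, not one fixed by the text above:

```haskell
-- Parametric polymorphism: one definition, instantiated at many types.
-- dup is an illustrative example function.
dup :: a -> (a, a)
dup x = (x, x)

main :: IO ()
main = do
  print (dup (3 :: Integer))  -- instantiated at Integer
  print (dup "hello")         -- instantiated at String
  print (dup True)            -- instantiated at Bool
```

Every instantiation runs the same code, which is exactly the guarantee that ad hoc polymorphism lacks.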
In a dependently typed programming language with a type of types, such as Coq or Agda, a parametrically polymorphic family of functions can simply be considered to be a single dependently typed function whose first argument is a type. Thus our function above would be typed as
$$ id \colon \prod_{A \colon Type} (A \to A) . $$
However, parametric polymorphism makes sense and is very useful even in languages with less rich type systems, such as Haskell and ML.
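The dependently typed reading can be imitated in Haskell, where an explicit `forall` plays the role of the type argument and visible type application supplies it by hand; this is only an approximation of a genuine dependent function type, since the type argument exists only at compile time:

```haskell
{-# LANGUAGE TypeApplications #-}
-- The type variable as an explicit (compile-time) first argument:
-- dup @Integer picks the instantiation by hand, approximating a
-- dependently typed function whose first argument is a type.
-- dup is an illustrative example function.
dup :: forall a. a -> (a, a)
dup x = (x, x)

main :: IO ()
main = do
  print (dup @Integer 3)  -- type argument supplied explicitly
  print (dup @Bool True)
```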
The distinction between ad hoc and parametric polymorphism is originally due to Christopher Strachey:

Christopher Strachey, Fundamental Concepts in Programming Languages, lecture notes, International Summer School in Computer Programming, Copenhagen, 1967; reprinted in Higher-Order and Symbolic Computation 13 (2000) 11–49.
Other classic papers include:
John C. Reynolds, Towards a Theory of Type Structure, Lecture Notes in Computer Science 19, 408–425, 1974.
Robin Milner, A Theory of Type Polymorphism in Programming, Journal of Computer and System Sciences 17, 348–375, 1978.
John C. Reynolds, Types, Abstraction, and Parametric Polymorphism, Information Processing 83, 1983.
More recent work extends Reynolds' notion of relational parametricity to dependent types.
Last revised on September 25, 2018 at 09:22:19.