
This page is about the concept of polymorphism in computer science. For the concept in type theory see universe polymorphism. For the generalisation of morphisms introduced by Shinichi Mochizuki, see poly-morphism.

Polymorphism

Idea

In computer science, polymorphism refers to situations either where the same name is used to refer to more than one function, or where the same function is used at more than one type. One usually distinguishes these two kinds of polymorphism as ad hoc polymorphism and parametric polymorphism, respectively.

Ad hoc polymorphism

In ad hoc polymorphism, one simply defines multiple functions with the same name and different types, relying on the compiler (or, in some cases, the run-time system) to determine the correct function to call based on the types of its arguments and return value. This is also called overloading. For instance, using a mathematical notation, one might define functions

$$ add : \mathbb{N} \times \mathbb{N} \to \mathbb{N} $$
$$ add : \mathbb{R} \times \mathbb{R} \to \mathbb{R} $$

and then when $add(3,2)$ is invoked, the compiler knows to call the first function since $3$ and $2$ are natural numbers, whereas when $add(4.2,\pi)$ is invoked it calls the second function since $4.2$ and $\pi$ are real numbers.

Note that there is nothing which stipulates that the behavior of a class of ad hoc polymorphic functions with the same name should be at all similar. Nothing prevents us from defining $add : \mathbb{N} \times \mathbb{N} \to \mathbb{N}$ to add its arguments but $add : \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ to subtract its arguments. Of course, it is good programming practice to make overloaded functions similar in their behavior.
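
In Haskell, for instance, this kind of overloading can be expressed with a type class. The following is a minimal sketch, with Integer and Double standing in for $\mathbb{N}$ and $\mathbb{R}$:

```haskell
-- A minimal sketch of ad hoc polymorphism via a type class:
-- one name, add, with a separate definition at each type.
class Add a where
  add :: a -> a -> a

instance Add Integer where
  add x y = x + y

instance Add Double where
  add x y = x + y   -- nothing forces this to behave like the Integer instance

main :: IO ()
main = do
  print (add (3 :: Integer) 2)     -- resolved to the Integer instance
  print (add (4.2 :: Double) pi)   -- resolved to the Double instance
```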

In the example above, there might even be a coercion function $c : \mathbb{N} \to \mathbb{R}$, to be invoked whenever a natural number appears where the compiler expects a real number, giving a commutative diagram

$$ \array{ \mathbb{N} \times \mathbb{N} & \overset{add}{\to} & \mathbb{N} \\ \mathllap{c \times c} \downarrow & & \downarrow \mathrlap{c} \\ \mathbb{R} \times \mathbb{R} & \underset{add}{\to} & \mathbb{R} } $$
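
To make the diagram concrete: Haskell has no implicit coercions, but the explicit fromIntegral can play the role of $c$. A minimal sketch, again with Integer and Double standing in for $\mathbb{N}$ and $\mathbb{R}$, checks that both paths around the square agree on one input:

```haskell
-- A minimal sketch of the commuting square, with fromIntegral playing
-- the role of the coercion c : N -> R (Haskell never inserts such
-- coercions implicitly; here they are written out).
c :: Integer -> Double
c = fromIntegral

-- adding then coercing versus coercing then adding:
main :: IO ()
main = print (c (3 + 2) == c 3 + c 2)   -- prints True
```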

But things don't always work out this way.

Parametric polymorphism

In parametric polymorphism, one writes code to define a function once, which contains a “type variable” that can be instantiated at many different types to produce different functions. For instance, we can define a function

$$ first : A \times A \to A $$

where $A$ is a type variable (or parameter), by

$$ first(x,y) \coloneqq x . $$

Now the compiler automatically instantiates a copy of this function, with identical code, for any type at which it is called. Thus we can behave as if we had functions

$$ first : \mathbb{N} \times \mathbb{N} \to \mathbb{N} $$
$$ first : \mathbb{R} \times \mathbb{R} \to \mathbb{R} $$

and so on, for any types we wish. In contrast to ad hoc polymorphism, in this case we do have a guarantee that all these same-named functions are doing “the same thing”, because they are all instantiated by the same original polymorphic code.
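
In Haskell, for instance, such a definition is written once and then used at any type; a minimal sketch, with the pair type (a, a) standing in for $A \times A$:

```haskell
-- A minimal sketch of parametric polymorphism: one definition of first,
-- instantiated by the compiler at whatever type it is used.
first :: (a, a) -> a
first (x, _) = x

main :: IO ()
main = do
  print (first (3 :: Integer, 2))    -- instantiated at Integer
  print (first (4.2 :: Double, pi))  -- instantiated at Double
```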

In a dependently typed programming language with a type of types, such as Coq or Agda, a parametrically polymorphic family of functions can simply be considered to be a single dependently typed function whose first argument is a type. Thus our function above would be typed as

$$ first : \prod_{A : Type} A \times A \to A $$

However, parametric polymorphism makes sense and is very useful even in languages with less rich type systems, such as Haskell and Standard ML.
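
For comparison, in Haskell the type argument is ordinarily inferred, but the TypeApplications extension lets it be supplied explicitly, roughly mimicking the dependent-product reading above (in Agda or Coq the type argument would literally be a term of a universe); a rough sketch:

```haskell
{-# LANGUAGE TypeApplications #-}
-- A rough approximation of the dependent-product reading: the (normally
-- inferred) type argument of first is passed explicitly as  @Integer.
first :: (a, a) -> a
first (x, _) = x

main :: IO ()
main = print (first @Integer (3, 2))
```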

Theorems for Free

In type systems with parametric polymorphism, such as System F, every polymorphic type signature gives rise to a theorem, provable once and for all, stating an algebraic law satisfied by every function with that signature. For example, the type signature $A \times B \to A$ has the free theorem

$$ \alpha \circ f = f \circ (\alpha \times \beta) $$

for all functions $f$ with that signature and all functions $\alpha \colon A \to A'$ and $\beta \colon B \to B'$ (Wadler 89, figure 1).
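
As a concrete check, here is a minimal Haskell sketch of this free theorem for the evident function of that signature; the particular choices of alpha, beta and the helper names lhs, rhs are just for illustration:

```haskell
-- Free theorem for f :: (a, b) -> a:
-- alpha . f should equal f . bimap alpha beta for any alpha and beta.
import Data.Bifunctor (bimap)

f :: (a, b) -> a
f (x, _) = x

alpha :: Integer -> String
alpha = show

beta :: Bool -> Integer
beta b = if b then 1 else 0

lhs, rhs :: (Integer, Bool) -> String
lhs = alpha . f
rhs = f . bimap alpha beta

main :: IO ()
main = print (all (\p -> lhs p == rhs p) [(3, True), (0, False)])  -- True
```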

References

The distinction between ad hoc and parametric polymorphism is originally due to Christopher Strachey:

  • Christopher Strachey, Fundamental concepts in programming languages, Lecture Notes from August 1967 (International Summer School in Computer Programming, Copenhagen), later published in Higher-Order and Symbolic Computation, 13, 11-49, 2000. (pdf)

Other classic papers include:

  • John C. Reynolds, Towards a Theory of Type Structure, Lecture Notes in Computer Science 19, 408-425, 1974. (pdf)

  • Robin Milner, A Theory of Type Polymorphism in Programming, Journal of Computer and System Sciences 17, 348-375, 1978. (pdf)

  • John C. Reynolds, Types, Abstraction, and Parametric Polymorphism, Information Processing 83, 1983. (pdf)

Theorems for free were explained in:

  • Philip Wadler, Theorems for free!, in Proceedings of the fourth international conference on Functional programming languages and computer architecture (FPCA 1989). 1989. (pdf, doi:10.1145/99370.99404)

A recent paper extending Reynolds’ notion of relational parametricity to dependent types:

  • Robert Atkey, Neil Ghani and Patricia Johann, A Relationally Parametric Model of Dependent Type Theory, In Proceedings of the 41st ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL 2014). 2014. (web)
