A sampling map is an assignment that takes a probability distribution $p$ on a space $X$, and returns a random sample from $X$ distributed according to $p$.
It is a mathematical abstraction of the idea of flipping a coin or rolling a die (either fair, or with a known bias): the outcome is random, but we know exactly which distribution it follows.
Let $X$ be a measurable space. Denote by $P X$ the set of probability measures over $X$. The sampling map is the Markov kernel $\mathrm{samp} \colon P X \to X$ defined for every $p \in P X$ and every measurable subset $A \subseteq X$ by
$$
\mathrm{samp}(A|p) = p(A) .
$$
We can interpret this kernel as “taking a sample from the distribution $p$”.
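The kernel reading can be sketched concretely. The following is a minimal illustration (not from the article), encoding a distribution on a finite space as a dict of probabilities; the names `samp` and `samp_kernel` are ours.

```python
import random

# Minimal sketch (illustrative encoding): a distribution on a finite
# space is a dict mapping outcomes to their probabilities.
def samp(p):
    """The sampling map: draw one outcome distributed according to p."""
    outcomes = list(p)
    return random.choices(outcomes, weights=[p[x] for x in outcomes])[0]

# Viewed as a Markov kernel, samp(A|p) = p(A): the probability that the
# sample lands in a set A is exactly the mass p assigns to A.
def samp_kernel(A, p):
    return sum(prob for x, prob in p.items() if x in A)

die = {i: 1 / 6 for i in range(1, 7)}
roll = samp(die)  # a uniformly random face of the die
```

Note that `samp` itself is random, while `samp_kernel` merely reads off probabilities: the two describe the same kernel from the sampling and the measure-theoretic side, respectively.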
Given a number $p \in [0,1]$, a Bernoulli random variable with bias $p$, denoted by $\mathrm{Bern}(p)$, is a random variable which is $1$ with probability $p$, and $0$ with probability $1-p$. (Intuitively, we are “flipping a coin with bias $p$”.)
We can view the set $[0,1]$ as parametrizing all the possible probability distributions on $\{0,1\}$, i.e. $P\{0,1\} \cong [0,1]$. Therefore, the assignment $p \mapsto \mathrm{Bern}(p)$ induces a Markov kernel $[0,1] \to \{0,1\}$, which is exactly the sampling map given above.
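A sketch of this example in code, assuming the standard uniform generator `random.random`:

```python
import random

# Distributions on {0, 1} are parametrized by the bias p in [0, 1];
# sampling from them is the map [0, 1] -> {0, 1} described above.
def bernoulli(p):
    """Flip a coin with bias p: 1 with probability p, 0 with probability 1 - p."""
    return 1 if random.random() < p else 0

flips = [bernoulli(0.3) for _ in range(10_000)]
mean = sum(flips) / len(flips)  # the empirical mean approaches the bias 0.3
```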
The sampling map is the counit of the adjunction induced by the Giry monad between the category of Markov kernels and the category of measurable spaces.
As the counit of an adjunction, it satisfies the following universal property: every Markov kernel $f \colon X \to Y$ can be written as a composition
$$
X \xrightarrow{f^\flat} P Y \xrightarrow{\mathrm{samp}} Y ,
$$
where the map $f^\flat$ is deterministic (it is the kernel induced by a measurable function), and the map $\mathrm{samp}$ is the sampling map.
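The factorization can be illustrated with a small sketch (a hypothetical example of ours, not from the article): a noisy channel on bits splits into a deterministic map into distributions, followed by a single application of the sampling map.

```python
import random

def samp(p):
    """Sampling map on finite distributions encoded as {outcome: probability}."""
    xs = list(p)
    return random.choices(xs, weights=[p[x] for x in xs])[0]

# A noisy channel on bits that flips its input with probability 0.1,
# written as (deterministic map into distributions) followed by samp.
def flip_distribution(x):
    return {x: 0.9, 1 - x: 0.1}  # deterministic: a bit goes to a distribution

def noisy_channel(x):
    return samp(flip_distribution(x))  # all the randomness sits in samp
```

`flip_distribution` is an ordinary (deterministic) function; the only random step in `noisy_channel` is the final call to `samp`, mirroring the universal property.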
A probabilistic interpretation of this universal property is that sampling generates all the randomness. In other words, a process is random if and only if, at some point, it involves sampling, for example flipping a coin, or something analogous.
This is particularly important in probabilistic programming (see below).
(For now, see representable Markov category.)
As the counit of a Kleisli adjunction, the sampling map can be interpreted as the forcing map of a thunk-force category.
One can view a probability distribution as a thunk, from which one can sample (or force) at a later time (for example, in lazy programming languages).
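A sketch of the thunk-force reading, with a distribution modeled as a zero-argument function (the names here are illustrative assumptions):

```python
import random

# Thunk-force reading: a distribution is a suspended computation
# (a zero-argument function); sampling forces it.
def uniform_thunk():
    """Describes 'a uniform number in [0, 1)' without yet producing one."""
    return random.random()

def force(thunk):
    """Forcing runs the suspended computation, producing an actual sample."""
    return thunk()

suspended = uniform_thunk  # no randomness has happened yet
x = force(suspended)       # now a concrete sample in [0, 1)
```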
Part of the correspondence between the Markov category formalism and the thunk-force formalism is studied in Moss-Perrone’22.
In probabilistic programming, the sampling map is the function, usually denoted by samp, sample, or sometimes random, which returns a (pseudo)random sample from a specified distribution. (Often, the uniform distribution on the unit interval, or an approximation thereof.)
The sampling map, by its universal property, can be seen as “generating” the randomness of probabilistic computations. For this reason, categorical probability can be interpreted as a form of probabilistic programming semantics. (Another central aspect of categorical probability and of probabilistic programming, besides sampling, is conditioning.)
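As a sketch of how one primitive sampler can generate other distributions, here is inverse-CDF sampling of an exponential distribution from a uniform sample (an illustration of a common implementation pattern, not a claim about any particular language):

```python
import math
import random

# The primitive sampler draws uniformly from the unit interval; other
# distributions are deterministic transformations of its output.
def samp_uniform():
    return random.random()

def samp_exponential(rate):
    """Exponential(rate) sample via the inverse CDF applied to a uniform one."""
    u = samp_uniform()
    return -math.log(1.0 - u) / rate
```

Every random choice in a program built this way traces back to the primitive sampler, echoing the universal property: sampling generates all the randomness.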
Tobias Fritz, Tomáš Gonda, Paolo Perrone, Eigil Fjeldgren Rischel, Representable Markov categories and comparison of statistical experiments in categorical probability, Theoretical Computer Science 961, 2023. (arXiv:2010.07416)
Sean Moss, Paolo Perrone, Probability monads with submonads of deterministic states, LICS 2022. (arXiv:2204.07003)
Last revised on September 28, 2024 at 10:18:55. See the history of this page for a list of all contributions to it.