nLab Markov kernel


Idea

A Markov kernel (also called transition kernel, stochastic kernel, or probability kernel) is a mathematical formalization of a “function with random outcomes”, or “random transition”. (Sometimes the term transition kernel denotes a more general, non-normalized mapping.)

It can be thought of as a generalization of a stochastic map outside the finite discrete case. (Sometimes the term “stochastic map” is itself used to denote a Markov kernel.)

Markov kernels, in the form of regular conditional distributions (see below), are also the standard setting for talking about conditional probability outside the discrete case (and without running into paradoxes).

Markov kernels and their categories are among the basic building blocks of categorical probability.

Definition

Let (X,\mathcal{A}) and (Y,\mathcal{B}) be measurable spaces, i.e. sets each equipped with a sigma-algebra. A Markov kernel k:(X,\mathcal{A})\to(Y,\mathcal{B}) is an assignment (x,B)\mapsto k(B|x) which is

  • a probability measure k(-|x) on (Y,\mathcal{B}) for each fixed x\in X, and

  • a measurable function x\mapsto k(B|x) on (X,\mathcal{A}) for each fixed B\in\mathcal{B}.

The quantity k(B|x) can be thought of as the probability of transitioning to B if we are in x, or as the conditional probability of the event B given x.
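
In the finite discrete case both conditions are automatic and a Markov kernel is just a row-stochastic matrix. The following is a minimal Python sketch of that case; the sets, numbers, and the helper kernel_prob are illustrative, not from the article.

```python
# Minimal sketch of a Markov kernel between finite sets: a row-stochastic matrix
# k[i, j] = k({Y[j]} | X[i]).  All names and numbers here are illustrative.
import numpy as np

X = ["sunny", "rainy"]          # source measurable space (discrete sigma-algebra)
Y = ["walk", "shop", "stay"]    # target measurable space (discrete sigma-algebra)

k = np.array([
    [0.6, 0.3, 0.1],            # k(-|sunny): a probability measure on Y
    [0.1, 0.4, 0.5],            # k(-|rainy): a probability measure on Y
])
assert np.allclose(k.sum(axis=1), 1.0)   # each k(-|x) is normalized

def kernel_prob(B, x):
    """k(B|x): probability of landing in the event B (a subset of Y) starting from x."""
    return sum(k[X.index(x), Y.index(y)] for y in B)

print(kernel_prob({"walk", "shop"}, "sunny"))   # 0.9
```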

Basic structures and properties

As Kleisli morphisms

A Markov kernel k:(X,\mathcal{A})\to(Y,\mathcal{B}) can be equivalently expressed as a measurable map (X,\mathcal{A})\to P(Y,\mathcal{B}), where P denotes the Giry monad on Meas, with the Giry sigma-algebra. In other words, a Markov kernel is exactly a Kleisli morphism for the Giry monad. Particular Markov kernels are also Kleisli morphisms for other probability monads.
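
As an illustration (not part of the nLab text): in the finite case the Kleisli point of view says a kernel is a map x \mapsto k(-|x) into distributions, and Kleisli composition is the Chapman-Kolmogorov formula, i.e. multiplication of row-stochastic matrices. The names below are made up for the sketch.

```python
# Sketch: a finite Markov kernel as a Kleisli map x |-> distribution, and Kleisli
# composition as multiplication of row-stochastic matrices.  Names are illustrative.
import numpy as np

k = np.array([[0.6, 0.4],            # kernel X -> Y
              [0.2, 0.8]])
h = np.array([[1.0, 0.0, 0.0],       # kernel Y -> Z
              [0.3, 0.3, 0.4]])

def as_kleisli(matrix):
    """View the kernel as a map sending each point to a probability distribution."""
    return lambda x: matrix[x]

composite = k @ h                    # (h o k)(C|x) = sum_y h(C|y) k({y}|x)
assert np.allclose(composite.sum(axis=1), 1.0)
print(as_kleisli(composite)(0))      # distribution on Z reached from x = 0
```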

Almost sure equality

Consider two kernels k,h:(X,\mathcal{A})\to(Y,\mathcal{B}) and a measure p on (X,\mathcal{A}). The kernels k and h are said to be p-almost surely equal or p-almost everywhere equal if for every B\in\mathcal{B}, the quantities k(B|x) and h(B|x) differ only on a set of p-measure zero. Note that in general this set depends on the choice of B, but in some cases, such as when (Y,\mathcal{B}) is standard Borel, it can be taken independently of B.

Measure-preserving kernels

Let (X,\mathcal{A},p) and (Y,\mathcal{B},q) be probability spaces, i.e. measurable spaces each equipped with a probability measure. A measure-preserving kernel (X,\mathcal{A},p)\to(Y,\mathcal{B},q) is a kernel between the underlying measurable spaces, k:(X,\mathcal{A})\to(Y,\mathcal{B}), such that for every B\in\mathcal{B},

\int_X k(B|x)\,p(d x) = q(B) .
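
In the finite case the integral above becomes a sum, so measure preservation says that the pushforward of p along k equals q. A hedged numerical sketch (the specific numbers and names are illustrative):

```python
# Sketch of a measure-preserving kernel in the finite case: the pushforward p @ k
# must equal q.  Names and numbers are illustrative.
import numpy as np

p = np.array([0.5, 0.5])              # probability measure on X = {0, 1}
k = np.array([[0.6, 0.3, 0.1],
              [0.1, 0.4, 0.5]])       # Markov kernel X -> Y, rows sum to 1

q = p @ k                             # q(B) = sum_x k(B|x) p(x), the integral above
print(q)                              # [0.35 0.35 0.3]
assert np.isclose(q.sum(), 1.0)       # q is again a probability measure
```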

Kernels from deterministic functions

The identity function on a measurable space (X,\mathcal{A}) defines a kernel as follows,

\delta(A|x) = \delta_x(A) = 1_A(x) = \begin{cases} 1 & x\in A ; \\ 0 & x\notin A \end{cases}

for every x\in X and A\in\mathcal{A}. This gives the identity morphisms in the categories of kernels below.

More generally, let f:(X,\mathcal{A})\to (Y,\mathcal{B}) be a measurable function. We can define the kernel \delta_f:(X,\mathcal{A})\to (Y,\mathcal{B}) as follows,

\delta_f(B|x) = \delta_{f(x)}(B) = 1_{f^{-1}(B)}(x) = \begin{cases} 1 & f(x)\in B ; \\ 0 & f(x)\notin B \end{cases}

for every x\in X and B\in\mathcal{B}. Intuitively, this kernel represents the deterministic transition from x to f(x) with probability one. This construction induces a functor from the category Meas to the categories of kernels below. (Compare with the analogous construction for stochastic maps.)
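
In the finite case \delta_f is the 0/1 row-stochastic matrix whose x-th row is the point mass at f(x). A short sketch with illustrative names:

```python
# Sketch: the deterministic kernel delta_f induced by a function f, in the finite
# case, is a 0/1 matrix with delta_f({y}|x) = 1 iff f(x) = y.  Names are illustrative.
import numpy as np

X = [0, 1, 2]
Y = ["a", "b"]
f = {0: "a", 1: "b", 2: "a"}           # a (measurable) function X -> Y

delta_f = np.zeros((len(X), len(Y)))
for i, x in enumerate(X):
    delta_f[i, Y.index(f[x])] = 1.0    # point mass at f(x)

print(delta_f)
# [[1. 0.]
#  [0. 1.]
#  [1. 0.]]
```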

Every kernel of the form \delta_f is a zero-one kernel, but the converse is not true in general.

Note that if f:(X,\mathcal{A},p)\to(Y,\mathcal{B},q) is a measure-preserving function, then \delta_f is a measure-preserving kernel in the sense defined above.

Bayesian inversion

Given a measure-preserving Markov kernel k:(X,\mathcal{A},p)\to(Y,\mathcal{B},q), a Bayesian inverse of k is a Markov kernel k^\dagger:(Y,\mathcal{B},q)\to(X,\mathcal{A},p) such that for all A\in\mathcal{A} and B\in\mathcal{B},

\int_A k(B|x)\,p(d x) = \int_B k^\dagger(A|y)\,q(d y) .

The last formula can be seen as an instance of Bayes' formula, as it generalizes the usual discrete formula

P(b|a)\,P(a) = P(a|b)\,P(b) .
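
In the finite case a Bayesian inverse can be computed directly from Bayes' formula, k^\dagger({x}|y) = k({y}|x)\,p({x})/q({y}) wherever q({y}) > 0. A hedged sketch with illustrative names and numbers, checking the defining equation above:

```python
# Sketch of discrete Bayesian inversion: k_dagger(x|y) = k(y|x) p(x) / q(y),
# with q the pushforward of p along k.  Names and numbers are illustrative.
import numpy as np

p = np.array([0.7, 0.3])                    # prior on X
k = np.array([[0.9, 0.1],                   # kernel X -> Y, rows sum to 1
              [0.2, 0.8]])

q = p @ k                                   # marginal on Y (here q > 0 everywhere)
k_dagger = (k * p[:, None]).T / q[:, None]  # k_dagger[y, x] = k[x, y] p[x] / q[y]

assert np.allclose(k_dagger.sum(axis=1), 1.0)           # rows are probability measures
# Defining equation: the joint measure agrees whichever way it is disintegrated.
assert np.allclose(p[:, None] * k, (q[:, None] * k_dagger).T)
```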

We have that

  • Every measure-preserving kernel between standard Borel probability spaces admits a Bayesian inverse;

  • If a kernel k:(X,\mathcal{A},p)\to(Y,\mathcal{B},q) admits a Bayesian inverse k^\dagger:(Y,\mathcal{B},q)\to(X,\mathcal{A},p), then any kernel h:(Y,\mathcal{B},q)\to(X,\mathcal{A},p) is a Bayesian inverse of k if and only if it is q-almost surely equal to k^\dagger.

  • Bayesian inverses, when they exist, depend crucially on the measure p.

  • Bayesian inversion makes the category Krn (see below) a dagger category.

Regular conditional distributions

A Markov kernel can be seen as a probability measure which depends measurably on a parameter. Therefore, together with the idea of conditional expectation, it is a very useful tool for talking about conditional probability without running into paradoxes. Let's see how.

Let (X,\mathcal{A},p) be a probability space, and consider a sub-sigma-algebra \mathcal{B}\subseteq\mathcal{A}. Recall that

  • the conditional expectation of an integrable function f:(X,\mathcal{A},p)\to\mathbb{R} given \mathcal{B} is a function \mathbb{E}[f|\mathcal{B}]:X\to\mathbb{R} which is \mathcal{B}-measurable (“coarser” than f), and such that for every B\in\mathcal{B},

    \int_B f \, d p = \int_B \mathbb{E}[f|\mathcal{B}] \, d p .

    (Such a function always exists by the Radon-Nikodym theorem.)

  • the conditional probability of an event (measurable subset) A\in\mathcal{A} given \mathcal{B} is the conditional expectation of its indicator function:

    \mathbb{P}[A|\mathcal{B}] = \mathbb{E}[1_A|\mathcal{B}] .

Note that \mathbb{P}[A|\mathcal{B}] is a function,

x\mapsto \mathbb{P}[A|\mathcal{B}](x)

which is \mathcal{B}-measurable for every A\in\mathcal{A}. Therefore, in order for this assignment to be a Markov kernel (X,\mathcal{B})\to(X,\mathcal{A}) (see the definition above), we moreover need that for every x\in X the assignment

A \mapsto \mathbb{P}[A|\mathcal{B}](x)

be a probability measure on (X,\mathcal{A}). This is not always the case. When this happens, we call the resulting kernel (X,\mathcal{B})\to(X,\mathcal{A}) a regular conditional distribution given \mathcal{B}. We can view a regular conditional distribution as a probability measure on (X,\mathcal{A}) which is parametrized in a measurable way by (X,\mathcal{B}).

The disintegration theorem says that when (X,\mathcal{A}) is standard Borel, regular conditional distributions of the form above always exist.
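
When \mathcal{B} is generated by a finite partition of X, the regular conditional distribution always exists: \mathbb{E}[f|\mathcal{B}] averages f over each block, and \mathbb{P}[A|\mathcal{B}](x) renormalizes p on the block containing x. A hedged sketch with illustrative names and numbers:

```python
# Sketch of conditional expectation and regular conditional distribution with
# respect to a sub-sigma-algebra generated by a finite partition of {0,...,5}.
import numpy as np

p = np.full(6, 1 / 6)                      # uniform probability measure on {0,...,5}
blocks = [[0, 1, 2], [3, 4, 5]]            # partition generating the sub-sigma-algebra

f = np.array([1.0, 2.0, 3.0, 10.0, 10.0, 10.0])

cond_exp = np.empty(6)
for block in blocks:                       # E[f|B] is constant on each block
    cond_exp[block] = (f[block] * p[block]).sum() / p[block].sum()
print(cond_exp)                            # [ 2.  2.  2. 10. 10. 10.]

def cond_prob(A, x):
    """P[A|B](x): the regular conditional distribution of the event A at the point x."""
    block = next(b for b in blocks if x in b)
    hits = [a for a in A if a in block]
    return p[hits].sum() / p[block].sum()

print(cond_prob({0, 3}, 1))                # 1/3: only the point 0 lies in x's block
```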

Regular conditionals from a joint distribution

Using the disintegration theorem we can obtain conditional probability distributions in a way that generalizes the discrete formula

p(b|a) = \frac{p(a,b)}{p(a)}

to the non-discrete setting.

Given standard Borel spaces (X,\mathcal{A}) and (Y,\mathcal{B}), their product (X\times Y,\mathcal{A}\otimes\mathcal{B}) is again standard Borel. The projection map \pi_1:(X\times Y,\mathcal{A}\otimes\mathcal{B})\to (X,\mathcal{A}) induces a sub-sigma-algebra \pi_1^{-1}(\mathcal{A}) on X\times Y (which makes it isomorphic to (X,\mathcal{A}) in both Stoch and Krn, see Section 2.1 of this paper).

Given a joint distribution r on (X\times Y,\mathcal{A}\otimes\mathcal{B}), we can then form the regular conditional (X\times Y,\pi_1^{-1}(\mathcal{A}))\to(X\times Y,\mathcal{A}\otimes\mathcal{B}) (or equivalently, up to isomorphism, (X,\mathcal{A})\to(X\times Y,\mathcal{A}\otimes\mathcal{B})). Further composing with the projection \pi_2:(X\times Y,\mathcal{A}\otimes\mathcal{B})\to(Y,\mathcal{B}) (or equivalently, restricting the kernel to the sub-sigma-algebra \pi_2^{-1}(\mathcal{B})), we then get a kernel sometimes denoted simply r(B|x) or r'(B|x).

This kernel satisfies the property that, denoting by p the marginal of r on (X,\mathcal{A}), for every A\in\mathcal{A} and B\in\mathcal{B},

\int_A r'(B|x) \, p(d x) = r(A\times B) ,

and is therefore a generalization of conditional probability to the non-discrete setting: compare it to the formula

P(b|a)\,P(a) = P(a,b) .
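
In the finite case this disintegration is simply row normalization of the joint distribution by its X-marginal, defined for p-almost every x (i.e. wherever p(x) > 0). A hedged sketch with illustrative numbers:

```python
# Sketch: discrete regular conditional from a joint distribution by normalizing
# each row of r by the marginal p(x) = r({x} x Y).  Numbers are illustrative.
import numpy as np

r = np.array([[0.10, 0.30],             # joint distribution r on X x Y
              [0.40, 0.20]])

p = r.sum(axis=1)                       # marginal on X: p(x) = r({x} x Y)
r_cond = r / p[:, None]                 # r'({y}|x) = r(x, y) / p(x)

assert np.allclose(r_cond.sum(axis=1), 1.0)
# Defining property: integrating r'(B|x) over A against p recovers r(A x B).
assert np.allclose(p[:, None] * r_cond, r)
```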

Categories of Markov kernels

Measurable spaces and Markov kernels form a category, usually denoted Stoch, with composition given by integration (the Chapman-Kolmogorov formula) and with the kernels \delta above as identities. Probability spaces and measure-preserving kernels, taken up to almost sure equality, similarly form the category Krn referred to above.

References

Discussion via category theory:

  • F. W. Lawvere, The Category of Probabilistic Mappings, 1962, PDF.

  • Fredrik Dahlqvist, Vincent Danos, Ilias Garnier, and Alexandra Silva, Borel kernels and their approximation, categorically. In MFPS 34: Proceedings of the Thirty-Fourth Conference on the Mathematical Foundations of Programming Semantics, volume 341, 91–119, 2018. arXiv.

  • Tobias Fritz, A synthetic approach to Markov kernels, conditional independence and theorems on sufficient statistics. Adv. Math., 370:107239, 2020. arXiv:1908.07021.

  • Noé Ensarguet, Paolo Perrone, Categorical probability spaces, ergodic decompositions, and transitions to equilibrium. arXiv.

category: probability
