stochastic dependence and independence


Idea

One of the main purposes of probability theory, and of related fields such as statistics and information theory, is to make predictions in situations of uncertainty.

Suppose that we are interested in a quantity $X$ whose value we don't know exactly (for example, a random variable), and which we cannot observe directly. Suppose that we have another quantity $Y$, which we also don't know exactly, but which we can observe (for example, through an experiment). We might now wonder: can observing $Y$ give us information about $X$, and reduce its uncertainty? Viewing the unknown quantities $X$ and $Y$ as carriers of hidden information, one might ask how much of this hidden information is shared between $X$ and $Y$, so that observing $Y$ uncovers information about $X$ as well.

This form of dependence between $X$ and $Y$ is called stochastic dependence, and is one of the most important concepts both in probability theory and, due to its conceptual nature, in most categorical approaches to probability theory.

Intuition

Dependence and independence

(…)

Conditional dependence and independence

(…)

In measure-theoretic probability

Let $(\Omega,\mathcal{F},p)$ be a probability space, and let $A,B\in\mathcal{F}$ be events, i.e. measurable subsets of $\Omega$. We say that $A$ and $B$ are independent if and only if

$$ p(A\cap B) = p(A)\,p(B) , $$

i.e. if the joint probability is the product of the probabilities.
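For instance (a standard worked example, not specific to this page), roll a fair six-sided die and let $A$ be the event that the outcome is even, $B$ the event that the outcome is at most $2$. Then

$$ p(A\cap B) = p(\{2\}) = \tfrac{1}{6} = \tfrac{1}{2}\cdot\tfrac{1}{3} = p(A)\,p(B) , $$

so $A$ and $B$ are independent, even though they are not disjoint as subsets of $\Omega$.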

More generally, if $f\colon(\Omega,\mathcal{F})\to(X,\mathcal{A})$ and $g\colon(\Omega,\mathcal{F})\to(Y,\mathcal{B})$ are random variables or random elements, one says that $f$ and $g$ are independent if and only if all the events they induce are independent, i.e. for every $A\in\mathcal{A}$ and $B\in\mathcal{B}$,

$$ p\big(f^{-1}(A)\cap g^{-1}(B)\big) = p\big(f^{-1}(A)\big)\,p\big(g^{-1}(B)\big) . $$
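On a finite sample space this condition can be checked exhaustively. Here is a minimal Python sketch (a hypothetical example, not taken from this page), with $\Omega = \{0,1\}^2$ modelling two fair coin flips and $f$, $g$ the coordinate projections:

```python
from itertools import product

# Hypothetical finite example: Omega = {0,1}^2 with the uniform measure,
# modelling two independent fair coin flips.
Omega = list(product([0, 1], repeat=2))
p = {omega: 1/4 for omega in Omega}

def f(omega):
    return omega[0]  # first coordinate projection

def g(omega):
    return omega[1]  # second coordinate projection

def prob(event):
    """p(E) for a subset E of Omega."""
    return sum(p[omega] for omega in event)

def preimage(h, values):
    """h^{-1}(values) as a subset of Omega."""
    return {omega for omega in Omega if h(omega) in values}

# Verify p(f^{-1}(A) n g^{-1}(B)) = p(f^{-1}(A)) p(g^{-1}(B))
# for all subsets A, B of {0,1}.
subsets = [frozenset(), frozenset({0}), frozenset({1}), frozenset({0, 1})]
for A in subsets:
    for B in subsets:
        lhs = prob(preimage(f, A) & preimage(g, B))
        rhs = prob(preimage(f, A)) * prob(preimage(g, B))
        assert abs(lhs - rhs) < 1e-12

print("all product equations hold: f and g are independent")
```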

Equivalently, one can form the joint random variable $(f,g)\colon(\Omega,\mathcal{F})\to(X\times Y,\mathcal{A}\otimes\mathcal{B})$ and the joint distribution $q=(f,g)_*p$ on $X\times Y$, i.e. the pushforward of $p$ along $(f,g)$. Then $f$ and $g$ are independent as random variables if and only if for every $A\in\mathcal{A}$ and $B\in\mathcal{B}$,

$$ q\big(\pi_1^{-1}(A)\cap \pi_2^{-1}(B)\big) = q\big(\pi_1^{-1}(A)\big)\,q\big(\pi_2^{-1}(B)\big) , $$

where $\pi_1$ and $\pi_2$ denote the projections of the product $X\times Y$.

If we denote the marginal distributions of $q$ by $q_X$ and $q_Y$, the independence condition reads

$$ q(A \times B) = q_X(A)\,q_Y(B) , $$

meaning that the joint distribution $q$ is the product of its marginals.
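For finite $X$ and $Y$, this last formulation reduces to an entrywise check: the joint table must equal the outer product of its marginals. A small numerical sketch (the joint table below is made up for illustration; numpy is assumed):

```python
import numpy as np

# Made-up joint distribution q on a finite X x Y (2 x 3 table);
# rows index X, columns index Y.
q = np.array([[0.10, 0.20, 0.10],
              [0.15, 0.30, 0.15]])

q_X = q.sum(axis=1)   # marginal q_X, obtained by summing out Y
q_Y = q.sum(axis=0)   # marginal q_Y, obtained by summing out X

# q(A x B) = q_X(A) q_Y(B) for all A, B is equivalent, on finite spaces,
# to q being the outer product of its marginals, checked entrywise here.
print(np.allclose(q, np.outer(q_X, q_Y)))  # True: this q is a product distribution
```

Summing the table over one axis implements marginalization, and `np.outer` builds the product distribution $q_X \otimes q_Y$ for comparison.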

(…)

In terms of the Giry monad

(…)

In the category of Markov kernels

(…)

In Markov categories

(For now see Markov category - Stochastic independence)

In dagger categories

(…)

See also


category: probability
