In probability theory, iid is shorthand for independent and identically distributed, and it’s mostly used for random variables.
The notion of such random variables formalizes the idea of repeated independent coin flips or dice rolls: the outcomes are independent of one another, and each individual outcome (for example, each single coin flip) follows the same distribution.
Another name for the same process is Bernoulli process.
Let $X$ be a measurable space. Random variables or random elements in a sequence $f_n \colon \Omega\to X$ on a probability space $(\Omega,\mu)$ are said to be iid, or independent and identically distributed, if their joint distribution $p$ is of the form $q\otimes q\otimes\dots\otimes q$ for some measure $q$ on $X$. Equivalently, if for all measurable subsets $A_1,\dots,A_n$ of $X$,
$$
p(A_1\times\dots\times A_n) \;=\; q(A_1)\cdots q(A_n) .
$$
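The product formula above can be checked numerically. A minimal sketch (assumptions: fair coin flips, and the helper name `joint_and_marginal_freqs` is ours, not from the text): for iid fair flips, the joint probability of two heads should factor as $q(A_1)\,q(A_2) = 0.5 \cdot 0.5 = 0.25$.

```python
import random

def joint_and_marginal_freqs(n_trials=100_000, seed=0):
    """Estimate P(X1 = heads and X2 = heads) from iid fair coin flips."""
    rng = random.Random(seed)
    joint = 0
    for _ in range(n_trials):
        # Two independent flips, each following the same Bernoulli(0.5) law.
        x1 = rng.random() < 0.5
        x2 = rng.random() < 0.5
        joint += x1 and x2
    return joint / n_trials

p_joint = joint_and_marginal_freqs()
# By independence, this estimate should be close to 0.5 * 0.5 = 0.25.
print(p_joint)
```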
A similar definition can be given for infinite products as well, by means of the Kolmogorov extension theorem.
Sometimes, especially when the space $X$ is finite (particularly when it has two elements, such as for coin flips), one calls the resulting stochastic process a Bernoulli process.
Given a probability distribution $p$ on $X$, one can take iid samples. This can be described as a Markov kernel $samp_\mathbb{N}\colon P X\to X^\mathbb{N}$, where $P X$ denotes a space of probability measures on $X$. Explicitly, the kernel $samp_\mathbb{N}\colon P X\to X^\mathbb{N}$ is given on finite cylinder sets by
$$
samp_\mathbb{N}(A_1\times\dots\times A_n\times X\times X\times\cdots \mid p) \;=\; p(A_1)\cdots p(A_n) ,
$$
where again we make use of the Kolmogorov extension theorem to define the kernel in terms of its finite marginals.
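In computational terms, $samp_\mathbb{N}$ takes a probability measure and returns an infinite stream of iid draws from it. A minimal sketch for finite $X$ (assumptions: the measure is represented as a dict of probabilities, and the function name `samp` merely mirrors the kernel's name):

```python
import random
from itertools import islice

def samp(q, seed=None):
    """Given a probability measure q on a finite set X (a dict
    value -> probability), yield an infinite stream of iid samples
    from q -- a point of X^N drawn from the product measure."""
    rng = random.Random(seed)
    values, weights = zip(*q.items())
    while True:
        yield rng.choices(values, weights=weights)[0]

# Finite marginals of the product measure: any first n coordinates
# of the stream are n iid samples from q.
stream = samp({"heads": 0.5, "tails": 0.5}, seed=1)
first_five = list(islice(stream, 5))
print(first_five)
```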
One way of stating de Finetti's theorem is by saying that the map $samp_\mathbb{N}:P X\to X^\mathbb{N}$ is a limit cone over an exchangeable process.
In categorical probability, for example in Markov categories, independence is simply encoded by taking the tensor product of morphisms (for example, from the monoidal unit).
In Markov categories, one can model iid samples using the copy map (see at Markov category), together with Kolmogorov products.
An iid process is always exchangeable. (The converse is not true: for example, a nontrivial mixture of distinct iid processes is exchangeable but not iid.)
De Finetti's theorem says that every exchangeable probability measure is a unique convex mixture of iid ones.
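The distinction between exchangeable and iid can be made concrete with an exact calculation (assumptions: a two-point mixture of coin biases, chosen here purely for illustration). Drawing a bias once and then flipping conditionally iid gives an exchangeable process; independence fails because the joint probability of two heads does not factor.

```python
# Mixture of two iid coin processes: with probability 1/2 the bias is 0.2,
# with probability 1/2 it is 0.8. The bias is drawn once, then all flips
# are conditionally iid given the bias -- so the process is exchangeable.
biases = [0.2, 0.8]

p1 = sum(0.5 * b for b in biases)        # P(X1 = heads): 0.5*0.2 + 0.5*0.8
p11 = sum(0.5 * b * b for b in biases)   # P(X1 = X2 = heads): mixture of b^2

# The joint probability does not factor, so the process is not iid,
# even though each single flip is marginally fair.
print(p1, p11, p1 * p1)
```

This is exactly the situation de Finetti's theorem describes: the exchangeable measure here is a convex mixture of the two iid coin measures.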
The Hewitt-Savage zero-one law says that given iid random variables, the probability of each exchangeable event is either zero or one, i.e. it is a zero-one measure.
The Kolmogorov zero-one law, similarly, says that given iid random variables, the probability of each tail event is either zero or one, i.e. it is a zero-one measure.
The law of large numbers says that given integrable iid random variables, the empirical mean converges (almost surely, in the strong form of the law) to the expectation value.
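The law of large numbers is easy to observe numerically. A minimal sketch (assumptions: iid Uniform(0,1) samples, whose expectation value is $1/2$; the helper name `empirical_mean` is ours):

```python
import random

def empirical_mean(n, seed=0):
    """Empirical mean of n iid Uniform(0,1) samples."""
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(n)) / n

# By the law of large numbers, for large n this should be close to
# the expectation value E[X] = 1/2.
m = empirical_mean(100_000)
print(m)
```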
de Finetti's theorem, Hewitt-Savage zero-one law, Kolmogorov zero-one law, law of large numbers
ergodic system, Bernoulli shift
See also:
Wikipedia, Independence (probability theory)
ProofWiki: Definition:Independent Random Variables
ProofWiki: Condition for Independence from Product of Expectations
Last revised on July 25, 2024 at 13:36:08.