In probability theory, iid is shorthand for independent and identically distributed, and it’s mostly used for random variables.
The notion of such random variables formalizes the idea of repeated independent coin flips or dice rolls: the single events (for example, each single coin flip) are independent of one another, and they all follow the same distribution.
Another name for the same process is Bernoulli process.
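For illustration, here is a minimal Python sketch (using numpy; not part of the original text) of such a process: every flip is drawn independently from the same Bernoulli distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# An iid sequence of coin flips: each entry is drawn independently
# from the same Bernoulli(1/2) distribution.
flips = rng.integers(0, 2, size=20)  # 0 = tails, 1 = heads
print(flips)
```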
Let $(X, \mathcal{A})$ be a measurable space. Random variables or random elements $f_1, \dots, f_n$ in a sequence on a probability space $(\Omega, \mathcal{F}, p)$ are said to be iid, or independent and identically distributed, if their joint distribution is in the form $q \otimes \dots \otimes q$ for some measure $q$ on $X$. Equivalently, if for all measurable subsets $A_1, \dots, A_n$ of $X$,

$$ p(f_1 \in A_1, \dots, f_n \in A_n) \;=\; q(A_1) \cdots q(A_n) . $$
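To see the factorization condition concretely, here is a small numerical sketch (an illustration, not from the original text): for two iid dice rolls, the empirical joint frequency of a pair of events approximately factors into the product of the marginal frequencies.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two iid dice rolls, repeated many times.
n = 100_000
f1 = rng.integers(1, 7, size=n)
f2 = rng.integers(1, 7, size=n)

# Events A1 = "roll is even", A2 = "roll is at least 5".
joint = np.mean((f1 % 2 == 0) & (f2 >= 5))         # p(f1 in A1, f2 in A2)
product = np.mean(f1 % 2 == 0) * np.mean(f2 >= 5)  # q(A1) * q(A2)

print(joint, product)  # both close to (1/2)*(1/3) = 1/6
```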
A similar definition can be given for infinite products as well, by means of the Kolmogorov extension theorem.
Sometimes, especially when the space is finite (particularly when it has two elements, such as for coin flips), one calls the resulting stochastic process a Bernoulli process.
Given a probability distribution $q$ on $X$, one can take iid samples. This can be described as a Markov kernel

$$ iid \,\colon\, P X \longrightarrow X^{\mathbb{N}} , $$

where $P X$ denotes the space of probability measures on $X$ and $X^{\mathbb{N}}$ denotes the countable product of copies of $X$.
Explicitly, the kernel is given as follows:

$$ iid(A_1 \times \dots \times A_n \times X \times X \times \cdots \mid q) \;=\; q(A_1) \cdots q(A_n) , $$

where again we make use of the Kolmogorov extension theorem to define the kernel in terms of its finite marginals.
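Computationally, one can read this kernel as a map taking a distribution and returning a sampler for finite truncations of an infinite iid sequence. The following Python sketch is only an illustration under that reading; the names iid_kernel and sample are hypothetical, not from the original text.

```python
import numpy as np

def iid_kernel(outcomes, probs, rng=None):
    """Given a distribution over a finite set, return a sampler
    for finite truncations of an infinite iid sequence."""
    rng = rng or np.random.default_rng()

    def sample(n):
        # The marginal on the first n coordinates is the n-fold
        # product q(A_1) ... q(A_n); later coordinates are left
        # unspecified, mirroring the definition by finite marginals.
        return rng.choice(outcomes, size=n, p=probs)

    return sample

# Sampler for an infinite sequence of fair coin flips, truncated to 10.
sample = iid_kernel([0, 1], [0.5, 0.5], np.random.default_rng(2))
print(sample(10))
```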
One way of stating de Finetti's theorem is by saying that the map $iid$ is a limit cone over an exchangeable process.
In categorical probability, for example in Markov categories, independence is simply encoded by taking the tensor product of morphisms (for example, from the monoidal unit).
In Markov categories, one can model iid samples using the copy map (see at Markov category), together with Kolmogorov products.
An iid process is always exchangeable. The converse is not true: for example, a nontrivial mixture of two distinct iid processes is exchangeable but not iid.
De Finetti's theorem says that every exchangeable probability measure is a unique convex mixture of iid ones.
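The previous two points can be illustrated numerically (a sketch, not from the original text): a nontrivial mixture of two iid coins is exchangeable, yet its coordinates are positively correlated, so the mixture itself is not iid.

```python
import numpy as np

rng = np.random.default_rng(3)

# Mixture of two iid coin processes: first pick a bias (0.1 or 0.9),
# then flip a coin with that bias twice.  The result is exchangeable.
n_seq = 100_000
bias = rng.choice([0.1, 0.9], size=(n_seq, 1))
flips = (rng.random((n_seq, 2)) < bias).astype(float)

# For an iid process the two coordinates would be uncorrelated; here
# the sample correlation is about 0.64, exposing the hidden mixture.
print(np.corrcoef(flips[:, 0], flips[:, 1])[0, 1])
```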
The Hewitt-Savage zero-one law says that given iid random variables, the probability of each exchangeable event is either zero or one; that is, the measure restricted to exchangeable events is a zero-one measure.
The Kolmogorov zero-one law, similarly, says that given iid random variables, the probability of each tail event is either zero or one, i.e. the measure restricted to tail events is a zero-one measure.
The law of large numbers says that given integrable iid random variables, the empirical mean tends to the expectation value.
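As a quick sketch (not part of the original text), one can watch the empirical mean of iid dice rolls approach the expectation value 3.5:

```python
import numpy as np

rng = np.random.default_rng(4)

# Running empirical means of iid dice rolls converge to E[X] = 3.5.
rolls = rng.integers(1, 7, size=1_000_000)
running_mean = np.cumsum(rolls) / np.arange(1, len(rolls) + 1)

for n in (10, 1_000, 1_000_000):
    print(n, running_mean[n - 1])
```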
Related concepts: de Finetti's theorem, Hewitt-Savage zero-one law, Kolmogorov zero-one law, law of large numbers, ergodic system, Bernoulli shift.
See also:
Wikipedia: Independence (probability theory)
ProofWiki: Definition:Independent Random Variables
ProofWiki: Condition for Independence from Product of Expectations