In probability theory, it is a well known fact that events are not always independent, i.e. that in general

$$ P(A \cap B) \;\ne\; P(A)\, P(B) . $$

One says that the probability of a product is not the product of the probabilities. The term on the left is called the joint probability (the probability that the events $A$ and $B$ happen jointly), and the terms on the right are called marginal probabilities (the probabilities that $A$ and $B$ happen separately). In general, the joint probability carries more information than the marginal probabilities alone, because of the presence of correlation or other stochastic interactions.
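As a quick illustration, one can check the inequality on a small finite sample space. The numbers below are hypothetical, chosen only to exhibit correlation:

```python
from fractions import Fraction

# A finite probability space: two coin flips, where the second flip
# tends to copy the first.  Each outcome is a pair with its probability.
space = {
    ("H", "H"): Fraction(3, 8),
    ("H", "T"): Fraction(1, 8),
    ("T", "H"): Fraction(1, 8),
    ("T", "T"): Fraction(3, 8),
}

def prob(event):
    """Probability of an event, i.e. of a subset of the sample space."""
    return sum(p for omega, p in space.items() if omega in event)

A = {omega for omega in space if omega[0] == "H"}  # first flip is heads
B = {omega for omega in space if omega[1] == "H"}  # second flip is heads

joint = prob(A & B)          # P(A ∩ B) = 3/8
product = prob(A) * prob(B)  # P(A) P(B) = 1/2 · 1/2 = 1/4
print(joint, product)        # the two differ: the events are correlated
```

Exact rational arithmetic (`fractions.Fraction`) is used so the inequality is not obscured by floating-point rounding.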
From the category-theoretic perspective, one can encode this idea using the following (related) concepts: probability monads (such as the Giry monad), or Markov categories (such as the category Stoch of Markov kernels).
When one takes as Markov category the Kleisli category of a probability monad, the two concepts coincide.
Let $(\Omega, \mathcal{F}, p)$ be a probability space, and let $A$ and $B$ be events, i.e. measurable subsets of $\Omega$. We call the joint probability of $A$ and $B$ the probability of their intersection, $p(A \cap B)$.
More generally, let $f: \Omega \to X$ and $g: \Omega \to Y$ be random variables (or random elements) on $\Omega$. The joint random variable of $f$ and $g$ is the random variable $(f, g): \Omega \to X \times Y$ given by the universal property of the product.
We call the joint distribution of $f$ and $g$ the pushforward distribution $(f, g)_* p$ on $X \times Y$, and we call its marginal distributions the pushforward distributions $f_* p$ on $X$ and $g_* p$ on $Y$.
Note that $f_* p = (\pi_X)_* (f, g)_* p$ and $g_* p = (\pi_Y)_* (f, g)_* p$, where $\pi_X$ and $\pi_Y$ are the product projections, so that the marginalization maps can be seen as the functorial images of the product projections under a probability monad.
In the absence of the space $\Omega$, one can call “joint distribution” any probability distribution on a product space $X \times Y$, and define its marginal distributions by pushing forward along the product projections. This is equivalent to taking $X \times Y$ itself as $\Omega$. The same is true for products of more spaces, including infinite products.
Given two (marginal) distributions $p$ and $q$ on spaces $X$ and $Y$, in principle there could be many joint distributions on $X \times Y$ admitting $p$ and $q$ as their marginals, corresponding to different possible stochastic interactions. A canonical choice of joint distribution is given by the product distribution $p \otimes q$, defined on product sets of the form $A \times B$ by

$$ (p \otimes q)(A \times B) \;=\; p(A)\, q(B) , $$

which makes the two variables independent.
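For finite spaces the product distribution is computed pointwise. A sketch with hypothetical marginals:

```python
from fractions import Fraction

# Two marginal distributions on finite spaces X and Y (hypothetical values).
p = {"x0": Fraction(3, 5), "x1": Fraction(2, 5)}
q = {"y0": Fraction(1, 2), "y1": Fraction(1, 2)}

def product_distribution(p, q):
    """The product distribution p ⊗ q on X × Y:
    (p ⊗ q)(x, y) = p(x) · q(y), making the two factors independent."""
    return {(x, y): px * qy for x, px in p.items() for y, qy in q.items()}

pq = product_distribution(p, q)
print(pq[("x0", "y1")])   # 3/5 · 1/2 = 3/10
print(sum(pq.values()))   # still normalized: total mass 1
```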
Recall that the Giry monad, as well as other probability monads, encodes the idea of forming, from a space $X$, a space $P X$ of probability measures over it. A joint distribution is then encoded by an element of the space $P(X \times Y)$, and the marginals are elements of $P X$ and $P Y$ obtained via applying $P$ to the product projections. Additionally, by the universal property of products, there exists a unique map $P(X \times Y) \to P X \times P Y$ making the following diagram commute. This can be interpreted as forming, from a joint distribution, the pair of its marginals, which in general encodes less information. The same can be done for joint distributions over a larger number of factors, including infinite products.
In order to form product distributions, one needs additional compatibility conditions between the monad structure (giving probability measures) and the products of the category. These conditions are exactly captured by the idea of a monoidal monad, or equivalently, a commutative monad.
This amounts in particular to a natural map $\nabla_{X,Y}: P X \times P Y \to P(X \times Y)$ satisfying particular compatibility conditions. For the Giry monad, it is the map assigning to a pair of measures $(p, q)$ their product measure $p \otimes q$.
Consider now the following two diagrams.
The commutativity of the diagram on the left says that if one forms a product measure and then takes its marginals, one recovers the two factors. In other words, every product probability measure is necessarily the product of its marginals. One can show that the diagram on the left commutes if and only if $P$ preserves the terminal object, i.e. $P(1) \cong 1$ (see at affine monad). This is the case for the Giry monad and for most probability monads, since the one-point space admits a unique probability measure. (It can be considered a “normalization” condition for probability measures: if one drops normalization, the one-point space admits several measures.)
In general, the diagram on the right does not commute: if one starts with a joint distribution and then forms its marginals, the original distribution may be different from the product of the marginals. Therefore marginalization is a destructive operation, forgetting possible stochastic interaction. Indeed, if the diagram on the left commutes, then the diagram on the right commutes if and only if the monad preserves products, i.e. it is strong monoidal. The “random” behavior of the Giry monad and of other probability monads can be explained, to a large degree, by their failure to preserve products. (More on this below, and at Markov category).
As we have seen above, a joint distribution, in general, encodes more information than the pair of its marginals, and cannot always be recovered by forming the product distribution.
We can express this equivalently by looking at the category Stoch of Markov kernels, as well as at other Kleisli categories of similar probability monads.
First of all, recall that the category Stoch inherits the monoidal structure from the product in Meas, the cartesian product of measurable spaces.
This product in Stoch, however, is not a categorical product: the universal property of a product requires that, given objects $X$, $Y$ and $A$ in a category, for every pair of morphisms $f: A \to X$ and $g: A \to Y$ there exists a unique morphism $A \to X \times Y$ making the following diagram commute. For Markov kernels, this property does not hold: setting $A = 1$, the one-point space, the property would require in particular that there is a unique joint distribution on $X \times Y$ for every pair of marginals on $X$ and $Y$. As we have seen above, this is not the case. What fails is not the existence part of the universal property: a joint distribution always exists, namely the product distribution. What fails is the uniqueness part. In other words, the monoidal structure on Stoch is a weak product but not a categorical product.
More generally, given a monoidal monad on a cartesian monoidal category, the induced monoidal structure on the Kleisli category is cartesian monoidal only when the monad is strong monoidal, and we saw that this is not the case for most probability monads. (As mentioned at Markov category, a cartesian monoidal Kleisli category can be seen as having “no randomness”.)
In Markov categories, one can define joint and marginal states similarly to what happens in the category Stoch. Recall that a state on an object $X$, i.e. a morphism $p: I \to X$ from the monoidal unit, has the interpretation of a probability measure on $X$; in Stoch it is exactly a Markov kernel $1 \to X$, i.e. a probability measure.
A joint state is a state of the form $p: I \to X \otimes Y$. In Stoch, this is exactly a probability measure on $X \times Y$.
Given a joint state $p$ on $X \otimes Y$, its marginals are the states $p_X$ and $p_Y$ on $X$ and $Y$ respectively, given by discarding the unobserved variable, i.e. by composing $p$ with the discard map on the other factor.
In Stoch, these correspond exactly to marginal distributions.
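In the finite case (the Markov category FinStoch), a joint state is a probability matrix, and composing with the discard map on one factor amounts to summing out that index. A minimal sketch with hypothetical (dyadic, so float-exact) values:

```python
# A joint state on X ⊗ Y in FinStoch: a probability matrix,
# rows indexed by X, columns indexed by Y.
joint = [
    [0.5,   0.25],
    [0.125, 0.125],
]

def discard_Y(joint):
    """Compose with id_X ⊗ discard_Y: sum out the Y index (rows survive)."""
    return [sum(row) for row in joint]

def discard_X(joint):
    """Compose with discard_X ⊗ id_Y: sum out the X index (columns survive)."""
    return [sum(col) for col in zip(*joint)]

print(discard_Y(joint))  # the marginal state on X
print(discard_X(joint))  # the marginal state on Y
```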
More on this (and examples in other Markov categories) can be found at Markov category - States, channels, joints, marginals.
Tobias Fritz, Paolo Perrone, Bimonoidal Structure of Probability Monads, Proceedings of MFPS, 2018. (arXiv:1804.03527)
Paolo Perrone, Categorical Probability and Stochastic Dominance in Metric Spaces, PhD thesis, 2018.
Kenta Cho, Bart Jacobs, Disintegration and Bayesian Inversion via String Diagrams, Mathematical Structures in Computer Science 29, 2019. (arXiv:1709.00322)
Tobias Fritz, A synthetic approach to Markov kernels, conditional independence and theorems on sufficient statistics, Advances in Mathematics 370, 2020. (arXiv:1908.07021)
Anders Kock, Bilinearity and cartesian closed monads, Mathematica Scandinavica 29(2), 1971.
Last revised on July 18, 2024 at 09:38:08.