This is a draft of a paper by John Baez, Tobias Fritz and Tom Leinster. The finished version can be found here:
The paper was developed in a series of blog conversations, in this order:
John Baez, Category-theoretic characterizations of entropy.
John Baez, Category-theoretic characterizations of entropy II.
Tom Leinster, Entropies vs. means.
Tom Leinster, An operadic introduction to entropy.
John Baez, A characterization of entropy.
For a lot of material that never got incorporated into the paper, see
and for another paper that never got finished, see:
There are numerous characterizations of Shannon entropy and Tsallis entropy as measures of information obeying certain properties. Using work by Faddeev and Furuichi, we derive a very simple characterization. Instead of focusing on the entropy of a probability measure on a finite set, this characterization focuses on the ‘information loss’, or change in entropy, associated with a measure-preserving function. We show that Shannon entropy gives the only concept of information loss that is functorial, convex-linear and continuous. This characterization naturally generalizes to Tsallis entropy as well.
The Shannon entropy (S) of a probability measure $p$ on a finite set $X$ is given by:
$$S(p) = -\sum_{i \in X} p_i \ln p_i .$$
There are many theorems that seek to characterize Shannon entropy starting from plausible assumptions; see for example the book by Aczél and Daróczy (AD). Here we give a new and very simple characterization theorem. The main novelty is that we do not focus directly on the entropy of a single probability measure, but rather, on the change in entropy associated with a measure-preserving function. The entropy of a single probability measure can be recovered as the change in entropy of the unique measure-preserving function to the one-point space.
A measure-preserving function can map several points to the same point, but not vice versa, so this change in entropy is always a decrease. Since the second law of thermodynamics says that entropy always increases, this may seem counterintuitive. It may seem less so if we think of the function as some kind of data processing which does not introduce any additional randomness. Then the entropy can only decrease, and we can talk about the ‘information loss’ associated to the function.
Some examples may help to clarify this point. Consider the only possible map $f \colon \{a, b\} \to \{c\}$. Suppose $p$ is the probability measure on $\{a, b\}$ such that each point has measure $1/2$, while $q$ is the unique probability measure on the set $\{c\}$. Then $S(p) = \ln 2$, while $S(q) = 0$. The information loss associated to the map $f$ is defined to be $S(p) - S(q)$, which in this case equals $\ln 2$. In other words, the measure-preserving map $f$ loses one bit of information.
On the other hand, suppose $p'$ is the probability measure on $\{a, b\}$ such that $a$ has measure $1$ and $b$ has measure $0$. Then $S(p') = 0$, so with respect to this probability measure the map $f$ has information loss $S(p') - S(q) = 0$. It may seem odd to say that $f$ loses no information: after all, it maps $a$ and $b$ to the same point. However, because the point $b$ has probability zero with respect to $p'$, knowing that $f(x) = c$ lets us conclude that $x = a$ with probability one.
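As a quick numerical illustration of these two examples, here is a minimal Python sketch (the helper names `shannon_entropy`, `pushforward` and `information_loss` are ours, not from the paper):

```python
from math import log

def shannon_entropy(p):
    """Shannon entropy -sum_i p_i ln p_i of a probability measure,
    given as a dict from points to probabilities (with 0 ln 0 = 0)."""
    return -sum(px * log(px) for px in p.values() if px > 0)

def pushforward(p, f):
    """Image of the measure p under the function f (also given as a dict)."""
    q = {}
    for x, px in p.items():
        q[f[x]] = q.get(f[x], 0.0) + px
    return q

def information_loss(p, f):
    """Entropy change S(p) - S(q) along the measure-preserving map f: p -> q."""
    return shannon_entropy(p) - shannon_entropy(pushforward(p, f))

f = {'a': 'c', 'b': 'c'}                          # the only map {a,b} -> {c}
print(information_loss({'a': 0.5, 'b': 0.5}, f))  # ln 2 ~ 0.693: one bit lost
print(information_loss({'a': 1.0, 'b': 0.0}, f))  # 0.0: no information lost
```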
The shift in emphasis from probability measures to measure-preserving functions suggests that it will be useful to adopt the language of category theory (M), where one has objects and morphisms between them. We will do this, although almost no category theory is required to read this paper.
Shannon entropy has a very simple characterization in terms of information loss. To state it, we consider a category $\mathrm{FinProb}$ where a morphism $f \colon (X, p) \to (Y, q)$ is a measure-preserving function between finite sets equipped with probability measures. We assume $F$ is a function that assigns to any such morphism a number $F(f) \in [0, \infty)$, which we call its information loss. We also assume that $F$ obeys three axioms. If we call a morphism $f$ a ‘process’, we can state these roughly in words as follows:
Functoriality. Given a process consisting of two stages, the amount of information lost in the whole process is the sum of the amounts lost at each stage:
$$F(f \circ g) = F(f) + F(g).$$
Convex linearity. If we flip a probability-$\lambda$ coin to decide whether to do one process or another, the information lost is $\lambda$ times the information lost by the first process plus $1 - \lambda$ times the information lost by the second:
$$F(\lambda f \oplus (1 - \lambda) g) = \lambda F(f) + (1 - \lambda) F(g).$$
Continuity. If we change a process slightly, the information lost changes only slightly: $F(f)$ is a continuous function of $f$.
(For full details see Section 2.) Given these assumptions, we conclude that there exists a constant $c \ge 0$ such that for any morphism $f \colon (X, p) \to (Y, q)$, we have
$$F(f) = c\,\bigl(S(p) - S(q)\bigr).$$
The charm of this result is that none of the hypotheses hint at any special role for the function $-\sum_i p_i \ln p_i$, but it emerges in the conclusion. The key here is a result of Faddeev (F) described in Section 3.
For many scientific purposes, probability measures are not enough. Our result extends to general measures on finite sets, as follows. Any measure $m$ on a finite set can be expressed as $\lambda p$ for some scalar $\lambda \ge 0$ and probability measure $p$, and we define $S(\lambda p) = \lambda S(p)$. In this more general setting, we are no longer confined to taking convex linear combinations of measures. Accordingly, the convex linearity condition in our main theorem is replaced by two conditions: additivity ($F(f \oplus g) = F(f) + F(g)$) and homogeneity ($F(\lambda f) = \lambda F(f)$). As before, the conclusion is that, up to a multiplicative constant, $F$ assigns to each morphism $f \colon (X, m) \to (Y, m')$ the information loss $S(m) - S(m')$.
It is natural to wonder what happens when we replace the homogeneity axiom by a more general homogeneity condition:
$$F(\lambda f) = \lambda^{\alpha} F(f)$$
for some number $\alpha > 0$. In this case we find that $F(f)$ is proportional to $S_{\alpha}(m) - S_{\alpha}(m')$, where $S_{\alpha}$ is the so-called Tsallis entropy of order $\alpha$.
We work with finite sets equipped with probability measures. All measures on a finite set $X$ will be assumed nonnegative and defined on the $\sigma$-algebra of all subsets of $X$.
Let $\mathrm{FinProb}$ be the category whose objects are finite sets equipped with probability measures and whose morphisms are measure-preserving functions.
Since any measure on a finite set is determined by its values on singletons, we will think of an object of $\mathrm{FinProb}$ as a pair $(X, p)$ consisting of a finite set $X$ together with an $X$-tuple of numbers $p_i \ge 0$ ($i \in X$) satisfying $\sum_{i \in X} p_i = 1$. A morphism $f \colon (X, p) \to (Y, q)$ in $\mathrm{FinProb}$ is a function $f \colon X \to Y$ such that
$$q_j = \sum_{i \in f^{-1}(j)} p_i$$
for all $j \in Y$.
We will usually write an object $(X, p)$ as $p$ for short, and write a morphism $f \colon (X, p) \to (Y, q)$ as simply $f \colon p \to q$.
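To make the measure-preserving condition concrete, here is a small Python check (a sketch with names of our own choosing) that a function between finite sets is a morphism in this sense:

```python
def is_measure_preserving(p, q, f, tol=1e-12):
    """Check that q_j equals the total p-mass of the preimage f^{-1}(j) for every j."""
    for j, qj in q.items():
        preimage_mass = sum(pi for i, pi in p.items() if f[i] == j)
        if abs(preimage_mass - qj) > tol:
            return False
    return True

p = {1: 0.2, 2: 0.3, 3: 0.5}
q = {'u': 0.5, 'v': 0.5}
f = {1: 'u', 2: 'u', 3: 'v'}
print(is_measure_preserving(p, q, f))                     # True: f is a morphism p -> q
print(is_measure_preserving(p, {'u': 0.4, 'v': 0.6}, f))  # False: measures do not match
```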
There is a way to take ‘convex linear combinations’ of objects and morphisms in $\mathrm{FinProb}$. Let $(X, p)$ and $(Y, q)$ be finite sets equipped with probability measures, and let $\lambda \in [0, 1]$. Then there is a probability measure
$$\lambda p \oplus (1 - \lambda) q$$
on the disjoint union of the sets $X$ and $Y$, whose value at a point $i$ is given by $\lambda p_i$ if $i \in X$, and $(1 - \lambda) q_i$ if $i \in Y$.
Given morphisms $f \colon p \to p'$ and $g \colon q \to q'$, there is a unique morphism
$$\lambda f \oplus (1 - \lambda) g \colon \lambda p \oplus (1 - \lambda) q \to \lambda p' \oplus (1 - \lambda) q'$$
that restricts to $f$ on the measure space $p$ and to $g$ on the measure space $q$.
The same notation can be extended, in the obvious way, to convex combinations of more than two objects or morphisms. For example, given objects $p^1, \ldots, p^n$ of $\mathrm{FinProb}$ and nonnegative scalars $\lambda_1, \ldots, \lambda_n$ summing to $1$, there is a new object $\lambda_1 p^1 \oplus \cdots \oplus \lambda_n p^n$.
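A minimal Python sketch of this construction (again with names of our own choosing) tags each point by which summand it came from, so that the underlying sets stay disjoint:

```python
def convex_combination(lam, p, q):
    """The measure lam*p (+) (1-lam)*q on the disjoint union of the underlying
    sets; points are tagged 0 or 1 to keep the two sets disjoint."""
    out = {(0, x): lam * px for x, px in p.items()}
    out.update({(1, y): (1 - lam) * qy for y, qy in q.items()})
    return out

p = {'a': 0.5, 'b': 0.5}
q = {'c': 1.0}
r = convex_combination(0.25, p, q)
print(r)                # {(0, 'a'): 0.125, (0, 'b'): 0.125, (1, 'c'): 0.75}
print(sum(r.values()))  # 1.0: the result is again a probability measure
```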
Recall that the Shannon entropy of a probability measure $p$ on a finite set $X$ is
$$S(p) = -\sum_{i \in X} p_i \ln p_i$$
with the convention that $0 \ln 0 = 0$.
Suppose $F$ is any map sending morphisms in $\mathrm{FinProb}$ to numbers in $[0, \infty)$ and obeying these three axioms:

1. Functoriality: $F(f \circ g) = F(f) + F(g)$ whenever $f$, $g$ are composable morphisms.
2. Convex linearity: $F(\lambda f \oplus (1 - \lambda) g) = \lambda F(f) + (1 - \lambda) F(g)$ for all morphisms $f$, $g$ and scalars $\lambda \in [0, 1]$.
3. Continuity: $F$ is continuous.
Then there exists a constant $c \ge 0$ such that for any morphism $f \colon (X, p) \to (Y, q)$ in $\mathrm{FinProb}$,
$$F(f) = c\,\bigl(S(p) - S(q)\bigr),$$
where $S$ is the Shannon entropy. Conversely, for any constant $c \ge 0$, this formula determines a map $F$ obeying conditions 1-3.
We need to explain condition 3. A sequence of morphisms $f_n \colon (X_n, p^{(n)}) \to (Y_n, q^{(n)})$ in $\mathrm{FinProb}$ converges to a morphism $f \colon (X, p) \to (Y, q)$ if, for all sufficiently large $n$, we have $X_n = X$, $Y_n = Y$ and $f_n = f$ as functions, and moreover $p^{(n)} \to p$ pointwise (so that also $q^{(n)} \to q$ pointwise).

We define $F$ to be continuous if $F(f_n) \to F(f)$ whenever $f_n$ is a sequence of morphisms converging to a morphism $f$.
The proof of this theorem is given in a later section. First we show how to deduce a characterization of Shannon entropy for general measures on finite sets.
Let $\mathrm{FinMeas}$ be the category whose objects are finite sets equipped with measures and whose morphisms are measure-preserving functions.
There is more room for maneuver in $\mathrm{FinMeas}$ than in $\mathrm{FinProb}$: we can take arbitrary nonnegative linear combinations of objects and morphisms, not just convex combinations. Any nonnegative linear combination can be built up from direct sums and multiplication by nonnegative scalars, defined as follows.
For ‘direct sums’, first note that the disjoint union of two finite sets equipped with measures is another thing of the same sort. We write the disjoint union of $(X, m)$ and $(Y, n)$ as $(X, m) \oplus (Y, n)$, or $m \oplus n$ for short. Then, given morphisms $f \colon m \to m'$ and $g \colon n \to n'$, there is a unique morphism $f \oplus g \colon m \oplus n \to m' \oplus n'$ that restricts to $f$ on the measure space $m$ and to $g$ on the measure space $n$.
For ‘scalar multiplication’, first note that we can multiply a measure by a nonnegative real number and get a new measure. So, given an object $(X, m)$ and a number $\lambda \in [0, \infty)$ we obtain an object $(X, \lambda m)$ with the same underlying set and with $(\lambda m)_i = \lambda m_i$. Then, given a morphism $f \colon m \to m'$, there is a unique morphism $\lambda f \colon \lambda m \to \lambda m'$ that has the same underlying function as $f$.
This is consistent with our earlier notation for convex linear combinations.
We wish to give some conditions guaranteeing that a map sending morphisms in $\mathrm{FinMeas}$ to nonnegative real numbers comes from a multiple of Shannon entropy. To do this we need to define the Shannon entropy of a finite set $X$ equipped with a measure $m$, not necessarily a probability measure. Define the total mass of $(X, m)$ to be
$$\|m\| = \sum_{i \in X} m_i .$$

If this is nonzero, then $m$ is of the form $\|m\|\, p$ for a unique probability measure space $(X, p)$. In this case we define the Shannon entropy of $m$ to be $\|m\|\, S(p)$. If the total mass of $m$ is zero, we define its Shannon entropy to be zero.
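Here is a minimal Python sketch of this definition (the names are illustrative, not from the paper):

```python
from math import log

def shannon_entropy_prob(p):
    """Shannon entropy of a probability measure given as a list of weights summing to 1."""
    return -sum(x * log(x) for x in p if x > 0)

def shannon_entropy_measure(m):
    """S(m) = ||m|| * S(m / ||m||) for a general measure m, and 0 if the total mass is 0."""
    mass = sum(m)
    if mass == 0:
        return 0.0
    return mass * shannon_entropy_prob([x / mass for x in m])

print(shannon_entropy_measure([1.0, 1.0]))  # 2 ln 2 ~ 1.386
print(shannon_entropy_measure([0.0, 0.0]))  # 0.0 by convention
```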
We can define continuity for a map sending morphisms in $\mathrm{FinMeas}$ to numbers in $[0, \infty)$ just as we did for $\mathrm{FinProb}$, and show:
Suppose $F$ is any map sending morphisms in $\mathrm{FinMeas}$ to numbers in $[0, \infty)$ and obeying these four axioms:

1. Functoriality: $F(f \circ g) = F(f) + F(g)$ whenever $f$, $g$ are composable morphisms.
2. Additivity: $F(f \oplus g) = F(f) + F(g)$ for all morphisms $f$, $g$.
3. Homogeneity: $F(\lambda f) = \lambda F(f)$ for all morphisms $f$ and all $\lambda \in [0, \infty)$.
4. Continuity: $F$ is continuous.

Then there exists a constant $c \ge 0$ such that for any morphism $f \colon (X, m) \to (Y, m')$ in $\mathrm{FinMeas}$,
$$F(f) = c\,\bigl(S(m) - S(m')\bigr),$$
where $S$ is the Shannon entropy. Conversely, for any constant $c \ge 0$, this formula determines a map $F$ obeying conditions 1-4.
Take a map $F$ obeying the axioms listed here. Then $F$ restricts to a map on morphisms of $\mathrm{FinProb}$ obeying the axioms of the previous theorem. Hence there exists a constant $c \ge 0$ such that $F(f) = c\,(S(p) - S(q))$ whenever $f \colon p \to q$ is a morphism between probability measures. Now take an arbitrary morphism $f \colon m \to m'$ in $\mathrm{FinMeas}$. Since $f$ is measure-preserving, $\|m\| = \|m'\| = \lambda$, say. If $\lambda \ne 0$ then $m = \lambda p$, $m' = \lambda q$ and $f = \lambda g$ for some morphism $g \colon p \to q$ in $\mathrm{FinProb}$; then by homogeneity,
$$F(f) = \lambda F(g) = \lambda c\,\bigl(S(p) - S(q)\bigr) = c\,\bigl(S(m) - S(m')\bigr).$$

If $\lambda = 0$ then $f = 0 \cdot f$, so $F(f) = 0$ by homogeneity; also $S(m) = S(m') = 0$ in this case. So $F(f) = c\,(S(m) - S(m'))$ in either case. The converse statement follows from the converse in the previous theorem.
To solidify our intuitions, we first check that $F(f) = c\,(S(p) - S(q))$ really does determine a functor obeying all the conditions of the theorem for $\mathrm{FinProb}$. Since all these conditions are linear in $F$, it suffices to consider the case $c = 1$. It is clear that $F$ is continuous, and functoriality (condition 1) is also immediate: whenever $f \colon (X, p) \to (Y, q)$, $g \colon (Y, q) \to (Z, r)$ are morphisms in $\mathrm{FinProb}$,
$$F(g \circ f) = S(p) - S(r) = \bigl(S(p) - S(q)\bigr) + \bigl(S(q) - S(r)\bigr) = F(g) + F(f).$$

The work is to prove convex linearity (condition 2).
We begin by establishing a useful formula for $F(f) = S(p) - S(q)$, where as usual $f \colon (X, p) \to (Y, q)$ is a morphism in $\mathrm{FinProb}$. Since $f$ is measure-preserving, we have
$$q_j = \sum_{i \in f^{-1}(j)} p_i .$$

So
$$S(q) = -\sum_{j \in Y} q_j \ln q_j = -\sum_{j \in Y} \sum_{i \in f^{-1}(j)} p_i \ln q_{f(i)} = -\sum_{i \in X} p_i \ln q_{f(i)} ,$$
where in the last step we note that summing over all points $i$ that map to $j$ and then summing over all $j$ is the same as summing over all $i$. So,
$$S(p) - S(q) = -\sum_{i \in X} p_i \ln p_i + \sum_{i \in X} p_i \ln q_{f(i)}$$
and thus
$$F(f) = \sum_{i \in X} p_i \ln \frac{q_{f(i)}}{p_i} , \qquad (5)$$
where the quantity in the sum is defined to be zero when $p_i = 0$. If we think of $p$ and $q$ as the distributions of random variables $x \in X$ and $y \in Y$ with $y = f(x)$, then $F(f)$ is exactly the conditional entropy of $x$ given $y$. So, what we are calling ‘information loss’ is a special case of conditional entropy.
This formulation makes it easy to check convex linearity (condition 2): for morphisms $f \colon p \to q$ and $g \colon r \to s$,
$$F\bigl(\lambda f \oplus (1 - \lambda) g\bigr) = \sum_i \lambda p_i \ln \frac{\lambda q_{f(i)}}{\lambda p_i} + \sum_j (1 - \lambda) r_j \ln \frac{(1 - \lambda) s_{g(j)}}{(1 - \lambda) r_j} = \lambda F(f) + (1 - \lambda) F(g),$$
since the factors of $\lambda$ and $1 - \lambda$ cancel inside the logarithms.
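As a quick numerical sanity check of formula (5), here is a short Python sketch (with illustrative names) comparing $S(p) - S(q)$ with the sum $\sum_i p_i \ln(q_{f(i)}/p_i)$ for a randomly chosen morphism:

```python
from math import log, isclose
import random

def entropy(p):
    """Shannon entropy of a probability measure given as a dict."""
    return -sum(x * log(x) for x in p.values() if x > 0)

def pushforward(p, f):
    """Image measure of p under f."""
    q = {}
    for i, pi in p.items():
        q[f[i]] = q.get(f[i], 0.0) + pi
    return q

# A random probability measure on a 6-point set and a map onto a 3-point set.
random.seed(0)
weights = [random.random() for _ in range(6)]
total = sum(weights)
p = {i: w / total for i, w in enumerate(weights)}
f = {i: i % 3 for i in p}
q = pushforward(p, f)

loss = entropy(p) - entropy(q)
conditional = sum(pi * log(q[f[i]] / pi) for i, pi in p.items() if pi > 0)
print(isclose(loss, conditional))  # True: the two expressions for F(f) agree
```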
In the proof of the corollary for $\mathrm{FinMeas}$, the fact that $c\,(S(m) - S(m'))$ satisfies the four axioms was deduced from the analogous fact for $\mathrm{FinProb}$. It can also be checked directly. For this it is helpful to note that
$$S(m) = \sum_{i} m_i \ln \frac{\|m\|}{m_i} , \qquad (6)$$
with the convention that terms with $m_i = 0$ vanish.

It can then be shown that equation (5) holds (with $m$, $m'$ in place of $p$, $q$) for every morphism $f \colon m \to m'$ in $\mathrm{FinMeas}$. The additivity and homogeneity axioms follow easily.
To prove the hard part of our main theorem, we use a characterization of entropy given by Faddeev (F) and nicely summarized at the beginning of a paper by Rényi (R). In order to state this result, it is convenient to write a probability measure on the set $\{1, \ldots, n\}$ as an $n$-tuple $p = (p_1, \ldots, p_n)$. With only mild cosmetic changes, Faddeev’s original result states:
(Faddeev) Suppose $I$ is a map sending any probability measure on any finite set to a nonnegative real number. Suppose that:

1. $I$ is invariant under bijections.
2. $I$ is continuous.
3. For any probability measure $(p_1, \ldots, p_n)$ on a set of the form $\{1, \ldots, n\}$, and any number $0 \le t \le 1$,
$$I\bigl(t p_1, (1 - t) p_1, p_2, \ldots, p_n\bigr) = I(p_1, \ldots, p_n) + p_1\, I(t, 1 - t). \qquad (7)$$

Then $I$ is a constant nonnegative multiple of Shannon entropy.
In item 1 we are using the fact that given a bijection $f \colon X \to Y$ between finite sets and a probability measure $p$ on $X$, there is a unique probability measure on $Y$ such that $f$ is measure-preserving; we demand that $I$ takes on the same value on both these probability measures. In item 2, we use the standard topology on the simplex
$$\Delta^{n-1} = \Bigl\{ (p_1, \ldots, p_n) \in \mathbb{R}^n : p_i \ge 0, \ \sum_{i=1}^n p_i = 1 \Bigr\}$$
to put a topology on the set of probability distributions on any $n$-element set.
The most interesting and mysterious condition in Faddeev’s theorem is item 3. This is a special case of a general law appearing in the work of Shannon (S) and Faddeev (F). Namely, suppose $p = (p_1, \ldots, p_n)$ is a probability measure on the set $\{1, \ldots, n\}$. Suppose also that for each $i$, we have a probability measure $q^i = (q^i_1, \ldots, q^i_{k_i})$ on a finite set $\{1, \ldots, k_i\}$. Then
$$p \circ (q^1, \ldots, q^n) := \bigl(p_1 q^1_1, \ldots, p_1 q^1_{k_1}, \ \ldots, \ p_n q^n_1, \ldots, p_n q^n_{k_n}\bigr)$$
is again a probability measure space, and the Shannon entropy of this space is given by:
$$S\bigl(p \circ (q^1, \ldots, q^n)\bigr) = S(p) + \sum_{i=1}^n p_i\, S(q^i).$$
To see this, write $p_i q^i_j$ for the value of $p \circ (q^1, \ldots, q^n)$ at the $j$th point of the $i$th block of the disjoint union:
$$S\bigl(p \circ (q^1, \ldots, q^n)\bigr) = -\sum_{i=1}^n \sum_{j=1}^{k_i} p_i q^i_j \ln\bigl(p_i q^i_j\bigr) = -\sum_{i=1}^n p_i \ln p_i \sum_{j=1}^{k_i} q^i_j - \sum_{i=1}^n p_i \sum_{j=1}^{k_i} q^i_j \ln q^i_j = S(p) + \sum_{i=1}^n p_i\, S(q^i),$$
using $\sum_j q^i_j = 1$.
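As a quick numerical check of this law, here is a small Python sketch (the helper names are our own):

```python
from math import log, isclose

def entropy(p):
    """Shannon entropy of a probability measure given as a list of weights."""
    return -sum(x * log(x) for x in p if x > 0)

def compose(p, qs):
    """The probability measure with weights p_i * q^i_j, laid out on the
    disjoint union of the supports of the q^i."""
    return [pi * qj for pi, qi in zip(p, qs) for qj in qi]

p = [0.5, 0.3, 0.2]
qs = [[0.5, 0.5], [1.0], [0.1, 0.2, 0.7]]

lhs = entropy(compose(p, qs))
rhs = entropy(p) + sum(pi * entropy(qi) for pi, qi in zip(p, qs))
print(isclose(lhs, rhs))  # True: S(p∘(q^1,...,q^n)) = S(p) + sum_i p_i S(q^i)
```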
Moreover, this formula is almost equivalent to condition 3 in Faddeev’s theorem, allowing us to reformulate Faddeev’s theorem as follows:
Suppose $I$ is a map sending any probability measure on any finite set to a nonnegative real number. Suppose that:

1. $I$ is invariant under bijections.
2. $I$ is continuous.
3. $I(1) = 0$, where $1$ is our name for the unique probability measure on the set $\{1\}$.
4. For any probability measure $p$ on the set $\{1, \ldots, n\}$ and probability measures $q^1, \ldots, q^n$ on finite sets, we have
$$I\bigl(p \circ (q^1, \ldots, q^n)\bigr) = I(p) + \sum_{i=1}^n p_i\, I(q^i).$$

Then $I$ is a constant nonnegative multiple of Shannon entropy. Conversely, any constant nonnegative multiple of Shannon entropy satisfies 1-4.
We just need to check that conditions 3 and 4 imply Faddeev’s equation (7). Take $p = (p_1, \ldots, p_n)$, $q^1 = (t, 1 - t)$, and $q^i = 1$ for $i \ge 2$: then condition 4 gives
$$I\bigl(t p_1, (1 - t) p_1, p_2, \ldots, p_n\bigr) = I(p_1, \ldots, p_n) + p_1\, I(t, 1 - t) + \sum_{i=2}^n p_i\, I(1),$$
which by condition 3 gives Faddeev’s equation.
It may seem miraculous how the formula
$$S(p) = -\sum_i p_i \ln p_i$$
emerges from the assumptions in either Faddeev’s original theorem or the equivalent reformulation above. We can demystify this by describing a key step in Faddeev’s argument, as simplified by Rényi (R). Suppose $I$ is a function satisfying the assumptions of Faddeev’s result. Let
$$\phi(n) = I\Bigl(\frac{1}{n}, \ldots, \frac{1}{n}\Bigr)$$
be the function $I$ applied to the uniform probability measure on an $n$-element set. Since we can write a set with $m n$ elements as a disjoint union of $m$ different $n$-element sets, assumption 4 of the reformulated theorem implies that
$$\phi(m n) = \phi(m) + \phi(n).$$

Rényi shows that the only solutions of this equation obeying
$$\lim_{n \to \infty} \bigl(\phi(n + 1) - \phi(n)\bigr) = 0$$
are
$$\phi(n) = c \ln n$$
for constants $c$. This is how the logarithm function enters. Using condition 3 of Faddeev’s theorem, or equivalently conditions 3 and 4 of the reformulated version, the value of $I$ can be deduced for all probability measures $p$ such that each $p_i$ is rational. The result for arbitrary probability measures follows by continuity.
Now we complete the proof of our main theorem. Assume that $F$ obeys conditions 1-3 in the statement of that theorem.
Recall that $1$ denotes the set $\{1\}$ equipped with its unique probability measure. For each object $p$ of $\mathrm{FinProb}$, there is a unique morphism
$$!_p \colon p \to 1 .$$

We can think of this as the map that crushes $p$ down to a point and loses all the information that $p$ had. So, we define the ‘entropy’ of the measure $p$ by
$$I(p) = F(!_p).$$
Given any morphism $f \colon p \to q$ in $\mathrm{FinProb}$, we have
$$!_q \circ f = !_p .$$

So, by our assumption that $F$ is functorial,
$$F(!_q) + F(f) = F(!_p),$$
or in other words:
$$F(f) = I(p) - I(q). \qquad (8)$$
To conclude the proof, it suffices to show that $I$ is a multiple of Shannon entropy.
We do this by using the reformulated Faddeev theorem above. Functoriality implies that when a morphism $f$ is invertible, $F(f) = 0$: applying functoriality to $\mathrm{id} = \mathrm{id} \circ \mathrm{id}$ gives $F(\mathrm{id}) = 0$, so $F(f) + F(f^{-1}) = 0$, and since $F$ is nonnegative, $F(f) = 0$. Together with (8), this gives condition 1 of that theorem. Since $!_1$ is invertible, it also gives condition 3. Condition 2 is immediate. The real work is checking condition 4.
Take a probability measure $p$ on $\{1, \ldots, n\}$, and probability measures $q^1, \ldots, q^n$ on finite sets $X_1, \ldots, X_n$ respectively. Then we obtain a probability measure
$$p \circ (q^1, \ldots, q^n)$$
on the disjoint union of $X_1, \ldots, X_n$. Now, we can decompose $p \circ (q^1, \ldots, q^n)$ as a direct sum:
$$p \circ (q^1, \ldots, q^n) = p_1 q^1 \oplus \cdots \oplus p_n q^n . \qquad (9)$$
Define a morphism
$$f = p_1 {!_{q^1}} \oplus \cdots \oplus p_n {!_{q^n}} \colon\ p \circ (q^1, \ldots, q^n) \to p .$$

Then by convex linearity and the definition of $I$,
$$F(f) = \sum_{i=1}^n p_i\, F(!_{q^i}) = \sum_{i=1}^n p_i\, I(q^i).$$
But also
$$F(f) = I\bigl(p \circ (q^1, \ldots, q^n)\bigr) - I(p)$$
by (8) and (9). Comparing these two expressions for $F(f)$ gives condition 4 of the reformulated Faddeev theorem, completing the proof of our main theorem.
Since Shannon defined his entropy in 1948, it has been generalized in many ways. Our main theorem can easily be extended to characterize one family of generalizations, the so-called ‘Tsallis entropies’. For any positive real number $\alpha$, the Tsallis entropy of order $\alpha$ of a probability measure $p$ on a finite set is defined by:
$$S_\alpha(p) = \begin{cases} \dfrac{1}{\alpha - 1}\Bigl(1 - \sum_i p_i^\alpha\Bigr) & \text{if } \alpha \ne 1, \\[1ex] -\sum_i p_i \ln p_i & \text{if } \alpha = 1. \end{cases}$$

The peculiarly different definition when $\alpha = 1$ is explained by the fact that the limit $\lim_{\alpha \to 1} S_\alpha(p)$ exists and equals the Shannon entropy $S(p)$.
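A short Python sketch (with names of our own choosing) illustrates this limit numerically:

```python
from math import log

def tsallis_entropy(p, alpha):
    """Tsallis entropy of order alpha; at alpha = 1 it is the Shannon entropy."""
    if alpha == 1:
        return -sum(x * log(x) for x in p if x > 0)
    return (1.0 - sum(x ** alpha for x in p)) / (alpha - 1.0)

p = [0.1, 0.2, 0.3, 0.4]
for alpha in (0.9, 0.99, 0.999, 1.0, 1.001, 1.01, 1.1):
    print(alpha, tsallis_entropy(p, alpha))
# The values for alpha near 1 approach the Shannon entropy printed at alpha = 1.
```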
Although these entropies are most often named after Tsallis (T), they and related quantities had been studied by others long before the 1988 paper in which Tsallis first discussed them. For example, Havrda and Charvát (HC) had already introduced a similar formula, adapted to base 2 logarithms, in a 1967 paper on information theory, and in 1982, Patil and Taillie (PT) had used $S_\alpha$ itself as a measure of biological diversity.
The characterization of Tsallis entropy is exactly the same as that of Shannon entropy except in one respect: in the convex linearity condition, the degree of homogeneity changes from $1$ to $\alpha$.
Let $0 < \alpha < \infty$. Suppose $F$ is any map sending morphisms in $\mathrm{FinProb}$ to numbers in $[0, \infty)$ and obeying these three axioms:

1. Functoriality: $F(f \circ g) = F(f) + F(g)$ whenever $f$, $g$ are composable morphisms.
2. Convex linearity of degree $\alpha$: $F(\lambda f \oplus (1 - \lambda) g) = \lambda^\alpha F(f) + (1 - \lambda)^\alpha F(g)$ for all morphisms $f$, $g$ and all $\lambda \in [0, 1]$.
3. Continuity: $F$ is continuous.

Then there exists a constant $c \ge 0$ such that for any morphism $f \colon (X, p) \to (Y, q)$ in $\mathrm{FinProb}$,
$$F(f) = c\,\bigl(S_\alpha(p) - S_\alpha(q)\bigr),$$
where $S_\alpha$ is the Tsallis entropy of order $\alpha$. Conversely, for any constant $c \ge 0$, this formula determines a map $F$ obeying conditions 1-3.
The proof is exactly the same as that of the theorem for Shannon entropy, except that instead of using Faddeev’s theorem, we use Theorem V.2 of Furuichi (Fu). Furuichi’s theorem is itself the same as Faddeev’s, except that condition 3 of Faddeev’s theorem is replaced by
$$I\bigl(t p_1, (1 - t) p_1, p_2, \ldots, p_n\bigr) = I(p_1, \ldots, p_n) + p_1^\alpha\, I(t, 1 - t),$$
and Shannon entropy is replaced by Tsallis entropy of order $\alpha$.
As in the case of Shannon entropy, this result can be extended to arbitrary measures on finite sets. For this we need to define the Tsallis entropies of an arbitrary measure $m$ on a finite set. We do so by requiring that
$$S_\alpha(\lambda m) = \lambda^\alpha\, S_\alpha(m)$$
for all $m$ and all $\lambda \in [0, \infty)$, together with the values already assigned to probability measures. When $\alpha = 1$ this is the same as the Shannon entropy, and when $\alpha \ne 1$, it can be rewritten explicitly as
$$S_\alpha(m) = \frac{1}{\alpha - 1}\Bigl(\|m\|^\alpha - \sum_i m_i^\alpha\Bigr)$$
(which is analogous to (6)). The following result is the same as the theorem for Shannon entropy on $\mathrm{FinMeas}$ except that, again, the degree of homogeneity changes from $1$ to $\alpha$.
Let $0 < \alpha < \infty$. Suppose $F$ is any map sending morphisms in $\mathrm{FinMeas}$ to numbers in $[0, \infty)$, and obeying these four properties:

Functoriality:
$$F(f \circ g) = F(f) + F(g)$$
whenever $f$, $g$ are composable morphisms.

Additivity:
$$F(f \oplus g) = F(f) + F(g)$$
for all morphisms $f$, $g$.

Homogeneity of degree $\alpha$:
$$F(\lambda f) = \lambda^\alpha F(f)$$
for all morphisms $f$ and all $\lambda \in [0, \infty)$.

Continuity: $F$ is continuous.

Then there exists a constant $c \ge 0$ such that for any morphism $f \colon (X, m) \to (Y, m')$ in $\mathrm{FinMeas}$,
$$F(f) = c\,\bigl(S_\alpha(m) - S_\alpha(m')\bigr),$$
where $S_\alpha$ is the Tsallis entropy of order $\alpha$. Conversely, for any constant $c \ge 0$, this formula determines a map $F$ obeying conditions 1-4.
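As a final numerical sanity check, here is a small Python sketch (illustrative names, not from the paper) of the degree-$\alpha$ homogeneity $S_\alpha(\lambda m) = \lambda^\alpha S_\alpha(m)$ used above to define the Tsallis entropy of a general measure:

```python
from math import isclose

def tsallis_entropy_measure(m, alpha):
    """S_alpha(m) = (||m||^alpha - sum_i m_i^alpha) / (alpha - 1) for alpha != 1."""
    mass = sum(m)
    return (mass ** alpha - sum(x ** alpha for x in m)) / (alpha - 1.0)

m = [0.3, 0.7, 1.5]
alpha, lam = 2.0, 3.0
lhs = tsallis_entropy_measure([lam * x for x in m], alpha)
rhs = lam ** alpha * tsallis_entropy_measure(m, alpha)
print(isclose(lhs, rhs))  # True: S_alpha(lam * m) = lam^alpha * S_alpha(m)
```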
We thank the denizens of the $n$-Category Café, especially David Corfield, Steve Lack, Mark Meckes and Josh Shadlen, for encouragement and helpful suggestions. Leinster is supported by an EPSRC Advanced Research Fellowship.