John Baez
A characterization of entropy in terms of information loss

Idea

This is a draft of a paper by John Baez, Tobias Fritz and Tom Leinster. The finished version can be found here:

  • John Baez, Tobias Fritz and Tom Leinster, A characterization of entropy in terms of information loss, on the arXiv or free online at Entropy 13 (2011), 1945-1957.

The paper was developed in a series of blog conversations, in this order:

For a lot of material that never got incorporated into the paper, see

and for another paper that never got finished, see:

Abstract

There are numerous characterizations of Shannon entropy and Tsallis entropy as measures of information obeying certain properties. Using work by Faddeev and Furuichi, we derive a very simple characterization. Instead of focusing on the entropy of a probability measure on a finite set, this characterization focuses on the ‘information loss’, or change in entropy, associated with a measure-preserving function. We show that Shannon entropy gives the only concept of information loss that is functorial, convex-linear and continuous. This characterization naturally generalizes to Tsallis entropy as well.

Introduction

The Shannon entropy (S) of a probability measure p on a finite set X is given by:

H(p) = - \sum_{i \in X} p_i \, \ln(p_i).

There are many theorems that seek to characterize Shannon entropy starting from plausible assumptions; see for example the book by Aczél and Daróczy (AD). Here we give a new and very simple characterization theorem. The main novelty is that we do not focus directly on the entropy of a single probability measure, but rather, on the change in entropy associated with a measure-preserving function. The entropy of a single probability measure can be recovered as the change in entropy of the unique measure-preserving function to the one-point space.
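As a quick numerical illustration (ours, not the paper's), here is a minimal Python sketch of this formula; the function name shannon_entropy is our own choice.

    import math

    def shannon_entropy(p):
        # Shannon entropy (in nats) of a probability measure given as a list of weights
        return -sum(x * math.log(x) for x in p if x > 0)

    print(shannon_entropy([0.5, 0.5]))   # ln 2, about 0.693
    print(shannon_entropy([1.0, 0.0]))   # 0.0, using the convention 0 ln 0 = 0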

A measure-preserving function can map several points to the same point, but not vice versa, so this change in entropy is always a decrease. Since the second law of thermodynamics says that entropy always increases, this may seem counterintuitive. It may seem less so if we think of the function as some kind of data processing which does not introduce any additional randomness. Then the entropy can only decrease, and we can talk about the ‘information loss’ associated to the function.

Some examples may help to clarify this point. Consider the only possible map f: {a,b} → {c}. Suppose p is the probability measure on {a,b} such that each point has measure 1/2, while q is the unique probability measure on the set {c}. Then H(p) = ln 2, while H(q) = 0. The information loss associated to the map f is defined to be H(p) − H(q), which in this case equals ln 2. In other words, the measure-preserving map f loses one bit of information.

On the other hand, suppose p is the probability measure on {a,b} such that a has measure 1 and b has measure 0. Then H(p) = 0, so with respect to this probability measure the map f has information loss H(p) − H(q) = 0. It may seem odd to say that f loses no information: after all, it maps a and b to the same point. However, because the point b has probability zero with respect to p, knowing that f(x) = c lets us conclude that x = a with probability one.
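Here is a minimal sketch of these two examples in Python (an illustration we have added, not part of the paper):

    import math

    def H(p):
        # Shannon entropy of a probability measure given as a dict of weights
        return -sum(w * math.log(w) for w in p.values() if w > 0)

    q = {'c': 1.0}   # the pushforward of either measure along f
    print(H({'a': 0.5, 'b': 0.5}) - H(q))   # ln 2: one bit of information lost
    print(H({'a': 1.0, 'b': 0.0}) - H(q))   # 0: no information lost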

The shift in emphasis from probability measures to measure-preserving functions suggests that it will be useful to adopt the language of category theory (M), where one has objects and morphisms between them. We will do this, although almost no category theory is required to read this paper.

Shannon entropy has a very simple characterization in terms of information loss. To state it, we consider a category where a morphism f: p → q is a measure-preserving function between finite sets equipped with probability measures. We assume F is a function that assigns to any such morphism a number F(f) ∈ [0, ∞), which we call its information loss. We also assume that F obeys three axioms. If we call a morphism a ‘process’, we can state these roughly in words as follows:

  1. Functoriality. Given a process consisting of two stages, the amount of information lost in the whole process is the sum of the amounts lost at each stage:

    F(f \circ g) = F(f) + F(g).
  2. Convex linearity. If we flip a probability-λ coin to decide whether to do one process or another, the information lost is λ times the information lost by the first process plus (1 − λ) times the information lost by the second:

    F(\lambda f \oplus (1 - \lambda) g) = \lambda F(f) + (1 - \lambda) F(g).
  3. Continuity. If we change a process slightly, the information lost changes only slightly: F(f) is a continuous function of f.

(For full details see Section 2.) Given these assumptions, we conclude that there exists a constant c ≥ 0 such that for any f: p → q, we have

F(f) = c(H(p) - H(q)).

The charm of this result is that none of the hypotheses hint at any special role for the function p \ln p, but it emerges in the conclusion. The key here is a result of Faddeev (F) described in Section 3.

For many scientific purposes, probability measures are not enough. Our result extends to general measures on finite sets, as follows. Any measure on a finite set can be expressed as λp for some scalar λ and probability measure p, and we define H(λp) = λH(p). In this more general setting, we are no longer confined to taking convex linear combinations of measures. Accordingly, the convex linearity condition in our main theorem is replaced by two conditions: additivity (F(f ⊕ g) = F(f) + F(g)) and homogeneity (F(λf) = λF(f)). As before, the conclusion is that, up to a multiplicative constant, F assigns to each morphism f: p → q the information loss H(p) − H(q).

It is natural to wonder what happens when we replace the homogeneity axiom F(λf)=λF(f) by a more general homogeneity condition:

F(\lambda f) = \lambda^\alpha \, F(f)

for some number α > 0. In this case we find that F(f) is proportional to H_α(p) − H_α(q), where H_α is the so-called Tsallis entropy of order α.

The main result

We work with finite sets equipped with probability measures. All measures on a finite set X will be assumed nonnegative and defined on the σ-algebra of all subsets of X.

Definition

Let FinProb be the category whose objects are finite sets equipped with probability measures and whose morphisms are measure-preserving functions.

Since any measure on a finite set is determined by its values on singletons, we will think of an object of FinProb as a pair (X, p) consisting of a finite set X together with an X-tuple of numbers p_i ∈ [0,1] satisfying ∑_{i ∈ X} p_i = 1. A morphism f: (X,p) → (Y,q) in FinProb is a function f: X → Y such that

q_j = \sum_{i \in f^{-1}(j)} p_i.

We will usually write an object (X, p) as p for short, and write a morphism f: (X,p) → (Y,q) as simply f: p → q.
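The condition above just says that q is the pushforward of p along f. As an added illustration (not in the paper), a morphism of FinProb can be checked numerically; the helper pushforward below is our own.

    from collections import defaultdict

    def pushforward(p, f):
        # q_j = sum of p_i over all i in the preimage f^{-1}(j)
        q = defaultdict(float)
        for i, w in p.items():
            q[f[i]] += w
        return dict(q)

    p = {1: 0.1, 2: 0.4, 3: 0.5}
    f = {1: 'a', 2: 'a', 3: 'b'}
    print(pushforward(p, f))   # {'a': 0.5, 'b': 0.5}, so f is a morphism p -> (0.5, 0.5)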

There is a way to take ‘convex linear combinations’ of objects and morphisms in FinProb. Let (X, p) and (Y, q) be finite sets equipped with probability measures, and let λ ∈ [0,1]. Then there is a probability measure

\lambda p \oplus (1 - \lambda) q

on the disjoint union of the sets X and Y, whose value at a point k is given by

(\lambda p \oplus (1 - \lambda) q)_k = \begin{cases} \lambda p_k & \text{if } k \in X \\ (1 - \lambda) q_k & \text{if } k \in Y. \end{cases}

Given morphisms f: p → p′ and g: q → q′, there is a unique morphism

\lambda f \oplus (1 - \lambda) g \colon \lambda p \oplus (1 - \lambda) q \to \lambda p' \oplus (1 - \lambda) q'

that restricts to f on the measure space p and g on the measure space q.

The same notation can be extended, in the obvious way, to convex combinations of more than two objects or morphisms. For example, given objects p(1), …, p(n) of FinProb and nonnegative scalars λ_1, …, λ_n summing to 1, there is a new object ⊕_{i=1}^n λ_i p(i).
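As an added illustration (ours), the convex combination of two objects can be built concretely by tagging points so that the underlying sets stay disjoint:

    def convex_combination(lam, p, q):
        # lambda p (+) (1 - lambda) q on the disjoint union of the underlying sets;
        # points are tagged with 0 or 1 to keep the two sets disjoint
        out = {(0, i): lam * w for i, w in p.items()}
        out.update({(1, j): (1.0 - lam) * w for j, w in q.items()})
        return out

    print(convex_combination(0.25, {'a': 1.0}, {'b': 0.5, 'c': 0.5}))
    # {(0, 'a'): 0.25, (1, 'b'): 0.375, (1, 'c'): 0.375}, again of total mass 1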

Recall that the Shannon entropy of a probability measure p on a finite set X is

H(p) = -\sum_{i \in X} p_i \, \ln(p_i) \in [0, \infty),

with the convention that 0 ln(0) = 0.

Theorem

Suppose F is any map sending morphisms in FinProb to numbers in [0, ∞) and obeying these three axioms:

  1. Functoriality:

    (1)  F(f \circ g) = F(f) + F(g)

    whenever f,g are composable morphisms.

  2. Convex linearity:

    (2)  F(\lambda f \oplus (1 - \lambda) g) = \lambda F(f) + (1 - \lambda) F(g)

    for all morphisms f, g and scalars λ ∈ [0,1].

  3. Continuity: F is continuous.

Then there exists a constant c ≥ 0 such that for any morphism f: p → q in FinProb,

F(f) = c(H(p) - H(q))

where H(p) is the Shannon entropy of p. Conversely, for any constant c ≥ 0, this formula determines a map F obeying conditions 1-3.

We need to explain condition 3. A sequence of morphisms f_n: (X_n, p(n)) → (Y_n, q(n)) in FinProb converges to a morphism f: (X, p) → (Y, q) if

  • for all sufficiently large n, we have X_n = X, Y_n = Y, and f_n(i) = f(i) for all i ∈ X;
  • p(n) → p and q(n) → q pointwise.

We define F to be continuous if F(f_n) → F(f) whenever f_n is a sequence of morphisms converging to a morphism f.

The proof of Theorem 2 is given in a later section. First we show how to deduce a characterization of Shannon entropy for general measures on finite sets.

Definition

Let FinMeas be the category whose objects are finite sets equipped with measures and whose morphisms are measure-preserving functions.

There is more room for maneuver in FinMeas than in FinProb: we can take arbitrary nonnegative linear combinations of objects and morphisms, not just convex combinations. Any nonnegative linear combination can be built up from direct sums and multiplication by nonnegative scalars, defined as follows.

  • For ‘direct sums’, first note that the disjoint union of two finite sets equipped with measures is another thing of the same sort. We write the disjoint union of p, q ∈ FinMeas as p ⊕ q. Then, given morphisms f: p → p′ and g: q → q′, there is a unique morphism f ⊕ g: p ⊕ q → p′ ⊕ q′ that restricts to f on the measure space p and g on the measure space q.

  • For ‘scalar multiplication’, first note that we can multiply a measure by a nonnegative real number and get a new measure. So, given an object p ∈ FinMeas and a number λ ≥ 0 we obtain an object λp ∈ FinMeas with the same underlying set and with (λp)_i = λ p_i. Then, given a morphism f: p → q, there is a unique morphism λf: λp → λq that has the same underlying function as f.

This is consistent with our earlier notation for convex linear combinations.

We wish to give some conditions guaranteeing that a map sending morphisms in FinMeas to nonnegative real numbers comes from a multiple of Shannon entropy. To do this we need to define the Shannon entropy of a finite set X equipped with a measure p, not necessarily a probability measure. Define the total mass of (X,p) to be

\|p\| = \sum_{i \in X} p_i.

If this is nonzero, then p is of the form ‖p‖ p̄ for a unique probability measure p̄. In this case we define the Shannon entropy of p to be ‖p‖ H(p̄). If the total mass of p is zero, we define its Shannon entropy to be zero.
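In code (an added sketch of ours, with our own naming), this definition reads:

    import math

    def entropy_of_measure(p):
        # Shannon entropy of a not-necessarily-normalized measure on a finite set:
        # ||p|| times H of the normalized measure, and 0 when the total mass is zero
        m = sum(p)
        if m == 0:
            return 0.0
        return m * -sum((x / m) * math.log(x / m) for x in p if x > 0)

    print(entropy_of_measure([1.0, 1.0]))   # 2 ln 2: twice the entropy of (1/2, 1/2)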

We can define continuity for a map sending morphisms in FinMeas to numbers in [0, ∞) just as we did for FinProb, and show:

Corollary

Suppose F is any map sending morphisms in FinMeas to numbers in [0, ∞) and obeying these four axioms:

  1. Functoriality:

    F(f \circ g) = F(f) + F(g)

    whenever f,g are composable morphisms.

  2. Additivity:

    (3)  F(f \oplus g) = F(f) + F(g)

    for all morphisms f,g.

  3. Homogeneity:

    (4)  F(\lambda f) = \lambda F(f)

    for all morphisms f and all λ ∈ [0, ∞).

  4. Continuity: F is continuous.

Then there exists a constant c ≥ 0 such that for any morphism f: p → q in FinMeas,

F(f) = c(H(p) - H(q))

where H(p) is the Shannon entropy of p. Conversely, for any constant c ≥ 0, this formula determines a map F obeying conditions 1-4.

Proof

Take a map F obeying the axioms listed here. Then F restricts to a map on morphisms of FinProb obeying the axioms of Theorem 2. Hence there exists a constant c ≥ 0 such that F(f) = c(H(p) − H(q)) whenever f: p → q is a morphism between probability measures. Now take an arbitrary morphism f: p → q in FinMeas. Since f is measure-preserving, ‖p‖ = ‖q‖ = λ, say. If λ ≠ 0 then p = λp̄, q = λq̄ and f = λf̄ for some morphism f̄: p̄ → q̄ in FinProb; then by homogeneity,

F(f) = \lambda F(\bar{f}) = \lambda c (H(\bar{p}) - H(\bar{q})) = c(H(p) - H(q)).

If λ = 0 then f = 0f, so F(f) = 0 by homogeneity. So F(f) = c(H(p) − H(q)) in either case. The converse statement follows from the converse in Theorem 2.

Why Shannon entropy works

To solidify our intuitions, we first check that F(f) = c(H(p) − H(q)) really does determine a functor obeying all the conditions of Theorem 2. Since all these conditions are linear in F, it suffices to consider the case where c = 1. It is clear that F is continuous, and equation (1) is also immediate whenever g: m → p and f: p → q are morphisms in FinProb:

F(f \circ g) = H(m) - H(q) = H(p) - H(q) + H(m) - H(p) = F(f) + F(g).

The work is to prove equation (2).

We begin by establishing a useful formula for F(f) = H(p) − H(q), where as usual f is a morphism p → q in FinProb. Since f is measure-preserving, we have

q_j = \sum_{i \in f^{-1}(j)} p_i.

So

\begin{aligned}
\sum_j q_j \ln q_j &= \sum_j \sum_{i \in f^{-1}(j)} p_i \ln q_j \\
&= \sum_j \sum_{i \in f^{-1}(j)} p_i \ln q_{f(i)} \\
&= \sum_i p_i \ln q_{f(i)}
\end{aligned}

where in the last step we note that summing over all points i that map to j and then summing over all j is the same as summing over all i. So,

\begin{aligned}
F(f) &= -\sum_i p_i \ln p_i + \sum_j q_j \ln q_j \\
&= \sum_i \bigl( -p_i \ln p_i + p_i \ln q_{f(i)} \bigr)
\end{aligned}

and thus

(5)  F(f) = \sum_{i \in X} p_i \ln \frac{q_{f(i)}}{p_i}

where the quantity in the sum is defined to be zero when p_i = 0. If we think of p and q as the distributions of random variables x ∈ X and y ∈ Y with y = f(x), then F(f) is exactly the conditional entropy of x given y. So, what we are calling ‘information loss’ is a special case of conditional entropy.
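The following added sketch (ours, with hypothetical names) checks formula (5) against the definition F(f) = H(p) − H(q) on a small example:

    import math

    def H(p):
        return -sum(w * math.log(w) for w in p.values() if w > 0)

    def information_loss(p, f):
        # evaluate formula (5), and compare with H(p) - H(q), q the pushforward of p
        q = {}
        for i, w in p.items():
            q[f[i]] = q.get(f[i], 0.0) + w
        via_formula_5 = sum(w * math.log(q[f[i]] / w) for i, w in p.items() if w > 0)
        return via_formula_5, H(p) - H(q)

    p = {1: 0.1, 2: 0.4, 3: 0.2, 4: 0.3}
    f = {1: 'u', 2: 'u', 3: 'v', 4: 'v'}
    print(information_loss(p, f))   # the two numbers agree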

This formulation makes it easy to check equation (2):

F(\lambda f \oplus (1 - \lambda) g) = \lambda F(f) + (1 - \lambda) F(g).

In the proof of Corollary 4 (on FinMeas), the fact that F(f) = c(H(p) − H(q)) satisfies the four axioms was deduced from the analogous fact for FinProb. It can also be checked directly. For this it is helpful to note that

(6)  H(p) = \|p\| \, \ln \|p\| - \sum_i p_i \, \ln(p_i).

It can then be shown that equation (5) holds for every morphism f in FinMeas. The additivity and homogeneity axioms follow easily.

Faddeev’s theorem

To prove the hard part of Theorem 4, we use a characterization of entropy given by Faddeev (F) and nicely summarized at the beginning of a paper by Rényi (R). In order to state this result, it is convenient to write a probability measure on the set {1, …, n} as an n-tuple p = (p_1, …, p_n). With only mild cosmetic changes, Faddeev’s original result states:

Theorem

(Faddeev) Suppose I is a map sending any probability measure on any finite set to a nonnegative real number. Suppose that:

  1. I is invariant under bijections.

  2. I is continuous.

  3. For any probability measure p on a set of the form {1, …, n}, and any number 0 ≤ t ≤ 1,

    (7)  I((t p_1, (1-t) p_1, p_2, \dots, p_n)) = p_1 I((t, 1-t)) + I((p_1, \dots, p_n)).

Then I is a constant nonnegative multiple of Shannon entropy.

In item 1 we are using the fact that given a bijection f: X → X′ between finite sets and a probability measure on X, there is a unique probability measure on X′ such that f is measure-preserving; we demand that I take the same value on both of these probability measures. In item 2, we use the standard topology on the simplex

\Delta^{n-1} = \left\{ (p_1, \ldots, p_n) \in \mathbb{R}^n \;\middle|\; p_i \geq 0, \ \sum_i p_i = 1 \right\}

to put a topology on the set of probability distributions on any n-element set.

The most interesting and mysterious condition in Faddeev’s theorem is item 3. This is a special case of a general law appearing in the work of Shannon (S) and Faddeev (F). Namely, suppose p is a probability measure on the set {1, …, n}. Suppose also that for each i ∈ {1, …, n}, we have a probability measure q(i) on a finite set X_i. Then p_1 q(1) ⊕ ⋯ ⊕ p_n q(n) is again a probability measure space, and the Shannon entropy of this space is given by:

H\left(p_1 q(1) \oplus \cdots \oplus p_n q(n)\right) = H(p) + \sum_{i=1}^n p_i H(q(i)).

To see this, write q_{ij} for the value of q(i): X_i → [0, ∞) at the point j ∈ X_i:

\begin{aligned}
H(p_1 q(1) \oplus \cdots \oplus p_n q(n)) &= -\sum_{i=1}^n \sum_{j \in X_i} p_i q_{i j} \ln(p_i q_{i j}) \\
&= -\sum_{i=1}^n \sum_{j \in X_i} p_i \bigl( q_{i j} \ln(p_i) + q_{i j} \ln(q_{i j}) \bigr) \\
&= \sum_{i=1}^n p_i \bigl( -\ln(p_i) + H(q(i)) \bigr) \\
&= H(p) + \sum_{i=1}^n p_i H(q(i)).
\end{aligned}
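This identity is easy to test numerically; the following short check is an illustration we have added, not part of the proof:

    import math

    def H(p):
        return -sum(x * math.log(x) for x in p if x > 0)

    p = [0.2, 0.5, 0.3]
    q = [[0.5, 0.5], [0.1, 0.9], [1.0]]

    lhs = H([p[i] * w for i in range(len(p)) for w in q[i]])   # entropy of the glued measure
    rhs = H(p) + sum(p[i] * H(q[i]) for i in range(len(p)))    # H(p) + sum_i p_i H(q(i))
    print(abs(lhs - rhs) < 1e-12)   # True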

Moreover, this formula is almost equivalent to condition 3 in Faddeev’s theorem, allowing us to reformulate Faddeev’s theorem as follows:

Theorem

Suppose I is a map sending any probability measure on any finite set to a nonnegative real number. Suppose that:

  1. I is invariant under bijections.

  2. I is continuous.

  3. I((1))=0, where (1) is our name for the unique probability measure on the set {1}.

  4. For any probability measure p on the set {1, …, n} and probability measures q(1), …, q(n) on finite sets, we have

    I(p_1 q(1) \oplus \cdots \oplus p_n q(n)) = I(p) + \sum_{i=1}^n p_i I(q(i)).

Then I is a constant nonnegative multiple of Shannon entropy. Conversely, any constant nonnegative multiple of Shannon entropy satisfies 1-4.

Proof

We just need to check that conditions 3 and 4 imply Faddeev’s equation (7). Take p = (p_1, …, p_n), q(1) = (t, 1−t) and q(i) = (1) for i ≥ 2: then condition 4 gives

I((t p_1, (1 - t) p_1, p_2, \ldots, p_n)) = I((p_1, \ldots, p_n)) + p_1 I((t, 1 - t)) + \sum_{i = 2}^n p_i I((1))

which by condition 3 gives Faddeev’s equation.

It may seem miraculous how the formula

I(p_1, \dots, p_n) = -c \sum_i p_i \ln p_i

emerges from the assumptions in either Faddeev’s original Theorem 5 or the equivalent Theorem 6. We can demystify this by describing a key step in Faddeev’s argument, as simplified by Rényi (R). Suppose I is a function satisfying the assumptions of Faddeev’s result. Let

f(n) = I\left( \frac{1}{n}, \dots, \frac{1}{n} \right)

be the function I applied to the uniform probability measure on an n-element set. Since we can write a set with nm elements as a disjoint union of m different n-element sets, assumption 4 of Theorem 6 implies that

f(n m) = f(n) + f(m).

Rényi shows that the only solutions of this equation obeying

\lim_{n \to \infty} \bigl( f(n+1) - f(n) \bigr) = 0

are

f(n) = c \ln n.

This is how the logarithm function enters. Using condition 3 of Theorem 5, or equivalently conditions 3 and 4 of Theorem 6, the value of I can be deduced for all probability measures p such that each p_i is rational. The result for arbitrary probability measures follows by continuity.
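As a small added sanity check (ours, not Faddeev's or Rényi's), the function f built from Shannon entropy does satisfy both of these conditions:

    import math

    def f(n):
        # Shannon entropy of the uniform probability measure on n points (equals ln n)
        return -sum((1.0 / n) * math.log(1.0 / n) for _ in range(n))

    print(abs(f(6) - (f(2) + f(3))) < 1e-12)   # f(nm) = f(n) + f(m)
    print(abs(f(1000) - math.log(1000)))       # essentially zero
    print(f(1001) - f(1000))                   # small, consistent with the limit condition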

Proof of the main result

Now we complete the proof of Theorem 2. Assume that F obeys conditions 1-3 in the statement of this theorem.

Recall that (1) denotes the set {1} equipped with its unique probability measure. For each object p ∈ FinProb, there is a unique morphism

!_p \colon p \to (1).

We can think of this as the map that crushes p down to a point and loses all the information that p had. So, we define the ‘entropy’ of the measure p by

I(p) = F(!_p).

Given any morphism f: p → q in FinProb, we have

!_p = !_q \circ f.

So, by our assumption that F is functorial,

F(!_p) = F(!_q) + F(f),

or in other words:

(8)  F(f) = I(p) - I(q).

To conclude the proof, it suffices to show that I is a multiple of Shannon entropy.

We do this by using Theorem 6. Functoriality implies that F vanishes on identity morphisms (since F(1_p) = F(1_p ∘ 1_p) = 2F(1_p)), and hence that F(f) = 0 whenever f is invertible, because F(f) + F(f^{-1}) = F(1) = 0 and F takes nonnegative values. Together with (8), this gives condition 1 of Theorem 6. Since !_{(1)} is invertible, it also gives condition 3. Condition 2 is immediate. The real work is checking condition 4.

Take a probability measure p on {1, …, n}, and probability measures q(1), …, q(n) on finite sets X_1, …, X_n respectively. Then we obtain a probability measure

\bigoplus_i p_i q(i)

on the disjoint union of X_1, …, X_n. Now, we can decompose p as a direct sum:

(9)  p \cong \bigoplus_i p_i ((1)).

Define a morphism

f = \bigoplus_i p_i !_{q(i)} \colon \bigoplus_i p_i q(i) \to \bigoplus_i p_i ((1)).

Then by convex linearity and definition of I,

F(f) = \sum_i p_i F(!_{q(i)}) = \sum_i p_i I(q(i)).

But also

F(f) = I\Bigl( \bigoplus_i p_i q(i) \Bigr) - I\Bigl( \bigoplus_i p_i ((1)) \Bigr) = I\Bigl( \bigoplus_i p_i q(i) \Bigr) - I(p)

by (8) and (9). Comparing these two expressions for F(f) gives condition 4 of Theorem 6, completing the proof of Theorem 2.

A characterization of Tsallis entropy

Since Shannon defined his entropy in 1948, it has been generalized in many ways. Our Theorem 4 can easily be extended to characterize one family of generalizations, the so-called ‘Tsallis entropies’. For any positive real number α, the Tsallis entropy of order α of a probability measure p on a finite set X is defined by:

H_\alpha(p) = \begin{cases} \frac{1}{\alpha - 1} \Bigl( 1 - \sum_{i \in X} p_i^\alpha \Bigr) & \text{if } \alpha \neq 1 \\ -\sum_{i \in X} p_i \ln p_i & \text{if } \alpha = 1. \end{cases}

The peculiarly different definition when α = 1 is explained by the fact that the limit lim_{α → 1} H_α(p) exists and equals the Shannon entropy H(p).
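As an added numerical illustration (not from the paper), the limit can be observed directly; the function name tsallis is our own.

    import math

    def tsallis(p, alpha):
        # Tsallis entropy of order alpha of a probability measure p (a list of weights)
        if alpha == 1:
            return -sum(x * math.log(x) for x in p if x > 0)
        return (1.0 - sum(x ** alpha for x in p)) / (alpha - 1.0)

    p = [0.2, 0.3, 0.5]
    print(tsallis(p, 2.0))                     # order-2 Tsallis entropy
    print(tsallis(p, 1.0001), tsallis(p, 1))   # nearly equal, illustrating the alpha -> 1 limit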

Although these entropies are most often named after Tsallis (T), they and related quantities had been studied by others long before the 1988 paper in which Tsallis first discussed them. For example, Havrda and Charvát (HC) had already introduced a similar formula, adapted to base 2 logarithms, in a 1967 paper on information theory, and in 1982, Patil and Taillie (PT) had used H_α itself as a measure of biological diversity.

The characterization of Tsallis entropy is exactly the same as that of Shannon entropy except in one respect: in the convex linearity condition, the degree of homogeneity changes from 1 to α.

Theorem

Let α ∈ (0, ∞). Suppose F is any map sending morphisms in FinProb to numbers in [0, ∞) and obeying these three axioms:

  1. Functoriality:

    F(f \circ g) = F(f) + F(g)

    whenever f,g are composable morphisms.

  2. Compatibility with convex combinations:

    F(\lambda f \oplus (1-\lambda) g) = \lambda^{\alpha} F(f) + (1-\lambda)^{\alpha} F(g)

    for all morphisms f, g and all λ ∈ [0,1].

  3. Continuity: F is continuous.

Then there exists a constant c ≥ 0 such that for any morphism f: p → q in FinProb,

F(f) = c(H_{\alpha}(p) - H_{\alpha}(q))

where H_α(p) is the order-α Tsallis entropy of p. Conversely, for any constant c ≥ 0, this formula determines a map F obeying conditions 1-3.

Proof

The proof is exactly the same as that of Theorem 2, except that instead of using Faddeev’s theorem, we use Theorem V.2 of Furuichi (Fu). Furuichi’s theorem is itself the same as Faddeev’s, except that condition 3 of Theorem 5 is replaced by

I((t p_1, (1-t) p_1, p_2, \dots, p_n)) = p_1^\alpha I((t, 1-t)) + I((p_1, \dots, p_n))

and Shannon entropy is replaced by Tsallis entropy of order α.

As in the case of Shannon entropy, this result can be extended to arbitrary measures on finite sets. For this we need to define the Tsallis entropies of an arbitrary measure on a finite set. We do so by requiring that

H_\alpha(\lambda p) = \lambda^\alpha H_\alpha(p)

for all λ ∈ [0, ∞) and all p ∈ FinMeas. When α = 1 this is the same as the Shannon entropy, and when α ≠ 1, it can be rewritten explicitly as

H_\alpha(p) = \frac{1}{\alpha - 1} \Bigl( \Bigl( \sum_{i \in X} p_i \Bigr)^\alpha - \sum_{i \in X} p_i^\alpha \Bigr)

(which is analogous to (6)). The following result is the same as Theorem 4 except that, again, the degree of homogeneity changes from 1 to α.
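As an added sketch (ours), the explicit formula and the homogeneity property H_α(λp) = λ^α H_α(p) can be checked together:

    import math

    def tsallis_measure(p, alpha):
        # Tsallis entropy of order alpha of an arbitrary measure p on a finite set
        if alpha == 1:
            m = sum(p)
            return 0.0 if m == 0 else -sum(x * math.log(x / m) for x in p if x > 0)
        return (sum(p) ** alpha - sum(x ** alpha for x in p)) / (alpha - 1.0)

    p = [0.2, 0.3, 0.5]
    lam, alpha = 3.0, 2.0
    print(tsallis_measure([lam * x for x in p], alpha))   # equals ...
    print(lam ** alpha * tsallis_measure(p, alpha))       # ... lambda^alpha times the original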

Corollary

Let α ∈ (0, ∞). Suppose F is any map sending morphisms in FinMeas to numbers in [0, ∞), and obeying these four properties:

  1. Functoriality:

    F(f \circ g) = F(f) + F(g)

    whenever f,g are composable morphisms.

  2. Additivity:

    F(f \oplus g) = F(f) + F(g)

    for all morphisms f,g.

  3. Homogeneity of degree α:

    F(\lambda f) = \lambda^\alpha F(f)

    for all morphisms f and all λ ∈ [0, ∞).

  4. Continuity: F is continuous.

Then there exists a constant c ≥ 0 such that for any morphism f: p → q in FinMeas,

F(f) = c(H_\alpha(p) - H_\alpha(q))

where H_α is the Tsallis entropy of order α. Conversely, for any constant c ≥ 0, this formula determines a map F obeying conditions 1-4.

Proof

This follows from Theorem 7 in just the same way that Corollary 4 follows from Theorem 2.

Acknowledgements

We thank the denizens of the n-Category Café, especially David Corfield, Steve Lack, Mark Meckes and Josh Shadlen, for encouragement and helpful suggestions. Leinster is supported by an EPSRC Advanced Research Fellowship.

References

  • D. K. Faddeev, On the concept of entropy of a finite probabilistic scheme, Uspehi Mat. Nauk (N.S.) 11 (1956), no. 1(67), pp. 227–231. (In Russian.) German Translation: Zum Begriff der Entropie eines endlichen Wahrscheinlichkeitsschemas, Arbeiten zur Informationstheorie I, Berlin, Deutscher Verlag der Wissenschaften, 1957, pp. 85-90.
  • J. Havrda and F. Charvát, Quantification method of classification processes: concept of structural α-entropy, Kybernetika 3 (1967), 30–35.
  • S. Mac Lane, Categories for the Working Mathematician, Graduate Texts in Mathematics 5, Springer, 1971.
  • G. P. Patil and C. Taillie, Diversity as a concept and its measurement, Journal of the American Statistical Association 77 (1982), 548–561.
  • C. Tsallis, Possible generalization of Boltzmann–Gibbs statistics, Journal of Statistical Physics 52 (1988), 479–487.
