Entropy

Idea

Entropy is a measure of disorder, given by the amount of information necessary to precisely specify the state of a system.

Entropy is important in information theory and statistical physics.

Mathematical definitions

We can give a precise mathematical definition of entropy in probability theory.

Preliminary definitions

We will want a couple of preliminary definitions. Fix a probability space $(X,\mu)$; that is, $X$ is a set, and $\mu$ is a probability measure on $X$.

Surprisal

If $A$ is a measurable subset of $X$, then the surprisal or self-information of $A$ (with respect to $\mu$) is

$$\sigma_\mu(A) \coloneqq -\log \mu(A) \,.$$

Notice that, despite the minus sign in this formula, $\sigma$ is a nonnegative function (since $\log p \leq 0$ for $p \leq 1$); more precisely, $\sigma$ takes values in $[0,\infty]$. The term ‘surprisal’ is intended to suggest how surprised one ought to be upon learning that the event modelled by $A$ is true: from no surprise for an event with probability $1$ to infinite surprise for an event with probability $0$.

The expected surprisal of $A$ is then

(1) $$h_\mu(A) \coloneqq \mu(A)\, \sigma_\mu(A) = -\mu(A) \log \mu(A) = -\log\left(\mu(A)^{\mu(A)}\right)$$

(with $h_\mu(A) = 0$ when $\mu(A) = 0$). Like $\sigma$, $h$ is a nonnegative function; it is also important that $h_\mu$ is concave. Both $h_\mu(\emptyset)$ and $h_\mu(X)$ are $0$, but for different reasons: $h_\mu(A) = 0$ when $\mu(A) = 1$ because, upon observing an event with probability $1$, one gains no information, while $h_\mu(A) = 0$ when $\mu(A) = 0$ because one expects never to observe an event with probability $0$. The maximum possible value of $h$ is $\mathrm{e}^{-1} \log \mathrm{e}$ (so $\mathrm{e}^{-1}$ if we use natural logarithms), which occurs when $\mu(A) = \mathrm{e}^{-1}$.
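
To make these definitions concrete, here is a minimal numerical sketch (in Python, using natural logarithms, hence nats) of the surprisal and expected surprisal of a single event; the function names are ours, not standard terminology.

```python
import math

def surprisal(p):
    """sigma(A) = -log mu(A); infinite when mu(A) = 0."""
    return math.inf if p == 0 else -math.log(p)

def expected_surprisal(p):
    """h(A) = mu(A) sigma(A), with h(A) = 0 by convention when mu(A) = 0."""
    return 0.0 if p == 0 else -p * math.log(p)

print(surprisal(1.0))                    # 0.0: no surprise
print(surprisal(0.0))                    # inf: infinite surprise
print(expected_surprisal(1.0))           # 0.0: no information gained
print(expected_surprisal(0.0))           # 0.0: the event is never observed
print(expected_surprisal(math.exp(-1)))  # 1/e, the maximum value of h
```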

We have not specified the base of the logarithm, which amounts to a constant factor (proportional to the logarithm of the base), which we think of as specifying the unit of measurement of entropy. Common choices for the base are $2$ (whose unit is the bit, originally a unit of memory in computer science), $256$ (for the byte, which is $8$ bits), $3$ (for the trit), $\mathrm{e}$ (for the nat or neper), $10$ (for the bel, originally a unit of relative power intensity in telegraphy, or ban, dit, or hartley), and $\sqrt[10]{10}$ (for the decibel, $1/10$ of a bel). In applications to statistical physics, common bases are exactly (since 2019)

(2) $$\mathrm{e}^{10^{29}/1\,380\,649} \approx 10^{3.145\,582\,127\,704\,085\,743 \times 10^{22}} \qquad \text{(for the joule per kelvin)},$$

or

(3) $$\mathrm{e}^{104\,600\,000\,000\,000/207\,861\,565\,453\,831} \approx 1.654\,037\,938\,063\,167\,336 \qquad \text{(for the calorie per mole-kelvin)},$$

and so on; although $\mathrm{e}$ is common in theoretical work (and then the unit of measurement is said to be Boltzmann's constant rather than the nat or neper).
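
Since changing the base of the logarithm only rescales the entropy, converting between these units is a single multiplication. A small sketch (the function and the selection of units are illustrative, not canonical):

```python
import math

k_B = 1.380649e-23  # Boltzmann's constant in J/K, exact since 2019

def convert_from_nats(nats):
    """Express an entropy given in nats in a few of the units above."""
    return {
        "bits":              nats / math.log(2),
        "bans (hartleys)":   nats / math.log(10),
        "joules per kelvin": nats * k_B,
    }

print(convert_from_nats(1.0))  # one nat in bits, bans, and J/K
```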

Almost partitions

Recall that a partition of a set $X$ is a family $\mathcal{P}$ of subsets of $X$ (the parts of the partition) such that $X$ is the union of the parts and any two distinct parts are disjoint (or better, for constructive mathematics, two parts are equal if their intersection is inhabited).

When $X$ is a probability space, we may relax both conditions: for the union of $\mathcal{P}$, we require only that it be a full set; for the intersections of pairs of elements of $\mathcal{P}$, we require only that they be null sets (or better, for constructive mathematics, that $A = B$ when $\mu^*(A \cap B) \gt 0$, where $\mu^*$ is the outer measure corresponding to $\mu$).

For definiteness, call such a collection of subsets a $\mu$-almost partition; a $\mu$-almost partition is measurable if each of its parts is measurable (in which case we can use $\mu$ instead of $\mu^*$).

Entropy of a $\sigma$-algebra on a probability space

This is a general mathematical definition of entropy.

Given a probability measure space $(X,\mu)$ and a $\sigma$-algebra $\mathcal{M}$ of measurable sets in $X$, the entropy of $\mathcal{M}$ with respect to $\mu$ is

(4) $$H_\mu(\mathcal{M}) \coloneqq \sup \Big\{ \sum_{A \in \mathcal{F}} h_\mu(A) \;\Big|\; \mathcal{F} \subseteq \mathcal{M},\; {|\mathcal{F}|} \lt \aleph_0,\; X = \biguplus \mathcal{F} \Big\} \,.$$

In words, the entropy is the supremum, over all ways of expressing $X$ as an internal disjoint union of finitely many elements of the $\sigma$-algebra $\mathcal{M}$, of the sum, over these measurable sets, of the expected surprisals of these sets. This supremum can also be expressed as a limit as we take $\mathcal{F}$ to be finer and finer, since $h_\mu$ is concave and the partitions are directed.

We have written this so that $\mathcal{F}$ is a finite partition of $X$; without loss of generality, we may require only that $\mathcal{F}$ be a $\mu$-almost partition. In constructive mathematics, it seems that we must use this weakened condition, at least the part that allows $\bigcup \mathcal{F}$ to merely be full.
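
For a purely atomic example, the supremum in (4) is easy to see numerically: refining a finite measurable partition can only increase the sum of expected surprisals. A sketch (with an arbitrary four-point space chosen for illustration):

```python
import math

def h(p):
    """Expected surprisal of a part of measure p, in nats."""
    return 0.0 if p == 0 else -p * math.log(p)

def partition_entropy(partition, mu):
    """Sum of expected surprisals over the parts of a finite partition.
    `partition` is a list of disjoint sets of outcomes, `mu` a dict of point masses."""
    return sum(h(sum(mu[i] for i in part)) for part in partition)

mu = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}

coarse = [{"a", "b"}, {"c", "d"}]       # a coarse partition
fine   = [{"a"}, {"b"}, {"c"}, {"d"}]   # the finest partition

print(partition_entropy(coarse, mu))    # about 0.562 nats
print(partition_entropy(fine, mu))      # about 1.213 nats: the supremum here
```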

This definition is very general, and it is instructive to look at special cases.

Entropy of a probability space

Given a probability space $(X,\mu)$, the entropy of this probability space is the entropy, with respect to $\mu$, of the $\sigma$-algebra of all measurable subsets of $X$.

Entropy of a partition of a probability space

Every measurable almost-partition of a measure space (indeed, any family of measurable subsets) generates a $\sigma$-algebra. The entropy of a measurable almost-partition $\mathcal{P}$ of a probability measure space $(X,\mu)$ is the entropy, with respect to $\mu$, of the $\sigma$-algebra generated by $\mathcal{P}$. The formula (4) may then be written

(5) $$H_\mu(\mathcal{P}) = \sum_{A \in \mathcal{P}} h_\mu(A) = -\sum_{A \in \mathcal{P}} \log\left(\mu(A)^{\mu(A)}\right) \,,$$

since an infinite sum (of nonnegative terms) may also be defined as a supremum. (Actually, the supremum in the infinite sum does not quite match the supremum in (4), so there is a bit of a theorem to prove here.)

In most of the following special cases, we will consider only partitions, although it would be possible also to consider more general σ\sigma-algebras.

Entropy of (a partition of) a discrete probability space

Recall that a discrete probability space is a set $X$ equipped with a function $\mu\colon X \to \,]0,1]$ such that $\sum_{i \in X} \mu(i) = 1$; since $\mu(i) \gt 0$ is possible for only countably many $i$, $X$ must be countable. We make $X$ into a measure space (with every subset measurable) by defining $\mu(A) \coloneqq \sum_{i \in A} \mu(i)$. Since every inhabited set has positive measure, every almost-partition of $X$ is a partition; since every set is measurable, every partition is measurable.

Given a discrete probability space $(X,\mu)$ and a partition $\mathcal{P}$ of $X$, the entropy of $\mathcal{P}$ with respect to $\mu$ is defined to be the entropy of $\mathcal{P}$ with respect to the probability measure induced by $\mu$. Simplifying (5), we find

(6) $$H_\mu(\mathcal{P}) = -\sum_{A \in \mathcal{P}} \log\Big(\big(\sum_{i \in A} \mu(i)\big)^{\sum_{i \in A} \mu(i)}\Big) \,.$$

More specially, the entropy of the discrete probability space $(X,\mu)$ is the entropy of the partition of $X$ into singletons; we find

(7) $$H_\mu(X) = \sum_{i \in X} h_\mu(i) = -\sum_{i \in X} \log\left(\mu(i)^{\mu(i)}\right) \,.$$

This is actually a special case of the entropy of a probability space, since the $\sigma$-algebra generated by the singletons is the power set of $X$.
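
For a countably infinite discrete space, the sum in (7) is a genuine infinite sum. As an illustration (our choice of example, not taken from the text above), the geometric distribution $\mu(i) = 2^{-i}$ on $X = \{1, 2, 3, \ldots\}$ has entropy exactly $2$ bits:

```python
import math

def h2(p):
    """Expected surprisal in base 2 (bits)."""
    return 0.0 if p == 0 else -p * math.log2(p)

# mu(i) = 2^{-i}; each term is i * 2^{-i}, and the tail beyond i = 200 is negligible
entropy_bits = sum(h2(2.0 ** -i) for i in range(1, 201))
print(entropy_bits)   # 2.0, since the infinite sum of i * 2^{-i} is 2
```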

Yet more specially, the entropy of a finite set $X$ is the entropy of $X$ equipped with the uniform discrete probability measure; we find

(8) $$H_{unif}(X) = -\sum_{i \in X} \log\left(\left(\frac{1}{|X|}\right)^{\frac{1}{|X|}}\right) = \log {|X|} \,,$$

which is probably the best known mathematical formula for entropy, due to Max Planck, who attributed it to Ludwig Boltzmann. (Its physical interpretation appears below.)

Of all probability measures on $X$, the uniform measure has the maximum entropy.
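
A quick numerical check of (8) and of this maximality claim, for a hypothetical six-element set:

```python
import math

def H(probs):
    """Entropy (in nats) of a finite discrete distribution, as in (7)."""
    return sum(0.0 if p == 0 else -p * math.log(p) for p in probs)

n = 6
uniform = [1 / n] * n
loaded  = [0.5, 0.25, 0.1, 0.05, 0.05, 0.05]   # any non-uniform alternative

print(H(uniform), math.log(n))   # both about 1.792 nats, as formula (8) says
print(H(loaded))                 # about 1.373 nats, strictly smaller
```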

Entropy with respect to an absolutely continuous probability measure on the real line

Recall that a Borel measure $\mu$ on an interval $X$ in the real line is absolutely continuous if $\mu(A) = 0$ whenever $A$ is a null set (with respect to Lebesgue measure), or better such that the Lebesgue measure of $A$ is positive whenever $\mu(A) \gt 0$. In this case, we can take the Radon–Nikodym derivative of $\mu$ with respect to Lebesgue measure, to get an integrable function $f$, called the probability distribution function; we recover $\mu$ by

(9) $$\mu(A) = \int_A f(x)\, \mathrm{d}x \,,$$

where the integral is taken with respect to Lebesgue measure.

If $\mathcal{P}$ is a partition (or a Lebesgue-almost-partition) of an interval $X$ into Borel sets, then the entropy of $\mathcal{P}$ with respect to an integrable function $f$ is the entropy of $\mathcal{P}$ with respect to the measure induced by $f$ using the integral formula (9); we find

(10) $$H_f(\mathcal{P}) = -\sum_{A \in \mathcal{P}} \log\Big( \big(\int_A f(x)\, \mathrm{d}x\big)^{\int_A f(x)\, \mathrm{d}x} \Big) \,.$$

On the other hand, the entropy of the probability distribution space $(X,f)$ is the entropy of the entire $\sigma$-algebra of all Borel sets (which is not generated by a partition) with respect to $f$; we find

(11) $$H_f(X) = -\int_{x \in X} \log\left(f(x)^{f(x)}\right)\, \mathrm{d}x$$

by a fairly complicated argument.

I haven't actually managed to check this argument yet, although my memory tags it as a true fact. —Toby
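
At least numerically, formula (11) is easy to test against a known case; for instance (our example), the standard normal distribution has differential entropy $\tfrac{1}{2}\log(2\pi\mathrm{e}) \approx 1.4189$ nats:

```python
import numpy as np

# Riemann-sum approximation of -∫ f log f dx for the standard normal density
x = np.linspace(-10.0, 10.0, 200001)
dx = x[1] - x[0]
f = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)   # strictly positive on this grid

print(np.sum(-f * np.log(f)) * dx)           # ≈ 1.41894
print(0.5 * np.log(2 * np.pi * np.e))        # ≈ 1.41894, the closed form
```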

Entropy of a density matrix

In the analogy between classical physics and quantum physics, we move from probability distributions on a phase space to density operators on a Hilbert space.

Just as the entropy of a probability distribution $f$ is given by $-\int f \log f$, so the entropy of a density operator $\rho$ is

(12) $$H_\rho \coloneqq -\operatorname{Tr}(\rho \log \rho) \,,$$

using the functional calculus.

These are both special cases of the entropy of a state on a $C^*$-algebra.

There is a way to fit this into the framework given by (4), but I don't remember it (and never really understood it).
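
Concretely, the functional calculus reduces (12) to a sum over the spectrum of $\rho$, so a minimal numerical sketch only needs an eigenvalue decomposition (the $2 \times 2$ examples are ours):

```python
import numpy as np

def von_neumann_entropy(rho):
    """-Tr(rho log rho) in nats, computed from the eigenvalues of rho."""
    eigvals = np.linalg.eigvalsh(rho)
    eigvals = eigvals[eigvals > 1e-12]   # 0 log 0 = 0 by convention
    return float(-np.sum(eigvals * np.log(eigvals)))

pure  = np.array([[1.0, 0.0], [0.0, 0.0]])   # a pure state: zero entropy
mixed = np.array([[0.5, 0.0], [0.0, 0.5]])   # the maximally mixed qubit

print(von_neumann_entropy(pure))    # 0.0
print(von_neumann_entropy(mixed))   # log 2 ≈ 0.693 nats
```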

Relative entropy

For two finite probability distributions $p$ and $q$, their relative entropy is

(13) $$S(p/q) \coloneqq \sum_{k = 1}^n p_k(\log p_k - \log q_k) \,.$$

Or alternatively, for $\rho, \phi$ two density matrices, their relative entropy is

(14) $$S(\rho/\phi) \coloneqq \operatorname{tr}\, \rho\, (\log \rho - \log \phi) \,.$$

There is a generalization of these definitions to states on general von Neumann algebras, due to (Araki).

For more on this see relative entropy.
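
A small sketch of formula (13) for finite distributions (to which (14) reduces when the two density matrices are simultaneously diagonal); the example distributions are arbitrary:

```python
import numpy as np

def relative_entropy(p, q):
    """S(p/q) = sum_k p_k (log p_k - log q_k), in nats; infinite if some
    q_k = 0 while p_k > 0."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    support = p > 0
    if np.any(q[support] == 0):
        return np.inf
    return float(np.sum(p[support] * (np.log(p[support]) - np.log(q[support]))))

print(relative_entropy([0.5, 0.5], [0.5, 0.5]))   # 0.0: S(p/p) = 0
print(relative_entropy([0.9, 0.1], [0.5, 0.5]))   # ≈ 0.368, and always >= 0
```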

Physical entropy

As hinted above, any probability distribution on a phase space in classical physics has an entropy, and any density matrix on a Hilbert space in quantum physics has an entropy. However, these give the microscopic entropy, which is not the usual entropy in thermodynamics and most other branches of physics. (In particular, microscopic entropy is conserved, rather than increasing with time.)

Instead, physicists use coarse-grained entropy, which corresponds mathematically to taking the entropy of a $\sigma$-algebra much smaller than the $\sigma$-algebra of all measurable sets. Given a classical system with $N$ microscopic degrees of freedom, we identify $n$ macroscopic degrees of freedom that we can reasonably expect to measure, giving a map from $\mathbb{R}^N$ to $\mathbb{R}^n$ (or more generally, a map from an $N$-dimensional microscopic phase space to an $n$-dimensional macroscopic phase space). Then the $\sigma$-algebra of all measurable sets in $\mathbb{R}^n$ pulls back to a $\sigma$-algebra on $\mathbb{R}^N$, and the macroscopic entropy of a statistical state is the conditional entropy of this $\sigma$-algebra. (Typically, $N$ is on the order of Avogadro's number, while $n$ is rarely more than half a dozen, and often as small as $2$.)
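
The following toy computation (a finite hypothetical “phase space” rather than the $\mathbb{R}^N$ of the text, and the plain entropy of the pullback partition rather than the full conditional-entropy construction) illustrates how coarse-graining along a macroscopic observable loses information:

```python
import math
from collections import defaultdict

def h(p):
    return 0.0 if p == 0 else -p * math.log(p)

# hypothetical microstates with a uniform distribution, and a "macroscopic"
# observable that records only the sum of the two coordinates
microstates = [(i, j) for i in range(4) for j in range(4)]
mu = {s: 1 / len(microstates) for s in microstates}
macro = lambda s: s[0] + s[1]

# push the measure forward along the observable; the fibres of `macro`
# form the pullback partition of the set of microstates
fibres = defaultdict(float)
for s, p in mu.items():
    fibres[macro(s)] += p

print(sum(h(p) for p in mu.values()))      # fine-grained: log 16 ≈ 2.773 nats
print(sum(h(p) for p in fibres.values()))  # coarse-grained: ≈ 1.841 nats
```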

If we specify a state by a point in $\mathbb{R}^n$, a macroscopic pure state, and assume a uniform probability distribution on its fibre in $\mathbb{R}^N$, then this results in the maximum entropy. If this fibre were a finite set, then we would recover Boltzmann's formula (8). This is never exactly true in classical statistical physics, but it is often nevertheless a very good approximation. (Boltzmann's formula actually makes better physical sense in quantum statistical physics, even though Boltzmann himself did not live to see this.)

A more sophisticated approach (pioneered by Josiah Gibbs) is to consider all possible mixed microstates (that is, all possible probability distributions on the space $\mathbb{R}^N$ of pure microstates) whose expectation values of total energy and other extensive quantities (among those that are functions of the macrostate) match the given pure macrostate (point in $\mathbb{R}^n$). We pick the mixed microstate with the maximum entropy. If this is a thermal state, then we say that the macrostate has a temperature, but it has an entropy in any case.
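
In a finite toy setting, this maximum-entropy principle can be carried out explicitly: among all distributions on a hypothetical finite set of microstate energies with a prescribed expected energy, the maximizer is the Gibbs state $p_i \propto \mathrm{e}^{-\beta E_i}$, with $\beta$ (the inverse temperature) fixed by the constraint. A sketch, finding $\beta$ by bisection:

```python
import math

energies = [0.0, 1.0, 2.0, 3.0]   # hypothetical microstate energies
target_E = 1.2                    # prescribed expectation value of the energy

def gibbs(beta):
    weights = [math.exp(-beta * E) for E in energies]
    Z = sum(weights)
    return [w / Z for w in weights]

def mean_energy(beta):
    return sum(p * E for p, E in zip(gibbs(beta), energies))

# mean_energy is decreasing in beta, so bisect for the constrained value
lo, hi = -20.0, 20.0
for _ in range(100):
    mid = (lo + hi) / 2
    if mean_energy(mid) < target_E:
        hi = mid
    else:
        lo = mid

p = gibbs(lo)
print(sum(q * E for q, E in zip(p, energies)))   # ≈ 1.2, the constraint
print(-sum(q * math.log(q) for q in p))          # the maximal entropy, in nats
```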

Gravitational entropy

| order | $0$ | $\to 1$ | $2$ | $\to \infty$ |
|---|---|---|---|---|
| Rényi entropy | Hartley entropy | $\geq$ Shannon entropy | $\geq$ collision entropy | $\geq$ min-entropy |
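
A sketch of the entropies in this table for a finite distribution (the example distribution is arbitrary), showing that the Rényi entropy is non-increasing in its order:

```python
import math

def renyi(p, alpha):
    """Rényi entropy of order alpha (in nats) of a finite distribution."""
    p = [q for q in p if q > 0]
    if alpha == 0:
        return math.log(len(p))                   # Hartley entropy
    if alpha == 1:
        return -sum(q * math.log(q) for q in p)   # Shannon entropy (the limit)
    if alpha == math.inf:
        return -math.log(max(p))                  # min-entropy (the limit)
    return math.log(sum(q ** alpha for q in p)) / (1 - alpha)

p = [0.5, 0.25, 0.125, 0.125]
for a in (0, 1, 2, math.inf):
    print(a, renyi(p, a))   # log 4 >= 1.213... >= 1.068... >= log 2
```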

References

General

The concept of entropy was introduced by Rudolf Clausius in 1865 in the context of physics, and then adapted to information theory by Claude Shannon in 1948, to quantum mechanics by John von Neumann in 1955, to ergodic theory by Andrey Kolmogorov and Yakov Sinai in 1958, and to topological dynamics by Adler, Konheim and McAndrew in 1965.

A survey at the introductory level:

Survey with an eye towards black hole entropy:

In quantum probability theory

Discussion of entropy in quantum probability theory, hence for quantum states understood as positive linear functionals on the star-algebra of observables (operator algebraic states on star-algebras) and with their density matrices defined via the GNS construction:

Introduction and survey:

In quantum mechanics, the basic notion is the von Neumann entropy, defined in terms of the density matrix. For type III von Neumann algebras the density matrix is not well defined (physically, the problem is usually in ultraviolet divergences). Von Neumann entropy is generalized to arbitrary semifinite von Neumann algebras in

  • I. E. Segal, A note on the concept of entropy, J. Math. Mech. 9 (1960) 623–629

Relative entropy of states on von Neumann algebras was introduced in

  • Huzihiro Araki, Relative entropy of states of von Neumann algebras, Publ. RIMS Kyoto Univ. 11 (1976) 809–833 (pdf)

A note relating I. Segal’s notion to relative entropy is

A large collection of references on quantum entropy is in

  • Christopher Fuchs, References for research in quantum distinguishability and state disturbance (pdf)

Category theoretic and cohomological interpretations

A discussion of entropy with an eye towards the presheaf topos over the site of finite measure spaces is in

  • Mikhail Gromov, In a search for a structure, Part I: On entropy (2012) (pdf)

  • William Lawvere, State categories, closed categories, and the existence [of] semi-continuous entropy functions, IMA Preprint Series #86, 1984. (web)

Category-theoretic characterizations of entropy are discussed in

A (co)homological viewpoint is discussed in

  • Pierre Baudot, Daniel Bennequin, The homological nature of entropy, Entropy, 17 (2015), 3253-3318. (doi)

(for an update see also the abstract of a talk of Baudot here)

Axiomatic characterizations

After the concept of entropy proved enormously useful in practice, many people have tried to find a more abstract foundation for the concept (and its variants) by characterizing it as the unique measure satisfying some list of plausible-sounding axioms.

A characterization of relative entropy on finite-dimensional $C^*$-algebras is given in

  • D. Petz, Characterization of the relative entropy of states of matrix algebras, Acta Math. Hung. 59 (3-4) (1992) (pdf)

A simple characterization of von Neumann entropy of density matrices (mixed quantum states) is discussed in

  • Bernhard Baumgartner, Characterizing Entropy in Statistical Physics and in Quantum Information Theory (arXiv:1206.5727)

Entropy-like quantities appear in the study of many PDEs, with entropy estimates. For an introduction, see

  • L. C. Evans, A survey of entropy methods for partial differential equations, Bull. Amer. Math. Soc. 41 (2004), 409-438 (web, pdf); and longer Berkeley graduate course text: Entropy and partial differential equations. (pdf)
