Entropy is a measure of disorder, given by the amount of information necessary to precisely specify the state of a system.
Entropy is important in information theory and statistical physics.
We can give a precise mathematical definition of entropy in probability theory. We begin with a preliminary definition.
The expected information of verification of a probability $p$ (a real number in the unit interval) is
$$ h(p) \coloneqq - p \log p $$
(or more precisely $h(p) \coloneqq \lim_{q \searrow p} (- q \log q)$, so that $h(0) = 0$ is defined). Notice that, despite the minus sign in this formula, $h$ is a nonnegative function (since $\log p \leq 0$ for $p \leq 1$). It is also important that $h$ is concave.
Both $h(0)$ and $h(1)$ are $0$, but for different reasons; $h(1) = 0$ because, upon verifying a statement with probability $1$, one gains no information; while $h(0) = 0$ because one expects never to verify a statement with probability $0$. In general, $-\log p$ is the information gained by verifying a statement of probability $p$, but this will happen only with probability $p$, hence $-p \log p$.
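As a quick numerical illustration (a sketch in Python; the function name `h` simply mirrors the notation above):

```python
import math

def h(p):
    """Expected information of verification: h(p) = -p log p,
    with the limiting convention h(0) = 0."""
    if p == 0:
        return 0.0
    return -p * math.log(p)

# h vanishes at both endpoints, for the two different reasons noted above.
assert h(0) == 0.0 and h(1) == 0.0

# Despite the minus sign, h is nonnegative on [0, 1], and it is concave:
# the value at a midpoint dominates the average of the endpoint values.
p, q = 0.2, 0.8
assert h((p + q) / 2) >= (h(p) + h(q)) / 2
```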
We have not specified the base of the logarithm, which amounts to a constant factor (proportional to the logarithm of the base), which we think of as specifying the unit of measurement of entropy. Common choices for the base are $2$ (whose unit is the bit, originally a unit of memory in computer science), $256$ (byte: $8$ bits), $3$ (trit), $\mathrm{e}$ (nat or neper), $10$ (bel, originally a unit of signal power in telephony, or ban, dit, or hartley), and $10^{1/10}$ (decibel: $1/10$ of a bel). In applications to statistical physics, common bases are approximately $10^{3.1456 \times 10^{22}}$ (joule per kelvin), $1.65404$ (calorie per mole-kelvin), etc.
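Concretely, changing the unit just rescales by the logarithm of the base; a small Python sketch (the helper `convert` is ours, introduced only for illustration):

```python
import math

def convert(H_nats, base):
    """Convert an entropy measured in nats (base e) to the unit
    determined by `base`, dividing by the log of that base."""
    return H_nats / math.log(base)

one_bit = math.log(2)                              # 1 bit, expressed in nats
assert abs(convert(one_bit, 2) - 1.0) < 1e-12      # 1 bit is 1 bit
assert abs(convert(one_bit, 256) - 0.125) < 1e-12  # a bit is 1/8 byte

# An entropy in decibels is 10 times the same entropy in bels.
assert abs(convert(one_bit, 10 ** 0.1) - 10 * convert(one_bit, 10)) < 1e-12
```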
We now give the general mathematical definition of entropy.
Given a probability measure space $(X,\mu)$ and a $\sigma$-algebra $\mathcal{M}$ of measurable sets in $X$, the entropy of $\mathcal{M}$ with respect to $\mu$ is
$$ H_\mu(\mathcal{M}) \coloneqq \sup \Big\{ \sum_{A \in \mathcal{F}} h(\mu(A)) \;\Big|\; \mathcal{F} \subseteq \mathcal{M},\ \mathcal{F} \text{ finite},\ X = \biguplus_{A \in \mathcal{F}} A \Big\} . \qquad (1) $$
In words, the entropy is the supremum, over all ways of expressing $X$ as an internal disjoint union of finitely many elements of the $\sigma$-algebra $\mathcal{M}$, of the sum, over these measurable sets, of the expected information of verification of these sets. This supremum can also be expressed as a limit over finer and finer partitions $\mathcal{F}$: since $h$ is concave (hence subadditive), refining a partition never decreases the sum, and the finite partitions are directed under refinement.
(Without loss of generality, we do not need the elements of $\mathcal{F}$ to be disjoint, as long as their intersections are null sets. Similarly, we do not need their union to be all of $X$, as long as their union is a full set. In constructive mathematics, it seems that we must weaken the latter condition in this way.)
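For a finite partition, the bracketed sum in (1) is easy to compute directly; the following Python sketch (with hypothetical helper names) also illustrates why the supremum is approached by refinement: refining a partition never decreases the sum.

```python
import math

def h(p):
    """h(p) = -p log p, with h(0) = 0."""
    return 0.0 if p == 0 else -p * math.log(p)

def partition_entropy(mu, partition):
    """The sum over blocks A of h(mu(A)) -- the quantity whose
    supremum over finite measurable partitions defines (1)."""
    return sum(h(sum(mu[i] for i in block)) for block in partition)

# A fair six-sided die, with two partitions of its outcome space.
mu = {i: 1 / 6 for i in range(1, 7)}
coarse = [{1, 2, 3}, {4, 5, 6}]    # "low" versus "high"
fine = [{i} for i in range(1, 7)]  # singletons: the finest partition

# Refining can only increase the sum: log 2 versus log 6 here.
assert partition_entropy(mu, coarse) <= partition_entropy(mu, fine)
assert abs(partition_entropy(mu, coarse) - math.log(2)) < 1e-12
assert abs(partition_entropy(mu, fine) - math.log(6)) < 1e-12
```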
This definition is very general, and it is instructive to look at special cases.
Given a probability space $(X,\mu)$, the entropy of this probability space is the entropy, with respect to $\mu$, of the $\sigma$-algebra of all measurable subsets of $X$.
Recall that a partition of a set $X$ is a family $\mathcal{P}$ of subsets of $X$ such that $X$ is the union of $\mathcal{P}$ and any two distinct elements of $\mathcal{P}$ are disjoint. (That is, the supremum in (1) is taken over finite partitions of $X$ into elements of $\mathcal{M}$.)
Every partition of a measure space $X$ into measurable sets (indeed, any family of measurable subsets of $X$) generates a $\sigma$-algebra of measurable sets. The entropy of a measurable partition $\mathcal{P}$ of a probability measure space $(X,\mu)$ is the entropy, with respect to $\mu$, of the $\sigma$-algebra generated by $\mathcal{P}$. The formula (1) may then be written
$$ H_\mu(\mathcal{P}) = \sum_{A \in \mathcal{P}} h(\mu(A)) , \qquad (2) $$
since an infinite sum (of positive terms) may also be defined as a supremum. (Actually, the supremum in the infinite sum does not quite match the supremum in (1), so there is a bit of a theorem to prove here.)
In most of the following special cases, we will consider only partitions, although it would be possible also to consider more general $\sigma$-algebras.
Recall that a discrete probability space is a set $X$ equipped with a function $\mu\colon X \to [0,1]$ such that $\sum_{i \in X} \mu(i) = 1$; since $\mu(i) \gt 0$ is possible for only countably many $i$, we may assume that $X$ is countable. We make $X$ into a measure space (with every subset measurable) by defining $\mu(A) \coloneqq \sum_{i \in A} \mu(i)$. Since every set is measurable, any partition of $X$ is a partition into measurable sets.
Given a discrete probability space $(X,\mu)$ and a partition $\mathcal{P}$ of $X$, the entropy of $\mathcal{P}$ with respect to $\mu$ is defined to be the entropy of $\mathcal{P}$ with respect to the probability measure induced by $\mu$. Simplifying (2), we find
$$ H_\mu(\mathcal{P}) = \sum_{A \in \mathcal{P}} h\Big( \sum_{i \in A} \mu(i) \Big) . $$
More specially, the entropy of the discrete probability space $(X,\mu)$ is the entropy of the partition of $X$ into singletons; we find
$$ H(\mu) = \sum_{i \in X} h(\mu(i)) = - \sum_{i \in X} \mu(i) \log \mu(i) . $$
This is actually a special case of the entropy of a probability space, since the $\sigma$-algebra generated by the singletons is the power set of $X$ (at least when $X$ is countable, and the formulas agree in any case).
Yet more specially, the entropy of a finite set $X$ is the entropy of $X$ equipped with the uniform discrete probability measure; we find
$$ H(X) = \log |X| , \qquad (3) $$
which is probably the earliest mathematical formula for entropy, due to Boltzmann. (Its physical interpretation appears below.)
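Both formulas are easy to check numerically; in this Python sketch, Shannon's sum $-\sum_i \mu(i) \log \mu(i)$ reduces to Boltzmann's $\log |X|$ on a uniform distribution, which also maximizes it:

```python
import math

def shannon_entropy(probs):
    """H = -sum_i p_i log p_i, the entropy of a discrete probability space."""
    return -sum(p * math.log(p) for p in probs if p > 0)

# Boltzmann's formula: a uniform measure on n outcomes has entropy log n.
n = 8
uniform = [1 / n] * n
assert abs(shannon_entropy(uniform) - math.log(n)) < 1e-12

# No other distribution on n outcomes has larger entropy.
skewed = [0.5, 0.2, 0.1, 0.05, 0.05, 0.05, 0.03, 0.02]
assert shannon_entropy(skewed) < math.log(n)
```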
Recall that a Borel measure $\mu$ on an interval $X$ in the real line is absolutely continuous if $\mu(A) = 0$ whenever $A$ is a null set (with respect to Lebesgue measure). In this case, we can take the Radon–Nikodym derivative of $\mu$ with respect to Lebesgue measure, to get an integrable function $f$, called the probability distribution function; we recover $\mu$ by
$$ \mu(A) = \int_A f(x) \,\mathrm{d}x , \qquad (4) $$
where the integral is taken with respect to Lebesgue measure.
If $\mathcal{P}$ is a partition of an interval $X$ into Borel sets, then the entropy of $\mathcal{P}$ with respect to an integrable function $f$ is the entropy of $\mathcal{P}$ with respect to the measure induced by $f$ using the integral formula (4); we find
$$ H_f(\mathcal{P}) = \sum_{A \in \mathcal{P}} h\Big( \int_A f(x) \,\mathrm{d}x \Big) . $$
On the other hand, the entropy of the probability distribution space $(X,f)$ is the entropy of the entire $\sigma$-algebra of all Borel sets with respect to $f$; we find
$$ H_f(X) = \int_X h(f(x)) \,\mathrm{d}x = - \int_X f(x) \log f(x) \,\mathrm{d}x $$
by a fairly complicated argument.
I haven't actually managed to check this argument yet, although my memory tags it as a true fact. —Toby
Note that this $\sigma$-algebra is not generated by a partition.
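Whatever the status of the supremum argument, the integral $-\int f \log f$ itself is easy to approximate numerically. A Python sketch (the quadrature routine here is a naive midpoint rule, written only for illustration), checked against the known closed form $\frac{1}{2}\log(2\pi\mathrm{e})$ for the standard Gaussian:

```python
import math

def neg_f_log_f_integral(f, a, b, n=100000):
    """Approximate -∫_a^b f log f by a midpoint Riemann sum."""
    dx = (b - a) / n
    total = 0.0
    for k in range(n):
        fx = f(a + (k + 0.5) * dx)
        if fx > 0:
            total -= fx * math.log(fx) * dx
    return total

# Standard Gaussian density; its differential entropy is (1/2) log(2 pi e).
f = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
exact = 0.5 * math.log(2 * math.pi * math.e)
approx = neg_f_log_f_integral(f, -10.0, 10.0)
assert abs(approx - exact) < 1e-4
```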
In the analogy between classical physics and quantum physics, we move from probability distributions on a phase space to density operators on a Hilbert space.
Just as the entropy of a probability distribution $f$ is given by $- \int f \log f$, so the entropy of a density operator $\rho$ is
$$ H(\rho) = - \mathrm{Tr}(\rho \log \rho) , $$
using the functional calculus.
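Since $\rho$ is self-adjoint, the functional calculus reduces to applying $h$ to the eigenvalues. A sketch assuming NumPy (the function name is ours, for illustration):

```python
import numpy as np

def von_neumann_entropy(rho):
    """-Tr(rho log rho): apply h(p) = -p log p to the eigenvalues of rho,
    which is the functional calculus for a self-adjoint matrix."""
    eigenvalues = np.linalg.eigvalsh(rho)
    return float(-sum(p * np.log(p) for p in eigenvalues if p > 1e-12))

# A pure state has zero entropy...
pure = np.array([[1.0, 0.0], [0.0, 0.0]])
assert von_neumann_entropy(pure) < 1e-9

# ...while the maximally mixed state on C^2 has entropy log 2.
mixed = np.eye(2) / 2
assert abs(von_neumann_entropy(mixed) - np.log(2)) < 1e-9
```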
These are both special cases of the entropy of a state on a $C^*$-algebra.
There is a way to fit this into the framework given by (1), but I don't remember it (and never really understood it).
For two finite probability distributions $(p_i)$ and $(q_i)$, their relative entropy is
$$ S(p/q) \coloneqq \sum_i p_i (\log p_i - \log q_i) . $$
Alternatively, for $\rho, \phi$ two density matrices, their relative entropy is
$$ S(\rho/\phi) \coloneqq \mathrm{Tr}\big( \rho (\log \rho - \log \phi) \big) . $$
There is a generalization of these definitions to states on general von Neumann algebras, due to (Araki).
For more on this see relative entropy.
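The classical formula is easy to compute directly; a Python sketch illustrating its basic properties (nonnegativity, asymmetry):

```python
import math

def relative_entropy(p, q):
    """S(p/q) = sum_i p_i (log p_i - log q_i), for finite distributions
    with q_i > 0 wherever p_i > 0."""
    return sum(pi * (math.log(pi) - math.log(qi))
               for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.5]
q = [0.9, 0.1]

assert relative_entropy(p, p) == 0.0   # zero when the arguments agree
assert relative_entropy(p, q) > 0.0    # positive when they differ
assert relative_entropy(p, q) != relative_entropy(q, p)  # not symmetric
```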
As hinted above, any probability distribution on a phase space in classical physics has an entropy, and any density matrix on a Hilbert space in quantum physics has an entropy. However, these are microscopic entropy, which is not the usual entropy in thermodynamics and most other branches of physics. (In particular, microscopic entropy is conserved, rather than increasing with time.)
Instead, physicists use coarse-grained entropy, which corresponds mathematically to taking the entropy of a $\sigma$-algebra much smaller than the $\sigma$-algebra of all measurable sets. Given a classical system with $N$ microscopic degrees of freedom, we identify $n$ macroscopic degrees of freedom that we can reasonably expect to measure, giving a map from $\mathbb{R}^N$ to $\mathbb{R}^n$ (or more generally, a map from an $N$-dimensional microscopic phase space to an $n$-dimensional macroscopic phase space). Then the $\sigma$-algebra of all measurable sets in $\mathbb{R}^n$ pulls back to a $\sigma$-algebra on $\mathbb{R}^N$, and the macroscopic entropy of a statistical state is the entropy of this $\sigma$-algebra. (Typically, $N$ is on the order of Avogadro's number, while $n$ is rarely more than half a dozen, and often as small as $2$.)
Generally, we specify a state by a point in $\mathbb{R}^n$, a macroscopic pure state, and assume a uniform probability distribution on its fibre in $\mathbb{R}^N$. If this fibre were a finite set, then we would recover Boltzmann's formula (3). This is never exactly true in classical statistical physics, but it is often nevertheless a very good approximation. (Boltzmann's formula actually makes better physical sense in quantum statistical physics, even though Boltzmann himself did not live to see this.)
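A toy Python sketch of coarse-graining (the microstates and the macroscopic map here are invented for illustration): pulling back the macroscopic $\sigma$-algebra amounts to partitioning microstates into the fibres of the coarse-graining map, and the resulting entropy can only be smaller than the microscopic entropy.

```python
import math

def h(p):
    return 0.0 if p == 0 else -p * math.log(p)

# Four equiprobable microstates, observed only through a hypothetical
# macroscopic variable that cannot distinguish 'b' from 'c'.
micro = {'a': 0.25, 'b': 0.25, 'c': 0.25, 'd': 0.25}
macro_of = {'a': 0, 'b': 1, 'c': 1, 'd': 2}

# The macroscopic entropy is the entropy of the partition of microstates
# into fibres of the coarse-graining map (the pulled-back sigma-algebra).
macro = {}
for state, prob in micro.items():
    macro[macro_of[state]] = macro.get(macro_of[state], 0.0) + prob

micro_entropy = sum(h(p) for p in micro.values())
macro_entropy = sum(h(p) for p in macro.values())

assert abs(micro_entropy - math.log(4)) < 1e-12
assert macro_entropy <= micro_entropy  # coarse-graining loses information
```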
See also: gravitational entropy.
The concept of entropy was introduced by Rudolf Clausius in 1865 in the context of physics, and then adapted to information theory by Claude Shannon in 1948, to ergodic theory by Andrey Kolmogorov and Yakov Sinai in 1958, and to topological dynamics by Adler, Konheim and McAndrew in 1965.
Relative entropy of states on von Neumann algebras was introduced in
A survey of entropy in operator algebras is in
See also
A large collection of references on quantum entropy is in
A discussion of entropy with an eye towards the presheaf topos over the site of finite measure spaces is in
Mikhail Gromov, In a Search for a Structure, Part I: On Entropy (2012) (pdf)
William Lawvere, State categories, closed categories, and the existence of semi-continuous entropy functions, IMA Preprint #86, pdf
After the concept of entropy proved enormously useful in practice, many people have tried to find a more abstract foundation for the concept (and its variants) by characterizing it as the unique measure satisfying some list of plausible-sounding axioms.
A characterization of relative entropy on finite-dimensional $C^*$-algebras is given in
A simple characterization of von Neumann entropy of density matrices (mixed quantum states) is discussed in