I’m trying to chase down leads on generalizing the principle of least action to dissipative settings. Here is my complete collection of clues.
First I heard about these three papers:
L.M. Martyushev and V.D. Seleznev, Maximum entropy production principle in physics, chemistry and biology, Physics Reports 426 (April 2006), 1–45.
R. C. Dewar, Maximum entropy production and the fluctuation theorem. (Available only with subscription.)
Stijn Bruers, A discussion on maximum entropy production and information theory, J. Phys. A: Math. Theor. 40 (2007), 7441.
I never got around to reading Dewar’s paper… and I was very confused, because Ilya Prigogine has a quite successful principle of least entropy production that applies to certain linear systems. But Martyushev and Seleznev write:
1.2.6. The relation of Ziegler’s maximum entropy production principle and Prigogine’s minimum entropy production principle
If one casts a glance at the heading, he may think that the two principles are absolutely contradictory. This is not the case. It follows from the above discussion that both linear and nonlinear thermodynamics can be constructed deductively using Ziegler’s principle. This principle yields, as a particular case (Section 1.2.3), Onsager’s variational principle, which holds only for linear nonequilibrium thermodynamics. Prigogine’s minimum entropy production principle (see Section 1.1) follows already from Onsager–Gyarmati’s principle as a particular statement, which is valid for stationary processes in the presence of free forces. Thus, applicability of Prigogine’s principle is much narrower than applicability of Ziegler’s principle.
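To see concretely how Prigogine’s principle sits inside the linear theory, here is the standard textbook bookkeeping (my notation, not Martyushev and Seleznev’s). In the linear regime the fluxes and forces are related by $J_i = \sum_j L_{ij} X_j$ with $L$ symmetric, so the entropy production is

$$\sigma(X) = \sum_{i,j} L_{ij} X_i X_j .$$

If some forces are held fixed by the boundary conditions and the remaining “free” forces $X_\beta$ can adjust, then

$$\frac{\partial \sigma}{\partial X_\beta} = 2 \sum_j L_{\beta j} X_j = 2 J_\beta ,$$

so minimizing $\sigma$ over the free forces is the same as requiring the conjugate fluxes to vanish, which is exactly the stationary-state condition. That is Prigogine’s theorem, and it visibly needs the linear, symmetric $L$; Ziegler’s principle is the attempt to say something without that assumption.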
Then David Corfield got me really excited by noting that Dewar’s paper relies on some work by the great E. T. Jaynes, where he proposes something called the ‘Maximum Caliber Principle’:
And I read this paper and got really excited… but then I got distracted by other things.
But then, on the blog Azimuth, John F tried to convince me that Jaynes’ ‘Maximum Entropy Method’ for statistical reasoning is not distinct from his Maximum Caliber Principle. In pondering that, I bumped into this:
Abstract: Jaynes’ maximum entropy (MaxEnt) principle was recently used to give a conditional, local derivation of the “maximum entropy production” (MEP) principle, which states that a flow system with fixed flow(s) or gradient(s) will converge to a steady state of maximum production of thermodynamic entropy (R.K. Niven, Phys. Rev. E, in press). The analysis provides a steady state analog of the MaxEnt formulation of equilibrium thermodynamics, applicable to many complex flow systems at steady state. The present study examines the classification of physical systems, with emphasis on the choice of constraints in MaxEnt. The discussion clarifies the distinction between equilibrium, fluid flow, source/sink, flow/reactive and other systems, leading into an appraisal of the application of MaxEnt to steady state flow and reactive systems.
… which even cites some papers applying these ideas to climate change!
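Just to fix ideas about the MaxEnt step these papers build on, here is a minimal sketch in Python: maximize the Shannon entropy of a distribution over a handful of discrete “flux” values subject to a prescribed mean flux, which gives the usual exponential-family solution with a single Lagrange multiplier. The flux values and the target mean below are invented purely for illustration; this is not Niven’s actual construction.

```python
import numpy as np
from scipy.optimize import brentq

# Toy MaxEnt calculation in the spirit of Jaynes: maximize Shannon entropy
# over distributions p_i on discrete "flux states" f_i, subject to a
# prescribed mean flux <f> = F.  The states and target are hypothetical.
f = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])   # hypothetical flux values
F = 0.7                                      # hypothetical constrained mean flux

def mean_flux(lam):
    """Mean of f under the exponential-family distribution p_i proportional to exp(-lam * f_i)."""
    w = np.exp(-lam * f)
    return np.sum(f * w) / np.sum(w)

# Solve for the Lagrange multiplier that enforces the constraint.
lam = brentq(lambda l: mean_flux(l) - F, -10.0, 10.0)
p = np.exp(-lam * f)
p /= p.sum()

print("multiplier:", lam)
print("MaxEnt distribution:", p)
print("check mean flux:", np.dot(p, f))
```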
And then, David Corfield pointed me towards this:
This article outlines the place of the constructal law as a self-standing law in physics, which covers all the ad hoc (and contradictory) statements of optimality such as minimum entropy generation, maximum entropy generation, minimum flow resistance, maximum flow resistance, minimum time, minimum weight, uniform maximum stresses and characteristic organ sizes.
Later, on the n-Category Café, David Lyon wrote:
Most of the posters here study beautiful subjects such as the quantum theory of closed systems, which has time reversal symmetry. I have a little bit of experience with open dissipative systems, which are not so pretty but may interest some of you for a moment. My advisor has experimentally explored entropy production in driven systems. Although I haven’t been personally involved in most of the experiments, we’ve had many interesting discussions on the topic.
There is a very simple maximum entropy production principle in systems with a linear response. In this case the system evolves towards its maximum entropy state along the gradient, which is the direction of maximum change in entropy. This principle applies in practice to systems which are perturbed a small amount away from equilibrium and then allowed to relax back to equilibrium. As tomate said, if you take a closed system, open it briefly to do something gentle to it, and then wait for it to relax before closing it again, you’ll see this kind of response.
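In formulas (my paraphrase, with $L$ a positive semidefinite Onsager-type matrix), the near-equilibrium relaxation is gradient dynamics:

$$\dot{x} = L\,\nabla S(x), \qquad \dot{S} = \nabla S \cdot L\,\nabla S \;\ge\; 0 .$$

Among all evolutions with the same speed $|\dot x|$, the one along $\nabla S$ (the case $L$ proportional to the identity) makes $\dot S = \nabla S \cdot \dot x$ as large as possible; that is the sense in which the relaxation “produces entropy as fast as it can”.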
However, the story is very different in open systems. When a flux through a system becomes large (e.g. close to the Eddington limit for radiating celestial bodies, or when heat flow follows Cattaneo’s law), the response no longer follows simple gradient dynamics and there is no maximum entropy production principle. There have been many claims of maximum or minimum entropy production principles by various authors, and many attempts to derive theories based on these principles, but these principles are not universal and any theories based on them will have limited applicability.
In high voltage experiments involving conducting spheres able to roll in a highly resistive viscous fluid, there is a force on the spheres which always acts to reduce the resistance of the system. This is true whether the boundary condition is constant current or constant voltage. Since power dissipation is $I^2 R$ in the first case and $V^2/R$ in the second case, one can readily see that entropy production is minimized for constant current and maximized for constant voltage.
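Spelling out the arithmetic: with the force always reducing $R$,

$$P_{I} = I^2 R \;\Rightarrow\; \left.\frac{\partial P}{\partial R}\right|_{I} = I^2 > 0, \qquad P_{V} = \frac{V^2}{R} \;\Rightarrow\; \left.\frac{\partial P}{\partial R}\right|_{V} = -\frac{V^2}{R^2} < 0,$$

so shrinking $R$ lowers the dissipation (and hence the entropy production $P/T$) at constant current and raises it at constant voltage.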
In experiments involving heat flow through a fluid, convection cells (a.k.a. Benard Cells) form at high rates of flow. For a constant temperature difference, these cells act to maximize the heat flow and thus the entropy production in the system. For a constant heat flow, these cells minimize the temperature difference and thus minimize the entropy production in the system.
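The bookkeeping here is the usual entropy production of a heat flow $Q$ between a hot plate at $T_h$ and a cold plate at $T_c$:

$$\dot{S} = Q\left(\frac{1}{T_c} - \frac{1}{T_h}\right) = \frac{Q\,\Delta T}{T_h T_c},$$

which grows with $Q$ at fixed $\Delta T$ and grows with $\Delta T$ at fixed $Q$.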
If one were to carefully read “This Week’s Finds in Mathematical Physics (Week 296)” one would be able to find several more analogous examples where the response of open systems to high flows will either maximize or minimize the entropy production for pure boundary conditions or do neither for mixed boundary conditions.
As David points out, many variational principles for nonequilibrium systems have been proposed. They only hold in the so-called “linear regime”, where the system is slightly perturbed from its equilibrium steady state. We are very far from understanding general non-equilibrium systems; one major result is the “fluctuation theorem”, from which all kinds of peculiar results descend, in particular the Onsager–Machlup variational principle for trajectories. For the mathematically-minded, I think the works by Christian Maes et al. might appeal to your tastes.
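For reference, the fluctuation theorem in its simplest steady-state form says, roughly, that if $\Sigma_\tau$ is the entropy produced (in units of $k_B$) along a trajectory of duration $\tau$, then in the long-time limit

$$\frac{P(\Sigma_\tau = +A)}{P(\Sigma_\tau = -A)} = e^{A},$$

so trajectories that consume entropy are exponentially rare but not impossible.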
Funnily enough, there exists a “minimum entropy production principle” and a “maximum entropy production principle”. The apparent clash is due to the fact that while minimum entropy production is an ensemble property, that is, it holds on a macroscopic scale, the maximum entropy production principle is believed to hold for single trajectories, single “histories”. I think the first is well-established, indeed a classical result due to Prigogine, while the second is still speculative and sloppy; it is believed to have important ecological applications. [A similar confusion arises when one defines entropy as an ensemble property (Gibbs entropy) or as a microstate property (Boltzmann entropy).]
Unfortunately, as far as I know, there is no single simple and comprehensive review of variational principles in nonequilibrium statistical mechanics.
Here are some papers by Maes:
We explain the (non-)validity of close-to-equilibrium entropy production principles in the context of linear electrical circuits. Both the minimum and the maximum entropy production principles are understood within dynamical fluctuation theory. The starting point is the Langevin equations obtained by combining Kirchhoff’s laws with a Johnson-Nyquist noise at each dissipative element in the circuit. The main observation is that the fluctuation functional for time averages, which can be read off from the path-space action, is in first order around equilibrium given by an entropy production rate. That allows one to understand, beyond the schemes of irreversible thermodynamics, (1) the validity of the least dissipation, the minimum entropy production, and the maximum entropy production principles close to equilibrium; (2) the role of the observables’ parity under time-reversal and, in particular, the origin of Landauer’s counterexample (1975) from the fact that the fluctuating observable there is odd under time-reversal; (3) the critical remark of Jaynes (1980) concerning the apparent inappropriateness of entropy production principles in temperature-inhomogeneous circuits.
The minimum entropy production principle provides an approximative variational characterization of close-to-equilibrium stationary states, both for macroscopic systems and for stochastic models. Analyzing the fluctuations of the empirical distribution of occupation times for a class of Markov processes, we identify the entropy production as the large deviation rate function, up to leading order when expanding around a detailed balance dynamics. In that way, the minimum entropy production principle is recognized as a consequence of the structure of dynamical fluctuations, and its approximate character gets an explanation. We also discuss the subtlety emerging when applying the principle to systems whose degrees of freedom change sign under kinematical time-reversal.
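As a toy version of the setup in the first abstract, here is a minimal Euler–Maruyama simulation of a single RC loop driven by Johnson–Nyquist noise. It is only meant to show what “Kirchhoff’s laws plus Johnson–Nyquist noise” looks like as a Langevin equation; the circuit parameters are made up.

```python
import numpy as np

# Toy Langevin simulation of one RC loop with Johnson-Nyquist noise.
# Kirchhoff's voltage law:  R dq/dt = -q/C + xi(t),
# with <xi(t) xi(t')> = 2 kB T R delta(t - t').
# All parameter values are invented for illustration.
kB = 1.380649e-23   # J/K
T  = 300.0          # K
R  = 1e3            # ohm
C  = 1e-9           # farad

dt = 1e-8           # s, small compared to the relaxation time R*C = 1e-6 s
nsteps = 500_000
rng = np.random.default_rng(0)

q = 0.0
qsq = 0.0
for _ in range(nsteps):
    # Euler-Maruyama step: dq = -(q/RC) dt + sqrt(2 kB T / R) dW
    q += -(q / (R * C)) * dt + np.sqrt(2 * kB * T / R * dt) * rng.standard_normal()
    qsq += q * q

# In equilibrium the charge fluctuations should satisfy <q^2> = kB*T*C (equipartition).
print("simulated <q^2>:", qsq / nsteps)
print("equipartition kB*T*C:", kB * T * C)
```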
And here’s something I bumped into en route:
Tomate also wrote:
I went through Dewar’s paper some time ago. While I think most of his arguments are correct, I still don’t regard them as a full proof of the principle he has in mind. Unfortunately, he doesn’t explain the analogies, differences and misunderstandings around minimum entropy production and maximum entropy production. In fact, nowhere in his articles does a clear-cut definition of MEP appear.
Unlike Martyushev and Seleznev, I don’t think it is just a problem of boundary conditions, and the excerpt you quote does not explain why these two principles are not in conflict in the regime where they are both supposed to hold.
Let me explain my own take on the minEP vs. maxEP problem and on similar problems (such as Boltzmann vs. Gibbs entropy increase). It might help in sorting out ideas.
By “state” we mean very different things in NESM, among which: 1) the (micro)state which a single history of a system occupies at given times; 2) the trajectory itself; 3) the density of microstates which an ensemble of a large number of trajectories occupies at a given time (a macrostate). One can define entropy production at all levels of discussion (for the mathematically-inclined, Markovian master equation systems offer the best setup where everything is nice and well-defined). So, for example, the famous “fluctuation theorem” is a statement about microscopic entropy production along a trajectory, while Onsager’s reciprocity relations are a statement about macroscopic entropy production. By “steady state”, we mean a stationary macrostate.
The minEP principle asserts that the distribution of macroscopic currents at a nonequilibrium steady state minimizes entropy production consistently with the constraints which prevent the system from reaching equilibrium.
As I understand it, maxEP is instead a property of single trajectories: most probable trajectories are those which have a maximum entropy production rate, consistently with constraints.
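Here is a small Python sketch of the ensemble-level quantity tomate is talking about: the steady-state entropy production rate of a Markov jump process (master equation). The three-state ring and its rates are invented for illustration; the bias around the ring breaks detailed balance, so the entropy production comes out strictly positive.

```python
import numpy as np

# Ensemble-level entropy production for a Markov jump process.
# Three states on a ring with a bias that breaks detailed balance.
W = np.array([[0.0, 2.0, 0.5],
              [0.5, 0.0, 2.0],
              [2.0, 0.5, 0.0]])   # W[i, j] = rate of jump i -> j (illustrative values)

# Build the generator and solve p @ L = 0 for the stationary distribution p.
L = W - np.diag(W.sum(axis=1))
vals, vecs = np.linalg.eig(L.T)
p = np.real(vecs[:, np.argmin(np.abs(vals))])
p /= p.sum()

# Steady-state entropy production rate (in units of kB):
#   sigma = 1/2 * sum_{i,j} (p_i W_ij - p_j W_ji) * ln(p_i W_ij / (p_j W_ji))
sigma = 0.0
for i in range(3):
    for j in range(3):
        if W[i, j] > 0 and W[j, i] > 0:
            flux = p[i] * W[i, j] - p[j] * W[j, i]
            sigma += 0.5 * flux * np.log((p[i] * W[i, j]) / (p[j] * W[j, i]))

print("stationary distribution:", p)
print("entropy production rate:", sigma)   # zero iff detailed balance holds
```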
As a climate scientist, you should be interested in the second, as we do not have an ensemble of planets over which to maximize entropy or minimize entropy production. We have one single realization of the process, and we’d better make good use of it.
Here’s a review article on entropy maximization in climate physics:
As mentioned by Ozawa et al., Lorenz suspected that the Earth’s atmosphere operates in such a manner as to generate available potential energy at a possible maximum rate. The available potential energy is defined as the amount of potential energy that can be converted into kinetic energy. Independently, Paltridge suggested that the mean state of the present climate is reproducible as a state with a maximum rate of entropy production due to horizontal heat transport in the atmosphere and oceans. Figure 2 shows such an example. Without considering the detailed dynamics of the system, the predicted distributions (air temperature, cloud amount, and meridional heat transport) show remarkable agreement with observations. Later on, several researchers investigated Paltridge’s work and obtained essentially the same result.
(There are lots of references provided.)
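To see what a Paltridge-style calculation looks like in miniature, here is a two-box sketch in Python: box 1 is the low latitudes, box 2 the high latitudes, and the meridional heat transport is chosen to maximize the entropy production it generates. The numbers are rough, textbook-style values chosen only for illustration, and this is a drastic reduction of Paltridge’s actual model.

```python
import numpy as np

# Minimal two-box illustration of the maximum entropy production idea for
# meridional heat transport.  All numbers are illustrative, not observational.
S1, S2 = 300.0, 170.0      # absorbed solar radiation per box, W/m^2
A, B   = 200.0, 2.0        # linearized outgoing longwave:  OLR = A + B*T(deg C)

def temperatures(F):
    """Steady-state box temperatures (K) for a given transport F (W/m^2)."""
    T1 = (S1 - F - A) / B + 273.15   # box 1 loses F to box 2
    T2 = (S2 + F - A) / B + 273.15   # box 2 gains F from box 1
    return T1, T2

def entropy_production(F):
    """Entropy production of the transport, F*(1/T2 - 1/T1), in W/m^2/K."""
    T1, T2 = temperatures(F)
    return F * (1.0 / T2 - 1.0 / T1)

# Scan transports from zero up to the value that equalizes the two boxes.
F_grid = np.linspace(0.0, (S1 - S2) / 2.0, 1000)
sigma = np.array([entropy_production(F) for F in F_grid])
F_mep = F_grid[np.argmax(sigma)]
print("MEP transport:", F_mep, "W/m^2, temperatures:", temperatures(F_mep))
```

The entropy production vanishes both at zero transport and at the transport that wipes out the temperature difference, so the maximum lies in between; Paltridge’s claim is that the observed atmosphere-ocean transport sits near that maximum.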