**The Second Law, Part 1.** [11/5/03]

**Entropy and Order**

A simple thought experiment can illuminate the difference between thermodynamic entropy and order. Suppose you take some coins from your pocket and place them in a stack on the table. Then repeat the experiment by tossing the coins haphazardly on the table. From a thermodynamic point of view the stacked coins may have more potential energy than those that were tossed, but the arrangement of the coins has no further thermodynamic implications. One could say that the stacked coins were more “ordered” or contained more “information” than the tossed ones, but only by reference to an observer applying an interpretive algorithm (a value judgment). The observer need not be conscious or even complex, but must merely possess a bias that discriminates between various patterns of coin arrangements. The point is that one cannot discuss the information content of a system in the absence of the encoding/decoding device which interprets the patterns. As with Maxwell’s demon, any such device requires energy to operate, resulting in an increase in entropy.
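The observer-dependence of “information” can be sketched numerically. In the toy Python snippet below (entirely my own illustration), the same physical arrangement of coins carries different information content depending on the discriminating bias the observer applies:

```python
import math
from collections import Counter

def shannon_bits(symbols):
    """Shannon entropy (bits per symbol) of a symbol sequence."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum(c / n * math.log2(c / n) for c in counts.values()) + 0.0

# The same physical arrangement: eight coins lying heads or tails up.
coins = ["H", "T", "H", "H", "T", "H", "T", "T"]

# Observer 1's bias discriminates heads from tails.
obs1 = shannon_bits(coins)            # 1.0 bit per coin (equal H/T counts)

# Observer 2 only registers "a coin is present"; every coin looks alike,
# so the very same arrangement carries no information for this observer.
obs2 = shannon_bits(["coin"] * len(coins))   # 0.0 bits

print(obs1, obs2)
```

Nothing about the coins themselves changed between the two computations; only the interpretive algorithm did.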

But what is the physical relationship between thermodynamic entropy and order? None, that I can see. Does information theory relate to the material world at all, or is it a purely mathematical construct, a model? Or perhaps information is simply a certain variety of potential energy. For instance, a computer "writes" electric or magnetic charges at certain locations, increasing entropy in the process. “Reading” this information converts this potential energy back to kinetic energy, further increasing entropy. In principle, this process may be no different from pumping water into an elevated reservoir, which later can be released to drive a turbine.
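One way to put rough numbers on the reservoir analogy is to compare the potential energy of pumped water with the minimum thermodynamic cost of writing or erasing bits. The sketch below assumes Landauer's bound (kT ln 2 per bit, a standard result not cited in the text above) and illustrative figures for the reservoir:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
g = 9.81             # gravitational acceleration, m/s^2

# Landauer bound: minimum dissipation to erase one bit at temperature T.
T = 300.0                        # K, room temperature
e_bit = k_B * T * math.log(2)    # ~2.9e-21 J per bit

# Pumped-storage analogy: 1 kg of water raised 10 m (illustrative numbers).
e_water = 1.0 * g * 10.0         # 98.1 J of recoverable potential energy

# How many bit-operations' worth of minimal dissipation match the reservoir:
bits_equiv = e_water / e_bit
print(f"{e_bit:.2e} J per bit; ~{bits_equiv:.1e} bit erasures match the reservoir")
```

The comparison only illustrates that both “writing” information and pumping water store energy at an unavoidable entropic cost; it does not settle whether information *is* potential energy.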

**Entropy Production and Locality**

Another important point regarding entropy is that it must increase locally, not just in the system (universe) as a whole. This observation seems to be widely overlooked or ignored.^{1} Prigogine decomposes the total entropy change as dS = d_{e}S + d_{i}S, where “d_{e}S is the entropy change due to the exchange of matter and energy with the exterior and d_{i}S is the entropy change … produced by the irreversible processes in the interior of the system.” For an isolated system d_{e}S is zero and the total entropy dS must increase, as dictated by the 2^{nd} Law. If energy (“negative entropy”) is transferred into the system, dS may turn negative, but the energy transfer d_{e}S should have no effect on d_{i}S (other than to increase its production), which still must be greater than zero. Prigogine obviously considers the above a core insight, since it adorns the cover of his textbook.^{2}
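Prigogine's bookkeeping can be illustrated with the simplest irreversible process, heat conduction between two reservoirs; the numbers below are arbitrary illustrative values:

```python
def entropy_change(Q, T_hot, T_cold):
    """Entropy bookkeeping for heat Q flowing from a reservoir at T_hot
    to a reservoir at T_cold (both large enough to stay at fixed T)."""
    dS_hot = -Q / T_hot      # hot body's entropy decreases
    dS_cold = +Q / T_cold    # cold body's entropy increases
    # Treating the pair as one isolated system, d_eS = 0,
    # so the whole change dS is internal production d_iS:
    d_iS = dS_hot + dS_cold
    return dS_hot, dS_cold, d_iS

hot, cold, produced = entropy_change(Q=100.0, T_hot=400.0, T_cold=300.0)
print(hot, cold, produced)   # -0.25, +0.333..., +0.0833... J/K
# d_iS > 0: the irreversible flow produces entropy locally,
# even though the hot body's own entropy decreased.
```

The hot body taken alone has d_{e}S < 0, but the production term d_{i}S for the composite is strictly positive, which is the local-increase point made above.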

So the total entropy of the Earth may be decreasing, but only because the energy input from the Sun more than compensates for the positive entropy generated by each and every process involving energy conversion on Earth. And of course every reaction on the Sun is also creating entropy. While Prigogine (following the lead of Schrödinger^{3}) would characterize the energy input from the Sun as a “negative entropy flow”, I feel that this is a misuse of the term. To my mind, the equation dU = dW + T dS is a process equation, flowing from left to right. Prigogine turns this around to get something like d_{e}S = (dU – dW) / T, which looks to me like putting the genie back in the bottle. Nevertheless, his point is clear. Nowhere is entropy locally decreasing as the result of a local process, which is the idea I tried to convey above.
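The Earth–Sun balance can be sketched with round numbers. The figures below (a ~5800 K photon temperature for incoming sunlight and a ~255 K effective emission temperature for Earth) are standard textbook approximations, not values taken from the sources cited here:

```python
Q = 1.0e17          # J, order of magnitude of sunlight Earth absorbs per second
T_sun = 5800.0      # K, photon temperature of incoming sunlight
T_earth = 255.0     # K, effective temperature of Earth's infrared emission

S_in = Q / T_sun        # entropy arriving with sunlight
S_out = Q / T_earth     # entropy leaving as infrared (same Q in steady state)
d_eS = S_in - S_out     # net entropy exchange for the Earth

print(S_out / S_in)     # Earth exports ~23x the entropy it imports
print(d_eS)             # negative: a net outward entropy flow
```

The same energy flux carries far more entropy out (as low-temperature infrared) than it brings in (as high-temperature sunlight), which is what allows every process on Earth to produce entropy while the planet's total need not rise.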

But what about order? Doesn’t increasing order correspond to decreasing entropy? This is where Jaynes weighs in. In his article “The Minimum Entropy Production Principle”, he enumerates seven different definitions of entropy, two of which he characterizes as “information entropy”. His preferred definition is what he calls “experimental entropy”, where dS = dQ / T. “While the properties of [information entropy] are mathematical theorems, those of [experimental entropy] are summaries of experimental facts.”

While there is certainly overlap among the various entropies, at least by analogy, Jaynes' discussion is clearly a warning against mismatching concepts, particularly by comparing theoretically derived propositions with principles derived from experimental observation.
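Jaynes' distinction can be made concrete: information entropy is computable from a probability assignment alone, while experimental entropy requires measured heat and temperature. A minimal sketch, using the melting of ice as the experimental example (my choice of example, not Jaynes'):

```python
import math

# "Information entropy": a property of a probability assignment alone.
p = [0.5, 0.25, 0.25]
H = -sum(pi * math.log2(pi) for pi in p)   # 1.5 bits; no thermometer required

# "Experimental entropy": a property of measured heat and temperature.
dQ = 334_000.0     # J, latent heat to melt 1 kg of ice (standard value)
T = 273.15         # K, melting point of ice
dS = dQ / T        # ~1223 J/K; no probability model required

print(H, dS)
# The two share a formula shape but answer different questions with
# different units, which is the mismatch Jaynes warns against.
```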

The above discussion can be summarized as follows:

1. Entropy increases whenever and wherever energy is transformed. Whether the transformation causes order to increase or decrease is thermodynamically irrelevant.

2. The fact that the term “entropy” has been appropriated to information theory has no logical bearing on its thermodynamic meaning, just as the use of the word “energy” to describe mood or motivation is irrelevant to its meaning in physics.

3. The seeming contradiction of increasing order (complexity) in the face of the 2^{nd} Law is a false paradox. Information theory is a mathematical construct, whereas thermodynamics is based on empirical observation, so the comparison is one of apples and oranges. A similar “paradox” is that which contrasts the time-reversibility of the equations of motion (classical or relativistic) with the directionality of the 2^{nd} Law.^{4} The ideal cannot logically be compared to the real.

4. If the 2^{nd} Law is irrelevant, how does thermodynamics relate to life? Through the 4^{th} Law, which provides for selection based on differential efficiency.

**The 2^{nd} Law, Complexity and Life**

At the end of his chapter on the 2nd Law, Prigogine loses it
(but gains a Nobel prize) by mistaking “association” for causality: “As we shall see in the following chapters,
systems that exchange entropy with their exterior do not simply increase the
entropy of the exterior, but may undergo dramatic spontaneous transformations
to “self-organization.” *The irreversible processes that produce
entropy create these organized states*. Such self-organized states range from
convection patterns in fluids to life.
Irreversible processes are the driving force that
create this order.” [his italics] The
implication is that complexity leads inexorably to life.

First let’s do a reality check: If you look out over your back yard, the ubiquity of life is obvious. If, however, you look through a telescope into the rest of the universe, life (as far as we can see) is completely absent, even though there is no shortage of complex patterns to be observed. Looking out from the Earth, one can only conclude that life is extremely rare, while complexity is extremely common. Now if complexity led inevitably (or even occasionally) to life, one would expect to observe a variety of living creatures of various types and scales distributed throughout the cosmos.

One could argue that the reason we can’t detect life is that
living organisms are too small to be detected by our sensors. But the argument that complexity generates
life has not been qualified (as far as I know) to say that it only operates at
certain scales. (I have just such an argument
based on the 4^{th} Law, which I will present below.)

Another criticism involves Prigogine’s use of the word
“self-organization”, which is an oxymoron in this context. While no one would argue that non-living matter
can take a fascinating variety of complicated forms, few scientists would
expect any such forms to possess a “self” in the usual sense of the word. This criticism may seem like a semantic
quibble, but I feel the consequences of using the word “self” are unavoidably
misleading, since we are so psychologically attuned (and properly so) to
thinking of a self as having volition.
But of course no one is doing any organizing; matter and energy are just
falling down entropy gradients in the most efficient manner, which requires no
more volition than a light beam “finding its way” through space-time along the
shortest path.
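The light-beam analogy can be made concrete with Fermat's principle: the refraction path light “finds” is simply the minimum of a travel-time function, with no volition anywhere. A brute-force sketch, using illustrative geometry and refractive indices of my own choosing:

```python
import math

def travel_time(x, n1=1.0, n2=1.33, h1=1.0, h2=1.0, d=2.0):
    """Optical path length (time in units of 1/c) for light going from
    (0, h1) to (d, -h2), crossing the interface y = 0 at position x."""
    return n1 * math.hypot(x, h1) + n2 * math.hypot(d - x, h2)

# "Find" the crossing point by minimizing travel time on a fine grid;
# the extremum does the work, not any choosing agent.
xs = [i / 10000 * 2.0 for i in range(10001)]
x_star = min(xs, key=travel_time)

# Snell's law falls out of the minimization: n1 sin(t1) == n2 sin(t2).
sin1 = x_star / math.hypot(x_star, 1.0)
sin2 = (2.0 - x_star) / math.hypot(2.0 - x_star, 1.0)
print(1.0 * sin1, 1.33 * sin2)   # approximately equal
```

The refracted path emerges from an extremum condition exactly as, on the author's view, matter falling down entropy gradients requires no organizer.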

Life forms generate a wide array of fractal structures.^{5} Doesn’t this also imply that life has roots
in the complexity of chaos? Not
necessarily. Evolution has been able to
take advantage of natural algorithms to provide parsimonious
means for constructing circulatory systems or light-gathering arrays, for
instance. This does not, however, imply
that life originated from those algorithms. (I have
not seen anywhere that Mandelbrot has made such a leap of faith, even
speculatively.)
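A sketch of such a “natural algorithm”: a few lines of recursion generate a branching structure reminiscent of a circulatory tree. The spread angle and length ratio below are hypothetical parameters chosen purely for illustration:

```python
import math

def branch(x, y, angle, length, depth, segments):
    """Recursively grow a bifurcating tree: each branch spawns two
    shorter children. A parsimonious rule, not a designed blueprint."""
    if depth == 0:
        return
    x2 = x + length * math.cos(angle)
    y2 = y + length * math.sin(angle)
    segments.append(((x, y), (x2, y2)))
    # Hypothetical parameters: 30-degree spread, 0.7 length ratio per level.
    branch(x2, y2, angle + math.radians(30), length * 0.7, depth - 1, segments)
    branch(x2, y2, angle - math.radians(30), length * 0.7, depth - 1, segments)

segments = []
branch(0.0, 0.0, math.pi / 2, 1.0, depth=8, segments=segments)
print(len(segments))   # 255 segments (2**8 - 1) from a three-line rule
```

That so much structure falls out of so short a rule is the point: evolution can exploit such algorithms as cheap construction methods without having originated from them.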

I believe that Prigogine, Jaynes and others fall short in
their efforts to fashion the 2^{nd} Law into a motive force with the
potential to animate matter. In this
regard the 2^{nd} Law is necessary but not sufficient. However, the 4^{th} Law provides
sufficiency by offering a mechanism for selective optimization.

Which brings me to the question of scale
limitations. An important
component of 4^{th} Law selection is the existence of
channels through which energy and
matter can flow. For living organisms, the channel walls
are highly impermeable. This feature distinguishes
living cells, for instance, from convection cells and most other non-living structures. Clearly it is difficult or impossible to maintain such barriers above a certain temperature. The flow of matter and hence energy is also
constrained as temperatures fall. For
terrestrial life, the temperature range so defined is roughly between the
freezing and boiling points of water.
Other forms of life might exist elsewhere in a different temperature
range, but it would still presumably be fairly narrow.

In my opinion, almost everyone I’ve read on the subjects of complexity and the origins of life is on the wrong track in seeking the answer in either rejecting or elaborating the 2^{nd} Law. The real problem is that a law is missing, the 4^{th} Law, and it is this “new” law that can provide the answers.

Footnotes and References:

^{1} Donald Haynie, *Biological Thermodynamics*.

^{2} Dilip Kondepudi and Ilya Prigogine, *Modern Thermodynamics*, John Wiley & Sons, 1998, pp. 88-90.

^{3} Erwin Schrödinger, *What Is Life? with Mind and Matter and Autobiographical Sketches*, Cambridge University Press, 1967, pp. 70-71. First published in 1944.

^{4} Ilya Prigogine, *The End of Certainty*,
The Free Press, 1997.

^{5} Benoit Mandelbrot, *The Fractal Geometry of
Nature*, W. H. Freeman and Company, 1983.