RISING COMPLEXITY

The concept of complexity weaves throughout the cosmic-evolutionary story. This is especially so now that we’re about to encounter animate life forms, which are not only more complex than any inanimate system but also increase in complexity across the history of life on Earth. Soon, we shall be confronted with a central question of considerable import: How did the neural network within human brains acquire the complexity needed to build societies, weapons, cathedrals, philosophies, and the like? For what we humans culturally create now—including Web sites like this one—is as much a part of cosmic evolution as the stars that fused the heavy elements or the planets that fostered the origin of life. Figure 5.20 attempts to capture our story yet again, this one a mural that artistically spans great swaths of space and time on the road to greater complexity.

FIGURE 5.20 – The road to greater complexity is fancifully rendered in this sweeping painting spanning the history of Earth and life on it—from our planet's origin at upper left, to the emergence of life at lower left and its coming ashore at lower right, reaching the present at upper right with a representative example of human intelligence—astronomers in an observatory high atop a mountain. (JPL)

While relating this CHEMICAL EPOCH, we seem to be crossing a boundary—that between non-life and life. Yet, in reality, there is no boundary here. To stress one of the main arguments of this Web site: While chronologically probing the arrow of time from the non-living to the living—from physics and chemistry to biology, mainly—systems of greater complexity have, each in its turn, emerged. But as also noted earlier, when examined closely, living systems do not differ basically from those non-living. Scientists have never found any evidence for a mystical élan vital that grants life some special "life force" or paranormal quality. Instead, all ordered systems seen in Nature differ, not in kind, rather only in degree—namely, degree of complexity. Just what do we mean by complexity and how does it rise over the course of time?

Roughly half-way between the big bang and humankind is a good place to take a finer, slightly more technical look at the course of action likely embraced by ordered systems while originating over the course of time. We have already met a wide range of physical, inanimate systems, including galaxies, stars, and planets; and we are about to meet a whole gamut of biological, animate systems, including plants, animals, and intelligent beings. How realistic is it that all these systems can be arrayed along a continuous, ranked spectrum of rising complexity with time? What are the mechanisms that have brought forth the impressive order and organization in such Nature writ large? Is it possible that all complex systems—from quarks to quasars, from microbes to minds—are the result of the same kind of underlying action, enabling us, if we could understand it, to unify all the sciences?

One way to address complex systems on a common basis, that is, “on the same page,” is to appeal to thermodynamics. That’s because, of all the known principles of Nature, thermodynamics has the most to say about the concepts of change and of energy, and especially the changes in energy that seem key to the origin and evolution of all ordered systems. Literally, “thermodynamics” means “movement of heat”; for our purposes here (and in keeping with the wider Greek connotation of motion as change), a more insightful translation would be “change of energy.”

Scientists often claim that the most cherished law in all of science is the so-called 2nd law of thermodynamics. (The first law merely states that total energy is conserved before and after any change—another of those central dogmas in science, this one of modern physics.) The second law dictates that randomness or disorder, the technical term for which is “entropy,” increases everywhere. In other words, Nature demands that a price be paid each and every time an energy transaction occurs. And that payment is in the form of less energy available to potentially drive change; energy is not lost, just rendered unavailable for useful work. This is true because heat (i.e., thermal energy) naturally flows from a hot source to a cold sink, whether among molecules in a gas, stars in a galaxy, or blood in our bodies. The net result is diminished energy differences, causing events to run down and gradients to even out, all the while entropy inevitably increases.
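That one-way bookkeeping can be made concrete with a few lines of Python (the reservoir temperatures and heat amount are illustrative round numbers, not drawn from the text): when heat Q flows from a hot body at temperature Th to a colder one at Tc, the net entropy change Q/Tc − Q/Th is always positive.

```python
# Net entropy change when heat flows from a hot reservoir to a cold one.
# The temperatures and heat amount are illustrative round numbers.
def entropy_change(q_joules, t_hot, t_cold):
    """Entropy gained by the cold side minus entropy lost by the hot side."""
    return q_joules / t_cold - q_joules / t_hot

# 1000 J flowing from a 400-K source to a 300-K sink:
ds = entropy_change(1000.0, 400.0, 300.0)
print(f"net entropy change: {ds:+.2f} J/K")  # positive, as the 2nd law demands
```

Because Tc is smaller than Th, the cold side always gains more entropy than the hot side loses; reversing the flow would make the result negative, which is exactly what Nature never permits in an isolated system.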

Nature does abhor a vacuum; some say it abhors a gradient of any sort. Generally, matter and radiation tend to disperse (like perfume escaping from a bottle, or heat released from a fire), that is, to invade places where they are not initially present—and once there, to seek a natural balance or evening out, an equilibrium. A simple example is a pendulum that eventually stops swinging once it has slowed enough to attain its middle, lowest position; while oscillating back and forth it has an uneven distribution of energy and can do work, whereas at rest it’s equilibrated and thus works no more. To shorten a much longer argument by giving another pair of examples, an imperative of thermodynamics’ second law is that a house of cards, once built, will tend to collapse with time; by contrast, a random collection of playing cards isn’t likely to assemble itself into some sort of structure. Likewise, water will of its own accord flow over a dam into a lake below, but has never been seen flowing back up to the top of the dam. Or an egg, once cracked open, will never flow back into its shell. These are classic examples of closed systems—those isolated from their surrounding environments—wherein events occur in only one direction. Nature is said to be irreversible and asymmetric.

By contrast, Nature also exhibits open systems whereby energy (and sometimes matter, too) enters from the environment outside such systems, as depicted earlier in Figure 1.24. And this active interaction with the environment beyond can make a great deal of difference to a system. We could, for instance, by exerting some energy (and some patience, too, which also burns energy), rebuild a house of cards; or reswing a pendulum clock by rewinding (i.e., energizing) it. A water pump could also be used to transport water from a low-lying lake to above a high-lying dam; but that also requires energy from outside the lake-dam system—again, energy obtained from the environment beyond. Usually, systems having energy flowing through them are not in equilibrium, which is why they are often also called open, non-equilibrium systems.

The infusion of energy (or matter) into any system can potentially yield organized structures. Disorder (or entropy) can actually decrease within open systems—which galaxies, stars, planets, and life forms most assuredly are—even though that disorder increases everywhere else in the Universe beyond those systems. Such “islands of structure” do not violate the 2nd law of thermodynamics because the net disorder of the system and its environment always increases. The energy needed to run a water pump—or any such device of our industrial civilization—is acquired at the expense of the environment, which is generally ravaged to power today’s technological society. Both drilling for oil and burning it disorder (or pollute) the environment more than the order (or edifice) gained by the energy needed to light a home or drive a car. Truth be told, the 2nd law of thermodynamics is not an environmentally friendly principle of Nature, yet it does allow organization to exist temporarily—for ~70 years typically for human beings, millions of years for species of life forms, and billions of years for stars and galaxies.

The energy used by humans to build anything—a house of cards, a table, chair, automobile, whatever—derives from the food we eat. We literally feed off our neighboring energy sources—locally plants and animals, and more fundamentally the Sun. The Sun, in turn, derives its energy from the Galaxy, specifically the conversion of the gravitational potential energy of its parent interstellar cloud into the heat that triggered stellar nuclear fusion. And our Galaxy, along with all the other galaxies, in turn owes its existence to gradients established in the early Universe—and to the resultant energy flows made possible by cosmic expansion that broke the primeval symmetry between matter and radiation, thereby establishing the non-equilibrium conditions ultimately needed for the growth of ordered systems everywhere.

Complexity Defined

Recall for a moment those extremely hot and dense conditions in the PARTICLE EPOCH of the early Universe. Until neutral atoms began forming—that fundamental phase change ~300,000 years after the big bang—matter and radiation were intimately coupled. Equilibrium prevailed during the Radiation Era as a single temperature was enough to specify both matter and radiation—a physical state lacking order or structure, indeed one characterized by maximum entropy or minimum information content. Equilibrated systems are simple systems, requiring little information to describe them.

And that brings us to the heart of the issue regarding complexity. In some ways, the complexity of a system is a measure of the amount of information needed to describe that system. Operationally, it’s also related to the amount of energy flowing through a system of given mass.

Complexity: a state of intricacy, complication, variety, or involvement, as in the interconnected parts of a structure—a quality of having many interacting, different components.

In the early Universe, the absence of a temperature gradient between matter and radiation mandated nearly zero information. There were then no structures, no appreciable order, no complexity beyond unclustered elementary particles zipping around in a uniform field of radiation. Rather, nearly everything was part of a homogeneous, chaotic frenzy in the aftermath of the big bang. One temperature, although declining rapidly, was sufficient to model the early history of the Universe, for the enormous density then produced so many collisions as to guarantee an equilibrium. Once matter and radiation decoupled, however, equilibrium was destroyed, symmetry broken, and the Matter Era began. Two temperatures were thereafter needed to describe the evolution of matter and radiation. Accordingly, a cosmic thermal gradient was naturally established, the essential result being a flow of energy available to go to work—indeed to potentially “build things.”

The very expansion of the Universe, then, drives order from chaos; the process of cosmic evolution itself generates information. How that order became manifest as galaxies, stars, planets, and life forms has not yet been deciphered in detail. But we can now appreciate how natural systems eventually emerged—ordered physical, biological, and cultural systems able to create and maintain information by means of localized reductions in entropy.

Furthermore, because the two temperatures portraying the Matter Era diverge—that is, their difference grows larger with time—their departure (even today) from thermodynamic equilibrium allows the cosmos to produce increasing amounts of information. That’s because energy flows also increase with departures from equilibrium, and with them the potential for the growth of order. We thereby seemingly have a way to understand, at least in general terms minus the details, the observed rise in complexity throughout the eons of cosmic time—not just stars and galaxies, but also structures as intricate as single cells or contracting muscles, let alone the neural architecture of human brains.
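This divergence can be sketched with the standard textbook scalings for an expanding Universe (an illustrative assumption, not the site's own equations): the radiation temperature falls as 1/a while the temperature of decoupled, non-relativistic matter falls faster, as 1/a², where a is the cosmic scale factor.

```python
# Illustrative textbook scalings (an assumption for this sketch): after
# decoupling, radiation cools as T ∝ 1/a while non-relativistic matter
# cools faster, as T ∝ 1/a².  Here a = 1 at decoupling, with T ≈ 3000 K.
T_DECOUPLING = 3000.0  # kelvins

def t_radiation(a):
    return T_DECOUPLING / a

def t_matter(a):
    return T_DECOUPLING / a**2

# The ratio of the two temperatures grows in direct proportion to a,
# so the relative departure from equilibrium keeps widening:
for a in (1.0, 10.0, 100.0, 1000.0):
    print(a, t_radiation(a), t_matter(a), t_radiation(a) / t_matter(a))
```

In this simple picture the ratio of the two temperatures equals the scale factor itself, so the relative departure from equilibrium widens even as both temperatures fall. (Real cosmic matter was later reheated by stars, so the sketch only illustrates the trend, not today's actual gas temperatures.)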

Chance & Necessity

Enough about inanimate, non-living objects. We are now well into the CHEMICAL EPOCH, on the threshold of life itself. And here we see more clearly the limited role of chance in Nature, as in all aspects of cosmic evolution. To be sure, chance cannot be the sole instrument of change. Determinism—meaning neither reductionism nor mechanism, rather simple obedience to precise, natural laws—must also play a part in all things that change.

Consider the precursor molecules of life’s origin, as noted earlier in this CHEMICAL EPOCH. Simple molecules such as ammonia, methane, water vapor, and carbon dioxide react with each other in the presence of energy to generate larger molecules. The end products aren’t just a random assortment of molecules; they comprise most of the two dozen amino acids and nucleotide bases common to all life on Earth. And regardless of how this chemical-evolutionary experiment is performed (provided the gases simulating our primordial planet are irradiated with realistic amounts of energy in the absence of free oxygen), the soupy organic matter trapped in the test tube always yields the same relative proportions of proteinoid compounds. The relevant point is that if the original reactants were re-forming into larger molecules by chance alone, the products would be among billions upon billions of possibilities and would likely vary each time the experiment was run. But the results of this experiment show no such diversity. Of the myriads of basic organic groupings and compounds that could possibly result from the random combinations of all sorts of simple atoms and molecules, only ~1500 are actually employed on Earth; and these groups, which comprise the essence of terrestrial biology, are in turn based upon only ~50 simple organic molecules, the most important of which are the above-noted acids and bases. Some factor other than chance is necessarily involved in the prebiotic chemistry of life’s origin, though one need not resort to mysticism to understand what's going on. That other factor is the electrical bonding influence naturally at work among the microscopic molecules—forces that guide and bond small molecules into the larger clusters engaged in life as we know it, thus granting the products some stability. 
Atoms arrayed in a molecular ring, for instance (such as the benzene molecule, or the rings central to all nucleotide bases), are a good deal more stable than linear arrays of the same atoms and molecules. And it doesn’t take long for reasonably complex molecules to form in these prebiotic experiments, not nearly as long as probability theory predicts for a chancy assembly of atoms. In short, the well-known electromagnetic force acts as a molecular sieve or probability selector, fostering only certain combinations while rejecting others, thereby guiding organization from amidst some of the randomness.

Molecules more complex than life’s simple acids and bases are even less likely to be synthesized by chance acting alone. For example, one of the simplest proteins, insulin, comprises 51 amino acids linked in a specific order along a molecular chain. Probability theory tells us the chances of randomly assembling the correct number and order of acids: Given that 20 amino acids are involved, the answer is 1/20⁵¹, which equals ~1/10⁶⁶. This means that the 20 acids must be randomly assembled 10⁶⁶, or a million trillion trillion trillion trillion trillion, times for insulin to form on its own. As this is obviously a great many permutations, we could randomly arrange the 20 amino acids trillions upon trillions of times per second for the entire history of the Universe and still not ever achieve by chance and chance alone the correct composition of this protein. Furthermore, to assemble larger proteins and nucleic acids, let alone a human being, would be vastly less probable if it had to be done randomly, starting only with atoms or simple molecules. Not at all an argument favoring supernaturalism, rather we once again gain an appreciation for how the natural forces of order tend to tame chance—much as was the case for the origin of galaxies in the earlier GALACTIC EPOCH, when Nature was also unable to form galaxies by chance and chance alone.
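The combinatorics are easy to check; the trial rate below is simply the illustrative "trillions upon trillions per second" figure from the text.

```python
# Checking the combinatorics: 51 sites, 20 possible amino acids at each
# site, hence 20**51 equally likely random sequences.
permutations = 20 ** 51
print(f"20^51 = {permutations:.2e}")  # ~2.25e66, i.e. ~10^66

# Even trying a trillion trillion (1e24) random sequences per second for
# the ~13.8-billion-year age of the Universe falls hopelessly short:
age_of_universe_s = 13.8e9 * 3.156e7   # years times seconds per year
trials = 1e24 * age_of_universe_s
print(f"total trials = {trials:.1e}")  # ~4e41, a factor of ~1e25 too few
```

Even at that absurd trial rate, fewer than one part in ten trillion trillion of the possible sequences would ever be sampled, which is the whole point: chance alone cannot do the job.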

These are classic cases, whether among atoms in astronomy or molecules in chemistry, of Nature’s twin actors—chance and necessity—jousting again with one another. And of the mechanism of selection at work as well—yet not to select “in” the “winners” as much as to select “out” the “losers.” The process of selection, guided mostly by the laws of physics, serves to eliminate systems, whether molecules or galaxies, that are incompatible with their changing environments. In all such phenomena, including changes among life itself, elements of chance are often present, but so are the deterministic physical laws that serve to constrain chance—to limit its effectiveness, to restrict its randomness, to ensure likely outcomes even in the presence of chance. The two operate in tandem, often triggering change in many types of systems (that’s the chance part), followed by nonrandom elimination of those systems that are not optimized to their newly altered environments (that’s the deterministic part). As is well known, chance, necessity, and selection are essential features of the modern Darwinian paradigm of biological evolution. But they have their roles to play in the inanimate world as well; indeed, the origins of these agents of change date back much earlier in the Universe.

Life Defined

On the threshold of the next BIOLOGICAL EPOCH, how are we to analyze living systems per se, including biological structure and function, let alone attempt to define life? Surely, entropy must decrease during life’s origin and evolution, for living systems are demonstrable storehouses of focused energy and much order. Once again, as earlier, thermodynamics is the key. As with other objects in the Universe, we can use the concepts of information content and energy flow to describe both the structural and functional aspects of biological organization, indeed to define life itself.

All things considered, biological systems are best depicted by their coherent behavior, for their maintenance of order requires a great number of metabolizing and synthesizing chemical reactions as well as a host of intricate mechanisms controlling the rate and timing of life’s many varied actions. But this doesn’t mean that life violates the 2nd law of thermodynamics, a popular misconception. Although living organisms manage to decrease entropy locally, they do so at the expense of their environment—in short, by increasing the overall entropy of the remaining Universe.

Living things are often said to circumvent temporarily the normal entropy process by absorbing available energy from their surrounding environment. But even “circumvent” is too strong a verb, implying that life is somehow outside the usual bounds of thermodynamics. In reality, living things extend the traditional study of what is really thermostatics into the realm of genuine non-equilibrium thermodynamics. They do so, during both their origin and their evolution, because of temperature gradients naturally established on Earth. What is the source of these thermal differences and ultimately of the energy utilized in the process of living? On Earth, it’s our Sun. Energy flows from the hot (~6000-K) surface of the Sun to our relatively cool (~300-K) planet. All of Earth’s plants and animals depend for survival on the Sun, whose energy can be converted to useful work. Plants photosynthesize by using direct sunlight to convert water and carbon dioxide into nourishing carbohydrates; animals obtain solar energy more indirectly by eating plants and other animals.
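The value of that solar gradient can be made concrete with a little entropy bookkeeping (an illustrative calculation, not one made in the text): each joule of heat carries entropy of roughly 1/T, so sunlight absorbed at ~6000 K arrives as low-entropy energy, while the same joule re-radiated by Earth at ~300 K leaves as high-entropy infrared.

```python
# Entropy carried per joule of heat at a given temperature is ~1/T (J/K).
# The 6000-K and 300-K figures are the ones quoted in the text.
T_SUN = 6000.0    # kelvins, solar surface
T_EARTH = 300.0   # kelvins, Earth's surface

entropy_in = 1.0 / T_SUN     # entropy imported per joule of absorbed sunlight
entropy_out = 1.0 / T_EARTH  # entropy exported per joule of emitted infrared
print(f"entropy exported per joule: {entropy_out / entropy_in:.0f}x the import")
```

For every joule that passes through, Earth exports twenty times the entropy it imports, which is precisely the margin that allows plants, animals, and the rest of the biosphere to build local order without ever violating the second law.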

By contrast, if left alone without any energy input, all living things, much like all else in Nature, tend toward equilibrium. While just twitching a finger (or merely thinking while reading this page), humans expend energy and eventually tire. Any action taken indefinitely, without further energizing, would drive us toward an equilibrium state of total chaos or orderlessness. That human beings manage to stay alive by steadily maintaining ourselves far from equilibrium is testimony to our evolved ability to handle optimally the flow of energy through our bodies. In point of fact, unachieved equilibrium can be taken as an essential premise, even part of an operational definition, of life, to wit:

Life: An open, coherent spacetime structure kept far from thermodynamic equilibrium by a flow of energy through it—a carbon-based system operating in a water-based medium, with higher forms metabolizing oxygen.

As listed in the Glossary of this Web site, the first part of this definition (up to the dash) is applicable to galaxies, stars, and planets, as well as to life. Only the second part of life’s definition is specific to living systems as we know them. This lengthy definition, not entirely unreasonable given the difficulties noted earlier while portraying life, has admittedly been contrived in order to diagnose all ordered systems, once again “on that same page.” Carefully crafted definitions can help to unify the sciences, admittedly a chief agenda of this Web site.

As humans, we maintain a reasonably comfortable steady-state posture by feeding off our surrounding energy sources, mainly plants and animals. We stress “steady-state” since, as noted for any open system, by regulating the rate of incoming energy and outgoing wastes, we can achieve a kind of stability—at least in the sense that while alive, we remain out of equilibrium by roughly a constant amount. In a paradoxical juxtaposition of terms, we might therefore describe ourselves as “dynamic steady-states.” Unfortunately, we waste much of the incoming energy while radiating heat into the environment; warm-blooded life forms are generally warmer than their surrounding air. Yet the emitted energy accords with thermodynamics’ second law, as Nature has its rules. By contrast, some of the incoming energy can power useful work, thereby helping to maintain order in our lives and bodies. Once this energy flow ceases, the dynamic steady-state is abandoned and we drift toward the more common, “static” steady-state known as death, where, following complete decay, our bodies reach a true equilibrium. Stated more pointedly: Once we stop eating, we die.

Here’s what happens in the food chain consisting of grass, grasshoppers, frogs, trout, and humans. According to the 2nd law of thermodynamics, some available energy is converted to unavailable energy at each stage of the food chain, thus causing greater disorder in the environment. At each step of the process, when the grasshopper eats the grass, the frog eats the grasshopper, the trout eats the frog, and so on, useful energy is lost. The numbers of each species needed for the next higher species to continue decreasing entropy are staggering: To support 1 human being for a year requires ~300 trout. These trout, in turn, must consume ~90,000 frogs, which in turn devour ~27 million grasshoppers, which live off ~1000 tons of grass. Thus, for a single human adult to remain “ordered”—that is, to stay alive—over the course of a single year, each of us needs the energy equivalent of tens of millions of grasshoppers or a thousand tons of grass.
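The arithmetic of this chain is just repeated multiplication; a quick sketch using the factor of roughly 300 per link implied by the text's figures:

```python
# Repeated multiplication down the food chain, using the factor of ~300 per
# link implied by the figures in the text.
trout_per_human = 300
frogs_per_trout = 300
grasshoppers_per_frog = 300

trout = trout_per_human                       # ~300 trout per person-year
frogs = trout * frogs_per_trout               # ~90,000 frogs
grasshoppers = frogs * grasshoppers_per_frog  # ~27,000,000 grasshoppers
print(trout, frogs, grasshoppers)
```

A loss factor of ~300 at each link means that only a fraction of a percent of the energy at one trophic level survives to the next, which is why the base of the pyramid must be so enormous.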

Humans, then, maintain order in our bodies only at the expense of an increasingly disordered, plundered environment. Every living thing, in fact, takes a toll on the environment. The only reason that the environment doesn’t decay to an equilibrium state is that the Sun continues to shine daily. Our whole biosphere comprises a non-equilibrium system that is subject to solar heating, thus engendering much environmental energy. Earth’s thin outer skin is thereby enriched, permitting us and other organisms to go about the business of living.

It’s worth pursuing this point a bit further. Suppose Earth’s atmosphere and outer space were to achieve thermal equilibrium. All energy flow into and out of Earth would cease, causing all thermodynamic events on our planet to decay within surprisingly short periods of time. A rough estimate shows that the reservoir of Earth atmospheric thermal energy would deplete within a few months, the latent heat bound in our planet’s oceans would dissipate in a couple of weeks, and any mechanical energy (such as atmospheric circulation and weather events) would damp in a few days. So be sure to place Earth’s energy budget into perspective; neither our planet’s primary source of energy nor its ultimate sink is located on planet Earth.

Not only is life, at any given moment, a reservoir of order, but evolution itself also seems to foster the emergence of greater amounts of order from disorder. As we shall see in the next BIOLOGICAL EPOCH, each succeeding (and successful) species becomes more complex and thus better equipped to capture and utilize available energy. The higher the species in a biological lineage, the greater the energy density fluxing through that species, and the greater the disorder created in the Earth-Sun environment. Alas, our principal source of available energy, the Sun, is itself running down as it “pollutes” interplanetary space with increasing entropy.

So use caution while regarding evolution as progress. Evolution fosters evermore complex islands of order at the expense of even greater seas of disorder elsewhere in the Solar System as well as in the Universe beyond.

Energy Rates

How valid is this reasoning? Can we actually examine all ordered systems—both living and non-living—on that "single page," or level playing field, so to speak? Indeed we can, and what’s more, most such systems do display increased complexity along the arrow of time. The flow of energy into and out of open, non-equilibrium systems is an integral part of the cosmic-evolutionary story, yet here we shall only sketch the main results. Suffice it to say that the concept of energy itself is a powerful unifying factor in science; energy may well be the most universal currency in all of Nature.

Let’s briefly evaluate the energy budgets of several ordered, structured systems experiencing physical, biological, and cultural evolution—namely, galaxies, stars, planets, plants, animals, brains, and society. For the first few of these, energy derives from the gravitational conversion of matter into heat, light, and other types of radiation, much as discussed in earlier epochs of this Web site. For plants, animals, and other life forms—such as those in this and subsequent epochs—energy derives from our parent star, the Sun. And for social systems, too, as explained in later epochs of this Web site, energy flow is the key driver that powers the daily work needed to run our modern, technological civilization.

Actually, we should concern ourselves less with absolute amounts of energy than with changes in energy—and especially with changes in the density of energy. After all, a galaxy has much more energy than any cell, but of course galaxies also have vastly larger sizes and masses. Rather, it’s energy density that best marks the degree of order or complexity in any system, just as it was radiation energy density and matter energy density that described events in the earlier Universe. Of even greater import is the rate at which energy transits a complex system of given mass. In this way—called normalization—all systems can be compared along a fair and even spectrum. The appropriate term, energy rate density, is familiar to astronomers as the light-to-mass ratio, to physicists as the power density, to geologists as the radiant flux, and to biologists as the metabolic rate. All scientists, each in their own specialties, label this term differently, yet all recognize its importance. Now, with today’s avowed intellectual agenda to unify the natural sciences, energy rate density usefully links many disciplines and its physical meaning is clear: the amount of energy passing through a given system per unit time per unit mass.
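The normalization is simple enough to carry out for two very different systems; the solar luminosity and mass below are standard values, while the 130-watt metabolism and 70-kg body are round assumed figures for illustration.

```python
# Energy rate density in erg s⁻¹ g⁻¹ (the cgs units astronomers favor).
# Solar luminosity and mass are standard values; the 130-W, 70-kg human
# figures are round assumptions for illustration.
ERG_PER_JOULE = 1e7
G_PER_KG = 1e3

def energy_rate_density(power_watts, mass_kg):
    """Energy flowing through a system per unit time per unit mass."""
    return (power_watts * ERG_PER_JOULE) / (mass_kg * G_PER_KG)

sun = energy_rate_density(3.8e26, 2.0e30)  # Sun: luminosity / mass
human = energy_rate_density(130.0, 70.0)   # human: metabolism / body mass

print(f"Sun:   ~{sun:.1f} erg/s/g")   # ~1.9
print(f"Human: ~{human:.0f} erg/s/g") # roughly 2e4
```

Normalized this way, a human body runs its energy through at roughly ten thousand times the Sun's rate per unit mass, in line with the ranking described in the text.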

Take stars, for instance, in particular an average star such as the Sun. Astronomers know well its luminosity and its mass, so it’s easy to compute its energy rate density. Again, this is energy flowing through the star, as gravitational potential energy during the act of star formation changes into radiation released as the mature star shines. Such a star utilizes high-grade energy during nuclear fusion to produce greater organization, but only at the expense of its surrounding environment, for the star also emits low-grade light, which, by comparison, is a highly disorganized entity. However, even this is a relative statement: What is termed here “low-grade,” disordering sunlight will, when reaching Earth, become a high-grade ordering form of energy compared to the even lower-grade (infrared) energy later emitted by Earth.

Furthermore, as stars evolve, their complexity grows. Cosmic expansion is not the only source of structural order in the Universe. On local scales, the evolution of gravitationally bound systems—which is what stars are—can also generate information. As described in the STELLAR EPOCH, stars are known to originate from dense pockets of gas and dust within chemically and thermally homogeneous galactic clouds. Initially a young star has only a relatively small temperature gradient from core to surface, and is normally composed of a nearly uniform mixture of ~90% hydrogen and ~10% helium, often peppered with trace amounts of heavier elements. As the star evolves, its core grows progressively hotter while adjusting its size like a cosmic thermostat, all the while nuclear fusion reactions change its lightweight nuclei into heavier types. With time, such an object grows thermally and chemically inhomogeneous, as the heat and heavy nuclei build up near the core. The result is an aged star that has gradually become more ordered and less equilibrated—indeed one for which more information is needed to describe it, since a complete description of a thermally and chemically differentiated system requires more data than an equally complete description of its simpler, initially homogeneous state.

Planets are more complex than typical stars (or galaxies), hence they tend to have larger normalized energy flows. For example, the energy rate density that drives Earth’s climasphere—the most impressively ordered inanimate system at the surface of our planet today—is ~100 times that of a typical star or galaxy. (The climasphere comprises the lower atmosphere and upper ocean that together guide meteorological change capable of mechanically circulating air, water, wind, and waves.)

Living systems need even larger energy densities, not surprisingly since any form of life is clearly more ordered than any nonliving system. Photosynthesizing plants use nearly 1000 times the energy of a star, and human beings consume a daily food ration ~20 times more than that—provided, we repeat, those energy flows are normalized to each system’s mass. Although the total energy flowing through a star is hugely larger than that through a human body, the energy rate density is much larger for the latter—a fact surprising though true when comparing ourselves with stars.

In turn, our brains utilize nearly 10 times more energy yet again—all told, >100,000 times the rate of a star, “pound for pound.” Such a high metabolism for our human heads, mostly to maintain the electrical actions of countless neurons, testifies to the disproportionate amount of worth Nature has invested in cerebral affairs. Occupying ~2% of our body’s mass yet using nearly 20% of its energy intake, our cranium—the most exquisite clump of matter in the known Universe—is striking evidence of the superiority, in evolutionary terms, of brain over brawn.
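The roughly tenfold jump quoted here follows directly from those percentages, as a short check with round assumed numbers shows (a 130-watt whole-body metabolism and a 70-kg adult, implying a 1.4-kg brain):

```python
# Checking the "2% of mass, 20% of energy" claim with round assumed numbers:
# a 130-W whole-body metabolism and a 70-kg adult (hence a 1.4-kg brain).
body_power = 130.0                 # watts
body_mass = 70.0                   # kilograms
brain_power = 0.20 * body_power    # the brain's ~20% share of the energy budget
brain_mass = 0.02 * body_mass      # the brain's ~2% share of the mass (1.4 kg)

body_rate = body_power / body_mass     # W/kg for the body as a whole
brain_rate = brain_power / brain_mass  # W/kg for the brain alone
print(f"brain runs at {brain_rate / body_rate:.0f}x the body's rate")
```

The particular wattage and mass cancel out of the ratio; any body devoting 20% of its energy to 2% of its mass runs that organ at ten times the whole-body rate.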

And currently topping the complexity spectrum, civilization en masse—an open system of all humanity comprising modern society going about its daily, energy-rich business—displays an energy rate density nearly 10⁶ times that of a star or galaxy. The global ensemble of >7 billion inhabitants, despite its many sociopolitical ills, works collectively to fuel and operate our modern technological culture as an open, elaborate, social system. A marvelous example of the whole equaling more than the sum of its many parts, a group of brainy organisms functioning together is more complex than all of its individuals combined.

Figure 5.21 sums up all this technicality, plotting the energy rate densities for a whole spectrum of ordered systems that have emerged over the course of all of natural history. The rise in energy flow per unit mass is plotted as horizontal histograms, starting at those times when various open structures emerged in Nature. Note how the increase in complexity has been rapid in the last billion years or so, much as expected from human intuition. It’s quite remarkable that such a wide range of complex systems can be treated in the same manner on a single graph.

FIGURE 5.21 — A logarithmic plot of energy flow per unit mass for a wide range of ordered systems compactly depicts the rise of complexity along the arrow of time. The solid curve approximates the increased energy rate density for galaxies, stars, planets, and life forms throughout all of Nature.

This doesn’t imply, by any means, that one type of ordered system evolves into another. Stars don’t evolve into planets, any more than planets evolve into life, or plants into animals or animals into brains. Rather, new and more complex structures occasionally originated over time as energy flows became more prevalent yet more localized. Galaxies gave rise to the environments suited for the birth of stars, some stars spawned environments conducive to the formation of planets, and at least one planet fostered an environment ripe for the origin of life.

Thus, complexity grows as more intricately ordered systems emerge, in turn, throughout natural history, much as their surrounding environments are inevitably ravaged with rising entropy. The products of evolution are fleeting, the evolutionary process itself messy and undirected. When energy flows cease, systems decay back to the equilibrium whence they came: when the Sun stops shining it’s no longer a star; when the biosphere’s energy flow ceases, plants die; when humans starve we perish; all return their elements to Nature. Remarkably, the two—complexity locally and entropy globally—can both increase and simultaneously so. The temporarily ordered systems that so impressively comprise the diverse actors in the cosmic-evolutionary story are wholly consistent with, indeed best understood by, the most cherished of all physical laws—namely, the thermodynamic principles that undergird the arrow of time itself.
