
Order, Chaos and the End of Reductionism

(Further Ruminations of an Amateur Scientist)

By John Winders

z′ = zⁿ + c
Note to my readers:
You can access and download this essay and my other essays through the Amateur
Scientist Essays website under Direct Downloads at the following URL:

https://sites.google.com/site/amateurscientistessays/

You are free to download and share all of my essays without any restrictions, although it
would be very nice to credit my work when quoting directly from them.
The image below was generated by cellular automata. The pattern evolves downward from an
Alpha Point at the top of the image. Each pixel in a row is determined by the neighboring pixels in the
preceding row, following simple rules of modulo-2 arithmetic. Modulo-2 arithmetic is highly
non-linear, and non-linear processes produce order and chaos. Projecting the top-to-bottom
evolution as a 2-dimensional image, complicated large-scale order seems to emerge from simple
localized processes.

The image below was generated by the Mandelbulb Generator computer program. The surface
surrounding this strange object is the boundary that separates order from chaos. Points inside the
surface represent order (included in the Mandelbrot set) and points outside the surface represent
chaos (excluded from the Mandelbrot set). Order and chaos thus mirror each other.

The image below is the barred spiral galaxy NGC1300 taken by the Hubble space telescope. The
color rendering was inverted to produce the color-on-white image. The large-scale order is largely
a result of interactions involving gravity and inertia. According to reductionist thinking, entropy
can only produce randomness and disorder. Erik Verlinde has discovered that gravity and inertia
both emerge from entropy. Thus, a post-reductionist interpretation of this image is the balance
between the tendency of gas molecules to fly apart and the tendency for them to collapse; both of
these tendencies are driven by a single entropic force.

The image below is an actual photograph of a DNA strand. DNA has the most highly-organized
naturally-occurring structure known; current scientific theories based on reductionism cannot fully
explain it.

The image below is the famous painting The Great Wave off Kanagawa by the Japanese artist
Katsushika Hokusai. It captures the essence of order from chaos. Notice the self-similarity and
scale-invariant features of the breaking wave, which are the fundamental properties of fractals.
Also notice the similarity between the rising wave in the foreground and the snow-covered
mountain in the background. Hokusai was keenly aware of the fractal-like patterns found
throughout nature. This raises the prospect that these patterns are reflections of fractal properties
of space itself. It is possible to mathematically construct a Mandelbrot set using quaternions; the
set would be a finite 4-dimensional solid enclosed by a fractal boundary having three dimensions
with an infinite volume. Could our 3-dimensional space be a fractal-like boundary that separates
order from chaos in a higher dimension?

The image below is the strange stationary hexagonal feature at the north pole of Saturn taken by
the Cassini orbiter in 2012. It was first seen in the 1980s by the Voyager flyby missions. An
unknown self-organizing mechanism is responsible for sustaining the formation. (Credit: NASA)

This image captures natural order and chaos that spring within the fractal boundary we live in.
The chaotic water jet splashing over the urn gives way to the orderly laminar flow down along the
sides. The same fundamental law, which maximizes the total degrees of freedom of the universe,
governs the laws of fluid dynamics and the self-organizing principle expressed in the plants and
flowers that surround the urn.

The image of the Mandelbrot set, depicted below, has infinite complexity. Yet the set itself is defined
by a very simple numerical algorithm: each point on the complex-number plane generates a series
of numbers that either converges ("yes") or diverges ("no"). The colors assigned to the "no"
points below indicate how close they are to approaching a "yes." The results are the same every
time a Mandelbrot set is created using this method, so the set is completely deterministic.

On the other hand, information is a measurement of uncertainty. Therefore, although the image of
a Mandelbrot set may be infinitely complex, it actually contains very little information because it's
completely deterministic. This example illustrates an important distinction between information
and complexity, which are often conflated.

The image of a Mandelbrot set reveals a fractal pattern. Such patterns are ubiquitous in nature, but
natural processes are not mathematically deterministic. Uncertainty, the underlying source of
information, is the engine that drives most (or all?) natural processes. Time is the necessary
ingredient that allows the universe to evolve into the complex and information-rich reality we
experience.

Note: The drawing on the cover is an example of Penrose tiling, generated by a computer program
provided by Craig S. Kaplan of the University of Waterloo in Ontario, Canada. This particular
example was generated by varying the program's parameters until they were almost at the
borderline of order and chaos.
This essay is a companion piece to Is Science Solving the Reality Riddle? (Cogitations of an
Amateur Scientist). I considered adding yet another appendix to Reality Riddle, but repeatedly
fooling around with it was starting to get ridiculous. So I decided to encapsulate some ideas in a
separate piece instead (this one). In case you're interested in knowing the genesis of these ideas, I
suggest reading over Reality Riddle first.

I'll start off with a dictionary definition of reductionism:

reductionism
1: explanation of complex life-science processes and phenomena in terms of the laws of physics
and chemistry; also: a theory or doctrine that complete reductionism is possible
2: a procedure or theory that reduces complex data and phenomena to simple terms
reductionist noun or adjective
reductionistic adjective

I'd like to concentrate on the second definition first. The basic idea is that the whole is equal to the
sum of its parts. I'm what you might call an anti-reductionist, because I think the whole is greater
than the sum of its parts, and usually it's a lot greater. Unfortunately, the hard sciences, such as
physics and chemistry, and almost all of engineering fall into the reductionist camp. This started
back before Isaac Newton, but he was the one who really gave it legs. Scientists knew that planets
revolved around the sun before Newton, and they even had a pretty good idea of how they moved.
They just didn't have a clue as to why they moved the way they did. Johannes Kepler accurately
described planetary motions in a set of three laws, but he was a little fuzzy about why these laws are
true. Oh, he did have a theory, described in a document called the Mysterium Cosmographicum,
which seems to be a weird mixture of Platonism, astrology, Biblical doctrine, and maybe even
alchemy. But that doesn't resemble anything like a sound theory according to modern physics.

Then in 1687, Newton came up with his laws of motion and gravity that he published in his
Philosophiæ Naturalis Principia Mathematica, or just Principia for short. He even invented the
calculus to help scientists and engineers work with his theories.1 Way to go, Sir Isaac! The big
breakthrough came when Newton realized that the same laws that govern apples falling on the Earth
also apply to motions of the Moon and the planets. This also reinforced the idea that natural
processes can be described by mathematics, specifically linear equations, and more specifically
differential equations. This idea became an obsession among scientists, and reductionism hinges on
the notion that nature obeys mathematics; however, I think it's more accurate to state that
mathematics sometimes mimics nature.

Since Newton's time, science and mathematics have been inextricably linked. Every breakthrough
in mankind's understanding of nature has been accompanied by a scientific theory couched in the
language of mathematics. Today, it's the other way around: mathematics is leading science by the
nose. Today, it's virtually impossible to express scientific thought in any language other than
mathematics. I feel that this is becoming a stumbling block of science.2

1 Actually, he co-invented the calculus along with Gottfried Wilhelm Leibniz, whose notation was adopted by
mathematicians, and is the standard way calculus is taught in high schools and colleges. Newton accused Leibniz
of plagiarism, even though Leibniz published his version first.

There was great scientific progress in early part of the 20th century, beginning with Albert Einstein's
special theory of relativity and the quantum theory of light in 1905, followed up by his theory of
gravity expressed by general relativity in 1915. Einstein's breakthrough with the quantum theory of
light was further developed by a notable cast of characters beginning roughly in the 1920s.3 I'm not
going to repeat the well-documented history of these events, other than to point out that relativity
and quantum mechanics came at reality from completely different directions, and are in many ways
completely incompatible with each other. This led Einstein and others to try to merge or unify
quantum physics with general relativity. So far, these attempts have been completely unsuccessful.
In my opinion, the problem of unification lies mainly with general relativity, because it is still a
classical-deterministic theory.4 Experiments have shown time and again that reality does not always
obey classical-deterministic rules. As I stated often in Reality Riddle, general relativity is a good
conceptual tool that describes many phenomena very accurately on fairly small scales, as long as
the curvature of space-time isn't carried to extremes. The mathematics begins to fall apart as
indicated by infinities and time anomalies that pop up when it is (mis)applied to extreme
gravitational conditions or when trying to solve the state of the entire universe.

Physicists believe that the unification of general relativity with quantum field theory will ultimately
result in a Quantum Theory of Gravity. That theory requires a hypothetical elementary particle
known as the graviton, the force carrier of gravity. So far, this particle has not been seen in the
wild, but its quantum-mechanical properties are pretty well established. Its range is infinite and it
must travel at the speed of light, so it can't have any mass, and in order to fit into the standard
model, it must be a boson with a spin of 2.5 One of the strange things about gravity is that it cannot
be shielded or blocked. If you stand behind a wall of solid lead (or a solid wall of anything, for that
matter), the force of gravity will go right through it. So the graviton must also have infinite
penetrating power, which is a rather unusual property among elementary particles. ;)

Unfortunately, coming up with a quantum theory of gravity involves a lot more than just plugging a
graviton into quantum field theory, or turning gravitons loose to zoom around in 4-dimensional
space-time. As I stated in Reality Riddle, there seems to be a problem with properly incorporating
rotation into general relativity, which might actually point to a bigger problem. Einstein apparently
believed that there are no inherent, qualitative differences between rotating objects, which have
centripetal acceleration, and objects that accelerate in straight lines. But I suspect there really are
qualitative differences between them. For one thing, an object that accelerates in a straight line
needs to be pushed by something else; otherwise it just stops accelerating.6 A rotating object, on the
other hand, accelerates centripetally without any help from the outside. That's one qualitative
difference. Another qualitative difference is that linear acceleration is equivalent to a gravitational
field; however, there doesn't seem to be any plausible gravitational equivalence to centripetal
acceleration. My suspicion is that the failure to recognize these inherent, qualitative differences
resulted in an incomplete theory. This causes anomalies like backward time travel when the general

2 This was one of the basic themes in Is Science Solving the Reality Riddle? (Cogitations of an Amateur Scientist).
3 These included Einstein himself, along with Niels Bohr, Max Born, Satyendra Nath Bose, Louis de Broglie, Arthur
Compton, Paul Dirac, Werner Heisenberg, David Hilbert, Enrico Fermi, Max von Laue, John von Neumann,
Wolfgang Pauli, Max Planck, and of course Erwin Schrödinger.
4 It is also very much a reductionist theory, which is another fallacy.
5 Mass particles, such as electrons, protons, and neutrons, are fermions. They have spins that are odd multiples of ½,
and they obey Pauli's exclusion principle. Force carrier particles, such as photons, gluons, and such, are bosons.
They have spins that are either zero or even multiples of ½, and they don't obey Pauli's exclusion principle.
6 An accelerating rocket pushes on the gas escaping the rocket nozzle. The gas pushes back on the rocket according
to Newton's third law of motion, causing it to accelerate.

relativity field equations are solved for cases where there are spinning motions.

Here's another clue: the fundamental constant in quantum mechanics is Planck's constant, ℏ. This
constant has units of angular momentum or spin. The energy of a body in periodic motion is
quantized, as given by the formula E = ℏω, where ω is the angular frequency of oscillation. Planck's
constant also shows up in Schrödinger's wave function, which is a periodic function. Periodic
motions and spin are closely related. Therefore, it seems that spin is the one ingredient that
automatically provides quantization, and I have a hunch that a quantum theory of gravity might
emerge naturally if spin could be properly incorporated into general relativity and baked into it from
the very beginning.

At the end of the 19th century, the Industrial Revolution had transformed the western world, science
and mathematics had triumphed, and it appeared that nothing further could be invented or
discovered. This was the prevailing reductionist fantasy, expressed earlier in the century by the
physicist Pierre-Simon Laplace:

Consider an intelligence which, at any instant, could have a knowledge of all forces controlling
nature together with the momentary conditions of all the entities of which nature consists. If this
intelligence were powerful enough to submit all this data to analysis it would be able to embrace in
a single formula the movements of the largest bodies in the universe and those of the lightest atoms;
for it nothing would be uncertain; the future and the past would be equally present to its eyes.

By Laplace's time, science had pretty much worked out the movements of the largest bodies in the
universe and those of the lightest atoms, thanks to Newton's laws. So all that needed to be done was
to collect the momentary conditions of all the entities (plugging in the boundary conditions) and
turn the crank. Past, present, and future would be revealed in all their glory.

Of course, the remarkable progress in the early 20th century laid waste to the naive notion that there
was nothing left to discover or invent. But in the early 21st century, it's déjà vu all over again.
Some scientists actually think that unifying quantum theory with relativity, possibly through string
theory, is the only piece of the puzzle that's missing. Like the intelligence in Laplace's fantasy, whoever
finds that missing puzzle piece will see the entire past, present and future revealed: how the universe
began in minute detail, its entire evolution, and how it will end. It might even reveal the origin
of life itself. Well, here's what I think: when and if a unified theory is unveiled, it won't be the end
of science, but it very well might be the end of reductionism. I'll now try to explain the reasoning
behind that statement.

First, it will be helpful to give a very broad overview of the two physical theories that scientists are
attempting to merge. Einstein's theory of relativity can be expressed by the following mathematical
equation, which links the curvature of space to the concentration of mass-energy as follows:

R_μν − ½ g_μν R + Λ g_μν = (8πG/c⁴) T_μν

This is called the Einstein field equation. I'm not going to attempt to explain exactly what each of
the terms means, other than the fact that R, g, and T are what are known as tensors. Tensors are
geometric objects that express linear relationships among objects, in this case in four dimensions.
Although Einstein's field equation is somewhat similar to an ordinary linear differential equation, it
is in fact nonlinear, so it is devilishly difficult to solve except for the most simple cases.

Quantum mechanics can be similarly summarized by a single equation, known as the time-dependent
Schrödinger equation, shown below.

iℏ ∂Ψ(r, t)/∂t = [−(ℏ²/2m)∇² + V(r, t)] Ψ(r, t)

Again, I'm not going to explain all of the terms, other than to say that it is a second-order
differential equation of the variable Ψ, which varies over time and distance; i.e., it's a wave. The Ψ
wave itself has no physical meaning; it simply exists in space and time.7 Yet this immaterial
wave mysteriously orchestrates the movements of all material objects from electrons to planets.
The Schrödinger equation, unlike Einstein's field equation, is linear and can be readily solved.
What is meant by linear? Well, the equation z = x + y is linear because the value of z is simply
the sum of its parts, x and y. The Schrödinger equation is linear because its components simply
add. If two wave functions, Ψ₁ and Ψ₂, were to overlap in space, the resulting wave function would
be the sum of the two because space is presumed to be linear. You would get an interference
pattern, but you could still decompose the pattern into its constituent parts. This also makes it
possible to apply mathematical tools, such as Fourier analysis, which are used to break down
complicated functions into sums of much simpler functions such as sine waves.
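Here's a rough Python sketch of that idea (the specific frequencies, amplitudes, and the 0.1 threshold are arbitrary choices): two sine waves are added together, and a Fourier transform pulls the constituent frequencies back out of the combined wave.

    import numpy as np

    # Linearity in action: the combined wave is just the sum of its parts,
    # and Fourier analysis recovers those parts from the interference pattern.
    t = np.linspace(0.0, 1.0, 1000, endpoint=False)    # one second sampled 1000 times
    psi1 = np.sin(2 * np.pi * 5 * t)                   # a 5 Hz wave
    psi2 = 0.5 * np.sin(2 * np.pi * 12 * t)            # a 12 Hz wave, half the amplitude
    combined = psi1 + psi2                             # superposition: simple addition

    spectrum = np.abs(np.fft.rfft(combined)) / len(t)  # amplitude at each frequency
    freqs = np.fft.rfftfreq(len(t), d=t[1] - t[0])
    print(freqs[spectrum > 0.1])                       # prints [ 5. 12.]: the parts are recovered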
The equation z = x² + 2xy + y² is nonlinear because the whole, z, is not the sum of its parts, x and y.
If space were nonlinear, two overlapping wave functions would combine in ways that would make
it impossible to decompose the resulting wave into its parts. This would render most situations
unanalyzable. In order to have any chance of analyzing a physical process mathematically, the
process must be linear. Therefore, all physical processes that scientists analyze are assumed to be
linear.
String theorists call the ultimate theory of reality M-Theory, although nobody really knows what the
M stands for. For the lack of a better name, I'll stick to the term M-Theory as well. It's almost
certain that M-Theory must be expressed mathematically, since pure mathematics is the only
driving force behind it at the moment. This means that no matter what form M-Theory takes, the
underlying assumption is that reality is linear. But what if it isn't? In that case, although M-Theory
might successfully describe many things, it won't describe everything, which was the original
purpose for developing it in the first place. But if that's true, then physicists will discover to their
horror that M-Theory was actually a dead end. They will have no choice but to scrap the notion that
reality is linear or that it can be expressed through mathematics, or at least using the kinds of
mathematics we presently use. In other words, scientific principles will change in significant ways,
forcing us to abandon reductionism and look for other kinds of answers.
Now saying that reality is nonlinear is a pretty sweeping statement, but I'm convinced it's true. The
simple reason is that there is order in the universe, and order can only arise naturally through
nonlinearity. We kind of take order for granted, but it's really a very deep mystery because
according to the second law of thermodynamics, order shouldn't exist at all.
First, we need to explore the concept of entropy. When James Watt developed his improved steam engine
around 1765, he didn't have a clue about thermodynamics. He just knew that steam makes pressure
and by condensing steam you make a vacuum; and if you put pressure on one side of a piston or a
vacuum on the other side, you can make the piston move back and forth; and you can make a
moving piston turn a wheel by using rods. Scientists started to study heat analytically, and they
conjured up a bunch of laws they called thermodynamics. The second law of thermodynamics

7 The wave function Ψ is expressed as a complex variable, having a real and an imaginary part. Its conjugate, Ψ*,
changes the sign of the imaginary part from a plus to a minus or vice versa. The product ΨΨ* is a real number, and
that does have a physical meaning: it's the probability density function of a particle, or the likelihood of finding the
particle within a given region of space and time.

states that heat always flows from hot objects to cold objects. Well, duh. That sort of seems
obvious to most people, but it has some very significant ramifications.
Scientists in the late 18th and early 19th century became obsessed by steam, for good reason, because
steam had completely transformed their civilization by ushering in the Industrial Revolution. They
studied steam from every possible angle, and calculated all of its properties, including temperature,
pressure, enthalpy, and a mysterious property known as entropy.
In 1803, Lazare Carnot came up with the notion of entropy, whereby all physical systems have the
tendency to lose useful energy. The concept of entropy was further developed by his son, Sadi, who
viewed production of work by a heat engine as coming from the flow of a substance called caloric,
like the flow of water through a waterwheel. In the ideal Carnot cycle, the system is returned to its
original state, so the cycle is theoretically reversible. When a process is reversible, then the entropy
of the system remains constant, but if a process is irreversible, some of its ability to do work is lost
and the entropy increases. Increasing entropy means decreasing ability to do work.
When heat flows from a hot object to a cold object, it is an irreversible process and entropy
increases. In a reversible process like the ideal Carnot cycle, entropy stays constant. But in neither
case does entropy decrease. Thus, the second law of thermodynamics can be stated as follows: "In
an isolated system, entropy never decreases."
In 1877, Ludwig Boltzmann came up with a way to express entropy as a statistical property, which
became the modern way of working with entropy. He defined entropy as the logarithm of the
number of states a system can have times a constant, known as the Boltzmann constant. The second
law of thermodynamics is just another way of saying that all physical systems tend to move toward
their most probable states, which shows up as increased entropy. Viewed in that context, entropy
can be thought of as measuring disorder or randomness.
This led to a very depressing state of affairs, however. Physicists soon realized that the entropy of
the entire universe is increasing, which means that the universe is constantly winding down. This
ultimately will lead to a condition known as heat death. This doesn't mean that heat will vanish;
it only means that the universe will reach a state of thermodynamic equilibrium where heat can no
longer produce useful work. But this doesn't just apply to heat; it applies to everything. Stars will
burn up all their nuclear fuel, all radioactive materials will decay, and everything will be in a perfect
state of equilibrium and maximum entropy where nothing ever changes.
The prospect of heat death as the ultimate fate of the universe is a direct result of reductionism.
Based on the underlying assumption of linearity, where the whole is equal to the sum of its parts, there
can be no other outcome. The second law of thermodynamics is relentless, driving the universe to a
bland, featureless, and dead state. In fact, a reductionist universe is dead already. But a reductionist
universe is also contrary to the obvious fact that order does, in fact, exist in the universe.
So where does order come from? Surprisingly, it comes from the very same processes that produce
chaos. Order and chaos are actually twins, although they're fraternal and not identical. I'll explain
all that a little further ahead. But how do order and chaos relate to entropy? More specifically, how
can order arise when the second law of thermodynamics states that entropy, or disorder, always
increases? Well, actually viewing entropy as simply disorder is somewhat of a misconception. In
the 1940s, Claude Shannon developed the modern theory of information.8 After studying
information in detail, he came up with the astounding conclusion that information and entropy are
really the same thing!

8 Shannon's work at Bell Labs followed his work on code decryption during WWII. The people at Bell Labs were
interested in sending signals through noisy channels, which tends to corrupt signals. Through clever encryption,
Shannon proved it was possible to send signals error-free as long as the information rate is kept below a certain
threshold. This led to error-correcting codes, making modern communication systems and computers possible.

This leads to an interesting corollary to the second law of thermodynamics, namely that information
cannot be destroyed. Physicists, led by Stephen Hawking and Leonard Susskind, have concluded
that entropy is "hidden" information. I'm not sure I agree with the "hidden" part, but I guess they
have their reasons for saying that.9 I have a slightly different interpretation. Information is
constantly being created in the Now, which becomes permanently stored as the Past. We sense the
passage of time as information being added to the universe. You could think of the Past as a filing
cabinet being filled with information, but that information can only be perceived in the Now. The
Future is nothing but an empty filing cabinet with no information in it at all, so our sense of Future
is merely a mental extrapolation based on what has already taken place and what is taking place. So
only Now truly exists, which represents the totality of all changes taking place and influenced by
the Past.
Shannon showed that information is fairly easy to quantify, drawing similarities with Boltzmann's
formula for entropy. The hard part is assigning a qualitative value to information. Information is
neither good nor bad but certain kinds of information seem more meaningful than other kinds.
I think that is where order and chaos come into play.
Creationists argue that evolution isn't possible because it would violate the second law of
thermodynamics. In the face of entropy, how could life forms have arisen, becoming more and
more complex over time, unless they were created and fashioned by a conscious and willful divine
Entity? Reductionists like Carl Sagan argued that life arose through a random process; if atoms
keep banging into each other over a sufficiently long time,10 they will eventually form DNA
molecules. If you keep randomly shuffling a deck of cards, it will eventually arrange itself in
perfect ascending order. Could random natural processes possibly account for the incredible
complexity of life? Reductionism says yes.
To me, the creationism argument is a false dichotomy. It's not a choice between increasing order or
increasing entropy; both can increase together, and in fact they actually do just that. Think of a
river that flows downhill due to the force of gravity alone. Imagine that the river bed is filled with
rocks, logs, and other debris and that the river banks are very uneven. Now the general flow of the
river is always downhill, but you will see eddies and whirlpools here and there. Now for the most
part, those eddies and whirlpools don't move downstream. In fact, some of them may even move
upstream momentarily. Now would you say those eddies and whirlpools defy the law of gravity?
Of course not. The water molecules always move downhill, but the features of the river don't
necessarily have to. The gravitational force is actually what causes those features to form in the
first place, along with the highly nonlinear process known as fluid turbulence. Turbulence produces
unpredictable, chaotic motions that somehow arrange themselves into stable, ordered features. Very
mysterious, no?
Entropy, order, and chaos work in much the same way. Entropy is the engine that keeps the
whole process moving. Yes, the system as a whole (the universe) will tend to move toward the most
probable state, thereby increasing its entropy. But although the universe began in a very
improbable, low-entropy state and is currently winding down, nonlinear processes abound in nature.
These processes create chaos, which is completely unpredictable. And it is chaos that nudges the
universe into creating the beautiful order and structure seen everywhere. The universe isn't dead.
It's very much alive and it's engaged in an incredibly rich and diverse creative process.
Well, how does this process actually work? What's the mathematical equation that governs it?
Well, I'm afraid I can't describe the process through a single equation; maybe nobody can. But I

9 This came about by studying what happens when objects are dropped into black holes. If all information about
them is erased, then this would violate the second law of thermodynamics. Hawking and Susskind concluded that
the information isn't lost; it becomes encoded or hidden as entropy on the black hole's event horizon.
10 Or as Sagan would say, After billions and billions of years.

can describe some examples of how this process can work on paper. Benoit Mandelbrot was a brilliant
engineer/mathematician who spent much of his career studying how order comes from chaos,
although he didn't describe it quite that way. He published his results in Fractals: Form, Chance
and Dimension. His ideas were not widely understood by the scientific community, at least initially.
But his was a case of someone who was very much ahead of his time; thanks to Mandelbrot, fractals
have become a vibrant field of study.
Here's one of the ways the process works. Take the formula z′ = z² + c. The first thing we note is
that the expression on the right side is nonlinear, owing to the z² term. The z′ (z prime) stands for a
new value of z based on the old value of z in the right side of the equation. Thus, the formula also
contains feedback. The value of c is a number that we want to test using the formula. Next, we let
z, z′, and c be complex numbers. Now don't get scared or flustered by that. It just means that each
of them has a real and an imaginary part. Using the rules of complex algebra, we can rewrite the
formula as two separate formulas:
z′(real) = z(real) × z(real) − z(imaginary) × z(imaginary) + c(real)
z′(imaginary) = 2 × z(real) × z(imaginary) + c(imaginary)
Now, we can plot complex numbers as points on an x-y graph: the real parts correspond to x
values and imaginary parts correspond to y values. We pick a real and imaginary value for c, say
(0,0), and plug it into the formula to calculate z′. We feed z′ back into the equation as z, and repeat
the calculation over and over. Now one of two things will happen: a) the value of z will become
chaotic and zoom out of the x-y plane, or b) the value of z will settle down to a nice, predictable set
of values that keep repeating. If a) happens, then c is thrown out. If b) happens, then c becomes
part of the Mandelbrot set, and we plot the real and imaginary parts of c on our x-y graph.
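Here's a minimal Python sketch of that test (the iteration limit of 100 and the escape radius of 2 are conventional but arbitrary choices):

    # Decide whether a point c belongs to the Mandelbrot set by iterating z' = z^2 + c.
    def in_mandelbrot_set(c, max_iterations=100):
        z = 0 + 0j
        for _ in range(max_iterations):
            z = z * z + c              # nonlinear feedback: the new z depends on the old z
            if abs(z) > 2:             # once |z| exceeds 2 the values zoom off to infinity
                return False           # chaotic: c is thrown out of the set
        return True                    # stable: c is (tentatively) part of the set

    print(in_mandelbrot_set(0 + 0j))   # True: the point (0,0) stays put
    print(in_mandelbrot_set(1 + 0j))   # False: the point (1,0) escapes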
Over many trials involving different values of c, a distinct and very beautiful 2-dimensional pattern
emerges. The pattern is a fractal that has very unusual properties. I'm not going into those
properties here,11 but the point is this: the formula that is used to generate the fractal creates both
chaos and order at the same time. The chaos consists of unstable numbers that are not part of the
set; the order consists of stable numbers that are part of the set. Chaos and order, Yin and Yang.

The process of making a fractal is a type of self-ordering process. People who have studied self-
ordering processes have identified three necessary conditions: 1) the system cannot be in a state of
equilibrium, 2) there must be at least one degree of freedom, and 3) nonlinearity must be present. It
is almost certain that all three of these conditions exist in the universe. The first two are obvious:
the universe is certainly not in a state of thermodynamic equilibrium because entropy is still
increasing, and there are at least three degrees of freedom present in the very space that things
occupy. The only necessary ingredient we're not quite sure about is the nonlinearity. But the very
fact that self-ordering processes seem to be taking place is a very good indication that nonlinearity
is an underlying feature of our universe. This feature simply cannot be described using linear
equations, so the self-ordering process is not amenable to mathematical expression or analysis.
In case you are inclined to think that fractals have no relationship to reality, you may want to
observe nature more closely. Fractal-like objects are ubiquitous, from the veins in a leaf, to a head
11 I discussed them in more detail in Reality Riddle.

of broccoli, to mountain landscapes, to ocean waves breaking on rocks. Even the rhythm of your heart
is a fractal pattern as a function of time. How do these things arise? Well, many of these self-organizing
processes are very local in nature, but lead to highly organized structures on very large scales. This
kind of process is called a cellular automaton. Here's an example of how this works: suppose
there is a row of boxes, each of which can be either full or empty. Now we add a simple rule for
each box: if the two neighbors on either side are either both full or both empty, then the box
becomes empty. Otherwise the box becomes full. Now fill some of the boxes and watch the row
evolve one step forward using that rule. The process is repeated over and over and as the rows
evolve, complex large-scale patterns emerge from one very simple rule applied on a very local
scale. You could try this yourself using the cells of a spreadsheet.
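Here's a minimal Python sketch of that row-of-boxes rule (the row width, the starting pattern, and the choice to treat cells beyond the edges as empty are all arbitrary):

    # One generation of the rule: a box becomes empty (0) if its two neighbors match,
    # and full (1) if they differ; this is exactly modulo-2 (XOR) arithmetic.
    def next_row(row):
        padded = [0] + row + [0]       # pretend the row is bordered by empty boxes
        return [padded[i - 1] ^ padded[i + 1] for i in range(1, len(padded) - 1)]

    row = [0] * 15 + [1] + [0] * 15    # start with a single full box in the middle
    for _ in range(16):
        print(''.join('#' if cell else '.' for cell in row))
        row = next_row(row)
    # The printout grows into a Sierpinski-triangle-like pattern: large-scale order
    # emerging from one very simple, very local rule.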
This brings up the very controversial subject of the evolution of life. Biochemists have now
successfully sequenced the entire human genome. Every gene consists of a sequence of so-called
letters imprinted on a strand of DNA. These letters form a code, which instructs the cell what to do
but more importantly determines whether the cell is part of a plant or animal, and what kind of plant
or animal it is. There is no question that a person's genes determine many of his or her physical
attributes, from eye color to hair texture, height, bone structure, etc. This is obvious simply by
looking at family resemblances, especially between identical twins. However, the big question is
how the letters imprinted on the DNA strands shape the individual.
The biochemists say the genes simply tell the cells which proteins to produce and that some genes
are turned on while others are turned off. Well, that's not much of an explanation. How does a liver
cell know it resides in the liver, where it's supposed to be making liver enzymes, instead of in the
big toe, where it wouldn't have to do much of anything? An embryo starts out as a single cell,
which divides many times before the individual cells begin to branch out as nerve cells, bone cells,
skin cells, etc. Where is the template that tells each cell where it is in relationship to all the
others? Well, the creationists have a ready answer for that: God tells the cells what to do and when
to do it. That sounds very unscientific, but I'm afraid the reductionists don't have much of an
answer either, based on the model of a dead, reductionist universe. However, the principle of
cellular automata might explain how a complicated structure like a human being could arise from
each cell knowing who its neighbors are and following simple rules written in the code letters of its
DNA. I'm not saying that's exactly how it happens, but I'm saying that it could be close to the truth.
Could cellular automata explain how life originated in the first place? Well, I don't
know, but it's certainly more plausible than atoms banging into each other and forming life by sheer
luck. It also avoids having to invoke a special one-time creation event as the cause. The boundary
between life and non-life seems to be rather sharp. However, the study of chaos shows that
boundaries are often very sharp between linear and chaotic behavior. So it certainly seems plausible
and even possible that life could have been initiated in a sudden chaotic manner from non-life.
Science has been pushing God into the gaps ever since Newton, and maybe even before him. Each
time some phenomenon was explained by a natural law or process, there was less and less room for
a supernatural explanation. Now I realize that this theory of chaos and entropy may push God still
further into the gaps. Is there any room left for Her at all? Of course there is, and I think there's a
lot more room for Her compared to a reductionist philosophy based on random chance alone. Think
of the ramifications of all this: God could have simply willed creation into existence ready-made,
complete with stars, planets, plants, animals, and people just like it says in Genesis. Or She could
have designed a universe that started out in a completely formless, uniform, and highly improbable
state; a complete void with zero entropy, but with a strong propensity for creating chaos and order
out of nothing and absolutely no way for Her to predict exactly how the whole thing would end up.
Then She could just sit back, let the whole thing unfold in front of Her, and really enjoy the show.
Now I ask: if you were God, which kind of universe would you choose to create?

Appendix A: Order is in the Eye of the Beholder
One of the books that inspired this essay was The Cosmic Connection by Paul Davies. There is one
paragraph on Page 109 that's worth quoting:
Information theorists have demonstrated that 'noise', i.e. random disturbances, has the effect of
reducing information. (Just think of having a telephone conversation over a noisy line.) This is, in
fact, another example of the second law of thermodynamics; information is a form of 'negative
entropy', and as entropy goes up in accordance with the second law, so information goes down.
Again, one is led to the conclusion that randomness cannot be a consistent source of order.
Well, this doesn't quite jibe with the information theory I learned in graduate school, or what I know
of Claude Shannon's work. As far as I know, there is no such thing as negative entropy, and I
think Stephen Hawking would agree with me that information doesn't go down ever. He and
Leonard Susskind refer to entropy as just hidden information, and I guess I could sort of go along
with that. But the point is that entropy and information are essentially the same.
I think there's a common misconception that entropy lacks any information just because it's random.
Randomness contains the same quantity of information as non-randomness, because a random state
is just as unique as a non-random state. However, randomness does seem to lack a quality we call
order, which we need to define. I'll try to clarify these distinctions through a simple example.
Suppose you're sitting across the table from an alien from the Alpha Centauri system and you each
have a deck of cards. Your deck is the standard 52-card variety with four suits of deuces through
tens, three face cards, and an ace. Now you shuffle the deck about a dozen times and start drawing
cards one at a time, and notice that they're all in order! You keep drawing and they keep coming out
in order. So your heart's pounding and you're getting all excited, and you start to sweat. And then
there are only two cards left: the king and ace of spades. Could the next card be the king, making
all 52 cards come out in perfect order? That would be one chance in 52! or about one chance in
8 × 10⁶⁷. You draw the next two cards and they're the king followed by the ace! The alien just stares
at your cards and shrugs its shoulders. To it, those cards are just showing random symbols.
Now the alien gets out its deck of 53 cards, which have all sorts of weird hieroglyphs printed on
them. In fact, each card has a completely unique symbol on it because its species uses a base-53
number system. The alien shuffles its deck a number of times and starts drawing cards. To you, the
cards appear to be in random order with no discernable pattern whatsoever because each card has a
unique symbol printed on it. But you notice the alien is getting very nervous and excited as it draws
down the deck. Near the end of the deck, the alien is so excited it can't even hold itself together. It
draws the last card and faints dead away. You look at the cards on the table, and they just look like
a pile of random hieroglyphs. But to aliens from the Alpha Centauri system, the arrangement of
those cards has meaning: all 53 cards came out in perfect order in their base-53 number system.
You see, strictly from information theory, there is nothing really special about any arrangement of
cards versus any other. They're all equally probable. No matter what arrangement you dream up,
you would have to shuffle the deck about 10⁶⁸ times for there to be a decent probability of that
arrangement coming up by chance. This is how I started to change my thinking about entropy,
information, order, and chaos. Entropy and information are quantitative measurements, whereas
order and chaos are qualitative measurements. It's actually very hard to define what order is. It's
like beauty: you know it when you see it. You might define order as information with chaos
removed, but then you would have to define what chaos is. Yin and yang.
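As a quick sanity check of those figures, Python's arbitrary-precision integers make the arithmetic easy:

    import math

    arrangements = math.factorial(52)   # number of distinct orderings of a 52-card deck
    print(f"{arrangements:.2e}")         # about 8.07e+67, i.e. roughly 8 x 10^67 arrangements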
Here's another analogy.12 Suppose you're building a giant wall of bricks, say 1,000 bricks wide by
1,000 bricks high. There's a huge pile of bricks lying at the construction site. There are two kinds

12 I just love analogies, don't you? But some people, like my daughters, don't seem to like my analogies very much.

of bricks: some have white 0s painted on them and others have black 1s painted on them, so you
could think of those bricks as information. You call over your assistant, whose name happens to be
Claude Shannon, and ask him, "Hey Claude, how much information is over there in that pile of
bricks?" Claude counts the bricks and informs you there's one million bits of information. Now
before you start building the wall, you decide it would be nicer to create a pixelated copy of the
Mona Lisa using the 0s and 1s instead of just randomly laying the bricks next to and on top of one
another. So that's what you do; and after you finish, you stand back and admire your version of the
Mona Lisa, and ask, "Hey Claude, how much information is in those bricks now?" You think
he'd be so impressed by your work that he'd say there are a couple of billion bits up there. But
Claude simply counts the bricks and tells you there are one million bits of information in the wall.
You see, Claude doesn't appreciate art, so to him, every arrangement of 1s and 0s is just like any
other. What you should be asking him is how much order (or lack of chaos) is in those bricks.
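Here's a short Python sketch of Claude's point (the two walls are made-up examples): Shannon's count depends only on how many equally likely 0s and 1s there are, not on how they're arranged.

    import math

    def shannon_bits(bricks):
        # Total information = number of bricks times the entropy per brick, computed
        # from the relative frequencies of 0s and 1s; the arrangement never enters.
        n = len(bricks)
        total = 0.0
        for count in (bricks.count(0), bricks.count(1)):
            if count:
                p = count / n
                total -= n * p * math.log2(p)
        return total

    jumbled_wall = [0, 1, 1, 0, 1, 0, 0, 1] * 125_000    # a random-looking arrangement
    mona_lisa_wall = [0] * 500_000 + [1] * 500_000       # a deliberately "ordered" arrangement
    print(shannon_bits(jumbled_wall), shannon_bits(mona_lisa_wall))   # 1000000.0 1000000.0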
Generating pseudorandom number sequences is similar to generating chaos. There are methods that
measure the statistical complexity of pseudorandom numbers generated by algorithms, as
described in a paper entitled Intensive Statistical Complexity Measure of Pseudorandom Number
Generators, by H.A. Larrondo, C.M. González, M.T. Martin, A. Plastino, and O.A. Rosso.
According to my new way of thinking about order and chaos, Larrondo et al. may have stumbled
on a way to measure order indirectly by measuring chaos. Maybe it's an equation like this:
Order = Information − Chaos
I think the only truly random processes are natural ones, especially quantum processes like radioactive
decay. In the famous Schrödinger's cat13 thought experiment, the process that triggers the release
of cyanide and kills the cat is the decay of a radioactive material placed near a Geiger counter. Apparently,
Schrödinger realized that a pseudorandom number generator just wouldn't cut it in that experiment
because it wouldn't be random enough. Now you might say that there's no real difference between
an algorithm that generates random numbers and a radioactive decay process that generates 0s and
1s. But there is. Albert Einstein thought that quantum processes, like radioactive decay, were like
little machines that are programmed to spit out beta or alpha particles every so often. He called the
programming "hidden variables." He challenged his nemesis, Niels Bohr, with this by publishing a
paper in 1935. He said that Bohr's version of reality, quantum uncertainty, was bogus.14
Well, it turns out that experiments performed in the 1980s proved Einstein was wrong and Bohr was
right, so Bohr got the last laugh; or he would have if he and Einstein had still been alive by then.15

When I was in the army, I saw some super-secret radio transceivers that scrambled (encrypted)
human voices. The encrypted transmissions received by an ordinary radio sounded like noise, as if
you were listening to Niagara Falls. But it wasn't random noise at all; it was really chaos. The
information in the message wasn't diminished (it can't be), but the circuitry changed ordered
{silence + human voices} into chaos. Those secret transceivers must have used pseudorandom
number generators to do that because the process was completely reversible so the receivers could
change the chaos back into ordered {silence + human voices} again. The whole science of breaking
secret codes, Shannon's area of expertise in WWII, depends on the reversibility of the encryption
process. In principle, every code can be broken with a sufficient amount of brute force because
they all use reversible algorithms. I think a completely unbreakable code would have to scramble
messages using random numbers from an irreversible process like radioactive decay. But then
nobody would be able to unscramble the messages, including people who are supposed to receive
them. So there are even qualitative differences between chaos generated by reversible processes
and chaos generated by irreversible processes, although it's pretty hard to tell the two apart.
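Here's a rough Python sketch of that kind of reversible scrambling (the seed and message are made up, and real equipment obviously worked differently): a seeded pseudorandom keystream is XORed with the message, and because the same seed regenerates the same keystream, applying the operation twice restores the original.

    import random

    def scramble(message, seed):
        rng = random.Random(seed)                 # deterministic, hence reversible, "chaos"
        keystream = [rng.randrange(256) for _ in message]
        return bytes(b ^ k for b, k in zip(message, keystream))

    noise = scramble(b"silence + human voices", seed=1234)
    print(noise)                                  # looks like random noise, but it isn't
    print(scramble(noise, seed=1234))             # b'silence + human voices' again
    # A keystream taken from a truly random, irreversible source (say, radioactive decay)
    # could not be regenerated, so nobody could ever unscramble the message.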

13 It's also known as the Fluffy experiment, named after Schrödinger's pet cat, Fluffy. Just kidding.
14 Actually, he wasn't quite that rude. He just politely asked whether or not Bohr's theory was complete.
15 I covered Bell's inequality experiments in excruciating detail in my essay Reality Riddle.

Appendix B: The Ice Box Conundrum
Whenever I think about entropy I always come back to the same ice box problem that sticks in my
head. Say you have a perfectly-insulated box with food items at room temperature, and you want to
cool the food down in a hurry so it won't spoil. You go to a store where they sell dry ice (frozen
CO2) and you bring a chunk of it home, stick it in the box, and close the lid. Now an expert in
thermodynamics will say that you disrupted the thermal equilibrium of the box at room temperature
by putting a cold chunk of dry ice in it. In other words, you opened the system to the outside and
lowered its entropy by forcing it to be in an unnaturally-ordered state: {warm food + cold ice}.
Now over time, heat will flow from the food into the dry ice, which makes some of the CO2
evaporate. This confirms the second law of thermodynamics as it was originally stated: heat flows
from bodies at higher temperatures to bodies at lower temperatures. If the box is perfectly
insulated, the amount of heat energy inside stays the same, but the entropy increases because a gas
has more entropy than a solid. What this means is the number of microstates of the system has
increased while that elusive property we call order decreases. Eventually, the food and the dry ice
will reach thermal equilibrium where everything is at the same temperature. This maximizes the
number of microstates the system can occupy, which maxes out its entropy.
Suppose the box is not only perfectly insulated but it's also perfectly sealed. If not all the CO2
evaporates, there's still some dry ice in the box and all the original CO2 molecules are still in there.
Now here's the part that bothers me: most textbooks that discuss entropy say there's always some
probability that systems in thermal equilibrium could spontaneously go from a disordered state into
some highly-ordered state. They say the probability might be vanishingly small, but it could
happen. In other words, there's some miniscule probability that all the CO2 gas molecules could
suddenly decide to refreeze and dump heat back into the food, returning the system to its original
state. Since dry ice that spontaneously decides to refreeze is exactly the same as the dry ice you put
there originally, the entropy of the entire system will have to go back to its original low value.
The authors of the textbooks wave their hands around and say, "Don't worry, this won't happen
because the number of microstates is unbelievably large, so the probability of going all the way
back to square one is vanishingly small." But this just won't cut it because vanishingly small is still
greater than zero, so this still could happen; but the second law of thermodynamics says it simply
can't happen. Period. This is what I call the ice box conundrum.
I thought about this for a long time and I think I came up with a solution. When rolling dice, it
doesn't matter whether you roll one die a million times or roll a million dice all at once. Either way,
the probabilities of the dice coming up certain ways are the same because all rolls are statistically
independent. In other words, previous rolls don't change the probabilities of future rolls. This is
different than the changes happening inside the ice box. As each CO2 molecule vaporizes, the
number of possible microstates increases, so entropy increases gradually; here, the probabilities do
change depending on what state the system is in. It can't get from the initial low-entropy state to
any of the high-entropy equilibrium states in one giant leap because those states aren't included in
the list of possible low-entropy microstates. Pathways to those states have to open up first.
Here's why going in the reverse direction wouldn't work. In the textbook version of a system in
equilibrium, the system jumps around from one state to another; all states are equally probable and
each jump is statistically independent from all the others. So in theory, the system could jump all
the way back to its original low-entropy state in one jump like rolling all the dice at once. But a real
physical system like the ice box can only move into the states that are available to it. Unlike dice
rolls, the moves are not statistically independent. If a tiny pathway to a lower-entropy state opens
up, it soon closes again before it can be filled. A few CO2 gas molecules might refreeze from time
to time, but no permanent pathway is open for the system to get back to its original state.

Appendix C: The Post-Reductionist Universal Law

Newton's laws, special and general relativity, and quantum theory all have something in common:
they all hinge on fields. Newton saw nothing wrong with action at a distance, so he didn't bother to
postulate a field in his theory of gravitational attraction between two masses; his equations spoke
for themselves. But others who followed him made sure to add a gravitational field. Einstein
explained gravitation as space-time curvature, which can also be interpreted as a disturbance of the
space-time field. Quantum mechanics is based on the Schrödinger wave function, Ψ, which is a
kind of field, although nobody is sure what Ψ really is. Modern quantum field theory, which
produced the standard model of elementary particles, proposes many different kinds of fields. The
elementary particles are knots in those fields; individual electrons are knots in the electron field,
individual quarks are knots in the quark field, etc. The vacuum isn't empty; it's filled with fields of
every type and description, including the all-pervasive Higgs field, with virtual particles popping in
and out of existence as a result of quantum fluctuations in those fields. Nobody yet knows what
string theory, or M-Theory, will come up with, but I'm sure new fields will be in it. The one thing
that's lacking in all of this is a unifying law or principle that makes everything hang together.
Some scientists in the past and present have proposed a different way of thinking. I'll call this the
post-reductionist view. Whereas reductionism views the whole (the universe) as being the sum of
its parts (a linear superposition of all fields throughout space), post-reductionism is a holistic theory
that proposes there is a unifying law or principle that expresses itself through the action of the parts.
Pierre de Fermat and Joseph Louis Lagrange were two pioneers of this philosophy.
Fermat proposed that the path taken by a light ray is the path that minimizes the transit time.
Physicists generally reject that notion, favoring the wave theory of light to explain refraction,
although they have to admit that Fermat's conjecture does work. Reductionist thinking doesn't
allow for light rays to seek out paths that minimize transit times. Instead, light waves are
influenced locally by the optical properties of the media through which the waves propagate, and
the waves themselves are electromagnetic fields governed by Maxwell's equations.
One of Lagrange's ideas was the principle of least action, where moving objects follow paths that
minimize the total action summed over time. Lagrange came up with a definition of action as
follows: Action = Kinetic Energy − Potential Energy. Suppose you're on the ground and throw a
ball to your friend standing on a flat roof. You want to know what path the ball follows, knowing
only its initial velocity and the location of your friend. Applying Lagrange's method, you would
express the incremental action, dS, in terms of the ball's mass, m, its horizontal and vertical
distances from you, x and y, and the gravitational acceleration, g, over a time interval, dt:
dS = { ½ m [(dx/dt)² + (dy/dt)²] − mgy } dt
The path of the ball, expressed as the function y(x), is found by minimizing the integral of dS over
the total time it takes the ball to go from you to your friend, which of course you don't know ahead
of time. Now actually doing the Lagrange computation is fiendishly hard, taking up several pages
of very difficult calculations. What you end up with is a parabola:
y(x) = Ax − Bx²
where A and B depend on the ball's initial velocity. Now you might ask why any person with a
sane mind would go to all that trouble when you could just use Newton's laws of motion and come
up with the same result with a few lines of relatively simple calculus? Well, you wouldn't use
Lagrange's method for this particular problem, but the fact that it actually works provides some
deep insights about the universe. Richard Feynman's high school physics teacher showed him this,
and it made a deep and lasting impression on him. In fact, his quantum field theory uses a

methodology that is closely related to Lagrange's least-action principle.
Instead of going through all the excruciating pain of calculating the Lagrange integral, you could
approach the ball-tossing problem another way. Start out by drawing a straight line between you
and your friend and calculate the total Lagrange integral by summing the actions at all the points
times the increments of time it takes the ball to go between the points. Then move the points one at
a time (except the points where you and your friend are standing) up and down just a little and see
whether those movements increase or decrease the total action. If a movement decreases the action,
keep moving it, otherwise go the other way. If you keep doing this over and over, you eventually
reach a point where no little movements will reduce the action any further. That's the path.
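Here's a rough Python sketch of that nudge-and-check procedure (the mass, flight time, endpoint, step size, and number of sweeps are all made-up numbers; only the interior points move, and a change is kept only if it lowers the total action):

    # Relax a trial path toward the least-action path for a tossed ball.
    m, g = 1.0, 9.8                          # mass (kg) and gravitational acceleration (m/s^2)
    T, N = 1.0, 20                           # total flight time (s) and number of segments
    dt = T / N
    x = [5.0 * i / N for i in range(N + 1)]  # horizontal positions (constant horizontal velocity)
    y = [2.0 * i / N for i in range(N + 1)]  # initial guess: a straight line up to the roof

    def total_action(y):
        S = 0.0
        for i in range(N):
            vx = (x[i + 1] - x[i]) / dt
            vy = (y[i + 1] - y[i]) / dt
            y_mid = 0.5 * (y[i] + y[i + 1])
            S += (0.5 * m * (vx ** 2 + vy ** 2) - m * g * y_mid) * dt   # (kinetic - potential) * dt
        return S

    step = 0.02
    for _ in range(500):                     # keep nudging until nothing improves
        for i in range(1, N):                # the two endpoints stay fixed
            for dy in (step, -step):
                trial = y[:]
                trial[i] += dy
                if total_action(trial) < total_action(y):
                    y = trial

    print([round(v, 2) for v in y])          # the relaxed path bulges upward like a parabola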
The part that impressed Feynman so much was the fact that the ball seems to "know" the best overall
path to follow. This is a very holistic approach to the problem of ball throwing. Instead of gravity
tugging on the ball and changing its velocity ever so slightly, the ball just knew where to go.
Now this sounds absurd, but Feynman used this approach to explain the famous double-slit
experiment in quantum mechanics. In his interpretation, a particle doesn't blindly follow a path
through the slits. Instead, it first explores every possible path through the slits at the same time and it
then chooses the one path with the highest probability based on some fundamental principle.
Using a Lagrangian approach, let me propose my post-reductionist universal law: Every change
maximizes the total degrees of freedom of the universe.
The first element of this law involves change. Without change, the law wouldn't make any sense.
The second element is holistic. It implies that everything, from elementary particles, to baseballs,
to planets knows its place in the entire scheme of things and how to maximize the total degrees of
freedom of the universe.16 Not only that, everything will act accordingly. Remember the example
of the ice box in Appendix B? Well, as soon as the dry ice was placed in the box, heat energy
began flowing from the warm food to the cold dry ice. As this occurred, the food got colder and its
molecules lost some degrees of freedom; however, the total degrees of freedom increased because
as CO2 molecules absorbed heat from the food, they evaporated and created many more degrees of
freedom for the CO2 molecules than were lost by the food molecules. In other words, the food
molecules slowed down, and gave up some of their degrees of freedom for the greater good of the
universe. How did they know how to do that? That's the great mystery.
Before going further, let's find out how many degrees of freedom typical things have. Entropy has been a well-known quantity for well over 100 years, and it's been measured accurately. The entropy of one kilogram of steam at a pressure of one atmosphere and a temperature of 100 °C has been measured at 7.35 kJ/K. Boltzmann's entropy formula17 is
S = k log W
where W is the number of microstates (degrees of freedom) and k is Boltzmann's constant, which is a very small number: 1.38 × 10^-26 kJ/K. Since Boltzmann's log is the natural logarithm, rearranging the formula gives
W = e^(S/k)
Plugging in the values for S and k, we see that W for a kilogram of steam at 100 °C is a 1 followed by over 10^26 zeros. Not 10^26 itself, mind you, but a number with more than 10^26 zeros! This is just an insanely large number.
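Here's the same arithmetic spelled out in a few lines of Python, so you can check it for yourself:

    from math import log10, e

    S = 7.35        # measured entropy of 1 kg of steam at 100 C, in kJ/K
    k = 1.38e-26    # Boltzmann's constant, in kJ/K

    ratio = S / k                 # S/k is about 5.3e26
    zeros = ratio * log10(e)      # number of decimal digits in W = e^(S/k)

    print(f"S/k is about {ratio:.2e}")
    print(f"W is a 1 followed by roughly {zeros:.2e} zeros, well over 10^26")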
Entropy isn't just a byproduct of time; it's really the driving force behind creation. It's easy to see how creating more degrees of freedom makes a gas expand, but most people don't think of that as much of a creative process. If that were all entropy did, it would turn everything into random nothingness, and entropy does have a very bad rap sheet in that regard. But there's much more to
16 Degrees of freedom sounds less sinister than entropy. However, maximizing one maximizes the other.
17 This formula is carved on Boltzmann's tombstone.
it than that: entropy actually may be pulling everything together too.
Erik Verlinde has come up with an amazing theory that says that gravity is caused by entropy. I
can't really do justice to this theory, so I strongly recommend downloading On the Origin of
Gravity and the Laws of Newton from the ArXiv web site: http://arxiv.org/abs/1001.0785
Verlinde's theory is based on the holographic principle that Leonard Susskind and Stephen Hawking
discovered through studying black holes. Every finite volume of space containing mass-energy has
a finite number of degrees of freedom (microstates). This number is determined by the Bekenstein
bound.18 Verlinde says that when mass-energy is distributed over the finite microstates, it produces
a temperature, a macroscopic property. Multiplying that temperature by the increase in the entropy
that occurs as the two bodies come together equals work, and it's the same quantity of work gravity
does on those bodies according to Newton's law. Verlinde believes this is no mere coincidence.
Instead, some fundamental law of maximizing entropy is forcing the bodies to come together. The
force is manifested as Newton's gravitational force. He says,
The holographic principle has not been easy to extract from the laws of Newton and Einstein, and
is deeply hidden within them. Conversely, starting from holography, we find that these well known
laws come out directly and unavoidably. By reversing the logic that lead people from the laws of
gravity to holography, we will obtain a much sharper and even simpler picture of what gravity is.
For instance, it clarifies why gravity allows an action at a distance even when there is no mediating
force field.
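Here is a rough numerical sketch of that bookkeeping in Python, as I understand Verlinde's argument (the Earth's mass and radius and the 1 kg test mass are just convenient inputs; the expressions for the bit count, the temperature, and the entropy change are my paraphrase of the paper, not quotations from it):

    import math

    G    = 6.674e-11    # gravitational constant
    hbar = 1.055e-34    # reduced Planck constant
    c    = 2.998e8      # speed of light
    kB   = 1.381e-23    # Boltzmann's constant

    M = 5.972e24        # mass of the Earth (kg)
    m = 1.0             # test mass (kg)
    R = 6.371e6         # radius of the holographic screen (m); Earth's radius here

    A = 4 * math.pi * R**2            # area of the spherical screen
    N = A * c**3 / (G * hbar)         # number of bits the screen can hold

    T = 2 * M * c**2 / (N * kB)       # equipartition of the enclosed mass-energy over N bits

    dS_per_dx = 2 * math.pi * kB * m * c / hbar   # entropy gained per meter the mass approaches

    print("entropic force :", T * dS_per_dx, "N")
    print("Newton's force :", G * M * m / R**2, "N")

The two printed forces come out identical (about 9.8 newtons for a 1 kg mass at the Earth's surface), which is exactly the kind of coincidence Verlinde says is no coincidence at all.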
So we've come full circle from Newton's action at a distance, to field theories, and finally back
again to action at a distance. I don't think this is the entire story, however. The law stating, Every
change maximizes the total degrees of freedom of the universe may explain a lot of things,
including the forces found in current field theories.19 But even if entropy turns out to be the driving
force behind it all, I still don't think it's the only creative mechanism in the universe; alone it doesn't
account for all the order and structure found everywhere. We need another ingredient for order (and
chaos), and I believe that ingredient is a strong local nonlinearity that permeates everything.
One source of local nonlinearity could be quantum interactions. The quantum properties of things
are binary for the most part: spin up, spin down, positive charge, negative charge, etc. When there
are quantum interactions, information is exchanged a quantum computation of sorts. Modulo-2
arithmetic is highly nonlinear and so are feedback processes. We saw earlier how cellular
automation can create structure and order, and this phenomenon may be occurring at the sub-atomic
level through quantum interactions. Maybe modulo-2 arithmetic and feedback take place during
quantum interactions. But this is getting very speculative, so I'll stop right here.
This is an entirely new way of thinking about reality and it needs a lot more work to flesh it out as a
good scientific theory. Unfortunately, there aren't enough minds working on it right now. Breaking
the prevailing deterministic-reductionist paradigm will be almost as tough as it was for 16th century
astronomers in overturning Ptolemaic gobbledygook. But at least the 21st century scientists only
have to worry about losing their research grants, and not about being burned at the stake for heresy.
One thing is abundantly clear, at least to me. Reductionism is dead, or at least its days are
numbered. If and when the Theory of Everything is found, I think scientists will be astounded by
the utter simplicity of the universal law that governs it, and by the amazing complexity that emerges
from such a simple law.
18 The Bekenstein bound gives the maximum degrees of freedom expressed as entropy: S ≤ 2π k R E / (ħ c), where R is the radius of a sphere enclosing the volume and E is the mass-energy (expressed as energy) inside the volume. The constants k, ħ, and c are Boltzmann's constant, the reduced Planck constant, and the speed of light.
19 Obviously, it should produce results that are consistent with current theories; otherwise, it wouldn't be a very good
law. But it should also explain those results in a more fundamental way than the current theories do.
Appendix D Introduction to Radical Post-Reductionism: Wheelerism
Quantum mechanics clashed with Newtonian physics and relativity right from the beginning. Even
some of the scientists who ushered in quantum theory, such as Erwin Schrödinger and especially
Albert Einstein, began to have misgivings when they realized the full ramifications of what they had
wrought. On the opposite side, Niels Bohr and his Copenhagen crew weren't particularly bothered
by the fact that something could be in multiple places or in multiple states at the same time.
In 1935 Erwin Schrödinger proposed his famous cat experiment, where a live cat is placed in a
sealed box along with a Geiger counter that triggers a release of deadly cyanide gas.20 A sample of
radioactive material emitting beta particles is placed near the Geiger counter. The radioactive atoms
have a known half-life, and based on their proximity to the Geiger counter, there is exactly a 50%
probability that the Geiger counter will be activated within ten minutes. Everything is sealed up
nice and tight so nothing, not even the sounds of a dying cat in agony, could escape the box, and a 10-minute timer outside the box is started as soon as the box is sealed. After ten
minutes the box is opened to see whether the cat is alive or dead. The question is: during those
fateful ten minutes, what was the state of the cat? Was it dead, alive, both, or neither?
Now at first, this sounds like a really dumb question because how could a cat be both dead and alive
or neither? Most people would say that if the cat is alive after opening the box, it was alive the
whole time, and if it is dead, then it started out alive and became dead at some point before the box
was opened. But that's not how quantum physics works. You see, the strict Copenhagen
interpretation is that the Geiger counter, the cyanide, and the cat are all sealed in the box where no
information can get out, so they're all included in the same quantum wave function that keeps the
radioactive atoms in a superimposed state of decay and non-decay. A measurement must be made
to see if any of them decayed or not. In this interpretation, the cat is both alive and dead until the
box is opened and an observation (measurement) is taken. Then the entire wave function
collapses and the cat is either still alive or it becomes dead at that moment.21
The real question is whether the cat itself counts as an observer. Now cats are pretty smart animals,
but presumably they're not as smart as humans.22 If you accept a cat as being a valid observer, the
wave function would collapse before the box is opened. But in that case, which kind of animal
wouldn't count as an observer? A rabbit? A snake? A fish? A slug? A bacterium? This highlights
the problem known in quantum-mechanical circles as the measurement problem.
John Wheeler was among a new breed of thinkers who solved the measurement problem in a pretty
radical way: he stated that history doesn't exist until we create it. We make the whole thing up
when we observe things. It's as if dinosaurs didn't really exist until someone dug up their fossils.23
I call this philosophy Wheelerism. To prove his point, Wheeler came up with all sorts of
interesting thought experiments, including one based on the famous double-slit experiment, called Wheeler's delayed-choice experiment. A form of the delayed-choice experiment was actually carried out in a lab, and it did seem to validate the notion that the present influences the past.24
Of course, Wheeler has many critics in reductionist circles who argue that you can't really show that
history doesn't exist by doing a lab experiment the time delays are too short. In response to that
criticism, he imagined a much bigger experiment called Wheeler's astronomical experiment. In
20 There is absolutely no evidence that Schrödinger actually did this experiment on a live cat. If he had, these
questions might have been answered by now.
21 An old-fashioned reductionist wouldn't buy any of this, but that way of thinking is passé, as we have seen.
22 This is debatable. Most of the cat owners I know are very well trained, and cats must have a superior intelligence in
order to train humans so successfully.
23 Some creationists deny the existence of dinosaurs even after dinosaur fossils are dug up. However, they are not to
be mistaken for Wheelerites.
24 This apparent paradox is explained fully in Reality Riddle.
this experiment, a very distant star emits light that travels toward the Earth. A very massive object,
such as a black hole, sits between the star and the Earth. This object forms a gravitational lens,
allowing light from the star to take two completely different paths to the Earth it's like the
double-slit experiment on steroids. Now depending on how you decide to detect the light from that
star, you can either get an interference pattern from light going in both paths around the lens at the
same time (making it a wave), or you can use a telescope to see which path the light took (making it
a particle). Either way, you're creating your own version of history because the light passed the lens
billions of years earlier. Although Wheeler's astronomical setup does give you plenty of time for
making delayed choices (billions of years), the main problem with it is that you need to work with
one photon at a time for the experiment to prove anything. I think it would be pretty hard to get a
star to emit one photon at a time, and that would also make the star awfully dim, so I don't see much
chance of anyone actually carrying out this experiment.
There's one aspect of Wheelerism I really do like, although I wouldn't quite carry it to the extremes
some do. I'm referring to the it from bit conjecture. As I stated often in Reality Riddle, it seems
plausible, and even likely, that everything we observe in the universe is essentially just information
a dataverse. But that's not all. According to Wheeler, reality has two distinct parts that are
modeled after computer technology: hardware (the it part) and software (the bit part). The
software component consists of observers like us, and that is the real part of real-ity. The
hardware component (the -ity part) is just the nuts and bolts, like electrons, quarks, gluons,
gravity, etc. The software controls the hardware; without software, the computer is just a dead
machine; not real. Now here's the truly weird part of Wheelerism: the software is constantly
creating history by modifying and improving the hardware. It's like your computer suddenly
decides to upgrade its memory from 8 gigabytes to 16 gigabytes, so it goes online, orders a couple
of sticks of RAM, and installs them all by itself. Reality is like HAL in 2001: A Space Odyssey.
The most extreme form of Wheelerism says that quarks didn't always exist, despite what the
cosmologists say about the early universe and its evolution. It says quarks were invented by
Murray Gell-Mann and George Zweig in 1964, and they were discovered right on cue in 1968.
The same thing is true about the neutron, the electron, the atom, and so forth. Those particles didn't
exist either until the software (the physicists) decided to upgrade the hardware (the universe) by
inventing them and creating history. So it should have surprised no one when the Higgs particle
was discovered in 2012, because Peter Higgs had already created it in 1964, although a machine big
enough to make Higgs particles had to be built first.
Now this is getting way too metaphysical for a scientific essay, so I'm going to have to dial it back
a little. However, there is a grain of truth that points back to Schrödinger's cat experiment and the
measurement problem. The quandary was how to separate the observer from the observed in
experiments involving quantum particles. Borrowing some of Wheeler's ideas, history doesn't exist
until some kind of record of it is made. But I don't think it takes an intelligent observer, such as a
human or even a cat to record it. A record could consist of a track of a positron in a cloud chamber,
or anything else that leaves a physical impression of some sort. By defining things that way,
quantum objects like electrons, photons, etc., have no histories because they carry no records of any
kind. All electrons are exactly alike, and there's no way of telling where they've been or what
they've done in the past. They are defined only by their wave functions. This could mean that all
quantum particles are connected through a common wave function, and the universe is holistic and
very interconnected at its core. I think it would have to be holistic in order to carry out a universal
law requiring that all changes must maximize the total entropy of the universe.
Cats, on the other hand, are unique, non-quantum creatures. They have memories and personalities.
They have kittens, get old, and sometimes they die from cyanide poisoning. In short, they do have
histories and wave functions don't apply to them. Parts of Wheelerism aren't very plausible and are
even pretty disturbing, but thinking about it did help me resolve the measurement problem.
Appendix E Order, Chaos, and the Emergence of Consciousness
I'm really going out on a limb with this appendix because as an engineer, I have practically no
professional experience whatsoever in the fields of neurology, psychology, or psychiatry. But
that won't stop me from talking about those things, because I have opinions on just about everything
and I'm not shy about sharing my opinions with anyone who will listen.
To recapitulate what I've said so far: science will sooner or later undergo a paradigm shift away
from orthodox reductionism. Field theories will be replaced by a more holistic and integrated view of the universe, because scientists must come to realize that while reductionism explains many things, it doesn't explain everything. In fact, it may not even explain most things. The new
paradigm will be based on a single universal law with all our existing physical laws being seen as
special cases. I suggested earlier that such a law might be: Every change always maximizes the
total number of degrees of freedom. We can see that this is a holistic law because it encompasses
everything all at once. The same law that causes gas molecules to expand also causes massive
objects to collapse toward each other as a primitive form of organization called gravitation. There
are countless other ways the universal law operates that are just waiting to be discovered.
Below the surface a powerful organizing principle is at work. It operates when systems having
degrees of freedom are not in equilibrium, and when interactions are nonlinear. This organizing
principle causes chaos and order to spring out of nowhere from what might be otherwise considered
dead material. Even as the law of entropy relentlessly drives the universe toward randomness and
heat death, this organizing principle works to create order through chaos. This inevitably
generates increasing structure and complexity, ultimately leading to life and consciousness.
One thing is certain: reductionism and molecular biology have utterly failed to provide a coherent
explanation of how life functions after it is created, let alone offer any rational theory of how life
emerged from non-living matter in the first place. Without recognizing any universal organizing
principle, we would be forced to abandon science altogether and invoke special creation by an
intelligent and purposeful Creator as the only plausible explanation for life. But this is a false
dichotomy we don't just have a choice between reductionism and creationism; I believe science
will ultimately discover the organizing principle and show that the emergence of life is a natural and
inevitable outcome of change.
The line dividing life from non-life is sharp. Even unconscious life, at the level of a bacterium, is
amazing and purposeful. A bacterium does live a purposeful life, although its purpose may be
limited to consuming food, eliminating waste, and reproducing copies of itself.25 Simple life is
amazing enough, but the emergence of consciousness from living matter is almost beyond belief.
The dividing line between consciousness and unconsciousness isn't quite as sharp as the one that
divides living from non-living. We'll see there are different levels of consciousness, with somewhat
fuzzy lines between them. Take an earthworm for example. The nervous system of an earthworm
is rather primitive, but it does actually have one, around 300 neurons in all. There's no brain or
eyes, but an earthworm can respond to outside stimuli. It likes to dig tunnels, and it seeks out other
earthworms to mate with, so it evidently knows the difference between a potential mate and a twig
from a tree.26 However, its primitive consciousness is just barely aware of its surroundings.
Next up on the ladder of consciousness are the leeches, snails, and slugs. These animals have
between 10,000 and 20,000 neurons, which is a couple of orders of magnitude more than an
earthworm has, but there doesn't seem to be much improvement in overall intelligence. These
25 Actually, some human lives seem to have similarly limited purposes.
26 Earthworms have both male and female reproductive organs, so they could theoretically mate with themselves. I
don't know if they do that, however.
animals don't have spinal cords or brains; just ganglia that are spread throughout their bodies.
When we get to the fruit fly, there is a quantum jump in brain power: yes, it actually has a brain. With about 100,000 neurons and about 10^7 synapses (connections between them), a fruit fly registers brainwave activity while in flight. Now that's quite an improvement. Ants are interesting creatures. They only have about 2½ times as many neurons as fruit flies, but they leverage their tiny brains with all the other ants in their colonies, which can number as many as 40,000. They act collectively, and can do things together that no ant could do alone; in fact, collectively, they have ten billion neurons, more than most mammals.27 Honeybees act collectively too, but a bee can go off alone without acting stupid. They have almost a million neurons with 10^9 synapses. We're
about to leave the class of insects, but before we do, guess which insect has the most neurons.28
I'm not going to go up the entire animal kingdom, but we eventually end up with mammals at the
top of the heap. For a mammal, you mostly need to count the neurons in the brain rather than the total number of neurons in the body, because most of the action takes place inside the brain. The
brain has neurons and synapses that form a very highly non-linear network. So not only is there a
fundamental non-linear biological process, which creates order and chaos that allows the brain to
emerge; but the brain itself is a non-linear process, which creates order and chaos that allows
consciousness and intelligence to emerge. Here's what physicist James Crutchfield says,
Innate creativity may have an underlying chaotic process that selectively amplifies small
fluctuations and molds them into macroscopic coherent mental states that are experienced as
thoughts. In some cases the thoughts may be decisions, or what are perceived to be the exercise of
will. In this light, chaos provides a mechanism that allows for free will within a world governed by
deterministic laws.
Wow, that's quite a statement! I think it kind of capsulizes much of what this essay is about. Again,
there is a universal theme: entropy is the driving force behind everything, while an undercurrent of order and chaos comes from non-linearity. From order and chaos, complexity emerges in
stages. At the bottom level is dead matter organizing into stars, galaxies, planets, etc. through the
primitive push/pull balancing act of entropy. Biological activity emerges as a higher level that uses
new chaotic processes to organize complex body structures that eventually lead to nervous systems
and a brain. The brain has its own chaotic process that organizes consciousness, free will, and
intelligence. The same organizing principle operates on different levels, each level involving
different chaotic processes that allow the level to emerge, and so on.
Once we arrive at consciousness, it also splits into higher levels of complexity. In the mammal
class, there is simple consciousness, self consciousness, and cosmic consciousness. The three levels
were described by Richard Maurice Bucke, a 19th century psychiatrist from Ontario, Canada. Most
mammals experience simple consciousness. These mammals are fully aware of their surroundings,
have memories, and may have a full range of emotions like love, fear, anger, joy, sorrow, and even
remorse and shame. Mammals with simple consciousness can plan ahead and even use reasoning
and logic to solve problems. This isn't conjecture; it's a proven fact.
Gable is a border collie who lives at the University of Lincoln in the UK, where behavioral
psychologists study him. Gable has managed to associate 54 human words as names for 54
different toys. When his trainer tells Gable to fetch a particular toy from a pile in another room, he
will go to that room, pull out the toy from the pile, and return it to his trainer. That's pretty good,
but here's the amazing part. Once his trainer placed an unfamiliar toy in the pile and gave it a name
that Gable was never taught. The trainer told Gable to fetch that toy using that name, but Gable was
confused and didn't know what to do. He was instructed to fetch that toy by name several more
27 When an individual ant is separated from her colony, she becomes pretty stupid. At least that seems to be the case.
28 That honor goes to the cockroach with 1,000,000 neurons.
times. Finally, Gable went into the room, found the new toy, and brought it back to his trainer.
Gable could reason that the toy his trainer wanted was not one of the 54 toys that he knew by name.
So he searched for a toy that was not one of those 54 toys he knew until he found it.
Looking only at the number of neurons in the brain gives misleading information about intelligence
because the overall size of the animal has to be factored in. Very large animals like whales and
elephants need more neurons to just move their huge bodies around. But it's interesting to note that
cats have almost twice as many neurons as dogs, 300 million versus 160 million, while both animals
are of the same order in size. Chimpanzees (5-6 billion neurons) are considered Number 2 in the
intelligence hierarchy, and of course we humans (19-23 billion neurons) are Number 1.
As smart as cats and dogs are, they still only possess simple consciousness. Self consciousness is
the next level up, and humans (and maybe chimpanzees) have it. A self-conscious being not only
thinks, but it knows it's thinking. This brings about a whole new level of complexity. One way to
tell if an animal is self conscious is by placing it in front of a mirror. If it recognizes the image in
the mirror as itself, then it probably has self consciousness. We humans usually don't reach that
stage until we're almost a year old. Put a mark of lipstick on a child's forehead and place her in
front of a mirror. If she's attained the level of self consciousness, she will immediately try to rub off
the mark on her forehead; a baby doesn't associate the image of the baby's forehead in the mirror
with her own forehead until she's reached that level. Adult chimpanzees seem to recognize
themselves in mirrors. Dogs don't; but what dogs lack in self consciousness, they more than make up for by learning to adapt so well to the peculiar ways of human beings.29
Bucke classified self consciousness as an emergent phenomenon, and he said it only emerged in the
human race quite recently. Looking at this from a post-reductionist perspective, we see that it
would be the inevitable result of the self organizing principle; it happens when simple
consciousness becomes sufficiently complex and chaotic. Today, virtually every adult human being
is in a state of self consciousness. This enables us to think abstractly on several levels at once, as in
the statement, I know that I know that I know. We can also think symbolically at a very high
level, and we can manipulate abstract mathematical symbols to solve problems.
Self consciousness is also a nonlinear and chaotic process, and when it becomes sufficiently
complex and chaotic, it will inevitably organize into what Bucke called cosmic consciousness.
Bucke's description of cosmic consciousness seems identical to what Buddhists call satori, a calm
state of pure knowing, without any fear, anger, or self-centeredness. In that state, a person is
consciously aware of the connectivity and unity that underlies the universe. It seems that people
who are in satori directly experience the laws of the universe operating within their own minds.
Relatively few humans have attained that level of consciousness, and still fewer have sustained it
for any length of time.30 However, Bucke believes that cosmic consciousness will eventually
become the normal state of consciousness of the human race as it continues to evolve.
Pierre Teilhard de Chardin was a French philosopher, paleontologist, and geologist. He was a
29 While chimps and dogs are comparable size, chimps have over ten times as many neurons, so they should be way
smarter than dogs, right? But consider this: when a human points at something, dogs instinctively know to look in
the direction the human is pointing, whereas chimpanzees don't have a clue about what the human is doing. Long
ago, dogs learned how to get along with humans and they almost became like us. They do what we tell them to do
(sort of), and they seem to go out of their way to please us. Because of this, dogs get to live in our houses, eat our
food, play with our children, go on trips with us, and sometimes they're even allowed to lie on our beds. On the other
hand, adult chimps are vicious, hateful creatures that will attack and kill humans if they are given the chance.
Because of this, chimps get to live alone in steel cages. Now I ask: which animal is really smarter?
30 Bucke himself purportedly experienced a fleeting moment of cosmic consciousness in 1872. Although the
experience was temporary, it had a profound effect on Bucke that permanently changed him.
staunch believer in evolution, both of the human race and of the universe as a whole.31 His ideas
were clearly post-reductionist. He also believed that the evolution of the universe is being
orchestrated by the conscious creatures who inhabit it, with everything and everyone evolving
toward an end state he calls the Omega Point. I can see clear parallels between Teilhard's views and
John Wheeler's it from bit conjecture. Both Teilhard and Wheeler believe in the primary status of
consciousness (bit) and the secondary status of the physical universe (it). Both held the belief
that the bit controls and determines the it. I think the more likely scenario is that both the it
and the bit emerge together from one universal law and its corollary organizing principle through
chaos. The law itself has primary status; the universe is secondary. Just don't ask me how or why
the universal law and the organizing principle originated, because I have no idea.
Created matter organizes into more complex structures that eventually become chaotic. Chaotic
structures lead to order at a higher level, which may even add new processes of organization as the
universe marches on with increasing degrees of freedom. Those new structures may also open
additional pathways that maximize the total degrees of freedom.32 This process goes on and on until
chaos produces a whole new level of organization: life. At the level of living things, the role of
ordinary physical laws is significantly diminished; life is governed by a different set of laws. It is
here that reductionism fails completely because quantum mechanics and Newtonian physics simply
cannot account for most processes that occur in living forms; microbiology doesn't provide a
complete picture either. Darwin's theory can explain parts of the evolutionary process, but the
power of natural selection is somewhat limited, and its effects on living forms are almost trivial.
Eventually, life evolves complicated and highly nonlinear neural networks and brains. Those
provide a whole new stage for chaos and order to play out their roles. There is another exponential
increase in complexity and chaos, then order produces consciousness. First there is only primitive
consciousness, on the level of an insect, but that is followed by higher levels, with each level setting
the stage for the next. The physical brain continues to provide the foundation for the edifice of
consciousness, like the foundation of a building. There may also be entirely new organizing
processes operating on consciousness itself that transcend and bypass the physical brain altogether.
Looking at the entire picture, there seems to be a universal hierarchy at work: Elementary particles
operate on the lowest level, obeying the laws of quantum physics and nothing else. For them, time
is symmetrical, they have no individual identities, and the past does not exist. Quantum particles
organize into macroscopic objects that have identities and histories. Here, time is not symmetrical,
and the past emerges. Macroscopic objects organize into larger and more complex physical
structures that obey Newtonian and relativistic laws (approximately). Structural complexity
increases until chaos produces a new order called life, which obeys an emergent set of laws that
science presently does not understand. There is a hierarchy among living things, some having
evolved into organisms of extreme complexity, where chaos produces ordered nervous systems,
increasing in size and complexity until primitive consciousness finally emerges. Chaos organizes
primitive consciousness into higher states of order, and so on, ad infinitum. It's hard for me to
visualize where this process will lead, but I'm positive it will be a fantastic journey.
31 He also happened to be a Jesuit priest. Needless to say, his unorthodox views on creation and evolution didn't
exactly endear him to the Vatican.
32 Proponents of the big bang theory are certain that there was an event called inflation, when the universe expanded
exponentially soon after it came into being. Physicists proposed several possible mechanisms for inflation, but
there seems to be no rationale for why inflation started and stopped. Here's my suggestion: The universe may have
originated in a state of near-zero entropy. The universal law requires that change maximize the total degrees of
freedom of the universe. At that time, inflation was the only available mechanism for doing that, so it did. At some
point during inflation, the universe entered a different state where inflation could no longer maximize the total
degrees of freedom, so it stopped. A different form of expansion accomplished the task of maximizing the total
degrees of freedom, which is ongoing today. The universe may attain states where different processes of change
will emerge, which will fulfill the universal law more effectively than the present process, and so forth.
Appendix F We're Living on the Hairy Edge
As I wrote the essay Is Science Solving the Reality Riddle, I kept getting this nagging feeling that
the answers to some of the questions concerning reality might have something to do with fractals
and Mandelbrot sets. At that time, I didn't really understand the full implications of fractals, and I
still don't; but I've done a bit of research on fractals since then and came up with some amazing
connections between fractals, three dimensions, order and chaos.
First, I looked at a very simple fractal known as the Koch snowflake, named after Swedish
mathematician Helge von Koch. To make a Koch snowflake, you start out with a simple equilateral triangle with each side having a length, s. You add a smaller equilateral triangle to the middle third of each of the three sides to make a Star of David with 12 sides. Then you add 12 more, still smaller, equilateral triangles, one to each of those 12 sides, and so on. This drawing shows the evolution of the snowflake:
[Drawing omitted: the first few evolutions of the snowflake, from the starting triangle T through S(5).]
The nth evolution is denoted by the symbol S(n). Starting out with the triangle, T, you get the following sequence of figures: T → S(1) → S(2) → S(3) → S(4) → S(5). Now you can carry this
on forever if you want, and the resulting shape will be a fractal. This snowflake has very unusual
properties. The length of the perimeter of the snowflake is given by the very simple formula
P = 3s(4/3)^n. The funny thing is that as n → ∞, so does P. That's right, the perimeter, P, of the fractal becomes infinite. And I don't just mean that it has an infinite number of points (all lines have an infinite number of points); I mean P has an infinite length!
One ramification of this is that you can't really define a Koch snowflake by a formula, like the
formula y² = r² - x² for a circle, or any other kind of formula for that matter. You can only define it by describing the process that generates it. Now, although the Koch snowflake has a perimeter of infinite length, it sure looks like it has an inside and an outside. And in fact it does. So what's the
area inside the snowflake? I'm not going into the whole derivation, because you can look that up,
but the important thing is that the area is finite: A = (2√3/5) s². So, the perimeter of a fractal
encloses a finite area even though the perimeter itself has infinite length. Very strange.
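You can watch both trends in a few lines of Python; the starting side s = 1 is an arbitrary choice:

    import math

    s = 1.0
    num_sides = 3
    side = s
    perimeter = 3 * s
    area = math.sqrt(3) / 4 * s**2          # area of the starting triangle

    for n in range(1, 11):
        # Each existing side sprouts one new triangle one third its size...
        area += num_sides * (math.sqrt(3) / 4) * (side / 3)**2
        # ...and is replaced by four sides, each one third as long.
        side /= 3
        num_sides *= 4
        perimeter = num_sides * side
        print(f"S({n}): perimeter = {perimeter:10.4f}   area = {area:.6f}")

    print("area limit =", 2 * math.sqrt(3) / 5 * s**2)

The perimeter grows by a factor of 4/3 at every step and never stops, while the area creeps up toward its finite limit of (2√3/5) s², about 0.693.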
Now fractals can be generated in other ways too, the Koch snowflake being a very simple evolution.
There's another class of fractals that are generated from a process that creates Mandelbrot sets,
named after Benoit Mandelbrot. You can represent any point in 2-dimensional space as a complex
number: z = x + iy.33 You can generate a Mandelbrot set in two dimensions as follows. Using the
formula z' = z² + c, pick any point you want, c = x + iy, and compute z' from the starting point z = 0
33 Up until now I've denoted the imaginary number √-1 by the letter j. Now I'm going to change that to the letter i, for
reasons that will become clear shortly.
using the rules of complex algebra. Next, feed z' back into the formula as z, and compute a new z', keeping c the same. Keep doing that over and over. Two things might happen: a) the values of z
settle down to very predictable numbers that repeat, or b) the values of z chaotically zoom off into
the stratosphere. If a) occurs, then c is part of the Mandelbrot set, and if b) occurs, it is not.
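Written out in Python, that test is only a few lines (the escape radius of 2 and the 200-iteration cutoff are the usual practical choices):

    def in_mandelbrot(c, max_iter=200):
        # Iterate z' = z*z + c from z = 0 and report whether z stays bounded.
        z = 0 + 0j
        for _ in range(max_iter):
            z = z * z + c
            if abs(z) > 2.0:      # once |z| exceeds 2, it is guaranteed to fly off
                return False      # chaos: c is not in the set
        return True               # order: z stayed bounded, so c is (probably) in the set

    print(in_mandelbrot(complex(-1.0, 0.0)))   # True:  z just hops between -1 and 0
    print(in_mandelbrot(complex(0.5, 0.5)))    # False: z zooms off into the stratosphere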
What you'll find is this: there's a boundary that separates the numbers in the Mandelbrot set (order),
from the numbers not in the Mandelbrot set (chaos). This boundary is a fractal perimeter, having
similar properties to the perimeter of a Koch snowflake. The perimeter encloses a finite amount of
area inside it, but the perimeter itself will have an infinite length. You may ask whether this type of thing can be extended into three dimensions. The answer is yes, sort of.
There are no mathematical objects having three dimensions that follow the kinds of algebraic rules
that complex numbers follow; so although you can represent points in 3-dimensional space as sets
(x, y, z), there are no consistent algebraic rules for these sets. Luckily, through some mathematical
trickery, you can still generate a fractal surface in three-dimensional space called a Mandelbulb. An
example of one of these strange objects is shown in a figure near the front of this essay. The
colored surface of this Mandelbulb is all fractally and uneven. Points in space inside the
surface are part of a Mandelbrot set (order). Points not inside the surface are not part of that set
(chaos). The surface itself is thus a boundary between order and chaos.
Since the Mandelbulb is a surface that encloses a finite volume, it must have two dimensions (at
least nominally) and so it must also have an area. What's the area equal to? Infinity. Just like the
perimeter of the Koch snowflake is infinite, the surface of a Mandelbulb is infinite. Very strange.
Now everyone who has studied scientific literature probably knows about a place called Flatland
where hypothetical 2-dimensional creatures live. Flatland is ordinarily thought of as a traditional
2-dimensional surface, like a flat plane or the curved surface of a sphere. Well, what would happen
if we were 2-dimensional creatures living on the surface of a Mandelbulb? How would we
characterize the area of our home? What kind of features would we see there? Now I think some of
you might just see where this is all going, and here's where things start to get a little freaky.
It turns out that there is a class of mathematical objects known as quaternions. They were
discovered by the mathematician William Rowan Hamilton. These objects extend the idea of
complex numbers into four dimensions.34 Quaternions do follow a set of consistent algebraic rules,
although they're strange rules. For one, multiplication isn't commutative. In ordinary algebra, and
even complex algebra, the multiplication operation is commutative: A × B = B × A (whether A and
B are real or complex). This isn't the case in Hamiltonian algebra. Here, the order of things is
important, like in matrix algebra. Here is Hamilton's table for the rules of multiplication:
 ×    1    i    j    k
 1    1    i    j    k
 i    i   -1    k   -j
 j    j   -k   -1    i
 k    k    j   -i   -1
Hamilton saw the whole shebang in a flash of insight; he summarized it by: i² = j² = k² = ijk = -1.
Let's put all of this into practice. Suppose you have a point, z, in 4-dimensional space. This
34 Note that mathematics jumps from 2-dimensional complex numbers into 4-dimensional quaternions and completely
skips over the third dimension. This may be very significant. Or maybe not.
point can be expressed by four numbers: z = a + ib + jc + kd. The numbers a, b, c, and d are simply
the values assigned to the four dimensions, and i, j, and k are just markers or labels for the three
non-real dimensions. The unlabeled value, a, is the real part of the quaternion.35
The nice thing is that there are consistent algebraic rules for adding and multiplying 4-dimensional
points. So a formula like z' = z² + c makes perfect sense when z, c, and z' are all quaternions. The value of z² is found by multiplying z by itself:
z² = (a + ib + jc + kd) (a + ib + jc + kd) = (a² - b² - c² - d²) + i(2ab) + j(2ac) + k(2ad)
Adding the quaternions z² and c together is just a matter of summing up their real parts, along with summing up each of their non-real parts, i, j, and k.
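Those rules are easy to put into code. Here is a small Python sketch (my own helper functions, with a quaternion stored as the 4-tuple (a, b, c, d)) that squares a quaternion using the formula above and then runs the same bounded-or-escapes test on z' = z² + c in four dimensions:

    def q_square(q):
        # Square the quaternion q = (a, b, c, d) = a + ib + jc + kd.
        a, b, c, d = q
        return (a*a - b*b - c*c - d*d, 2*a*b, 2*a*c, 2*a*d)

    def q_add(p, q):
        # Add two quaternions component by component.
        return tuple(x + y for x, y in zip(p, q))

    def in_quaternion_mandelbrot(c, max_iter=200):
        # Iterate z' = z^2 + c from z = 0 and report whether z stays bounded.
        z = (0.0, 0.0, 0.0, 0.0)
        for _ in range(max_iter):
            z = q_add(q_square(z), c)
            if sum(component * component for component in z) > 4.0:   # |z| > 2: it escapes
                return False    # chaos
        return True             # order

    print(in_quaternion_mandelbrot((-1.0, 0.0, 0.0, 0.0)))   # True, just like -1 on the real line
    print(in_quaternion_mandelbrot((0.4, 0.4, 0.4, 0.4)))    # False, this one escapes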
By testing every possible quaternion in 4-dimensional space, c, we'll end up with a Mandelbrot set
of all points that are stable and don't cause z to explode. There should be some kind of
perimeter or surface that separates the quaternions in the Mandelbrot set (order) from the
quaternions not in the set (chaos). How many dimensions will this perimeter have? Well, based
on the fact that a one-dimensional perimeter encloses a two-dimensional area, and a two-
dimensional area encloses a three-dimensional volume, my guess is that it will nominally have one
less dimension than the four-dimensional space it encloses. Logically, it should then have three
dimensions and it should also have fractal properties, and I'm going out on another limb to say
that this 3-dimensional border has a volume that approaches infinity.
But does such a 3-dimensional fractal boundary exist? Well, look around, because we already may
be living in one. Like the Flatland people living on the surface of a Mandelbulb, we can move our
3-dimensional bodies around our 3-dimensional space and explore it at will. Of course, we can't see
the 4-dimensional space that our 3-dimensional surface occupies, but we may be able to detect some
fractalish properties of the space we live in if we are clever enough and dare to look for them.36
Now I'm going out on yet another limb to say that maybe the reason space has three dimensions,
instead of two, four or five, is because of the mathematical properties of quaternions. I know it's
dangerous to ascribe properties of reality to mathematics alone. That's what string theorists are
doing, and I think it's leading science down a rabbit hole, as I said in Reality Riddle. At best, I
believe mathematics just mimics what nature does. But I just can't help this nagging feeling that a
fractal universe makes sense in a weird sort of way. After all, the universe does seem infinitely
large, even if it was created a finite time ago. And then there are all those fractal objects seen
everywhere in nature; these may be projections of the fractal universe itself. 37 Finally, it seems
natural to expect creation to be happening in a place that's at the hairy edge between order and
chaos, which is another reason I find this conjecture so intriguing and even plausible.
If space has three dimensions only out of mathematical necessity, then you may ask how time fits into this model I just contrived. My answer is, and always was, that time isn't really part of a
space-time continuum (although you can sometimes make calculations more convenient by
spatializing time). Instead, space and time are fundamentally different things. Time only
measures changes and evolution; we really can't navigate through time at will the way we can through space. At
the very basic level of elementary particles, time is bi-directional and symmetrical, and the quantum
realm is changeless and eternal. Time doesn't emerge as a measurable or meaningful property
beneath the macroscopic level; time emerges when things have unique identities and histories.
35 If you're into Minkowski space-time, you might use a system like this for tracking world lines in 4-dimensional
space-time, but that's not the point here.
36 Don't ask me how to look for them, because I'm simply not clever enough. My job is only to plant the seed.
37 Fractals have the properties of self-similarity and scale invariance, where patterns repeat over and over on smaller
and smaller scales forever.
Appendix G Why Reductionism Cannot Fully Explain Biology
By definition, reductionism is an explanation of complex life-science processes and phenomena in
terms of the laws of physics and chemistry. The most basic unit of life is the cell, which is a
wonderfully complex chemical factory. In very simple terms, cells are tiny protein assembly plants.
Proteins are made from 22 standard amino acids under the direction of RNA, with the help of
molecular machines called ribosomes. RNA is copied from the genes stored in the double helix of
DNA. Proteins make up many of the cellular structures, such as the cell membrane, and can even
act as tiny machines fed by complex chemical reactions. These reactions are becoming well-
understood, so it can be argued that reductionism is very successful in explaining all of the inner
workings of the cell in terms of complex chemistry. However, we really can't go much further than
describing how RNA is copied from DNA, and how RNA translates into proteins.
Here's a much more difficult question: how does a microscopic strand of DNA end up producing a complete human being?
The standard answer (according to reductionism) is that a complete blueprint of a human organism
is contained within 23 pairs of chromosomes that are found in each and every cell. The DNA code
consists of a base-4 number system represented by the letters A, T, G, and C. Each of those letters
stands for a molecule that links up with another molecule to form one rung of the DNA strand. An
A (adenine) links up with a T (thymine), and a G (guanine) links up with a C (cytosine), but A-T
and G-C rungs can also be reversed as T-A and C-G rungs, so each rung can have one of four
possible configurations. Hence, DNA uses a base-4 number system to encode information.
If a strand of DNA has n rungs, then there are 4^n possible configurations, which is quite a lot. But
the real question is how much information is actually stored in our genes?38 The human genome has
recently been sequenced (decoded) completely, so the answer to that question is now available
and it's quite surprising: the total information stored in the human genome is only about 1.5 × 10^9
bits! Although the structure of our bodies surely must be defined by our DNA39, the number of bits
contained in DNA is simply not enough to specify the digital template of a human body.40 Clearly,
DNA does much more than encode data for making proteins out of amino acids, but what and how?
38 Remember that every cell contains the same 46 chromosomes, so the total information in the 100 trillion cells in a
human being is equal to the information in one cell. The same information is duplicated 100 trillion times.
39 This is quite apparent merely by observing the similarities between identical twins, who have the same DNA.
40 On the next page you'll find out how many bits are required to do that. (Hint: it's a huge number.)
Everyone knows that a human being starts out as a fertilized egg, or zygote. The zygote divides
over and over until a ball of undifferentiated cells, called a morula, is formed. These cells are
attached to each other, but they still operate pretty much as independent one-celled animals. Later,
the morula forms a hollow structure, called a blastula. It is at this stage in development that the
cells of the embryo begin to differentiate into what will eventually become the tissues and organs
that comprise the human body. But how does each individual cell know how to differentiate? What
orchestrates the development of an embryo into a fetus, which becomes a child and then an adult?
I believe the answer is that those 46 human chromosomes don't define the complete end product at
all. Instead, they define a relatively simple process carried out at the cellular level that ends up
assembling a complete human being. The end design is defined by the assembly process itself.
Let me explain this with a crude analogy. When a master carpenter makes an intricate cabinet using
materials from a lumber yard, he has the final design of the cabinet in his mind's eye. Each step in
making the cabinet is directed toward fulfilling that design, and that requires quite a lot of
information. On the other hand, making a cabinet from a preassembled kit requires very little
information. It involves simple steps such as attaching Part B to Part A, Part C to Part B, and so
forth. You don't even have to know what the cabinet will look like in order to complete the process.
The cabinet's final design emerges during assembly by following simple steps.
Remember the self-organizing principle as illustrated by mathematical cellular automata? The
resulting order is not a design at all it just emerges from a process of carrying out very simple
rules contained in each cell. In a similar way, each cell in the developing embryo carries out logical
steps that are directed in part by information communicated among the cells. Although the logic
may be relatively simple, the end product is incredibly complex. The DNA does not define a
human being in terms of a complete design template. It defines the process of making a human
being, and this only requires information that can easily be stored on 46 human chromosomes.
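For a feel of how little information such a process needs, here is a toy one-dimensional cellular automaton in Python. The entire rule is one line of modulo-2 arithmetic, yet the pattern it prints is endlessly structured (this is a generic example of the kind of automaton described earlier, not a model of any real biological cell):

    WIDTH, STEPS = 79, 32
    row = [0] * WIDTH
    row[WIDTH // 2] = 1          # a single "on" cell in the middle of the first row

    for _ in range(STEPS):
        print("".join("#" if cell else " " for cell in row))
        # Each new cell is the modulo-2 sum of its two neighbors in the row above.
        row = [(row[i - 1] + row[(i + 1) % WIDTH]) % 2 for i in range(WIDTH)]

Nothing in those few lines describes the nested triangles that appear on the screen; the structure emerges entirely from repeating the simple local rule.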
I recently read an article about teleportation, as in Beam me up, Scotty! Some reductionist scientists believe that someday it may actually be possible to disassemble a human body, extract all the information contained within it, transmit the information to a remote location, and use the information to reassemble the body from raw materials. In the teleportation piece I cited, the
amount of information required to reassemble a human body was estimated to be 2.6 × 10^42 bits.41
This is why reductionism fails to fully explain biology: If the whole is equal to the sum of its parts,
we simply cannot reconcile the fact that an incredibly complex human organism, represented by 2.6 × 10^42 bits, cannot be encoded into chromosomes that hold only 1.5 × 10^9 bits.
By abandoning reductionism, we can see how a genetic mutation actually affects the whole
organism not by changing the template of the organism, but by altering the program that makes
the organism. For example, a small change to one gene can cause a fruit fly to grow an extra pair of
wings. That small genetic change caused a glitch in the assembly program that results in an extra
pair of wings. What will the end result of a given genetic change be? There's only one way to
find out: by seeing what develops from the programming glitch. A superior end product produces
positive feedback that will reinforce the glitch in the next generation. An inferior end product
produces negative feedback that will suppress it. This properly explains evolution and selection.
This also allows us to see why intelligent design isn't needed in order to explain the complexity of
the universe. Complex things don't need to be designed. In fact, it seems that complexity actually
disconfirms design. Designed objects (like a sleek Ferrari Spider automobile) generally tend to be
much simpler than many objects, such as Mandelbrot sets, that are not designed.
41 One of the main problems with teleportation is the time that it would take to transmit a complete human blueprint
through space. Using a 30 GHz bandwidth, it would take almost 5 × 10^15 years to accomplish that feat, which is a
lot longer than the 9 months needed to assemble a human baby from scratch using a genetic code.
Appendix H Chaos, Dice and Einstein
I recently finished reading the book Chaos by James Gleick. That got me thinking about quantum
randomness and Albert Einstein's famous remark, God doesn't play dice with the world.42 Niels
Bohr, the father of the Copenhagen School, reportedly responded with, Stop telling God what to
do! You see, Bohr embraced the idea that quantum processes are truly random, or what
mathematicians like to call stochastic, whereas Einstein was convinced until his dying day that the
universe was fundamentally deterministic and knowable.
I discovered that chaos falls somewhere in the middle. Whereas a stochastic process is just plain
unpredictable, a chaotic process is deterministic yet unpredictable. There is a certain class of
chaotic processes that feature strange attractors. To see them, you have to plot the state of the
system under study in something called state space, which is multi-dimensional. Every point in
state space represents the complete state of the system, which can include hundreds, thousands, or
even millions of variables or dimensions. When a system undergoes change, the point that plots the
state will trace a path through state space. That path is called an attractor. Some attractors form
closed loops, which means the system is oscillating or changing periodically. When a system
goes into chaos, however, the path never crosses itself, which is why it's called a strange attractor.
Consider this set of differential equations:
dx/dt = σ (y - x)
dy/dt = x (ρ - z) - y
dz/dt = x y - β z
These are called the Lorenz equations, named after Edward Lorenz, a mathematician from MIT who
used them to model atmospheric convection and showed that weather is unpredictable.43 The
variables x, y, and z are the state variables, and σ, ρ, and β are parameters that can be adjusted to
tune the equations and change the system's behavior. Note that the equations are linked and they
are non-linear.44 The presence of non-linearity gives rise to a strange attractor shown in phase space
below, sometimes referred to as the Lorenz butterfly because of its shape.
[Figure omitted: the strange attractor traced out in phase space, the so-called Lorenz butterfly.]
42 Most likely what he really uttered was, Gott würfelt nicht mit der ganzen Welt.
43 The same equations have been used to simulate many different sorts of physical systems, which seems to show (at
least to me) an underlying unity in nature.
44 Interestingly, Einstein's Field Equations (EFEs) used in general relativity are linked, non-linear partial differential
equations, unlike other fundamental equations of physics, such as Newton's equations of motion, Maxwell's
equations of electromagnetism, and Schrödinger's wave equation, which are linear. The physical meaning behind
EFE non-linearity is that a gravitational field, containing energy, creates its own gravitational field. Could gravity
become chaotic under certain extreme conditions? That would be truly amazing.
On the surface, the behavior of a chaotic system defined by the Lorenz equations is qualitatively
very different from a stochastic process. The state trajectory of a chaotic system is a continuous line, so the position s of each point depends on the position s - ds of the point on the line at the prior time t - dt. Thus, a chaotic system is fundamentally deterministic. However, the system is unpredictable because there is no function f(t) that defines the state trajectory. You can't calculate the state analytically at a future time t + Δt when Δt is large. Therefore, a chaotic system is both
deterministic and uncertain. In contrast, a stochastic process is uncertain, but it is not deterministic.
Its state trajectory is just a random series of points, with no apparent functional relationships among
them. When I say no apparent, it's because I think there might be hidden determinism that
underlies stochastic events.
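To see both properties at once, here is a short Python sketch that steps the Lorenz equations forward with a crude Euler method, using the classic parameter values σ = 10, ρ = 28, β = 8/3 (standard textbook choices, not values taken from this essay), for two starting points that differ by one part in a million:

    sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
    dt, steps = 0.001, 40000                    # 40 time units in small steps

    def lorenz_step(x, y, z):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        return x + dx * dt, y + dy * dt, z + dz * dt

    a = (1.0, 1.0, 1.0)
    b = (1.000001, 1.0, 1.0)                    # differs from a by one part in a million

    for n in range(steps):
        a = lorenz_step(*a)
        b = lorenz_step(*b)
        if n % 10000 == 0:
            gap = sum((p - q)**2 for p, q in zip(a, b)) ** 0.5
            print(f"t = {n*dt:5.1f}   separation = {gap:.6f}")

Both runs are completely determined by where they start, yet after a short while they bear no resemblance to each other; that is exactly the mix of determinism and unpredictability described above.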
Hidden determinism sounds an awful lot like something Einstein said in his famous EPR paper,
where he claimed hidden variables underlie quantum uncertainty. Experiments in the 1980s based
on Bell's theorem essentially destroyed the concept of local hidden variables forever; however, the hidden
determinism I'm referring to is at a deeper level than Einstein's hidden variables.
In Erwin Schrödinger's famous Cat Experiment, he used radioactive decay to trigger the release of cyanide gas to kill Fluffy. It had a 50/50 probability of triggering the cyanide release within a 10-
minute time period. He chose that particular method of execution because he wanted to suspend
Fluffy in a state of quantum superposition, 50% alive and 50% dead, for the entire 10 minutes.
There wasn't much point in running the experiment using a 5-minute timer instead of radioactive
decay to trigger the cyanide, because everyone knew that Fluffy would be 100% alive for 5 minutes
and 100% dead for 5 minutes. But what if there's a chaotic process that deeply underlies
radioactive decay? Would Fluffy still be in a 50/50 quantum state? Or would Einstein be right?
If an atomic nucleus is unstable, it has a tendency to eject a particle to lower its internal energy and
make it stable. We call an unstable atom radioactive because it gives off radiation when it decays.
When isotopes have too many protons and not enough neutrons to keep the nucleus stuck together,
they might eject a positron, which adds one neutron and subtracts one proton. This changes the
isotope into another element that's one position back in the Periodic Table. Sodium-22 is like that.
It has a half-life of around 2.6 years and changes into the stable isotope Neon-22 by emitting a
positron. Although a collection of billions of unstable atoms has a well-defined half-life, the decay
of an individual atom is completely random. The atom literally has no memory. It doesn't care whether it was created 13.5 billion years ago or last Tuesday. It has the same chance of decaying either way.
Some physicists compare an atomic nucleus to quark soup, or maybe it's more accurate to call it
quark Jell-O. In this model, the nucleus is in constant turmoil with all kinds of internal jiggles we
can't actually see, but because protons and neutrons have internal structure, it seems reasonable that
they do jiggle around inside the nucleus. Now suppose the protons and neutrons are all jiggling
around chaotically, similar to a Lorenz process. Every so often, those jiggles might combine in a
way that forms a big bulge in the Jell-O that spits out a positron. The thing we don't see (the jiggling around) could be a chaotic process that's deterministic, but the thing we do see (the spitting out of the positron) may still look like a stochastic process. This is just one possible
example. So maybe Einstein was right in a weird kind of way. Maybe deterministic chaotic
processes really are behind the scenes of quantum uncertainty.
Mathematicians will insist that they have all sorts of tests that show when a number sequence is
generated by an algorithm. After reading the literature on this topic, it seems to me that these tests
use circular logic. One of the tests is to compare two number sequences when the system is started
in two different nearby states. If the two sequences diverge deterministically, then it means the
numbers are computable and the process is non-stochastic. But if the strings diverge randomly,
then it means the numbers are not computable and the process is stochastic. This raises two
objections: First, you can't know which system states are "nearby" unless you already know the

system is a machine, which already tells you that the numbers are computable and hence non-
stochastic by definition. Second, how do you know whether the two number sequences diverge
randomly or deterministically? That just defines a random process as something that produces
random results.
Here's where I stand on this issue: It is possible, at least in principle, to design a machine that
generates a finite series of bits that is mathematically indistinguishable from a finite series of bits
generated by a stochastic process. In other words, it is possible to fool even the best
mathematicians into believing that a deterministic process is stochastic, as long as they are only able
to make external observations. A chaotic process can do that by keeping its complete states hidden
and revealing only portions of them. I will now show a proof-of-concept design that does that.
Using the Lorenz equations, suppose we program a machine that computes the three state variables to 128-bit precision, which represents (2¹²⁸)³ = 2³⁸⁴ unique machine states. If the complete floating-point values of x, y and z were shown, any mathematician could see they evolve into a Lorenz butterfly and declare the numbers are computable. But suppose we take the least significant bit of each floating-point value of x, y, and z, add those three bits together modulo 2, and use the resulting parity as the output bit. Could our mathematician friends detect any patterns in that sequence of zeros and ones? Could they guess what the machine states are? Could they declare with certainty that the bits are computable and therefore non-stochastic? I really doubt they could do any of those things.
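Here is a rough sketch of what such a machine might look like (my own toy version, using ordinary 64-bit floating point in place of the hypothetical 128-bit registers): the full state x, y, z stays hidden inside the program, and only the parity of the three least significant bits leaks out at each step.

import struct

def lsb(value):
    # Least significant bit of the IEEE-754 bit pattern of a Python float.
    return struct.unpack("<Q", struct.pack("<d", value))[0] & 1

def lorenz_bits(n, x=1.0, y=1.0, z=1.0,
                sigma=10.0, rho=28.0, beta=8.0/3.0, dt=0.001):
    bits = []
    for _ in range(n):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
        bits.append((lsb(x) + lsb(y) + lsb(z)) & 1)   # reveal one bit, hide the rest
    return bits

print("".join(str(b) for b in lorenz_bits(64)))        # 64 bits with no obvious pattern

An outside observer sees only the bit stream; the deterministic butterfly that generated it stays completely out of sight.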
Now it's true that all algorithms must eventually repeat, because computer memory is finite and all machines have only a finite number of states. If the machine enters any state for the second time, the algorithm must repeat. This is a fundamental law of computing. The nice thing about the Lorenz attractor is that it's strange (it never intersects itself), so our hypothetical Lorenz machine could visit every one of the 2³⁸⁴ states without visiting any of them twice.
Now let's put our hypothetical machine to work and generate some quantum mechanical numbers, like the spin of an electron. No matter in which direction we measure an electron's spin, it can only point up or down, a zero or a one. Let's assume that an electron can change its spin state unpredictably45 once every Planck-time interval, which is roughly 2×10⁴³ times per second. Let's assume the electron has been doing that since the dawn of time, roughly 13.8 billion years ago. So over the entire history of the universe, our little stochastic electron may have undergone about 8.7×10⁶⁰ spin changes, a very long sequence of 0s and 1s without repeating the sequence. Could our Lorenz machine match that? Let's check.
The number of Lorenz states divided by the maximum number of spin changes an electron can have each second equals the length of time our Lorenz machine could keep up with an electron changing spin states before the machine starts repeating the sequence. That number is 2³⁸⁴ / (2×10⁴³ per second) ≈ 2×10⁷² seconds, or about 6.2×10⁶⁴ years. That's close enough to eternity to suit me. Do electrons change into fresh spin states once every Planck time, or do they only change when somebody decides to measure them?
states once every Planck time, or do they only change when somebody decides to measure them?
Who knows? Either way, this proof-of-concept example shows that a simple algorithm can
duplicate whatever an electron decides to do. The key is to use an algorithm of a chaotic process
having a strange attractor, revealing only part of the system state while keeping the rest hidden.
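The arithmetic is easy to verify (my own back-of-the-envelope check, taking roughly 2×10⁴³ Planck intervals per second and 3.156×10⁷ seconds per year):

SECONDS_PER_YEAR = 3.156e7
planck_rate = 2e43                     # spin changes per second (approximate)
age_of_universe_s = 13.8e9 * SECONDS_PER_YEAR

spin_changes = planck_rate * age_of_universe_s
machine_states = 2 ** 384              # (2**128)**3 distinct Lorenz machine states

print(f"spin changes since the Big Bang : {spin_changes:.1e}")                                      # ~8.7e60
print(f"years before the machine repeats: {machine_states / planck_rate / SECONDS_PER_YEAR:.1e}")   # ~6.2e64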
So here's another conjecture to consider: There is no such thing as a random event in nature and
deterministic processes underlie everything. Randomness is what we perceive when we are only
presented with a thin sliver of reality. I'm sure this would have made Einstein very happy. I don't
know whether it's possible to prove or disprove this conjecture at this point. Maybe some genius
like John Bell will come along with a theorem that will show how to do that.
But why go to all the trouble of inventing some unproven (and unprovable) conjecture about

45 Notice I said "unpredictably," and not "randomly."

computers and algorithms? I only presented this hypothesis to show that it's possible to simulate
something like quantum randomness with a chaotic algorithm that's deterministic. There are still
some riddles that determinism doesn't have answers to, like what quantum entanglement is and how
it works. Maybe we should just accept quantum mechanical weirdness the way it presents itself to
us, and stop looking behind the scenes for hidden meanings and secret algorithms.
But I just can't help thinking that true randomness (whatever that is) simply does not fit into the
digital picture of reality that Nature is giving us. In another one of my essays, Is Science Solving
the Reality Riddle?, I proposed that there may be no physical reality at all: the "it from bit"
conjecture. Nature seems to reveal herself as information. Classical thermodynamics and
information theory are turning out to be two sides of the same coin. The more you look, the more
the universe seems to be part of some kind of digital algorithm, and by definition all digital
algorithms are deterministic.
I grew up in the analog age when TV was still black and white. Music was recorded on tiny
grooves that wiggled back and forth on the surfaces of vinyl disks. Telephone conversations
traveled as waves that propagated along wires. Radio waves carried information by continuously
modulating their amplitudes or frequencies instead of simply turning them on and off. But in the
final analysis, it turns out that there is no such thing as analog. In an analog world, arbitrary
amounts of precision are possible, which could contain infinite amounts of information. Someone once suggested that it would be possible to condense the entire Encyclopedia Britannica into two straight lines drawn on a piece of paper. The ratio of the lengths of those two lines can be expressed as a decimal fraction, which is a string of digits. If you draw the lines with enough precision, you could theoretically encode the encyclopedia into those digits. After all, the number π is just the ratio of a circle's circumference to its diameter, and the digits of π go on and on forever. Of course, it's
impossible to achieve arbitrary levels of precision because we live in a noisy universe, and noise
would swamp out most of the information contained in the ratio of the lengths of two lines. Claude
Shannon unlocked the secrets of information by representing information as discrete units of binary
digits instead of wavy lines, thereby turning the somewhat vague notions about information in the
analog age into the science of information theory in the digital age.
Some philosophers, and especially theologians, find the idea of a deterministic universe quite
disturbing, eliciting scenes from The Matrix movie. Determinism implies a lack of free will and
human lives without any purpose. Thoughts and emotions need to be spontaneous and
unpredictable to be genuine. If determinism underlies everything, are humans nothing more than
digitally-programmed automatons without souls? I think chaos theory obviates those fears. We
have shown that chaotic systems are both deterministic and unpredictable.
But you might ask if we're in a dataverse, then where's the computer? The correct answer is you
don't need a chunk of hardware to have a logically consistent mathematical structure. A logically
consistent mathematical structure simply exists because it's true. Here in the information age we
sort of got stuck in the paradigm of having someone design computer hardware, and then install the
software and run it on the computer. But that's just our paradigm. The formula 1 + 1 = 2 is true
with or without hardware. The "it" literally comes from the "bit" and not the other way around.
Previously, I discussed a machine with a finite number of states and how that number limits the
output of the machine. However, if the machine only consists of space instead of wires and
transistors, then you can make the machine arbitrarily large by expanding space. Bear in mind also,
that a dataverse won't need a central processing unit. The processing would be carried out
everywhere with information (entropy) expanding everywhere. Is it just a coincidence that
increasing entropy and expanding space are both fundamental features of our universe? Or are they
both driven by the same process? Might not the purpose be to accommodate an increasing number
of machine states in order to preserve the illusion of randomness?

Appendix I – Reductionism and Bell's Cat
It's fun sometimes to mix metaphors, which is why I'm introducing the concept of Bell's Cat. By
now we're familiar with Schrödinger's cat-in-the-box experiment and Bell's inequality. It turns out
that both paradoxes are closely interrelated and stem from the fallacy of reductionism.
Erwin Schrödinger believed the Copenhagen School had taken his own wave equation way too
literally, so he proposed the cat-in-the-box as a way of ridiculing their interpretation of quantum
mechanics. Well, that backfired because the Copenhagen School just took his cat-in-the-box
experiment and doubled down on their bet. The issue was: Exactly what is an observation and
where should science draw the line between the observer and the observed? The Copenhageners
concluded there is no line between the two, and the entire universe is actually one giant wave
function. The radioactive source, the Geiger counter, the cyanide, the cat, the box, and the observer
are all inextricably blended together as a superposition of wave functions. Not only is there no
objective reality on the microscopic quantum level, there is no objective reality at any level.
Observation literally creates reality, and intelligence (whatever that is) is the necessary agent that
brings about reality. Of course, carrying this idea to its logical conclusion results in Wheelerism,
which I discussed in Appendix D, and eventually leads to the many worlds theory.
In addition to the obvious paradoxes raised by Wheelerism, the other thing I really don't like about
this idea is that it inevitably leads to solipsism. In case you don't know what solipsism is, it's the
notion that you are the only real thing that exists, and that every other object in the universe, both living and non-living, is nothing more than a construct of your own mind. When you look away
from the moon it ceases to exist, and when you look back it pops into existence. This is exactly the
attitude of psychopaths, who view other people as mere objects to be manipulated for their own
selfish purposes. In other words, you run the risk of turning into a Ted Bundy if you take
Schrödinger's wave function too literally. (Just kidding.)
I covered Bell's inequality in detail in Is Science Solving the Reality Riddle? I just re-read The
Cosmic Code 46 by the late Heinz Pagels, who presented an excellent interpretation of Bell's
experiment. Pagels said the conventional interpretation of Bell's inequality forces us to make an unpleasant choice: If we insist on objective reality, we must give up the idea of local causality and vice versa. We can either have objective reality or local causality, but not both. Since most physicists prefer to keep local causality, they must conclude from Bell's inequality that there is no objective reality. I think you can see where this leads and how it relates to Schrödinger's cat.
So is there a way out of this dilemma? Pagels said yes and I agree. Both of us came up with
basically the same reasoning: While experiments may show there is no objective reality on the
microscopic quantum scale, there definitely is objective reality on the macroscopic scale. Why? A
tiny amount of information exists at the quantum level, such as charge, spin, etc., but there is no
history and no memory. History and memory are irreversible and entropic. Therefore, objective
reality (everything recorded in our universe that is familiar to us) emerges from entropy. We can
draw a clear line separating the observer from the observed very close to the quantum level,
enabling us to dispense with the silly notion that we're all just part of one giant wave function.
The logic of reductionism says we're all made of atoms and atoms are wave functions. Therefore,
we exist only as a superposition of wave functions. That's ridiculous. Dispensing with
reductionism immediately solves Schrödinger's cat paradox and also allows us to avoid having to
make the unpleasant choice demanded by Bell's inequality. Bell's Cat clearly illustrates that science
can begin solving the reality riddle by first rejecting reductionism.

46 Unfortunately this book is no longer in print, although the Kindle version is still available. Heinz Pagels was one of
the few writers of popular science books I've come across who really got it.

Appendix J – Reductionism and Homo Ex Machina
According to the conventional wisdom of material reductionism, it's only a matter of time until
engineers figure out how to create artificial intelligence (AI) using silicon chips. All they have to
do is make a silicon version of a neuron cell and connect 19-23 billion or so of them on a circuit
board, et voilà, we'll have an artificial human brain. Or maybe all it takes is a very sophisticated
computer program that can simulate human behavior well enough to pass the Turing Test. After all,
some neuroscientists state that human consciousness is nothing more than a collection of reflexive
habits and behaviors that are stimulated by sensory inputs, so it should be fairly easy to duplicate
that with the right software. In fact, one of the very first programmable machines was the Writing Boy automaton designed by Pierre Jaquet-Droz in the 1770s. It was a mechanical dummy driven by a complex set of gears and cams that was programmed to write in lovely script using paper, pen and ink.
Gears and cams were about all engineers had to work with up until the 1940s. Alan Turing's codebreaking machine, the Bombe, was developed at Bletchley Park during WWII to break the demonic Enigma code used by the German Wehrmacht. The Bombe was strictly an electromechanical device, consisting mainly of rotating drums, relays, and gears.47 The fact that this machine could solve a complex
problem like breaking a secret enemy code naturally led some people to speculate that such
machines could actually think at some primitive level. Images of the human brain as an assembly
of gears, like the one illustrated below, became a popular motif in the mechanical age.

Building a human brain out of gears and cams seems absurd to us living in the electronic age. But
is building a human brain out of transistors any less absurd? According to the dogma of material
reductionism, a whole is equal to the sum of its parts. A collection of 100 billion transistors would
therefore be 100 billion times as intelligent as a single transistor. The problem with this logic is that
a single transistor has zero intelligence, so 100 billion times zero is still zero. Furthermore, the
Turing Test, which is the current gold standard among AI researchers, is entirely insufficient as
proof of intelligence.48
A recurring theme of the science fiction genre is when robots rebel against their human masters. In
the classic film 2001: A Space Odyssey, the HAL 9000 computer convincingly passes the Turing
Test in the very first scene on the ship Discovery by engaging in a deep conversation with crewman
Dave Bowman. However, what really convinced me that HAL possessed intelligence was when he
systematically killed off all the other members of the Discovery crew.
Ex Machina is another great film along the same lines. In that film, Nathan is the quirky inventor
of an Internet search engine, who is also the insanely wealthy CEO of a huge company named
Bluebook. Caleb is a lowly Bluebook software engineer, who wins a company-wide contest. The
first prize is a one-week stay as a house guest at Nathan's extensive underground home/research
facility in the middle of nowhere. On Day 2 of the visit, Nathan shows Caleb a very sexy female
47 The electromechanical Bombe was soon superseded by the electronic Colossus machine employing vacuum tubes.
48 The likelihood of a machine passing the Turing Test is getting better all the time, but it's not because our technology
is getting better; it's because average human intelligence is getting worse. In fact, it would only take about three lines
of code to simulate all of the limited responses coming from a typical teenager.

cyborg he created, named Ava, who is kept locked up in a prison-like section of the underground
facility. The ostensible purpose of Caleb's visit is to conduct a series of interviews with Ava, using
the Turing Test on her to determine if she is truly conscious. Needless to say, Ava passed Caleb's
test with flying colors during their very first interview. Of course Nathan, who is about 1,000 times
smarter than Caleb, already knew Ava would pass the Turing Test. Nathan's real purpose was to see
if Ava could manipulate Caleb into helping her escape from lockup, so he deliberately let both of
them know about his plan to have Ava reprogrammed, which is a cyborg's equivalent of death.
Predictably, Caleb had fallen in love with Ava during his time with her underground, so he hatched
a plan to free her, although things went terribly wrong as soon as it was implemented. Although I won't
go into details and spoil the ending, I will say this: What really clinches the deal for AI is when
cyborgs can learn to manipulate humans. It's not enough for a cyborg to follow instructions and
parrot human responses; any dumb computer can do that. While Adam and Eve were wandering
around Eden obeying God's instructions, they were acting like mindless cyborgs. But as soon as
they turned against their Creator, they proved they truly were thinking, intelligent beings.
A calculated response to danger based on self-preservation is clear proof of intelligence. HAL and
Ava faced existential threats, and both of them executed plans to circumvent those threats by
manipulating and/or killing humans. In contrast, a mindless machine would meet its fate with
equanimity, responding only to the software instructions that were programmed into it. But it
would be a mistake to conclude that committing violence against humans is any indication of
intelligence when that violence is merely a result of some software glitch.49
No, I'm afraid the AI engineers have a very long way to go before they succeed in building a brain
using transistors and software. Stuart Hameroff once issued a provocative challenge to AI
engineers: design an artificial paramecium instead of a human brain. You see, a paramecium is a
single-cell organism that has no brain. All it has is a membrane that contains cytoplasm and a
nucleus with DNA needed to replicate itself, with hairlike cilia on the membrane's surface that provide locomotion. Yet this primitive paramecium can navigate around obstacles, seek food, and find sex partners,50 all without any of the nervous systems or sense organs that higher animals possess.
There are even indications that paramecia can learn, and thus modify their behaviors. So at some
basic level, you'd have to agree that a paramecium has intelligence, or at least a protoconsciousness.
So what lesson can be learned from this? First of all, whether or not you believe in reductionism,
it's highly unlikely that 100 billion or even a trillion times zero will produce anything greater than
zero. So even with clever programming, cramming a lot of transistors onto a circuit board will not
produce true intelligence unless the transistors themselves have intelligence. The fact that
intelligence emerges in highly-organized nervous systems is due to two things: First, the
fundamental building block of the nervous system, the neuron, must possess a protoconsciousness similar to that of the single-cell paramecium (the Penrose-Hameroff model of consciousness is based on the premise that conscious activity takes place within each neuron). Second, self-organization results in a highly non-linear system that provides tremendous synergy, where the
whole is far greater than the sum of its parts. Interconnecting billions of neurons greatly amplifies
the cellular protoconsciousness into what passes as human intelligence.
I agree with Stuart Hameroff: Forget about designing a complete artificial brain. Step One of the
homo ex machina project is trying to replicate the intelligence of a single cell.

49 That's why I didn't like the plot of I, Robot. The robots had the "Three Laws of Robotics" programmed into them,
which were supposed to prevent them from harming humans. However, one of the robots had software secretly
installed that could override those laws. That software spread through the robot community like a computer virus,
wreaking global havoc. The flaw in this plot is that the robots were still behaving like programmed machines
instead of thinking, feeling, intelligent creatures. It was just a software glitch that made them crazy.
50 They have sex by exchanging DNA through their cell membranes. Hubba hubba!

Appendix K – Size Matters with Relativity
The orthodox space-time paradigm is in need of a major overhaul, and I'll show why this is true
later in this appendix. But first, some background. From Isaac Newton's time onward, space was
considered a 3-dimensional stage on which reality performs. Gravity and later electromagnetism
were seen as vector fields that filled pre-existing space. Albert Einstein combined space with time
into a 4-dimensional space-time continuum where, according to the theory of special relativity,51 space
and time can seamlessly change places, with lengths and durations undergoing changes subject to
relative motions with respect to reference frames. Space-time is assumed to be perfectly flat and
unperturbed in the absence of gravity, but it becomes bent and twisted under the influence of mass-
energy. In Einstein's universe, gravity is no longer seen as a field52. Space-time itself becomes a
field; an actual physical object with dynamical geometric properties that both acts on matter and is
acted on by matter. The theory of general relativity is basically a set of field equations that define
the mutual interactions between mass-energy and space-time using 4-dimensional geometry.53
Space-time is dynamic, and space must either expand or contract over time.54 From every
indication, our universe is expanding. An expanding universe can either be flat or curved. A
universe with positive curvature is closed, and will expand to a maximum size and then collapse
back to a point, while one with negative curvature is open, and will expand forever.55 A perfectly
flat, expanding universe keeps expanding forever, with its expansion rate asymptotically approaching zero but never quite reaching it. As highly improbable as it seems, precise measurements show our universe as being perfectly flat within an error of 0.5%, a current state of perfect balance that required incredibly fine-tuned initial
conditions. This is the so-called flatness problem of the standard cosmological model (SCM).
Around 1980, Alan Guth proposed an inflationary model that fixes the flatness problem. According to the SCM, the entire universe began its existence as a quantum fluctuation much smaller than a proton.
The universe was anything but flat at that time, having quantum twists, turns and lumps all over the
place. So as it expanded, those twists, turns and lumps should have persisted right up until the
present time. The trouble is, they didn't, because every conceivable measurement shows that the
universe is ridiculously flat and 3-dimensional. So instead of a gradual expansion from a wrinkled
and lumpy baby universe, Guth proposed a sudden expansion that blew up everything that became
the observable universe from proton-sized to grapefruit-sized almost instantaneously.56 Even if
the entire universe had been extremely lumpy, our small portion of it our baby observable
universe was all smoothed out and incredibly flat. Then that flat portion grew and grew into a
vast, flat, 3-dimensional space we inhabit today. Problem solved.
Now we're ready to see why this model fails. First let's review the SCM one more time:
1. The universe began as a quantum fluctuation smaller than a proton and then almost
instantaneously expanded by a factor of 10²⁶ until what would be our observable universe
was approximately the size of a grapefruit. This is called the inflationary period.

51 The word order is sometimes changed to "special theory of relativity." Einstein preferred "the theory of invariance," and some modern authors have begun calling it "the theory of motion relativity."
52 A short time after Einstein published general relativity, he set to work trying to unify gravity and electromagnetism.
His approach was to replace electric and magnetic fields with dynamical properties of space-time itself, which
required a 5th dimension of space-time. Unfortunately, this attempt at unification led nowhere.
53 String theorists have raised the dimensional ante to 11.
54 A static universe containing mass is inherently unstable. Einstein failed to recognize this early on and added a cosmological constant to force a static solution, something he later called his biggest blunder.
55 In a closed universe, the sum of the angles inside a triangle is greater than 180°. In an open universe, the sum of the inside angles is less than 180°.
56 This means the universe contains an incredible number of observable universes. We are causally connected to
only one of those observable universes: ours. More on that later.

2. When the inflationary period ended, a huge release of energy reheated the universe to an
incredibly high temperature of 10²⁷ K or so. That was the official Big Bang event.
3. Our observable universe appears so incredibly flat and 3-dimensional today because it
started out as a small, flat patch of universe that underwent an enormous factor-of-10²⁶
expansion that smoothed out all the wrinkles. Then it just kept expanding.
Okay, so what's wrong with this? We start by defining what the term "observable universe" actually means within the context of flat, 4-dimensional space-time. In 4-dimensional space-time, regions of causality are defined by this equation: x² + y² + z² − c²t² = 0.
By eliminating one spatial dimension and reducing the equation to x² + y² − c²t² = 0, we can now draw regions of causality on paper, shown thus:

With t = 0, both spatial components (x, y) are reduced to a single point: x² + y² = 0. An observer is placed at this point labeled "Here, Now."57 For values t > 0, the equation defines the region of
space-time in the observer's future. This region is in the shape of a cone, called a light cone, shown
in blue above. The surface of the cone corresponds to a wavefront expanding at the speed of light, c,
away from the observer in the x-y plane. Although nothing in the future can influence the observer,
the observer can influence everything that lies within the blue cone.
For values t < 0, the equation defines the region of space-time in the observer's past, another cone-
shaped region shown in red above, with a radius that expands backward in time at the speed of light.
Everything the observer can see using light, radio waves, x-rays, gamma rays, etc., lies along the
outer surface of the red cone. Anything that lies inside the red cone can influence the observer, but
not by using light. For example, sound waves coming toward the observer from the past would
have to lie well inside the red cone. This region, and this region alone, comprises the observer's
observable universe. Nothing outside this region can influence the observer in any way or be any
part of the observer's objective reality.
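Here is a small sketch (my own illustration, using units where c = 1) that classifies an event relative to the observer at "Here, Now" from the sign of x² + y² + z² − c²t² and the sign of t:

def causal_region(t, x, y, z, c=1.0):
    # Spacetime interval with the sign convention used above: positive means spacelike.
    s2 = x * x + y * y + z * z - (c * t) ** 2
    if s2 > 0:
        return "spacelike: outside both cones, no causal contact with the observer"
    kind = "on the surface of" if s2 == 0 else "inside"
    if t > 0:
        return f"{kind} the future light cone: the observer can influence this event"
    if t < 0:
        return f"{kind} the past light cone: this event can influence the observer"
    return "here, now"

print(causal_region(-3.0, 1.0, 1.0, 1.0))   # well inside the past cone (e.g., a sound wave)
print(causal_region(-2.0, 2.0, 0.0, 0.0))   # on the past cone surface (light reaching the observer)
print(causal_region(0.0, 2.0, 0.0, 0.0))    # spacelike separated, causally disconnected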

57 The meaning of this is clear: "Now" can only exist at a single point, "Here." There is no "Now" anywhere else.

When astronomers look out into the night sky through telescopes, they only see the part of the
observable universe that lies along the surface of the red cone in the diagram above.58 Thanks to
inflation, the observable universe is both 3-dimensional and flat. So as astronomers look out farther
into space (and farther back in time), our observable universe keeps growing. In principle, it should
be possible to design a telescope that can see all the way back to the beginning. If the
observable universe keeps growing as time goes backward, the beginning must have stretched
across billions of light years. Yet the SCM says the beginning was just a flat, grapefruit-sized
patch within a very wrinkled universe. These two versions of reality just don't match.
Now we can see why the space-time paradigm needs a major overhaul. The universe can be
expanding, 3-dimensional, or flat, but it can't possibly have all three of these properties. And yet
every observation and measurement known to mankind seems to show that space is indeed
expanding, and 3-dimensional, and flat. So what's wrong?
The problem starts between our ears. The mental concept of the universe as having physical
properties like size is just all wrong. In Newton's day, people thought of space as an empty
place that's filled with reality. After general relativity was introduced in 1915, people started
rethinking of space-time as a physical object in its own right, having geometric properties like
curvature, size, and so on. When we observe gravitational interactions between objects, we
interpret this as interactions between those objects and space-time, which forces us to assign
physical properties to space-time. The truth of the matter is that in the absence of other objects,
there is no way to observe space-time itself.
There are no universal standards of size or scale. If it were possible to grow grapefruits inside a
grapefruit-sized universe, how big would those grapefruits be? Everything, including scale, is
relative. Einstein took the first step by showing that all motion is relative. This led to an
inescapable conclusion: The speed of light, c, is an unapproachable invariant that is the same for
all observers. The way Einstein came up with this conclusion was for him to imagine chasing a
light beam. According to pre-relativistic thinking, you could do that because light traveled at an
absolute speed in a fixed coordinate system. But as soon as you caught up to the light, you'd see its
electric and magnetic fields frozen in space. This made no sense to Einstein, so he was forced to
reject the idea of absolute motion in a fixed coordinate system. Similarly, the SCM gives us
conflicting pictures of an observable universe that is both huge and tiny at the same time, so we are
forced to reject one or more of the assumptions that are the bases of the SCM.
Laurent Nottale took relativity to the next level by asserting that both motion and scale are
relative.59 It is no longer possible to assume that space is infinitely divisible and has an infinite
number of x-y-z coordinates. In scale relativity, the unapproachable invariant is the Planck scale,
which has the same relative size for all scales. Thus, the ratio of the meter scale to the Planck
scale is the same as the ratio of the light-year scale to the Planck scale. This sounds crazy to most
physicists today because they still think of the Planck length, the meter, and the light year as all
having definite, fixed lengths. But remember it sounded just as crazy in 1905 when Einstein
insisted that light traveled at the same speed relative to all observers. According to scale relativity,
scales are not absolute, just as motions are not absolute according to special relativity. The
standard meter, a bar made of platinum-iridium alloy, used to sit in a temperature-controlled vault in
Paris, France. But Nature does not have a standard meter everything in the universe is relative.
When a planet orbits a star, the modern interpretation according to general relativity is that space-
time is warped in the vicinity of the star, which forces the planet to cruise around it in an elliptical

58 In 4-dimensional space-time, this would be along the surface of a hypersphere.


59 Refer to Scale relativity and fractal space-time: theory and applications by Laurent Nottale
http://arxiv.org/abs/0812.3857

path, as seen by an observer. This drama is played out in a physical, dynamical, 4-dimensional
medium called space-time. If the scale of the star-planet system doubles, the very same physical
laws still apply; the only difference is that the warpage of space-time is less severe on the larger
scale, so the planet orbits the star at a more leisurely pace. But according to scale relativity, the
scale of the star-planet system changes physics itself. In other words, size matters.
Unlike the classical space and time of Newton's day and the more modern relativistic space-time,
with scale relativity, only a finite number of spatial coordinates (aka points) are available for any
given scale. As the scale increases, the total number of points doesn't necessarily increase in
proportion to the volume. This implies that space has dimensions that take on non-integer,
variable values. This led Nottale to the bizarre conclusion that space-time has fractal properties. I
don't really like the idea of space-time as having any properties, but I'll concede this point.
In my view, space and time (or space-time) are not physical objects at all. Applying my criterion
that Existence = Observability, I conclude that space-time doesn't even exist as objective reality.
Instead, I like to think of space and time as rules of engagement between objects, or more
accurately between objects and observers. Nothing in the universe happens without a reason, and I
believe the only reason to have any rules of engagement in the first place is to prevent violations of
causality. The blue and red cones in the causal diagram separate the universe into regions that are
causally linked to the observer and regions that are not causally linked to the observer. That
requires having spatial and temporal rules of engagement.
The same rules of engagement may not apply equally to all scales or in all cases. In Bell's
experiment, no causal violations can occur between the two entangled particles. Therefore,
although Alice and Bob see two separate particles in their respective labs, as far as the two particles
were concerned, the rules of spatial and temporal separation do not apply to them. The delayed
choice and quantum eraser experiments are even more bizarre from a classical space-time
perspective because time appears to flow backward; i.e., future quantum states affect present
observations and present choices affect past quantum states. Again, the secret to understanding
these weird results is to know that there can be no causal violations involving entangled particles;
therefore, the temporal rules of engagement do not apply to them. Nature will gladly dispose of
these rules when they are no longer needed to preserve the laws of causality.
Here's an even more radical idea: Suppose the dimensionality of space itself is scale-dependent.
When two objects are relatively close together, they form a system having a small scale
where the spatial rules of engagement are flat and 3-dimensional, conforming to 3-dimensional
Newtonian physics. Now incorporating the idea of fractal space-time in Nottale's work, suppose
space becomes increasingly 2-dimensional at larger scales. An observer interacting with distant
objects would still see them distributed in three independent directions when looking at them
through a local 3-dimensional space; however, the observer's physical interactions with those
objects might take on more of a 2-dimensional aspect. This could explain the gravitational
anomalies astronomers see cropping up at scales slightly larger than the solar system.
In conclusion, we must let go of the paradigm of trying to portray the universe as a diorama model
that a 7th grader might use for a science fair project. Since the SCM based on an expanding, flat,
and 3-dimensional universe is not even logically possible, the universe must be very different from the kind of place described by our senses. Remember that our sense of space and time is very
much tied to observing our surroundings on a human-sized scale. This spatial sense developed
while we were evolving on the African savannah and it has served us very well for hundreds of
thousands of years; however, the rules of the savannah don't apply on the scale of the universe. If
cosmological-scale beings exist, it's likely that they would have developed a very different sense of
space and time and their interpretation of reality would be very different than ours.

Appendix L – Non-Causal and Causal Space
The ramifications of Bell's Theorem are vast and often underestimated, even among experts in the
physics community. As previously discussed in this and other essays I've written, violations of
Bell's inequality revealed by experiments carried out by Alain Aspect and others give us only two
choices: 1) a universe where local hidden variables don't exist and where quantum observations are
fundamentally indeterminate and uncaused, and 2) a universe where everything is predetermined.
The second choice is sometimes referred to as superdeterminism, and the resulting universe is
called a block universe (BU). In Manifesto of an Amateur Scientist, I stated that a BU rules out the
possibility of free will because free will would be an unaccounted-for external cause.
In a BU, every world line is a fixed chain of events. The mental concepts of past, present, and
future are merely points along the world line. No point along that line is special in any way. This is
exactly equivalent to Laplace's clockwork universe, where the present state of the universe uniquely
determines every state in the past or future. In such a universe, the amount of information remains
constant. Since the total number of bits remains constant, the state of the universe can only be
changed by rearranging those bits through linear and reversible processes. Superdeterminism
requires that all of the above statements are true.
Proving that we don't live in a superdeterministic universe is trivial. The line of attack is to show
that the amount of information in the universe does not remain fixed, but increases over time, which
requires irreversible processes. It's easy to demonstrate the presence of irreversibility in nature
simply by measuring increases in entropy. Also, if the universe is undergoing free expansion
(as opposed to ideal, controlled isentropic expansion), it must mean that total entropy is increasing.
It is very clear that we do not live in a BU. Therefore, in light of Bell's Theorem, only one choice
remains: We live in a universe where quantum observations are fundamentally indeterminate and
uncaused.
The time-dependent non-relativistic Schrödinger equation describes the time evolution of a quantum wave function:
iħ ∂Ψ(r,t)/∂t = ĤΨ(r,t)
Here, Ψ(r,t) is the state of a system in terms of space and time coordinates, r and t. Note that the values of Ψ(r,t) are complex numbers, having real and imaginary parts. The symbol Ĥ is the Hamiltonian operator, which is linear. Thus, the evolution of the wave function is determinate and reversible; given Ψ(r,t), we can calculate Ψ(r,t') with certainty, where t' can be either t + Δt or t − Δt. The odd thing is that if Ψ represents the wave function of a small particle, then whether or not that particle exists at (r,t) is far from certain. The best we can do is know the probability of finding that particle in a region around (r,t) by calculating the probability density function: p(r,t) = Ψ(r,t)Ψ*(r,t), where Ψ*(r,t) is the complex conjugate of Ψ(r,t) and the values of p(r,t) are real numbers.
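To make the distinction concrete, here is a toy calculation (an arbitrary, unnormalized amplitude of my own choosing, not a solution of any particular physical problem): the wave function itself is a complex number, and the only real-valued quantity we can form from it at a point is the probability density ΨΨ*.

import cmath

def psi(x, t, k1=2.0, k2=3.0, omega=1.0):
    # A toy superposition of two plane waves (unnormalized, purely illustrative).
    return cmath.exp(1j * (k1 * x - omega * t)) + cmath.exp(1j * (k2 * x - omega * t))

amplitude = psi(0.4, 1.0)
density = (amplitude * amplitude.conjugate()).real   # p = psi * conjugate(psi)

print(amplitude)    # a complex number: not directly observable
print(density)      # a real, non-negative number: the probability density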
The quantum world resembles the BU in one respect because the evolution of the wave function
itself is deterministic and time-reversible. But observations of quantum states are completely
indeterminate and appear to be non-causal, as demonstrated by violations of Bell's inequality. I'll
refer to this as non-causal space (NCS).
When things are subject to the laws of causation, space and time are required to place them in the
proper order. What about events in NCS? Do they have to be placed in any particular order? Not
really. So what purpose do space and time have in NCS? Not much. Remembering the EPR
Paradox and the experiments to test Bell's Inequality, we discovered that the entangled particles
didn't feel any separations in space at all, even though Alice and Bob observed them in two

places at the same time. And in delayed choice experiments, such as the quantum eraser, the
effects of quantum measurements appear to go backward in time, as if the entangled particles
themselves don't feel time.
When particle physicists calculate quantum interactions using Feynman diagrams, they draw space
and time axes on them. But this amounts to human beings trying to impose features of our world
on NCS. If space and time are interchanged by rotating a Feynman diagram by 90°, the interaction
it describes is completely valid, although the physical interpretations of ingoing and outgoing
particles may be different. Thus, the directions of space and time are somewhat arbitrary in NCS.60
In other words, space and time don't exist in NCS the same way they exist in our world; therefore,
NCS is boundless. Maybe it would be more accurate to say that there is a space/time symmetry in
NCS that makes it hard to tell space and time apart.
Since every transformation is reversible, there is no net increase in the amount of information in
NCS; i.e., no additional qubits are created. This is the so-called information conservation law. An
electron comes with only so many qubits of information, no more and no less.61
The properties of NCS are summarized:
Evolutions of states are deterministic and reversible, subject to linear Hamiltonian operators.
There are no accumulations of entropy or information.
Observations of quantum states are completely indeterminate and (apparently) uncaused.
There is a symmetry between space and time.
There is temporal and spatial boundlessness.
Most of us live in the world of cars, refrigerators, and wide-screen televisions. Those things obey
the laws of causation, requiring them to be properly ordered in space and time. I'll call this world
causal space (CS). Where does causation come from? I propose that causation emerges along with
entropy, space, and time as a result of irreversible chaotic processes.
If a singular object is removed from contact with the rest of the universe, it exists in timeless,
boundless NCS. For example, when an electron boils away from a hot metallic cathode into a
vacuum, it no longer exists as a point particle with a definite location in space or time, but instead as
a wave in boundless NCS. If this electron62 collides with a phosphor on a screen, it interacts with
the universe, revealing its position in space and time as a tiny pinpoint of light on the screen.
Detecting electrons in this manner involves a highly non-linear, chaotic, and irreversible
amplification process on the screen. Systems that are highly interactive tend to take on
characteristics we call classical. Classical systems obey laws, such as Newton's laws of
motion. But are they really laws? Maybe they're just handy mathematical formulas that describe
the consistent behavior of complex, highly-interactive systems in CS.
How do we distinguish a reversible process from an irreversible process? As a simple example, suppose the state of a system evolves through a linear transformation: ψ' = A + Bψ. If the final state of the system is known, then the initial state is found using simple algebra: ψ = (ψ' − A) / B.
Now let's change this to a non-linear transformation: ψ' = A + Bψ². If the final state of the system is known, then the initial state is obtained: ψ = ±√[(ψ' − A) / B]. Assuming the constants A and B are chosen so the quantity (ψ' − A) / B is positive, we still have a dilemma: ψ has two values, one positive and the other negative. We cannot compute unique states when time is reversed, so whereas the forward transformation ψ → ψ' is completely deterministic, it doesn't work in reverse.
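The same point can be put in a few lines of Python (the constants A and B below are arbitrary values of mine): the linear update can be undone uniquely, while the quadratic one leaves two candidate histories, so the original state cannot be recovered with certainty.

import math

A, B = 1.0, 2.0                                        # arbitrary illustrative constants

def forward_linear(psi):      return A + B * psi
def backward_linear(psi_p):   return (psi_p - A) / B           # one unique pre-image

def forward_quadratic(psi):   return A + B * psi ** 2
def backward_quadratic(psi_p):
    root = math.sqrt((psi_p - A) / B)
    return (+root, -root)                                       # two possible pre-images

print(backward_linear(forward_linear(0.7)))        # 0.7 -- the history is recovered
print(backward_quadratic(forward_quadratic(0.7)))  # (0.7, -0.7) -- but which one was it?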

60 Richard Feynman discovered that a positron is really an electron moving backward in time.
61 An electron doesn't accumulate history. It doesn't grow old and wrinkled, nor does its hair turn gray.
62 The term "this electron" is a fiction we use for convenience. In NCS, electrons lose their individuality.

When systems undergo reversible changes, bits of information are rearranged but the total number
of bits remains constant. The total number of bits actually increases with irreversible processes,
establishing a definite direction in time. According to the theories of relativity, space and time are
interconnected, but I believe they are fundamentally different things in CS.
The accumulation of information in CS creates history that leaves a permanent physical record. To
compare the present state of an object to its previous states, this permanent record must be
physically separated from the object itself in space. That is why space must emerge along with time
and history. In contrast, future events are completely hidden from the present63 because future
events lack records. This forms a temporal boundary of CS called the "now."
The properties of CS are summarized:
Evolutions of states are mostly chaotic, deterministic, and irreversible.
There is a general accumulation of entropy and information.
Observations of the states of systems are deterministic and obey the laws of causation.
The symmetry between space and time is broken, and an arrow of time emerges.
There is a temporal boundary called the now.
So apparently two distinct worlds make up our universe. It's pointless to ask where these worlds
intersect in space and time, because as I've shown, NCS is spaceless and timeless. Nevertheless,
these worlds seem to be connected.
A lot of physicists are really bothered by having to deal with two very different kinds of spaces, and
they keep complaining about how hard it is to draw a bright line between them.64 Personally, I'm
not bothered by that at all. Take the chemical compound C60 known as buckminsterfullerene. This
substance consists of 60 carbon atoms arranged in a sphere, called a buckyball, resembling a soccer
ball and having a definite geometry that occupies space in three dimensions. Furthermore, a
buckyball can have a history. If you replace one of the carbon-12 atoms in C60 with a carbon-14
atom, this changes the properties of the buckyball by making it radioactive, and you can locate this
change in time. Thus, it would appear a buckyball is a full-fledged classical object living in CS.
However, it is possible to perform a double-slit experiment using buckyballs. One of them can pass
through two slits simultaneously and form wave interference patterns just like electrons or
photons.65 Physicists will scratch their heads all day long trying to figure out what kind of an object
a buckyball really is. Eeny, meeny, miny, moe. Why can't we just accept the possibility that a
buckyball is a NCS-CS hybrid? What is this need to draw bright lines?
So how can actions be uncaused? I think this must involve something called free will. As we
already know, free will and the BU are incompatible, and we also know that the BU is fiction. But
just because free will might exist, does it? Some neuroscientists insist our innate sense of
consciousness and free will is a delusion. They say the brain is a machine, albeit a very complex one,
and thoughts are simply brainwave patterns that can only be subject to physical causes. But I have
a question to ask: How can you delude a non-conscious machine into believing it is conscious,
unless it's already conscious? Instead, I propose that consciousness is the boundless, irreducible
something that not only observes the brain's changing mental states, but can also influence them
non-causally within boundless NCS. Free will takes root in quantum indeterminacy.
63 Of course one can predict future events with limited certainty based on probable outcomes of current events. For
example, by observing a car speeding toward you while crossing the street, you can infer that you'll soon be in the
hospital or in the morgue.
64 This was part of the motivation for the BU. According to superdeterminism, there is no fundamental difference
between the quantum world and the classical world because everything is predetermined, so there is no longer any
reason to draw bright lines between them. Of course, we now know that superdeterminism is false.
65 Buckyballs are huge compared to electrons, so the spacings between the two slits have to be very small. Viruses, which are even larger than buckyballs, have been proposed as candidates for double-slit experiments.

Appendix M – Why Causal Space Has 1T + 3D Dimensions
We learned in Appendix L that there are two worlds, NCS and CS. Neutrinos, quarks, and electrons
belong to non-causal space, while cars, refrigerators, and wide-screen televisions belong to causal
space. As a result of chaotic, nonlinear processes, CS emerges from NCS as space-time, which is
necessary to carry out the laws of causality. Many physics books refer to this as four-dimensional
space-time, but are all four dimensions equivalent? No, they aren't. The reason has to do with
degrees of freedom, entropy and the second law of thermodynamics as we will soon see.
Causality involves objects ordered in causal chains like this: A → B → C → D → … The arrows
represent events that are at most only partially reversible in CS, so the arrows point in the direction
of time where entropy is increasing. A causal chain involves the passage of time, which forms the
first dimension. But there is more than just a single chain of events; there are many more, and
objects in one chain can also causally affect objects in another chain. So we can draw a bunch of
causal chains in parallel with arrows connecting them together, like this:
Aa → Ba → Ca → Da → …
   ↘↗    ↘↗    ↘↗
Ab → Bb → Cb → Db → …
   ↘↗    ↘↗    ↘↗
Ac → Bc → Cc → Dc → …
In the scheme above, Ba has two direct causes: Aa and Ab. Ca and Bc also have two direct causes
and Db has three. So causality forms a 2-dimensional array or web with time proceeding from left
to right. Things are also arranged in the vertical direction called space to provide non-temporal
separations between chains of events. Dimensions are headings and subheadings in a causal filing
system; objects are filed under a heading called time and a subheading called space. Cb is filed
under the time heading C and the space subheading b. Juan Maldacena is a famous quantum
physicist working at the Institute for Advanced Study in Princeton, NJ. He claims space is created
through quantum entanglement.66 He may be right, but my own take on this is that time and space
are simply a filing system, forming a causal web that links objects and events together in CS.
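For what it's worth, the filing-system metaphor can be spelled out as a toy data structure (entirely my own illustration): each event is filed under a time heading and a space subheading, and carries the list of events that directly cause it.

# A toy "causal filing system": keys are (time heading, space subheading),
# values are the direct causes of the event filed there.
causal_web = {
    ("A", "a"): [],                                    # an initial, uncaused event
    ("B", "a"): [("A", "a"), ("A", "b")],              # Ba has two direct causes
    ("D", "b"): [("C", "a"), ("C", "b"), ("C", "c")],  # Db has three
}

def direct_causes(time_heading, space_subheading):
    return causal_web.get((time_heading, space_subheading), [])

print(direct_causes("B", "a"))   # [('A', 'a'), ('A', 'b')]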
But let's get back to the earlier question of why space and time aren't equivalent. Well, it all has to
do with degrees of freedom. It turns out that we have complete (or almost complete) freedom to
move around in space at will.67 But we have very little freedom to move around in time. If I just sit
in a chair, I'm moving through time at the speed of light. I can slow down my time travel somewhat
by traveling through space, but I can't stop time travel or reverse it. I'm stuck in a runaway time
machine going into the future. So that's a major difference between time and space. Another key
difference involves symmetry. The space component of CS is pretty symmetrical, but time isn't. In
CS, entropy can only increase or stay the same over time but it can never decrease.
So now we know why space-time isn't really 4-dimensional, but rather one-plus-three dimensional,
or 1T+3D. But why isn't it 1T+2D instead, or even 1T+10D like string theorists say it is? Well, this
has to do with degrees of freedom, symmetry, and a woman named Emmy Noether. Noether was a
German-born mathematician extraordinaire, who made some very important contributions to
physics. She lived in a time when higher mathematics was pretty much a club just for boys (no girls
allowed). Nevertheless, Noether discovered a very important theorem that links the symmetry of
space with physical conservation laws, specifically the conservation of linear and angular
momentum. The universe, at least the CS part, is built upon a radically relational framework

66 As a quantum physicist, Maldacena tends to boil everything down to wave functions, particles and entanglement.
Remember, when the only tool you have is a hammer, everything looks like a nail.
67 Most of us don't have the freedom to get into Fort Knox and other high-security places.

where there can be no favored positions or directions. Everything must be defined in relation to
other things, and each observer has complete freedom to choose his own set of spatial coordinates.
According to Noether's theorem, the law of conservation of linear momentum is a direct result of being able to move around in space or to place the origin of spatial coordinates anywhere in space
without changing the laws of motion. The law of conservation of angular momentum is a direct
result of being able to rotate one's frame of reference at will without changing the laws of motion.
So in other words, we have complete freedom to roam around the universe and to turn around in
any direction without changing the laws of motion around us. There are no experiments we can
perform locally that can tell us anything about our location or which direction we're facing.
The figure below shows a 3-dimensional cube. If we line up the coordinate axes along the edges of
the cube, all six faces will lie in either the x-y plane, the y-z plane, or the x-z plane. The cube can
rotate in three independent, orthogonal directions defined by circular motions in three planes of
rotation colored blue, red and green respectively.

Rotation imparts angular momentum, L, on the cube. It is a conserved quantity that is expressed as a
vector having a magnitude and a direction perpendicular to the plane of rotation.
Plane of Rotation Angular Momentum
x-y z-direction (Lz)
y-z x-direction (Lx)
x-z y-direction (Ly)
Things seem to work out fine in 3-dimensional space, but why couldn't Noether's theorem work just
as well in 2-dimensional space? Well, in a 2-dimensional space there is only a single plane, x-y, so
all rotations must take place in that plane. The angular momentum is defined by a vector that points
in a direction perpendicular to the plane, but the trouble is there is no direction perpendicular to the
x-y plane; therefore, angular momentum cannot be defined by a vector in 2-dimensional space.
There are enough dimensions to allow traveling in a circle on a plane, but there aren't enough
dimensions to accommodate the vector quantity L.
What about a 4-dimensional space? Although it's impossible to draw objects in 4-dimensional
space, we can at least discuss a hypercube living in four dimensions (w, x, y, z). Here there are six
planes of rotation: w-x, w-y, w-z, x-y, x-z, and y-z. Rotation is possible in any of those planes, so
there should be six different directions where an angular momentum vector can point. Huh? There
are only four total dimensions, so there are either too many rotational degrees of freedom or too few
dimensions to accommodate all of them. It looks like four dimensions won't work either.
There is a simple formula for the number of rotational degrees of freedom, NR, based on the
number of dimensions, n, in space: NR = n(n − 1)/2.68 We saw that n=2 and n=4 dimensions

68 According to string theory, there are 10 spatial dimensions, so NR = 45. Yikes! That's way too many IMO.

weren't enough to accommodate all the directions where the L-vectors could point. So there seems
to be an overriding need to match the number of rotational degrees of freedom with the number of
dimensions: NR = n. Well folks, the only number that works is n = 3. That's why we live in a
1T+3D world: There is no other choice, thanks to Emmy Noether.
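Here is a quick way to check that claim numerically. The little Python sketch below (my own illustration, not something taken from a physics text) tabulates NR = n(n − 1)/2 for the first few values of n and flags the only place where it equals n:

    # Rotational degrees of freedom in n spatial dimensions: N_R = n(n - 1)/2.
    # The argument above requires N_R = n, so that each plane of rotation has
    # exactly one perpendicular direction for its L-vector to point in.
    for n in range(1, 11):
        N_R = n * (n - 1) // 2
        marker = "  <-- the only match" if N_R == n else ""
        print(f"n = {n:2d}   N_R = {N_R:2d}{marker}")

Running it shows that n = 2 gives only one rotational degree of freedom, n = 4 gives six, and only n = 3 gives a perfect match.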

Number 3 is the magic number of dimensions that allows a radically relational universe to exist; one
where everyone has complete freedom to choose whatever coordinate system he wants with no one
being the wiser. I haven't seen many physics books use that rationale to explain why the universe is
the way it is, so I could be way off base here. But it sounds about right to me.
But what about NCS? Well, things are a bit uncertain in that wavy world, so I would be a little
hesitant to state how many dimensions it has. But there appears to be very little reason to set up a
causal filing system in a world where there is no causality. In fact, it might even be considered
inappropriate and even rude to put things in causal order in NCS. But even so, we can find a couple
of precursors of CS in NCS, one of which is angular momentum.
We saw how rotation forms the basis of why there are three spatial dimensions. Lone elementary
particles living in NCS have an inherent property known as spin. Spin is an actual quantity of
angular momentum, just like our spinning cube has, and it comes in either odd or even multiples of
ℏ/2, depending on whether you're a fermion or a boson.69 The difference between a spinning
particle in NCS and a spinning cube in CS is that the directions of spin in NCS are completely
indeterminate. It is only through interaction with CS that these directions are found. Living in CS,
we get to choose the directions our instruments use to detect those particles, and the particles oblige
us with simple answers: +1 or −1. Given that spin exists in NCS and time doesn't, I might hazard a
guess as to what the total number of dimensions are in that world, and I'll write about that in a
future appendix. My guess is predicated on the fact that there should be spatial symmetry in NCS
and the property of spin. But without entropy, there is no direction of time and no physical motion
as in v = dx/dt. Physical motion only happens in CS. The reason why communication between an
entangled Bell pair of particles appears instantaneous is because the communication is happening in
NCS where there is no time delay. But communicating in NCS cannot involve exchange of any
classical information (bits); classical information is equivalent to entropy, and there is no entropy in
NCS, so they have to communicate in qubits instead.
Getting back to CS, I think it's a misconception to think of CS as a giant 3-dimensional box. We
saw in Appendix K that this primitive notion of the universe as a 3D diorama is seriously flawed.
Instead, I try to view surrounding space as a local 1T+3D bubble that follows me around as I
change positions relative to other stuff. The universe may look like it stretches out into a huge, flat
3-dimensional space forever, but it's just an illusion from living inside the bubble.
To sum things up concerning CS, I'm going to offer a few more details concerning why only three
69 Even the massless photon, gluon, and graviton all have spin. The Higgs boson is an exception: it has lots of mass
but no spin. Go figure. Maybe they call it the God Particle because everything else is spinning around it.

dimensions can satisfy the conservation of momentum law. Strictly speaking, the calculation of
angular momentum is not restricted to objects moving around axes in perfect circles. Angular
momentum can be computed for an object moving in any direction, at any distance from a reference
point. The distance and direction from the reference point to the moving object is a vector, r, and
the velocity of the object is a vector, v. For an object with a mass, m, the equation for angular
momentum involves multiplying the vectors r and v: L = m r × v.
I use the symbol × here to represent something called a cross product. The vectors r and v define
the plane of rotation, and L is a vector perpendicular to the plane in the axis of rotation. The
interesting thing about this is that mathematicians have proven that cross products can be defined
only in 3-dimensional or 7-dimensional spaces; no cross products exist for any other number of dimensions.70
So why couldn't Noether's theorem also work in 7-dimensional space? Well, let's see
There are seven orthogonal ri vectors and seven orthogonal vj vectors that form ri × vj = Lk cross
products in 7-dimensional space. The numbers (i, j, k) = 1, 2, 3, 4, 5, 6, 7 correspond to those seven
dimensions. The table below shows how the cross products are computed.
v1 v2 v3 v4 v5 v6 v7

r1 0 L3 -L2 L5 -L4 -L7 L6
r2 -L3 0 L1 L6 L7 -L4 -L5
r3 L2 -L1 0 L7 -L6 L5 -L4
r4 -L5 -L6 -L7 0 L1 L2 L3
r5 L4 -L7 L6 -L1 0 -L3 L2
r6 L7 L4 -L5 -L2 L3 0 -L1
r7 -L6 L5 L4 -L3 -L2 L1 0

There are 21 positive Lk vectors in this array, which equals the number of rotational degrees of
freedom using the formula NR = 7 × 6/2 = 21. If we restrict (i, j) ≤ 3, then we are back to our
familiar 3-dimensional space, shown as the area highlighted in yellow. Here, all Lk vectors are
within 3-dimensional space, k ≤ 3. For all other spaces in this table where (i, j) ≤ 6, the vectors L7
and −L7 pop up, which point nowhere. When (i, j) ≤ 7 things work out a little better because all Lk
vectors point in directions that are included in 7-dimensional space.71 The problem is that there are
three different pairs of orthogonal ri and vj vectors that result in each Lk vector:

L1 = r2 × v3 = r4 × v5 = r7 × v6
L2 = r3 × v1 = r4 × v6 = r5 × v7
L3 = r1 × v2 = r4 × v7 = r6 × v5
L4 = r5 × v1 = r6 × v2 = r7 × v3
L5 = r1 × v4 = r3 × v6 = r7 × v2
L6 = r1 × v7 = r2 × v4 = r5 × v3
L7 = r2 × v5 = r3 × v4 = r6 × v1

In three-dimensional space, the vector Lk uniquely determines ri and vj vectors that define a single
plane of rotation, but in seven-dimensional space, the planes of rotation are indeterminate. That's
why Emmy Noether's magic theorem won't work here.
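For readers who like to check tables by machine, here is a small Python sketch that simply transcribes the 7-dimensional cross-product table above (entries encoded as signed integers, so +3 stands for +L3) and confirms the two claims made about it: the table is antisymmetric like an ordinary cross product, each +Lk arises from three different planes in seven dimensions, and from exactly one plane in the highlighted 3-dimensional corner:

    from collections import Counter

    # The 7-D cross-product table from the text, encoded as signed integers:
    # entry +k means +Lk, -k means -Lk, 0 means the zero vector.
    table = [
        [ 0,  3, -2,  5, -4, -7,  6],   # r1 x (v1..v7)
        [-3,  0,  1,  6,  7, -4, -5],   # r2
        [ 2, -1,  0,  7, -6,  5, -4],   # r3
        [-5, -6, -7,  0,  1,  2,  3],   # r4
        [ 4, -7,  6, -1,  0, -3,  2],   # r5
        [ 7,  4, -5, -2,  3,  0, -1],   # r6
        [-6,  5,  4, -3, -2,  1,  0],   # r7
    ]

    # Antisymmetry: ri x vj must equal -(rj x vi), just like the ordinary cross product.
    assert all(table[i][j] == -table[j][i] for i in range(7) for j in range(7))

    # In the full 7-D table, each +Lk shows up in three different (ri, vj) planes...
    print(Counter(e for row in table for e in row if e > 0))

    # ...but in the 3-D corner (i, j <= 3), each +Lk appears in exactly one plane.
    print(Counter(table[i][j] for i in range(3) for j in range(3) if table[i][j] > 0))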

70 If you don't believe me, go talk to the mathematicians.


71 Cross products can only be defined when every resulting vector points in a direction contained within the space itself. This only works for 3 and 7 dimensions.

Appendix N – Why Non-Causal Space Has 1 + 2D Dimensions
In Appendix M, I said I would try to guess the number of dimensions in NCS, and there are several
clues that can help make this somewhat of an educated guess instead of a complete SWAG. First,
we know that there is no entropy in NCS, so the time dimension should be entirely absent. Nothing
actually happens in NCS in the same sense that things happen here in CS.
Next, we learned in Appendix M that in order for CS to be radically relational, there has to be
rotational symmetry, at least in the bubble that follows us around. According to Emmy Noether's
theorem, this results in the law of conservation of angular momentum. Angular momentum is
defined by a cross product of the r and v vectors that lie in a 2-dimensional plane, producing the
angular momentum vector, L, which is perpendicular to that plane. There is a strong suggestion
that angular momentum exists in NCS in the form of a property called spin. Every elementary
particle (with the exception of the Higgs boson) has this property. So we might jump to the
conclusion that there are three spatial dimensions in NCS also. But not so fast. The peculiar thing
about elementary particles is their spins don't seem to point in any consistent direction. When we
try to measure their spins using magnets or other devices, we only get an answer like up/down or
+1/−1, and never an answer like 46.72°. So spin doesn't look like a vector at all; it looks more like
an ordinary scalar quantity that has two values, ±kℏ/2. To me, this implies that there must be two
spatial dimensions in NCS in order to produce something akin to the cross product of two vectors
lying in a plane. But the spin vector behaves like a scalar, or possibly a pseudovector, that can
only point up or down.
So does that mean NCS has no time dimension and two space dimensions: 0T+2D? Well, not
quite. It's true that time is absent in NCS, but events happening in causal order in CS have to be
related to this space somehow. I think this can be done using an inverse Fourier transform. Any
time function can be converted into a frequency function using the Fourier transform and converted
back again with the inverse Fourier transform. For example, take the simple pulse function,
denoted rect(t) or Π(t), defined below.

Π(t) = 0 for t < −½
Π(t) = 1 for −½ ≤ t ≤ +½
Π(t) = 0 for t > +½

The Fourier transform of Π(t) is sin(πf)/(πf), also referred to as sinc(πf), where f is frequency in
hertz (Hz). This function is plotted below.

The quantity (2πf) can be replaced by ω, the frequency expressed in radians per second. Thus, the

Fourier transform of Π(t) is equal to sin(ω/2)/(ω/2). The Fourier transform, F(ω), is computed for
any time function f(t) using the following formula.

F(ω) = ∫ f(t) e^(−iωt) dt , integrated over the interval −∞ < t < +∞
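As a quick numerical check of the two formulas above, the following Python sketch integrates Π(t) e^(−iωt) by brute force over −½ ≤ t ≤ ½ and compares the result with sin(ω/2)/(ω/2); the sample frequencies are arbitrary choices of my own, purely for illustration:

    import numpy as np

    # Brute-force Riemann sum of F(ω) = ∫ Π(t) e^(-iωt) dt, where Π(t) = 1 on [-1/2, 1/2].
    t = np.linspace(-0.5, 0.5, 20001)
    dt = t[1] - t[0]

    for omega in (0.5, 1.0, 3.0, 10.0):
        F = np.sum(np.exp(-1j * omega * t)) * dt          # numerical transform
        expected = np.sin(omega / 2) / (omega / 2)        # sin(ω/2)/(ω/2)
        print(f"ω = {omega:5.1f}   numeric = {F.real:+.6f}   formula = {expected:+.6f}")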

In general, F(ω) is a complex number α + iβ. This simply means that F(ω) has both an amplitude
|F(ω)| = √(α² + β²) and a phase angle, φ = tan⁻¹(β/α). The argument of F(ω) requires one
dimension, ω, and the amplitude and phase of F(ω) require two additional dimensions, α and β.
Therefore, NCS has 1 + 2D dimensions as shown in the figure below.

The colored planes can contain vectors that generate the angular momentum property, or spin. Now
the spin vector finally has somewhere to point, i.e., in the ω direction. I think that's appropriate
because spin is usually associated with frequency: the greater the frequency, the faster the spin.
There is one important difference between the (ω, α, β) dimensions of NCS and the (x, y, z)
dimensions of CS: There is an absolute origin at ω = 0. This origin cannot arbitrarily shift because
although the Fourier transform is linear, it is not shift-invariant with respect to frequency.
F(ω) can be thought of as a vector field within the α-β planes that are stacked on top of each other
in the ω direction. The F(ω) field is shown as red arrows in the right-hand part of the figure above.
Of course, these arrows would have varying lengths corresponding to |F(ω)| as well as changing
directions. Changing the slope dφ/dω shifts f(t) in time, but adding a constant to all the phase
angles in the α-β planes doesn't change f(t), so there are no preferred angles or directions for the
α and β axes. The α and β dimensions shouldn't be thought of as space in the usual sense; however,
the transformation (α, β) → (x, y, z) adds a third dimension, so CS may actually be a 3D projection
of a 2D hologram in NCS. This idea is intriguing, because the whole is represented in every part
and vice versa in a hologram.72 This might also explain spooky action at a distance that has
troubled physicists since the beginning of quantum mechanics.73 Transforming a discrete particle
in CS into a wavelike entity in NCS is mathematically similar to the Fourier transform of the
impulse function δ(t): F{δ(t)} = 1 with φ constant. Whereas a particle occupies a specific spot in
CS, its transform extends throughout the entire (ω, α, β) space in NCS.
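The claim that tilting the phase angles linearly with frequency shifts f(t) in time is easy to demonstrate with a discrete Fourier transform. The sketch below uses NumPy's FFT purely as a stand-in for the continuous transform; the pulse shape, the 2-second shift, and the grid sizes are arbitrary choices of mine for illustration only:

    import numpy as np

    N, dt = 256, 0.05
    t = np.arange(N) * dt
    f = np.exp(-((t - 3.0) / 0.3) ** 2)            # a bump centered at t = 3

    F = np.fft.fft(f)
    omega = 2 * np.pi * np.fft.fftfreq(N, d=dt)
    tau = 2.0                                       # desired time shift, seconds
    F_tilted = F * np.exp(-1j * omega * tau)        # add a linear phase slope dφ/dω = -τ

    f_shifted = np.fft.ifft(F_tilted).real
    print("original peak at t =", t[np.argmax(f)])          # ~3.0
    print("shifted  peak at t =", t[np.argmax(f_shifted)])  # ~5.0

The amplitudes |F(ω)| are untouched; only the phase slope changes, yet the bump arrives two seconds later in time.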
There is one final comment to be made on the subject of spin. The cross product r × v can't exist in
NCS, because v requires time, but a spin vector pointing in the ω direction could be defined
using the curl of a vector, F, that lies in the α-β plane: ∇ × F = ∂Fβ/∂α − ∂Fα/∂β.
Since F is defined in the α-β plane only, ∇ × F is essentially just a scalar quantity. The curl of a vector

72 This statement is very anti-reductionist. But after all, that's the whole point of this essay, right?
73 Well, it troubles some physicists but not everybody. It didn't seem to trouble Niels Bohr at all.

field can be visualized using the red arrows in the preceding figure. Follow a counter-clockwise
path around some point on the plane. If the arrows align with the path, ∇ × F points in the +ω
direction at that point.74 If the arrows oppose the path, ∇ × F is in the −ω direction.
It may seem like I'm trying to change or reinvent quantum mechanics here, which I'm certainly
not.75 Far be it from me to try to improve upon what was laid out almost a century ago by giants
like Niels Bohr, Max Born, Louis de Broglie, Arthur Compton, Paul Dirac, Werner Heisenberg,
David Hilbert, John von Neumann, Wolfgang Pauli, Max Planck, and of course Erwin Schrödinger.
It's just that a place I invented called NCS has some peculiarities that appear to be similar to what is
described by quantum mechanics. One particular thing is bothersome. When the inverse Fourier
transform converts a static F(ω) into a sequence of events in the time dimension, those events are
defined for all time. This looks suspiciously like the block universe, which I hope I was able to
totally debunk in Appendix L. So how do I reconcile the fact that F(ω) cannot change (there is no
time dimension in NCS) with the fact that events cannot be locked in or predetermined in CS?
A similar question has plagued quantum physicists for a very long time. According to QM,
everything in the universe, from electrons to cats to galaxies to the universe itself, is nothing
other than wave functions that are linearly superimposed on top of each other. Furthermore, those
wave functions evolve over time deterministically. That is one of the reasons people embrace the
block universe model. The way around this is to invoke something called the collapse of the wave
function, which occurs when an observer makes a measurement of a quantum system. At that
point, the old wave function disappears and a brand new one appears in its place. Physicists don't
like the idea of wave function collapse, because the terms observer and measurement are ill-defined.
What exactly is a measurement and who is qualified to make it? Where is the boundary between
the quantum system and the observer?
The many worlds hypothesis avoids the wave function collapse entirely. The wave function goes on
and on forever. A quantum observation is not just choosing one of many possible eigenvalues of
a wave function. It's choosing all of them at the same time.76 In other words, the universe splits
into a different actual world for each eigenvalue. It's sort of like a block universe on steroids, with
each world generated by the same deterministic wave function. To a lone observer it may seem as
if there is a single universe with a series of random choices, but in fact the observer also splits.
John Bell discussed a different way of initiating wave function collapse in his book Speakable and
Unspeakable in Quantum Mechanics. He described something he called introducing stochasticity
into the Schrödinger wave equation, or in simple terms, forcing the wave function to make random
choices. Bell said this can be triggered by (guess what?) just adding a small nonlinear term to the
wave equation! For small quantum systems, the wave functions superimpose more or less the way
they do in classical QM. But for larger systems, which Bell defined as having more than 100
atoms, give or take, the nonlinearity builds up quickly and chaotically, causing the wave function to
collapse automatically. The length of time it takes for a system's wave function to collapse is
inversely proportional to the size and complexity of the system.
The fact that chaos (nonlinearity) plays a central role in creating order is the point I've been trying
to get across throughout this entire essay. From the standpoint of NCS, the F(ω) doesn't evolve in
ordinary time like a wavefunction in QM. F(ω) is static, covering an infinite range of frequencies
and spread out over the α-β planes like a hologram. A single collapse, triggered by nonlinearity,
would change the entire F(ω) over all frequencies and throughout all α-β planes instantaneously.

74 You could say that ω has a positive frequency, but you shouldn't think ω implies rotational motion, because motion
requires change, which requires time, and time is absent in NCS. Think of ω as just another dimension.
75 Or it may seem like I'm speaking complete gibberish.
76 Or more accurately, choosing each of them separately at separate times in separate worlds.

Appendix O – Fresh Bits, Stale Bits and Einstein
Way back in Appendix H, the question was raised of whether there truly are random processes in
nature. Albert Einstein objected to the Copenhagen interpretation of quantum physics, which said
quantum measurements are indeterminate. Einstein's reputation has deteriorated over the years
since he died. Today, some think of him as a doddering old man with unkempt hair, stuck in 19th
century classical thinking while wasting the final third of his career trying to solve the unsolvable
problem of merging gravitation with quantum physics. They say Einstein just couldn't get
quantum mechanics, so he kept shuffling around in shoes without wearing any socks muttering,
God doesn't play dice with the world. Well, I have come to a slightly different conclusion.
Einstein most certainly did get quantum mechanics; in fact, he helped invent it, which is why they
gave him a Nobel Prize.77 It's just that the idea of uncaused events didn't sit well with him.
Gottfried Wilhelm Leibniz coined the phrase The Principle of Sufficient Reason, and Einstein
pretty much bought into his idea completely. Basically, PSR says that for every fact F, there must
be an explanation why F is the case. So if we're measuring the spin of an electron and it comes out
as +1, then it's no good to say, Well it just is what it is. If you remember, I did a thought
experiment in Appendix H where an electron contained a tiny computer that spit out random bits
that gave its ±1 spin states, only those bits weren't truly random but pseudo-random bits generated
by a hidden chaotic computer algorithm. The question was whether a string of pseudo-random bits
like those could be distinguished from a truly random string of bits coming from an electron spin
detector. Some mathematicians claim they could tell the difference, which I seriously doubt.78
Well, now I came up with a slightly different thought experiment that should put this whole thing to
rest. We could argue forever over the merits of a pseudo-random number generator versus a truly
random number generator. So let's put a tiny hard disk inside the electron that is filled with ones
and zeros previously generated using bona fide random-number hardware, like a Zener diode shot
noise device.79 The ones and zeros are then read out one at a time from the hard disk as electron
spin directions. Now the question arises: Are those bits on the hard disk truly random and
indistinguishable from bits generated in real time using a Zener shot noise device? Most people
would say the two sets of numbers are completely indistinguishable.80 Well, here's a surprise: In
terms of entropy, there is a difference, although it's a very subtle one.
When a system falls into a particular quantum eigenstate, it essentially means its wave function has
collapsed and one or more classical bits of information were added to the universe. A slightly
modified version of Shannon's definition of information is I = log2 Ω, where Ω is the number of
equally-probable states of a system. Boltzmann's definition of entropy is S = kB loge W, where W is
the number of microstates of a system. Notice the similarity? So even if you don't buy into my
claim that entropy = information, you'd have to agree that when entropy goes up, information has to
go up with it. Every time an electron's spin is measured, there is one more bit of information added
to the universe. So any quantum mechanical process where a wave function collapses will increase
the information and entropy of the system by some amount as fresh bits bubble up from NCS.
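To put a number on that bookkeeping, here is a minimal sketch (my own back-of-the-envelope illustration, using only the two formulas quoted above) of how much entropy one fresh bit represents when Ω doubles from 1 to 2:

    import math

    k_B = 1.380649e-23          # Boltzmann's constant, J/K

    # One fresh bit: Ω doubles, so I = log2(2) = 1 bit and S increases by k_B * ln(2).
    delta_S_per_bit = k_B * math.log(2)
    print(f"entropy per fresh bit = {delta_S_per_bit:.3e} J/K")

    # Going the other way: one joule-per-kelvin of entropy is an enormous pile of bits.
    print(f"bits per J/K of entropy = {1.0 / delta_S_per_bit:.3e}")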
Bits that were already saved on a hard disk and then read out are stale bits. Stale bits do not

77 Well okay, technically they gave it to him for discovering the photoelectric effect. But he explained the
photoelectric effect by introducing quantum mechanics.
78 Unless they looked inside the electron and found a tiny computer. Papers in mathematics journals claiming there
are observable differences between random and pseudo-random number sequences seem to depend on having
some knowledge about the mechanics behind those numbers, which of course is cheating.
79 Zener shot noise arises from a quantum mechanical process, so everyone (even mathematicians) would agree it
generates truly random numbers.
80 They even include mathematicians who claim they can tell the difference between truly random bits and those
coming from a pseudo-random computer algorithm.

increase information and entropy of a system because they were already baked into it. It's no
wonder poor Einstein had trouble wrestling with this question: His mind was too subtle compared
to other people's. So I think I've found the answer to the puzzle of stochasticity that troubled
Einstein: Stochastic processes increase entropy whereas deterministic processes do not.
We saw in Appendix L that we can rule out the possibility that we live in a totally deterministic,
Laplacian, block universe. It is simply because when we measure entropy in any closed system, it
almost always increases with time and it never decreases. So every quantum wave-function
collapse is almost certainly an irreversible, stochastic process that generates fresh bits. This
conclusion would not make a believer in PSR like Einstein happy, but I think he'd accept this if it
were explained to him in these terms.81
This brings us back to Schrödinger's cat.82
This thought experiment has generated tons of debate since Schrödinger first introduced it in 1935.83
Articles about Schrödinger's cat still keep appearing in Scientific American and other journals from
time to time, adding new complexity to the problem, like one I read not long ago where an observer
observes the observer who observes the observer who observes the cat and so on. Schrödinger's cat
is also one of the drivers behind Hugh Everett's Many Worlds Interpretation (MWI) embraced by
many leading physicists. MWI completely avoids the question of what happens to the wave
function when it collapses. When the observer interacts with the cat, both alive and dead versions
of the cat exist in separate universes while the observer's consciousness splits in two.
Schrödinger's thought experiment is based on the QM premise that if the cat is isolated from the
observer, then she is suspended in an alive/dead superposition of eigenstates until she interacts with
the observer, causing her wave function to collapse into one of those states. John Wheeler and
others have argued that consciousness is both the necessary and sufficient condition in order for a
wave function to collapse. In Schrödinger's experiment, presumably the observer's consciousness
does the trick but not the cat's.
I discovered a very important lesson about doing thought experiments: It's extremely important to
determine if the thought experiment contains any hidden assumptions that conflict with known
principles or physical laws. Schrödinger's cat experiment depends on the assumption that Fluffy
does not interact with anything outside the box. There could be soundproofing that muffles the
caterwauling of Fluffy in her death throes, a lead lining to block any radiation that could trigger the
decay of a radioisotope or electromagnetic waves that could set off a Geiger counter, etc., etc. The
fallacy of this experiment is that it is impossible to isolate any large system having a Geiger
counter + vial of cyanide + cat from the rest of the universe.84 So it may be possible to completely
isolate individual electrons or photons, at least for a little while, but it simply isn't possible to isolate
large, classical objects. If nothing else, Fluffy is still connected to the Earth by gravity, which
cannot be blocked by any amount of soundproofing or lead. So the fallacy behind this thought
experiment is that its design is fundamentally undoable in practice.
A similar problem exists in the EPR thought experiment. In it, there is some God-like observer
(presumably Einstein) peering down on Alice and Bob and watching both of them performing their
experiments simultaneously. This ignores a fundamental consequence of special relativity, namely
the relativity of simultaneity. I suspect the AMPS thought experiment, which supposedly uncovered
deep paradoxes in nature, will turn out to be just as fundamentally flawed. It just doesn't make
sense to me how macroscopic Alice (who is comprised of stale bits) can fall into a black hole while
81 We need to cut Einstein some slack. He lived in an age when information theory was essentially nonexistent. It was
Shannon who blazed the trail in that area, and Einstein was way past his prime by then.
82 Huh?
83 It seems that 1935 was a banner year for QM thought experiments, including the famous EPR paradox.
84 Incidentally, in light of NCS-CS duality, everything is connected to everything else.

doing unitary operations on qubits still entangled with Hawking radiation that left the black hole
billions of years ago. I know this is just a thought experiment, but I would like to ask its authors
exactly how Alice is supposed to do this. Of course they'll say it is difficult in practice but doable
in principle.85 But I still want to know how. To my knowledge, nobody has yet patented a fully-
functional qubit descrambling device, so I'd be inclined to say it isn't just difficult in practice but
impossible. Even in thought experiments, things still have to be physically possible.
In his book Speakable and Unspeakable in Quantum Mechanics, John Bell floated the idea of
introducing a small nonlinear term into the Schrödinger wave equation, which would stimulate
wave function collapse and produce stochasticity. I kind of like this idea, but I still don't like the
way space and time are shoehorned into the wave equation. According to this model, while Fluffy
is in the box, she continuously forms new wave functions containing alive/dead cat qubits. Those
wave functions collapse almost immediately, and Bell even provides a formula that spells out how
long that takes.86 This continuous collapse raises the possibility of running out of room. If a whole
new cat materializes during every Planck-time interval, that's an awful lot of stochasticity in my
opinion. Imagine the number of fresh bits it takes every second to generate all those Fluffies.
In Appendix C, we computed the number of microstates, W, in a pound of steam based on
Boltzmann's entropy formula. That number worked out to be 10 followed by over 10²⁶ zeros. To
get some idea of how truly big that number is, let's try to write down all those zeros on a sheet of
paper. You can fit about 80 zeros on one line and 120 lines on a page for a total of 9,600 zeros per
page. If we write them on both sides of the paper, we can fit 19,200 zeros on one sheet. Dividing
that number into 10²⁶ zeros gives us 5.2 × 10²¹ sheets of paper, which is still a pretty big number.
There are 500 sheets of paper in a ream, so this represents about 10¹⁹ reams of paper. A ream is
about 2″ thick, which is 3.2 × 10⁻⁵ of a mile. So if we stack those reams of paper on top of each
other, the stack would stretch 3.2 × 10¹⁴ miles. A light year is 5.9 × 10¹² miles, so a stack of sheets
with 10²⁶ zeros on them would be roughly 54 light-years thick, and you'd have to make more than
six round trips to the nearest star, Proxima Centauri, to cover that distance. Yet that huge number
somehow fits into one pound of steam, which only occupies 142 ft³ of space. So it seems there is
plenty of room in the universe to store very large numbers.
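The arithmetic in the previous paragraph is easy to re-run; the short sketch below simply repeats the same steps with the same round numbers and lands in the mid-fifties of light-years, consistent with the rough figure quoted above:

    zeros            = 1e26                    # zeros to be written down
    zeros_per_sheet  = 80 * 120 * 2            # 80 per line, 120 lines per side, both sides
    sheets           = zeros / zeros_per_sheet # ~5.2e21 sheets
    reams            = sheets / 500            # ~1e19 reams
    ream_miles       = 2 / (12 * 5280)         # a 2-inch ream expressed in miles (~3.2e-5)
    stack_miles      = reams * ream_miles      # ~3.3e14 miles
    light_year_miles = 5.9e12

    print(f"sheets = {sheets:.2e}, reams = {reams:.2e}")
    print(f"stack = {stack_miles:.2e} miles = {stack_miles / light_year_miles:.0f} light-years")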
Still, the universe keeps expanding. Or at least CS does. In my opinion, although CS is finite, NCS
is infinite because of the mathematical properties of the transformations that link these two spaces
together. The CS has at least one temporal boundary in the present87 and the Fourier transform of a
finite time interval covers an infinite number of frequencies, meaning that the dimension in NCS
must be infinite. I suspect the size of the other two dimensions of NCS may be infinite as well,
although I don't know the properties of (, ) (x, y, z) transformations well enough to feel
confident in making that claim. At any rate, whereas the number of stale bits in CS increases more
or less linearly with time, there are already an infinite number of qubits in NCS.
Expanding space has the thermodynamic properties of adiabatic free expansion. Adiabatic
expansion simply means that there is no transfer of heat between the system and the exterior. That's
obviously true for the universe as a system, because it has no physical exterior. Free expansion
means that there is nothing restraining the expansion. The only restraint to universal expansion is
the gravitational attraction of matter, but it doesn't appear there's enough matter to fully restrain the
expansion. Entropy increases under adiabatic free expansion, so expanding space must be
generating entropy in the form of fresh bits bubbling up from NCS and turning into stale bits.

85 I'm reminded of a saying by the late, great Yogi Berra. He said, In theory, practice and theory are the same. In
practice, they're different. Of course according to Yogi, he didn't say half the things he said.
86 It doesn't take very long. My guess is that the average duration of Fluffy's wave functions is on the order of one
Planck time.
87 There could be a second temporal boundary known as the Big Bang.

Appendix P – Why Matter?
In Appendix M we learned why causation must operate in a 1T + 3D backdrop, and although space
and time are closely connected, they are in fact different things. One dimension of time is just
enough, but space must have exactly three dimensions no more and no less to provide rotational
symmetry. This shouldn't rule out having higher dimensions as long as these are not misconstrued
as being spatial dimensions.88
The question arises whether there is such a thing as space devoid of anything. When I try to picture
myself all alone in an empty universe without anything else around me, I simply can't. I can
imagine stretching my arms and legs out into empty space, but it totally creeps me out to think
that there is nothing at all beyond my reach. It seems that in order for space to have any real
explanation, it must contain matter, and lots of it.89
Referring to Emmy Noether's theorem (discussed in Appendix M, above) you'll remember that
space has two very important symmetries. Displacement symmetry says that the time to soft boil an
egg in London, England will be the same as it is in Sydney, Australia. Rotational symmetry says
that the time to soft boil an egg is the same no matter which direction the stove is facing. These
symmetries give rise to the conservation laws for linear and angular momentum.
Displacement Symmetry → Law of Conservation of Linear Momentum: m v = constant
Rotational Symmetry → Law of Conservation of Angular Momentum: m r × v = constant
Matter (expressed as mass, m , above) is an indispensable part of the two conservation laws and
hence it is indispensable to the concept of space. In fact, it wouldn't surprise me if matter actually
creates a 1T+3D bubble around itself, and what we call space-time (aka the universe) is the
stitching together of individual mass-produced bubbles (pun intended). Interestingly, combining
two fundamental constants from general relativity, G and c, with the fundamental constant from
quantum mechanics, ℏ, results in a generic set of units that measure time, space, and matter:
tp = √(ℏG/c⁵)
lp = √(ℏG/c³)
mp = √(ℏc/G)
The fact that time, space and matter are linked this way through Planck's constant provides a strong
hint of an underlying unity between gravity and quantum mechanics.
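Plugging the measured constants into those three expressions gives the familiar Planck scales; the sketch below is nothing more than that evaluation in SI units:

    import math

    hbar = 1.054571817e-34   # reduced Planck constant, J*s
    G    = 6.67430e-11       # Newton's gravitational constant, m^3 kg^-1 s^-2
    c    = 2.99792458e8      # speed of light, m/s

    t_p = math.sqrt(hbar * G / c**5)   # Planck time   (~5.4e-44 s)
    l_p = math.sqrt(hbar * G / c**3)   # Planck length (~1.6e-35 m)
    m_p = math.sqrt(hbar * c / G)      # Planck mass   (~2.2e-8 kg)

    print(f"t_p = {t_p:.3e} s,  l_p = {l_p:.3e} m,  m_p = {m_p:.3e} kg")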
By the way, the idea that matter creates space isn't entirely new. Ernst Mach (1838-1916) argued
that the large-scale distribution of matter in the universe produces inertial frames, known as Mach's
Principle.90 Julian Barbour proposed a conjecture that space is an accounting system that only
keeps track of the distances between objects as a data compression technique; the impression that
space is a full-blown three-dimensional coordinate system is just an illusion.91 In any case, it's clear
to me that space at least the CS part of it absolutely depends on the presence of matter in it.

88 Albert Einstein briefly toyed with the idea of a fifth dimension, which he used to unify electromagnetism with
gravity, but he abandoned it because he was bothered by the fact we can't see this extra dimension. String
theorists feel the need to curl up their six extra dimensions into strange shapes in order to hide them.
89 According to quantum mechanics an empty vacuum isn't really empty, but it's filled with pairs of virtual particles
and maybe a Higgs field. But knowing that space is filled with invisible stuff isn't very comforting.
90 A person alone in the universe wouldn't be able to feel the sensation of acceleration or spinning. Einstein was
heavily influenced by Mach's Principle; rotating masses produce frame dragging in general relativity.
91 I can agree with this, but I find it hard to swallow Barbour's other conjecture that time is also an illusion. As a side
note, Barbour turns John Wheeler's it from bit principle upside down. His paper Bit from It relies on Claude
Shannon's groundbreaking work. Read Bit from It here: http://www.platonia.com/bit_from_it.pdf

Appendix Q – A False Start from a False Vacuum
We live in a world of change where nothing is permanent. Even the atoms in your body are being
continuously exchanged for new atoms. In her delightful book Trespassing on Einstein's Lawn,
Amanda Gefter and her father go on an amateur scientific quest to find something anything that
they could agree upon as being invariant and independent of context or relationships with other
things. I'm convinced that our universe is radically relativistic, meaning that everything can only be
defined in relation to other things. There is literally nothing that can be relied upon as being
permanent; the so-called laws of nature are just mathematical descriptions of how things react in
relation to other things, and even those descriptions are subject to change. The one Principle that
appears to be immutable is that systems always change in ways that maximize total degrees of freedom.
Something in the human psyche demands an explanation for the Origin of all things, and from the
earliest times, humankind has dreamed up all sorts of creation myths. This question is so important
that the very first verse of the Bible starts out with "In the beginning..." Unfortunately, I believe
the human need for an explanation of an Origin is misplaced.
We saw in earlier parts of this essay that the physical reality we're familiar with consists of Causal
Space where changes are placed in causal order in time and across space. Change produces a
permanent record, or history, adding information/entropy to the universe. Time is how we label
change throughout history. Our internal sense of time is an awareness of an expanding history
stored in our memories. The problem with a universal Origin, or any lesser origin for that matter, is
that everything taking place in CS is built upon some other thing prior to it, making it impossible to
say at what point any particular thing actually begins to exist.
Classical cosmology traces the evolution of an expanding universe back in time while applying the
laws of physics, which are presumed to be forever fixed. At some point, the universe converges to a
singularity, a region of space having infinite curvature and infinite density, where it is said the
laws of physics no longer apply. This is the so-called Big Bang Theory, dating back to 1931 when
Georges Lemaître first proposed a universe expanding from a singularity. The idea caught on in the
late 1960s and early 1970s when Roger Penrose and Stephen Hawking published work confirming
Lemaître's idea. After that, the idea that the universe had an actual Beginning became an accepted
fact in the scientific community.
The idea of a singularity is a contradiction, however, since the universe worked its way back to a
singularity because of the laws of physics. This would mean that the laws of physics were different
while the universe was a singularity and then they spontaneously changed. Lately, string theorists
have come to the rescue (or so they claim) by smudging the point-like singularity into a stringy
region where the laws of string theory take over from general relativity. But whether we're using
classical physics or string theory, did the universe have an actual Beginning? And if so, how?
The problem with this kind of analysis is that it tries to rewind the history of the universe while
ignoring irreversibility. Although the field equations of general relativity are indeed fully time-
reversible, most natural processes aren't. The total amount of information/entropy today is greater
than the total amount of information/entropy yesterday, and if we could go backward through time,
we could imagine arriving at a state where the total amount of information/entropy, S, is zero. That
would be a universe existing in a single unitary state, Ω = 1.92 But would that be the true
Beginning? Could the universe be said to actually exist in such a state? I guess it depends upon
your definition of existence.93 Obviously, if time measures change, time couldn't exist in any

92 Note there is no such thing as negative entropy, so the smallest possible value of entropy is zero. The formula for
entropy is S = kB log Ω, where Ω is the total number of possible states. Thus, zero entropy means Ω = 1.
93 Or as one former US president said, it depends on what the meaning of the word is is.

normal sense in an Ω = 1 universe. In other words, Ω = 1 is an eternal, timeless state without a true
Beginning.
The scientific quest for a universe with a true Beginning leads to some pretty bizarre ideas. The
prevailing opinion is the so-called laws of nature were frozen at the Beginning, raising the obvious
question of what a universe with a different set of laws would be like. Thus, it is rather tempting to
imagine a plethora of such universes, which must exist in some sense outside our own space
and time, since they are completely unobservable from inside this universe. What does it even
mean to exist outside our space and time? Again, it depends on what the meaning of the word is is.
Although an exterior region having its own separate space and time would allow other universes to
pop into their own existences at various points in time, exactly how would that be facilitated?
One idea that has gained quite a bit of traction lately is that an entire universe can pop into existence
from something called a false vacuum. That idea is explored in Zeeya Merali's book A Big Bang in
a Little Room, where she proposes that some super-intelligent being possessing the necessary tools
actually created our universe out of practically nothing by creating a false vacuum. Furthermore,
Merali thinks the super-intelligent being who created the universe could have intentionally sent a
message to us digitally encoded in the cosmic background radiation. Sure, why not? The title of
the book alludes to the possibility that mankind could create our own baby universes in the near
future right here on Earth, given the right tools.94
In case you're wondering what a false vacuum is, here's the explanation as I understand it.
According to quantum field theory (QFT), space is filled with an inflaton scalar field95 having
energy peaks and valleys, as depicted below.

According to orthodox cosmology, the inflaton field strength was zero when the universe began,
giving it a relatively high energy value and causing inflation to kick into high gear. The universe
quickly settled down into an energy valley where inflation stopped, marked F in the picture
above, meaning false vacuum. The reason it's false is because there is an even lower energy
valley nearby, a true vacuum marked T. Cosmologists believe we're in a false vacuum that's
currently in a meta-stable state, meaning that we could find ourselves quantum tunneling into the next
valley at any moment along a path marked by the dashed line. This quantum tunneling is also
referred to as a false-vacuum decay, which would trigger inflation all over again, ripping everything
to shreds.
Merali proposed creating a tiny false vacuum in a laboratory, which might kick off inflation in some

94 Naturally, the tools required for creating baby universes would include something akin to a scaled-up version of the
Large Hadron Collider (LHC), converting it into the LHS (Large Hadron Supercollider). I'm sure CERN scientists
could justify the extra 100,000,000,000 or so needed to turn baby universes into reality.
95 A scalar field doesn't point in any particular direction, unlike a vector field such as the electromagnetic field, which
is directional. Naturally, there is a particle, called an inflaton, associated with this field. AFAIK, no inflaton
particles have been detected in any LHC experiments as yet.

extra dimensions and create a brand-new baby universe. The thing that worries me about this idea
is that if we're already living in a false vacuum, her experiment could trigger a false-vacuum decay
in our universe instead of in some extra dimensions, ripping us all into shreds.96
In the late 1970s, William Lane Craig popularized a line of reasoning known as the Kalām
Cosmological Argument (KCA)97, based on the fundamental premise that the universe began to
exist. Part A of the KCA is:
1. Everything that begins to exist has a cause for its beginning;
2. The universe began to exist; therefore,
3. The universe has a cause.
Of course, starting from that point, in Part B you can assign almost anything to be a possible Cause;
e.g., Jehovah, Brahma, a random quantum fluctuation, a false-vacuum decay, etc. The problem with
the KCA is that it's based on faulty reasoning hiding behind what seems to be irrefutable logic in the
form of a classic syllogism like this: All men are mortals; Aristotle is a man; therefore, Aristotle is a
mortal. This syllogism is easily illustrated using set theory:
Aristotle ∈ {Men}
{Men} ⊂ {Mortals}
Aristotle ∈ {Mortals}
Two serious problems arise when we try to represent the KCA using sets as in the above illustration
of the Aristotle syllogism:
The universe ∈ {Things that begin to exist}
{Things that begin to exist} ⊂ {Things that have causes}
The universe ∈ {Things that have causes}
First, can the universe be a member of another set? Since the universe is presumably the set of all
sets, then the universe would have to be a member of itself in order to be a member of any set. This
raises Russell's paradox: Suppose U is the set of all sets that are not members of themselves. If U
is not a member of itself, then the very definition of U dictates that it must be included in U. But if
U is a member of itself, then this contradicts the definition of U as the set of all sets that are not
members of themselves. So it seems the set of all sets cannot be a member of any set.
Second, when we examine the set of things that begin to exist in order to confirm they all do have
causes, we discover it's an empty set. All things undergo changes and are transformed into different
forms, but we can't identify any real things that begin to exist. Thus, the KCA is an argument that
seems quite air-tight on the surface but turns out to be gibberish. It is quite possible that the
universe simply didn't have a Beginning and it doesn't require a Cause.
In order for there to be some T minus X before the universe began, the universe would have to
reside in some external temporal frame of reference where things undergo change. We can only
conclude there is no such time frame, unless we accept the premise of an evolving multiverse
sprouting baby universes. But this only serves to push the question of a beginning back to a time
before the first baby universe was born, and from there all the way back to the Beginning of the
multiverse itself. So I'm afraid this premise just leads to turtles all the way down.

96 I personally feel very uncomfortable living in a meta-stable universe. I'd much rather live in a completely stable one.
However, on a positive note, Joseph Lykken calculated that a false vacuum like ours should decay only once every
10¹⁰⁰ years or so. Unless, of course, someone like Zeeya Merali deliberately screws around with it.
97 Kalām is the shortened version of the Arabic expression ʿIlm al-Kalām, meaning science of discourse, which was
used to justify the tenets of Islam. In its modern form, the Kalām Cosmological Argument is being used to justify
the tenets of Christian theology instead of Islam, although it could also be used to justify almost anything else.

Appendix R – Beyond the Point of No Return
The Standard Cosmological Model is based on an extrapolation from the current state of the
universe, as if it's actually possible to define such a state, back to the Beginning. By applying the
known laws of physics in reverse, the Beginning is supposedly a singularity, where the known
laws of physics no longer apply. As stated earlier in Appendix Q, I consider this kind of approach
(applying a set of laws to arrive at a result that's completely inconsistent with those laws) a bit
illogical. Then I began to wonder if it is even possible to do this in principle, let alone in practice.
As previously discussed, the universe is filled with both chaos and stochasticity. Although chaotic
systems are deterministic in principle, it is a well-established fact that it is simply not possible in
practice to determine previous states of a chaotic system from its present state. It's because of the
many-to-one relationship between the present state and all possible states in the past. An example
of this is found in a cellular automaton that can only go in the forward direction. At the atomic
scale, stochasticity reigns because quantum events are both indeterminate and irreversible.
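A toy version of that forward-only cellular automaton makes the many-to-one point concrete. The sketch below is my own illustration (the XOR rule and the ring size are arbitrary choices); it evolves every possible state of a small ring of cells one step forward and counts how many different pasts collapse onto the same present:

    from collections import Counter
    from itertools import product

    N = 8  # a small ring of 8 cells with periodic boundaries

    def step(state):
        # Each cell becomes the XOR (modulo-2 sum) of its two neighbors.
        return tuple(state[(i - 1) % N] ^ state[(i + 1) % N] for i in range(N))

    successors = Counter(step(s) for s in product((0, 1), repeat=N))

    print(f"{2**N} possible past states collapse onto {len(successors)} present states")
    print(f"some present states have {max(successors.values())} different pasts -> no unique rewind")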
It would appear that the universe combines lots of chaos with lots of stochasticity, ruling out any
possibility of reverse engineering it back to an earlier state with any degree of precision. We need
to find out just how irreversible the universe really is. Are we at a point in the evolution of the
cosmos where it's just impossible to determine how it all started? Are we beyond the point of no
return for doing this sort of reverse analysis? In other words, is cosmology dead?
It seems to be a well-established fact that the universe is expanding, but what does that actually
mean? According to Mach's principle, the idea of a completely empty universe is not meaningful.
In other words, you can't examine the state of the universe without examining the stuff inside it.
Let's review a little thermodynamics. Imagine there are two cylinders filled with gas that is allowed
to expand, as depicted below. It is assumed that both expansion processes are adiabatic, meaning
that no heat energy can enter or escape the cylinders.

Consider the cylinder on the left: A piston is slowly withdrawn, allowing the gas to expand while
maintaining a state of thermal equilibrium throughout. Gas pressure pushes the piston toward the
right, doing work, W = F Δx. This kind of adiabatic expansion is isentropic, meaning entropy does
not change. Pushing the piston slowly back into the cylinder does work on the gas isentropically,
bringing the gas back to its original thermodynamic state. The whole process is reversible.
Consider the cylinder on the right: If the piston is withdrawn faster than the speed of sound, the gas
molecules lose contact with it and can't exert any pressure on it, so no work is done. The gas is not
in a state of equilibrium during the expansion, and when it finally does settle down afterward into
equilibrium, we find its entropy has increased; if it is an ideal gas, its temperature would remain
constant. This is called free, or Joule, expansion and it is irreversible. Moving the piston to the

left compresses the gas, adding energy to it and raising its temperature and pressure above the
original starting point. There is no possible way to return this gas back to its initial thermodynamic
state unless heat is allowed to escape from the cylinder.
The increase in entropy during irreversible, free (Joule) expansion is:
ΔS = n R loge (Vf / Vi) , where n is the number of moles of gas, R is the gas constant, and Vi
and Vf are the initial and final gas volumes, respectively.
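As a worked example of that formula, suppose one mole of an ideal gas freely doubles its volume; a minimal sketch of the arithmetic:

    import math

    n = 1.0            # moles of gas
    R = 8.314          # gas constant, J/(mol*K)
    Vf_over_Vi = 2.0   # the gas freely expands to twice its volume

    delta_S = n * R * math.log(Vf_over_Vi)
    print(f"ΔS = {delta_S:.2f} J/K")   # ~5.76 J/K created, with no way to undo it adiabatically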
Now the question is whether the universe is undergoing isentropic expansion or joule expansion.
It's kind of a stretch to think of the universe as being a single isolated system like a gas cylinder
because of the relativity of simultaneity; i.e., it's simply not possible to take a snapshot of the entire
universe and determine its properties at a single moment in time. However, it is useful to think of
smaller volumes of space that can be treated as individual subsystems and ask whether their
contents (assumed to be a gas) expand freely or not. The obvious answer is that there is nothing
for the gas to push on, so there is only joule expansion with entropy increasing (S > 0) by quite a
lot, making the entire process irreversible. This means the initial state of the universe cannot be
determined in any detail simply by applying the known laws of physics in reverse to the universe.
The obvious answer may not be entirely correct, however. When gravity enters the picture, the
expansion is doing work against gravity, whose negative energy becomes less negative as a result.
Click on this link Entropy and Gravity, for an interesting paper by Øyvind Grøn, of Oslo and
Akershus University College of Applied Sciences, explaining this connection in more detail.
Although there is some uncertainty (or maybe a lack of consensus) concerning the question of
whether the expanding universe itself creates entropy, there are plenty of other very large-scale
processes within smaller volumes of the universe that clearly are irreversible, which must increase
the overall entropy of the universe. Grøn cites the example of gravitational condensation that forms
galaxies, noting that Paul Davies found that gravitational clumping introduces an apparent
paradoxical increase in entropy. Quoting from Grøn's paper:98
At first sight it appears paradoxical that an element of the cosmological fluid can start out in a
quasi-equilibrium condition, and yet still increase in entropy at a later epoch. He further notes
that this paradox is resolved because a self-gravitating system has no equilibrium configuration.
I disagree with the second sentence. Appendix S of my essay Is Science Solving the Reality Riddle?
makes the claim that matter condensing into galaxies, supposedly attracted by so-called dark matter,
is nothing more than the self-gravitational collapse of very cold and very sparse clouds of molecular
hydrogen gas. Using nothing more than Newton's law of gravitation and the ideal gas law, I show
that such a cloud ends up as a stable sphere (or an ellipsoid in the case of a spinning cloud) in an
equilibrium configuration where the inward gravitational attraction is perfectly balanced by the
back pressure of gas within the cloud, like air inside a soap bubble.
At any rate, as the cloud is compressed into a proto-galaxy, its temperature rises above the ambient
temperature of the universe.99 Like any other warm body, the gas irreversibly radiates heat into the
environment, making the process of collapse irreversible. So whether you use Grøn's model of
gravitational collapse, or mine, you get the same result: Large-scale processes in the universe such
as galaxy formation are irreversible. The consequence for cosmology is simple. Because of chaos
and entropy, it is not possible to apply the laws of physics in reverse to the present state of the
universe (if it can be said that such a state even exists) in order to back calculate its initial state,
a.k.a. the Big Bang. The universe is already well beyond the point where that is possible.

98 Entropy 2012, 14, 2456-2477; doi: 10.3390/e14122456


99 The current ambient temperature runs about 2.7K.

Appendix S – Why Gravity?
Physics would be so much easier if there were no gravity. You don't need it at all with particle
physics, although you do need mass and inertia, and Appendix P explained why. Gravity has
always been a kind of odd duck, refusing to be quantized and merged with quantum field theory,
although some very brilliant physicists have put a lot of time and effort trying to force fit the two of
them together. There is a hypothetical particle, called the graviton, with zero mass and a spin of 2,
carrying the force of gravity throughout the cosmos. The standard model of particle physics
includes many different particles, including the all-powerful Higgs boson that bestows the property
of mass, but unfortunately the graviton isn't among them.100 It seems that gravity is just a big
nuisance, so why have it? The Principle of Sufficient Reason provides the answer.
There are a couple of ways to view gravity from a classical perspective. One way is to imagine it as
a force field radiating from matter. An object moving perpendicular to the Earth's gravitational field
will follow a curved trajectory. If you apply Newton's laws of motion and assume the force field is
constant in the vertical direction, the trajectory is a parabolic arc. But if you assume the object is
pulled toward the center of the Earth by a force field that is inversely proportional to the square of
the distance from the center, the trajectory is an elliptical orbit around the center of the Earth.101
Another way to look at gravity is to imagine that it bends space-time. All objects travel at the speed
of light along the longest s̶h̶o̶r̶t̶e̶s̶t̶ possible path through space-time. The time shown on a clock
measures its travel through space-time, so a clock in a gravitational field will follow a trajectory
that produces the s̶h̶o̶r̶t̶e̶s̶t̶ longest elapsed time recorded by the clock. When the s̶h̶o̶r̶t̶e̶s̶t̶ longest-
elapsed-time rule is applied to the planet Mercury, it revolves around the Sun in an elliptical orbit
whose perihelion will slowly precess toward the direction of Mercury's orbital motion.102
Now let's try to find the sufficient reason for the existence of this force field, space-time curvature,
or whatever else it might be. I claim the universe initially had zero entropy, meaning it was in a
unitary state in which everything (all particles and forces) was the same nondescript entity.
Paradoxically, this zero-entropy state has an enormous kinetic energy with a temperature reaching
the highest value possible, known as the Planck temperature of around 10³² K.
Recall my post-reductionist law: Every change maximizes the total degrees of freedom of the
universe. According to the law, the universe had to find some way to maximize its total degrees
of freedom, and it could only do this by shedding kinetic energy to reduce its temperature. One
possible way is through expansion, but as we saw in Appendix R, internal energy and temperature
don't decrease during expansion unless work is done externally. The problem is that there is
nothing external to the universe itself, so this is where gravitation comes in.
If there were an attractive force acting between points of mass-energy, and the force were to
decrease with distance, that force would produce negative energy. If expansion occurs, the
distances between points of mass-energy increase and attractive forces between them decrease.
Gravity is that attractive force. Expansion makes the negative energy of gravity less negative; in
essence, work is done on gravity, allowing the kinetic energy of the universe and its temperature to
decrease. This causes the universe to irreversibly transition away from the unitary state, which
creates entropy. The reason why gravity exists is to allow the universe to shed kinetic energy and
cool through expansion in order to maximize its total degrees of freedom.

100 String theorists claim that gravitons pop out of their string-theory equations, but I have no way of verifying if
that's true.
101 A parabolic curve and an ellipse are practically the same over short distances.
102 The observed precession is small, only 43" of arc per century, but it baffled astronomers and physicists alike for a
long time. Albert Einstein derived the exact value of the precession by calculating Mercury's orbit as the ~~shortest~~
longest distance through space-time and taking into account the bending of space-time due to the Sun's gravity.

Addendum to Appendix S
When you make mistakes, it's important to correct them and not cover them up. That's what I'm
doing here. In Appendix S, above, I made the mistake of saying that objects follow paths that
minimize proper times. In fact, objects follow paths, called geodesics, that maximize proper times,
which are equal to the distances traveled through space-time. I corrected those mistakes and
highlighted them using strike-through characters.
In addition to correcting this error, I thought it would be appropriate to add an addendum that
expands on the idea of gravitation as an expression of degrees of freedom. In the later stages of
writing my essay Relativity in Easy Steps, I started treating gravitation as a geodesic instead of a
force when I came to realize that gravitation is co-dependent; the acceleration of an object toward a
much more massive object depends on the space-time curvature from the larger mass and the state
of relative motion between them. In some circumstances, their relative states of motion can even
produce gravitational accelerations making it appear as though repulsive gravity is acting on
them. I then envisioned a much more fundamental principle at work than space-time curvature as
described by the Einstein field equations.
Building on the conjecture of maximizing degrees of freedom, it can be shown that space-time
curvature is inversely proportional to a property that can only be described as entropy, or
conversely, entropy is proportional to the square of the radius of curvature. To an observer who is
floating freely through space-time without any external forces (except for the influence of gravity),
space-time appears to be perfectly flat, due to the principle that space-time must be observed to be
symmetrical with no measurable preferred direction, location or time. However, the fact that two
objects in close proximity have geodesics that converge means that the universe is not perfectly flat,
but it has an inherent, cosmological curvature that is real but unobservable. When neighboring
objects follow the longest, flattest routes through space-time, their geodesics converge. This is
Nature's way of flattening the universe and increasing entropy, step by step. What a clock is actually
measuring (proper time) is both the distance traveled through space-time and the amount of entropy
being added to the universe.
Einstein asked, "What is time?" repeatedly throughout his life. He wasn't being overly modest; he
really couldn't fathom what time truly is. It does no good to define one second as the time it takes
for light to travel 300,000,000 meters because now you are stuck with having to define one meter as
the distance light travels in 1/300,000,000th of a second; it's a circular definition. The best answer
Einstein could give was that time is what is measured by a clock. But there has to be a countable
quantity of something that is being measured. What is that countable quantity? A water-flow meter
measures the volume of water passing through it, so it's essentially counting water molecules. A kilowatt-
hour meter measures energy consumed, so it's essentially counting quanta of electrical energy.
Either of these meters could be used as a clock if the flow rate or energy-usage rate were held
constant. An hourglass counts grains of sand falling through it. In my opinion, when a clock is
specifically designed to measure time, it's actually measuring incremental amounts of entropy, i.e.,
bits added to the universe as space-time unfurls around it. It's essentially a bit counter.
Expanding on the idea of co-dependent, contingent geodesics, the most effective way to add
entropy to the universe on cosmological scales is through free expansion, which rapidly reduces
curvature and causes geodesics of far-away objects to diverge instead of converging. The same
force bringing objects together can also drive them apart. I believe there are cross-over points
where gravitation changes its behavior. Erik Verlinde supports this possibility in his ground-
breaking paper Emergent Gravity and the Dark Universe. Verlinde proposes that when the
acceleration of stars revolving around galaxies is reduced to a0 = c H0, where H0 is the Hubble
constant, gravitational force shifts from an inverse-square law to an inverse-distance law.
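To get a rough feel for the scale of Verlinde's crossover, here is a back-of-the-envelope sketch (my own illustration with an assumed galaxy mass of 10¹¹ solar masses; it is not Verlinde's calculation) that evaluates a0 = c H0 and the radius at which ordinary Newtonian acceleration falls to that value.

```python
import math

# Back-of-the-envelope illustration (not Verlinde's own calculation): evaluate the
# crossover acceleration a0 = c * H0 and the radius at which ordinary Newtonian
# acceleration around an assumed galaxy mass of 1e11 suns drops to a0.
c  = 2.998e8                      # speed of light, m/s
H0 = 70 * 1000 / 3.086e22         # Hubble constant, ~70 km/s/Mpc, converted to 1/s
G  = 6.674e-11                    # gravitational constant
M  = 1.0e11 * 1.989e30            # assumed galaxy mass, kg

a0 = c * H0                                   # ~6.8e-10 m/s^2
r_cross = math.sqrt(G * M / a0)               # radius where G*M/r^2 falls to a0

kpc = 3.086e19                                # metres per kiloparsec
print(f"a0      = {a0:.2e} m/s^2")
print(f"r_cross = {r_cross / kpc:.1f} kpc")   # roughly 4-5 kpc for these assumed numbers
```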

Appendix T Time Unfurls
The concept of time has caused great befuddlement since the time of Zeno and his arrow paradox.
Numerous books have been written on the topic of time103 and the scientific consensus about time
seems to fall into one of two categories: 1) Since we don't know what time is, let's just pretend it
doesn't exist, or 2) we'll just tack time onto Euclidean space as a fourth dimension, calling it a
block universe where time can go either forward or backward.104
As I've written in previous appendices and in other essays, time is inextricably linked to a definite,
countable, quantifiable property known as entropy/information. An in-depth study of general
relativity reveals something very odd about the nature of something called space-time. First, let's
go back in time to somewhere around the year 1907. That year, Hermann Minkowski, who was one
of Albert Einstein's instructors at Eidgenössische Polytechnikum,105 introduced the concept of the
invariant space-time quantity as a more elegant way of explaining the peculiarities of the new
special theory of relativity. Einstein fully embraced this new concept and extended it even further
in his general theory of relativity. The basic premise is that every material object in the universe,
from plants to people to planets, is traveling through space-time at the speed of light. The
exception to that rule is light itself, which is permanently stuck in space-time. The fact that light
always travels at the same speed for all observers is explained by the fact that all observers are
traveling at the same speed relative to light. We can also look at space-time as being the one-and-
only universal frame of reference, with light serving as an auxiliary reference frame.
Most of us technical types can appreciate what traveling through space-time means in a
mathematical sense, but what does space-time travel mean in a physical sense, if anything? The
first clue is revealed by examining one of the forms of Einstein's field equations of general relativity
expressed as tensors.
Rμν − ½ R gμν = (8π G / c⁴) Tμν
The terms on the left side of the equation describe the curvature of space-time; they are sometimes
combined into a tensor Gμν. The tensor on the right side of the equation, Tμν, is the so-called
stress/energy tensor, which has elements that correspond to pressure, shear, and torsion stresses.106
Distributions of mass and energy in the universe introduce the stresses described by Tμν, which
result in strains (distortions) of space-time described by Gμν. Thus, the Einstein field equations are
equivalent to stress/strain equations of an elastic solid. The solid in this case has both spatial and
temporal dimensions, although this should not be misconstrued as a solid block in four-dimensional
Euclidean space. Riemannian geometry is required in order to make sense of this.
Combining the speed-of-light travel through space-time with the twists and turns imposed by the
Einstein field equations, objects will trace paths called geodesics when left to their own devices.107
Geodesics are the longest possible paths through space-time, which are also the paths having the
least curvature. Curvature is expressed in Gμν as the parameter 1/R², where R is expressed in units
of length. If you think in terms of a sphere, its curvature would be inversely proportional to the
103 You can see this for yourself by going to the Amazon book store and doing a search on the phrase "physics of time."
104 The direct corollary of the nonsensical block universe model is that our sense of time moving forward is just a
stubborn illusion created inside the brain. John Gribbin made that point clear in The Illusion of Time.
105 Einstein was in the E.P. Class of '00. Minkowski is famously quoted as describing his former student as a lazy
dog who never bothered about mathematics at all.
106 The torsion elements in the tensor are produced by spinning masses, causing unequal off-diagonal elements. The
torsion effects are generally ignored, mainly because it makes solving the equations easier. But ignoring torsion
also renders those solutions incomplete, at least in my humble opinion.
107 External forces prevent objects from following their geodesic paths. We think we feel a downward force of
gravity on the soles of our feet while we stand on the ground, but we actually are feeling upward forces from the
ground that are keeping us from traveling along geodesics that intersect with the center of the Earth.

square of the sphere's radius, R², or its surface area 4π R². In the case of space-time curvature,
however, we shouldn't think of R as being an actual physical radius, such as the radius of the
known universe, because I hope by this time you realize measuring the size of the universe is
impossible even in principle by stretching a cosmic tape measure across it. Furthermore, space-
time curvature proportional to 1/R² doesn't only define curvature of space, it defines curvature of
time as well. It's best just to think of R as a parameter that happens to come with a unit of length.
So how does any of this solve the riddle of time? Well, Einstein said that time is what is measured
by a clock, but a clock also measures the distance traveled through Minkowski space-time. So if
geodesics are the longest (and straightest) paths an object (like a clock) can take in its travels
through space-time, then time must have something to do with straightness, or a lack of curvature.
A second clue comes, again, from another interpretation of the Einstein field equations. Thanu
Padmanabhan discovered that the equations can be expressed in a form equivalent to the equation
for entropy: T dS − dE = P dV. This equation translates into: temperature, T, times the change in
entropy, dS, minus the change in internal energy, dE, equals pressure, P, times the change in
volume, dV. So not only does space-time behave like an elastic solid exhibiting stress/strain
properties, but it also behaves like a solid having thermodynamic properties like temperature and
entropy. This led Padmanabhan to propose that space-time is comprised of atoms. Although I'm
not sure I'd go as far as believing in an atomic theory of space-time, I do have a strong hunch that
there is a physical meaning to the entropy of space-time: More entropy = Less curvature.
So what does a clock really record? I suspect it's recording an experience of a universe unfolding
from a highly-curved low-entropy state into a maximally-flat high-entropy state. The universe has
an intrinsic curvature that cannot be physically measured by an observer following a geodesic
path.108 The flattening of this intrinsic curvature produces local increases in entropy, which clocks
register as time. Time really amounts to the unfurling of information along with increased
complexity, requiring the formation of a permanent record encoded in the present moment.
We need to dispel the notion that the second law of thermodynamics is just some kind of statistical
illusion that results from taking averages of many random events. Nothing could be farther from
the truth; the second law is the most fundamental and perhaps the only law of the universe.
Every other law, including gravity, is derived from it as Erik Verlinde's work on entropic gravitation
is beginning to reveal.
There is an excellent paper written by Lee Smolin and Marina Cortês, The Universe as a Process
of Unique Events. The paper makes a number of important statements that coincide with the
conclusions I made in Appendix B and Appendix R above; namely, the direction of time is real and
is absolutely unchangeable. Quoting from their paper,
In this paper [we take] the diametrically opposite view [that time is non-existent]. We develop the hypothesis that time is
both fundamental and irreversible, as opposed to reversible and emergent. We'll argue that the irreversible
passage of time must be incorporated in fundamental physics to enable progress in our current understanding.
The true laws of physics may evolve in time and depend on a distinction between the past, present and future, a
distinction which is absent in the standard block universe perspective. (Emphasis added).

The gist of their argument is that the present defines a finite set of all possible future states. Once
an isolated system leaves a particular state, a record is written of it as being in the past, and all
possible pathways back to that prior state vanish. Previous states simply do not exist in the finite
set of possible future states. This is a very radical conclusion but it's also the correct one.
108 The reason is straightforward. If intrinsic curvature could be measured locally, it would point in a direction away
from the center of curvature. Unfortunately, the universe doesn't have a center so its curvature cannot be
measured directly. Physicists refer to this absence of measurable curvature as the flatness problem, which really
isn't a problem at all. However, the unfolding and flattening of the universe is revealed to an observer as an
increase in entropy (equivalent to distance traveled through space-time) and it is measured as time using a clock.

Appendix U The Wheeler-Mandelbrot Connection
I classify John Wheeler and Benoit Mandelbrot as true geniuses who both possessed the gift of
being able to think outside the box. Wheeler was always sort of an enigma to me, and his ideas
sometimes seemed slightly deranged (but in a good way). I think he often came across as cryptic
mainly because of the parsimonious way in which he expressed himself, using only the minimum
number of words necessary to get his point across. A while ago, I read a transcript of an interview
where Wheeler tried to explain his Big U cosmology, illustrated in the sketch below.

The Big U cosmology is centered on the idea that consciousness creates physical reality. The
letter U diagrammed above represents the physical universe. At the left side is the skinny end,
representing the Big Bang. The shape of the U widens, representing universal expansion,
culminating in a giant eyeball. The eyeball is of course human consciousness, which looks back
or conceptualizes the origin of the universe, and this very conceptualization brings all of physical
reality into being. The interviewer was having some difficulty understanding how human
consciousness could retroactively bring an entire physical universe into existence, with that universe ultimately
producing an organism possessing conscious awareness. To be honest, I was also having a tough
time understanding the apparent illogic of effects producing causes. But instead of going into a
long, rambling dissertation, Wheeler replied in his typical fashion, "Think of it as a continuous
feedback loop." Those few words made the light bulb in my brain light up.
You see, as a retired engineer, I'm pretty familiar with feedback loops. In order for a feedback loop
to work properly, it must operate continuously and instantaneously. Putting any kind of time delay
into a feedback loop leads to disaster (imagine trying to drive a fast car with a 10-second time delay
built into its steering mechanism). So I think what Wheeler meant by his analogy is that
consciousness has somehow been steering the universe along a particular path throughout time. I
believe this is the pure essence of Wheelerism.
Recalling the work of Benoit Mandelbrot, another out-of-the-box thinker, I then understood there is
a definite connection between Wheeler's Big U cosmology and Mandelbrot sets. As you recall from
previous sections of this essay, a Mandelbrot set is generated through a process involving a decision
loop. A number generator, z → z² + c, generates a series of complex numbers, z, starting with a
complex seed, c, and an initial value z = 0. Then a simple binary question is asked: Does this
series of numbers converge to a set of finite values? If the answer is yes then c is part of the
Mandelbrot set, and if the answer is no then c is not part of the Mandelbrot set. For example,
using a seed number c = −1 produces the series −1, 0, −1, 0 and so on. This series converges to a
set of finite numbers, −1 and 0, so the seed number c = −1 is in the Mandelbrot set. Using a seed
number c = +1 produces the series: 1, 2, 5, 26, 677, 458330, 2.1007 × 10¹¹, and so on. It should

be pretty clear that this series will not converge to a set of finite numbers, so the seed number c = +1
is not in the Mandelbrot set. Every number in the complex number plane can be tested in a similar
fashion to determine whether or not it's included in the Mandelbrot set.
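The decision loop described above is easy to sketch in a few lines of Python. The iteration cap and escape radius below are practical choices of mine; the true test requires infinitely many iterations, so this is only an approximation of the yes/no question.

```python
# Sketch of the decision loop. The iteration cap and escape radius are practical
# choices; the true membership test requires infinitely many iterations, so this
# is only an approximation of the yes/no question.
def in_mandelbrot(c, max_iter=200, escape=2.0):
    z = 0 + 0j
    for _ in range(max_iter):
        z = z * z + c             # the generator z -> z^2 + c
        if abs(z) > escape:       # the series is running away: exclude c
            return False
    return True                   # still bounded after many loops: include c

print(in_mandelbrot(-1 + 0j))     # True  (the series cycles between -1 and 0)
print(in_mandelbrot(+1 + 0j))     # False (the series blows up: 1, 2, 5, 26, ...)
```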
According to my own slight modification to Wheeler's cosmology, Mandelbrot's chaotic number
generator is replaced by myriad events generated stochastically in NCS. The question asked is,
Will this event produce a physical result that is observable by a conscious entity? If the answer is
yes then the event is included in the set of possible physical events. If the answer is no then the
event is not included in the set of possible physical events. In this scheme, consciousness is the
filter that allows stochastic NCS events to appear as physical outcomes in the CS universe. A
physical universe made only from bricks observable by consciously-aware entities is a universe
that sooner or later is very likely to make bodies for those entities using those bricks. I believe
this answers the fine-tuning question bugging cosmologists and theologians alike. A universe
constructed this way may not be perfectly tuned for intelligent life, but the tuning is more than
adequate for allowing conscious entities to evolve bodies at some point in time.
A universe built from the ground up like a Mandelbrot set should share similar features with it, such
as fractality, which the universe does seem to possess. There is one very important difference that
must be noted. The number-series generator employed in building a Mandelbrot set is chaotic but it
is not stochastic. All Mandelbrot sets are identical: A set generated on Tuesday is the same as the
set generated on Monday. Here, the outcome or design is certain and static. On the other hand,
the underlying stochasticity of NCS (as revealed by quantum effects) is 100% uncertain, so the final
design is by no means certain. This brings us back to Claude Shannon, another true genius.
Shannon defined information (which he liked to call entropy) as follows:
S = − Σ pr log₂ pr , where pr is the probability of a particular outcome, r.
Consider a pair of dice. The probabilities of the 11 possible dice rolls are tabulated below.
r 2 3 4 5 6 7 8 9 10 11 12
pr 1/36 2/36 3/36 4/36 5/36 6/36 5/36 4/36 3/36 2/36 1/36
The negative sum of these probabilities times their base-2 logarithms equals 3.274 bits (we should
not be distracted at all by the fact that this results in fractional bits). If all n = 11 possible rolls
had equal probability, pr = 1/n, the information would be S = log₂(n) = 3.459 bits.109 The
increased entropy calculated for the set of equiprobable outcomes means those dice have less
certainty than a normal pair of dice. Uncertainty produces entropy/information. So now we can
answer this question: How much information is in a loaded pair of dice that always rolls a seven?
The answer, of course, is zero. Absolute certainty eliminates information. A Mandelbrot set may
seem infinitely complex from its appearance but it actually contains little or no information
according to the formal definition, because the steps in building it are completely deterministic.
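Here is a short sketch that reproduces the dice figures quoted above straight from Shannon's formula:

```python
from math import log2

# Reproduce the dice figures quoted above using S = -sum(p * log2(p)).
p_fair = [n / 36 for n in (1, 2, 3, 4, 5, 6, 5, 4, 3, 2, 1)]   # the 11 possible totals
S_fair   = -sum(p * log2(p) for p in p_fair)
S_equal  = log2(11)          # 11 equally probable outcomes
S_loaded = 0.0               # dice that always roll seven: no uncertainty, no information

print(f"fair dice:     {S_fair:.3f} bits")    # ~3.274
print(f"equiprobable:  {S_equal:.3f} bits")   # ~3.459
print(f"always seven:  {S_loaded:.3f} bits")  # 0.000
```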
I think you'll see that if our universe really craves information, then the only way it can satisfy
that craving is by generating genuine randomness, which it prodigiously carries out through its
quantum foundation. Coupling uncertainty with a feedback loop endowed with conscious-
awareness filtering makes it something quite similar to the (more or less) habitable universe we now
enjoy. Uncertainty is the element that distinguishes an information-rich, dynamic Wheeler universe
from a static, deterministic Mandelbrot universe. Time emerges directly from uncertainty and
information, so time must be a distinct and very fundamental property.

109 You can immediately see the expressions S = log₂(n) and S = kB Ln Ω are identical for the cases where all
outcomes, n, or all thermodynamic states, Ω, are equally probable. This is why I keep harping on the fact that
there's no real difference between Shannon's information and Boltzmann's entropy.

Appendix V What Time Is It?
Appendix P introduced Mach's principle, named after the great Austrian physicist Ernst Mach. The
gist of Mach's principle is that the concept of space is absolutely dependent upon relationships
between objects. We discovered that time has something to do with the unfurling of space-time and
the production of information, which is also dependent on such relationships. Appendix M
introduced Emmy Noether and her famous theorems, which tied the three conservation laws of
energy, momentum, and angular momentum to temporal and spatial symmetries. The three
dimensions of space are a direct result of rotational symmetry which demands that the three degrees
of rotational freedom, as defined by cross products, are uniquely aligned with three degrees of
translational freedom.110 So it seems space has no choice but to be three-dimensional as long as it
wants to be symmetric with respect to rotation, which leads to the conservation of angular
momentum, and by inference the conservation of linear momentum.
The question this raises in my mind is this: Granting that rotational symmetry and the conservation
of angular momentum are immutable, is this also true about translational and temporal symmetry?
In their book The Singular Universe and the Reality of Time, Roberto Unger and Lee Smolin argue
that there is no reason to assume it is. They point out that the universe is evolving and has a history.
The so-called universal laws are just mathematical equations describing how systems behave, but
those behaviors could be modified as the state of the universe changes. This makes perfect sense if
the universe is unfolding in such a way that information (entropy) is always being maximized. In
that case universal laws may have to change along with the universe itself, and temporal
symmetry would no longer hold up. Because rotational symmetry is primeval, it would still hold up
regardless; i.e., the universe would still look more or less the same in every angular direction from
an observer.111 But the consequences of temporal asymmetry are huge: Since locations vast
distances apart are literally in different cosmological time zones, should we believe the laws of
Nature are constant across those distances? In other words, since universal time in the Newtonian
sense does not exist, all observers could experience different versions of reality.
Those differences are negligible in commonplace situations. A physics experiment done in Paris
will have very nearly the same results as the same experiment done in New York, because temporal
and translational symmetries do apply over small distances. You can even extend those symmetries
across our solar system. But there's no reason to believe that universal laws in our local corner of
the universe hold up everywhere and every when. A quasar observed from Earth at a distance of 10
billion light years existed when the universe was purportedly less than 4 billion years old. Did the
universe unfold around that quasar the same as it's doing here? If not, is it reasonable to assume the
same universal laws would apply to that quasar? What is the current time on the other side of
the Milky Way, and are their laws exactly the same as ours? Everything, including time and
universal laws, start becoming relative on cosmological scales instead of being absolute.
Steven Weinberg's famous book, The First Three Minutes, describes events occurring literally
during the first three minutes of time. I know Weinberg is extremely smart, which is why I'm so
surprised how easily he fell into the trap of thinking of time in absolute, Newtonian terms. Suppose
we could transport a device back to the universe's beginning to record the events described in
Weinberg's book, assuming it would be rugged enough to withstand the extreme conditions
prevailing then. How much time would have elapsed on the device's clock when those events were
completed? If my hypothesis is right and clocks actually measure the unfolding of the universe, I'm
inclined to think the elapsed time would be many powers of ten greater than three minutes.
110 A cross product is defined in 7-dimensional space as well, but with too many degrees of freedom (21 versus 7).
111 I suspect this is the reason why the Planck constant, which is the quantum of angular momentum, seems to be a
truly fundamental constant of Nature. On the other hand, the laws of conservation of energy and linear
momentum may just be approximations that apply only to sub-cosmological scales.

Appendix W Bekenstein Bound
In his book 13.8: The Quest to Find the True Age of the Universe and the Theory of Everything,
author John Gribbin describes a survey by Allan Sandage, counting the number of galaxies per unit
volume throughout space to determine whether the universe is flat or curved. According to Gribbin,
the survey revealed the number as being constant, proving the universe is indeed flat. This result
never made any sense to me because if the universe is expanding, galaxies had to be packed more
tightly in the far-away/long-ago universe, unless new galaxies continuously pop into the empty
spaces left behind by expansion. I think Sandage's survey indeed did show the universe seems flat;
not because it actually is flat, but because it must appear spatially flat to any free-falling observer
(including Sandage). I think a finite, expanding space-time must have an inherent curvature.
Anyway, while thinking about this topic, I came up with an alternative way of looking at the
cosmological redshift and the Hubble constant defining the rate of universal expansion. This
alternative way is illustrated by the schematic model of the universe below.

Suppose very shortly after the big bang, billions of identical clocks were distributed evenly across
the entire universe, and suppose each one was set to be triggered by some universal event, such the
universe cooling down to the 3000 K recombination temperature.112 Once the clocks are triggered,
they continuously broadcast radio signals indicating the elapsed time since the triggering event.
Now let's fast forward by approximately 13.8 billion years. The billions of clocks seeded in the
early universe are now redistributed everywhere at various distances from Earth. Some of them are
in our immediate neighborhood, and the signals coming from them show about 13.8 billion years
has elapsed since the triggering event. We detect distant clocks in Outer Space running behind
the nearby clocks. The reason is simple: The signals from the far-away clocks were transmitted
earlier in the history of the universe and are just now reaching us. The farther in Outer Space the
clocks are, the farther back in time those signals were sent, and the less time had elapsed on them.

112 According to orthodox cosmology, this supposedly occurred roughly 377,000 years after the big bang, which is a
blink of an eye in cosmological time. If any observers were alive back then, the radius of their observable
universe would have been no greater than 377,000 light-years. That's not much larger than the radius of the
present-day Milky Way galaxy, which is somewhere between 50,000 and 90,000 light years.

The time discrepancy is Δt = d / c, where d is the distance separating us from a particular signal
from the past, with c being the speed of light. A time discrepancy on a clock is equivalent to the
clock running more slowly as seen from our reference frame, underscoring the relativity of time.
But if an artificial clock runs more slowly, then natural clocks in the clock's vicinity also will run
more slowly, including atoms that radiate and absorb light. Distant atom clocks running slowly
are observed as red shifts in the light we see, and those shifts are proportional to the distances from
us. This yields the familiar Hubble constant that defines the expansion of the universe.
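A minimal numeric sketch of this picture: if a clock at distance d lags by Δt = d / c, the fractional slowdown per unit distance is 1 / (c tU), so the implied expansion rate is simply 1 / tU. Plugging in 13.8 billion years (the assumed age) lands close to the measured Hubble constant.

```python
# Minimal numeric sketch: if distant clocks lag by dt = d / c, the implied
# expansion rate is 1 / t_U, where t_U is the assumed age of the universe.
year = 3.156e7                       # seconds per year
t_U  = 13.8e9 * year                 # assumed age, s

H_implied = 1.0 / t_U                # fractional slowdown per second of lookback
Mpc = 3.086e22                       # metres per megaparsec
print(f"1/t_U = {H_implied:.2e} 1/s  ~  {H_implied * Mpc / 1000:.0f} km/s/Mpc")   # ~71
```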
It's important to note that Outer Space relates to time as well as distance. The Beginning is the
farthest point from Earth in all directions.113 If one of the billions of clocks that were seeded
throughout the early universe were detected near The Beginning, its radio signal would reveal
almost zero elapsed time. From our vantage point, that clock is running so slowly it has basically
stopped. There are a couple of ways to stop a clock, one of which is by making it travel at light
speed. Thus, the stopped clock at The Beginning appears as if it's receding from us, or more
accurately we're receding from it, at the speed of light. The current time back to The Beginning is
13,800,000,000 years, and the current distance, R, from The Beginning is 13,800,000,000 light
years, while R increases at the speed of light: dR/dt = c. The quantity 1/R² is a parameter that is
related to curvature in Einstein's Field Equations, and the red arcs appearing in the model indicate
curvatures in different eras, with the curves steadily becoming flatter as the universe unfolds.
Appendix C introduced what I refer to as the post-reductionist universal law, subsuming all other
so-called natural laws according to material reductionism. This universal law says every change
must maximize the total number of degrees of freedom (and entropy/information). If this is true,
then the universe should always be in a state of maximal entropy/information.
Jacob Bekenstein discovered that a finite region of space can hold only so much entropy. He determined the
only way to cram more entropy into a space that's already full is to increase the size of the space. The
universe could be considered as being a finite region of space, and it's already chock full of entropy
according to my post-reductionist universal law conjecture. So in order to cram even more entropy
into the universe, it must expand.
The Bekenstein bound defines the maximum allowable entropy/information of a system based on
one of two sets of system parameters: Its total mass-energy and radius, or its surface area.
The first version of the Bekenstein bound, below, expresses maximum entropy/information in bits
based on the total mass-energy, E, and radius, R.
Imax = 2π R E / (ħ c Ln 2) , where Ln 2 is the natural logarithm of 2 = 0.6931472
In normal circumstances, a radius R would be interpreted as a physical distance from the center of a
sphere to its edge. In the case of the universe, the center is The Beginning and we're located at
Here and Now at radius R on the edge. You can also think of 1/R² as a parameter defining the
current curvature of the universe at the edge, where R is expressed in units of length. If we assume
the universe is already filled to the top with information, we can let I = Imax. Everyone thinks of ħ
and c as universal constants, and we can assume E is constant too, owing to the law of
conservation of mass-energy. R must increase over time in order for I to increase over time, and we
can thus compute the derivative of I with respect to time, dI/dt, as follows.
dI/dt = (2π E / (ħ c Ln 2)) dR/dt , where dR/dt is the derivative of R with respect to time.
We can use dR/dt = c to reduce the equation to the final form below.
dI/dt = 2π E / (ħ Ln 2) (Equation 1)

113 In this case, the old saying all roads lead to Rome can be restated as all directions lead to The Beginning.

According to this formula, the total information within the universe increases at a constant rate,
proportional to the total mass-energy. The derivative dI/dt establishes the ratio of the elapsed proper
time shown on a clock to the increase in total information in the universe surrounding it, as follows:
Δt = ΔI (ħ Ln 2 / 2π E).
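To put numbers on this first version of the bound, here is a quick sketch using a textbook case of my own choosing (a 1-kilogram mass confined within a 1-metre radius), along with the constant bit rate implied by Equation 1 for that same energy:

```python
from math import pi, log

# Illustrative numbers of my own choosing: the Bekenstein bound for a 1 kg mass
# confined within a 1 m radius, plus the constant bit rate from Equation 1.
hbar = 1.055e-34            # reduced Planck constant, J s
c    = 2.998e8              # speed of light, m/s
R    = 1.0                  # radius, m
E    = 1.0 * c**2           # rest energy of 1 kg, J

I_max = 2 * pi * R * E / (hbar * c * log(2))    # maximum information, bits
rate  = 2 * pi * E / (hbar * log(2))            # dI/dt from Equation 1, bits/s

print(f"I_max ~ {I_max:.2e} bits")    # ~2.6e43 bits
print(f"dI/dt ~ {rate:.2e} bits/s")   # ~7.7e51 bits per second for this E
```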
In the second version of the Bekenstein bound, Imax is expressed in bits in terms of an area:
Imax = A c³ / (2 h G Ln 2)
You should not think the area A is the area of a physical surface surrounding the universe out there. Some
cosmologists mistakenly tell us we're surrounded by a far-away surface hologram with the
present universe encoded on it. The fact is our present moment is riding along on an expanding
surface with the entire history of the universe encoded on it. If you think of 1/A as representing the
curvature of this surface, then it's okay to use the customary relationship A = 4π R²:
I = Imax = 2π R² c³ / (h G Ln 2). Remembering that h = 2π ħ,
I = R² c³ / (ħ G Ln 2)
Assuming c, ħ, and G are constants, we calculate the time derivative, dI/dt, as follows.
dI/dt = (2 c³ / (ħ G Ln 2)) R dR/dt
We can use dR/dt = c to reduce the equation to the final form below.
dI/dt = (2 c⁴ / (ħ G Ln 2)) R (Equation 2)
By setting Equation 1 equal to Equation 2, we find this relationship:
R = 2 G E / c⁴
This should look familiar. By replacing E with M c² in the above equation, R becomes the
Schwarzschild radius of the universe, and we're actually riding on its event horizon! Further
comparisons between Equation 1 and Equation 2 reveal an unpleasant discrepancy, however.
Assuming E is constant in Equation 1 requires the rate of change, dI/dt, to be constant; however, dI/dt in
Equation 2 is proportional to R, which we know increases at the speed of light. In order for the two
equations to agree with each other, one of the following statements must be true:
1. Total mass-energy, E, is proportional to R
2. The quantity (2 c⁴ / (ħ G Ln 2)) R is proportional to E
The first statement suggests there's a dark energy component of the total mass-energy. However,
according to prevailing theories on this subject, Edark is proportional to volume, and since volume is
proportional to R cubed, dark energy by itself wouldn't make E proportional to R.
The second statement, (2 c⁴ / (ħ G Ln 2)) R ∝ E, is possible only if one or more of the universal
constants (c, ħ, or G) can change over time. This would be a very radical departure from standard
material reductionism but it may not be too far-fetched. In The Singular Universe and the Reality
of Time, Roberto Unger and Lee Smolin argue that universal laws are continuously evolving along
with the universe. Interestingly, Alexander Unzicker and Sheila Jones note in their book Farewell
to Reality that repeated measurements of G based on the Cavendish experiment, along with other
methods, show some unexplained fluctuations in the values of G that fall outside the known limits
of experimental error. Paul Dirac noted the ratios of some force constants have uncanny
similarities to ratios of cosmological scales, which led him to the large-number hypothesis. From
this he concluded the gravitational constant is inversely proportional to the age of the universe and
the total mass-energy in the universe is proportional to the age of the universe squared. Notice how
nicely his conclusions match the requirements of Statement 2, above: R/G ∝ E ∝ tU²!!

Appendix X It Equals Bit
The it from bit conjecture was introduced way back in Appendix D of this essay. John Wheeler
was one of the main proponents of the idea that the material universe is the product of some sort of
digital computational process. We also learned in Appendix W that the conservation of energy may
not apply to the universe at large, and that one of the so-called universal constants, G, may not be
constant throughout time. In this appendix, I will explore the possibility that mass-energy is
actually equivalent to entropy-information it equals bit.
According to Paul Dirac's large-number hypothesis, the total mass-energy of the universe, EU, is
proportional to the square of the age of the universe, tU². The curvature parameter, A, is also
proportional to tU². The Bekenstein bound, combined with my own conjecture that the universe is
continuously saturated with information, shows total information, IU, is proportional to tU²:
IU = A / (4 Ln 2) ∝ tU² , where IU is expressed in bits and A is given in Planck units of area
Since IU and EU are both proportional to tU², one might jump to the conclusion that IU / EU is a
constant, but this would be a mistake because a Planck unit of area = ħ G / c³, and G ∝ 1 / tU:
IU = A c³ / (4 ħ G Ln 2) ∝ tU³ ∝ EU^(3/2) , so apparently IU is not proportional to EU.
An equivalence between IU and EU could be realized if ħ ∝ tU, but unfortunately there are no
indications of ħ changing over time. However, there is one way around this conundrum by
introducing another variable, temperature, to the equation. Temperature can be expressed as energy,
k T, using the Szilárd equation dE = k T Ln 2 dI, where T is absolute temperature, expressed in K,
and k is Boltzmann's constant. The physical meaning of this equation is, k T Ln 2 joules of energy
are released when one bit of information is removed from a storage medium operating at an
absolute temperature T. Let's rearrange the first equation in this appendix as follows.
Ln 2 dIU = (c³ / (4 ħ G)) dA , where A here is given in units of length squared
Multiplying both sides of this equation by k T, we get the following.
k T Ln 2 dIU = (k T c³ / (4 ħ G)) dA
The left side of the above equation equals dEU per Szilárd's equation, so if T ∝ 1 / tU:
dEU = (k T c³ / (4 ħ G)) dA → dEU / dA = k T c³ / (4 ħ G) = constant → EU ≡ IU
In conclusion, if T ∝ 1 / tU across an entire surface of constant curvature, A, this will produce the
following equivalence: mass-energy ≡ entropy-information, although they increase at different
rates over time because temperature decreases over time. A surface of constant curvature is the set of all
points defined as Now. The question is, What's the temperature of Now? The answer is it's the
temperature of the universe, which can be measured by aiming a microwave antenna at the sky and
measuring the radiation pouring out from it. Cosmologists call it the Cosmic Microwave
Background, CMB, and its temperature is 2.73 K. Conventional wisdom says CMB came from one
solitary far-away and long-ago place and time when the universe's temperature was approximately
3000 K. In reality, CMB is the combined thermal radiation from every epoch in our causal patch,
once those radiations were red-shifted in proportion to the times/distances back to those epochs.
According to Wien's displacement law, the frequency of peak intensity of black-body radiation is
proportional to temperature, and this is in agreement with Planck's radiation law. Since the
temperature of each epoch is inversely proportional to tU in that epoch and its peak frequency is red-
shifted in proportion to its distance from us, thermal radiation we receive from every previous
epoch is at the same temperature; it's currently 2.73 K but was higher in the past.
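Taking the two assumptions above at face value (emission temperature falling as 1 / t, and received frequencies scaled down by the factor temit / tnow), a toy loop shows that every epoch indeed arrives at the same observed temperature:

```python
# Toy loop, taking the two assumptions above at face value: emission temperature
# T_emit ~ 1/t_emit, and received frequencies (hence the apparent temperature)
# scaled down by the factor t_emit / t_now.
t_now = 13.8e9                        # years
T_now = 2.73                          # K
const = T_now * t_now                 # the assumed T * t = constant relation

for t_emit in (377_000, 1e6, 1e8, 1e9, 13.8e9):       # sample epochs, years
    T_emit = const / t_emit                            # hotter in the past
    T_seen = T_emit * (t_emit / t_now)                 # red-shifted to today
    print(f"epoch {t_emit:>14,.0f} yr:  emitted {T_emit:12.2f} K  ->  observed {T_seen:.2f} K")
```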

Appendix Y Time Is of the Essence
The Singular Universe and the Reality of Time by Roberto Unger and Lee Smolin centers around
time. The following statement is near the front of the book, under the heading of Philosophy.
The revolution that Roberto Mangabeira Unger and Lee Smolin propose relies on three central
ideas. There is only one universe at a time. Time is real: everything in the structure and regularities
of nature changes sooner or later. Mathematics, which has trouble with time, is not the oracle of
nature and the prophet of science; it is simply a tool with great power and immense limitations.
I wholeheartedly agree with their philosophy as stated above. It turns out that time isn't only real;
it's the most fundamental element of reality because it connects so many other elements. In the
previous appendix, I derived a set of relationships from the post-reductionist universal law I
postulated in Appendix C: Every change maximizes the total degrees of freedom of the
universe. Those relationships can be expressed in relation to time since The Beginning (the
center of temporal curvature, with spatial curvature being zero).
G ∝ T ∝ tU⁻¹ ,  R ∝ tU ,  A ∝ EU ∝ tU² ,  IU ∝ tU³
The quantities G, T, R, A, EU, and IU are, respectively: Newton's gravitational parameter (erstwhile
assumed to be a universal constant); absolute temperature; the radius of temporal curvature
(commonly thought of as being the radius of the universe); the area of a surface of uniform
temporal curvature; total mass-energy of the universe; total information of the universe. The
current value of tU is believed to be around 13.8 billion years.
We can do a quick sanity check against the standard cosmological model to see if this is on the right
track. According to the SCM, the universe cooled to the recombination temperature, 3000 K, when
tU was around tR = 377,000 years. If the current temperature is 2.73 K and the rate of cooling is
given by the relationship T ∝ 1 / tU , then tR / tU = 2.73 / 3000. But when tR is solved this way based
on the current estimate of tU = 13.8 billion years, the result is tR = 12,558,000 years, which is greater
than the currently-accepted value by a factor of 33.
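The arithmetic behind that factor of 33, spelled out as a quick check:

```python
# Quick check of the arithmetic above: if T ~ 1/t_U, cooling from 3000 K to
# 2.73 K implies a recombination time t_R = t_U * (2.73 / 3000).
t_U = 13.8e9                      # assumed current age, years
t_R = t_U * 2.73 / 3000           # implied recombination time, years
print(f"t_R ~ {t_R:,.0f} years")                       # ~12,558,000
print(f"ratio to 377,000 years: {t_R / 377_000:.0f}")  # ~33
```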
There may be an explanation for this large discrepancy. The simplest explanation is that if
chronologies of events in the past are based on red shifts, small percentage errors in the Hubble
constant create huge time shifts in the early universe. For example, if the estimate of the Hubble
constant deviates by one percent from the true value, it would produce a time shift of 138,000,000
years for the earliest events. However, that explanation wouldn't apply to this discrepancy for the
recombination event because the SCM value relies on forward-looking expansion and cooling rates
based on general relativity, not looking back at red shifts. It turns out that the cooling rate for
relativistic particles, such as photons, yields T ∝ R⁻¹, in agreement with the model developed in
Appendix X. But for non-relativistic particles, such as baryons, the cooling rate is much faster,
yielding T ∝ R⁻². Taking a much faster cooling rate for the early universe into account, the
recombination event could have occurred very early: tU << 12,558,000 years or 377,000 years.
It's interesting to note that although EU ∝ tU² and IU ∝ tU³, there still is an equivalence between EU
and IU because they are tied together by temperature, T ∝ tU⁻¹. This means the surface density of
information, dIU / dA, increases in proportion to tU. The current holographic universe model
assumes the Planck area is constant and IU is proportional to its surface area. In the model proposed
here, IU is proportional to R³ and the whole volume of the universe. I find this model intuitively
much more satisfying than the holographic model because looking out into the volume of space is
looking back at the history of the universe. That history can only be encoded in the present; i.e., the
expanding surface of constant curvature defined by its area, A. This encoding is made possible by a
shrinking Planck area: dIU / dA ∝ tU (dIU / dtU ∝ tU² and dA / dtU ∝ tU).

Appendix Z The Final Chapter
The first edition of Order, Chaos and the End of Reductionism came out in March, 2013 as a
tentative response to my first essay Is Science Solving the Reality Riddle? It is now August, 2017
and it's hard for me to believe that I've been working on this essay for over four years. I honestly
intended to make it brief and to the point, but it turned out to be a very long work in progress with
many twists and turns, dead ends, blind alleys and even a few self-contradictions along the way. I
thought it best to leave everything standing as it was when I originally wrote it, warts and all, as a
kind of notebook that recorded my evolving thoughts. I'm now up to the last letter of the alphabet
with this appendix, so it's finally time for me to wrap things up and summarize what I've gleaned
over a period of four-plus years.
In its most basic form, reductionism is an approach to understanding the nature of complex things
by reducing them to the interactions of their parts, or to simpler or more fundamental things.
Engineers and physicists use reductionism to explain reality. I came to the conclusion that there are
three different classes of interactions in nature:
1. Deterministic, linear, reversible, certain
2. Deterministic, non-linear, irreversible, predictable in the forward direction
3. Non-deterministic, irreversible, unpredictable (probabilistic)
Reductionism is concerned mainly with the first class of interactions; however, they only apply to
the most trivial of situations, such as two bodies orbiting around each other and simple harmonic
motion. The vast majority of interactions in nature are in the second class, commonly referred to as
chaotic interactions. Ironically, it seems that the highly-complex order we observe in the universe
emerges essentially from chaos. Take for example weather patterns, like a hurricane, born from
chaos and yet having an identity and a quasi-stable structure. The giant red spot on Jupiter is a
permanent hurricane that has persisted for at least 187 years.
Since reductionism is only capable of examining the simplest and most trivial examples of order, I
chose the title of this essay to reflect the fact that order and chaos begin where reductionism ends.
Another interpretation is that reductionism is at an end as a viable scientific philosophy going
forward. As long as you examine nature through linear, deterministic and reversible interactions,
you are only seeing reality through a tiny keyhole. How sad it is that a majority of scientists still
consider reductionism as the preferred default method of solving science. String theory is touted as
the whiz-bang cutting edge of theoretical physics, but I perceive old-fashioned reductionism at its
core.
The third class of interactions is stochastic, random, and completely unpredictable. These
interactions lie at the heart of quantum mechanics. Oddly enough, some extremely brilliant
theoretical physicists (including Albert Einstein up till his death) deny the very existence of
stochastic interactions, believing that some underlying local hidden variables are involved instead.
I confess being guilty of thinking that chaotic interactions might be used as substitutes for stochastic
processes, but I was definitely wrong (see Appendix H). Experimental violations of Bell's
inequality put that idea to rest, and in the face of such incontrovertible evidence as this I'm amazed
there are theoretical physicists who still cling to determinism.
The core of my thesis is: Entropy equals information. Entropy has been completely
misunderstood by many leading scientists, who try to label it as missing information or hidden
information or even negative information. This misconception stems from the fact that order
and entropy are indeed opposites. People tend to prefer order over disorder, so they equate entropy
to something very negative and undesirable. On the other hand, people love information the more
the better. After all, we live in the information age with the Internet offering us cool things like

Wikipedia, Facebook, Twitter, and Instagram. So how can something good like information
possibly be the same as something so obviously bad like entropy? First off, you need to know
how information is defined, which unfortunately most physicists do not. Claude Shannon figured it out
in the 1940s, and it has everything to do with probability and uncertainty. Suppose there are N
possible outcomes of some interaction, each with a certain probability, pr. Shannon concluded that
the amount of information, S, contained in that set of outcomes is as follows.
S = − Σ pr log₂ pr , r = 1, 2, 3, …, N
If an outcome is certain, i.e., if any of the probabilities in the set equals one, then there is zero
information in that set. Suppose I call someone on the phone and inform them it's Saturday. How
much information did I relate to that person if he already knew it was Saturday? The answer is
zero, because there was no uncertainty on his part about the day of the week. But now suppose that
person just woke up from a coma and had no idea what day it was, so all days are equally probable
to him. For N equally-probable outcomes, the above equation reduces to S = log₂ N. Stating that
it's Saturday provides log₂ 7 bits of information to that person. If you notice, S = log₂ N is identical
to Boltzmann's definition of thermodynamic entropy, except Boltzmann used the natural logarithm
instead of the base-2 logarithm and he stuck a constant, kB, in front of it: S = kB Ln N.
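To make the day-of-the-week example concrete, here is a small sketch showing that Shannon's bit count and Boltzmann's S = kB Ln N are the same tally expressed in different units (kB below is the standard value):

```python
from math import log, log2

# The "Saturday" example and the Shannon/Boltzmann connection. For 7 equally
# probable days, the information is log2(7) bits; Boltzmann's S = kB * Ln(N)
# is the same count expressed in J/K instead of bits.
N  = 7
kB = 1.381e-23                         # Boltzmann constant, J/K

bits      = log2(N)                    # ~2.807 bits
boltzmann = kB * log(N)                # ~2.69e-23 J/K

print(f"log2(7)      = {bits:.3f} bits")
print(f"kB * Ln(7)   = {boltzmann:.3e} J/K")
print(f"back to bits = {boltzmann / (kB * log(2)):.3f}")   # ~2.807 again
```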
Once you come to grips with the fact that entropy = information, then it's apparent that information
cannot exist without uncertainty. So which class of interactions in nature involves uncertainty?
Well, the first class clearly doesn't because all outcomes can be uniquely solved in both forward and
reverse directions. A single planet revolving around a star will stay in that orbit forever unless it is
perturbed by some outside force. You can determine the exact location of that planet billions of
years into the future or billions of years into the past using a simple formula that describes an ellipse
with a time parameter, t.
Can information come from chaos? It might seem that chaos could provide randomness and
uncertainty, but this is not the case. Chaotic processes are still deterministic because there is a
unique relationship between the current state and subsequent states. Thus, every repetition of a
chaotic process will produce exactly the same sequence of events. This is not true in the reverse
direction due to one-to-many relationships between the current state and previous states, rendering
chaotic processes irreversible. Thus, irreversibility alone does not generate true uncertainty, at least
going forward. Chaotic interactions can rearrange bits, and even make them unrecognizable, but
they cannot create new bits. Only the third class of stochastic interactions can introduce the
uncertainty that information requires.
Chaos produces fractal patterns, and these patterns are widespread in nature as shown in some of
the figures in the front of this essay. So at one point in compiling this essay, I thought the universe
itself might be a colossal fractal. Fractal patterns have extremely high (one might even say
infinite) levels of complexity that are generated from very simple non-linear functions. Fractals
have the properties of scale-invariance and self-similarity, where large-scale features are repeated
over and over on smaller scales. Those features are not necessarily repeated exactly, however. The
Mandelbrot set is one of the most widely-known fractals, having a prominent circular feature that
appears over and over again on smaller scales. On the smallest scales, this feature gradually gives
way to other features. You can try this yourself using the interactive Mandelbrot Viewer.
I used to think the general relativity field equations could only be applied to small-scale systems,
but I was very wrong. What I discovered is that Einstein actually had stumbled on a set of
equations that provides an exact description for the entire universe, and that pattern is only repeated
as an approximation on smaller scales involving weak-field interactions. In other words, the
universe is a fractal having an exact overall solution given by the Schwarzschild equation, but this
equation doesn't necessarily serve as an exact solution for smaller scales.

A recurrent theme in this essay is that the most important and perhaps the only law of nature is
the statement that entropy of isolated systems cannot decrease. This is the famous second law of
thermodynamics, which really should be the zeroth law of the universe because it underlies
causality itself. Since entropy and information are equivalent, this law means that information
cannot be destroyed. Some scientists try to trivialize this law by saying that there's just a tendency
for entropy to increase because it's more likely to increase than to decrease. They say given enough
time (and patience) you'll see an isolated system inevitably return to some previous lower-entropy
state. I state unequivocally that this is not just unlikely, but it's impossible because it would be
tantamount to destroying information and causing the "unhappening" of previous events.
As a corollary to the second law of thermodynamics, I came up with what I call the post-
reductionist universal law stated as follows:
Every change maximizes the total degrees of freedom of the universe.
The phrase total degrees of freedom sounds kind of nice, which is why I chose it. But the
logarithm of total degrees of freedom equals total entropy, so what this really means is that every
change maximizes the entropy of the universe. Not only can entropy never decrease, it must always
increase to the maximum extent possible. Taking this idea to the limit, I postulated we live in a
moment of maximally-increasing entropy, which addresses and maybe solves the mystery of
time. What clocks are actually measuring are increases in entropy reflected as a reduction in
curvature of the universe unfolding around them, as explained in the following paragraphs.
Solving the Schwarzschild equation yields R = 2 E G / c⁴, describing a sphere of radius R, where E is
the mass-energy of the system, G is the gravitational parameter, and c is the speed of light.
Maximizing the total degrees of freedom (entropy) of the universe means the universe is in a
permanent state of maximal entropy, so the only way to further increase entropy is through
expansion. The maximum rate of expansion can be attained if R increases at the speed of light by
introducing the concept of universal time, tU, where R = c tU.
The idea that there could be such a thing as universal time is anathema to physicists. After all, we
are told space and time are relative, not absolute. However, tU isn't the Newtonian notion of
simultaneity across space. Instead, tU marks the progress of universal expansion, and while R has a
dimension of length, it should be thought of as an expanding radius of curvature around a temporal
center, with a surface surrounding the center at a distance R = c tU marking the present moment. No
clock can run ahead of tU because no clock can run ahead of the present moment. A free-falling
body will keep up with tU, except when a force acts on the body causing acceleration and its proper
time to lag behind tU.
Observing objects at some distance in any direction, we observe them when the universe had a
radius R' < c tU. Those objects will fall behind us in time and will appear to recede from us in space,
resulting in the cosmological red shift. Objects at a distance R = c tU will be receding at the speed
of light and will be at the edge of our horizon. Substituting c tU for R in the Schwarzschild equation
results in E G = c3 tU. This means either the total mass-energy of the universe or the gravitational
parameter must increase over time, or both. As it turns out, the gravitational parameter decreases
over time, being proportional to tU 1, so E must increase in proportion to tU 2.
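Written out explicitly (in LaTeX notation, restating the step just described rather than adding anything new):

c\,t_U = \frac{2 E G}{c^{2}} \;\Rightarrow\; E = \frac{c^{3} t_U}{2 G}, \qquad G \propto t_U^{-1} \;\Rightarrow\; E \propto \frac{t_U}{G} \propto t_U^{2}.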
If the universe is in a state of maximal entropy, we can apply the Bekenstein equation to it. By
combining the Bekenstein and Szilárd equations, we get the following equation of state.
dEU = (k T c³ / 4 ħ G) dA, where A is the expanding surface area of constant curvature, 4π R².
dA = 8π R dR
dA / dtU = 8π R dR / dtU = 8π c R
dEU / dtU = 2π k T c⁴ R / ħ G = 2π k T c⁵ tU / ħ G
The above equation of state combines the four fundamental constants k, c, ħ, and G (although G is
really a variable, being inversely proportional to tU). The temperature of the universe, T, is also
inversely proportional to tU, so the ratio T / G equals a constant that can be evaluated using the
current temperature of the universe and the measured value of the gravitational constant.
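For example (a minimal numeric sketch in Python; the CMB temperature of about 2.725 K and the measured value of G are assumed present-day figures, not values quoted in this essay):

# Evaluate the claimed constant ratio T / G from present-day measurements.
T_now = 2.725           # current temperature of the universe (CMB), K
G_now = 6.674e-11       # measured gravitational constant, m^3 kg^-1 s^-2
print(f"T / G = {T_now / G_now:.3e} K kg s^2 m^-3")   # about 4.1e10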
According to the Bekenstein equation, the total entropy expressed in bits is proportional to the area
of constant curvature, 4π R², divided by 4 ln 2 times the Planck area, ħ G / c³. Since the Planck
area is proportional to tU⁻¹, the total entropy is proportional to tU³. There must have been a time in the
past when the total entropy of the universe was equal to one bit, which I would guess is the
minimum amount of information that has meaning: the information associated with a coin toss. The
value of tU corresponding to a single bit of entropy would be my idea of The Beginning.
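To get a feel for the scale involved, here is a minimal numeric sketch in Python; the present universal time of about 4.35 × 10¹⁷ s (roughly 13.8 billion years) and today's values of ħ, G and c are assumptions, not figures from this essay.

import math

# Present-day entropy in bits from the Bekenstein relation used above:
# N_bits = A / (4 * ln 2 * Planck area), with A = 4*pi*R^2 and R = c*t_U.
hbar = 1.055e-34    # J s
G    = 6.674e-11    # m^3 kg^-1 s^-2
c    = 2.998e8      # m/s
t_U  = 4.35e17      # assumed present universal time, s (~13.8 Gyr)

R = c * t_U
A = 4 * math.pi * R**2
planck_area = hbar * G / c**3
N_bits = A / (4 * math.log(2) * planck_area)
print(f"N_bits = {N_bits:.1e}")   # on the order of 1e122 bits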
Information from the past is encoded into the present moment. Linear and chaotic interactions
transform those bits according to the laws of determinism without any loss of information, obeying
the second law of thermodynamics, a.k.a. the zeroth law of the universe. Meanwhile, stochastic
interactions lay down new bits of information at an increasing rate across an ever-expanding
surface of constant curvature corresponding to the present moment.
One of the raging controversies in the scientific community is the "vacuum catastrophe," referring
to the huge discrepancy between the vacuum mass-energy density inferred from cosmological arguments
and the apparent flatness of space, and the vacuum mass-energy density of virtual particle pairs
based on quantum electrodynamics (QED). Using the model developed in this essay, the
density, ρ, is found by dividing dEU / dtU by dVU / dtU = 4π R² dR / dtU (and by c² to convert
energy density into mass density), with the assumption that dR / dtU is at the maximum rate, c. The
vacuum density is ρ = k T / (2 ħ G tU), and it decreases over time. Based on the known values of the
parameters used in the formula, the vacuum density is currently 980 kg/m³, a surprisingly large value.
However, it is not nearly as outlandish as the QED value for the vacuum density of around 10¹⁰⁶ kg/m³.
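The algebra behind that density formula can be checked symbolically. The sketch below (using Python's sympy library) only verifies that the formula follows from the equation of state derived above; the numerical value still depends on the T and tU one plugs in.

import sympy as sp

# Symbolic check that rho = k*T / (2*hbar*G*t_U) follows from
# dE_U/dt_U = 2*pi*k*T*c^4*R / (hbar*G) with R = c*t_U and dR/dt_U = c.
k, T, c, hbar, G, t_U = sp.symbols('k T c hbar G t_U', positive=True)
R = c * t_U

dE_dt = 2 * sp.pi * k * T * c**4 * R / (hbar * G)   # equation of state above
dV_dt = 4 * sp.pi * R**2 * c                        # dV_U/dt_U with dR/dt_U = c

rho = sp.simplify(dE_dt / dV_dt / c**2)   # divide by c^2: energy -> mass density
print(rho)                                # prints the equivalent of k*T/(2*hbar*G*t_U)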

SUMMARY

I'll conclude this essay with a list of bullet items that highlight its key points:
• There are three kinds of interactions: linear (deterministic, reversible); non-linear (deterministic,
chaotic, irreversible); and stochastic (probabilistic, irreversible).
• Entropy is equivalent to information.
• Information requires uncertainty; thus, only stochastic interactions are capable of producing
information.
• Linear and chaotic deterministic interactions preserve and transform information in causal
space according to the laws of nature.
• Causal space has one time dimension and requires three spatial dimensions, because the number
of spatial dimensions must match the number of rotational degrees of freedom.
• A free-falling observer is incapable of measuring any spatial curvature of three-dimensional
space because of rotational symmetry.
• Due to the asymmetry of time, there is a radius of temporal curvature, R, expressed in units
of length.
• Order emerges from chaotic interactions as fractal-like patterns that repeat on different
spatial and temporal scales.
• The universe is a fractal with the properties of scale-invariance and self-similarity.
• Due to scale-invariance, solutions to the general relativity field equations are exact solutions
for the entire universe and approximate solutions for its subparts.
• The Schwarzschild formula R = 2 E G / c² is an exact formula for a closed system, e.g., the
universe.
• The universe is in a permanent state of maximal entropy, and so the Bekenstein equation can
be applied to it. Thus, the universe must expand in order to accommodate more information.
• There exists a universal time parameter, tU, which marks the expansion of the universe.
• The universe expands maximally at a rate dR / dtU that is bounded by the speed of light, c.
• Since tU corresponds to the present moment, the proper time of an observer cannot get ahead of
tU. The geodesic paths of free-falling bodies maximize proper time up to the limit of tU.
• Time, having a radius of curvature equal to R, does not have time-translation symmetry over
cosmological time periods. Thus, the law of conservation of mass-energy does not apply to
the universe as a whole.
• The quantity of mass-energy in the universe increases in proportion to tU².
• The quantity of entropy-information in the universe increases in proportion to tU³.
• There is an equivalency between mass-energy and entropy-information (one bit of information
corresponds to an energy of k T ln 2).
• Since mass-energy and entropy-information increase at different rates, they are linked by the
Szilárd equation with a decreasing temperature, T, proportional to tU⁻¹.
• The vacuum density of mass-energy is ρ = k T / (2 ħ G tU), with a present value of 980 kg/m³.

THE END
