
Bits don’t have error bars
Russ Abbott
Department of Computer Science, California State University, Los Angeles, California
[email protected]
Abstract. How engineering enabled abstraction—in computer science.
1. Introduction
In 1944, Erwin Schrödinger [5] pondered the science of life.
[L]iving matter, while not eluding the ‘laws of physics’ … is likely to involve ‘other
laws,’ [which] will form just as integral a part of [its] science.
But if biology is not just physics what else is there? In a landmark paper [3]
Philip Anderson extended Schrödinger’s thought when he argued against what he
called the constructionist hypothesis, namely, that
“[the] ability to reduce everything to simple fundamental laws … implies the ability to
start from those laws and reconstruct the universe.”
Anderson explained his opposition to the constructionist hypothesis as follows.
At each level of complexity entirely new properties appear. … [O]ne may array the
sciences roughly linearly in [a] hierarchy [in which] the elementary entities of [the science
at level n+1] obey the laws of [the science at level n]: elementary particle physics, solid
state (or many body) physics, chemistry, molecular biology, cell biology, …, psychology,
social sciences. But this hierarchy does not imply that science [n+1] is ‘just applied
[science n].’ At each [level] entirely new laws, concepts, and generalizations are necessary.
… Psychology is not applied biology, nor is biology applied chemistry. … The whole
becomes not only more than but very different from the sum of its parts.
Although Anderson agreed with Schrödinger that nothing will elude the laws of
physics, he thought that the position he was taking—that the whole is not only
more than but very different from the sum of its parts—was radical enough that
he should include an explicit reaffirmation of his adherence to reductionism.
[The] workings of all the animate and inanimate matter of which we have any detailed
knowledge are all … controlled by the same set of fundamental laws [of physics]. …
[W]e must all start with reductionism, which I fully accept.
The questions raised by Schrödinger and Anderson can be put as follows. Are there independent higher-level laws of nature, and if so, is the existence of such laws consistent with reductionism? Or, to frame the question from the opposite perspective, if everything is reducible to the fundamental laws of physics, can there be independent higher-level laws of nature?
The computer science notion of level of abstraction explains why there can be
independent higher-level laws of nature without violating reductionism. In providing this explanation, computer science illustrates how computational thinking, a
term coined by Jeannette Wing [7], applies beyond computer science, in this case resolving one of philosophy’s most vexing problems. Section 4
explains the essence of the solution, which I discuss in greater detail in [1]. This
paper explores the different roles that computer science and engineering play in
the development of the necessary concepts.
2. Turning dreams into reality
Engineers and computer scientists create new worlds. A poetic—if overused—
way to put this is to say that we turn our dreams into reality. Trite though this
phrase may be, I mean to take seriously the relationship between ideas and material reality. As engineers and computer scientists we transform ideas—which exist
only as subjective experience—into phenomena of the material world.
In raising the issue of the relationship between mind and the physical world I
am not proposing a theory of mind. I am not making any claims about how mental
states or events come into being or about how human beings (or other conscious
entities) map mental states or events to anything outside the mind. There is a
broad philosophical literature on concepts1 and an even broader philosophical literature on consciousness.2 I do not claim to have mastered this literature or even to
have familiarized myself with most of the issues that philosophers have raised—
and there are a significant number of them.
Nonetheless I want to talk about the relationship between ideas and physical
reality—and especially about how that relationship is at the heart of how we as
computer scientists and engineers do our work. In this article, I will take as a given that we each experience ourselves as thinking beings, that we each understand
what it means to say that we experience ourselves as thinking beings, and that
when I speak of having an idea, we each understand in more or less the same way
what that means.
1 See [Margolis] and the “Related Entries” for an overview of the philosophical literature on Concepts.
2 See [Van Gulick] and the “Related Entries” for an overview of the philosophical literature on Consciousness.
Engineering and computer science are both creative disciplines. Practitioners
in both fields begin with ideas, primarily for things that don’t exist, and we build
those things. That’s actually quite remarkable on its own. It’s not clear that there
are any other species that do this—that imagine something and then bring that imagined thing into existence. This sort of creativity is the essence of the creative
arts.
A primary difference between engineering and computer science is the kind of
product that we create—or more accurately, the forms in which we express our
creations. Computer scientists express their creative products as software. Engineers express their creative products as engineering drawings and other design
documents.
Computer science turns ideas into a symbolic reality; engineering turns ideas
into a material reality.3, 4, 5 Although this may seem unremarkable, its consequences are far-reaching.
Thought externalization. Engineers and computer scientists turn ideas into reality. The first step in turning ideas into reality is to externalize thought—not just
to act on a thought but to represent the thought outside subjective experience in a
form that allows it to be examined and explored.
An enduring goal of computer science is to develop languages that have two
important properties. (a) The language may be used to externalize thought.
(b) Expressions in the language can act in the material world—that is, the language is executable. This is remarkably different from anything that has come before. Human beings have always used language to externalize thought. But to have
an effect in the world, language has always depended on people. Words (and
equations) mean nothing unless someone understands them. Software acts without
human intervention. [2]
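To make the contrast concrete, here is a minimal sketch in Python (the example and its names are mine, not from the paper): the same thought is externalized once as notation, which is inert until a person reads it, and once as software, which acts on its own.

```python
# The same thought, externalized two ways (illustrative sketch).

# (1) As notation: inert text, meaningful only to a human reader.
equation = "1 + 2 + ... + n = n*(n+1)/2"

# (2) As software: the externalized thought acts without human help.
def sum_to(n):
    """Sum of the integers 1 through n."""
    return n * (n + 1) // 2

print(sum_to(100))  # 5050
```

The string and the function express the same idea, but only the function can do anything with it.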
Engineers externalize thought as either designs or objects. Most designs—even
computer-based designs—have the same limitation as most other languages. To
affect the world a person must understand them. The engineer who builds an object has indeed turned thought into a material phenomenon. But the thought is
gone; all that’s left is the physical realization. To recover the thought requires reverse engineering—i.e., science. In computer science externalized thought is executable; in engineering it isn’t.
3. Engineering and computer science
Intellectual leverage. Computer science gains intellectual leverage by building
levels of abstraction, the implementation of new types and operations in terms of
3 Scientists turn reality into ideas.
4 Humanists turn reality into dreams. —Debora Shuger
5 Mathematicians turn coffee into theorems. —Paul Erdős
existing types and operations. A level of abstraction is always operationally reducible to a pre-existing substrate, but it is characterized independently—what Searle
calls causal but not ontological reducibility.6 Levels of abstraction allow computer
scientists to create new worlds—worlds that are both symbolic and real and that
obey laws that are independent of their substrate platforms.
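As an illustration of this definition, here is a minimal sketch in Python (the Stack class is my invented example): a new type and its operations implemented entirely in terms of an existing type, operationally reducible to a list yet obeying its own law (last in, first out).

```python
# A level of abstraction: a Stack defined in terms of an existing
# type (Python's list). It is operationally reducible to the list,
# yet characterized independently by the LIFO law.
class Stack:
    def __init__(self):
        self._items = []          # the pre-existing substrate

    def push(self, x):
        self._items.append(x)

    def pop(self):
        return self._items.pop()

    def is_empty(self):
        return not self._items

s = Stack()
s.push(1)
s.push(2)
print(s.pop())  # 2: the LIFO law holds regardless of the substrate
```

The same Stack could be re-implemented on a different substrate (say, a linked structure) without changing the laws it obeys, which is the sense in which the level is characterized independently.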
Engineering gains intellectual leverage through mathematical modeling and
functional decomposition. Both are understood to approximate an underlying reality. Neither is understood to create ontologically independent entities. The National Academy of Engineering [4] explains that “engineering systems often fail
… because of [unanticipated interactions (such as acoustic resonance) among well
designed components] that could not be identified in isolation from the operation
of the full systems.”
A further illustration of the relative difficulty engineering has with ontologically independent constructs is that engineering designs are buttressed by safety factors; software, in contrast, uses assertions. Every physical implementation of a
level of abstraction does indeed have feasibility conditions. Safety factors should
be understood as ways to ensure that those feasibility conditions are met rather
than as an insurance policy against improbable events.
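A hedged sketch of this contrast in Python (the class and the capacity figure are invented for illustration): an assertion states the feasibility condition under which the abstraction is valid, rather than silently padding the design.

```python
# An assertion makes a level of abstraction's feasibility condition
# explicit instead of hiding it behind a safety factor.
# (MAX_DEPTH is a hypothetical limit, invented for this example.)
MAX_DEPTH = 1024

class BoundedStack:
    def __init__(self):
        self._items = []

    def push(self, x):
        # The abstraction is valid only while this condition holds.
        assert len(self._items) < MAX_DEPTH, "feasibility condition violated"
        self._items.append(x)
```

When the condition fails, the assertion reports the violated assumption directly, rather than the system degrading in some unanticipated way.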
Symbolic floor vs. no floor. Engineering enabled computer science when it
harnessed electrical signals as bits. Bits are both symbolic—no error bars—and
physically real. Computer science builds levels of abstraction on a foundation of
bits. It relies on engineering for issues beyond the bit—such as performance.
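The point can be illustrated with a toy digitization step in Python (the voltage levels and threshold are invented for illustration): the analog signal carries noise, but the bit it is thresholded into is exact.

```python
import random

def to_bit(voltage, threshold=2.5):
    """Threshold a noisy analog voltage into an exact symbolic bit."""
    return 1 if voltage >= threshold else 0

# A nominal 5V 'high' signal comes with physical noise...
noisy = 5.0 + random.gauss(0, 0.1)
# ...but once thresholded, the result is a 0 or a 1 with no error bars.
print(to_bit(noisy))
```

The measurement has an error bar; the bit does not, which is what lets computer science build purely symbolic levels on a physical foundation.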
But bits prevent computer science from working with the full richness of nature. Every software model has a fixed bottom level—making it impossible to explore phenomena that require dynamically varying lower levels. A good example
is a biological arms race. Imagine a plant growing bark to protect itself from an insect. The insect may then develop a way to bore through bark. The plant may develop a toxin—for which the insect develops an anti-toxin. There are no software
models in which evolutionary creativity of this richness occurs. To build software
models of such phenomena would require that the model’s bottom level include all
potentially relevant phenomena. But to do so is far beyond our currently available
computational means. This deficit suggests a challenge to computer science: develop a modeling framework in which the lowest level can be varied dynamically
as needed.
Engineering (like science) has no floor. It is both cursed and blessed by its attachment to physicality. It is cursed because one can never be sure of the ground
on which one stands—raw nature does not provide a stable base. It is blessed because one can decide for each issue how deeply to dig for usable bedrock.
6 In Mind [6] Searle claims that this nicely described distinction explains subjective experience. He doesn’t say how.
4. The reductionist blind spot
Here’s how levels of abstraction resolve the issue of higher-level laws. (See [1].)
In the Game of Life, the rules are analogous to the fundamental laws of physics:
they determine everything that happens on a Game of Life grid. Nevertheless there can be higher-level laws that are not derivable from them.
Certain Game of Life configurations produce patterns. One can implement arbitrary Turing machines by arranging such patterns. Computability theory applies to
these Turing machines. Thus while not eluding the Game of Life rules, new laws
(computability theory) that are independent of the Game of Life rules apply at the
Turing machine level of abstraction—just as Schrödinger said.
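A minimal sketch of the standard Game of Life rules in Python (the implementation is mine, not from the paper): every update is fully determined by the rules, yet a pattern such as the glider obeys a regularity—reappearing displaced after four steps—that is stated at the pattern level rather than the rule level.

```python
from collections import Counter

def step(live):
    """One Game of Life update: the 'fundamental laws' of this world.

    live is a set of (x, y) coordinates of live cells.
    """
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on 3 neighbors; survival on 2 or 3.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A glider: a pattern-level entity the rules know nothing about.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
after4 = glider
for _ in range(4):
    after4 = step(after4)
# After four steps the glider reappears, shifted by (1, 1).
print(after4 == {(x + 1, y + 1) for (x, y) in glider})  # True
```

Nothing in `step` mentions gliders; the glider's law of motion belongs to the pattern level of abstraction, which is the sense in which the higher-level regularity is independent of the rules that implement it.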
Furthermore, because the halting problem is undecidable, it is undecidable
whether an arbitrary Game of Life configuration will reach a stable state. So, not
only are there independent higher-level laws, those laws apply to the fundamental
elements of the Game of Life. I call this downward entailment, a scientifically acceptable alternative to downward causation.
Like all levels of abstraction, Game of Life patterns are epiphenomenal—they
have no causal power. It is the Game of Life rules that turn the causal crank. Why
not reduce away these epiphenomena? Reducing away a level of abstraction results in a reductionist blind spot. No equations over the Game of Life grid can describe the computations performed by a Turing machine—unless the equations
themselves model a Turing machine.
It isn’t surprising that levels of abstraction give rise to new laws. To implement a level of abstraction is to impose constraints. (It is their entropy and mass
properties, not causality, that make levels of abstraction ontologically irreducible,
i.e., objectively real.) A constrained system almost always obeys laws that
wouldn’t hold otherwise. Those laws make sense only when expressed in terms of
the abstractions implemented by the constraints.
This perspective applies beyond software. Nature—a “blind programmer”—
also builds levels of abstraction, a phenomenon sometimes referred to as emergence. The principle of emergence helps explain what exists. Extant levels of abstraction—naturally occurring or man-made, static (at equilibrium) or dynamic
(far from equilibrium)—are those whose implementations have materialized and
whose environments support their persistence.
Since it is the elementary forces that enable implementations to form and cohere, a fundamental mystery remains. What are forces, and how do they work? It
will probably require the development of a background-free physics to answer this
question.
5. Summary
Engineering has no stable base. Engineers must always be concerned about the
possibility of lower level physical effects—such as O-rings not functioning as
sealants if the temperature is too low. Consequently, engineering has not been in
an intellectual position to develop the notion of level-of-abstraction. But with its
gift of the bit to computer science, engineering created a world that is both real
and symbolic. Computer science developed the level-of-abstraction as a way to
gain intellectual leverage over that world. It then applied that concept to solve a
long-standing problem in the philosophy of science.
References
[1] Abbott, Russ, “Emergence explained,” Complexity, Sep/Oct, 2006, (12, 1) 13-26. Preprint:
http://cs.calstatela.edu/wiki/images/9/95/Emergence_Explained-_Abstractions.pdf.
[2] Abbott, Russ, “If a tree casts a shadow is it telling the time?” Journal of Unconventional Computation, Vol. 5, No. 1, pp. 1–28, 2008. Preprint: http://cs.calstatela.edu/wiki/images/6/66/If_a_tree_casts_a_shadow_is_it_telling_the_time.pdf.
[3] Anderson, P.W., “More is Different,” Science, 177, 393–396, 1972.
[4] Commission on Engineering and Technical Systems, National Academy of Engineering, Design in the New Millennium, National Academy Press, 2000. http://books.nap.edu/openbook.php?record_id=9876&page=R1.
[Margolis] Margolis, Eric and Stephen Laurence, “Concepts”, The Stanford Encyclopedia of Philosophy (Winter 2007 Edition), Edward N. Zalta (ed.). http://plato.stanford.edu/archives/win2007/entries/concepts/.
[5] Schrödinger, Erwin, What is Life?, Cambridge University Press, 1944.
http://home.att.net/~p.caimi/Life.doc.
[6] Searle, John, Mind: a brief introduction, Oxford University Press, 2004.
[Van Gulick] Van Gulick, Robert, “Consciousness”, The Stanford Encyclopedia of Philosophy (Spring 2008 Edition), Edward N. Zalta (ed.), forthcoming. http://plato.stanford.edu/archives/spr2008/entries/consciousness/.
[7] Wing, Jeannette, “Computational Thinking,” Communications of the ACM, March 2006, (49, 3) 33–35. http://www.cs.cmu.edu/afs/cs/usr/wing/www/publications/Wing06.pdf.
All Internet accesses are as of March 15, 2008.