Natural Algorithms
Joshua J. Arulanandham
under the supervision of
Professor Cristian S. Calude and Dr. Michael J. Dinneen
A Thesis Submitted in Partial Fulfilment of the
Requirements for the Degree of
DOCTOR OF PHILOSOPHY
in
Computer Science
The University of Auckland, 2005
Copyright © 2005 by Joshua J. Arulanandham
ABSTRACT
This Thesis documents our explorations of simple natural physical systems that
exhibit useful, interesting computational effects. We view such physical systems as
specialized “computers”/natural–computers. These natural–computers (physical
systems), as they evolve in phase space, can be seen as performing a computation
or executing a natural algorithm to solve a specific problem. We often consider
systems that apparently “solve” problems that are hard to solve by using conventional
computers. We actually view the physical systems themselves—not their numerical
simulations—as computers.
The Thesis is a braid of three ideas/inventions: (i) A gravity powered
“computer”—constructed out of beads and rods of an abacus—that can perform
data processing tasks like sorting, searching, etc. in O(√N) time on an unsorted
database of size N. (ii) Bilateral Computing, a paradigm for the construction of
natural physical computing devices which are “bilateral” in nature: devices realizing
a certain function f can spontaneously compute as well as invert f . These gadgets
can be used to solve NP–hard problems like Boolean Satisfiability (SAT), apparently
without using an exponential amount of resources. (iii) Balance Machine, a generic
natural computational model which is shown to be universal.
First of all, natural algorithms for sorting and searching—trivial problems that
can be solved by classical computing methods quite efficiently, using only resources
(for instance, time and memory) polynomial in the size of the input data—are developed. These, while serving as typical examples of natural algorithms, set the
stage for natural algorithms solving hard problems. It may be noted, however, that
these natural algorithms—the physical processes themselves, and not their various
implementations—do outperform their classical counterparts in a spectacular way!
These natural algorithms are “run” on an abacus–like natural–computer powered by
gravity.
We move on to solve computationally hard problems: We attempt to solve SAT, an
NP–complete problem, using a natural–computer constructed with pipes and pistons,
which uses water levels to encode data. Besides proposing a specialized natural–
computer that solves SAT, we develop a general paradigm—referred to as Bilateral
Computing—advocating the design of a class of such “bilateral” natural–computers;
these are physical systems in which there is no intrinsic distinction among their various
parameters: there is no tag attached to any parameter classifying it as an
“input parameter” or as an “output parameter”. In other words, one has the freedom
to change, or assign a value to, any of the parameters and watch what happens
to the rest. When one or more of the parameters is varied, the system will react
by spontaneously adjusting the rest of the parameters so as to maintain a certain
relation between the various parameters.
The culmination of our research is a universal, generic natural computational
model. This model, called Balance Machine, is a mechanical model like a Turing Machine, which can be directly realized in the form of a machine. It consists of
components that resemble ordinary physical balances, each with a natural tendency to
automatically balance its left and right pans: If we start with certain fixed weights
that represent inputs on some of the pans, then the balance–like components would
vigorously try to balance themselves by filling up the rest of the pans with suitable
weights that represent the outputs. Interestingly, the balance machine is a bilateral
computational model.
Natural algorithms exploit the vast computing power in ordinary, commonplace
physical phenomena which occur in the familiar macroscopic world, unlike other popular approaches to unconventional computing that feed on sophisticated processes in
molecular biology and quantum mechanics. The very fact that natural algorithms are
“extracted” from easy–to–understand natural physical phenomena witnessed everywhere by everyone (and not only in laboratories by specialists) endows them with great
pedagogical value, among other things.
Finally, there is a whole new side to Natural Algorithms beyond building practicable
natural–computers: they inspire researchers to look at computation in a new light,
to think and talk about computation in terms of a new “vocabulary” comprising natural physical processes as opposed to a set of logical operations. Natural–computers
may or may not become practical computing gadgets of the future, but we believe
that the new way of thinking will certainly persist.
ACKNOWLEDGEMENTS
I have been toying with the idea of Natural Algorithms since 1994 when I, as
an undergraduate, had stumbled onto a new sorting algorithm based on a natural
phenomenon. During this long and tortuous journey I have met with a good number
of people who discouraged me; some showed contempt for the idea of linking computer
algorithms and natural phenomena, some found the idea bizarre and some suggested
that I pursue these ideas as a “hobby” rather than as a serious scientific study. I take
this opportunity to salute them all, for they are the ones who “motivated” me more
than the others who were sympathetic to my views. In a sense, my research has been
a quest to try and prove them wrong.
I thank my PhD supervisors, Professor Cristian S. Calude and Dr. Michael J. Dinneen, first of all for their willingness to supervise a project on a rather queer area
of study. They have given me excellent guidance, moral and technical support and
whatever a student would hope to get from good supervisors. They were willing to
walk an extra mile with me, trying hard to make a “civilized thinker” out of me, who,
by nature, would lean more on raw intuition and gut feeling than on logic. If at all
there are some traces of logical/mathematical thinking in me, I owe them all to my
supervisors.
I also take this opportunity to thank the Head of the Department of Computer
Science, Professor John Hosking, for providing me with excellent facilities for my
research and also Professor Clark Thomborson, who was the PhD coordinator when
I joined the course.
I thank immensely Dr. Alan Creak, Honorary Researcher at the Department of
Computer Science, who gave critical comments about our work from time to time.
Whenever I gave him a paper to read, he would go to great lengths to get to the
bottom of it. His comments, delivered in his usual ruthless manner, have helped
us improve the quality of our work to a great extent. I can never forget the way
he proved—to my dismay—that one of my long–cherished natural algorithms was
incorrect! I also thank Professor Boris Pavlov, Department of Mathematics, who
helped us especially with our work discussed in Chapter 3 of the Thesis.
I also thank Dr. R. Soodamani, formerly a lecturer at the Department of Computer Science, Bharathiar University, South India, who supervised my research for a
brief period. She taught me how to write my first research paper. I thank Professor
K. Sarukesi, formerly Head of the Department of Computer Science, Bharathiar University, South India, who was one of the first people to find my earlier ideas
pertaining to natural algorithms interesting. But for his encouragement, I would not
have persisted in this journey.
I owe a lot to students who attended my lectures on some of the topics of Unconventional Models of Computation (COMPSCI 755) at this university for their comments and suggestions. In particular, let me thank Daniel Bertinshaw who proof–read
Chapter 1 of this Thesis and offered a lot of comments, Mike Stay and Jason Stevens,
with whom I have had a number of fruitful discussions. Mike taught me a lot about
Physics in the context of natural algorithms. Some of the students did assignments
and projects on natural algorithms as part of their course work which gave me fresh
new insights. Let me mention Jung Jen Chen, Dominik Haug, Rongwei Lai, Bernard
O’leary and Dominik Schultes in particular. Dominik Schultes’ “rainbow sort”, a
novel natural algorithm for sorting that he developed as part of his project, was inspiring.
I have had informal, interesting discussions with a number of people in the department, including Professor Bakh Khoussainov, Dr. Garry Tee and Professor Greg
Chaitin, who is a visiting professor at the university, and also, on a few memorable
occasions, with Professor P. Shanmugam of the Department of Computer Science,
Sri Krishna College of Technology, South India. I thank them all for their suggestions,
concern and encouragement. I also thank Dr. Gh. Păun for providing me with technical
material on Tissue P systems and for his kind email communications.
I thank my wonderful office mates during the period of my study—Philip Chiang,
Qiang Dong, Chi–kou Shu, Li Ming, Simona Dragomir and Sasha Rubin for their
friendship. Sasha very kindly proof–read the whole of my Thesis and offered a number
of useful suggestions for which I am extremely grateful.
My friends in India—Isaac Mark, John Mark, Emmanuel (Emmu) Reagan and
Samuel Muthuraj—have been a great source of encouragement and support right
from the beginning when some of the ideas discussed in this Thesis were in their
infancy. I cannot picture this personal journey without them.
Finally, I thank my parents for their unconditional support during the course
of this project despite the fact that I have never bothered to do what they think I
should, in general. In particular, I thank my father, Professor R. S. Arulanandham,
who spent countless nights proof–reading my Thesis and succeeded—to my chagrin—
in exposing a number of mistakes pertaining to my use of the English language. I
also thank my beloved wife, Anjana, who openly shows her disgust for most of my
obsessions (including this project) and yet takes great care of me and my needs. A
big thanks to my little daughter, Amanda, who surprises me by continuing to want
my company, even after I shooed her out of my room more than a hundred times
while typing this Thesis.
Special thanks are due to the scholarly examiners who read this Thesis and suggested changes which have greatly improved its quality.
Contents

Table of Contents  vi
List of Figures  viii
List of Tables  x

1 Introduction  1
  1.1 Eureka!  3
  1.2 Typical examples  4
    1.2.1 Finding the average, using liquid behavior  4
    1.2.2 Finding the shortest path with strings and beads  5
    1.2.3 Solving Graph Connectivity, using electricity  5
    1.2.4 Solving the Maximal Clique Problem, using water flow  7
  1.3 Classic natural computing models  9
    1.3.1 DNA Computing  9
    1.3.2 P Systems (Membrane Computing)  10
    1.3.3 Genetic Algorithms and Neural Networks  11
    1.3.4 Quantum Computing  12
    1.3.5 Analog computation with dynamical systems  13
    1.3.6 Natural Algorithms versus “the rest”  14
  1.4 New ideas developed in the Thesis  14

2 Beads and Rods  17
  2.1 Bead–Sort  17
  2.2 Searching for a needle in a “bead–stack”  20
  2.3 Sorting and searching a database  23
  2.4 The BeadStack Min/Max data structure  26
  2.5 Implementing Bead–Sort  27
    2.5.1 Digital circuit  27
    2.5.2 Analog circuit  28
    2.5.3 Cellular automata  33
    2.5.4 P Systems  34

3 Solving SAT with Bilateral Computing  44
  3.1 The problem of Satisfiability  44
  3.2 Bilateral Computing  45
  3.3 Bilateral AND, OR gates  46
  3.4 Solving an instance of SAT  47
  3.5 Time complexity  49
  3.6 Bilateral Computing vs. Reversible Computing  50

4 Balance Machines  51
  4.1 Computing as a “balancing feat”  51
  4.2 The Balance Machine model  52
  4.3 Computing with balance machines: Examples  54
  4.4 Universality of balance machines  57
  4.5 Bilateral Computing revisited  62
  4.6 Three applications: SAT, Set Partition and Knapsack  62
    4.6.1 The SAT problem  64
    4.6.2 The Set Partition problem  64
    4.6.3 The 0/1 Knapsack problem  66

5 Conclusion  68
  5.1 Why natural algorithms?  68
  5.2 Where do we go from here?  70

A Input–Output Devices for Bead–Sort  72

B Solving SAT with Fluidic Logic  74
List of Figures

1.1 Illustrating Bead–Sort.  2
1.2 Finding the average using liquid behavior.  5
1.3 Finding the shortest path with strings and beads.  6
1.4 Solving Graph Connectivity using electricity.  7
1.5 A clique.  8
1.6 Device for solving MCP for a graph with three vertices.  9

2.1 Representing numbers with beads.  18
2.2 Illustrating Bead–Sort.  18
2.3 Bead–Sort conventions.  18
2.4 Introducing a new integer into the list.  21
2.5 Searching for the integer 3.  22
2.6 Searching for the integer 2.  22
2.7 Sorting a database: mere colored beads do not help.  24
2.8 Sorting a database: a different approach.  25
2.9 The ON-state of a memory cell represents a bead.  28
2.10 Digital hardware implementation.  29
2.11 Representing rods with electrical resistors.  30
2.12 Trimmer circuits.  30
2.13 Simple analog circuit for sorting.  31
2.14 Illustration of sorting in an analog sorting circuit.  32
2.15 Cellular automaton implementation.  34
2.16 Membranes = Rods, Objects = Beads.  35
2.17 Making bead–objects fall down.  37
2.18 Ejecting the output.  38
2.19 Prompting the ejection of output from row 1.  38
2.20 Prompting the ejection of output from rows 2 & 1.  39
2.21 Sample illustration — intermediate steps (i) to (vi).  40
2.22 Sample illustration — steps (vii) to (x).  41

3.1 Unilateral and bilateral logic circuits.  45
3.2 Logic gates based on liquid statics.  46
3.3 An instance of SAT.  47

4.1 Physical balance.  52
4.2 A self–regulating balance.  53
4.3 Pictorial, textual representations of a simple balance machine that performs addition.  54
4.4 Increment operation.  55
4.5 Decrement operation.  55
4.6 Addition operation.  55
4.7 Subtraction operation.  56
4.8 Multiplication by 2.  56
4.9 Division by 2.  56
4.10 Solving simultaneous linear equations.  58
4.11 Solving simultaneous linear equations (simpler representation).  58
4.12 AND logic operation.  60
4.13 OR logic operation.  60
4.14 NOT logic operation.  60
4.15 S–R flip-flop constructed using cross coupled NOR gates.  61
4.16 Balance machine as a “transmission line”.  61
4.17 Solving an instance of SAT.  63

A.1 Loading a row–of–beads in parallel.  72
A.2 Reading output.  73

B.1 Fluidic logic.  74
B.2 Solving an instance of SAT.  75
List of Tables

2.1 Toy database.  24
2.2 Expected performance of common data structures.  26

3.1 Truth table for (ā + b)(a + b).  47

4.1 Truth table for AND.  59
4.2 Truth table for OR.  59
4.3 Truth table for NOT.  59
4.4 State table for S–R flip-flop.  59
Chapter 1
Introduction
The wonders of the world come in two distinct flavors—“natural” and “artificial”:
nature’s own handiwork and artifacts made by human beings. While God takes
credit for the Niagara Falls and the Great Barrier Reef, man has his own share of
glory, thanks to his own creations like the Taj Mahal and the Computer.
The Computer is deemed the quintessential artifact of the human mind; it is
driven by laws of logic, as conceived by us, humans. The natural–computer,[1] which
is the focus of our research, though, has a rather “divine” element; it is fueled by
natural laws that evoke computational effects. For instance, it is a natural property
of soap films, when constrained by fixed boundaries, to spontaneously take up the
minimum surface area. A natural–computer can use this property of soap films to
solve minimization problems as demonstrated in [39, 40].
This Thesis documents our explorations of simple natural physical systems that
exhibit useful, interesting computational effects like the minimizing effect seen in
soap films. We view such physical systems as specialized “computers”/natural–
computers. These natural–computers (physical systems), as they evolve in
phase space, can be seen as performing a computation or executing a natural algorithm to solve a specific problem. But note that every physical system can be viewed
as performing a computation: it realizes a solution to the dynamic equations governing
its physical behavior! We consider only systems exhibiting interesting computational
behavior, often those that apparently “solve” problems that are hard to solve by using conventional computers. We actually view the physical systems themselves—not
their numerical simulations—as computers. We do not make use of natural physical
phenomena as mere metaphors[2] for computer algorithms.
Now, what exactly do we mean by a natural–computer? The conventional computer too functions by obeying the physical laws of nature at the level of hardware
[1] We will shortly explain what a natural–computer is.
[2] Natural Computing—“computing going on in nature and computing inspired by nature”,
according to G. Rozenberg, is a general term with a broader scope [58, 59]; it does include computer
algorithms that make use of natural phenomena as metaphors. We use the expression Natural Algorithms to refer to the specific kind of natural computing that we explore in this Thesis.
Figure 1.1: Illustrating Bead–Sort.
circuits, and hence is a “natural–computer” in that sense! Let us try to make the
distinction using an example first. Consider the problem of sorting numbers in order.
To solve this problem, a natural–computer would make use of a natural phenomenon
that resembles sorting—say, a physical process that automatically arranges objects
according to their size, weight or multiplicity. For instance, visualize beads attached
to vertical rods as in an abacus—suspended in the air, just before they would slide
down due to gravity. (See Figure 2.2.) Beads, read horizontally as a row, represent
unordered numbers. Now, in your thought experiment, allow them to drop down;
on falling down, rows of beads are always seen sorted—smaller rows appear on top
of bigger ones. In other words, you will never see the following situation: a row
of 4 beads appearing on top of a row of 3 beads. The “Bead–Sort machine” just
discussed is a specialized natural–computer that sorts positive integers based on the
above natural physical phenomenon. Compare this with conventional approaches to
sort data: The natural–computer does not employ the usual logical, relational and
arithmetic operations to compare/distribute numbers, but is driven by a simple, natural law that happens to exist in physical nature—a law that forbids a bigger row of
beads to dwell on top of a smaller one.
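The falling of the beads is easy to mimic in software. The following Python sketch is our own illustration (hardware and other implementations are the subject of Chapter 2): it counts how many beads land on each rod, then reads the settled rows back, since row i from the bottom holds a bead on every rod that received more than i beads.

```python
def bead_sort(numbers):
    """Simulate Bead-Sort: each integer n places one bead on rods 1..n;
    the beads then 'fall' and stack up from the bottom of each rod."""
    if not numbers:
        return []
    # beads_on_rod[r] = how many beads end up on rod r (0-indexed)
    beads_on_rod = [0] * max(numbers)
    for n in numbers:
        for r in range(n):
            beads_on_rod[r] += 1
    # After falling, the i-th row from the bottom has a bead on rod r
    # iff more than i beads landed on rod r; its length is the i-th
    # largest input. Reading rows bottom-up gives descending order.
    rows = [sum(1 for count in beads_on_rod if count > i)
            for i in range(len(numbers))]
    return rows[::-1]  # ascending: smaller rows sit on top
```

Note that no comparisons between numbers ever occur; the ordering emerges solely from the bead counts, just as it does under gravity.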
The important distinction, thus, is the following: Conventional computing is normative: a rigid framework of logical operations is a priori established, and natural
laws are utilized merely to implement/realize them. On the other hand, in natural
computing, natural laws dictate the very “operations” that would achieve the desired
computation—for, we do not force nature to realize those operations that we choose
to define.
In what follows in this Chapter, we give typical examples of natural algorithms,
review other classic unconventional natural models of computation like DNA Computing and compare them with our approach, and finally give an inkling of the key
ideas that make up the Thesis.
We now start with two anecdotes of historical fame. Whether these incidents
qualify as key examples of natural algorithms in the context of this Thesis, is beside
the point; the point we wish to convey is that they, ancient as they are, bear a striking
resemblance to the relatively modern notion of Natural Algorithms.
1.1 Eureka!
In the third century BC, Archimedes, the wise man at the court of the king of Syracuse,
was facing a tough challenge: test the purity of the king’s golden crown—without
making the slightest dent on it. Archimedes had voluntarily got himself into this
trouble. The crown, in the form of a wreath, which the king wished to place on the
statue of a deity had just been delivered, and Archimedes had raised doubts about
the purity of the gold while everyone else in the court was simply carried away by its
charm. He had bluntly suggested that the goldsmith could have kept for himself some
of the gold given to him, and put an equal weight of silver in its place. No wonder
the king immediately ordered him to test whether his suspicion was indeed well–founded.
One day, deep in his thoughts, and still haunted by this problem, Archimedes
stepped into his bathtub. What ensued—the water overflowing as he stepped in—set
off a spark in his head: a solution to the problem came to him in a flash, and out
he came from the tub and ran along the streets (without even his clothes on, as the
legend has it) shouting, “Eureka, I have found it!”. Now, his ingenious solution can
in fact be spelled out like an algorithm:
1. Drop into a bowl of water a nugget of gold, known to be pure,
and whose weight equals that of the crown and observe how much
water gets displaced.
2. Repeat (1), using the actual crown.
3. The gold is pure if and only if there is no difference in the
amount of water displaced in (1) and (2).
The algorithm is based on the following simple natural phenomenon: If the crown
contained a less dense foreign metal like silver, as Archimedes suspected, despite
having the same weight as the original gold nugget, it would have “swelled up” a
bit: that is, the volume of the crown would have increased. Though this may not
be directly noticeable, it can be found out—so thought Archimedes—if it spills more
water from the bowl. We have based this story of Archimedes on an account given in
[74].[3]
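The physics behind the test is a density comparison: equal masses of different materials displace different volumes of water. A small Python sketch of Archimedes’s procedure (the density figures are approximate handbook values, and the function names are our own illustration, not part of the Thesis):

```python
GOLD_DENSITY = 19.3    # g/cm^3, approximate
SILVER_DENSITY = 10.5  # g/cm^3, approximate

def displaced_volume(mass_g, density):
    """Volume of water displaced by a fully submerged solid."""
    return mass_g / density

def crown_is_pure(crown_mass_g, crown_displacement_cm3, tolerance=0.5):
    """Steps (1)-(3) of Archimedes's test: compare the crown's displacement
    with that of an equal mass of pure gold."""
    pure_gold_displacement = displaced_volume(crown_mass_g, GOLD_DENSITY)
    return abs(crown_displacement_cm3 - pure_gold_displacement) <= tolerance
```

A 1000 g crown that is half gold and half silver by mass swells to roughly 73.5 cm³, against about 51.8 cm³ for pure gold, so the adulteration shows up plainly in the spilled water.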
Another noteworthy instance from history that has the elements of natural computing concerns the master architect Antoni Gaudi and the famous unfinished church,
Sagrada Familia in Spain [21, 73]. Gaudi was unconventional: he would build models
rather than draw designs; he constantly observed “art forms” in nature and mimicked
them. While designing the church, well–known for its “leaning columns”—a peculiar
kind of columns which are not at right angles to the floor and the ceiling—Gaudi
computed the correct angle for each of the columns with nature’s aid: He built a
scaled–down hanging model, a prototype, of the structure in which strings took the
place of the actual columns; to see how the weight of the massive ceiling would affect
the columns, he turned the model upside down thus letting gravity work it out. He
could then extract the outcome of his computation from the resulting “shape” of the
model.

[3] The resemblance between Archimedes’s approach and natural algorithms was brought to our
attention by Daniel Bertinshaw, a graduate student at the University of Auckland.
1.2 Typical examples
In this section we present some typical natural algorithms which include some of
the algorithms we have designed and also a few proposed by others. Most of them
solve simple computing problems. The purpose is merely to introduce the notion
of Natural Algorithms, and not to analyze their complexity. Natural algorithms for
computationally hard problems are discussed in the Chapters that follow.
1.2.1 Finding the average, using liquid behavior
The conventional method to compute the average of n numbers is to explicitly make
use of standard arithmetic operations to find their sum and then divide it by n. Our
approach is to look for a ready–made natural process that somehow produces an
averaging effect (one need not know how!), by virtue of its physical nature.
Consider the apparatus in Figure 1.2(a) with four cylindrical limbs, each having
the same base area a. We use such a device to illustrate computing the average of
four numbers. Use a similar apparatus with n limbs to average n numbers. The
plunger divides the limbs, one from the other. When it is pulled out, though, the
limbs become interconnected. We can use a certain quantity of a liquid (say, water)
to represent a number. To compute the average of the numbers l1, l2, . . . , ln, do the
following:
1. Fill the apparatus with water: water levels/heights in limbs
should read l1, l2, . . . , ln (on some suitable scale).
2. Pull the plunger out, and wait for the water to settle down.
The new water level which will be the same throughout gives
the average.
When the plunger is pulled out, water flows freely between the limbs, and the atmospheric pressure, which is the same on every limb, sees to it that every limb has the
same water level. Since the total volume of water, before and after pulling the plunger,
remains the same, the new level—call it l—represents the average, as shown below:
the sum of the volumes of water in the individual limbs initially equals the sum of the
volumes after pulling out the plunger, i.e. l1·a + l2·a + . . . + ln·a = (l·a) × n
(recall that a is the base area of each cylindrical limb). Thus, l = (l1 + l2 + . . . + ln)/n,
the average of l1, l2, . . . , ln, as required.
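The computation the apparatus performs, volume conservation followed by equalization, can be verified in a few lines of Python (our own illustration, with a as the common base area of the limbs):

```python
def settled_level(levels, base_area=1.0):
    """Water level after the plunger is pulled: the total volume is
    conserved and redistributed equally over n identical limbs."""
    total_volume = sum(height * base_area for height in levels)
    return total_volume / (len(levels) * base_area)
```

The base area cancels out, which is why the device works for limbs of any (common) cross-section.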
Figure 1.2: Finding the average using liquid behavior. (a) Before; (b) After.
1.2.2 Finding the shortest path with strings and beads
The version of the Shortest Path Problem that we wish to solve is the following: Given
an undirected, edge–weighted, connected graph G = (V, E) with two distinguished
vertices x, y ∈ V , find the shortest path (i.e., the sequence of edges having the least
cumulative weight) in G between x and y; the weights are assumed to be positive
rationals.
Conventional algorithms to solve this problem include Dijkstra’s algorithm and
the Bellman-Ford algorithm [20].
But we propose the following natural algorithm: Given a graph, construct a physical model realizing the graph. Use strings for edges, and tiny beads for vertices,
joining the strings into a bead wherever there is a vertex. For every edge in the graph
use a string of length proportional to its weight (see Figure 1.3(a)).
To find the shortest path between two given vertices x and y, do the following:
1. Hold the beads representing x and y and pull them apart
as shown in Figure 1.3(b).
2. Stop pulling when a straight line, comprising a sequence
of edges, is formed between them (see Figure 1.3(c)).
(The path represented by the straight line is the shortest path
between x and y.)
This is because a straight line represents the shortest distance between any two points
in Euclidean space; longer paths between them will trace different (longer) “routes”
in space.
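For comparison, Dijkstra’s algorithm, named above, computes the same least cumulative weight edge by edge rather than all at once. A compact sketch using Python’s heapq module (our own formulation, not taken from the Thesis):

```python
import heapq

def dijkstra(graph, source, target):
    """graph: dict mapping a vertex to a list of (neighbor, weight) pairs,
    weights positive. Returns the length of the shortest source-target path."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            return d
        if d > dist.get(u, float('inf')):
            continue  # stale heap entry; a shorter route was found already
        for v, w in graph.get(u, []):
            new_dist = d + w
            if new_dist < dist.get(v, float('inf')):
                dist[v] = new_dist
                heapq.heappush(heap, (new_dist, v))
    return float('inf')  # no path exists
```

Where the strings-and-beads model lets Euclidean geometry pick out the taut path in one pull, the program must relax edges one at a time.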
1.2.3 Solving Graph Connectivity, using electricity
The Graph Connectivity Problem is defined as follows: Given an undirected graph
G = (V, E) and two distinguished vertices x, y ∈ V , check if there is a path in G
between x and y. A. Vergis et al. propose a simple method to solve graph connectivity
in [71].

Figure 1.3: Finding the shortest path with strings and beads.

Figure 1.4: Solving Graph Connectivity using electricity.
Realize the given graph in the form of an electrical network: use a wire of constant
resistance per unit length wherever there is an edge, and join the wires wherever the
edges meet at a vertex (see Figure 1.4). Apply a voltage source between (junction)
points representing vertices x and y. A current flow can be detected if and only if
there is a path between x and y. Clearly, their construction works only for undirected
graphs. To make the idea work with directed graphs, one can use unilateral circuit
elements such as junction diodes—which conduct electricity in only one direction—in
the place of ordinary wires.
Rongwei Lai has proposed a similar method to solve the Maze Problem, in which
we actually find a path between two vertices in a connected graph that represents a
maze, besides asserting its existence [45]: The electrical network is laid out on a circuit
board which is covered with a thermo–sensitive material such as a plastic pellicle; the
heat generated when current passes through the path(s) between the vertices can be
utilized to produce a physical impression of the route(s) on the material, which can
be “read”.
1.2.4 Solving the Maximal Clique Problem, using water flow
George Whitesides of Harvard University and his colleagues have proposed and implemented a natural algorithm that makes use of flowing water to solve the Maximal
Clique Problem (MCP), which is known to be NP –complete [15, 25]. A clique is a
graph in which every pair of vertices is connected by an edge. See Figure 1.5 for an
example. Now we can define MCP: Given an undirected graph G of n vertices, find
the maximum number k such that G contains a subgraph of k vertices that forms a
clique.
The main idea used in [25] is to allow running water, flowing through microscopic
channels, to explore in parallel the branches (representing the structure of a generic
Figure 1.5: A clique.
graph) carved on slabs of plastic (footnote 4). It should be noted that in order to solve MCP for
a graph of n vertices, one has to first fabricate (footnote 5) a generic fluid “hardware” that can
be used to solve MCP for any graph of n vertices. The algorithm for solving MCP has
four steps: (i) For every edge [i, j] (edge connecting vertices i and j) of G, mark (say,
using a tag) all subgraphs of G containing i and j. (ii) For every subgraph, count the
number of tags. (iii) Decide if there are enough tags (edges) in each subgraph to be
a clique; (a subgraph of x vertices must have x(x − 1)/2 tags to qualify). (iv) Return
the size and identity of the largest clique.
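The four steps above can be rendered in conventional code; the following Python fragment is an illustrative serial sketch of ours (the microfluidic device performs the tag counting for all subgraphs in parallel):

```python
from itertools import combinations

def maximal_clique(n, edges):
    """Steps (i)-(iv): tag every subgraph containing both endpoints of each
    edge, count tags, and keep subgraphs with x*(x-1)/2 tags (cliques).
    Returns (size, vertices) of the largest clique found."""
    edges = {frozenset(e) for e in edges}
    best = None
    for k in range(1, n + 1):
        for sub in combinations(range(1, n + 1), k):
            # (i)-(ii) count tags: edges whose endpoints both lie in `sub`
            tags = sum(1 for e in edges if e <= set(sub))
            # (iii) a subgraph of k vertices needs k*(k-1)/2 tags to qualify
            if tags == k * (k - 1) // 2:
                if best is None or k > best[0]:
                    best = (k, sub)  # (iv) remember the largest clique
    return best

# illustrative graph: a triangle {1, 2, 3} plus a stray edge (4, 5)
print(maximal_clique(5, [(1, 2), (2, 3), (1, 3), (4, 5)]))  # → (3, (1, 2, 3))
```

The loop over all subgraphs makes the exponential cost explicit: it is precisely this enumeration that the generic fluid "hardware" lays out in space.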
The above algorithm is implemented using a microfluidic system as described
below: Given a graph of n vertices, first build, or make use of, a generic microfluidic
device for solving MCP for any graph of n vertices. Figure 1.6 shows the design of a
generic device for graphs with three vertices: for every edge allot a reservoir (source
of water) and for every possible subgraph set up a well (to collect water); connect the
reservoirs and wells using channels—connect the reservoir denoting edge [i, j] with all
subgraphs that include vertices i and j. So far we have developed a generic device for
graphs of a certain size. Now, given a graph G (of size that fits the device we have got)
do the following steps, which directly realize the algorithm mentioned above: (i) Drop
a calibrated number of fluorescent beads into reservoirs that correspond to the edges
in G (these beads will swim towards the associated wells). (ii) Exploit parallelism in
optical systems to read out the amount of fluorescence emanating from each well; this
corresponds to the number of beads in a well, which in turn corresponds to number
of edges (reservoirs) contributing to the subgraph represented by the well. (iii) By
setting the appropriate optical detection threshold, decide if the fluorescence suffices
for a well (subgraph) to qualify for a clique. (iv) The size and identity of the largest
clique can be found quickly, thanks to the way the wells are spatially arranged on the
plastic slabs. More technical details can be found in [25].
Figure 1.6: Device for solving MCP for a graph with three vertices.
Footnote 4: The whole of the microfluidic system is only the size of a silicon chip, thanks to photolithography, a cheap and simple fabrication technique.
Footnote 5: Parallel fabrication of microfluidic systems in a number of steps which is polynomial in the number of vertices is possible.
1.3 Classic natural computing models
C. S. Calude et al. have considered the following unconventional models of computation
as pertaining to natural computing [22]: DNA Computing, Membrane Computing,
Genetic Algorithms, Neural Networks and Quantum Computing. In this section we
give a very brief introduction to each of these paradigms. Without delving into
technical details we explain the core ideas in a nutshell, just to be able to distinguish
our own work on Natural Algorithms from them. We also include a note on “analog
computation using dynamical systems” as discussed in [66] by Hava Siegelmann et
al, which can be viewed as a mathematical theory of Natural Algorithms.
One cannot fail to notice a dichotomy in the natural computing schemes to
be discussed: some of them (e.g. Genetic Algorithms) use natural phenomena as
metaphors while others (e.g. DNA Computing, Natural Algorithms) use natural physical/chemical/biological systems themselves as the medium of computation, the computation being steered by spontaneously occurring natural processes (like Watson–
Crick pairing in DNA, beads arranging themselves in order, etc.). In other words, as
pointed out in [22], some approaches have yielded new classes of computing models,
yet others, a new breed of computers.
1.3.1 DNA Computing
In DNA Computing, the input data is encoded as DNA sequences which are transformed by means of a series of biochemical operations (taking place in a test tube)
into a certain form that represents the “output” of the computation. Thanks to
DNA Computing, a small test tube can now house massive computations. DNA Computing is interesting for the following reasons.
• The (DNA) computer uses unusually small processing elements
In DNA Computing, the primary processing/storage elements are the DNA
molecules themselves, along with certain enzymes. They provide extremely
dense information storage, enormous parallelism and incredible energy efficiency.
• The computer “knows” things that are not even programmed
DNA molecules have certain innate properties like Watson–Crick pairing6 and
the ability to get replicated; on account of these, the solution to a problem—
represented by a certain DNA sequence—can self–assemble out of the raw DNA
“soup” containing sequences that encode the input data.
A key example that practically illustrates the above features of DNA Computing
is Len Adleman’s now famous 1994 experiment [1, 2]: a small instance of the Hamiltonian Path Problem (HPP)7 was successfully solved purely by biochemical means.
Adleman encoded the input data—the edges and vertices of the given graph—using
unique DNA sequences. His skill lay in the special manner in which he composed the
sequences. He wanted Watson–Crick pairing to take place only between certain kinds
of DNA strands. To be more specific, with suitable choice of the DNA sequences,
he encouraged only those DNA sequences corresponding to edges and vertices incident on each other (in the graph) to stick together (form Watson–Crick pairs) under
suitable conditions. Thus, when multiple (say, a million) copies of DNA strands
having the chosen DNA sequences were put in solution, chains of DNA encoding all
valid paths in the graph were obtained, and among them the “solution” itself—a long
DNA sequence encoding a Hamiltonian path (the graph used in the experiment did
contain one). This solution can be fished out, using a “judicious series of molecular
manoeuvres”, as the authors of [53] have aptly put it.
The main weakness in solving problems with this approach is this: the number
of necessary DNA strands—encoding vertices and edges, in this case—to be initially
synthesised is of the order of n! (n being the number of vertices) which is a very large
number even for a reasonable value of n (say, n = 200).
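To see where the n! blow-up comes from, consider the brute-force digital counterpart of Adleman's experiment: enumerating candidate orderings of the vertices. The graph below is illustrative, not Adleman's seven-vertex instance.

```python
from itertools import permutations

def has_hamiltonian_path(vertices, edges, start, end):
    """Try every ordering of the intermediate vertices: the number of
    candidates grows like n!, mirroring the DNA-strand blow-up."""
    edges = set(edges)
    middle = [v for v in vertices if v not in (start, end)]
    for order in permutations(middle):
        path = [start, *order, end]
        if all((a, b) in edges for a, b in zip(path, path[1:])):
            return True
    return False

# a small directed graph with the Hamiltonian path 0 -> 1 -> 2 -> 3
print(has_hamiltonian_path([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3)], 0, 3))  # → True
```

In the test tube, all these candidate chains assemble at once; digitally, the loop must visit them one by one.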
1.3.2 P Systems (Membrane Computing)
P Systems are theoretical computational models of the parallel distributed type which
are inspired by biochemical processes taking place through membranes in biological
systems, especially among living cells (see [54], where the model was first introduced by
Gh. Păun).
Footnote 6: A DNA strand is composed of the “letters” (nucleotide bases) A(denine), T(hymine), C(ytosine) and G(uanine); A “instinctively” combines only with T, and C only with G. For instance, if the DNA strand ATGAAG meets with TACTTC in solution, they both would anneal, i.e. twist around each other.
Footnote 7: Given a directed graph with specified start and end vertices, decide if there is a special path, known as a Hamiltonian path, that starts at the start vertex, ends at the end vertex and passes through each remaining vertex exactly once.
The key element in the model is the membrane structure—a structure
consisting of several cell–like membranes placed inside a “skin”. Objects are placed
in the regions circumscribed by the membranes; each membrane is associated with a
set of rules that apply locally to objects in that membrane, and based on these rules
an object can evolve into another object, can be transferred in and out of a membrane,
and can also dissolve the membrane surrounding it. Note that the evolution of objects
in various membranes takes place in parallel. The following comprises a computation:
we start with a fixed number of objects in certain membranes—the input—and let the
system evolve; if, at some point, no objects can evolve further, then the computation
is said to be over. The number of objects collected during the computation in a
specified membrane can be viewed as the output.
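The evolve-until-nothing-moves computation described above can be sketched for a single membrane; the rules and objects below are illustrative, and the full P System model is considerably richer (nested membranes, dissolution, communication).

```python
from collections import Counter

def run_p_system(objects, rules, max_steps=100):
    """Minimal single-membrane sketch: `rules` maps an object to the
    multiset of objects it evolves into; every applicable rule fires in
    parallel at each step, and the computation halts when no object can
    evolve further."""
    state = Counter(objects)
    for _ in range(max_steps):
        if not any(o in rules and state[o] for o in state):
            break  # no object can evolve: the computation is over
        nxt = Counter()
        for obj, count in state.items():
            if obj in rules:
                for product in rules[obj]:
                    nxt[product] += count  # all copies evolve in parallel
            else:
                nxt[obj] += count          # inert objects persist
        state = nxt
    return state

# 'a' splits into two 'b's; each 'b' becomes an inert 'c' (the "output")
final = run_p_system(["a", "a"], {"a": ["b", "b"], "b": ["c"]})
print(final["c"])  # → 4 objects collected as the output
```

Here the output is read off as the number of (inert) objects in the membrane once the system halts, matching the convention described above.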
Comparing DNA Computing with P Systems, we observe the following: Every
formal operation that finds a place in the DNA Computing model has a real biological counterpart; that is, every operation that is defined and used “on paper” is also
a genuine biochemical operation that can be done in the laboratory. The following
scenario does not exist in DNA Computing: proposing a bunch of hypothetical operations first and then looking out for possible DNA based implementations of them.
On the contrary, in P Systems, a vast number of mathematical, symbol–manipulation
based operations are a priori defined and freely introduced before asserting their presence in biology. Also, no effort is made to explore the actual “biochemical membrane
world” in order to discover those natural operations that take place spontaneously,
upon which computations in the model can be based.
1.3.3 Genetic Algorithms and Neural Networks
Genetic Algorithms, invented by John Holland in the 1970s, constitute a problem solving/search technique that uses certain features of biological evolution (natural selection) and genetics as heuristics (see [51, 77, 78]). Nature, scarce in its resources,
sees to it that only the fittest members of a population survive and the rest “wither
away” in due course. This ruthless natural principle has a useful side effect: successive generations would exhibit a growth in their fitness. Genetic algorithms try to
mimic this phenomenon in nature by creating a population of candidate solutions to
a problem, represented as “genomes” or “chromosomes”, and then applying “genetic”
operations such as mutation and crossover to evolve the solutions in order to find the
best ones. The standard genetic algorithm proceeds as follows: Given a problem,
generate bit strings (chromosomes) that encode an initial population of candidate
solutions. To form a new population, i.e., the next generation, first select parent
chromosomes from the present population based on their degree of fitness as defined
by a fitness criterion: for example, number of 1s in the bit strings ≥ 5. Generate
offspring (new chromosomes), by allowing a crossover between the parents (a bitwise
operation that exchanges bits of the parents to form new bit strings); the offspring thus
generated undergo mutation, which randomly flips some of their bits, and the
resulting chromosomes form the next generation. These steps, when iteratively carried out, gradually improve the population in terms of its average fitness. When an
acceptable level of fitness is reached, or after a fixed number of generations, we pick
from the present population the best solution, i.e. the chromosome with the highest
fitness level.
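The standard loop described above (selection, crossover, mutation) can be sketched on the classic "count the 1s" fitness criterion; the population size, rates and fitness function below are illustrative choices of ours.

```python
import random

def genetic_onemax(length=16, pop_size=20, generations=60, seed=1):
    """Sketch of the standard loop: fitness-ranked selection, one-point
    crossover, and random bit-flip mutation on bit-string chromosomes."""
    rng = random.Random(seed)
    fitness = lambda c: sum(c)  # OneMax: count the 1s in the chromosome
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]        # select the fitter half
        children = []
        while len(children) < pop_size:
            p1, p2 = rng.sample(parents, 2)
            cut = rng.randrange(1, length)   # one-point crossover
            child = p1[:cut] + p2[cut:]
            i = rng.randrange(length)        # mutation: flip one bit
            child[i] ^= 1
            children.append(child)
        pop = children
    return max(pop, key=fitness)             # best chromosome found

best = genetic_onemax()
print(sum(best))
```

After a few dozen generations the best chromosome is at or near the all-ones string, illustrating the gradual rise in average fitness described above.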
Attempting to compute by mimicking the way the brain “computes” is natural
computing in the sense that the brain is essentially a natural mechanism. Artificial
Neural Networks are computational models inspired by the biological neural system (see [4, 57] for an extensive coverage of this topic). The 1943 paper of Warren
McCulloch and Walter Pitts [49] is often considered to have flagged off research in this
area. The simple mathematical model they proposed for the biological neuron8 is still
at the heart of modern research in this area, though several variations of the model
have been developed. They modeled the neuron as a logical gate with two possible
internal states, 1 (active) and 0 (silent). Each neuron receives input signals provided
by the outputs (i.e., states) of other neurons connected to it. Each of these input
signals—be it 0 or 1—bears a certain tag, classifying it either as “excitatory” with
an activating influence or “inhibitory” with a deactivating influence on the neuron.
A neuron will become active or silent depending on the net effect of the input signals
impinging on it; the net effect is calculated by adding up the input signals, care being
taken while “adding” up, to use a negative sign for those inputs that are inhibitory.
If the net effect is greater than a certain threshold level associated with that neuron, the neuron becomes active; otherwise, it becomes silent. The most interesting
thing is that variants of the McCulloch–Pitts model like the perceptron, have the
capacity to learn (using a standard learning algorithm) to classify inputs: given a
set of sample inputs for some of which the perceptron is supposed to output a 0 and
for the rest, a 1, the perceptron can automatically work out the tag to be associated
with each input signal. It can work out which input signals ought to be considered
excitatory and to what extent, which signals are to be treated as inhibitory, and also
the appropriate threshold levels for the individual neurons, all by itself.
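A McCulloch–Pitts neuron as described, with inputs summed (negatively for inhibitory signals) and compared against a threshold, takes only a few lines; the AND-gate weights below are an illustrative configuration.

```python
def mcculloch_pitts(inputs, weights, threshold):
    """One neuron: inputs are the 0/1 states of other neurons; excitatory
    inputs carry weight +1, inhibitory inputs weight -1. The neuron fires
    (state 1) when the net effect exceeds the threshold."""
    net = sum(x * w for x, w in zip(inputs, weights))
    return 1 if net > threshold else 0  # active vs. silent

# an AND gate built from two excitatory inputs and threshold 1
print([mcculloch_pitts([a, b], [1, 1], 1) for a in (0, 1) for b in (0, 1)])
# → [0, 0, 0, 1]
```

A perceptron learning rule would adjust the weights and threshold from labelled samples, which is exactly the "working out the tags by itself" described above.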
1.3.4 Quantum Computing
In his now famous paper [28], Richard Feynman observed that it would take exponential resources for a conventional digital computer to be able to simulate the actual
physical world, which is ultimately quantum mechanical. He further suggested that if
we use a fragment of physical nature itself, which is inherently quantum mechanical,
to construct a (quantum mechanical) computer, the simulation of physics on such
a hypothetical computer will be a lot more efficient. Feynman’s idea of harnessing
nature, especially its weird quantum mechanical effects, to do computations like simulating Physics, and also other computations in general, does have the spirit of natural
computing.
Footnote 8: The brain is made up of fundamental units called neurons which are heavily interconnected.
How can the strange phenomena of the quantum world be useful in constructing
a more powerful computer? It is well–known that even particles like electrons can
behave like waves; they can simultaneously explore two different trajectories in space
at the same time (see [16]). In the same sense, an atom can be prepared in such a way
that it represents two different electronic states at the same time, which is known as
superposition of states. Thus, a quantum bit (qubit) represented by such an atom
can be in a superposition of two logical states 0 and 1, and an n–qubit quantum
register can “hold” all 2^n distinct binary numbers (from 0 to 2^n − 1) at the
same time, while a classical n–bit register can store only one out of the 2^n numbers at
a time. And, to cap it all, we have ways to perform a mathematical operation on all
the numbers contained in superposition in an n–qubit register simultaneously. Thus
a conventional computer would take 2^n sequential steps to do what the quantum
computer can do in just one step. In this way, the quantum computer improves
the usage of resources like time and space exponentially. See [18, 29, 50] where
other important features of quantum computing, such as entanglement of states,
are elaborated and also [23, 33] which cover a number of unconventional computing
models, including Quantum Computing.
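The "2^n numbers at once" claim can be made concrete with a toy state vector: an n-qubit register in uniform superposition carries one amplitude per binary number. This is only a classical simulation sketch, and so it needs the very exponential storage that a real quantum register avoids.

```python
def uniform_superposition(n):
    """An n-qubit register after a Hadamard on every qubit: 2**n equal
    amplitudes, one for each binary number 0 .. 2**n - 1."""
    dim = 2 ** n
    amp = 1 / dim ** 0.5  # equal weight, normalised so probabilities sum to 1
    return [amp] * dim

state = uniform_superposition(3)
print(len(state))  # → 8 numbers "held" at once by a 3-qubit register
print(abs(sum(a * a for a in state) - 1.0) < 1e-12)  # state is normalised
```

A quantum gate would act on all 2^n amplitudes in a single physical step; the classical simulation must update each amplitude explicitly.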
1.3.5 Analog computation with dynamical systems
Hava Siegelmann and Shmuel Fishman have developed a mathematical theory concerning natural physical processes interpreted as special purpose analog computers [66]. In a sense, theirs is a mathematical theory of Natural Algorithms. They
observe that a physical system prepared in some initial state, encoding the input,
evolves in phase space according to some equations of motion, thus performing a
“computation” and, in many cases, finally converges to attractors9 , viewed as the
output. Note that physical systems are naturally described in continuous time and
therefore we need new definitions for computation in continuous time and for its computational complexity. Viewing natural physical systems as dynamical systems10 ,
they offer rigorous definitions for computation by dynamical systems and the associated time complexity. They have also classified dynamical systems into different
computational complexity classes analogous to those in classical complexity theory.
Their work is to be distinguished from the analog computer of the past, for instance, from Shannon’s “general purpose analog computer” [62]. While Shannon’s
analog computer is a mathematical abstraction comprising “boxes” that do standard
arithmetic and integration as opposed to a natural physical system viewed as a special purpose computer, their work “constitutes the first effort to provide a natural
interface between dynamical systems and computational models with the emphasis
on continuous time update”.
Footnote 9: According to [75], an attractor is “the equilibrium state to which all other states (of a system) converge: it is as if it attracts different possible states of the system, so that all trajectories come together in the same point.”
Footnote 10: A set of equations that describe the evolution of a vector of variables representing the state of the system.
1.3.6 Natural Algorithms versus “the rest”
Natural algorithms exploit the vast computing power in ordinary, commonplace physical phenomena which occur in the familiar macroscopic world, unlike other popular approaches to unconventional computing that feed on sophisticated processes in
molecular biology and quantum mechanics. The very fact that natural algorithms are
“extracted” from easy–to–understand natural physical phenomena witnessed everywhere by everyone—and not only in laboratories by specialists—endows them with great
pedagogical value.
sense that they harness nature’s power and craft, but also in the sense that they are
“marked by easy simplicity” (see [76]). Many of the other models are not natural in
the latter sense.
We would also like to distinguish our contribution from that of Hava Siegelmann
and her colleagues, namely, analog computation with dynamical systems [66]. They
develop an abstract mathematical theory viewing natural physical systems as dynamical systems; their purpose is not to furnish concrete examples of actual natural
systems that perform computation, but to work with their mathematical descriptions, mainly to obtain generalizations—for instance, concerning the time complexity
of computations performed by a certain kind of natural systems. The job of imagining the actual physical system, one that fits the mathematical description, is left to
the reader. It may be noted here that there is more than one natural physical system corresponding to an abstract dynamical system description. On the other hand,
we develop, throughout the Thesis, examples of concrete, tangible physical devices
that are based on simple natural principles, which one can easily appreciate and also
attempt building. We do not focus on developing an elaborate mathematical theory.
1.4 New ideas developed in the Thesis
This Thesis is a braid of three ideas/inventions:
• A gravity powered “computer”—constructed out of beads and rods of an
abacus—that can perform data processing tasks like sorting, searching, etc.
in O(√N) time on an unsorted database of size N.
• Bilateral Computing, a paradigm for the construction of natural physical
computing devices which are “bilateral” in nature: devices realizing a certain
function f can spontaneously compute as well as invert f . These gadgets can be
used to solve NP –hard problems like Boolean Satisfiability (SAT), apparently
without using an exponential amount of resources.
• Balance Machine, a generic natural computational model which is shown to
be universal.
Chapters 2, 3 and 4 of this Thesis discuss each of the above ideas in the order
mentioned above. One can see a gradual progression of ideas as one reads through
these chapters: First of all, in Chapter 2, natural algorithms for trivial problems11 like
sorting and searching are developed. These, while serving as typical examples of natural algorithms, set the stage for natural algorithms (discussed in Chapters 3 and 4)
solving hard problems. It may be noted, however, that these natural algorithms—the
physical processes themselves, and not their various implementations—do outperform
their classical counterparts in a spectacular way! These natural algorithms can be
thought of as being run on a natural–computer which relies on the special effects that
spontaneously occur when rows of beads (that encode input data) slide along the rods
of an abacus frame, propelled by gravity.
In Chapter 3 we move on to solve computationally hard problems: We attempt
to solve SAT, a classic hard problem known to be NP–complete, using a natural–
computer that uses water levels to encode data and works by the up/down movements
of pistons. Besides proposing a specialized natural–computer that solves SAT, we
develop a general paradigm—referred to as Bilateral Computing—advocating the
design of a class of such “bilateral” natural–computers; these are physical systems in
which there is no intrinsic distinction between its various parameters: there is no tag
attached to each of the parameters classifying it as an “input parameter” or as an
“output parameter”. In other words, one has the freedom to change, or assign a value
to, any of the parameters and watch what happens to the rest12 . Also, a change in
one of the parameters can, in principle, influence all the others; when one or more of
the parameters is varied, the system will react by spontaneously adjusting the rest of
the parameters so as to maintain a certain relation between the various parameters.
Imagine a bilateral natural–computer realizing the equation A = B × C: it would act
as a multiplier if B and C are forced to take some constant values and A is allowed
to assume a suitable value; alternatively, it would behave like a factoring device if
we force A to take a fixed value, and allow B and C to assume one of possibly many
suitable values that make the equality true.
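The A = B × C example can be mimicked in software as a relation rather than a one-way function. The sketch below is only an illustration of the bilateral idea: unlike the physical device, which settles into a consistent state spontaneously, a program must still search for the factors explicitly.

```python
def bilateral_multiply(A=None, B=None, C=None):
    """Toy 'bilateral' relation A = B * C over positive integers: fix some
    of the parameters and the relation yields the rest. Fixing B and C
    multiplies; fixing only A factors it (returning one of possibly many
    suitable (B, C) pairs)."""
    if B is not None and C is not None:
        return (B * C, B, C)           # behaves as a multiplier
    if A is not None:
        for b in range(2, A):          # search for a nontrivial factoring
            if A % b == 0:
                return (A, b, A // b)  # behaves as a factoring device
        return (A, 1, A)               # A is prime: only the trivial pair
    raise ValueError("fix B and C, or fix A")

print(bilateral_multiply(B=6, C=7))    # → (42, 6, 7)
print(bilateral_multiply(A=15))        # → (15, 3, 5)
```

The explicit search loop is precisely what a bilateral natural–computer is meant to dispense with.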
Chapter 4 discusses the culmination of our research: a universal, generic natural
computational model. This model, called Balance Machine, is a mechanical model
in the spirit of the Turing Machine: unlike abstract formalisms such as grammars and partial
recursive functions, it can be directly realized in the form of a machine. It consists
of components that resemble ordinary physical balances, each with a natural tendency
to automatically balance their left and right pans: If we start with certain fixed
weights, that represent inputs, on some of the pans, then the balance–like components
would vigorously try to balance themselves by filling up the rest of the pans with
suitable weights that represent the outputs. Interestingly, the balance machine is a
bilateral computational model.
Footnote 11: These problems can be solved by classical computing methods quite efficiently—using only resources (for instance, time and memory) polynomial in the size of the input data.
Footnote 12: Note that this is not possible with a conventional electronic logic circuit realizing a Boolean formula. Try forcing the output line to true and see if the input lines assume a satisfying configuration (by themselves)!
Concluding remarks given in Chapter 5 stress the value of natural algorithms and
also indicate the possible directions of future research on this subject.
Please note that throughout this Thesis, natural algorithms refers to individual
natural algorithms, and Natural Algorithms refers to the general paradigm.
Chapter 2
Beads and Rods
The abacus, an archaic calculating tool, might seem a trifle, hopelessly out of date
in our digital age. We shamelessly make use of this ancient tool to develop sorting
and searching algorithms that outdo their classical counterparts. In this Chapter we
revisit Bead–Sort that was very briefly mentioned in Chapter 1. We also introduce a
natural algorithm—it uses beads and rods, like Bead–Sort—that can perform a search
on an unsorted database of size N in O(√N) time, which we first proposed in [10, 11].
We also explore different ways in which Bead–Sort can be realized—using digital,
analog circuits, cellular automata and P Systems. Finally, we envisage a gravity based
natural–computer that uses beads and rods custom made for data processing.
2.1 Bead–Sort
A simple natural phenomenon is used to design a sorting algorithm for positive integers, which we call Bead–Sort [7]. In what follows, we represent positive integers
by a set of beads, like those used in an Abacus, as illustrated in Figure 2.1. Beads
slide through rods as shown in Figure 2.2. Figure 2.2(a) shows the numbers 4 and 3,
represented by the beads, attached to the rods; the beads are shown suspended in the
air, the way they will be just before they start sliding down. Figure 2.2(b) shows the
state of the frame—a frame is a structure with the rods and beads—after the beads
are allowed to slide down: A row of beads representing number 3 has “emerged” on
top of the number 4, the extra bead in number 4 having dropped to a lower row. Figure 2.2(c) shows numbers of different sizes, suspended one over the other in a random
order. When we allow the beads representing numbers 3, 2, 4 and 2 to slide down,
we obtain the same set of numbers, but in a sorted order again (see Figure 2.2(d)).
In this process, the smaller numbers always “emerge” above the larger ones and this
creates a natural comparison: online animated illustrations of the above process can
be seen at [6, 32].
We now present the Bead–Sort natural algorithm: Consider a set A of n positive
integers to be sorted, the biggest number in A being m. Then, the frame should have
at least m rods and n rows. (See Figure 2.3; rods/vertical lines are always counted
from left to right and rows are counted from the top to the bottom.)
Figure 2.1: Representing numbers with beads.
Figure 2.2: Illustrating Bead–Sort.
Figure 2.3: Bead–Sort conventions.
The Bead–Sort
algorithm is the following:
The Bead–Sort natural algorithm
For all x ∈ A, drop x beads (one bead per rod) along the rods
1, 2, . . . , x. Finally, the beads, seen row by row, from the 1st row to
the nth row represent A in the ascending order.
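A conventional simulation of the frame makes the algorithm concrete: the rod occupancies after the drop determine what each row reads. This Python sketch is ours, for illustration only; the physical frame performs the "computation" by gravity alone.

```python
def bead_sort(numbers):
    """Simulate the frame: drop x beads along rods 1..x for every x in the
    input, let them stack, then read the rows from top (1st) to bottom
    (nth), which yields the input in ascending order."""
    if not numbers:
        return []
    rods = [0] * max(numbers)
    for x in numbers:
        for rod in range(x):   # one bead per rod along rods 1..x
            rods[rod] += 1
    # after settling, row i (from the top) has a bead on rod j exactly
    # when rod j carries at least n - i beads
    n = len(numbers)
    return [sum(1 for beads in rods if beads >= n - i) for i in range(n)]

print(bead_sort([3, 2, 4, 2]))  # → [2, 2, 3, 4]
```

The example reproduces Figure 2.2: the input 3, 2, 4, 2 settles into 2, 2, 3, 4 read from the top row down.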
The algorithm Bead–Sort is correct. We prove the result by mathematical induction on the number of rows of beads. We claim (*) that the set of positive integers
represented by the states of the frame before and after the beads are dropped is the
same. We also claim (**) that the number of beads on each row i, after dropping, is
at most the number of beads on row i − 1, the row directly below it (in this proof, rows are numbered from the bottom up).
Consider a set of cardinality n = 1. Since there is no possibility for a bead to
drop, the above two claims hold. Now consider a set of cardinality n = 2. Either the
top row is (i) equal to, or (ii) smaller than, or (iii) bigger than the lower one. In (i)
and (ii), the rows remain intact after they are allowed to drop down; in (iii), a pseudo
swapping occurs as shown in Figures 2.2(a) and 2.2(b). Thus, the claims hold in all
the three cases. Now assume that these two properties hold for an input set whose
cardinality is k. Suppose a row k + 1 of m′ ≤ m beads is now dropped on top of these
k rows. Since claim (**) holds there is an index j ≤ m′ such that all m′ − j beads in
columns greater than j drop from row k + 1 to row k. Imagine the dropped beads to
stay “frozen” in row k for a while. If m′ − j = 0, we are done. Otherwise, the values
of row k + 1 and row k have been swapped, with any excess beads on row k ready
to drop further. Thus, claim (*) holds, after a series of at most k swaps. Claim (**)
also holds since we repeat these swaps until the force of gravity cannot pull down any
more beads.
Note that we work with a row–of–beads as a basic data object similar to a byte
or a word in a digital computer, and not with single beads. This means, we can
(i) introduce a row–of–beads (consisting of, say, n beads) into the rods and also (ii)
read a row–of–beads—i.e., read the number of beads in a given row—all at once, in
parallel. Gadgets that realize (i) and (ii) can easily be designed (See Appendix A for
one such design). The time complexity of Bead–Sort is actually the time taken by
the beads to settle down in a state of rest. We assume that the whole of the input
is first read/loaded. Imagine the initial state of the frame to be a set of unsorted
rows of beads suspended in the air1 , and then the beads are allowed to drop down in
parallel. The beads can be viewed as free falling objects accelerating due to gravity.2
Hence, in the worst case, the time taken by the beads to settle down is √(2h/g), where
h is the height of the rods and g is the acceleration due to gravity. If we fix the
height of the rods to be the same as the size (N) of the list, then the time complexity
is given by √(2N/g), i.e. O(√N). Note, however, that the amount of energy needed
to preprocess the input, i.e. to load the beads into the frame and prepare them for
sorting, is proportional to the total number of beads that make up the input. Thus
the energy required for sorting N numbers grows linearly with their accumulated size
given by Σ_{i=1}^{N} x_i, where the x_i’s are the numbers to be sorted.
Footnote 1: Alternatively, one could imagine that the input is read with the frame kept in the horizontal position; it can then be tilted to a vertical position to sort the list by allowing the beads to drop.
Footnote 2: The distance d traveled by a free falling object in time t is given by d = (1/2)gt², where g is the acceleration due to gravity.
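The two estimates, settling time √(2h/g) and energy proportional to the total bead count, can be checked numerically. The unit energy per bead below is an arbitrary illustrative constant.

```python
import math

def settle_time(h, g=9.8):
    """Worst-case settling time sqrt(2h/g) for beads free-falling down
    rods of height h (footnote 2's d = (1/2) g t^2, solved for t)."""
    return math.sqrt(2 * h / g)

def loading_energy(numbers, unit=1.0):
    """Preprocessing energy grows with the total bead count sum(x_i);
    `unit` is an arbitrary per-bead energy constant."""
    return unit * sum(numbers)

print(round(settle_time(4.9), 2))    # → 1.0 second for h = 4.9 m
print(loading_energy([3, 2, 4, 2]))  # → 11.0
```

Doubling the rod height thus multiplies the settling time only by √2, which is the source of the O(√N) bound.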
2.2 Searching for a needle in a “bead–stack”
Looking up a name in a telephone directory given a telephone number is exponentially
more difficult than looking up a telephone number given a name. Indeed, in the
second case log N steps are enough, but in the first case we need about N/2 steps on
average and N steps in the worst case. Can we do better?
First consider the problem of searching for a given integer in a list of positive
integers that is already sorted using Bead–Sort. Let us imagine that the sorted list is
represented in the form of beads in the same “beads–rods” apparatus used for Bead–
Sort. We propose a natural algorithm for solving this problem in O(√N) time. Now,
to perform search on an unsorted list, we can sort the list first using Bead–Sort (in
O(√N) time) and then apply the proposed search procedure on the resulting sorted
list; thus, the combination of the two algorithms would work in O(√N) time on an
unsorted list of size N.
We use the following simple observation to do the search. Suppose that the list
already contains an integer, say 3, which means, there is a row of 3 beads in the
frame. Now, introduce one more ‘3’ into the list by dropping 3 beads, as before from
left to right, one bead per rod. We would eventually find the new integer—3, in
our case—just above the other ‘3’ that is already in the list (see Figure 2.4), thus
preserving the sorted order.3
We can show that, when we introduce an integer n into the list, at least one
of the beads representing n—the bead sliding along the nth rod, to be precise—will
eventually find itself just above the already existing integer n. The main point in
the above illustration is this: the search for the integer n can be reduced to simply
introducing a new row of n beads into the list and tracking the last bead as it falls
down, say, with some device. The newly introduced beads actually locate the integer,
if present. But, how would the above method indicate the absence of an integer n in
the list? Actually, we would have to drop n + 1 “search beads” to determine whether
an integer n is in the list or not.
In what follows, we present the general natural algorithm for searching, and a
proof of its correctness:
3 Note, however, that not all the beads that we introduce—call them “search beads”—might
appear in the newly formed row of 3 beads. In the example shown in Figure 2.4, the new row of 3
beads contains only two of the search beads.
Figure 2.4: Introducing a new integer into the list.
To do the search, we use another apparatus—a tracking mechanism—along with the
beads and rods. It consists of very thin tapes with markings, similar to a measuring
tape (see Figure 2.5), whose ends are attached to the search beads. (Search beads
are those that are dropped into the list and tracked.) When the search beads are let
down, the tapes unfold, exposing the markings on them; the readings seen against
the measurement level (see Figure 2.5) at any point of time give the row positions of
the search beads attached to them. The natural algorithm for searching follows:
The Bead–Search natural algorithm
To search for an integer n and to determine its location in the list if present, do the
following:
1. Drop n+1 search beads, one bead per rod, along the rods 1, 2, . . . ,
n + 1, and wait for them to settle down.
2. Compare the readings taken against the measurement level of
the nth and the (n + 1)th search beads. Call them read(n) and
read(n + 1).
3. If read(n) = read(n + 1), then the integer n is not in the list.
If read(n) ≠ read(n + 1), then the integer n is in the list, and can
be found on the row read(n + 1).
More precisely, when read(n + 1) − read(n) = x, the integer n occurs in the list
x times, starting from row read(n) + 1 to row read(n + 1). A few self–explanatory
illustrations—searching for integer 3 (present in the list), and 2 (not present in the
list) are given in Figures 2.5 and 2.6. Note that the search beads that are dropped
down can be pulled up after searching.
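The read(n)/read(n + 1) test can be mimicked in software. In the sketch below (our illustration; the function name and interface are not from the Thesis), the frame has n_rows rows numbered from the top, rod k carries one bead for every list element ≥ k, and the kth search bead comes to rest at row n_rows minus that height.

```python
def bead_search(sorted_list, n, n_rows=None):
    """Simulate Bead-Search on a list already sorted by Bead-Sort.
    Returns (present, first_row, count); rows are numbered 1..n_rows
    from the top of the frame."""
    if n_rows is None:
        n_rows = len(sorted_list)              # frame just tall enough

    def height(rod):                           # beads already on this rod
        return sum(1 for v in sorted_list if v >= rod)

    def read(k):                               # resting row of the k-th search bead
        return n_rows - height(k)

    x = read(n + 1) - read(n)                  # multiplicity of n in the list
    if x == 0:                                 # read(n) == read(n + 1): n is absent
        return (False, None, 0)
    return (True, read(n) + 1, x)              # n occupies rows read(n)+1..read(n+1)
```

With the list of Figure 2.5, `bead_search([1, 2, 3, 3, 4], 3)` reports that 3 is present twice, starting at row 3; searching for an absent integer returns a zero count.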
We now show that step 3 in the natural algorithm is indeed correct. In other
words, we show that read(n) = read(n + 1) if and only if the integer n is not in the
list.
Suppose read(n) = read(n + 1) = r. It is clear that the integer n is not on row
r, since the bead–position given by the nth rod and the row r is presently being
Figure 2.5: Searching for the integer 3.
Figure 2.6: Searching for the integer 2.
occupied by the nth search bead. It is also clear that either there is a bead on the
(r + 1)th row in rod n + 1, stopping the (n + 1)th search bead from dropping further
(in which case, there is an integer greater than n on row r + 1), or row r is the last
row in the mechanical frame. In both cases, however, n cannot be beneath row r (note
that the list is already sorted). Also, the integer n cannot be in one of the rows above
r (i.e., the rows 1 to r − 1). This is because there is no bead in rod n on any of the
rows 1 to r − 1; otherwise, the nth search bead would not have descended to the
rth row. Thus, the integer n is not in any of the rows. It now follows that when
read(n) = read(n + 1), the integer n cannot be in the list.
Before proving the converse, let us first observe the following simple fact: for
every i, j such that i < j, the number of beads in rod i is greater than or equal to the
number of beads in rod j after the beads settle down. This is because we always drop
beads from left to right, one bead per rod. Therefore, when read(n) ≠ read(n + 1),
we can immediately deduce that read(n) < read(n + 1).
Now, we are ready to prove the converse. Suppose read(n) ≠ read(n + 1), and
let read(n + 1) = r. This means, the (n + 1)th search bead has dropped to the row r;
thus, there is no bead in the (n + 1)th rod on all rows starting from 1 to r, except for
the search bead. But, since read(n) < read(n + 1), there are beads in the nth rod
on rows r, r − 1, r − 2, . . . , read(n) + 1. It follows that there is an integer n on all
these above rows. That is, the integer n is present in the list, read(n + 1) − read(n)
times. Indeed, it can also be shown that the list cannot have the integer n in a row
other than these.
The time complexity of this search operation is similar to that of Bead–Sort; it
is the time taken by the search beads—dropped in parallel—to settle down in their
final positions, and hence the complexity is O(√N) as before.
2.3 Sorting and searching a database
First of all, we observe that Bead–Sort does not physically rearrange the entire rows of
beads representing positive integers. For instance, see Figures 2.2(a) and 2.2(b): we
do find a row of beads representing number 3 on top in Figure 2.2(b), but this is not
the same row of beads that we originally used in Figure 2.2(a) to represent number 3.
(They, in fact, still remain at the bottom, even after “sorting” has occurred.) Thus,
the “original number 3” has not moved up at all! This property is both a strength and
a weakness of Bead–Sort: you do not have to swap or shuffle the (objects
that represent) numbers in order to produce a sorting effect; but, the very same
property has a negative effect when we attempt to sort a hypothetical database like
the one shown in Table 2.1. We now illustrate the fact that ordinary Bead–Sort
cannot accomplish this.
Represent customer ID as usual, with beads. Also, represent vehicle color
using, say, the color of the beads as shown in Figure 2.7. Now, from the resulting
Table 2.1: Toy database.

customer ID (Key)    vehicle color (Information associated with key)
4                    black
1                    white
2                    grey
Figure 2.7: Sorting a database: mere colored beads do not help.
“sorted” list, it is clear that we cannot extract the right mapping between the keys
and their associated information easily. The mapping is lost, though we have got the
keys themselves sorted. However, we can solve this problem in an indirect way as
discussed below:
Represent customer ID with rows of beads in frame 1; see Figure 2.8. (We
assume that the keys are unique.) The search beads are ready to slide down along each
rod in frame 1 and will be used for a purpose discussed later. Represent vehicle color
with a different set of beads (call these “color beads”) on a separate frame, i.e.
frame 2 as shown in Figure 2.8; for representing a vehicle color, use one bead with a
distinct color.4 For instance, we place the black bead representing the vehicle color
“black”—the first entry in the database—on the first rod, the white bead representing
“white”—the second entry in the database—on the second rod, and so on. Note that
the color beads are not free falling objects, but can be made to drop down by coupling
them with the search beads in frame 1.
Having represented both customer ID and vehicle color individually, we now
represent the mapping between them in the following way: For instance, to map the
customer ID “4” with vehicle color “black”, we just couple the 5th search bead (of
frame 1) with the black colored bead; to map customer ID “1” with vehicle color
“white”, we connect the 2nd search bead to the white colored bead. In general, to
map a key n to the information associated with it (in ), we couple the (n + 1)th search
4 We use a distinct colored bead to represent information associated with a particular key just for
the sake of illustration; one could have used labels or tags that are stuck on the beads, to represent
the same.
(a) Initial state (before sorting); (b) Final state (after sorting)
Figure 2.8: Sorting a database: a different approach.
bead with the bead on frame 2 representing in . Note that all these steps are part of
setting up the input, and do not by themselves involve a search.
Now, how do we sort the database? First, sort the customer IDs by allowing
the beads on frame 1 to drop down. After they are sorted, allow all search beads in
frame 1 to roll down. As detailed in the previous section, the (n + 1)th search bead,
after settling down, will be exactly on the same row as the customer ID n. (Recall
from the previous section that the reading corresponding to the (n + 1)th search bead
gives the location of integer n in the list, if present.) Also, the search bead would
have pulled down along with it, its “partner”, i.e. the color bead representing in (the
one coupled with it) to exactly the same row, thus aligning each customer ID with
its corresponding vehicle color. Indeed we could initiate a search on the database,
after the sorting is over.
The major drawback of the above technique is that every time we wish to insert a
new key + information pair into the database, we would have to redo the whole
alignment procedure.
2.4 The BeadStack Min/Max data structure
We propose a natural, dynamic data structure called BeadStack. The operations of
interest are finding the minimum and the maximum of a set of integers, along with
insertion and deletion operations. Our data structure has a performance comparable
to the recently proposed SquareList data structure (see [60]). We list the best
running times for various “classic” data structures in Table 2.2, and compare them to
BeadStack.
Table 2.2: Expected performance of common data structures.

Data Structure                  Insert       Delete       Find min    Find max
van Emde Boas Priority Queue    Θ(lg lg N)   Θ(lg lg N)   Θ(1)        Θ(1)
Priority Queue (Heap)           Θ(lg N)      Θ(N)         Θ(1)        Θ(N)
Binomial Heap                   Θ(lg N)      Θ(lg N)      Θ(lg N)     Θ(N)
Skiplist                        Θ(lg N)      Θ(lg N)      Θ(lg N)     Θ(lg N)
Fibonacci Heap                  Θ(1)         Θ(lg N)      Θ(1)        Θ(N)
SquareList                      Θ(√N)        Θ(√N)        Θ(1)        Θ(1)
BeadStack                       Θ(√N)        Θ(√N)        Θ(1)        Θ(1)
The BeadStack data structure is our standard collection of beads on rods, where
each row of beads (flush left) represents a positive integer. Recall that we work with
a row–of–beads as a basic data object. The “Find min” operation is simply to return
the top row of beads. Likewise, the “Find max” operation is simply to return the
bottom row of beads. These two operations can be done in Θ(1) time. The insertion
operation is done simply by dropping another row of beads on top of the existing
stack of integers; it requires Θ(√N) time in the worst case. The deletion operation
is done by performing a “search” to find the row containing the integer to delete,
followed by the removal of that physical row of beads. As shown in Section 2.2 of this
Chapter, this can be done in time Θ(√N), which is the cost of the search operation
plus a constant time for deletion.
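As a point of reference, the four operations can be mirrored by an ordinary sorted list in software (a sketch only; the Θ(√N) costs above describe the physical device and of course do not carry over to this model):

```python
import bisect

class BeadStack:
    """Software mirror of the BeadStack: the rows of beads are kept as a
    sorted list, smallest (top row) first."""

    def __init__(self):
        self.rows = []

    def insert(self, n):
        # drop a new row of n beads; it settles into its sorted position
        bisect.insort(self.rows, n)

    def delete(self, n):
        # search for the row holding n, then lift that row of beads out
        i = bisect.bisect_left(self.rows, n)
        if i == len(self.rows) or self.rows[i] != n:
            raise KeyError(n)
        del self.rows[i]

    def find_min(self):
        return self.rows[0]      # top row of beads

    def find_max(self):
        return self.rows[-1]     # bottom row of beads
```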
Finally we make a few observations about the search algorithm. Of course, this is
an algorithm involving a physical device that might be impractically huge, especially
when the list size it can handle is large. It compares well with Grover’s quantum
algorithm (see [34]) which has the same (quantum) time complexity, and hence it
makes sense to briefly compare these algorithms: First, a common weakness is that
in both cases the time complexity refers only to the actual “computational time”,
i.e. the time necessary to read the input is not taken into consideration. Note that
reading the input is not trivial: it requires the preparation of an equally distributed
superposition of all possible indices of the items in the list containing the target index
in case of Grover’s algorithm (takes O(log N) steps) and the set up of beads and
their connections in our case (takes O(N) steps). However, if we want the elements
of the quantum system to represent an arbitrary database, we need to construct a
function which rapidly computes the elements of the data base from their indices
1, 2, . . . , N; and this construction is time consuming. The advantage of our algorithm
over Grover’s is its deterministic nature: in contrast with the probabilistic nature of
Grover’s algorithm (a Monte Carlo type of procedure that quickly produces a result
which is correct with high probability), our method is guaranteed to be correct.
2.5 Implementing Bead–Sort
Beads and rods are no abstractions; they are real enough for an engineer to consider
using them to build natural–computers that sort/search. However, the idea of an
electronic version of Bead–Sort is attractive for obvious reasons, and we attempt
both digital and analog circuit implementations of Bead–Sort. Also, we simulate the
natural algorithm with unconventional models of computation like cellular automata
and P Systems.
2.5.1 Digital circuit
We use a two dimensional array (call it Frame) of memory cells/flip–flops: states
1 and 0 represent the presence and the absence of a bead respectively (see Figure 2.9).
The states of the memory cells together represent the state of the actual frame at
a given time: the state of the (i, j)th memory cell (i denotes columns and j, rows)
reflects the presence/absence of a bead on the ith rod, j th row of the frame.
The input data, in unary form, is entered one–by–one into a data entry register
(call it DataReg) as shown in Figure 2.10, whose content actually represents a row
Figure 2.9: The ON-state of a memory cell represents a bead.
of beads ready to be dropped. A ‘1’ on its ith position corresponds to a bead ready to
be dropped along the ith rod. Before the next data is entered, the 1’s in the register
are to be shifted downward into Frame (this is analogous to dropping a row of beads
into the frame). Thus, by the time the last data is entered, the whole list of data
would already be sorted in Frame.
In order to shift down the 1’s in the register we use a simple logic circuit (see
Figure 2.10) which is connected to each and every cell of Frame. This circuit triggers
the cell (makes its state = 1) to which it is connected, depending on certain conditions
discussed below. The next state of cell Frame(i, j) is 1 iff: (i) the ith bit of DataReg
is 1 (there is a bead ready to drop down along ith rod) AND (ii) the state of its
immediate lower neighbor, i.e. Frame(i, j + 1), is 1 (the bead falling down cannot
go any further). (The state of Frame(i, j + 1), j being the last row, is always fixed
as 1.) This logic enables the 1’s in DataReg to “leap” to the exact cells; they don’t
propagate sequentially, cell by cell, which will be time consuming.
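The leap logic can be checked with a bit-level simulation (our sketch; the array names Frame and DataReg follow the text, the rest is illustrative). Because rows are updated top-down, every cell reads the pre-update state of its lower neighbour, so each rod gains exactly one new bead per entered row.

```python
def drop_row(frame, data_reg):
    """One synchronous update of Frame after a row is entered in DataReg.
    frame[j][i] is cell (rod i, row j), with row 0 on top; the imaginary
    row below the last one is fixed at 1.  Cell (i, j) switches to 1 iff
    DataReg[i] is 1 AND its lower neighbour Frame[i][j+1] is already 1."""
    rows = len(frame)
    for j in range(rows):
        for i in range(len(frame[0])):
            below = frame[j + 1][i] if j + 1 < rows else 1
            frame[j][i] = frame[j][i] | (data_reg[i] & below)

def digital_bead_sort(numbers, rods, rows):
    """Enter each number in unary into DataReg and let its 1s leap into
    Frame; the non-empty rows of Frame then hold the sorted list."""
    frame = [[0] * rods for _ in range(rows)]
    for n in numbers:
        drop_row(frame, [1 if i < n else 0 for i in range(rods)])
    return [sum(row) for row in frame if any(row)]
```

For instance, `digital_bead_sort([1, 3, 2], 3, 3)` returns `[1, 2, 3]`, read off the rows of Frame from top to bottom.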
The time complexity of sorting using the above implementation is due to two
components: data entry time and the actual sorting time (time taken for the cells to
switch states). In this implementation, data entry and sorting alternate with each other;
sorting does not wait till all data are entered. Let t1 be the average time taken for
entering a single data item and t2 the average time taken for the cells to switch states.
It takes t1 + t2 units for every single data item in the set. The overall complexity is
therefore n × (t1 + t2 ). In the case of manual data entry, t2 will be so small compared
to t1 that the actual sorting time is negligible.
2.5.2 Analog circuit
The presence or absence of a bead is represented by an analog voltage across electrical
resistors. A rod is represented by a series of electrical resistors of increasing value
from top to bottom (see Figure 2.11). If the voltage across a resistor is above a
Figure 2.10: Digital hardware implementation.
preset threshold–voltage t, then it indicates the presence of a bead. The values
of resistances in series, representing a rod, are arranged in increasing order so that
when a voltage is applied across them, there is more voltage drop across the bottom
resistors than across the top ones; this is the electrical equivalent of the effect of
gravity that attracts the beads downwards.
Let V1 , V2 , . . . ,Vm be the voltages across the rods and the currents passing through
them be C1 , C2 , . . . ,Cm . Each resistor is denoted by Ri,j where i is the rod number
and j the row number; xi,j represents the voltage–drop across Ri,j . Resistors in the
same row (say, j) have the same value, i.e. R1,j = R2,j = . . . = Rm,j , which means,
the rods are identical.
The voltage across each resistor is fed into a trimmer circuit/threshold unit whose
output is given by x̄i,j = 1 if xi,j ≥ t, and x̄i,j = 0 otherwise. This is shown in Figure 2.12,
where dotted lines/dashes do not indicate physical connections. Let v1 , v2 , . . . , vn
denote the sums of individual voltages across resistors (after thresholding) in the 1st ,
2nd , . . . , nth rows, respectively. (Note that in Figure 2.12, the detailed circuitry for
adding voltages is not shown.) Hence we have:
v1 = x̄1,1 + x̄2,1 + . . . + x̄m,1 ,
v2 = x̄1,2 + x̄2,2 + . . . + x̄m,2 ,
...
vn = x̄1,n + x̄2,n + . . . + x̄m,n .
The data–entry device is used to enter data to be sorted. When an integer is
keyed in, the equivalent unary representation is generated; for instance, the number
Figure 2.11: Representing rods with electrical resistors.
Figure 2.12: Trimmer circuits.
Figure 2.13: Simple analog circuit for sorting.
3 is represented as three 1’s followed by 0’s. The output lines from the data–entry
device are attached to the rods consisting of series of resistors as shown in Figure 2.12,
the first unary digit connected to the 1st rod, and so on. For every ‘1’ in the ith digit
of the unary representation, a voltage increment δv is applied to the corresponding
ith rod of resistors; i.e. the voltage across the ith rod is increased by δv. This is the
way a new bead is introduced into a rod.
Every time a new data item is entered, the individual resistors would have a different
voltage level; and these voltage levels should reflect the presence/absence of a bead
in different positions of the frame at that point of time. Hence, the values of the
resistors, the voltage increment δv and the threshold–voltage t should be designed
accordingly. After the whole list of numbers is entered, the voltage vector v1 , v2 ,
. . . ,vn will contain the sorted list.
For example, consider a simple analog resistor circuit (see Figure 2.13) that can
sort 3 positive integers, the biggest of which is 3. The circuit has three rods and
three rows, nine resistors on the whole. Let us see how it can sort {2, 1, 3}. The
total resistance R in each rod is 1Ω + 2Ω + 3Ω = 6Ω. The threshold voltage t is
0.5 volt; the voltage increment δv for every new bead is 1 volt. The integers 2, 1, 3
are entered one by one; the corresponding changes in voltage levels across individual
resistors and the current after entering the data 2 (110 in unary), 1 (100), 3 (111)
are shown in Figures 2.14(a), 2.14(b) and 2.14(c) respectively. The slanting arrows
represent the current (in amperes) through the rods5 . The figures written close to
the individual resistors represent the voltage levels across them (xi,j )6 ; those within
the brackets represent the voltage after thresholding (x̄i,j ). The corresponding state
of the frame is also shown. Note that after the last integer has been entered, v1 , v2
and v3 represent the sorted list.
5 Current through a rod = Voltage across it / R.
6 Voltage across a resistor = Current through it × its resistance.
Figure 2.14: Illustration of sorting in an analog sorting circuit.
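The numbers in the example above can be reproduced with a short script (our sketch; it computes the exact voltage drops, where the figures quoted in the text are rounded):

```python
def analog_sort(numbers, row_resistors, threshold=0.5, dv=1.0):
    """Model of the analog sorting circuit: row_resistors[j] is the
    (identical) resistance of every row-j resistor; each '1' in a unary
    input raises the supply voltage of the corresponding rod by dv.
    In a series rod, x_ij = C_i * R_j with C_i = V_i / (total rod R)."""
    n_rods = len(row_resistors)               # square frame: rods = rows
    total_r = sum(row_resistors)
    volts = [0.0] * n_rods
    for n in numbers:
        for i in range(n):                    # unary digits 1..n are '1'
            volts[i] += dv
    def bead(i, j):                           # thresholded voltage drop
        return 1 if volts[i] * row_resistors[j] / total_r >= threshold else 0
    # row sums v_1..v_n hold the sorted list, top row first
    return [sum(bead(i, j) for i in range(n_rods)) for j in range(n_rods)]
```

With rods of 1Ω + 2Ω + 3Ω, `analog_sort([2, 1, 3], [1, 2, 3])` gives the sorted voltage vector `[1, 2, 3]`, matching the final state of the illustration.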
The time complexity of sorting using the above implementation is due to two components: data entry time and the actual sorting time (time taken for electrical charges
to settle down). In this implementation, data entry and the sorting action alternate with
each other; sorting does not wait till all the data have been entered. Let t1 be the average
time taken for entering a single data item, and t2 be the time taken for the voltage
levels to take their final values. It takes t1 + t2 units for every single data item in
the set. The overall complexity is therefore n × (t1 + t2 ). Note that t2 is fixed for a
given gadget with rods of a fixed length and a fixed RC component. However, the
length of the rod should increase as n increases, and hence, so should t2 . To derive
the actual function as to how t2 varies with n, we have to take into account the total
resistance and the parasitic capacitance of the rods. However, in the case of manual
data entry, t2 will be so small compared to t1 for a reasonable length of rods that the
actual sorting time, before the next data is entered, will be negligible.
2.5.3 Cellular automata
A cellular automaton (CA) operates by a set of simple local rules (see, for example
[72]) and is a suitable choice for simulating natural physical systems because of its
parallelism.
In its simplest form, a CA can be considered as a homogeneous array of cells in
one, two or more dimensions. Each cell has a finite number of discrete states. Cells
communicate with a number of local neighbors and update synchronously according
to deterministic rules that take into account their current states and the states of their
neighbors. When modeling physical systems using cellular automata, space is treated
as having finitely many locations per unit of volume. Each location is represented by
a cell and a state is associated with each cell.
To simulate Bead–Sort, we use a two dimensional, binary state CA: states 1 and 0
represent the presence and the absence of a bead respectively (see Figure 2.15). The
states of the CA cells together represent the state of the frame at a given time:
the state of the (i, j)th CA cell (i denotes columns and j, rows) reflects the presence/absence of a bead on the ith rod, j th row of the frame. Each cell (i, j) is updated
based on the states of its upper and lower neighbors (i.e., the cells (i, j − 1) and
(i, j + 1)) and its own current state. The CA rule to simulate beads rolling down row
by row uses the following logic: if there is a 1–cell (a bead) above a 0–cell (a vacant
position), then in the next time step, the 0 becomes a 1—the vacant spot gets the
bead—and vice versa. What follows are the actual CA rules written in the following
format: (current state of cell (i, j−1), current state of cell (i, j), current state of
cell (i, j+1)) → next state of cell (i, j).
CA rules to simulate Bead–Sort
Figure 2.15: Cellular automaton implementation.
(0, 0, 0) → 0
(0, 0, 1) → 0
(0, 1, 0) → 0
(0, 1, 1) → 1
(1, 0, 0) → 1
(1, 0, 1) → 1
(1, 1, 0) → 0
(1, 1, 1) → 1
The evolution of the CA while sorting {1, 3, 2, 1} is shown in Figure 2.15. Note that
in the worst case, there will not be more than n−1 sequential updates—state changes
in a CA are simultaneous—which leads to the worst case complexity of O(n). To deal
with boundary conditions, the CA should have two extra rows—two more than the
actual rows in the physical Bead–Sort frame in question: a topmost row of cells (fix
their states at 0) and a bottommost one (fix their states at 1).
2.5.4 P Systems
A brief introduction to P systems was given in Chapter 1 (Section 1.3.2). We use a
special type of P system—a tissue P system that computes by means of communication (using symport/antiport rules) only [8]. For technical literature on tissue
P systems and P systems with symport/antiport, see [46], [47], [48], [55] and [56].
Objects = Beads, Membranes = Rods
Beads are represented by objects x placed within the membranes; a rod is represented
by a group of membranes that can communicate with one another by means of
symport/antiport rules (see Figure 2.16). Note that the object ‘−’ represents the absence
of a bead. Thus, the objects x and ‘−’ together will reflect the bead–positions in the
initial state of the frame.
Moreover, the flow of objects between the group of membranes representing a
rod (using communication rules) reflects the actual motion of beads in the physical
system. The “counter membranes” (see Figure 2.16) along with the object p will serve
the purpose of generating clock pulses; they are useful especially to synchronize, while
ejecting the output in the desired sequence. We distinguish these from the other set
of membranes, namely, the “rod membranes”.
More formally, we have a tissue P system with symport/antiport of degree
(m × n) + n (where m × n membranes represent m rods with n rows, and another
Figure 2.16: Membranes = Rods, Objects = Beads.
n membranes serve as counters). Note that m rods with n rows can sort n positive
integers, the biggest among them being m.
In the following sections, we discuss the various symport/antiport rules used to
simulate Bead–Sort. A simple P system that can sort 3 integers—the biggest among
them is 3—is used for illustration. To explain the necessity of each rule, we initially
start with a system having only the basic rule, then gradually add more and more
complex rules until we arrive at the complete system. We formally define the system
only at the final stage.
Making the beads drop down
As outlined in the previous section, one can set up the initial state of the frame using
a tissue P system. Now, the objects x (representing the beads) should be made to
fall down like real beads in the physical system. This is achieved through a simple
antiport rule. See Figure 2.17 which demonstrates the rule with a simple tissue
P system representing 3 rods with 3 rows; we start with the unsorted set {2, 1, 3}.
The antiport rule initiates an exchange of objects x and ‘−’; for instance, according to
the rule (8, x | −, 5) inside membrane 8, x from membrane 8 will be exchanged with
‘−’ in membrane 5. This simulates the action of beads falling down. Figure 2.17 (c)
shows the sorted state of the frame; note that the antiport rule can no longer be
applied. Also, one can see that not more than (n − 1) steps will be consumed to reach
the sorted state using the application of this rule.
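A maximally parallel application of these exchange rules can be sketched in software as follows (our illustration, not part of the formal system; each object takes part in at most one rule per step):

```python
def antiport_step(grid):
    """grid[j][i] in {'x', '-'}: the object held by the rod-membrane at
    row j, rod i (row 0 on top).  Apply the antiport rules
    (row j, x | -, row j+1) once, in a maximally parallel manner."""
    rows, rods = len(grid), len(grid[0])
    nxt = [row[:] for row in grid]
    for i in range(rods):
        j = 0
        while j < rows - 1:
            if grid[j][i] == 'x' and grid[j + 1][i] == '-':
                nxt[j][i], nxt[j + 1][i] = '-', 'x'   # exchange x and '-'
                j += 2                                # both objects are used up
            else:
                j += 1
    return nxt

def p_sort(grid):
    """Run until the exchange rule is no longer applicable, then read the
    multiplicity of x on each row (top row first)."""
    while True:
        nxt = antiport_step(grid)
        if nxt == grid:
            return [row.count('x') for row in grid]
        grid = nxt
```

Starting from the unsorted configuration of Figure 2.17(a), `p_sort([['x','x','x'], ['x','-','-'], ['x','x','-']])` reaches the sorted state `[1, 2, 3]` in two steps, just as in Figure 2.17(c).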
Observe that the multiplicity of the object x in membranes on the same row taken
together—for example, membranes 7, 8 and 9 of Figure 2.17 form row 1—denotes an
integer. And, these integers are now in the sorted order, viewed row wise, as seen in
Figure 2.17(c).
Now, we discuss the rules for reading the output in the proper sequence, in the
following section:
Reading output
The output can be read in the proper sequence using only symport/antiport rules.
But, the rules are a little more complex. The idea is to first know when the P system
has reached the sorted state; then, the objects from the rod–membranes can be ejected
into the environment, starting from row n to row 1. The objects from membranes
on the same row will be ejected simultaneously as a single string to be externally
interpreted as representing a distinct integer. Thus the ejected output will be in the
following sequence:
String of objects from row n membranes, string of objects from row (n−1) membranes,
. . . string of objects from row 1 membranes.
Note that, externally, the multiplicity of x is calculated separately for each string;
the strings are ejected at different points of time.
Now, let us include new rules that accomplish these actions related to reading the
output. Figure 2.18 shows the inclusion of new symport rules in the rod–membranes.
Observe the inclusion of new objects c1 , c2 , c3 in the rules. These rules would
eject/symport the x objects from the membranes into the environment along with
the prompting objects c1 , c2 , c3 , but only if c1 , c2 , c3 are present. It is clear that, one
has to first send c1 , c2 , c3 into the rod–membranes in order to prompt the ejection of
the output. In what follows, we discuss the rules that do the prompting:
Remember, before prompting the ejection of output, the system has to first know,
and ensure, that it has reached the sorted state. The new rules to be discussed
help ensure this. Figure 2.19 shows new rules added to the counter–membranes.
Note the presence of object p within membrane 12. These new symport rules would
move p from one counter membrane to the other, driven by the global clock. After
n − 1 ticks/time steps of the global clock—2 ticks in our example—p would have been
symported to the counter–membrane denoting row n (membrane 10). Recall from the
previous section, not more than (n − 1) steps will be consumed for the bead–objects
to fall down and reach the sorted state.
Thus, the presence of p within row n counter–membrane (membrane 10) would
ensure that the bead–objects have settled down in a sorted state.7 Note that the
rules written in membrane 10 (row n counter–membrane) would start prompting
ejection of the output, after p is available in membrane 10. The rules get c1 , c2 ,
c3 , and another object q, from the environment and transfer them into the group of
membranes representing row n (1, 2 and 3 in our case), thus prompting the ejection
7 Note that we are forced to adopt this method because there seems to be no better way to ensure
whether the system has reached the final state or not; for instance, we cannot deduce this from the
states of individual membranes.
Figure 2.17: Making bead–objects fall down.
Figure 2.18: Ejecting the output.
Figure 2.19: Prompting the ejection of output from row 1.
Contents of Figure 2.20 (the objects q, r, c1 , c2 , c3 are available in arbitrary numbers
outside). The rules in the counter–membranes are:
membrane 12 (row 1; initially holds p): (12, p, 11), (12, r|qc1 c2 c3 , 0), (12, c1 , 7),
(12, c2 , 8), (12, c3 , 9);
membrane 11 (row 2): (11, p, 10), (11, q|rc1 c2 c3 , 0), (11, c1 , 4), (11, c2 , 5),
(11, c3 , 6), (11, r, 12);
membrane 10 (row 3): (10, p|qc1 c2 c3 , 0), (10, c1 , 1), (10, c2 , 2), (10, c3 , 3),
(10, q, 11).
The rules in the rod–membranes are:
membranes 7, 8, 9 (row 1; holding x, x, x): (7, x|−, 4), (8, x|−, 5), (9, x|−, 6),
(7, xc1 , 0), (8, xc2 , 0), (9, xc3 , 0);
membranes 4, 5, 6 (row 2; holding x, −, −): (4, x|−, 1), (5, x|−, 2), (6, x|−, 3),
(4, xc1 , 0), (5, xc2 , 0), (6, xc3 , 0);
membranes 1, 2, 3 (row 3; holding x, x, −): (1, xc1 , 0), (2, xc2 , 0), (3, xc3 , 0).
Figure 2.20: Prompting the ejection of output from rows 2 & 1.
of x objects as output. (Assume the presence of an arbitrary number of c1 , c2 , c3 in the
environment. The need for object q will be discussed shortly.)
We still need to add more rules, which prompt the ejection of output strings from
row n − 1 up to row 1. The idea is to move two new objects q and r alternately
through the counter–membranes, now from row n − 1 up to row 1 upwards8 . The
presence of objects q and r in the counter–membranes would trigger certain new rules
that would subsequently prompt output from row n − 1 up to row 1 rod–membranes,
one by one. (Note, in our case, n = 3.) These rules have been included in Figure 2.20.
(Assume the presence of arbitrary number of q’s and r’s in the environment.)
Sample illustration
We first formally define a tissue P system of degree 12 (3 rods × 3 rows + 3 counter–
membranes) with symport/antiport rules which can sort a set of three positive
integers, the maximum among them being three:
8
We cannot use p again as it has already been used by earlier rules; p would activate the same
set of actions as before, which is undesirable.
Figure 2.21: Sample illustration — intermediate steps (i) to (vi).
Figure 2.22: Sample illustration — steps (vii) to (x).
Π = (V , T , ω1 , ω2 , ..., ω12 , M0 , R)
where:
(i) V (the alphabet) = {x, −, p, q, r, c1 , c2 , c3 }
(ii) T (the output alphabet) ⊂ V
T = {x, c1 , c2 , c3 } (the multiplicity of c1 , c2 , c3 is to be ignored in the end)
(iii) ω1 , . . . , ω12 are strings representing the multisets of objects initially present in the
regions of the system. These are shown in Figure 2.21 (i).
(iv) M0 is a multiset of objects present in the environment.
M0 (x) = M0 (−) = M0 (p) = 0;
M0 (q) = M0 (r) = M0 (c1 ) = M0 (c2 ) = M0 (c3 ) = ∞.
(v) R is a finite set of symport/antiport rules. They are enumerated in Figure 2.20.
Note that we do not use a separate output membrane, but prefer to eject the output
into the environment.
Figures 2.21 and 2.22 illustrate sorting {2, 1, 3} with snapshots of all the intermediate configurations, clearly showing the evolution of the system step by step until
the output is read. Note that steps (i) to (vi) are shown in Figure 2.21 and steps (vii)
to (x) in Figure 2.22.
Note that the output strings from membranes in rows 3, 2 and 1 will be ejected
during steps (vi), (viii) and (x) respectively, as shown in Figures 2.21 and 2.22. One
has to ignore the c1 , c2 , c3 objects that accompany the x objects and take into account
only the multiplicity of x in each output string.
Thus, Bead–Sort can be implemented with a tissue P system that uses only symport/antiport rules. The complexity of the algorithm is O(n). As the system uses
only communication rules (symport/antiport), the procedure for reading the output
is rather tedious.
The P system built for sorting three numbers can be generalized for sorting any
n positive integers by simply adding more membranes with similar rules.
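The gravity mechanism underlying this construction also admits a compact software re–creation (our own sketch, separate from the membrane formulation above):

```python
def bead_sort(nums):
    """Simulate beads on vertical rods falling under gravity.

    Each number is a horizontal row of beads; after the beads fall,
    row sums read from the bottom up give the numbers in decreasing order.
    """
    if not nums:
        return []
    rods = max(nums)
    # rod j ends up holding one bead for every number greater than j
    beads_per_rod = [sum(1 for n in nums if n > j) for j in range(rods)]
    # row i (counting from the bottom) has a bead on rod j iff beads_per_rod[j] > i
    return [sum(1 for b in beads_per_rod if b > i) for i in range(len(nums))]

print(bead_sort([2, 1, 3]))  # the sample instance {2, 1, 3} -> [3, 2, 1]
```

As in the physical apparatus, the work per "time step" is done simultaneously across all rods; only the software simulation pays for it sequentially.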
Some of the antiport rules used involve 5 symbols; a future task could be to
try to avoid this, and keep the number of symbols involved close to 2, as is the
case in biology. This could be important from the point of view of a possible bio–
implementation.
The beads–and–rods apparatus used for sorting, searching, finding minimum,
maximum, etc. along with the mechanisms to load rows of beads and read the
output make up a full–fledged gravity powered natural–computer for efficient data
processing.
We have also discussed possible electronic, cellular–automaton, and membrane versions
of the gravity–based Bead–Sort natural–computer. Electronic implementations do
have their own appeal, but there is the danger of losing a certain essence of the actual
bead sorting process as we try to capture its features using “alien” systems. As
Norbert Wiener observed, “the best material model of a cat is another, or preferably
the same, cat”, and perhaps the best implementation of a natural algorithm is the
natural system itself.
Note that the performance of systolic sorting algorithms, i.e. parallel synchronized
algorithms for sorting, is similar to that of Bead–Sort (see [19] for a survey of various
parallel sorting algorithms): Thompson and Kung have adapted the bitonic sorting
scheme to a mesh–connected processor which could sort an array of size n in O(√n)
time, using n processors [68]. Muller and Preparata have used a modified sorting
network to sort n elements in O(log n) time with O(n2 ) processing elements [52] and
Hirschberg’s bucket sort algorithm [36] sorts n numbers with n processors in time
O(log n). See [63] where Bead–Sort is compared with some well–known sequential
sorting algorithms and the “practical use of Bead–Sort” is debated. See [35] and [37]
for detailed electronic circuits realizing Bead–Search and Bead–Sort.
Chapter 3
Solving SAT with Bilateral Computing
In the previous Chapter we demonstrated that, with reference to sorting and searching, natural algorithms can transcend the limits of classical algorithms. However,
sorting and searching are not among the hard computing problems; they are solvable
by classical means in polynomial time, and it is worthwhile trying to discover efficient natural algorithms for problems that are considered intractable. In this
Chapter we use yet another old–fashioned computing system, a system made out of
pipes and pistons, to solve a simple instance of SAT, a classic hard problem known
to be NP–complete [9]. The natural computing system we use
is a Bilateral Computing System (BCS)—a physical system in which there is no
intrinsic distinction between its parameters, such as “input parameters” and “output parameters”; one has the freedom to change, or assign a value to, any of the
parameters and watch what happens to the rest.
We start with a definition of SAT and an informal introduction to the notion of
bilateral computing, and then proceed to the design of logic gates based on principles
of liquid statics with which we solve SAT. Our approach is informal: the emphasis is
on motivation, ideas and implementation rather than on a formal description.
3.1 The problem of Satisfiability
The Satisfiability problem for Boolean formulas, commonly known as the SAT
problem, is one of the classic “hard problems” in computer science. Due to its
NP–completeness, any other hard problem in NP can be reduced in polynomial time to
SAT. Naturally, an efficient solution to SAT will translate into an efficient solution
to every NP problem.
The SAT problem is defined as follows: Given a Boolean formula with n variables,
determine whether it can be satisfied (the value of the formula made true) by a
truth assignment, i.e., by at least one possible assignment of the logical values true
or false to each variable. For instance, with the plus symbol denoting a ‘logical
or’ and juxtaposition (implied multiplication) denoting a ‘logical and’, the formula
(a + c + ē)(b̄)(b + c + d + e)(d̄ + e) is satisfiable, with the assignment: a = false, b = false,
c = true, d = false and e = false. On the other hand, the formula (a)(ā + b)(ā + c̄)(b̄ + c)
is not satisfiable, as there is no possible assignment to a, b and c that can make the
formula true. (See [14] for a detailed coverage of the topic.)

Figure 3.1: Unilateral and bilateral logic circuits.
What makes such an “easy question” hard to answer is that often one has to
exhaustively test all 2^n possible truth assignments to the n variables in a formula.
Naturally, the exponential number of possible assignments leads to a combinatorial
explosion as n grows, and hence the problem’s hardness.
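The exhaustive test just described can be sketched in a few lines (the encoding of clauses as lists of (variable, negated) pairs is our own convention, not the thesis's):

```python
from itertools import product

def satisfiable(clauses, variables):
    """Naively test all 2^n truth assignments of a CNF formula.

    Each clause is a list of (variable, negated) literal pairs; a literal
    is true when the variable's value differs from its negation flag.
    """
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(any(assignment[v] != neg for v, neg in clause)
               for clause in clauses):
            return assignment   # found a satisfying assignment
    return None                 # all 2^n assignments failed

# The unsatisfiable formula from the text: (a)(ā + b)(ā + c̄)(b̄ + c)
clauses = [[('a', False)],
           [('a', True), ('b', False)],
           [('a', True), ('c', True)],
           [('b', True), ('c', False)]]
print(satisfiable(clauses, ['a', 'b', 'c']))   # prints None
```

The loop makes the combinatorial explosion explicit: the `product` call enumerates exactly the 2^n assignments the text speaks of.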
3.2 Bilateral Computing
Figure 3.1 illustrates the idea of a Bilateral Computing System in the context of solving SAT. We use the word “bilateral” as in electrical circuit theory: circuit elements
like resistors that can conduct current equally well in either direction are bilateral as
opposed to diodes which cannot.
Figure 3.1(a) is the normal “unilateral” type of a Boolean circuit that has predesignated input and output parameters: The inputs are fed into x1 , x2 , . . . , xn and the
output is obtained at y. Figure 3.1(b) is a hypothetical “bilateral” Boolean circuit
in which there is no such distinction: Any of the parameters can serve as “input” or
“output”; one has the freedom to assign a value to any one or more of the parameters
and watch what happens to the rest. For instance, if one assigns a 1 (i.e., the value
true) to y, the circuit would automatically assume a set of values for x1 , x2 , . . . ,
xn that are consistent with y = 1. In other words, the x1 , x2 , . . . , xn are “forced”
to assume one of those possible combinations of 0’s and 1’s, if any, that could make
y = 1. The kind of action that the circuit would take, when there is no such possible
assignment to x1 , . . . , xn —i.e., the formula is unsatisfiable—depends on how exactly
the BCS is physically realized.
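In software, the bilateral behaviour can only be imitated, not obtained spontaneously: one clamps any subset of the parameters and searches for values of the rest. The following sketch (our own convention for representing the circuit as a function) illustrates the idea:

```python
from itertools import product

def settle(circuit, clamped, variables):
    """Return all assignments consistent with the clamped parameters.

    `circuit` maps a full input assignment to the output value y.
    Clamping y imitates "assigning a 1 to y" on a bilateral circuit.
    """
    solutions = []
    for values in product([0, 1], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        assignment['y'] = circuit(assignment)
        if all(assignment[k] == v for k, v in clamped.items()):
            solutions.append(assignment)
    return solutions

# y = x1 OR x2; clamping y = 1 "forces" the inputs into consistent states
circuit = lambda a: a['x1'] | a['x2']
print(settle(circuit, {'y': 1}, ['x1', 'x2']))   # three consistent settings
```

A physical BCS would pick one of the returned configurations on its own; the search here stands in for that spontaneous settling.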
In what follows we discuss the design of logic gates that behave as BCSs.
Figure 3.2: Logic gates based on liquid statics.
3.3 Bilateral AND, OR gates
In this section we describe a design of bilateral logic gates based on liquid statics.
Figure 3.2 shows designs for the 2–input AND and OR gates. Each gate consists of
three limbs joined together and a working liquid that can move freely within. Pistons
are fitted to the two limbs, x and y, that represent the inputs of the gate; they can be
pushed in, at will, say by giving a gentle pressure on them or by adding weights. A
“push” on a piston represents a 1–input (logical true); a “no-push”—piston in raised
position—represents a 0–input (logical false). (See Figure 3.2; note that there are
“markers” on the limbs; a “push” input should move the piston past the “marker”
to signal a 1–input.) The output is the liquid level in the third limb (z): a raised
liquid level above a certain threshold represents 1, and a lower level corresponds to
0. There is a “marker” in the limb that denotes the threshold level. In the OR gate,
the threshold for the output liquid level is adjusted in such a way that a “push” on just
one of the two inputs causes the level to go past the threshold. In the AND gate,
it is just the opposite: the threshold is set higher, so as to require both pistons to
be pushed simultaneously. Note that each “push” is a predefined, fixed measure and
cannot exceed a certain limit.
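In the forward direction, the two gates can be modelled with a push count and a threshold. The specific numeric thresholds below are our assumption; the thesis fixes them only qualitatively:

```python
def liquid_gate(pushes, threshold):
    """Each push displaces one unit of liquid into the output limb z;
    z reads 1 when the level reaches the gate's threshold marker."""
    level = sum(pushes)            # displaced liquid raises the output limb
    return 1 if level >= threshold else 0

# OR: one push suffices to clear the marker; AND: both pushes are needed
OR_gate  = lambda x, y: liquid_gate([x, y], threshold=1)
AND_gate = lambda x, y: liquid_gate([x, y], threshold=2)

assert [OR_gate(x, y)  for x, y in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [0, 1, 1, 1]
assert [AND_gate(x, y) for x, y in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [0, 0, 0, 1]
```

Only the forward direction is captured here; the bilateral (reverse) behaviour of the physical gates has no such one-line counterpart.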
Why do we call these gates bilateral? The working liquid can be moved forwards
and backwards, and the input pistons can be moved up and down not only by direct
manual handling, but also indirectly by raising the liquid level in the output limb.
Imagine sucking the liquid level in the output limb up; this would indirectly affect
the position of the input pistons, as well. Thus, certain allowable input settings of
the gate can be indirectly obtained by just maneuvering the output, in the reverse
direction, and hence the gates are bilateral.
The design of a NOT gate is quite straightforward and will be discussed in the
next section.
Figure 3.3: An instance of SAT.
3.4 Solving an instance of SAT
Given the simple Boolean formula (ā+b)(a+b) whose truth table is given in Table 3.1,
we assemble the bilateral logic gates (discussed in the previous section) together to
form a BCS, as illustrated in Figure 3.3. The OR gates—gates labeled 1 and 2 in
Figure 3.3—realize (ā+b) and (a+b) respectively, and the AND gate (gate 3) combines
their outputs.
Table 3.1: Truth table for (ā + b)(a + b).
a   b   (ā + b)(a + b)
0   0   0
1   0   0
0   1   1
1   1   1
Pistons on the input/output sides of each gate are called input pistons and output pistons. There is a seesaw–like structure1 connecting the input pistons of gates 1
and 2, which take the inputs ā and a; it ensures that the input values (push and
1
This is an implementation of the bilateral NOT gate.
no-push) assigned to the pistons are complements of each other. Also, observe the
bar–like structure connecting the other two pistons representing the two inputs labeled b; it ensures that the values assigned to the pistons are the same. Both these
structures ensure consistency in the assignment of values to input variables. Observe
the long seesaw structures connecting the outputs of the OR gates—gates 1 and 2 in
Figure 3.3—with input pistons of the AND gate (gate 3). They are the mechanisms
that convert the outputs of gates 1 and 2, indicated by liquid levels, to equivalent
push/no-push inputs, which are subsequently applied to the AND gate. For instance,
suppose the output of gate 1 is 1—indicated by a raised liquid level in the output limb, and thus, a raised output piston—the mechanism will, in turn, produce a
push input (1–input) to the AND gate.
We briefly explain the overall behavior of the setup. It is easy to see that the
input pistons connected to the OR gates—gates 1 and 2—can be set, say, manually
in one of the two satisfiable input configurations (recall Table 3.1 for its satisfiable
assignments); and the output of the AND gate will be 1 in such cases, which is
indicated by the liquid level rising above a threshold.
Now, consider the reverse case: suppose the output of the AND gate—gate 3 of
Figure 3.3—is assigned 1, i.e., the liquid level is forcibly raised, say, by means of
suction. This, as a result, would exert a force on the other pistons through the working
liquid, and would set them all “in motion” and would eventually place the pistons
automatically in one of the two satisfiable configurations. One of the two satisfiable
piston configurations is shown in Figure 3.3; if the seesaw structure connecting gates
1 and 2 is reversed, i.e. a up and ā down, we get the other possible configuration.2
The above described effect will evolve in a sequence, each event triggering the
next:
(a) Liquid–level in output limb of gate 3 — forcibly raised.
(b) Input pistons of gate 3 — pushed down.
(c) Output pistons of gates 1 and 2 — raised.
(d) Input pistons of gates 1 and 2 — some pushed down and some others raised,
depending on the constraints.
Now, what if the formula chosen is not satisfiable? What if there is no allowable
input piston configuration leading to a 1–output? In that case, trying to raise the
liquid level in the output limb would push the system to an abnormal state, say, a
noticeable rise in pressure. This should be detected and suitably dealt with, say,
by releasing a safety valve or a cork, which would signal the unsatisfiability of the
formula.
To make the device workable, the following assumptions are adopted:
2
Initially, there might be a state of “oscillation”, when pistons move up and down, trying to settle
down in one of the two possible configurations; but eventually they will reach a state of equilibrium
which cannot be one of the non–satisfying configurations, due to the physical arrangement. However, due to slight changes in the initial conditions, the device might arrive at a different satisfiable
configuration each time we run it.
1. The seesaw structure is perfect. It always keeps the pistons on either side in
complementary states — one up and one down — and never allows them to assume
the same position.
2. The push inputs given to the gates undergo “squashing”. The push inputs to
the OR and AND gates cannot exceed a certain predefined limit and will be duly
“squashed” if they exceed it.
3. The piston is moved slowly. Only then can the process be treated as adiabatic,
with no heat gained or lost. This will simplify the time complexity analysis.
As the working liquid can be moved forwards and backwards, the system acts as
a BCS. When the liquid level of the output limb of gate 3 is raised forcibly by suction
or other means, this will exert a force on the other pistons through the working liquid,
set them in motion and eventually place the pistons automatically in one of the two
satisfiable configurations—the two different possible positions of the seesaw structure
connecting gates 1 and 2. Note that the suction force, applied at the output limb,
will naturally try to push down as many input pistons in every gate as is allowed by
the built–in constraints in the hardware representing the formula.
3.5 Time complexity
Given a CNF formula we can uniformly realize a general SAT device by having a tree
of k − 1 binary OR gates for each clause of k literals and then ‘logically and’ together
m clauses with another tree of m − 1 binary AND gates. We can also consider a
general SAT device in which the binary gates (of Figure 3.2) are replaced with m–ary
gates3 . Thus, the clauses of a CNF formula can be represented using OR gates whose
outputs are combined by a single AND gate. Also, corresponding to each variable in
the formula, say, a, we assume that there is a generic seesaw structure that connects
all a’s onto one side and all ā’s on the opposite side.
The response time t of a SAT device, after suction is applied to the output gate, is
the sum of two components: t1 , the time taken by the impulses generated by suction
to reach the farthest piston and t2 , the time taken by the pistons to react, due to
friction. The time t1 depends on λ, the velocity of sound in the liquid medium and
the geometric dimensions of the apparatus. Thus, t1 = d/λ, where d is the maximum
distance the wave has to travel—from the point where suction is applied to the farthest
piston in the apparatus. Therefore, t = (d/λ) + t2 , where λ and t2 are constants and
d grows linearly with the number of gates, the physical size of the individual gates
being uniform in the binary case and unbounded in the m–ary case.
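For a feel of the magnitudes, here is the formula with illustrative constants (our own numbers, chosen purely for scale; sound travels in water at roughly 1480 m/s):

```python
def response_time(d, wave_speed=1480.0, t2=0.05):
    """t = d / lambda + t2, with distances in metres and times in seconds.

    wave_speed (lambda) and the friction-limited reaction time t2 are
    illustrative constants, not measured values from the thesis.
    """
    return d / wave_speed + t2

# d grows linearly with the number of gates; assume 0.1 m of piping per gate
for gates in (10, 100, 1000):
    d = 0.1 * gates
    print(gates, "gates:", response_time(d), "s")
```

Even at a thousand gates, t is dominated by the constant t2: the wave-propagation term stays in the millisecond range.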
3
An m–ary OR gate can be constructed by adjusting the threshold for the output liquid level in
such a way that a “push” on just one of the m inputs causes the level to go past the threshold.
In the AND gate, the threshold is set much higher, so as to require all m pistons to be pushed
simultaneously.
3.6 Bilateral Computing vs. Reversible Computing
We can mathematically characterize a Bilateral Computing System as follows: Consider a surjective (not necessarily bijective) function f : X → Y and the set of functions G = {g : Y → X | f (g(y)) = y, ∀y ∈ Y }. A bilateral computing system is one
which can implement f as well as some g ∈ G, using the same intrinsic “mechanism”
or “structure”. In this sense reversible computing systems can be seen as special
cases of Bilateral Computing Systems. Reversible computing, according to Toffoli, is
“based on invertible primitives (as opposed to those like OR and AND) and composition rules that preserve invertibility” [69]; [17, 30] are two other seminal papers in
this area. Naturally, it is possible to deduce the input that will produce a given output on a reversible computer. However, when the function we wish to compute is not
invertible, it has to be painstakingly expressed in terms of invertible primitives first.
And, the reversible computer realizing it will have auxiliary input/output lines apart
from those that figure in the original function—added so as to make the function
invertible. These additional lines, besides adding to the complexity of the gadget,
will entail the following: Before one can attempt to deduce the input that produces
a certain output on the reversible computer, one will have to guess the values of the
auxiliary output lines as well, which may not be possible without knowing the input
in the first place! There is no such problem with a BCS, in principle.
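The characterization can be illustrated with a toy numeric example (our own): a surjective but non–injective f, together with one right inverse g from G:

```python
# f is surjective onto the integers but not injective:
# both 2k and 2k + 1 map to k.
f = lambda x: x // 2

# One member g of G: f(g(y)) == y for every y, even though f has no
# two-sided inverse (g(f(x)) != x in general).
g = lambda y: 2 * y

assert all(f(g(y)) == y for y in range(100))   # g right-inverts f
assert g(f(3)) != 3                            # 3 -> 1 -> 2: not a left inverse
```

A bilateral system realizing f would, on being handed a y, settle into *some* preimage, i.e., compute some such g, without f ever being re-expressed in invertible primitives.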
Moreover, Bilateral Computing and reversible computing have different motivations: The motivation for reversible computing, as explicitly stated in [69], is the
urge to reduce heat dissipation in computers, and thus achieve higher density and
speed. On the other hand, the motivation for Bilateral Computing is to discover natural physical systems that spontaneously maintain a certain relation between their
parameters: Vary one or more of them, and the system will “instinctively” react by
varying the rest, thus keeping the relation intact. The very nature of such a bilateral
system is that there are no designated “input parameters” whose values can be set
manually and “output parameters” that can only announce the output. Any of the
parameters can be manually varied and the values of the rest viewed as “output”. In
other words, the motivation is to look for physical (computing) systems in which the
distinction between “input” and “output” is blurred.
In this Chapter we discussed bilateral computing mainly in the context of solving
SAT. The following Chapter discusses the same in a more general context. It should
be noted that we have illustrated the working of a gadget for solving SAT with a very
small SAT instance; we have not proved the correctness of the approach for all cases.
An approach to solve SAT using fluidic logic can be found in Appendix B.
Chapter 4
Balance Machines
In the preceding Chapters, we have discussed natural algorithms for specific computing problems. Departing from this routine, we discuss a generic natural computational
model, the Balance Machine, which we first proposed in [12]. In a more recent paper [13] we introduced a formalism for computing with balance machines—a convenient notation for representing the model on paper, and demonstrated its expressive
power by solving NP–complete problems. The balance machine is a mechanical model
of computation consisting of components that resemble ordinary physical balances, as
the name suggests. The aim is to show that one single operation, “balancing”, suffices
in principle to perform any kind of computation. We show that the balance machine
can do universal computation. We give numerous examples of using the model to
solve problems—even NP–complete ones like SAT, Set Partition and Knapsack.
One of the key features of the balance machine is its bidirectional operation:
both a function and its partial inverse can be computed spontaneously using the
same machine. We discuss bilateral computing, which was introduced in the previous
Chapter, once again in detail in the context of balance machines.
4.1 Computing as a “balancing feat”
In this section, we simply attempt to convey the essence of the model without delving
into technical details. A detailed account of the proposed model will be given in
Section 4.2. The computational model consists of components that resemble ordinary
physical balances (see Figure 4.1), each with a natural tendency to spontaneously
balance their left and right pans: If we start with certain fixed weights, representing
inputs, on some1 of the pans, then the balance–like components would vigorously try
to balance themselves by filling the rest of the pans with suitable weights representing
the outputs. Roughly speaking, the proposed machine has a natural ability to load
itself with output weights that “balance” the input. This balancing act can be viewed
as a computation. There is just one rule that drives the whole computing process:
1
The machine is not necessarily a single balance; it can be a combination of two or more balances.
Figure 4.1: Physical balance.
the weights on the left and right pans of the individual balances should be made equal.
Note that the machine is designed in such a way that the balancing act would happen
automatically by virtue of physical laws: i.e., the machine is self–regulating.2 One
of our aims is to show that all computations can be ultimately expressed using one
primitive operation: balancing. Armed with the computing = balancing intuition, we
can see basic computing operations in a different light. In fact, an important goal
is to show that this sort of intuition suffices to conceptualize/implement any
computation performed by a conventional computer.
4.2 The Balance Machine model
At the core of the proposed natural computational model are components that resemble a physical balance. In ancient times, the shopkeeper at a grocery store would
place a standard weight in the left pan and would try to load the right pan with
a commodity whose weight equals that on the left pan, typically through repeated
attempts. The physical balance of our model, though, has an intrinsic self–regulating
mechanism: it can automatically load (without human intervention) the right pan
with an object whose weight equals the one on the left pan. See Figure 4.2 for a
possible implementation of the self–regulating mechanism.
In general, unlike the one in Figure 4.2, a balance machine may have more than
just two pans. There are two types of pans: pans carrying fixed weights which remain unaltered by the computation, and pans with variable liquid weights that are
changed by activating the filler–spiller outlets. Some of the fixed weights represent
the inputs, and some of the variable ones represent outputs. The inputs and outputs
of balance machines are, by default, non–negative reals unless stated otherwise. The
following steps constitute a typical computation by a given balance machine:
2
If the machine cannot eventually balance itself, it means that the particular instance does not
have a solution.
Figure 4.2: A self–regulating balance.
The source is assumed to have an arbitrary amount of a liquid–like substance. When activated, the
filler outlet lets liquid from the source into the right pan; the spiller outlet, on being activated, allows
the right pan to spill some of its contents. The balancing is achieved by the following mechanism:
the spiller is activated if at some point the right pan becomes heavier than the left (i.e., when
push button (2) is pressed) to spill off the extra liquid; similarly, the filler is activated to add extra
liquid to the right pan just when the left pan becomes heavier than the right (i.e., when push button
(1) is pressed). Thus, the balance machine can “add” (sum up) inputs x and y by balancing them
with a suitable weight on its right: after being loaded with inputs, the pans would go up and down
till they eventually find themselves balanced.
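The filler–spiller feedback admits a discrete caricature in code (a sketch only; the real mechanism is continuous and self–acting):

```python
def self_regulating_balance(left_weight, step=0.125, tolerance=1e-6):
    """Drive the right pan toward the left pan's weight:
    push button (1) activates the filler when the left pan is heavier,
    push button (2) activates the spiller when the right pan is heavier."""
    right = 0.0
    while abs(right - left_weight) > tolerance:
        if right < left_weight:
            right += min(step, left_weight - right)   # filler outlet active
        else:
            right -= min(step, right - left_weight)   # spiller outlet active
    return right

x, y = 2.5, 1.25
print(self_regulating_balance(x + y))   # the machine "adds" x and y
```

The loop is exactly the negative-feedback rule of the figure: each imbalance triggers the outlet that reduces it, until the pans agree.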
(i) Plug in the desired inputs by loading weights onto the pans. Pans with
variable weights can be left empty or assigned arbitrary weights.
This defines the initial configuration of the machine.
(ii) Allow the machine to balance itself: the machine automatically adjusts
the variable weights till left and right pans of the balance(s) become equal.
(iii) Read output: weigh liquid collected in the output pans, using a standard weighing instrument.
See Figure 4.3 for pictorial, textual representations of a machine that adds two quantities. In the textual representation, the keyword balance represents a balance with left
and right pans. The variable names to the left and the right of the comma represent
weights on the left and right pans respectively. Those beginning with small letters
represent fixed weights and the ones starting with capital letters, variable weights.
Note that there can be more than one variable on each side of the comma, connected
together by the ‘+’ sign, which simply denotes a “grouping” of weights that are attached to the same side of the balance. More details regarding syntax will be given
shortly by way of examples, and the syntax is formally described at the end of this
Chapter using context–free grammar rules.

balance(a + b, A).
Symbols: the horizontal “bar” indicates that the weights on both sides of it should balance; the ‘+’ sign joins two (or more) pans whose weights add up (these weights need not balance each other). Small letters and numerals represent fixed weights; capital letters represent variable weights.
Figure 4.3: Pictorial, textual representations of a simple balance machine that performs addition.
We wish to point out that, to express computations more complex than addition,
we would require a combination of such balance machines. Section 4.3 gives examples
of more complicated machines.
4.3 Computing with balance machines: Examples
In what follows, we give examples of a variety of balance machines that carry out a
wide range of computing tasks—from the most basic arithmetic operations to solving
linear simultaneous equations. Balance machines that perform the operations increment, decrement, addition, and subtraction are shown in Figures 4.4, 4.5, 4.6, and 4.7,
respectively. Legends accompanying the figures detail how they work.
Balance machines that perform multiplication by 2 and division by 2 are shown in
Figures 4.8, 4.9, respectively. Note that in these machines, one of the weights or pans
takes the form of a balance machine3 which demonstrates a kind of recursion.
3
The weight contributed by a balance machine is assumed to be simply the sum of the individual
weights on each of its pans. The weight of the bar and the other parts is not taken into account for
the sake of simplicity.
balance(x + 1, Z).
Figure 4.4: Increment operation.
Here x represents the input and Z represents the output. The machine computes increment(x).
Both x and ‘1’ are fixed weights clinging to the left side of the balance machine. The machine
eventually loads into Z a suitable weight that would balance the combined weight of x and ‘1’.
Thus, eventually Z = x + 1, i.e., Z represents increment(x).
balance(X + 1, z).
Figure 4.5: Decrement operation.
Here z represents the input and X represents the output. The machine computes decrement(z).
The machine eventually loads into X a suitable weight so that the combined weight of X and ‘1’
would balance z. Thus, eventually X + 1 = z, i.e., X represents decrement(z).
balance(x + y, Z).
Figure 4.6: Addition operation.
The inputs are x and y and Z represents the output. The machine computes x + y. The machine
loads into Z a suitable weight, so that the combined weight of x and y would balance Z. Thus,
eventually x + y = Z, i.e., Z would represent x + y.
balance(x + Y, z).
Figure 4.7: Subtraction operation.
Here z and x represent the inputs and Y represents the output. The machine computes z − x. The
machine loads into Y a suitable weight so that the combined weight of x and Y would balance z.
Thus, eventually x + Y = z, i.e., Y would represent z − x.
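The four machines above share one pattern: a single balance with exactly one variable pan, which settles to whatever non–negative weight equalizes the two sides. A sketch of that settling rule, applied to all four figures:

```python
def settle_balance(fixed_side_total, variable_side_extra=0.0):
    """One balance, one variable pan: the pan takes whatever non-negative
    weight makes the two sides equal (liquid weights cannot be negative)."""
    weight = fixed_side_total - variable_side_extra
    if weight < 0:
        raise ValueError("no non-negative weight can balance the pans")
    return weight

x, y, z = 4.0, 3.0, 9.0
print(settle_balance(x + 1))   # Fig. 4.4, balance(x + 1, Z):  Z = x + 1
print(settle_balance(z, 1))    # Fig. 4.5, balance(X + 1, z):  X = z - 1
print(settle_balance(x + y))   # Fig. 4.6, balance(x + y, Z):  Z = x + y
print(settle_balance(z, x))    # Fig. 4.7, balance(x + Y, z):  Y = z - x
```

The non-negativity check mirrors the physical constraint that a pan cannot hold less than no liquid at all.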
balance(balance(a, B), A).
Figure 4.8: Multiplication by 2.
Here a represents the input and A represents the output. The machine computes 2a. The combined
weights of a and B should balance A: a + B = A; also, the individual weights a and B should
balance each other: a = B. Therefore, eventually A will assume the weight 2a.
balance(balance(A, B), a).
Figure 4.9: Division by 2.
The input is a and let A represent the output. The machine “exactly” computes a/2. The combined
weights of A and B should balance a so that A + B = a; also, the individual weights A and B should
balance each other: A = B. Therefore, eventually A will assume the weight a/2.
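The two recursive machines each impose a pair of constraints; solving the pairs by hand mirrors what the hardware settles to (a sketch of the algebra, not of the dynamics):

```python
def multiply_by_two(a):
    # balance(balance(a, B), A): the inner balance forces B = a,
    # the outer balance forces A = a + B, so A = 2a
    B = a
    A = a + B
    return A

def divide_by_two(a):
    # balance(balance(A, B), a): the inner balance forces A = B,
    # the outer balance forces A + B = a, hence A = a / 2
    A = a / 2.0
    return A

assert multiply_by_two(7) == 14
assert divide_by_two(7) == 3.5
```

Note how the "recursion" of the figures shows up as one constraint nested inside another, not as a loop.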
We now introduce another technique of constructing a balance machine: having
a common weight shared by more than one machine. Another way of visualizing the
same situation is to think of pans belonging to two different machines being placed
one over the other. We use this idea to solve a simple instance of linear simultaneous
equations. See Figures 4.10 and 4.11 which are self–explanatory. In the textual representation a semicolon has been used to separate the individual balances; weights/pans
that are shared among the individual machines bear the same variable names.
Though balance machines are basically analog computing machines, we can implement Boolean operations (AND, OR, NOT) using balance machines, provided we
make the following assumption: There are threshold units that return a desired value
when the analog values, representing inputs and outputs, exceed a given threshold
and return some other value otherwise. See Figures 4.12, 4.13, and 4.14 for balance machines that implement logical operations AND, OR, and NOT respectively.
We represent true inputs with the analog value 10 and false inputs with 5; when the
output exceeds a threshold value of 5 it is interpreted as true, and as false otherwise.
(Instead, we could have used the analog values 5 and 0 to represent true and false;
but, this would force the AND gate’s output to a negative value for a certain input
combination.) Tables 4.1, 4.2, and 4.3 give the truth tables, along with the actual
input/output values.
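Under this threshold convention, each gate's balance equation can be solved for its output pan. The sketch below (ours, using the 10/5 encoding and the balance equations of Figures 4.12–4.14) reproduces Tables 4.1–4.3.

```python
# Balance-machine logic gates with the encoding true = 10, false = 5.
# Each gate's output pan settles so that its balance equation holds;
# a threshold unit then reads any value above 5 as true.

TRUE, FALSE = 10, 5
THRESHOLD = 5

def AND(x, y): return x + y - 10   # balance(x + y, Z + 10) => Z = x + y - 10
def OR(x, y):  return x + y - 5    # balance(x + y, Z + 5)  => Z = x + y - 5
def NOT(x):    return 15 - x       # balance(x + Y, 15)     => Y = 15 - x

def read(z):
    """Threshold unit: analog outputs above 5 are interpreted as true."""
    return z > THRESHOLD

# Check the gates against ordinary Boolean logic (Tables 4.1-4.3):
for x in (FALSE, TRUE):
    for y in (FALSE, TRUE):
        assert read(AND(x, y)) == (read(x) and read(y))
        assert read(OR(x, y)) == (read(x) or read(y))
    assert read(NOT(x)) == (not read(x))
```

Observe that AND(5, 5) yields the analog value 0, exactly as recorded in Table 4.1; the threshold unit still reads it as false.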
4.4 Universality of balance machines
The balance machine model is capable of universal discrete computation, in the sense
that it can simulate the computation of a practical, general purpose digital computer.
We can show that the model allows us to construct those primitive hardware components that serve as the building blocks of a digital computer: logic gates, memory
cells and transmission lines.
1. Logic gates
We can construct AND, OR, NOT gates using balance machines as shown in
Section 3. We could also realize any given Boolean expression by connecting
balance machines (primitives) together using “transmission lines” (see the third
item, “Transmission lines”).
2. Memory cells
The weighing pans in the balance machine model can be viewed as “storage”
areas.
Also, a standard S–R flip-flop can be constructed by simply cross coupling two
NOR gates, as shown in Figure 4.15 (see Table 4.4 for its state table). They can
be implemented with balance machines by replacing the OR, NOT gates in the
diagram with their balance machine equivalents in a straightforward manner.
3. Transmission lines
A balance machine, like machine (2) of Figure 4.16, that does nothing but a
x + y = 8; x − y = 2
balance(X1 + Y1, 8); balance(Y2 + 2, X2); balance(X1, X2); balance(Y1, Y2).
Figure 4.10: Solving simultaneous linear equations.
The constraints X1 = X2 and Y1 = Y2 will be taken care of by balance machines (3) and (4).
Observe the sharing of pans between them. The individual machines work together as a single
balance machine.
x + y = 8; x − y = 2
balance(X + Y, 8); balance(Y + 2, X).
Figure 4.11: Solving simultaneous linear equations (simpler representation).
This is a simpler representation of the balance machine shown in Figure 4.10. Machines (3) and (4)
are not shown; instead, we have used the same (shared) variables for machines (1) and (2).
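The shared-pan idea can be sketched as a relaxation process. In the code below (our illustration; the update rule is an assumption, not the Thesis's physical dynamics), the shared pans X and Y drift against the net imbalance of the two coupled balances of Figure 4.11 until both are level.

```python
# Relaxation sketch of the shared-pan machine of Figures 4.10/4.11:
# balance(X + Y, 8) and balance(Y + 2, X) share the pans X and Y, so the
# system settles only when x + y = 8 and x - y = 2 hold simultaneously.

def solve_shared_pans(iters=500, rate=0.1):
    X = Y = 0.0
    for _ in range(iters):
        e1 = (X + Y) - 8       # imbalance of machine (1)
        e2 = (Y + 2) - X       # imbalance of machine (2)
        X -= rate * (e1 - e2)  # each shared pan moves against its net imbalance
        Y -= rate * (e1 + e2)
    return X, Y

X, Y = solve_shared_pans()
print(round(X, 6), round(Y, 6))   # settles near X = 5, Y = 3
```

The updates are gradient steps on the total squared imbalance, so the pans converge to the unique solution x = 5, y = 3.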
Table 4.1: Truth table for AND.
x           y           Z
5 (false)   5 (false)   0 (false)
5 (false)   10 (true)   5 (false)
10 (true)   5 (false)   5 (false)
10 (true)   10 (true)   10 (true)

Table 4.2: Truth table for OR.
x           y           Z
5 (false)   5 (false)   5 (false)
5 (false)   10 (true)   10 (true)
10 (true)   5 (false)   10 (true)
10 (true)   10 (true)   15 (true)

Table 4.3: Truth table for NOT.
x           Y
5 (false)   10 (true)
10 (true)   5 (false)

Table 4.4: State table for S–R flip-flop.
S   R   Q                     Q̄
0   0   previous state of Q   previous state of Q̄
0   1   0                     1
1   0   1                     0
1   1   undefined             undefined
balance(x + y, Z + 10).
Figure 4.12: AND logic operation.
x and y are inputs to be ANDed and Z represents the output. The balance realizes the equality
x + y = Z + 10.
balance(x + y, Z + 5).
Figure 4.13: OR logic operation.
Here x and y are inputs to be ORed; Z represents the output. The balance realizes the equality
x + y = Z + 5.
balance(x + Y, 15).
Figure 4.14: NOT logic operation.
Here x is the input to be negated; Y represents the output. The balance realizes the equality
x + Y = 15.
Figure 4.15: S–R flip-flop constructed using cross coupled NOR gates.
Figure 4.16: Balance machine as a “transmission line”.
Balance machine (2) acts as a transmission line feeding the output of machine (1) into the input of
machine (3).
“copy” operation—copying whatever is on the left pan to the right pan—would
serve both as a transmission equipment, and as a delay element in some contexts.
(The pans have been drawn as flat surfaces in the diagram.) Note that the left
pan of machine (2) is of the fixed type, with no spiller–filler outlets, and the
right pan is a variable one.
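The flip-flop construction under “Memory cells” can be sketched in code. The simulation below is hypothetical (not from the Thesis): each NOR is a balance-machine OR followed by a NOT, snapped back to the 10/5 encoding by the assumed threshold units, and the cross-coupled pair of Figure 4.15 is iterated to a fixed point.

```python
# S-R latch from two cross-coupled balance-machine NOR gates (Figure 4.15).

def quantize(z):
    """Threshold unit assumed in the text: snap an analog value to 10/5."""
    return 10 if z > 5 else 5

def NOR(x, y):
    # OR then NOT: 15 - ((x + y) - 5) = 20 - x - y, re-quantized.
    return quantize(20 - x - y)

def sr_latch(S, R, Q=5, Qbar=10, iters=8):
    """Iterate the cross-coupled NORs simultaneously to a fixed point."""
    for _ in range(iters):
        Q, Qbar = NOR(R, Qbar), NOR(S, Q)
    return Q, Qbar

# Set, then hold, then reset (true = 10, false = 5), as in Table 4.4:
Q, Qbar = sr_latch(S=10, R=5)                   # set:   Q becomes 10
Q, Qbar = sr_latch(S=5, R=5, Q=Q, Qbar=Qbar)    # hold:  Q stays 10
Q, Qbar = sr_latch(S=5, R=10, Q=Q, Qbar=Qbar)   # reset: Q becomes 5
```

The re-quantization step is essential here: without the threshold units, the analog NOR outputs would drift outside the 10/5 encoding as they feed back.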
Note that the computational power of Boolean circuits is equivalent to that of a
Finite State Machine (FSM) with a bounded number of computational steps (see Theorem 3.1.2 of [61]).4 But balance machines are “machines with memory”: using
them we can build not just Boolean circuits, but also memory elements such as flip–
flops. Thus, the power of balance machines surpasses that of mere bounded FSM
computations; to be precise, they can simulate any general sequential circuit. (A
sequential circuit is a concrete machine constructed of gates and memory devices.)
Since any finite state machine, with bounded or unbounded computations, can be
realized as a sequential circuit using standard procedures (see [61]), one can conclude that balance machines have at least the computing power of unbounded FSM
computations. Given the fact that any practical, general purpose digital computer
with only limited memory can be modeled by an FSM, we can in principle construct
such a computer using balance machines. Note, however, that notional machines like
Turing machines are more general than balance machines. Nevertheless, standard
4 Also, according to Theorem 5.1 of [14] and Theorem 3.9.1 of [61], a Boolean circuit can simulate any T–step Turing machine computation.
“physics–like” models in the literature like the Billiard Ball Model [30] are universal
only in this limited sense: there is actually no feature analogous to the infinite tape
of the Turing machine.
4.5 Bilateral Computing revisited
An important property of balance machines is that they are bilateral computing devices. We introduced the notion of bilateral computing in Chapter 3. Recall that
bilateral computing devices can compute both a function and its (partial) inverse,
using the same mechanism. For instance, the same underlying balancing mechanism
is used in both increment and decrement operations (see Figures 4.4 and 4.5), except
for the fact that we change fixed weights to variable ones and vice versa. Also, compare machines that (i) add and subtract (see Figures 4.6 and 4.7) and (ii) multiply
and divide by 2 (see Figures 4.8 and 4.9).
There is a fundamental asymmetry in the way we normally compute: while we are
able to design circuits that can multiply quickly, we have relatively limited success in
factoring numbers; we have fast digital circuits that can “combine” digital data using
AND/OR operations and realize Boolean expressions, yet no fast circuits that can
determine the truth value assignment satisfying a Boolean expression. Why should
computing be easy when done in one “direction”, and not so when done in the other
“direction”? In other words, why should inverting certain functions be hard, while
computing them is quite easy? It may be because our computations have been based
on rudimentary operations like addition, multiplication, etc. that force an explicit
distinction between “combining” and “scrambling” data, i.e. computing and inverting
a given function. On the other hand, a primitive operation like balancing does not
do so. It is the same balance machine that does both addition and subtraction: all
it has to do is to somehow balance the system by filling up the empty variable pan
representing output; whether the empty pan is on the right (addition) or the left
(subtraction) of the balance does not particularly concern the balance machine! In
the bilateral scheme of computing, there is no need to develop two distinct intuitions—
one for addition and another for subtraction; there is no dichotomy between functions
and their (partial) inverses.
4.6 Three applications: SAT, Set Partition and Knapsack
In this Section we show how three classic NP–complete problems can be solved under
a bilateral computing scheme, using a special type of balance machines having the
following properties:
(i) The filler–spiller outlets let out liquid only in discrete “drops”.
(ii) There is a maximum weight which each pan can hold.
Satisfiability of (a + b)(ā + b)

balance(A + B, 10 + 5 + Extra1);
balance(Ā + B, 10 + 5 + Extra2);
balance(A + Ā, 15).

Truth table:
a   b   (a + b)(ā + b)
0   0   0
0   1   1
1   0   0
1   1   1

Figure 4.17: Solving an instance of SAT.
The satisfiability of the formula (a + b)(ā + b) is verified. Observe that each of the two clauses
has to be true for the expression to be satisfiable. Machines (1), (2) and (3) work together, sharing
the variables A, B and Ā between them. OR gates, labeled 1 and 2, realize (a + b) and (ā + b)
respectively, and the NOT gate (labeled 3) ensures that a and ā are “complementary”. Note that
the “outputs” of gates (1) and (2) are set to 10. Now, one has to observe the values eventually
assumed by the variable weights A and B (which represent the “inputs” of OR gate (1)). Given the
assumptions already mentioned, one can easily verify that the machine will balance, assuming one
of the two following settings: (i) A = 5, B = 10 (Extra1 = 0, Extra2 = 5) or (ii) A = 10, B = 10
(Extra1 = 5, Extra2 = 0). These are the only configurations that make the machine balanced.
In situations when both the left pans of gate (1) assume 10, Extra1 will automatically assume 5 to
balance off the extra weight on the left side. (Extra2 plays a similar role in gate (2).)
For the time being, we make no claims regarding the time complexity of the approach, since we have not analyzed the time characteristics of balance machines. However, we believe that balance machines will outperform digital computers when
applied to NP–complete problems, owing to their inherent “bilateral parallelism”.
4.6.1 The SAT problem
The SAT problem has already been stated in Section 3.1 of Chapter 3. The main
idea used to solve SAT is this: first realize the given Boolean expression using gates
made of balances; then, by setting the pan that represents the outcome of the Boolean
expression to the analog value representing true, the balance machine can be made
to automatically assume a set of values for its inputs that would “balance” it. In
other words, by setting the output to be true, the inputs are forced to assume one of
those possible truth assignments, if any, that generate a true output. The machine
would never balance when there is no such possible input assignment to the inputs
(i.e., the formula is unsatisfiable). This is like operating a circuit realizing a Boolean
expression in the “reverse direction”: assigning the “output” first, and making the
circuit produce the appropriate “inputs”, rather than the other way round. See
Figure 4.17 where we illustrate the solution of a simple instance of SAT. We have the
following assumptions:
(i) Analog values 10 and 5 are used to represent true and false respectively.
(ii) The filler–spiller outlets let out liquid only in discrete “drops”, each
weighing 5 units.
(iii) The maximum weight a pan can hold is 10 units.
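Under assumptions (i)–(iii), the machine of Figure 4.17 can be emulated by brute-force enumeration of the discrete drop values. The sketch below (our illustration, not the Thesis's own procedure) checks the three balance equations of the figure directly; `A`, `Abar`, `B` and the `Extra` pans each hold 0, 5 or 10 units.

```python
# Brute-force emulation of the discrete-drop balance machine of Figure 4.17
# for the formula (a + b)(a' + b). Both OR-gate outputs are pinned to 10
# (true); the Extra pans absorb any surplus weight on the input side.

from itertools import product

DROPS = (0, 5, 10)   # pans fill in drops of 5 and hold at most 10

def satisfying_settings():
    found = []
    for A, Abar, B, E1, E2 in product(DROPS, repeat=5):
        if (A + B == 10 + 5 + E1          # gate (1): balance(A + B, 10 + 5 + Extra1)
                and Abar + B == 10 + 5 + E2   # gate (2): balance(A' + B, 10 + 5 + Extra2)
                and A + Abar == 15):          # gate (3): balance(A + A', 15)
            found.append((A, B))
    return found

print(satisfying_settings())
```

The two settings found, (A, B) = (5, 10) and (10, 10), are exactly the configurations listed in the caption of Figure 4.17.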
4.6.2 The Set Partition problem
Set Partition, a well–known NP–complete problem [31], can be stated as follows:
Input: A set S of positive numbers.
Question: Can S be “split” into two disjoint subsets S1 , S2 such that S1 ∪ S2 = S and
sum(S1 ) = sum(S2 ), where sum(S1 ) and sum(S2 ) represent the sum of the elements
in S1 and that of the elements in S2 , respectively?
In what follows we will try to rephrase the problem in such a way that one could use
the “Balance Machine vocabulary” to address it.
Consider a set S = {x1 , x2 , . . . xn } of cardinality n. Let us suppose, we can
partition it into disjoint sets S1 and S2 as required. The existence of a partition can
be pictured as follows: imagine two sets of “pockets”—n pockets on the left side and
n on the right side of a balance, say, which are meant to hold the elements of S; the
elements from S can be seen as being distributed/scattered across the pockets—some
are held by left side pockets, and the rest on the right (of course, some pockets will be
left empty). The pockets on the left, taken together, and those on the right weigh the
same. Note that every element in S will be found in one of the pockets, either in one
of the left ones or in those on the right, never on both sides. To make things simple
and orderly, we can safely assume that the first element in S will be found either in
the first left side pocket or in the first right side pocket (i.e., one of them will always
be empty), the second element in S will be found either in the second left pocket or
in the second right pocket, and so on. This scheme of picturing the partition can be
formally expressed as follows.
Let variables l1 , l2 , . . ., ln and r1 , r2 , . . ., rn represent the left and the right pockets
respectively. Then, every element xi ∈ S will be held by either li or ri , i.e. one of the
following conditions will be true:
A) li = xi and ri = 0.
B) li = 0 and ri = xi .
The following condition will also be true since we can partition S:
C) l1 + l2 + . . . + ln = r1 + r2 + . . . + rn .
Conversely, consider a set S of cardinality n (whether it is partitionable is not
known yet): If one can generate n left and n right pockets for which there exists an
assignment, obeying condition (A) or (B), such that l1 +l2 +. . .+ln = r1 +r2 +. . .+rn ,
then there should exist a way to partition S—into disjoint sets S1 and S2 . This is
because l1 + l2 + . . . + ln can be seen as constituting sum(S1 ) and r1 + r2 + . . . + rn
as sum(S2 ).
Thus, the problem of deciding if a given set can be partitioned or not can be recast
as the problem of generating the required number of left and right pockets/variables and
checking if an assignment satisfying conditions (A) or (B), and (C), exists. And this
“new” version of the problem is more suitable for solving with balance machines. A
possible solution is discussed below:
Given a set S = {x1 , x2 , . . . xn } of cardinality n, we use the special type of balance machines that we mentioned in the beginning of Section 4.6 to check if S can
be partitioned or not.
balance(L1 + L2 + . . . + Ln, R1 + R2 + . . . + Rn);
balance(L1 + R1, x1);
balance(L2 + R2, x2);
...
balance(Ln + Rn, xn).
Note that the maximum weight which pans representing variables Li and Ri can
hold is xi , and that they are filled up in “drops”, each weighing xi . This creates
an “all or nothing” situation for the pans, thus satisfying either condition (A) or
(B). When there is a way to partition the set, the balances would stop, assigning
variables L1 , L2 , . . ., Ln , R1 , R2 , . . ., Rn with suitable values that constitute one such
partition; when such a partition is impossible, the balances would keep “swinging”
up and down, trying out various possible assignments forever, and would not come to a halt5.
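The balance system above can be emulated by brute force. In the sketch below (our illustration), each choice bit decides whether xi sits in the left pocket Li or the right pocket Ri, mirroring the all-or-nothing drops; the main balance demands that the two sides weigh the same.

```python
# Brute-force emulation of the set-partition balance machine: each pan Li
# holds either 0 or xi (all-or-nothing drops), Ri takes the remainder of
# xi, and the main balance requires sum(L) == sum(R).

from itertools import product

def partition(S):
    for choice in product((0, 1), repeat=len(S)):
        L = [x * c for x, c in zip(S, choice)]
        R = [x - l for x, l in zip(S, L)]
        if sum(L) == sum(R):
            return L, R
    return None   # no partition exists: the real machine would swing forever

print(partition([3, 1, 1, 2, 2, 1]))   # splits into two sides of sum 5 each
print(partition([1, 2, 4]))            # None: sum 7 cannot be halved
```

Of course, this enumeration takes exponential time in the worst case; the hope expressed in the text is that the physical machine settles without an explicit search.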
4.6.3 The 0/1 Knapsack problem
In this Section we attempt to solve yet another NP–complete problem using balance machines—the 0/1 Knapsack [31], which can be defined as follows:
Input: Objects representing “weights” and their corresponding “values”: (w1, v1),
(w2, v2), . . ., (wn, vn), a knapsack capacity w and a target value t.
Question: Is there an A ⊆ {1, 2, ..., n} such that ∑i∈A wi ≤ w and ∑i∈A vi ≥ t?
Let us initially consider a slightly modified version of the problem: we shall force
∑i∈A wi = w and ∑i∈A vi = t. We can use an approach quite similar to the one used
to solve the Set Partition problem.
Use n pockets x1 , x2 , . . . , xn to represent weights and another set of n pockets
y1 , y2 , . . . , yn to represent their corresponding values (unlike set partition, we do not
have “left” and “right” pockets). The following conditions are to be met with:
A) A pocket is either empty, or contains a weight/value: (xj = 0 or xj = wj ) and
(yj = 0 or yj = vj ).
B) If a weight–pocket is empty, then so is the corresponding value–pocket:
xj = 0 ⇒ yj = 0 .
C) If a weight–pocket contains a weight, then the corresponding value–pocket will
contain its value: xj = wj ⇒ yj = vj .
D) The weight–pockets should together weigh w and the value–pockets should weigh t:
x1 + x2 + . . . + xn = w and y1 + y2 + . . . + yn = t .
We make use of the special type of balance machines used in the case of Set Partition.
balance(X1 + X2 + . . . + Xn, w);
balance(Y1 + Y2 + . . . + Yn, t);
balance(X1 + Y1, Sum1);
balance(X2 + Y2, Sum2);
...
balance(Xn + Yn, Sumn).
Note that the maximum weights which pans representing variables Xi, Yi and
Sumi can hold are wi, vi and wi + vi respectively, and that they are filled up
only in “drops”, each weighing wi, vi and wi + vi respectively. This “all or nothing”
restriction helps satisfy (A), (B) and (C). When there is a solution, i.e. a subset
satisfying the given conditions, the balances would stop, assigning variables X1 , X2 ,
. . ., Xn , Y1 , Y2 , . . ., Yn with values that constitute one such solution; when such a
solution is impossible, the balances would keep “swinging” up and down, trying
out various possible assignments forever, and would not come to a halt.
5 We do not know of a way to actually detect this situation.
In fact, we can solve the original version of the 0/1 Knapsack Problem if we replace
the first two balances with the following:
balance(Extra1 + X1 + X2 + . . . + Xn , w);
balance(Y1 + Y2 + . . . + Yn , t + Extra2 );
Now, due to the inclusion of Extra1 on the left side, the quantity X1 + X2 + . . . + Xn
could afford to be even less than w, since Extra1 can fill itself up with whatever it
takes to balance w. Likewise Extra2 allows us to obtain more than the target value t.
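The modified system with the slack pans can be emulated the same way. In this sketch (ours, not the Thesis's), the two slack pans correspond exactly to the inequalities of the original problem statement: Extra1 absorbs unused capacity and Extra2 absorbs surplus value.

```python
# Brute-force emulation of the 0/1 Knapsack balance machine with slack:
# balance(Extra1 + X1 + ... + Xn, w) is satisfiable iff total weight <= w,
# and balance(Y1 + ... + Yn, t + Extra2) iff total value >= t.

from itertools import product

def knapsack(items, w, t):
    """items: list of (weight, value) pairs; returns chosen indices or None."""
    for choice in product((0, 1), repeat=len(items)):
        W = sum(wi for (wi, _), c in zip(items, choice) if c)
        V = sum(vi for (_, vi), c in zip(items, choice) if c)
        if W <= w and V >= t:
            return [i for i, c in enumerate(choice) if c]
    return None   # no solution: the balances would keep swinging

print(knapsack([(2, 3), (3, 4), (4, 5)], w=5, t=7))   # picks items 0 and 1
```

As with Set Partition, the software emulation is an exponential search; only the physical balances are claimed to settle “spontaneously”.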
As said earlier, one of our aims has been to show that all computations can be
ultimately expressed using one primitive operation: balancing. The main thrust is
to introduce a natural intuition for computing by means of a generic model,
not to give a detailed physical realization of the model. We have not analyzed the time
characteristics of the model, which might depend on how we ultimately implement
the model.
A formal description of the syntax of the textual notation used for representing
balance machines is given below, using context–free grammar rules:
Statement → Balances.
Balances  → Balance ; Balances | Balance
Balance   → balance(Weight, Weight)
Weight    → Weight + Weight | NonnegativeReal | Variable | Balance
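A hypothetical recursive-descent parser for this notation (our sketch, not part of the Thesis) follows the grammar directly; Weight's left recursion is rewritten as a '+'-separated repetition, numbers parse as floats, and anything else parses as a variable.

```python
# Recursive-descent parser for the balance-machine notation.
import re

TOKEN = re.compile(r"\s*(\d+(?:\.\d+)?|[A-Za-z_]\w*|[();,+.])")

def tokenize(s):
    s, pos, out = s.strip(), 0, []
    while pos < len(s):
        m = TOKEN.match(s, pos)
        if not m:
            raise SyntaxError(f"bad character at position {pos}")
        out.append(m.group(1))
        pos = m.end()
    return out

def parse(s):
    toks = tokenize(s)
    i = 0
    def peek():
        return toks[i] if i < len(toks) else None
    def eat(t):
        nonlocal i
        if peek() != t:
            raise SyntaxError(f"expected {t!r}, got {peek()!r}")
        i += 1
    def balance():                       # Balance -> balance(Weight, Weight)
        eat("balance"); eat("(")
        left = weight(); eat(","); right = weight(); eat(")")
        return ("balance", left, right)
    def weight():                        # Weight -> Term (+ Term)*
        terms = [term()]
        while peek() == "+":
            eat("+")
            terms.append(term())
        return terms[0] if len(terms) == 1 else ("+", terms)
    def term():                          # Term -> number | variable | Balance
        t = peek()
        if t == "balance":
            return balance()
        if t is None or not (t[0].isdigit() or t[0].isalpha() or t[0] == "_"):
            raise SyntaxError(f"unexpected token {t!r}")
        eat(t)
        return float(t) if t[0].isdigit() else ("var", t)
    stmts = [balance()]                  # Balances -> Balance (; Balance)*
    while peek() == ";":
        eat(";")
        stmts.append(balance())
    eat(".")                             # Statement ends with a period
    return stmts

print(parse("balance(X + Y, 8); balance(Y + 2, X)."))
```

For example, the nested expression of Figure 4.8, `balance(balance(a, B), A).`, parses into a balance whose left weight is itself a balance.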
Finally, we suggest that one of the possible answers to the question, “What does
it mean to compute?” is: “To balance the inputs with suitable outputs (on a suitably
designed balance machine).”
Chapter 5
Conclusion
Windmills, natural algorithms and surfing—disparate as they are—seem to suggest
the same thing: That it is possible to hitch a ride on “freely occurring” natural
processes. Traditionally, we painstakingly develop algorithms to solve computing
problems. The Natural Algorithms paradigm which we have explored in this Thesis
suggests that we can discover algorithms in the form of natural physical phenomena.
In this Thesis, we have explored simple natural physical systems that exhibit useful, interesting computational effects. Moreover, using such physical systems we have
designed natural–computers to solve specific problems: sorting, searching, finding the
average, finding the shortest path between two nodes in a graph, SAT, etc.
In particular, we have developed a gravity based natural–computer that can perform data processing tasks like sorting, searching, etc. in O(√N) time on an unsorted
database of size N. We have also introduced a new computing paradigm, namely bilateral computing, in which no explicit distinction is made between computing and
inverting a given function. Using a bilateral computing device that works on the
principles of liquid statics a small instance of SAT has been solved with apparent
efficiency. Further, we have also introduced a generic bilateral computational model,
Balance Machines, and have proved it to be universal.
We now conclude by briefly discussing the usefulness of natural algorithms, both
as a theoretical concept and in practical computing; the unique features of natural algorithms that distinguish them from conventional algorithms; and, possible directions
of research in the future.
5.1 Why natural algorithms?
In this era of programmable, general purpose digital computers, why would someone
develop special purpose natural–computers? Two specific questions arise: (i) Why
entertain a different computing approach? Aren’t conventional algorithms, developed
in the classical computing framework, good enough? (ii) Can’t we just simulate
natural–computers on a digital computer?
The answer to the first question is well–known: Classical algorithms hitherto
known are simply not good enough for solving certain classes of problems in a reasonable amount of time—NP–complete problems like SAT, for instance. Therefore,
efficient natural algorithms or other unconventional algorithms for such problems are
a boon. Even in the case of not–so–hard computing problems like sorting and searching, it is worthwhile if natural algorithms can surpass the theoretical bounds for their
complexity.
Let us address the second question now. It is true that physical systems—and
therefore, natural–computers—can be simulated on a digital computer. One might
even find an efficient, polynomial time simulation of a natural–computer that solves an NP–complete problem (for instance, the device with pistons in Chapter 3, used to solve SAT)1. Nevertheless, a real physical system is preferable to its simulation for the following reasons:
First of all, there is a spontaneity in the way natural physical systems work, and
this seems to be absent in computer programs; elements of nature have a disposition to act in certain ways. For example, it is spontaneous for Adenine to combine
with Thymine (Watson–Crick pairing in DNA molecules), for light to take that path
which takes the minimal time between two points (Fermat’s principle2 ), for soap
films to take up the minimum surface area, for water levels to be the same in all
interconnected limbs (see Section 1.2.1 of Chapter 1 where we used this property to
average numbers) and for beads to fall in a sorted order (see Chapter 2). Though the
above can be simulated computationally, what results is only a kind of “simulated–
spontaneity”! Abstaining from the kind of discussion this would lead to, we simply
state the following obvious difficulty in producing a simulated–spontaneity: the programmer has to thoroughly understand the process, figure out how exactly nature
does it and then explicitly program it, accurately, which can be arduous3 . On the
other hand, with a natural-computer, the means to achieve a desired end comes “for
free”: We do not explicitly program a natural–computer, i.e. we do not spell out
in algorithmic terms the “rules” the individual components/molecules/particles that
make up a natural physical system ought to follow while interacting and producing
the overall functional behavior. However, it is important that we understand how a
natural–computer (viewed as a holistic system) works, and also assert, say, using a
blend of mathematical and experimental methods, that it will work the way we think
it would.
Also, in some cases, the actual physical system might evolve much faster than
1 Please see [27], where an approach to establishing that every finitely realizable physical system can be perfectly simulated by a universal quantum computer is discussed.
2 The path taken by a ray of light between any two points in a system is always the path that takes the least time [38].
3 The numerous problems faced by graduate students at the University of Auckland while attempting to simulate the “simple” natural algorithm (discussed in Section 1.2.1 of Chapter 1) for finding the average of numbers with a cellular automaton [64, 24] exemplify this.
its fastest known computer simulation: the physical system might leap to its next
state before the computer could predict it. For the sake of illustration, let us consider
protein folding, a biological process of considerable interest today. Given a linear
sequence of amino acids, into what three dimensional configuration will the sequence
fold? To answer this question with reference to a reasonably long sequence, a supercomputer may take an unreasonably long time. Indeed, a particular mathematical
formulation (minimal energy) of protein folding has been shown to be NP –complete,
as mentioned in [70]. Yet, nature does it amazingly fast4 . In fact, it would be much
easier to allow the protein to fold and wait, to know the answer! In some cases, it
might be wiser to simply watch the actual physical process to unfurl its own next
state rather than try to compute it. (Simulation seems to come with an extra price
of having to compute the next state. Reality does not suffer from this burden; it just
does it.)
Finally, there is a whole new side to Natural Algorithms other than building practicable natural–computers: It inspires researchers to look at computation in a new light,
to think and talk about computation in terms of a new “vocabulary” comprising natural physical processes, as opposed to a set of logical operations. Natural–computers
may or may not become practical computing gadgets of the future, but the new way of
thinking will certainly persist. Moreover, natural algorithms have a great pedagogical
value since they are ideally based on simple, familiar natural phenomena.
5.2 Where do we go from here?
Possible directions of future research on Natural Algorithms are aplenty. Following
are some of them.
• Bilateral Computing
Systems that are inherently bilateral abound in nature. A few examples have
been discussed in this Thesis. A lot more can be explored and possibly utilized
as natural computing systems, especially to solve hard combinatorial problems
conventionally solved by brute force. A mathematical framework for bilateral
computing can also be developed.
• Balance Machines
In this Thesis, Balance Machines have been introduced as a natural computational model. However, it was originally conceived as a mechanical model for
the human brain (see [5]); more work can be done in this direction. Balance
machines might be viewed as systems with a brain–like capability to learn: The
weights on the pans of a balance machine can be seen as being analogous to
4 Experimentalists have not yet worked out the time characteristics of the actual biological process, i.e. how it varies as a function of the length of the amino acid sequence, as asserted in [70]. We just know that it is very quick, “some as fast as a millionth of a second”, as reported in [79].
weights of a perceptron5 ; it may be possible to design balance machines that
spontaneously adjust their weights to produce a desired input–output mapping.
• Optical natural–computers
Light has various interesting properties which can be explored to find out if
they can be used to compute. Dominik Schultes has suggested a natural algorithm for sorting (at the speed of light!) based on refraction and dispersion
of light [65]: the sorting is accomplished by a natural phenomenon in which
light entering a prism gets “sorted” by wavelength. Jason Stevens has suggested some computing applications based on the interesting fact that light can
spontaneously follow the path that takes the least time between any two points
(Fermat’s principle) [67]. Considering the speed of light which is unmatched,
the prospects of optical natural–computers seem quite attractive indeed!
• Natural–computers driven by chemical reactions
In this Thesis, natural physical systems have been the primary focus; in the systems that we have studied, we have not looked at interactions at the molecular
level. One can also look for chemical reactions with useful computational effects:
Input data represented in the form of “reactants” can be transformed, using a
suitable chemical reaction, into “products” representing the output and this
may be viewed as a computation. DNA Computing which relies on biochemical
reactions is just one possibility amongst a myriad other ways to make use of
chemical reactions for computing. The Chemical Casting Model (CCM) [43]
uses a set of reaction rules, which resemble chemical equations, as its “program”; this model has been used to solve combinatorial optimization problems
efficiently. Alan Creak has suggested a gel electrophoresis–based chemical implementation of Bead Sort [26].
Natural Algorithms is a fledgling area of research. Much more research has to
be done before one can answer questions as to whether natural algorithms would
supersede their conventional counterparts or eventually replace them in practice.
At this stage, it should suffice to just appreciate the fact that nature is a mine of
algorithms. This is simply the moment to echo what Len Adleman observed during
one of his lectures on DNA Computing [3]: It’s not that the bear dances so well, it’s
that he dances at all.
5 The perceptron is one of the well–known mathematical models for the neuron [4].
Appendix A
Input–Output Devices for Bead–Sort
Figure A.1: Loading a row–of–beads in parallel.
Pushing button 1 will make one bead drop down, pushing button 2 will make two beads drop down in parallel, and so on.
Figure A.2: Reading output.
Imagine a mechanism (not shown) that puts a “stamp” on the beads as they roll along the rods:
those rolling down rod 1 should have the label ‘1’, those dropping along rod 2, the label ‘2’, and so
on. Now, each row–of–beads can be read out as an integer by just looking at the frame from the
side, from the direction indicated by the arrow. The label on the last bead in every row, which is
the integer representing the row, is the only one visible to the viewer.
Appendix B
Solving SAT with Fluidic Logic
Fluidic logic is about implementing Boolean functions using streams of fluid. The
presence or absence of a fluid flowing through pipes represents the logical values true
and false respectively. See Figure B.1 for our implementation of logic gates using
fluidic logic.
Figure B.1: Fluidic logic.
OR gate: Fluid flow through at least one of the inlets (pipes 1 and 2) will result in fluid coming
out of the outlet (pipe 3). AND gate: Only when fluid flows through both inlets is the plunger
pushed back, allowing fluid to come out of the exposed outlet.
We show how a simple instance of SAT can be solved using our fluidic gates.
Realize (x + y)(x + ȳ), the instance of SAT we are solving, as shown in Figure B.2
using the gates. Balls 1, 2, 3 and 4 block fluid flow when they are in the constricted
sections (a, b, c, d) of the pipes. The idea is to pump a fluid through the inlet in such
a way that it will try to force its way out through the outlet; the fluid, while trying to
get out, sets the balls in a “satisfiable” configuration—a physical arrangement that
allows the fluid to flow out. We imagine a physical arrangement, the details of which are
not clearly shown in the figure, that allows only balls 1 and 4 to travel alongside
each other: Either both of them can be found in the corresponding constricted sections
(a and d) or in the expanded sections (A and D). This arrangement sees to it that
Figure B.2: Solving an instance of SAT.
both the x–inputs have the same value. On the other hand, balls 2 and 3, which
“decide” the fluid flow through y and ȳ, should not travel alongside each other; if one
allows fluid flow, the other should not. They are tied together with a loose non–elastic
string in such a way that when one ball is pushed by the fluid towards an expanded
section, the other will be forced into the constricted section. We assume that the
string will always be kept taut by the force of the fluid pumped in; this means that
the balls will be separated to the maximum extent possible, i.e. as far as the length
of the string would allow. By choosing suitable values for the length of the string and
the diameter of the balls, one can always ensure that one of the following conditions
is always true: (i) Ball 2 is inside B and ball 3 is inside c (fluid flows through y but
not through y ). (ii) Ball 2 is inside b and ball 3 is inside C (fluid flows through y but
not through y). This arrangement is similar to the seesaw arrangement representing
a bilateral NOT gate discussed in Chapter 3.
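The effect of the string can be captured as an exclusive constraint between the two ball positions; a hypothetical check (the predicate and its argument names are ours):

```python
def string_constraint(ball2_in_B: bool, ball3_in_C: bool) -> bool:
    # The taut string permits exactly one of balls 2 and 3 to occupy
    # an expanded section (B or C) at any time, so fluid flows through
    # exactly one of y and its negation: a bilateral NOT.
    return ball2_in_B != ball3_in_C
```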
Now, when fluid is pumped in through the inlet with enough pressure to open up
the outlet, the balls position themselves in a physical configuration that allows
fluid to flow through the outlet. The figure shows one such configuration. In case the
formula chosen is unsatisfiable, unlike the one we have considered, there is no such
physical configuration allowing fluid flow, and enormous pressure builds up in the
system. One can design a safety valve, not shown in the figure, that bursts open in
reaction to such a situation, thereby indicating the “unsatisfiability” of the formula.
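On a conventional computer the settling process can only be mimicked by searching the admissible ball configurations. The following sketch (our code, not part of the physical proposal) enumerates truth assignments for a CNF formula and reports either a flow-admitting configuration or the “pressure build-up” case:

```python
from itertools import product

def pump(clauses):
    """Mimic pumping fluid: search ball configurations (variable
    assignments) for one that lets fluid reach the outlet.
    A clause is a list of literals (name, polarity); ('y', False)
    stands for the negated literal.  Returns a satisfying assignment,
    or None when pressure would simply build up (unsatisfiable)."""
    names = sorted({name for clause in clauses for name, _ in clause})
    for values in product((False, True), repeat=len(names)):
        assignment = dict(zip(names, values))
        if all(any(assignment[n] == pol for n, pol in clause)
               for clause in clauses):
            return assignment   # fluid flows out of the outlet
    return None                 # no configuration admits flow

# The instance of Figure B.2, (x + y)(x + ȳ):
print(pump([[('x', True), ('y', True)], [('x', True), ('y', False)]]))
# → {'x': True, 'y': False}
```

Note that this simulation inspects exponentially many configurations in the worst case, whereas the physical device is claimed to settle into an admissible one spontaneously.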
Bibliography
[1] L. M. Adleman. Computing with DNA, Scientific American (August, 1998),
34–41.
[2] L. M. Adleman. Molecular computation of solutions to combinatorial problems,
Science 266 (1994), 5187, 1021–1024.
[3] L. M. Adleman. Caltech’s Computing Beyond Silicon Summer School lecture, http://www.cs.caltech.edu/cbsss/2002/schedule/slides/adleman
dnacompute.pdf.
[4] J. A. Anderson, E. Rosenfeld. Neurocomputing: Foundations of Research, The
MIT Press, Cambridge, 1988.
[5] J. J. Arulanandham. Perhaps, this is how the brain computes?!, Unpublished
document, 2001.
[6] J. J. Arulanandham. The Bead–Sort. Animation, www.cs.auckland.ac.nz/~jaru003/BeadSort.ppt.
[7] J. J. Arulanandham, C. S. Calude, M. J. Dinneen. Bead–Sort: A natural sorting
algorithm, EATCS Bull. 76 (2002), 153–162.
[8] J. J. Arulanandham. Implementing Bead-Sort with P systems, In Proc. 3rd
International Conference on Unconventional Models of Computation, UMC ’02,
2002, 115–125.
[9] J.J. Arulanandham, C.S. Calude, M.J. Dinneen. Solving SAT with bilateral
computing, Romanian Journal of Information Science and Technology 6, 1–2
(2003), 9–18.
[10] J. J. Arulanandham, C. S. Calude, M. J. Dinneen. A fast natural algorithm for
searching, Theoretical Computer Science 320, 1 (2004), 3–13.
[11] J. J. Arulanandham, C. S. Calude, M. J. Dinneen. A fast natural algorithm
for searching, In Proc. 1st South-East European Workshop on Formal Methods,
SEEFM03, 2003, 189–199.
[12] J. J. Arulanandham, C. S. Calude, M. J. Dinneen. Balance machines: Computing = balancing, Lecture Notes in Comput. Sci. 2933, Springer–Verlag, Berlin,
2003, 36–47.
[13] J. J. Arulanandham, M. J. Dinneen. Balance machines: A new formalism for
computing, CDMTCS Report 256 (2004).
[14] J. L. Balcázar, J. Díaz, J. Gabarró. Structural Complexity I, Springer–Verlag,
Berlin, 1988.
[15] P. Ball. Liquid logic, http://www.nature.com/nsu/010329/010329-8.html.
[16] A. Barenco, A. Ekert, A. Sanpera and C. Machiavello. A short introduction to
quantum computing, http://www.qubit.org.
[17] C. H. Bennett. Logical reversibility of computation, IBM J. Res. Dev. 6 (1973),
525–532.
[18] G. P. Berman, G. D. Doolen, R. Mainieri, V. I. Tsifrinovich. Introduction to
Quantum Computers, World Scientific, Singapore, 1999.
[19] D. Bitton, D. J. DeWitt, D. K. Hsiao, J. Menon. A taxonomy of parallel sorting,
Comput. Surveys 16 (1984), 3, 287–318.
[20] G. Brassard, P. Bratley. Fundamentals of Algorithmics, Prentice–Hall, N.J.,
1996.
[21] C. S. Calude. Personal communication.
[22] C. S. Calude, Gh. Păun, M. Tataram. A glimpse into Natural Computing,
J. Multi Valued Logic 7 (2001), 1–28.
[23] C. S. Calude, Gh. Păun. Computing with Cells and Atoms: An Introduction
to Quantum, DNA, and Membrane Computing, Taylor and Francis, New York,
2000.
[24] J. Chen. Simulation of the liquid–based natural algorithm for finding the average, Course work, Unconventional Models of Computation, University of Auckland, 2004.
[25] D. T. Chiu, E. Pezzoli, H. Wu, A. D. Stroock, G. M. Whitesides. Using three–
dimensional microfluidic networks for solving computationally hard problems,
In Proc. National Academy of Sciences USA 98 (2001), 2961–2966.
[26] G. A. Creak. Personal communication.
[27] D. Deutsch. Quantum theory, the Church-Turing principle and the universal
quantum computer, In Proc. Royal Society Series A 400 (1985), 97–117.
[28] R. P. Feynman. Simulating physics with computers, Int. J. of Theor. Phys. 21
(1982), 467–488.
[29] R. P. Feynman. Feynman Lectures on Computation, Perseus Publishing, Massachusetts, 1999.
[30] E. Fredkin, T. Toffoli. Conservative logic, Int’l J. Theoret. Phys. 21 (1982),
219–253.
[31] M. Garey, D. Johnson. Computers and Intractability: A Guide to the Theory of
NP–completeness, Freeman, San Francisco, 1979.
[32] L. Giavitto, J. Cohen, O. Michel. MGS simulation of Arulanandham–Calude–
Dinneen Bead–Sort, http://www.lami.univ-evry.fr/mgs/ImageGallery/
mgs gallery.html#beadsort.
[33] T. Gramss, S. Bornholdt, M. Gross, M. Mitchell, T. Pellizzari. Nonstandard
Computation, Wiley–VCH, New York, 1998.
[34] L. K. Grover. A fast quantum mechanical algorithm for database search, In
Proc. Twenty-Eighth Annual ACM Symposium on the Theory of Computing,
1996, 212–219.
[35] D. Haug. Two search algorithm implementations for digital Bead–Sort, Course
work, Unconventional Models of Computation, University of Auckland, 2004.
[36] D. S. Hirschberg. Fast parallel sorting algorithms, Commun. ACM 21 (1978),
8, 657–666.
[37] D. Husselmann, M. Qiu. The Bead–Sort algorithm, Software Engineering
Project, University of Auckland, 2004.
[38] A. Isaacs. A Dictionary of Physics, Oxford University Press, New York, 2000.
[39] C. Isenberg. Problem solving with soap films: Part I, Phys. Educ. 10 (1975), 6,
452–456.
[40] C. Isenberg. Problem solving with soap films: Part II, Phys. Educ. 10 (1975),
7, 500–503.
[41] A. J. Jeyasooriyan. Ball Sort — A natural algorithm for sorting, Sysreader (1995), 13–16.
[42] A. J. Jeyasooriyan, R. Soodamani. Natural algorithms — A new paradigm for
algorithm development, In Proc. 2nd International Conference on Information,
Communications and Signal Processing, CD-Rom, ICICS ’99, 1999.
[43] Y. Kanada. Combinatorial Problem Solving Using Randomized Dynamic Composition of Production Rules, Int’l Conference on Evolutionary Computation,
ICEC ’95, 1995, 467–472.
[44] S. Kirkpatrick, C. Gelatt, Jr., M. Vecchi. Optimization by simulated annealing,
Science 220 (1983), 4598, 498–516.
[45] R. Lai. Course work, Unconventional Models of Computation, University of
Auckland, 2004.
[46] C. Martin-Vide, Gh. Păun, J. Pazos, A. Rodriguez-Paton. Tissue P systems,
Technical Report 421, Turku Center for Computer Science, September 2001.
[47] C. Martin-Vide, A. Păun, Gh. Păun. On the power of P systems with symport
rules, J. Univ. Computer Sci., 8, 2 (2002), 317–331.
[48] C. Martin-Vide, A. Păun, Gh. Păun, G. Rozenberg. Membrane systems with
coupled transport: Universality and normal forms, Fundamenta Informaticae,
49, 1-3 (2002), 1–15.
[49] W. McCulloch, W. Pitts. A logical calculus of the ideas immanent in nervous
activity, Bulletin of Math. Bio. 5 (1943), 115–133.
[50] G. J. Milburn. The Feynman Processor, Perseus Books, Massachusetts, 1998.
[51] M. Mitchell. An Introduction to Genetic Algorithms, The MIT Press, Massachusetts, 1996.
[52] D. E. Muller, F. P. Preparata. Bounds to complexities of networks for sorting
and for switching, J. ACM 22 (1975), 2, 195–201.
[53] M. Ogihara, A. Ray. DNA computing on a chip, Nature 403 (2000), 143–144.
[54] Gh. Păun. Computing with Membranes, Journal of Computer and System Sciences 61 (2000), 1, 108–143.
[55] A. Păun, Gh. Păun. The power of communication: P systems with symport/antiport, New Generation Computers 20 (2002), 3, 295–306.
[56] A. Păun, Gh. Păun, G. Rozenberg. Computing by communication in networks
of membranes, Intern. J. of Foundations of Computer Science 13 (2002), 6,
779–798.
[57] P. Peretto. An Introduction to the Modeling of Neural Networks, Cambridge
University Press, Cambridge, 1992.
[58] G. Rozenberg. The natural computing column, Bulletin of EATCS 66 (1998),
99.
[59] G. Rozenberg. The nature of computation and the computation in nature, International Colloquium on Graph Transformation and DNA Computing, Technical
University of Berlin, 14 February 2002.
[60] M. Sams. The SquareList Data Structure, Dr. Dobb’s Journal, May 2003, 37–40,
http://www.ddj.com.
[61] J. E. Savage. Models of Computation, Addison–Wesley, Reading, Mass., 1998.
[62] C. E. Shannon. Mathematical theory of the differential analyzer, Journal of Mathematics and Physics of the Massachusetts Institute of Technology 20 (1941), 337–354.
[63] D. Schultes. On the practical use of Bead–Sort, Course work, Unconventional Models of Computation, University of Auckland, 2004,
http://www.dominik-schultes.de/umc/asg2/.
[64] D. Schultes. A simulation of a liquid–based natural algorithm for finding the average of n integers using a cellular automaton, Course work,
Unconventional Models of Computation, University of Auckland, 2004,
http://www.dominik-schultes.de/umc/asg3/.
[65] D. Schultes. Rainbow Sort: Sorting at the Speed of Light, CDMTCS Report 244
(2004).
[66] H. T. Siegelmann, S. Fishman. Analog computation with dynamical systems,
Physica D 120 (1998), 1–2, 214–235.
[67] J. Stevens. Computing with light, Course work, Unconventional Models of Computation, University of Auckland, 2004.
[68] C. D. Thompson, H. T. Kung. Sorting on a mesh–connected parallel computer,
Commun. ACM 20 (1977), 4, 263–271.
[69] T. Toffoli. Reversible computing, Automata, Languages and Programming,
Springer–Verlag (1980), 632–644.
[70] J. Traub. Reality and models, in Boundaries and Barriers: On the Limits to
Scientific Knowledge, Addison-Wesley, 1996, 238–251.
[71] A. Vergis, D. Steiglitz and B. Dickinson. The Complexity of analog computation, Mathematics & Computers in Simulation 28 (1986), 91–113.
[72] S. Wolfram. Theory and Applications of Cellular Automata, World Scientific
Publishers, Singapore, 1986.
[73] http://architecture.about.com/library/blsagradafamilia.htm
[74] http://www.mcs.drexel.edu/~crorres/Archimedes/Crown/CrownIntro.html
[75] http://pespmc1.vub.ac.be/ATTRACTO.html
[76] http://www.merriam-webster.com/cgi-bin/dictionary?book=Dictionary&va=natural
[77] http://cs.felk.cvut.cz/~xobitko/ga/
[78] http://www.cs.bgu.ac.il/~sipper/ga.html
[79] http://folding.stanford.edu/science.html#fold