Mathematics: The Loss of Certainty

Mathematics: The Loss of Certainty (Galaxy Books) (Paperback)
by Morris Kline (Author)
Editorial Reviews
Book Description
This work stresses the illogical manner in which mathematics
has developed, the question of applied mathematics as against
'pure' mathematics, and the challenges to the consistency of
mathematics' logical structure that have occurred in the
twentieth century.
About the Author
Morris Kline is Professor Emeritus at the Courant Institute of
Mathematical Sciences, New York University.
Product Details
Paperback: 384 pages
Publisher: Oxford University Press, USA; Reprint edition (June 17, 1982)
Language: English
engaging intellectual history in the domain of mathematics, July 14, 2003
Reviewer:
los desaparecidos (Makati City, Philippines) - See all my reviews
Morris Kline, Professor Emeritus of Mathematics at New York University, offers us, with this
book, a superb popular intellectual history in the domain of mathematics, focused on a single
theme: the search for the perfection of truth in mathematical formalism. The outcome of this
quest is described in its essence on page 257:
"The science which in 1800, despite the failings in its logical development, was hailed as the
perfect science, the science which establishes its conclusions by infallible, unquestionable
reasoning, the science whose conclusions are not only infallible but truths about our universe
and, as some would maintain, truths in any possible universe, had not only lost its claim to truth
but was now besmirched by the conflict of foundational schools and assertions about correct
principles of reasoning."
Kline informs us that the current state of the science is one in which, in true postmodern fashion,
several schools somewhat peacefully coexist--among them, Russell's logicism, Brouwer's
intuitionism, Hilbert's formalism, and Bourbaki's set theory--in apparent abandonment of the
nineteenth-century goal of achieving the perfection of truth in formal mathematical structures. In
this coliseum of competing paradigms, the tipping point that engenders the status quo of peaceful
coexistence is, of course, Kurt Godel, who in 1931 with his Incompleteness Theorem of almost
cultic fame showed that any consistent mathematical system rich enough to contain arithmetic will
necessarily be incomplete: there will always exist a true statement expressible in the system that cannot be proven within it.
Despite this Babel, Kline believes that mathematics is gifted with the intellectual wherewithal to
fruitfully pursue even the farthest and most abstruse reaches of abstraction because in this quest
it is always assured the boon of the Holy Grail by virtue of the touchstone of empiricism. He
concludes on the last page:
"Mathematics has been our most effective link with the world of sense perceptions and though it
is discomfiting to have to grant that its foundations are not secure, it is still the most precious
jewel of the human mind and must be treasured and husbanded."
In Scripture the counterpart of this outlook might be, "Test everything; retain what is good" (1
Thessalonians 5:21), while in common proverbs it would be, "The proof of the pudding is in the
eating."
Although the book is written as a popular intellectual history and therefore is accessible to every
educated reader, I believe that the extent to which readers would appreciate various historical
portions of this book would depend on their formal mathematical preparation. From the time of
Euclid's Elements to Newton's Principia Mathematica, sufficient for a deep appreciation on the
reader's part is a high school background in mathematics. Beginning with Newton's fluxions and
Leibniz's differentials and ending with nineteenth-century efforts to place algebra on formal
footing, a finer understanding of the book requires the undergraduate-level background in
mathematics that is usually obtained by scientists and engineers. Starting in the late eighteenth century with Gauss' investigation of non-Euclidean geometry and continuing through twentieth-century disputes
concerning mathematical philosophy, the discussion is probably more accessible to trained
mathematicians or logicians.
Here and there I picked up interesting trivia, such as the historical fact that algebra, unlike
geometry, was not initially developed as a formal system but rather as a tool of analysis, or that
the intellectual enterprise to cast mathematics as a complete, consistent formal system really
began in the second decade of the nineteenth century.
For lovers of mathematics, I recommend this book as engaging diversion in intellectual history.
Read it on vacation.
30 of 55 people found the following review helpful:
Did not Convince Me, April 17, 2002
Reviewer:
Pedro Rosario (Río Piedras, PR USA) - See all my reviews
I wish to point out first the positive aspects of the book. First of all, it should be noted that
Morris Kline is one of the greatest mathematicians, and here he discusses a very important
philosophical issue that is pertinent today.
Kline shows great insight concerning the history of the development of mathematics: a
recounting of the problems that different mathematicians had throughout history, the ways they
attempted to solve them, and their logical and illogical reasons for doing so. He defends his case
well by looking to history to show how uncertain mathematics is.
However, his book rests on a fallacy. Suppose somebody thinks that certainty depends on some
property "F" of a mathematical system "a". Then, because many people up to a certain point
believed that F(a), mathematics was held to be certain; once they discovered that F(a) was not the
case, the certainty of mathematics supposedly could no longer be established. An analogy with
science will make clear the
fallacy. Galileo insisted that the certainty of science about the universe depended greatly on
the fact that the planets and stars moved in perfect circular orbits; Kepler, on the other hand,
showed that the planets move in elliptical orbits. It would be an exaggeration to think that
the certainty of science is lost just because planets move in elliptical orbits.
Another problem is that he states that mathematics is also uncertain because of the irrational
reasons for admitting certain mathematical entities or axioms. However, the *validity* of the
axioms is what is at stake in mathematics, not the subjective reasons that somebody had for
admitting them. An analogy with science again shows this second fallacy. Among the
reasons Copernicus accepted that the Sun, not the Earth, was the center were that
the Sun was the noblest star and that heliocentrism would restore the perfection of the circles in
which planets revolve, which had been lost in the Ptolemaic geocentric view of the
universe. Should these reasons really dismiss the validity of Copernicus's theory? No. The
same holds for mathematics. The illogical reasons that somebody might have for
discovering something are irrelevant to the validity and certainty of mathematics.
Also, there is the fallacy that a development of mathematics in one area that seems
unorthodox at some moment might compromise the certainty of
mathematics. For example, he uses the development of "strange" algebras or "strange"
geometries as examples of this. Non-Euclidean geometry doesn't invalidate Euclidean
geometry, as Kline seems to suggest, nor does it imply the loss of certainty of Euclidean
geometry. It only means that Euclidean geometry is one of infinitely many possible mathematical
spaces. Certainty is guaranteed in each one of them.
Also, he uses the word "disaster" for Godel's theorems. But it was a
"disaster" only to *some* philosophical schools. Godel's theorems don't seem at all to
imply the uncertainty of mathematics, since Godel himself believed in its certainty during
his entire life. In fact, consider Platonist proposals such as Husserl's: though Edmund Husserl
posited the completeness of mathematics, his main philosophy of mathematics holds up
*even after* Godel's discovery. The only thing refuted in his philosophy is the completeness
of mathematics, not his mathematical realism, nor his account of mathematical
certainty. Interestingly, Husserl is never mentioned in the book (just as many philosophers
of mathematics ignore his philosophy).
Though the book is certainly instructive and Kline shows his knowledge of the history of
mathematics, because of these fallacies he never proves his case.
Customer Reviews
1 of 1 people found the following review helpful:
Excellent survey of the history of mathematics , February 9, 2007
Reviewer:
Michael Emmett Brady "mandmbrady" (Bellflower, California ,United States) - See all my reviews
Kline demonstrates, in a clear and detailed fashion, that the pursuit of "pure" mathematics (the
set-theoretical, real-analysis approach), as opposed to the applied mathematics useful to scientific
discovery (the differential and integral calculus plus ordinary and partial differential
equations), leads to a dead end as far as scientific discovery is concerned. This is well illustrated
in his discussion of the rise of the Nicolas Bourbaki school that has come to dominate
mathematics since the mid-1930s (pp. 256-257) and its impact on the social sciences.
The field of economics is an excellent example of Kline's point. Economists are notorious for
trying to copy the latest technical developments that occur in mathematics, statistics, physics,
biology, etc., irrespective of whether or not such techniques will yield useful knowledge which
economists can use to analyze the events/historical processes occurring in the real world so that
they can explain and predict why and when these events/processes will occur/recur. The best
examples of the non- or anti-scientific approach of the economics profession are (a) the Arrow-Debreu-Hahn general equilibrium approach based on various fixed point theorems, (b) the
Subjective Expected Utility approach of Ramsey-De Finetti-Savage, and (c) the universal belief of
econometricians in the applicability of multiple regression and correlation analysis based on a
least squares approach which requires the assumption of normality. It is not surprising that no
econometrician in the 20th century ever did a basic goodness of fit test on their time series data
to check to see whether or not the assumption of normality was sound. It took a Benoit
Mandelbrot to demonstrate that the assumption of normality did not stand up.
The result has been that the economists simply are incapable of dealing with phenomena in the
real world. Their pursuit of the latest fad or gimmick or technique to copy leads to the type of
comment made by Robert Lucas, Jr., the main founder of the rational expectations school, that
his theory can't deal with uncertainty, but only risk which must be represented by the standard
deviation of a normal probability distribution. It is unfortunate that Lucas never did any
goodness of fit test on business cycle time series data before constructing a theory that is only
applicable if business cycles can be represented by multivariate normal probability distributions.
Kline's approach to the nature of mathematical discovery is very similar to that of J. M. Keynes
and R. Carnap: "The recognition that intuition plays a fundamental role in securing mathematical
truths and that proof plays only a supporting role suggests that ... mathematics has turned full
circle. The subject started on an intuitive and empirical basis ... the efforts to pursue rigor ... have
led to an impasse..." (p. 319). It can easily be observed that all three of the economic approaches
mentioned above have ended in an impasse also.
3 of 6 people found the following review helpful:
Kline's uncertainty, June 2, 2006
Reviewer:
Walt Peterson (Ames, Iowa) - See all my reviews
One reviewer said, "First, Barbosa attacks Morris Kline (he's got some nerve doing that) for
Prof. Kline's supposed lack of understanding of mathematics. This frivolous insult is so
ridiculous that it isn't necessary to discuss it further." I won't claim that Kline doesn't understand
mathematics, but it is quite clear from this book that he does not understand logic. I looked up
reviews in the professional literature by logicians and found they made the same point.
Kline makes many technical errors in his account of the foundational debates in the early
twentieth century. My favorite mistake, and perhaps his most blatant blooper, is Kline's
statement that the Loewenheim-Skolem Theorem implies Goedel's Incompleteness Theorem; he
thinks that models with different cardinalities cannot satisfy the same sentences. (For non-logicians: they can and do; Kline's alleged implication is wrong.) His account of the history of
mathematics is not as bad.
Kline was an applied mathematician, and in his last two chapters informs us in very strong terms
that applied mathematics is good and true, but pure mathematics is not. He urges mathematicians
to abandon the study of analysis, topology, functional analysis, etc., and devote themselves to the
problems of science.
The book is lively and entertaining, if not entirely reliable.
1 of 3 people found the following review helpful:
Great book by a great author, February 23, 2005
Reviewer:
G. A. Meles "Sirtor" (Italy) - See all my reviews
This book isn't meant to be a mathematics textbook, yet it offers a very good qualitative view of the
problems it describes - at least as long as the reader has some competence in mathematics.
Don't forget what Kant wrote in the introduction to his masterpiece "Critique of Pure Reason":
"many a book would have been much clearer if it had not made such an effort to be clear."
There are topics that can't be explained in overly simple words.
There are a lot of popularizing books that are unclear to competent readers yet seem clear to
unprepared readers: this is not the case with Kline's book, which provides interesting reading for
an interested reader.
3 of 4 people found the following review helpful:
Mathematical Uncertainty, December 23, 2004
Reviewer:
Jefferson D. Bronfeld "always_reading" (Binghamton, New York USA) - See all my reviews
A delightful and important book for all math enthusiasts. A must read for budding
mathematicians.
This book authoritatively chronicles the gradual realization that mathematics is not the
exploration of hard edged objective reality or the discovery of universal certainties, but is
more akin to music or storytelling or any of a number of very human activities.
Kline is no sideline popularizer bent on dethroning our intellectual heroes - he speaks
knowledgeably from within the discipline of mathematics, revealing the evolution of
mathematical thought from "If this is real, why are there so many paradoxes and seeming
inconsistencies?" to "If this is just something people do, why is it so damned powerful?"
Mathematics: The Loss of Certainty. by Morris Kline. Oxford. 366 pp. $19.95.
Professor Kline recounts a series of "shocks", "disasters" and "shattering" experiences leading
to a "loss of certainty" in mathematics. However, he doesn't mean that the astronaut should
mistrust the computations that tell him that firing the rocket in the prescribed direction for the
prescribed number of seconds will get him to the moon.
The ancient Greeks were "shocked" to discover that the side and diagonal of a square could not
be integer multiples of a common length. This spoiled their plan to found all mathematics on that
of whole numbers. Nineteenth-century mathematics was "shattered" by the discovery of non-Euclidean geometry (violating Euclid's axiom that there is exactly one parallel to a line through
an external point), which showed that Euclidean geometry isn't based on self-evident axioms
about physical space (as most people believed). Nor is it a necessary way of thinking about the
world (as Kant had said).
Once detached from physics, mathematics developed on the basis of the theory of sets, at first
informal and then increasingly axiomatized, culminating in formalisms so well described that
proofs can be checked by computer. However, Gottlob Frege's plausible axioms led to Bertrand
Russell's surprising paradox of the set of all sets that are not members of themselves. (Is it a
member of itself?). L.E.J. Brouwer reacted with a doctrine that only constructive mathematical
objects should be allowed (making for a picky and ugly mathematics), whereas David Hilbert
proposed to prove mathematics consistent by showing that starting from the axioms and
following the rules could never lead to contradiction. In 1931 Kurt Goedel showed that Hilbert's
program cannot be carried out, and this was another surprise.
However, Hilbert's program and Tarski's work led to metamathematics, which studies
mathematical theories as mathematical objects. This replaced many of the disputes about the
foundations of mathematics by the peaceful study of the structure of the different approaches.
Professor Kline's presentation of these and other surprises as shocks that made mathematicians
lose confidence in the certainty and in the future of mathematics seems overdrawn. While the
consistency of even arithmetic cannot be proved, most mathematicians seem to believe (with
Goedel) that mathematical truth exists and that present mathematics is true. No mathematician
expects an inconsistency to be found in set theory, and our confidence in this is greater than our
confidence in any part of physics.
http://www-formal.stanford.edu/jmc/reviews/kline/kline.html
Mathematics: The Loss of Certainty
Most intelligent people today still believe that mathematics is a body of
unshakable truths about the physical world and that mathematical reasoning
is exact and infallible. Mathematics: The Loss of Certainty refutes that myth.
Morris Kline points out that today there is not one universally accepted
concept of mathematics - in fact, there are many conflicting ones.
Yet the effectiveness of mathematics in describing and exploring physical
and social phenomena continues to expand. Indeed, mathematical activity is
flourishing as never before, with the rapidly growing interest in computers
and the current search for quantitative relationships in the social and
biological sciences. "Are we performing miracles with imperfect tools?"
Kline asks.
This book traces the history of mathematics' fall from its lofty pedestal
and explores the reasons for its mysterious effectiveness. Kline explains in
non-technical language the drastic changes that have taken place in our
understanding of "pure" as well as "applied" math, and the implications for
science and for human reason generally.
Two nineteenth-century developments - non-Euclidean geometry and
quaternions - forced mathematicians to realize that mathematics is not a
series of self-evident truths about nature produced by infallible reasoning.
They found, for example, that several different geometries fit spatial
experience equally well. All could not be truths. This shocking realization
impelled mathematicians to investigate the nature of their axioms and
"unassailable" reasoning. To their surprise, they found that the axioms were
arbitrary and inadequate and the proofs were woefully defective.
To rebuild the foundations of mathematics and to resolve the
contradictions, four different schools of thought cropped up - each differing
radically from the others in their views of what mathematics is. The pride of
human reason and its most effective expression suffered a fall which directly
and indirectly affects all employment of reason.
Morris Kline is Professor Emeritus of Mathematics at New York University's Courant
Institute of Mathematical Sciences and associate editor of Mathematics Magazine and
Archive for History of Exact Sciences. He has been a Guggenheim Fellow and a Fulbright
Lecturer in Germany. His many books include Mathematics in Western Culture,
Mathematical Thought from Ancient to Modern Times, Why Johnny Can't Add, and Why the
Professor Can't Teach.
A quotation from the book:
"The current predicament of mathematics is that there is not one but
many mathematics and that for numerous reasons each fails to satisfy the
members of the opposing schools. It is now apparent that the concept of a
universally accepted, infallible body of reasoning - the majestic mathematics
of 1800 and the pride of man - is a grand illusion. Uncertainty and doubt
concerning the future of mathematics have replaced the certainties and
complacency of the past. The disagreements about the foundations of the
'most certain' science are both surprising and, to put it mildly, disconcerting.
The present state of mathematics is a mockery of the hitherto deep-rooted
and widely reputed truth and logical perfection of mathematics ....
"It behooves us therefore to learn why, despite its uncertain foundations,
and despite the conflicting theories of mathematicians, mathematics has
proved to be so incredibly effective."
From Mathematics: The Loss of Certainty
OXFORD UNIVERSITY PRESS, NEW YORK (1980)
http://www.philosophy-religion.org/handouts/mathematics.htm
Contact: Prof. Brian Davies
[email protected]
American Mathematical Society
Mathematics: The loss of certainty
"Pure mathematics will remain more reliable than most other forms of
knowledge, but its claim to a unique status will no longer be sustainable."
So predicts Brian Davies, author of the article "Whither Mathematics?", which
will appear in the December 2005 issue of Notices of the AMS.
For centuries mathematics has been seen as the one area of human endeavor in
which it is possible to discover irrefutable, timeless truths. Indeed, theorems
proved by Euclid are just as true today as they were when first written down
more than 2000 years ago. That the sun will rise tomorrow is less certain than
that two plus two will remain equal to four.
However, the 20th century witnessed at least three crises that shook the
foundations on which the certainty of mathematics seemed to rest. The first was
the work of Kurt Goedel, who proved in the 1930s that any sufficiently rich, consistent
axiom system is guaranteed to possess statements that cannot be proved or
disproved within the system. The second crisis concerned the Four-Color
Theorem, whose statement is so simple a child could grasp it but whose proof
necessitated lengthy and intensive computer calculations. A conceptual proof
that could be understood by a human without such computing power has never
been found. Many other theorems of a similar type are now known, and more
are being discovered every year.
The third crisis seems to show how the uncertainty foreshadowed in the two
earlier crises is now having a real impact in mathematics. The Classification of
Finite Simple Groups is a grand scheme for organizing and understanding basic
objects called finite simple groups (although the objects themselves are finite,
there are infinitely many of them). Knowing exactly what finite simple groups
are is less important than knowing that they are absolutely fundamental across
all of mathematics. They are something like the basic elements of matter, and
their classification can be thought of as analogous to the periodic table of the
elements. Indeed, the classification plays as fundamental a role in mathematics
as the periodic table does in chemistry and physics. Many results in
mathematics, particularly in the branch known as group theory, depend on the
Classification of Finite Simple Groups.
And yet, to this day, no one knows for sure whether the classification is
complete and correct. Mathematicians have come up with a general scheme,
which can be summarized in a few sentences, for what the classification should
look like. However, it has been an enormous challenge to try to prove
rigorously that this scheme really captures every possible finite simple group.
Scores of mathematicians have written hundreds of research papers, totaling
thousands of pages, trying to prove various parts of the classification. No one
knows for certain whether this body of work constitutes a complete and correct
proof. What is more, so much time has now passed that the main players who
really understand the structure of the classification are dying or retiring, leaving
open the possibility that there will never be a definitive answer to the question
of whether the classification is true. As Davies puts it:
We have thus arrived at the following situation. A problem that can be
formulated in a few sentences has a solution more than ten thousand pages
long. The proof has never been written down in its entirety, may never be
written down, and as presently envisaged would not be comprehensible to any
single individual. The result is important and has been used in a wide variety of
other problems in group theory, but it might not be correct.
These three crises could be hinting that the currently dominant Platonic
conception of mathematics is inadequate. As Davies remarks:
[These] crises may simply be the analogy of realizing that human beings will
never be able to construct buildings a thousand kilometres high and that
imagining what such buildings might "really" be like is simply indulging in
fantasies.
We are witnessing a profound and irreversible change in mathematics, Davies
argues, which will affect decisively its character:
[Mathematics] will be seen as the creation of finite human beings, liable to
error in the same way as all other activities in which we indulge. Just as in
engineering, mathematicians will have to declare their degree of confidence
that certain results are reliable, rather than being able to declare flatly that the
proofs are correct.
Davies's article "Whither Mathematics?" (PDF, 448KB) is available from the Notices of the
AMS website.
###
Founded in 1888 to further mathematical research and scholarship, the 30,000-member American Mathematical Society fulfills its mission through programs
and services that promote mathematical research and its uses, strengthen
mathematical education, and foster awareness and appreciation of mathematics
and its connections to other disciplines and to everyday life.
http://www.eurekalert.org/pub_releases/2005-11/ams-mtl110105.php
The Loss of Certainty
Hermann von Helmholtz's Mechanism at the Dawn of Modernity. A Study on the Transition from Classical
to Modern Philosophy of Nature
Series: Archimedes , Vol. 17, Schiemann, Gregor - 2007, Approx. 300 p., Hardcover
ISBN: 978-1-4020-5629-1, Not yet published. Available: November 3, 2007, approx. $129.00
About this book
Two seemingly contradictory tendencies have accompanied the development of the natural sciences in
the past 150 years. On the one hand, the natural sciences have been instrumental in effecting a
thoroughgoing transformation of social structures and have made a permanent impact on the conceptual
world of human beings. This historical period has, on the other hand, also brought to light the merely
hypothetical validity of scientific knowledge. As late as the middle of the 19th century the truth-pathos in
the natural sciences was still unbroken. Yet in the succeeding years these claims to certain knowledge
underwent a fundamental crisis. For scientists today, of course, the fact that their knowledge can possess
only relative validity is a matter of self-evidence.
The present analysis investigates the early phase of this fundamental change in the concept of science
through an examination of Hermann von Helmholtz's conception of science and his mechanistic
interpretation of nature. Helmholtz (1821-1894) was one of the most important natural scientists in
Germany. The development of his thought offers an impressive but, until now, relatively little considered
report from the field of the experimental sciences chronicling the erosion of certainty.
Written for:
Scholars and students of History, History of Science, History of Physics, History of Philosophy, History of
the Philosophy of Science, and philosophers of science and technology
http://www.springer.com/west/home/philosophy?SGWID=4-40385-22-173701759-0
Mathematics: The Loss of Certainty
Morris Kline
Oxford University Press, 1980
You can't trust a simple axiom these days.
But could we ever. After reading Kline's book, you might not think so. He's put together a
sweeping history of the revolutions in mathematics. In each revolution the foundations of the
previous mathematical thinking are questioned. A classic example is Euclid's fifth axiom (in two
dimensions, given a line L and a point P not on L, there is only one line parallel to L that passes
through P). After hundreds of years of dissatisfaction with this axiom, some mathematicians tried
variations, and created non-Euclidean geometry.
Kline delves much deeper into the mathematical thinking of the past two hundred years. From
mathematics as a one-to-one model of the world, it became successively more divorced from
reality. By the middle of this century, Kurt Gödel proved that arithmetic could be either complete
(every statement provable or refutable) or consistent (no contradictions derivable), but not
both. A consequence was that there are mathematical statements that are true but not provable.
The disasters didn't end there. In the 1960s, Paul Cohen proved that the Continuum Hypothesis, a
fundamental hypothesis about set theory (the foundations of mathematics), is not provable or
disprovable given the standard axioms of set theory. The Continuum Hypothesis is about the size
of infinite sets. In effect, it says that there are two sets which you either can or cannot put in a
one-to-one correspondence. It's your choice whether this is allowed. It's sort of like saying you
get to decide if there's another integer between 3 and 4. Mathematics would still work either
way, but the decision is up to you.
That's one of the reasons I didn't pursue mathematics. The axiomatic nature of the Continuum
Hypothesis really shook my faith in whether mathematics has any relation to the world. This
book is a must for any student interested in mathematics; it's especially good in high school. I
wish I had read it many years before I found it. It's the single most important book on
mathematics I've ever read.
http://www.rdrop.com/~half/Personal/Hobbies/Books/RequiredReading.html
OBITUARY
Morris Kline, 84, Math Professor And Critic of Math Teaching,
Dies
by Eric Pace
Morris Kline, a professor of mathematics who was a longtime critic
of the way mathematics was taught, died early yesterday at
Maimonides Hospital in Brooklyn. He was 84 years old and lived
in Brooklyn.
He had been in declining health, and his death was caused by heart
failure, said his wife, the former Helen Mann.
From 1938 to 1975, Professor Kline taught at New York University, with time out as
a civilian employee in the Army in World War II. He was the author or editor of more
than a dozen books, including "Mathematics in Western Culture" (1953),
"Mathematics: The Loss of Certainty" (Oxford University Press, 1980) and
"Mathematics and the Search for Knowledge" (Oxford University Press, 1985).
In a 1986 editorial in Focus, a journal of the Mathematical Association of America, he
summarized some of his views: "On all levels - primary, secondary, and
undergraduate - mathematics is taught as an isolated subject with few, if any, ties
to the real world. To students, mathematics appears to deal almost entirely with
things which are of no concern at all to man."
The Key to Understanding
The error, he contended, was that "mathematics is expected either to be
immediately attractive to students on its own merits or to be accepted by
students solely on the basis of the teacher's assurance that it will be helpful in
later life." And yet, he wrote, "mathematics is the key to understanding and
mastering our physical, social and biological worlds."
He argued that teachers should stress useful applications of mathematics in various
other fields: that they could have elementary schoolchildren deal with baseball batting
averages and puzzles, get high school students to work with statistics and probability,
and bring college students to apply mathematics to computers and physics.
But, he said, many schoolteachers are simply unfamiliar with such teaching
techniques, and the same is true of numerous college professors who were under
"pressure to write research papers." He called on professional mathematics journals
to print articles that instructed school and college teachers about ways of presenting
such applications to their pupils and students.
Doubts on its Utility.
Professor Kline himself came to doubt the utility and importance of pure mathematics for what
he called "the life of Man". He turned to applied mathematics since, as he once put it,
"the greatest contribution mathematics has made and should continue to make
was to help man understand the world about him."
He was perennially interested in the cultural significance of mathematics, and he took
up that subject notably in his 1980 book "Mathematics: The Loss of Certainty,"
which William Barrett, writing in The New York Times Book Review, called
"intensely dramatic, immensely readable" in its treatment of "the decline of
mathematics from its once lofty pinnacle as the supreme embodiment of human
certainty to its present uncertain state rent by conflicting schools."
In his 1973 book, "Why Johnny Can't Add: The Failure of the New Math,"
Professor Kline was critical of the New Math, an educational trend of the 1960's.
Reviewing the book in The New York Times, Harry Schwartz said "its significance
goes far beyond its immediate topic. It raises the broader issue of how, in field
after field in American life, there come to be sudden fixations on supposed
panaceas for perceived problems. All too often, however, these panaceas turn out
to have unforeseen consequences as bad as or worse than the original difficulties
that triggered their adoption."
Morris Kline was born in Brooklyn, the son of Bernard Kline, an accountant, and the
former Sarah Spatt. He grew up in Brooklyn and in Jamaica, Queens. He graduated
from Boys High School in Brooklyn and went to study mathematics at New York
University, where he earned a bachelor's degree in 1930, a master's degree in 1932
and a doctorate in 1936.
In addition to his wife, to whom he was married in 1939, Professor Kline is survived
by a brother, Emanuel, of Great Neck, L.I.; two daughters, Elizabeth Landers of San
Francisco and Judith Karamazov of Boston; a son, Douglas, of Cambridge, Mass., and
three grandchildren.
The above obituary first appeared in The New York Times, June 10, 1992.
Version: 22nd March 2001
http://www.marco-learningsystems.com/pages/kline/obituary.html
Is Arithmetic Consistent?
Do I contradict myself? Very well then, I contradict myself. I am large, I contain multitudes.
-Walt Whitman
The most widely accepted formal basis for arithmetic is called Peano's Axioms (PA). Giuseppe
Peano based his system on a specific natural number, 1, and a successor function such that to
each natural number x there corresponds a successor x' (also denoted as x+1). He then
formulated the properties of the set of natural numbers in five axioms:
(1) 1 is a natural number.
(2) If x is a natural number then x' is a natural number.
(3) If x is a natural number then x' is not 1.
(4) If x' = y' then x = y.
(5) If S is a set of natural numbers including 1, and if for every x in S the successor x' is
also in S, then every natural number is in S.
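The axioms are concrete enough to mock up directly. What follows is a minimal sketch of my own (nothing like it appears in the texts collected here), with the names ONE, succ, and to_int invented for illustration: it models natural numbers as iterated applications of a successor function and spot-checks axioms (1) through (4). Axiom (5), induction, quantifies over all sets of natural numbers and so cannot be confirmed by finitely many checks of this kind.

    # Illustrative sketch only (not from any of the texts above): natural numbers
    # modeled as iterated applications of a successor function.  The names ONE,
    # succ, and to_int are invented for this example.

    ONE = ("one",)                     # axiom (1): 1 is a natural number

    def succ(x):
        """Axiom (2): every natural number x has a successor x'."""
        return ("succ", x)

    def to_int(x):
        """Display helper only; the formal system itself never needs it."""
        return 1 if x == ONE else 1 + to_int(x[1])

    two = succ(ONE)
    three = succ(two)

    assert succ(two) != ONE            # axiom (3): a successor is never 1
    assert succ(two) != succ(three)    # axiom (4), contrapositive: distinct numbers have distinct successors
    # Axiom (5), induction, quantifies over every set of natural numbers and so
    # cannot be confirmed by finitely many spot checks like the ones above.

    print(to_int(three))               # prints 3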
These axioms (together with numerous tacit rules of reasoning and implication, etc) constitute a
formal basis for the subject of arithmetic, and all formal “proofs” ultimately are derived from
them. The first four, at least, appear to be “clear and distinct” notions, and even the fifth would
be regarded by most people as fairly unobjectionable. Nevertheless, the question sometimes
arises (especially in relation to very complicated and lengthy proofs) whether theorems based
on these axioms (and tacit rules of implication) are perfectly indubitable. According to Goedel’s
theorem, it is impossible to formally prove the consistency of arithmetic, which is to say, we
have no rigorous proof that the basic axioms of arithmetic do not lead to a contradiction at some
point. For example, if we assume some proposition (perhaps the negation of a conjecture we
wish to prove), and then, via some long and complicated chain of reasoning, we arrive at a
contradiction, how do we know that this contradiction is essentially a consequence of the
assumed proposition? Could it not be that we have just exposed a contradiction inherent in
arithmetic itself? In other words, if arithmetic itself is inconsistent, then proof by contradiction
loses its persuasiveness.
On one level, this kind of objection can easily be vitiated by simply prefacing every theorem
with the words "If our formalization of arithmetic is consistent, then...". Indeed, for short
simple proofs by contradiction we can strengthen the theorem by reducing this antecedent
condition to something like "If arithmetic is consistent over this small set of operations,
then...". We can be confident that the contradiction really is directly related to our special
assumption, because it's highly implausible that our formalization of arithmetic could exhibit a
contradiction over a very short chain of implication. However, with long proofs of great
subtlety, extending over multiple papers by multiple authors, and involving the interaction of
many different branches and facets of mathematics, how would we really distinguish between a
subtle contradiction resulting from one specific false assumption vs. a subtle contradiction
inherent in the fabric of arithmetic itself?
Despite Goedel's theorem, the statement that we cannot absolutely prove the consistency of
arithmetic is sometimes challenged on the grounds that we can prove the consistency of PA,
provided we are willing to accept the consistency of some more encompassing formal system
such as the Zermelo-Frankel (ZF) axioms, perhaps augmented with the continuum hypothesis
(ZFC). But this is a questionable position. Let's say a proof of the consistency of system X is
"incomplete" if it's carried out within a system Y whose consistency has not been completely
proven. Theorem: Every proof of the consistency of arithmetic is incomplete. In view of this,
it isn't clear how "working in ZFC" resolves the issue. There is no complete and absolute proof
of the consistency of arithmetic, so every arithmetical proof is subject to doubt. (By focusing on
arithmetic I don't mean to imply that other branches of mathematics are exempt from doubt.
Hermann Weyl, commenting on Gödel’s work, said that "God exists because mathematics is
undoubtedly consistent, and the devil exists because we cannot prove the consistency".)
As Morris Kline said in his book Mathematics: The Loss of Certainty, “Gödel’s result on
consistency says that we cannot prove consistency in any approach to mathematics by safe
logical principles”, meaning first-order logic and finitary proof theory, which had been shown
in Whitehead and Russell's "Principia Mathematica" to be sufficient as the basis for much of mathematics.
Similarly, in John Stillwell's book Mathematics and its History we find "If S is any system that
includes PA, then Con(S) [the consistency of S] cannot be proved in S, if S is consistent." On
the other hand, some would suggest that the contemplation of inconsistency in our formal
arithmetic is tantamount to a renunciation of reason itself, i.e., if our concept of natural numbers
is inconsistent then we must be incapable of rational thought, and any further considerations are
pointless. This attitude is reminiscent of the apprehensions mathematicians once felt regarding
"completed infinities". "We recoil in horror", as Hermite said, believing that the introduction of
actual infinities could lead only to nonsense and sophistry. Of course, it turned out that we are
quite capable of reasoning in the presence of infinities. Similarly, I believe reason can survive
even the presence of contradiction in our formal systems.
Admittedly this belief is based on a somewhat unorthodox view of formal systems, according to
which such systems should be seen not as unordered ("random access") sets of syllogisms, but
as structured spaces, with each layer of implicated objects representing a region, and the
implications representing connections between different regions. The space may even possess a
kind of metric, although "distances" are not necessarily symmetric. For example, the
implicative distance from an integer to its prime factors is greater than the implicative distance
from those primes to their product. According to this view a formal system does not degenerate
into complete nonsense simply because at some point it contains a contradiction. A system may
be "locally" consistent even if it is not globally consistent. To give a crude example, suppose
we augment our normal axioms and definitions of arithmetic with the statement that a positive
integer n is prime if and only if 2^n − 2 is divisible by n. This axiom conflicts with our existing
definition of a prime, but the first occurrence of a conflict is 341. Thus, over a limited range of
natural numbers the axiom system possesses "local consistency".
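The figure 341 is easy to confirm for oneself. The following quick check, my own and not part of the original discussion, compares the proposed "axiom" against genuine primality and reports the first number on which they disagree.

    # Quick check (mine, not from the essay): the rule "n is prime iff n divides
    # 2^n - 2" agrees with true primality for every n below 341 and first fails there.

    def is_prime(n):
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    def divides_2n_minus_2(n):
        # n | 2^n - 2  <=>  2^n = 2 (mod n); pow with a modulus avoids huge numbers
        return pow(2, n, n) == 2 % n

    first_conflict = next(n for n in range(2, 10000)
                          if divides_2n_minus_2(n) != is_prime(n))
    print(first_conflict)              # 341 = 11 * 31, a composite that divides 2^341 - 2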
Suppose we then substitute a stronger axiom by saying n is a prime iff f(r^n) ≡ 0 (mod n), where r
is any root of f(x) = x^5 − x^3 − 2x^2 + 1. With this system we might go quite some time without
encountering a contradiction. When we finally do bump into a contradiction (e.g.,
2258745004684033) we could simply substitute an even stronger axiom. In fact, we can easily
specify an axiom of this kind for which the smallest actual exception is far beyond anyone's
(present) ability to find, and for which we have no theoretical proof that any exception even
exists. Thus, there is no direct proof of inconsistency. We might then, with enough
imagination, develop a plausible (e.g., as plausible as Banach-Tarski) non-finitistic system
within which we can actually prove that our arithmetic is consistent. In fact, it might actually be
consistent… but we would have no more justification to claim absolute certainty than with our
present arithmetic.
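The stronger axiom can be experimented with too, once a precise reading of the condition is fixed. The sketch below is my own interpretation, not the author's code: it reads "f(r^n) ≡ 0 (mod n) for a root r of f" as the polynomial congruence f(x^n) ≡ 0 (mod n, f(x)), a condition every prime n satisfies. The code checks that small primes pass and scans a small range for composite exceptions; the exception cited above, 2258745004684033, lies far beyond any such scan.

    # Sketch (my own reading of the test, not the essay's code): interpret the
    # condition as f(x^n) = 0 (mod n, f(x)).  Every prime n satisfies it, because
    # f(x)^n = f(x^n) (mod n) for prime n, and f(x) = 0 (mod f(x)).

    F = [1, 0, -2, -1, 0, 1]    # f(x) = x^5 - x^3 - 2x^2 + 1, constant term first

    def poly_mul_mod(a, b, n):
        """Multiply two polynomials, reducing mod n and mod f(x) via x^5 = x^3 + 2x^2 - 1."""
        prod = [0] * (len(a) + len(b) - 1)
        for i, ai in enumerate(a):
            for j, bj in enumerate(b):
                prod[i + j] = (prod[i + j] + ai * bj) % n
        for d in range(len(prod) - 1, 4, -1):   # eliminate degrees 5 and higher
            c, prod[d] = prod[d], 0
            prod[d - 2] = (prod[d - 2] + c) % n
            prod[d - 3] = (prod[d - 3] + 2 * c) % n
            prod[d - 5] = (prod[d - 5] - c) % n
        return [x % n for x in prod[:5]]

    def passes_test(n):
        """True iff f(x^n) = 0 (mod n, f(x))."""
        if n < 2:
            return False
        g, base, e = [1], [0, 1], n             # g(x) = x^n by square-and-multiply
        while e:
            if e & 1:
                g = poly_mul_mod(g, base, n)
            base = poly_mul_mod(base, base, n)
            e >>= 1
        value = [F[-1] % n]                     # evaluate f at g via Horner's rule
        for coeff in reversed(F[:-1]):
            value = poly_mul_mod(value, g, n)
            value[0] = (value[0] + coeff) % n
        return all(c % n == 0 for c in value)

    assert all(passes_test(p) for p in [2, 3, 5, 7, 11, 13, 101, 997])   # primes always pass
    print([n for n in range(4, 2000) if passes_test(n) and
           any(n % d == 0 for d in range(2, int(n ** 0.5) + 1))])
    # lists composites below 2000 that pass; the essay's cited exception,
    # 2258745004684033, lies far beyond this range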
As to the basic premise that we have no absolute proof of the consistency of arithmetic, here are
a few other people's thoughts on the subject:
A meta-mathematical proof of the consistency of arithmetic is not excluded by...Goedel's analysis. In
point of fact, meta-mathematical proofs of the consistency of arithmetic have been constructed, notably by
Gerhard Gentzen, a member of the Hilbert school, in 1936. But such proofs are in a sense pointless if, as
can be demonstrated, they employ rules of inference whose own consistency is as much open to doubt as
is the formal consistency of arithmetic itself. Thus, Gentzen used the so-called "principle of transfinite
mathematical induction" in his proof. But the principle in effect stipulates that a formula is derivable
from an infinite class of premises. Its use therefore requires the employment of nonfinitistic meta-mathematical notions, and so raises once more the question which Hilbert's original program was intended
to resolve.
-Ernest Nagel and James Newman
Gödel showed that...if anyone finds a proof that arithmetic is consistent, then it isn't!
-Ian Stewart
...Hence one cannot, using the usual methods, be certain that the axioms of arithmetic will not lead to
contradictions.
-Carl Boyer
An absolute consistency proof is one that does not assume the consistency of some other system...what
Gödel did was show that there must be "undecidable" statements within any [formal system]... and that
consistency is one of those undecidable propositions. In other words, the consistency of an all-embracing
formal system can neither be proved nor disproved within the formal system.
-Edna Kramer
Gentzen's discovery is that the Goedel obstacle to proving the consistency of number theory can be
overcome by using transfinite induction up to a sufficiently great ordinal... The original proposals of the
formalists to make classical mathematics secure by a consistency proof did not contemplate that such a
method as transfinite induction up to ε₀ would have to be used. To what extent the Gentzen proof can be
accepted as securing classical number theory in the sense of that problem formulation is in the present
state of affairs a matter for individual judgment...
-Kleene, "Introduction to Metamathematics"
Some mathematicians assert that there is a consistency proof of PA, and it is quite elementary,
using standard mathematical techniques (i.e., ZF). It consists of exhibiting a model. However, when
we speak of "exhibiting a model" we are referring to a relative consistency proof, not an
absolute consistency proof. Examples of relative consistency theorems are
If Euclidean geometry is consistent then non-Euclidean geometry is consistent.
If ZF is consistent then ZFC is consistent.
Relative consistency proofs assert nothing about the absolute consistency of any system; they
merely relate the consistency of one system to that of another. Here's what the Encyclopedic
Dictionary of Mathematics (2nd Ed) says on the subject:
Hilbert proved the consistency of Euclidean geometry by assuming the consistency of the theory of real
numbers. This is an example of a relative consistency proof, which reduces the consistency proof of one
system to that of another. Such a proof can be meaningful only when the latter system can somehow be
regarded as based on sounder ground than the former. To carry out the consistency proof of logic proper and set
theory, one must reduce it to that of another system with sounder ground. For this purpose, Hilbert
initiated metamathematics and the finitary standpoint...Let S be any consistent formal system containing
the theory of natural numbers. Then it is impossible to prove the consistency of S by utilizing only
arguments that can be formalized in S.... In [these] consistency proofs of pure number theory...,
transfinite induction up to the first ε-number, ε₀, is used, but all the other reasoning used in these proofs can
be presented in pure number theory. This shows that the legitimacy of transfinite induction up to ε₀ cannot
be proved in this latter theory.
All known consistency proofs of arithmetic rely on something like transfinite induction (or
possibly primitive recursive functionals of finite type), the consistency of which is no more
self-evident than that of arithmetic itself.
Oddly enough, some people (even some mathematicians) are under the impression that
Goedel’s results apply only to very limited formal systems. One mathematician wrote to me
that “there is no proof in first-order logic that arithmetic is consistent, but that has more to do
with the limitations of first-order logic than anything else, and there are other more general
types of logic in which proofs of the consistency of arithmetic are available.” Of course,
contrary to this individual’s claim, Goedel's results actually apply to any formal system that is
sufficiently complex to encompass and be modeled by arithmetic. Granted, if we postulate a
system that cannot be modeled (encoded) by arithmetic then other things are possible, but the
consistency of such a system would be at least as doubtful as the consistency of the system we
were trying to prove. For example, Gentzen's proof of the consistency of PA uses transfinite
induction, but surely it is pointless to try to resolve doubts about arithmetic by working with
transfinite induction, since the latter is even more dubious.
The inability of even many mathematicians to absorb the actual content and significance of
Goedel’s theorems is interesting in itself, as are the various misconstruals of those theorems,
which tend to reflect what the person thinks must be true. For example, we can see how
congenial is the idea that Goedel’s results apply only to a limited class of formal systems. The
fact that mathematicians at universities actually believe this is rather remarkable. It seems to be
a case of sophomoric backlash, in reaction to what can often seem like sensationalistic popular
accounts of Goedel’s theorems. Apparently it becomes a point of pride among math graduate
students to “see through the hype”, and condescendingly advise the less well-educated as to the
vacuity of Goedel’s theorems. (This would be fine if Goedel’s theorems actually were vacuous,
but since they aren’t, it isn’t.) Another apparent source of misunderstanding is the sheer
inability to believe that any rational person could doubt the consistency of, say, arithmetic.
Imagine the reaction of the typical mathematician to the even more radical suggestion that
every (sufficiently complex) formal system contains a contradiction at some point. When I
mentioned this to an email correspondent, he expressed utter incredulity, saying "You can't
possibly believe that simple arithmetic could contain an inconsistency! How would you
balance your check book?" This is an interesting question. I actually balance my checkbook
using a formal system called Quicken. Do I have a formal proof of the absolute consistency and
correctness of Quicken? No. Is it conceivable that Quicken might contain an imperfection that
could lead, in some circumstances, to an inconsistency? Certainly. But for many
mathematicians this situation must be a real paradox, so it’s worth examining in some detail.
Suppose I balance my checkbook with a program called Kash (so as not to sully the good name
of Quicken), and suppose this program implements arithmetic perfectly - with one exception.
The result of subtracting 5.555555 from 7.777777 is 2.222223. Now if B is my true balance
then I should have the formal theorems

B = B + q × [(7.777777 − 5.555555) − 2.222222]

for every value of q. Thus, in the formal system of Kash I can prove that 1 = 2 = 3 = 4.23 =
89.23 = anything. Clearly the Kash system has a consistency problem, because I can compute
my balance to be anything I want just by manipulating the error produced by that one particular
operation. ("So oft it chances in particular men...") But here is the fact that must seem
paradoxical to many mathematicians: thousands of people have used Kash for years, and not a
single error has appeared in the results. How can this be, given that Kash is formally
inconsistent?
The answer is that although Kash is globally inconsistent, it possesses a high degree of local
consistency. Traveling from any given premise (such as 5+7) directly to the evaluation (e.g.,
12), we are very unlikely to encounter the inconsistency. Of course, in a perfectly consistent
system we could take any of the infinitely many paths from 5+7 to the evaluation and we would
always arrive at the same result, which is clearly not true within Kash (in which we could, by
round-about formal manipulations, evaluate 5+7 to be -3511.1093, or any other value we
wanted). Nevertheless, for almost all paths leading from a given premise to its evaluation, the
result is the same.
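A toy version of Kash makes the point tangible. The sketch below is entirely my own construction (the name kash_sub and the specific figures are only meant to echo the story above): arithmetic that is exact except on one subtraction, so that direct computations almost never notice the flaw, while a detour through it lets one "derive" any balance whatever.

    # Toy model (my own illustration, not the author's): arithmetic that is exact
    # except for one buggy subtraction, off by 0.000001, as in the Kash story above.
    from decimal import Decimal

    def kash_sub(a, b):
        """Subtraction as Kash performs it: correct except in one specific case."""
        if (a, b) == (Decimal("7.777777"), Decimal("5.555555")):
            return Decimal("2.222223")       # the single flaw
        return a - b

    # Local consistency: ordinary, direct computations come out right.
    assert kash_sub(Decimal("12.50"), Decimal("4.25")) == Decimal("8.25")

    # Global inconsistency: a detour through the flaw shifts any balance at will.
    balance = Decimal("100.00")
    detour = kash_sub(Decimal("7.777777"), Decimal("5.555555")) - Decimal("2.222222")
    print(balance + 1000000 * detour)        # prints 101.000000, a "proof" of a different balance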
Now consider our formal system of arithmetic. Many people seem agog at the suggestion that
our formalization of arithmetic might possibly be inconsistent at some point. Clearly our
arithmetic must possess a very high degree of local consistency, because otherwise we would
have observed anomalies long before now. However, are we really justified in asserting that
every one of the infinitely many paths from every premise to its evaluation gives the same
result? As with the system Kash, this question can't be answered simply by observing that our
checkbooks usually seem to balance.
Moreover, the question cannot even be answered within any formal system that can be modeled
by the natural numbers. It is evidently necessary to assume the validity of something like
transfinite induction to prove the consistency of arithmetic. But how sure are we that a formal
system that includes transfinite induction is totally consistent? (If, under the assumption of
transfinite induction, we had found that arithmetic was not consistent, would we have
abandoned arithmetic... or transfinite induction?) The only way we know how to prove this is
by assuming still less self-evidently consistent procedures, and so on.
The points I'm trying to make are
(1) We have no meaningful proof of the consistency of arithmetic.
(2) If arithmetic is inconsistent, it does not follow that our checkbooks must all be out of
balance. It is entirely possible that we could adjust our formalization of arithmetic
to patch up the inconsistency, and almost all elementary results would remain
unchanged.
(3) However, highly elaborate and lengthy chains of deduction in the far reaches of
advanced number theory might need to be re-evaluated in the light of our patched-up
formalization.
Of course, the consistency or inconsistency of arithmetic can only be appraised in the context of
a completely formalized system, but the very act of formalizing is problematic, because it
invariably presupposes prior knowledge on the part of the reader. Thus it can never be
completely clear that our formalization corresponds perfectly with what we call `arithmetic'.
Our efforts to project (exert) our formalizations past any undefined prior knowledge tend to
lead to the possibility of contradictions and inconsistencies. But when such an inconsistency
comes to light, we don’t say the ideal of “arithmetic” is faulty. Rather we say "Oops, our
formalization didn't quite correspond to true ideal arithmetic. Now, here's the final ultimate and
absolutely true formalization... (I know we said this last time, but this time our formalization
really is perfect.)" As a matter of fact, this very thing has occurred historically. In this sense we
are tacitly positing the existence of a Platonic ideal "ARITHMETIC" that is eternal, perfect, and
true, while acknowledging that any given formalization of this Platonic ideal may be flawed.
The problem is that our formal proofs are based on a specific formalization of arithmetic, not on
the ideal Platonic ARITHMETIC, so we are not justified in transferring our sublime confidence
in the Platonic ideal onto our formal proofs.
Claims that arithmetic is indubitable, while acknowledging that our formalization may not be
perfect, are essentially equivalent to saying that we are always right, because even if we are
found to have said something wrong, that's not what we meant. Any given theorem can be
regarded as a theorem about the ideal of ARITHMETIC, prior to any particular formalization,
but then the first step in attempting to prove it is to select a formal system within which to
work. Of course, it's trivial to devise a formal system labeled "arithmetic" and then prove X
within that system. For example, take PA+X. But the question is whether that system really
represents ARITHMETIC, one requirement of which is consistency.
We don't know what, if any, parts of our present mathematics would be rendered uninteresting
by the discovery of an inconsistency in our present formalization of arithmetic, because it
would depend on the nature of the inconsistency and the steps taken to resolve it. Once the
patched-up formalization was in place, we would re-evaluate all of our mathematics to see
which, if any, proofs no longer work in the new improved "arithmetic". One would expect that
almost all present theorems would survive. The theorems most likely to be in jeopardy would
be the most elaborate, far-reaching, and "deep" results, because their proofs tax the resources of
our present system the most.
Some mathematicians respond to the assertion that we have no meaningful proof of the
consistency of arithmetic by claiming that “the usual ZFC proof is quite meaningful." But this
seems to hinge on different understandings of the meaning of “meaningful”. Consider the two
well known theorems
(1) con(ZF) implies con(ZFC)
(2) con(ZF) implies con(PA)
From a foundational standpoint, these two theorems act in opposite directions. In case (1), if
the result had been {con(ZF) implies NOTcon(ZFC)} then it would have undermined our
confidence in the "C" part of ZFC. However, if the result of case (2) had been {con(ZF)
implies NOTcon(PA)}, it would presumably have undermined our confidence in ZF, not
in PA (because the principles of PA are considered to be more self-evidently consistent than
those of ZF). The only kind of proof that would enhance our confidence in PA would be of the
form
con(X) implies con(PA)
where X is a system whose consistency is MORE self-evident than that of PA. (This is the key
point.) For example, Hilbert hoped that with X = 1st Order Logic it would be possible to prove
this theorem, thereby enhancing our confidence in the consistency of PA. That would have
been a meaningful proof of the consistency of PA. However, it's now known that such a proof
is impossible (unless you believe in the existence of a formal system that is more self-evidently
consistent than PA but that cannot be modeled within the system of natural numbers).
Others argue that it may be possible to regard the theory of primitive recursive functionals as
more evidently consistent than PA. It's well known that neither transfinite induction nor the theory
of primitive recursive functionals can be modeled within the system of natural numbers, but
we do not need to claim that it would be impossible to regard such principles as more evidently
consistent than PA. We simply observe that no one does – and for good reason. Each
represents a non-finitistic extension of formal principles, which is precisely the source of
uncertainty in the consistency of PA. Again, there is a little thought experiment that sometimes
helps people sort out their own hierarchy of faith: If, assuming the consistency of the theory of
primitive recursive functionals, one could prove that PA is NOT consistent, would we be more
inclined to abandon PA or the theory of primitive recursive functionals?
Some mathematicians assert that doubts about whether PA is consistent, and whether it can be
proven to be consistent, are trivial and pointless, partly because this places all of mathematics in
doubt. However, as to the triviality, much of the most interesting and profound mathematics of
this century has been concerned with just such doubts. As to the number of proofs that are cast
into doubt by the possibility of inconsistency in PA, the "perfect consistency or total gibberish"
approach to formal systems evidently favored by many mathematicians is not really justified.
It just so happened that Russell and Whitehead's FOL was a convenient finitistic vehicle to use
as an example, although subsequent developments showed that this maps to computability,
from which the idea of a universal Turing machine yields a large segment (if not all) of what
can be called cognition. Of course, people sometimes raise the possibility of a finitistic system
that cannot be modeled within the theory of natural numbers but, as Ernst Nagel remarked, "no
one today appears to have a clear idea of what a finitistic proof would be like that is NOT
capable of formulation within arithmetic".
PA can be modeled within ZF. It follows that con(ZF) implies con(PA). This was simply
presented as an illustration of how the formal meaning of a theorem of this form depends on our
"a priori" perceptions of the relative soundness of the two systems.
Some mathematicians have alluded to the "usual" proof of con(PA) but have not specified the
formal system within which this "usual" proof resides. Since there are infinitely many
possibilities, it's not possible to guess which specific one they have in mind. In general terms, if
there exists a proof of con(PA) within a formal system X, then we have the meta-theorem {
con(X) implies con(PA) }, so we can replace X with whatever formal system we consider to be
the "usual" one for proofs of PA. The meaningfulness of such a theorem depends on our
perception of the relative soundness of the systems X and PA. I assert that no system X within
which con(PA) can be proved is more self-evidently consistent than PA itself.
Here's a related quote from "Mathematical Logic", 2nd Ed, by H. D. Ebbinghaus, J. Flum, and
W. Thomas, Springer-Verlag, 1994:
The above argument [Goedel's 2nd Thm] can be transferred to other systems where there is a substitute
for the natural numbers and where R-decidable relations and R-computable functions are representable.
In particular, it applies to systems of axioms for set theory such as ZFC... Since contemporary
mathematics can be based on the ZFC axioms, and since...the consistency of ZFC cannot be proved using
only means available within ZFC, we can formulate [this theorem] as follows: If mathematics is
consistent, we cannot prove its consistency by mathematical means.
I wanted to include this quote because every time I say there is no meaningful formal proof of
the consistency of arithmetic (PA), someone always says "Oh yes there is, just work in ZFC, or
ZF, or PA + transfinite induction, or PA + primitive recursive functionals, or PA + con(PA), or
just use the "usual" proof, or (as one mathematician advised me) just stop and think about it!"
But none of these proposed "proofs" really adds anything to the indubitability of con(PA). Of
course, it's perfectly acceptable to say "I'm simply not interested in the foundational issues of
mathematics or in the meaning of consistency for formal systems". However, disinterest should
not be presented as a substitute for proof.
In response to the rhetorical question “Are we really justified in asserting that every one of the
infinitely many paths from every premise to its evaluation gives the same result?”, some
mathematicians will say that if we allow associativity of modus ponens, then there is essentially
only one proof of any classical formula. However, this misses a crucial point. If we define the
"essential path" between any two points in space as a function only of the beginning and ending
points, then there is essentially only one path from New York to Los Angeles. This definition
of a "path", like the corresponding definition of a "proof" is highly reductionist. We are
certainly free to adopt that definition if we wish, but in so doing we ignore whole dimensions of
non-trivial structure, as well as making it impossible to reason meaningfully about "imperfectly
consistent" systems which, for all we know, may include every formal system that exists. The
unmeasured application of the modus ponens (rule of detachment) is precisely what misleads
people into thinking that a formal system must be either absolutely consistent or total
gibberish. Then, when they consider systems such as naive set theory, PA, or ZFC, which are
clearly not total gibberish, they conclude that they must be absolutely consistent.
Some mathematicians claim that PA is a collection of simple statements, all of which are
manifestly true about the positive integers. However, although the explicit axioms look simple,
they are meaningful only in the context of a vast mathematical apparatus that includes
conventions regarding names of variables, rules about substitutions, and more generally rules of
inference and implication. For example, if we propose a sequence of variable names x, x', x''
and so on, we need to know that all these names are different. Of course, most of us would
accept this as intuitively obvious, but if the formal system is to be absolutely watertight, we
must prove that they are different. Thus we end up needing to apply the Peano axioms to the
language used to talk about them.
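To see how quickly the metalanguage leans on arithmetic, here is a tiny illustrative Python sketch (the helper name var is my own, not taken from any formal text) of the usual convention of generating variable names by appending primes; deciding that two names are distinct is just deciding that two counts of primes are unequal:

def var(n):
    # The n-th variable name: x, x', x'', x''', ...
    return "x" + "'" * n

# var(m) and var(n) are distinct exactly when m != n, so even the
# bookkeeping of the object language presupposes facts about the
# natural numbers we are trying to formalize.
assert var(2) != var(3)
assert var(5) == "x'''''"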
As Jan Nienhuys observed, one of the basic subtleties related to Peano’s axioms is that they
implicitly assert that any non-empty set of positive integers has a smallest element. In view of Ramsey
numbers, it’s clear that even in simple cases this is more a metaphysical assertion than a matter
of "manifestly true". How can we have confidence that any set of positive integers, no matter
how unwieldy its definition, has a smallest element? (The unwieldiness of definitions is made
possible by having an unlimited amount of positive integers and variable names at our
disposal.) The Peano axioms plus a formal system in which they are embedded must be free of
contradictions, i.e. for no statement X is both X and not-X provable. (Also, note that we allow
variables such as X to denote statements, so the formal system should have watertight
descriptions - no hand waving or illustration by example allowed - of how to go about making
variables represent statements.) Only such a system as a whole can conceivably contain a
contradiction. Without the formal framework a contradiction doesn't make much sense.
Here’s a sampling of comments on this topic received from various mathematicians:
Personally I think that mathematicians are sometimes too hung up on the details of formalizations and
lose track of the actual math they are talking about. As the old saying goes, mathematicians are Platonists
on weekdays and formalists on Sunday. I personally think that this tendency is a good and healthy one!
But Goedel's Theorem can be derived even with a substantially weakened Axiom of Induction, and I
believe that such a weakened axiom would then also lead to a contradiction, so it seems that we would
have to throw away all induction.
If the consistency of FOL is somehow a hypothetical matter, why should I assume that you or I make any
sense whatever in our babbling?
If arithmetic was inconsistent, wouldn't all the bridges fall down?
If so, it's an awfully trivial point, and hardly worth making. She might as well say "Everything we think
might have an error in it because some demon somewhere is messing with our brains." Quite true. So
what?
A proof from ZF brings with it the supreme confidence that a century of working with ZF and beyond has
given us.
One mathematician argued (apparently with a straight face) that "The consistency of ZFC is
provable in ZFC+Con(ZFC), the consistency of ZFC+Con(ZFC) is provable in
ZFC+Con(ZFC)+Con(ZFC+Con(ZFC)), etc., so the infinite hierarchy of such systems provides a
complete proof of the consistency of ZFC. Or, just to press the point, there is actually a system
which proves the consistency of any system... This system even proves its own consistency.”
This proposed hierarchy of systems possesses an interesting structure. For example, each
system has a successor which is assumed to be a system, and distinct from any of the previous
systems (i.e., no loops). And of course it's necessary to justify speaking of the completed
infinite hierarchy, implying induction. The whole concept might almost be formalized as a set
of axioms along the lines of:
(i) ZFC is a System.
(ii) For any System X, the successor X+con(X) is also a System.
(iii) ZFC is not the successor of a System.
(iv) If the successors of Systems X and Y are the same, then X and
Y are the same.
(v) If ZFC is in a set M, and if for every System X in M the
successor X+con(X) is in M, then M contains every System.
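Spelled out in code, the successor structure of this proposed hierarchy looks like a miniature copy of the natural numbers. Here is a purely illustrative Python sketch (the names successor and hierarchy are mine); note that even enumerating the levels indexes them by 0, 1, 2, ..., the very structure whose consistency is in question:

def successor(system):
    # Adjoin the consistency statement: ZFC -> ZFC+Con(ZFC), etc.
    return system + "+Con(" + system + ")"

def hierarchy(base="ZFC", steps=3):
    # Yield the first few levels of the proposed tower of systems.
    system = base
    for _ in range(steps + 1):
        yield system
        system = successor(system)

for level, s in enumerate(hierarchy()):
    print(level, s)
# 0 ZFC
# 1 ZFC+Con(ZFC)
# 2 ZFC+Con(ZFC)+Con(ZFC+Con(ZFC))
# ...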
Now we're getting somewhere! If we can just prove this is consistent... By the way, here's an
interesting quote from Ian Stewart's book "The Problems of Mathematics":
Mathematical logicians study the relative consistency of different axiomatic theories in terms of
consistency strength. One theory has greater consistency strength than another if its consistency implies
the consistency of the other (and, in particular, if it can model the other). The central problem in
mathematical logic is to determine the consistency strength in this ordering of any given piece of
mathematics. One of the weakest systems is ordinary arithmetic, as formalized axiomatically by Giuseppe
Peano... Analysis finds its place in a stronger theory, called second-order arithmetic. Still stronger
theories arise when we axiomatize set theory itself. The standard version, Zermelo-Fraenkel Set Theory, is
still quite weak, although the gap between it and analysis is large, in the sense that many mathematical
results require MORE than ordinary analysis but LESS than the whole of ZF for their proofs.
It should be noted that Peano's Postulates (P1-P4 in Andrews, "An Introduction to
Mathematical Logic and Type Theory") assert the existence of an infinitely large set. Since it is
not possible to simply 'exhibit' an infinite model, any such assertion must simply assume that it
is possible to speak about such things without the possibility of contradictions. And indeed the
Axiom of Infinity is one of the axioms of ZF. Hence the assertion that the consistency of PA can be
"proved" within ZF can be taken as nothing more than a joke.
Much of this depends on the status of induction. Normally we are careful to distinguish between
common induction and mathematical induction. The former consists of drawing general
conclusions empirically from a finite set of specific examples. The latter is understood to be an
exact mathematical technique that, combined with the other axioms of arithmetic, can be used
to rigorously prove things about infinite sets of integers. For example, by examining the square
number 25 we might observe that it equals the sum of the first five odd integers, i.e., 5² =
1+3+5+7+9. We might then check a few more squares and by common induction draw the
general conclusion that the square of N is always equal to the sum of the first N odd numbers.
In contrast, mathematical induction would proceed by first noting that the proposition is
trivially true for the case N=1. Moreover, if it's true for any given integer n it is also true for
n+1 because (n+1)² − n² equals 2n+1, which is the (n+1)th odd number. Thus, by mathematical
induction it follows that the proposition is true for all N.
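The contrast can be made concrete in a few lines of Python (an illustrative sketch of mine, using the sympy library for the symbolic step): the loop below is common induction over finitely many checked cases, while the symbolic identity is the implicative link that mathematical induction requires in order to cover all N at once:

# Common induction: check N^2 = 1 + 3 + ... + (2N-1) for a handful of cases.
for N in range(1, 11):
    assert N**2 == sum(2*k - 1 for k in range(1, N + 1))

# Mathematical induction: verify the inductive step (n+1)^2 - n^2 = 2n + 1
# for the symbol n, not merely for the cases we happened to test.
import sympy
n = sympy.symbols('n')
assert sympy.expand((n + 1)**2 - n**2) == 2*n + 1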
Understandably, many mathematicians take it as an insult to have mathematical induction
confused with common induction. The crucial difference is that MI requires a formal
implicative relation connecting all possible instances of the proposition, whereas CI leaps to a
general conclusion simply from the fact that the proposition is true for a finite number of
specific instances. Of course, it's easy to construct examples where CI leads to a wrong
conclusion but, significantly, CI often leads to correct conclusions. We could devote an entire
discussion to "the unreasonable effectiveness of common thought processes", but suffice it to
say that for a system of limited complexity the possibilities can often be "spanned" by a finite
number of instances.
In any case, questions about the consistency of arithmetic may cause us to view the distinction
between MI and CI in a different light. How do we know that (n+1)² − n² always equals 2n+1?
Of course this is a trivial example; in advanced proofs the formal implicative connection can be
much less self-evident. Note that when challenged as to the absolute consistency of formal
arithmetic, one response was to speak of "the supreme confidence that a century of working
with ZF has given us". This, of course, is nothing but common induction. So too are claims
that arithmetic must be absolutely consistent because otherwise bridges couldn't stand up and
checkbooks wouldn't balance. (These last two are not only common induction, they are bad
common induction.)
Based on these reactions, we may wonder whether, ultimately, the two kinds of induction really
are as distinct as is generally supposed. It would seem more accurate to say that mathematical
induction reduces a problem to a piece of common induction in which we have the very highest
confidence, because it represents the pure abstracted essence of predictability, order, and reason
that we've been able to infer from our existential experience. Nevertheless, this inference is
ultimately nothing more (or less) than common induction.
It's clear that many people are highly disdainful of attempts to examine the fundamental basis of
knowledge. In particular, some mathematicians evidently take it as an affront to the dignity and
value of their profession (not to mention their lives) to have such questions raised. (One
professional mathematician objected to my quoting from Morris Kline's "Mathematics: The
Loss of Certainty", advising me that it is “a very very very very very very pathetic and ignorant
book”.) In general, I think people have varying thresholds of tolerance for self-doubt. For
many people the exploration of philosophical questions reaches its zenith at the point of
adolescent sophistry, as in "did you ever think that maybe none of this is real, and some demon
is just messing with our minds?" Never progressing further, for the rest of their lives whenever
they encounter an issue of fundamental doubt they project their own adolescent interpretation
onto the question and dismiss it accordingly.
In any case, this discussion has provided some nice examples of reactions to such questions,
including outrage, condescension, bafflement, fascination, and complete disinterest. The most
controversial point seems to have been my contention that every formal system is inconsistent. I
was therefore interested to read in Harry Kessler’s “Diaries of a Cosmopolitan” about a
discussion that Kessler had had at a dinner party in Berlin in 1924.
I talked for quite awhile to Albert Einstein at a banker's jubilee banquet where we both felt rather out of
place. In reply to my question what problems he was working on now, he said that he was engaged in
thinking. Giving thought to almost any scientific proposition almost invariably brings progress with it. For
without exception, every scientific proposition was wrong. That was due to human inadequacy of thought
and inability to comprehend nature, so that every abstract formulation about it was always inconsistent
somewhere. Therefore, every time he checked a scientific proposition, his previous acceptance of it broke
down and led to a new, more precise formulation. This was again inconsistent in some respects and
consequently resulted in fresh formulations, and so on indefinitely.
http://www.mathpages.com/home/kmath347/kmath347.htm
Sunday, August 28, 2005
Shameless Plug
Here's a brief description of my (mathematical) novel that will be out in the next 12-18 months
(2 of you sent mail asking for it and I don't need much more incentive than that!):
The human heart yearns for absolute truth and certainty. But can we be truly certain about
anything—or is everything we believe accidental and meaningless, shaped by the happenstance
of genetic and social inheritance? Perhaps mathematics alone, with its uncompromising rigor,
can lead us to certainty. In our 90,000 word novel, we examine where mathematics can and
cannot take us in the quest for certainty.
Our book will show the reader the following: First, that mathematics can be deeply beautiful—in
this regard it is not unlike music or painting; second, that mathematics has profound things to say
about whether absolute truth is obtainable; and lastly, that a novel is the best medium through
which to convey the excitement and meaning of doing mathematics.
Our protagonist, Vijay Sahni, an Indian mathematician, has glimpsed the certainty that
mathematics can provide and does not see why its methods cannot be extended to all branches of
human knowledge, including religion. Arriving to pursue his academic career in a small New
Jersey town in 1919, his outspoken views land him in jail, charged under a little-known
Blasphemy law (on the state statute books to this day). His beliefs are challenged by Judge John
Taylor, who does not believe that mathematical deduction can be applied to matters of faith. In
their discussions the two men discover the power—and the fallibility—of Euclid's axiomatic
treatment of geometry, long considered the gold standard in human certainty. In the end both
Vijay and Judge Taylor come to understand that doubt must always accompany knowledge.
posted by Gaurav at 2:39 PM
Gaurav Suri (California, United States):
I'm a management consultant by vocation and a mathematician by avocation. I've authored a
mathematical novel that touches on several topics discussed on these pages. It has been accepted for
publication and will be out next year.
Here's a brief description of a mathematics-based novel written by Gaurav Suri and Hartosh
Singh Bal. It's called A Certain Ambiguity and will be published in 2007 by Princeton. They've
offered books to Mathforge for review--hopefully they'll send an advance copy. Stay tuned...
Two Indians take a novel approach to maths
Sachin Kalbag Tuesday, August 08, 2006 00:21 IST
Gaurav Suri, Hartosh Bal impress Princeton with unique tale.
WASHINGTON: When Princeton University decides to publish your book, it usually sends you
an author response form. Like any other form, it has the usual zzzz questions about your name,
address, and background. But Q22 is interesting. It asks you to list the prizes and awards your
book will qualify for.
San Francisco-based management consultant and Stanford University alumnus Gaurav Suri
answered that question in one word: Pulitzer. He then added a smiley for good measure.
Did he mean that in jest? Maybe he did, maybe not. There is, like the title of his groundbreaking
mathematical novel, a certain ambiguity to it. Or, as Suri puts it, “Perhaps the search for absolute
certainty is destined to remain just a search.”
Written by Suri and his childhood friend and New York University graduate Hartosh Singh Bal,
A Certain Ambiguity, a mathematical novel, will be published by Princeton in 2007. It tells the
story of an Indian mathematician, who comes to the US in 1919 and gets caught in the web of a
little-known blasphemy law, and how his grandson — a Stanford student — discovers his story.
The book is the result of long instant messenger chats between Suri and Bal. The latter, a former
mathematics teacher, is a freelance journalist and wildlife photographer based in New Delhi.
“We often talked late into the night,” says Suri. “First about the mathematics and then about the
plot-lines.”
They did that for over five years, and got down to sending the finished book to publishers only
late last year.
Their love for maths, however, is older. “We were 17 when we fell in love with mathematics.
The book that did that was George Gamow’s One, Two, Three… Infinity.”
Suri hopes his readers will learn to love mathematics, because “life and mathematics are closely
related”.
Suri told Princeton: “As the two main characters in the book struggle to determine how certain
they are about what they believe, they come to find that the dilemmas of mathematics and
ordinary life cannot be separated.”
Therein, perhaps, lies an ambiguous certainty.
http://www.dnaindia.com/report.asp?NewsID=1046029
Numbers and Experience
It is often argued that while geometry is unable to adequately describe the world around us,
numbers are more reliable, more certain. 2 cows plus 2 cows always equal 4 cows. Unlike
non-Euclidean geometries, there are no non-standard arithmetics. Gauss, for a time at least, believed
that ‘truth resides in number.’ In a similar vein Jacobi said “God ever arithmetizes” (as opposed
to eternally geometrizing).
However, as Kline observes in Mathematics: The Loss of Certainty, the sharpest attack on the
truth of arithmetic came from Hermann von Helmholtz, a superb physicist and mathematician.
In his Counting and Measuring he observed that the problem in arithmetic lay in the automatic
application of arithmetic to physical phenomena. Some kinds of experiences suggest whole
numbers and fractions, while others don't: one raindrop added to another does not make two
raindrops. Two pools of water, one at 40° and the other at 50°, when mixed together do not
make a pool of water at 90°. Lebesgue facetiously pointed out that if one puts a lion and a
rabbit in a cage, one will not find two animals an hour later! Helmholtz gives many (more
serious) examples but his overarching point is that only experience can tell us where to apply,
and not apply, standard arithmetic.
Like Euclidean Geometry, arithmetic is not absolutely applicable to the physical world.
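To make Helmholtz's point about the pools concrete, here is a minimal illustrative Python sketch (the function names are mine): naive addition of the temperatures would predict 90°, while the description that experience actually supports is a volume-weighted average:

def naive_add(t1, t2):
    # What a blind application of arithmetic to temperatures would suggest.
    return t1 + t2

def mix(t1, v1, t2, v2):
    # What experience tells us happens when two pools of water are combined
    # (idealized: same liquid, no heat loss).
    return (t1 * v1 + t2 * v2) / (v1 + v2)

print(naive_add(40, 50))       # 90  -- not what we observe
print(mix(40, 1.0, 50, 1.0))   # 45.0 -- equal volumes settle at the average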
posted by Gaurav at 11:27 PM
Monday, June 13, 2005
Evolutionary Mathematics
Chaitin, as he often does, has got me thinking. He writes:
Von Neumann also said that we ought to have a general mathematical theory of the evolution of
life... But we want it to be a very general theory, we don't want to get involved in low-level
questions like biochemistry or geology... He insisted that we should do things in a more general
way, because von Neumann believed, and I guess I do too, that if Darwin is right, then it's
probably a very general thing.
For example, there is the idea of genetic programming, that's a computer version of this. Instead
of writing a program to do something, you sort of evolve it by trial and error. And it seems to
work remarkably well, but can you prove that this has got to be the case? Or take a look at Tom
Ray's Tierra... Some of these computer models of biology almost seem to work too well---the
problem is that there's no theoretical understanding why they work so well. If you run Ray's
model on the computer you get these parasites and hyperparasites, you get a whole ecology.
That's just terrific, but as a pure mathematician I'm looking for theoretical understanding, I'm
looking for a general theory that starts by defining what an organism is and how you measure its
complexity, and that proves that organisms have to evolve and increase in complexity. That's
what I want, wouldn't that be nice?
And if you could do that, it might shed some light on how general the phenomenon of evolution
is, and whether there's likely to be life elsewhere in the universe. Of course, even if
mathematicians never come up with such a theory, we'll probably find out by visiting other
places and seeing if there's life there... But anyway, von Neumann had proposed this as an
interesting question, and at one point in my deluded youth I thought that maybe program-size
complexity had something to do with evolution... But I don't think so anymore, because I was
never able to get anywhere with this idea...
Tons of interesting stuff to chew on, but I'll limit myself to this: Imagine a simulation where you
have two entities: organisms and resources. The organisms are just data structures which
reproduce when they have been getting enough resources. The resources are re-generable and are
of various types.
Now let's add on a few complexities: Assume that an organism 'eats' only certain types of
resources. So Organism 42 can only live on Resource 118 for example. Further assume that the
quantity of Resources stays relatively stable...with exceptions of rare time units of plenty and
others (also rare) of drought. Also assume that there can be more than one type of Organism that
consumes a certain type of Resource, and also that there are Resources that are not consumed by
any organism when the simulation starts.
An Organism will then have the following data elements: Its type [corresponds to the species it
belongs to]; its number [i.e. its name]; the Resource number(s) it consumes; its wellness number
- a measure of how well fed the organism is - if the wellness number goes over a limit the
organism will reproduce; an organism competitive index which will measure how well the
individual competes within his species; and a species competitive number that measures how
well the species competes with other species vying for the same resource. Reproduction passes
on the competitive indices to the progeny. When the wellness index falls below a certain level,
the organism dies.
Now also imagine that you have random mutations. A random mutation could change the type of
resources an individual consumes and/or its competitive indices (either up or down).
These are only the barest details...but I hope you believe that it is possible to capture the main
points of Darwin's theory in a reasonable simulation.
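A bare-bones sketch of such a simulation might look as follows, in Python (all names, thresholds, and rates here are illustrative guesses of mine, not part of the description above):

import random
from dataclasses import dataclass

@dataclass
class Organism:
    species: int      # its type (the species it belongs to)
    food: int         # the Resource number it consumes
    wellness: float   # how well fed it is
    fitness: float    # competitive index, 0.0 to 1.0

REPRODUCE_AT, DIE_AT, MUTATION_RATE = 10.0, 0.0, 0.05

def step(organisms, resources):
    # One time unit: each organism tries to feed, may die, and may reproduce
    # (with a small chance that its offspring mutates its diet or fitness).
    survivors = []
    for o in organisms:
        if resources.get(o.food, 0) > 0 and random.random() < o.fitness:
            resources[o.food] -= 1
            o.wellness += 1.0
        else:
            o.wellness -= 1.0
        if o.wellness <= DIE_AT:
            continue            # the organism dies
        survivors.append(o)
        if o.wellness >= REPRODUCE_AT:
            o.wellness /= 2
            child = Organism(o.species, o.food, o.wellness, o.fitness)
            if random.random() < MUTATION_RATE:
                child.food = random.choice(list(resources))
                child.fitness = min(1.0, max(0.0, child.fitness + random.uniform(-0.1, 0.1)))
            survivors.append(child)
    for r in resources:
        resources[r] += 1       # resources slowly regenerate
    return survivors, resources

random.seed(0)
resources = {r: 50 for r in range(5)}
organisms = [Organism(s, s % 5, 5.0, 0.5) for s in range(20)]
for t in range(100):
    organisms, resources = step(organisms, resources)
print(len(organisms), "organisms alive after 100 steps")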
Hit start and run the simulation: You will probably see organisms dying and being born; species
will be created by the right mutations - they will also thrive or struggle - but eventually all will
die out. The world itself may reach some kind of stable equilibrium, but more likely than not...at
some point we'd hit zero organisms or zero resources.
All this is worth doing in its own right (in fact I'd be shocked if someone hasn't already done it),
but now, just for fun, imagine one last externality: Say that organisms of a certain complexity
level can perceive a proportional complexity of mathematical truths. So for example an organism
of complexity index 1088 could really 'get' that there can be no largest prime (but other, more
difficult theorems are beyond it), and an organism of complexity index 4063 could 'get' the prime
number theorem ('get' = a deep understanding that does not allow for the result not to be true.
Similar, but not equal to, proof).
It seems to me then that there will always be mathematical statements that we humans couldn't
get, no matter what.
This is far from airtight, but there may be something to chew on here.
--Gaurav Suri
posted by Gaurav at 5:21 PM
http://meaningofmath.blogspot.com/
Tuesday, June 07, 2005
The Voynich Manuscript
The Voynich manuscript is a very old 230+ page manuscript written in a code that no one has
been able to crack. Here's the Wikipedia entry:
The Voynich manuscript is a mysterious illustrated book of unknown contents, written some 600
years ago by an anonymous author in an unidentified alphabet and unintelligible language.
Over its recorded existence, the Voynich manuscript has been the object of intense study by many
professional and amateur cryptographers — including some top American and British
codebreakers of World War II fame — who all failed to decipher a single word. This string of
egregious failures has turned the Voynich manuscript into the Holy Grail of historical
cryptology; but it has also given weight to the theory that the book is nothing but an elaborate
hoax — a meaningless sequence of random symbols.
The book is named after the Russian-American book dealer Wilfrid M. Voynich, who acquired it
in 1912. It is presently item MS 408 in the Beinecke Rare Book Library of Yale University.
The book has strange drawings of flowers, alien looking plants and naked women. Its history is
utterly fascinating. Find out more here. Almost as fascinating as the book itself is the history of
the men who have attempted to decipher the symbols. In many cases they have bolted on semi-plausible theories even though there were few supporting facts to be had.
All this forces me to ask - where does the meaning of anything lie? Surely it is not in the
symbols we use to communicate the ideas. No, meaning must lie in the mind of the humans
deciphering the language/symbol. The human mind has the power to give anything meaning; it
also has the power to force meaning where there is none to be had. Be it the notion of the color
red, the undecidability of the Continuum Hypothesis or the rhythms of the Voynich manuscript.
--Gaurav
posted by Gaurav at 11:03 PM
Friday, May 20, 2005
First thoughts on Rebecca Goldstein’s, Incompleteness: The proof and
paradox of Kurt Gödel
I bought this book despite myself. I’ve carefully studied Gödel’s Incompleteness Theorems and
expected Goldstein to give a soft, non-rigorous, largely biographical treatment which wouldn't
teach me anything new. I bought the book almost out of duty – it is after all in the subject I care
most deeply about – I should read it just in case. I am glad I took the chance. This is a great
book, and I am not one to use the term loosely.
The power of the book doesn’t come from its treatment of the theorem itself (she does an
adequate job, but others have done better. See for example Nagel and Newman’s classic, Gödel’s
Proof for a fine non-technical treatment); rather the book's achievement is that it puts Gödel's
work in context. Goldstein successfully (and finally) gives Gödel’s theorems the philosophical
interpretation that he himself would have intended.
Before reading Incompleteness I often wondered why Gödel, an avowed Platonist, did most of
his work in Mathematical Logic, the most formalist of all mathematical fields. Also, why did he
join the Logical Positivists of Vienna who in their way were the most extreme kind of
Formalists; and lastly why did Gödel associate himself with a group who revered the teachings
of Wittgenstein – the very same Wittgenstein who essentially claimed that all of mathematics
was a mere tautology (a claim that was almost surely quite repulsive to Gödel, and to almost
every other mathematician).
Goldstein answered all of this (and more). She gets her answers not from the mathematics, but
from the story of Gödel’s life and the philosophical battles that drove him.
In brief, the story is that the Logical Positivists essentially believed that truth lived in the precise,
meaning-aware use of language. According to them, it is only possible to identify a statement as
being true or false by proving or disproving it by experience. Logic and mathematics were
excluded from this rule; they claimed that mathematics was a branch of logic and was for all
intents and purposes a mere tautology.
Gödel on the other hand was a Platonist; he believed that mathematicians uncovered truths about
the universe, and mathematical concepts were merely communicated by—but not contained
within—its equations and symbols. Yet, confusingly, Gödel belonged to a Positivist group. He
largely stayed silent through their meetings, neither objecting nor agreeing, for that was not his
way.
But the internal storm of disagreement that welled within him did lead him to prove that
Positivists were wrong. He proved that the structural manipulation of mathematical symbols
could not yield all statements that we know to be true. He demonstrated a ‘true’ statement that
was not provable—which should have banished Logical Positivism forever.
Yet it didn't; for Gödel, before Goldstein's book, was never well understood.
I’ll have a lot more to say about all this in the coming weeks.
posted by Gaurav at 11:33 AM
http://meaningofmath.blogspot.com/
BBC-Dangerous Knowledge
July 31, 2008. In this one-off documentary, David Malone looks at four brilliant mathematicians - Georg Cantor, Ludwig Boltzmann, Kurt Gödel and Alan Turing - whose genius has profoundly
affected us, but which tragically drove them insane and eventually led to them all committing
suicide.
The film begins with Georg Cantor, the great mathematician whose work proved to be the
foundation for much of the 20th-century mathematics. He believed he was God's messenger and was
eventually driven insane trying to prove his theories of infinity.
http://www.youtube.com/watch?v=Cw-zNRNcF90&feature=related
also available here:
http://bestdocumentaries.blogspot.com/2007/09/dangerous-knowledge-full-documentary.html
excellent math visuals and links:
http://faculty.etsu.edu/gardnerr/Math-Videos/Math-Videos.htm
DANGEROUS KNOWLEDGE
http://www.bbc.co.uk/bbcfour/documentaries/features/dangerous-knowledge.shtml
BBC Two: Wednesday 11 June 2008 11.30pm
"MY BEAUTIFUL PROOF LIES ALL IN RUINS"
Presenter David Malone reads letters which demonstrate Cantor's crumbling self-belief.
Ludwig Boltzmann's struggle to prove the existence of atoms and probability eventually drove
him to suicide. Kurt Gödel, the introverted confidant of Einstein, proved that there would always
be problems which were outside human logic. His life ended in a sanatorium where he starved
himself to death.
Finally, Alan Turing, the great Bletchley Park code breaker, father of computer science and
homosexual, died trying to prove that some things are fundamentally unprovable.
The film also talks to the latest in the line of thinkers who have continued to pursue the question
of whether there are things that mathematics and the human mind cannot know. They include
Greg Chaitin, mathematician at the IBM TJ Watson Research Center, New York, and Roger
Penrose.
Dangerous Knowledge tackles some of the profound questions about the true nature of reality
that mathematical thinkers are still trying to answer today.