History of Logic vs. History of Mathematics. Jaakko Hintikka. 032012

Jaakko Hintikka
WHICH MATHEMATICAL LOGIC IS THE LOGIC OF MATHEMATICS?
1. Logization of mathematics
One of the banes of current scholarship is overspecialization that leads to ignorance of developments in other
fields different from one’s own even when they are directly relevant to it. Often ‘ignorance’ nevertheless is not
the right word. Rather, what is involved is a failure to understand and to appreciate that relevance. A striking
example is offered on the one hand by the histories of mathematics and its foundations as they are dealt with by
working mathematicians as a part of their professional work and on the other hand by the history of logic as it has
been cultivated by philosophers and some mathematicians as a separate subject for philosophical and foundational
purposes. Here certain especially interesting aspects of the respective histories of mathematics and logic since the
early nineteenth century are examined. The overall development of mathematics in this period is well known, at
least in its broad outline. Around 1800 mathematics consisted of the study of two or three subjects. Geometry
was the study of space, and arithmetic and algebra were parts of the study of numbers and functions of numbers.
Analysis and analytic geometry combined ideas from both directions.
The changes in the nature of mathematics since the early nineteenth century have been described in many
different ways, emphasizing different aspects of the mathematical enterprise. These characterizations include
among others an increase of rigor, especially the avoidance of appeals to intuition; greater abstractness, especially
the genesis of set theory and the increasing use of set theory as a medium of mathematical theorizing and
mathematical reasoning; the use of axiomatization, and the arithmetization of analysis. As a consequence,
mathematics has changed from the study of space and number to a study of all and sundry structures, not only
those structures that are exhibited in traditional arithmetic, analysis and geometry. In some projects, such as the
Bourbaki program and the “New Math” movement, set theory is thought of as the lingua franca of all
mathematics. (Cf. Bourbaki 1938)
It is not badly controversial to suggest that the common theme in these developments has been a greater
and greater reliance on logic in mathematical concept formation, in the analysis of mathematical concepts, in
mathematical theorizing in general. For instance, the way in which the enhanced rigor is implemented is usually
an analysis of mathematical concepts and mathematical modes of reasoning in purely logical terms. The extreme
doctrine of logicism claims that all mathematical concepts and rules of reasoning can be reduced to logic. Even if
such a complete reduction is not possible, the less radical but historically more prominent reductions of
mathematical theories to arithmetic or to set theory mean defining logically the concepts and modes of reasoning
needed in these theories in terms of natural numbers or sets, respectively. This enterprise is essentially logical
analysis, and accordingly it is a challenge to the logic that is (usually implicitly) employed in these reductions, but
it need not involve a formalization of the logic that is being used.
The first stages of these developments included the analysis of geometrical and semi-geometrical
concepts in analytical terms. Developments like the Gauss-Riemann theory of surfaces are emblematic steps in
this direction. The notion of space itself was analyzed as a structure of a certain kind. Once this was done to
what intuitively seems to be the actual space, analysis automatically shows what alternatives are mathematically
possible, thus opening the door to non-Euclidean geometries. What was involved was not only the deductive
structure of geometry, but a conceptual analysis of the basic geometrical concepts.
The deductive independence of Euclid’s fifth postulate showed only that non-Euclidean geometries are
self-consistent mathematical structures. An analysis of the structure of different geometries in metric terms was
needed to show what it means for our actual observable space to instantiate some particular geometry, Euclidean
or not. In a foundational perspective, these developments meant a gradual elimination of geometry from
analysis, which virtually automatically meant the disappearance of appeals to intuition in analysis. In this
analytization of geometry, one of the most critical clusters of concepts was that pertaining to continuity. In the
early twentieth century, Hilbert was still struggling to express them in purely logical and axiomatic terms. (See
e.g. Hilbert 1899, 1918.)
This elimination of geometry from analysis naturally took the form of an analysis in logical and
arithmetical terms of the basic concepts of analysis, such as limit, continuity, convergence, differentiation, and so
on. The first great figure in this work was Cauchy, but the fundamental results were achieved by Weierstrass.
(See here Grattan-Guinness 1970, Bottazzini 1986, Grabiner 1981 and the references given there.)
These tendencies typically reflect, and are reflected by, the use of axiomatic method whose nature was
spelled out especially forcefully by Hilbert (See e.g. Hilbert 1899, 1918). It is part and parcel of the axiomatic
method that all the theorems are strict consequences of the axioms alone, so that new information that is not
contained in the axioms is not smuggled into the derivation of the theorems. And this implies, as Hilbert saw
especially clearly, that the theorems must be purely logical (formal) consequences of the axioms, independently of
what the axioms are talking about. This precludes of course all appeals to intuitions in the deductive structure of
an axiomatic system, although it does not restrict their role in the choice of the axioms.
The story of these changes is an important part of the history of mathematics in the nineteenth century.
This increasing logization naturally meant that mathematicians had to develop ways of handling logical concepts
themselves. That they did, but they did not systematize, let alone formalize, their logical techniques. They
expressed their conceptualizations and inferences in ordinary language, trusting that their readers master the tacit
logic that our ordinary language relies upon. As a consequence, neither historians and historiographers of
mathematics nor historians and philosophers of logic have inquired with any real depth into the “mathematical
logic" that was used in the mathematical practice of the time. Both have in effect trusted Frege and other early
modern logicians whose project was to formalize the general logic that all our conceptual thinking relies on,
including mathematicians' reasoning. What these logicians claimed to have done is to free our ordinary language from
unclarities and ambiguities. Thus they in effect claimed that they had captured fully the informal modes of
reasoning that mathematicians had been using. This universality is reflected for instance in Frege’s term
Begriffsschrift.
The core area of philosophers' logic and all logic is what in our day and age is called the received first-order
logic, in brief RFO logic. This is the logic that has been generally considered to be the basic part of our
actual working logic also in mathematics. It is the logic that is relied on for instance in set theory.
But were these universality claims right? This historically and theoretically fundamental question has not
been seriously attended to in the earlier discussion. Does the implicit logic of nineteenth century mathematicians
resemble RFO logic? If not, what is it and how is it related to logicians’ logics?
2. The epsilon-delta treatment of quantifiers
In tacitly practicing logic, nineteenth century mathematicians in quest of rigor had to deal with the most central
concepts of all nontrivial logic, the two quantifiers, the existential quantifier and the universal one. How did they
do so? Quantifiers taken one by one in isolation are easy. They express the nonemptiness or exceptionlessness
of some (usually complex) predicate. The interesting case is that of dependent quantifiers. Their job description
is not only class-theoretical. They are the only way of expressing the dependencies of variables (viz. variables
bound to them) on each other on the first-order level. But the most basic concepts of analysis involve dependent
quantifiers. So how did Cauchy and his followers handle dependent quantifiers in defining notions like limit and
convergence?
The answer is known to everybody who has taken a rigorous introductory calculus course. They used
what is known as the “epsilon-delta” method, sometimes referred to as “epsilontics”. This method is a logical
theory of dependent quantifiers expressed in ordinary language (plus the usual mathematical notation). For
instance, the continuity of a function f(x) at x is expressed as follows:

(1) For any given ε one can choose δ such that for any y, |f(x) − f(y)| < ε whenever |x − y| < δ.

Here ε and δ are reals with ε, δ > 0.

The definition of differentiability says likewise that one can choose, for any given ε, a δ such that

(2) |(f(y) − f(x))/(y − x) − d| < ε whenever 0 < |x − y| < δ.

Here d is the derivative of f(x) and ε, δ are reals with ε, δ > 0.
The definition of the convergence of a sequence of functions f1(x), f2(x), … to f0(x) was likewise expressed
somewhat as follows:

(3) Given any ε one can choose k such that for any n > k, |fn(x) − f0(x)| < ε.

Here ε is a real number with ε > 0, and k, n are natural numbers.
What these examples illustrate is a perfectly viable way of handling quantifiers in mathematical concept
formation and mathematical reasoning. It does not need any formalism to be understandable and applicable, as is
in fact done in innumerable textbooks. What is going on logically is not difficult to understand. Universal
quantifiers are expressed by speaking of what is “given” and existential quantifiers are expressed by speaking of
what “one can choose”.
Following this interpretation, what most philosophers of mathematics say here is that the real logical
structure of this largely informal method is shown by its representation in the RFO logic formalism that in effect
goes back to Frege. In the current notation of RFO logic a definition of the three sample mathematical concepts
could be expressed as follows:
(4) (∀ε)(∃δ)(∀y)(|x − y| < δ → |f(x) − f(y)| < ε)

(5) (∀ε)(∃δ)(∀y)(0 < |x − y| < δ → |(f(y) − f(x))/(y − x) − d| < ε)

(6) (∀ε)(∃k)(∀n)(n > k → |fn(x) − f0(x)| < ε)

Here ε and δ are tacitly restricted to positive reals, and k, n to natural numbers.
This explication of mathematicians' definitions is often considered a great achievement. Philosophers like Quine
typically present it as a virtue of the logic that Frege founded that it can thus capture in precise formal terms the
epsilon-delta technique. In contrast, many historians of mathematics fail to appreciate the generality of the
technique or its logical nature. (See e.g. Alexander 2010, p. 142 and p. 287, note 21.)
3. Formal quantifiers vs choice terms
But who is capturing what here? There is an obvious connection between (1)-(3) and (4)-(6) and they can
admittedly be said to be pairwise equivalent. But there are deeper differences here than perhaps first meet the
eye. The informal logic of Cauchy and Weierstrass and our RFO logic obviously rely on altogether different
semantics. For Frege quantifiers are higher-order predicates that express the nonemptiness and exceptionlessness
of the (usually complex) predicates that follow them in the correlated brackets. The conditions of their doing so
can be formulated in a Tarski-style semantics.
In the epsilon-delta technique we consider quantifiers as proxies for certain choice functions. What a
quantificational proposition expresses is the claim that certain choices can always be made (“one can choose”), in
other words that the functions that implement the choices actually exist.
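The idea that a quantified claim asserts the existence of a function implementing the choices can be made concrete. The following is a sketch of my own, not Hintikka's formalism: instead of asserting "for every ε > 0 there exists δ > 0 such that …", one exhibits the choosing function itself, here for the (hypothetical) example of the continuity of f(x) = x² at the point x₀ = 1.

```python
# Quantifiers as proxies for choice functions (an illustrative sketch):
# the existential quantifier "one can choose delta" is replaced by an
# explicit function delta(eps) witnessing the choice.

def f(x):
    return x * x

def delta(eps):
    # Witnessing choice function for continuity of f at x0 = 1:
    # if |y - 1| < delta(eps) and delta <= 1, then
    # |f(y) - f(1)| = |y - 1| * |y + 1| < delta * 3 <= eps.
    return min(1.0, eps / 3.0)

def continuity_witnessed(eps, samples=1000):
    """Spot-check |f(y) - f(1)| < eps on a grid of y with |y - 1| < delta(eps)."""
    d = delta(eps)
    ys = [1.0 + d * (2 * i / samples - 1) * 0.999 for i in range(samples + 1)]
    return all(abs(f(y) - f(1.0)) < eps for y in ys)
```

The quantified sentence is true exactly when such a function exists; the code merely spot-checks one concrete witness.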
Such a treatment of the semantics of quantifiers is possible, and it is not an unknown idea. In effect, a
treatment of quantifiers as choice functions in disguise was attempted by Hilbert and Bernays (1933–39). Their
attempt was not fully successful, however, largely because they did not spell out explicitly in their notation what
the choice in question depends on. The complications in Hilbert and Bernays are caused by the use of an
apparently free-standing choice term εxF(x) instead of an explicitly context-dependent Skolem function term in
which the dependence on other terms is explicitly indicated by its arguments. For the force of Hilbert and
Bernays's epsilon term often depends on its context, but without any explicit rule of how it so depends. (As will
be seen, they were not the only mathematicians who failed to appreciate this crucial question.) This defect has
been corrected in what is known as game-theoretical semantics, but only more than a hundred years after Frege.
It is based on the natural idea of thinking of the choices associated with quantifiers as moves in a game. This
natural idea was already relied on by C.S. Peirce in his interpretation of quantifiers. He was prevented from fully
implementing the game idea by not having the notion of strategy (in the von Neumann-Borel sense) at his
disposal. (See here Pietarinen 2006.)
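The game idea can be sketched over a finite domain. The following is my illustration, not Peirce's or Hintikka's full apparatus: for a sentence (∀x)(∃y)R(x, y), the falsifier picks x and then the verifier picks y; the sentence is true exactly when the verifier has a winning strategy, i.e. when some Skolem function g satisfies R(x, g(x)) for every x in the domain.

```python
# A minimal semantic game for (forall x)(exists y) R(x, y) on a finite
# domain: truth = existence of a winning strategy (a Skolem function).

from itertools import product

def verifier_has_winning_strategy(domain, R):
    # Enumerate all functions g: domain -> domain as tuples of values.
    for g in product(domain, repeat=len(domain)):
        if all(R(x, g[i]) for i, x in enumerate(domain)):
            return True
    return False

dom = [0, 1, 2]
# "Every x has a y with y = x + 1 (mod 3)": the successor map is a strategy.
succ_true = verifier_has_winning_strategy(dom, lambda x, y: y == (x + 1) % 3)
# "Every x has a strictly larger y" fails on a bounded domain (x = 2 loses).
gt_true = verifier_has_winning_strategy(dom, lambda x, y: y > x)
```

On finite domains this agrees with the usual Tarski-style evaluation; the point of the game formulation is that it generalizes to informational independence, where the Tarski clauses do not.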
From the point of view of game-theoretical semantics it is seen that ordinary-language locutions like "one
can choose" are ambiguous in that they do not tell what the choice in question depends on. For instance, in (1)
the choice of δ obviously depends on ε, but does it also depend on x? A satisfactory notation should allow the
expression of either reading. In (4)–(6) this question is tacitly answered by the convention that a quantifier
depends on the free variables in its scope. Formally speaking, these variables can be considered as being bound
to sentence-initial universal quantifiers. But this leaves the other possibility in limbo. Can the choice of δ be
independent of x? How can such a reading be expressed? It will be shown here that this simple logical question
has played a significant role in actual mathematical practice.
The two semantics give the same results in the special case of RFO logic. However, they represent
entirely different approaches and facilitate radically different extensions. For instance, in the most natural way of
implementing a game-theoretical semantics the “axiom” of choice turns out to be a first order logical principle,
even though in the prevalent RFO tradition it has to be assumed as a separate set-theoretical or higher-order axiom.
This is indicative of the general situation. Game-theoretical semantics can serve as a basis of much
stronger logics than Frege's RFO logic. Moreover, the semantics that late nineteenth century mathematicians
were tacitly using was obviously GTS. Hence the epsilon-delta logic relying on GTS, as it was already used in
Frege's time by Weierstrass, was much stronger than Frege's logic or the current RFO logic.
For this reason, it is historically incorrect to assimilate the two kinds of logic to each other. Further
systematic and historical analysis only deepens the differences between the two. It is seriously misleading to
think of Frege's logic merely as a formalization of the epsilon-delta technique or for that matter to think of the
epsilon-delta talk merely as a verbalization of Frege's formal logic. It would have been a feather in Frege's cap if
he could have presented his logic as doing the same job as mathematicians’ informal methods. But as a brute
historical fact, Frege never so much as mentions the epsilon-delta technique. And this is not a simple oversight or
an unexploited possibility. For deep reasons, he could not have done so.
4. Cauchy's theorem as a case study
These reasons can be seen by having a closer look at the history of mathematics, especially at the
development of the epsilon-delta technique. The first major steps in that development and in the entire
rigorization (logization) of analysis were taken by Cauchy. (In saying this, we must make a significant allowance
for the earlier role of Lagrange.) Cauchy formulated most of the modern definitions of the crucial notions like
continuity, limit and convergence. But the path of progress was not smooth. In exploring the role of the newly
defined concepts, Cauchy presented an important theorem. It says that the limit of a convergent sequence of
continuous functions is itself continuous.
This was no mean theorem. Cauchy gave it a prominent place in his influential text Cours d'analyse
(1821), as its apparent significance seemed to motivate. Systematically speaking it would have had huge
consequences. For one thing, it seemed to make the entire Fourier analysis impossible in that one could not
represent a discontinuous function as a limit of a Fourier series of continuous functions.
Luckily for Fourier and luckily for mathematical physics, Cauchy’s theorem turned out to be fallacious.
Of course it was not literally a matter of luck. Cauchy had made a mistake. The way this mistake was overcome
was one of the most important progressive steps in the history of analysis. It is an instructive example of how
mathematics advances conceptually.
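A textbook counterexample makes the failure of the "theorem" concrete (an illustration of mine; not necessarily the example Abel had in mind): take fₙ(x) = xⁿ on [0, 1]. Every fₙ is continuous and the sequence converges at every point, yet the pointwise limit is 0 on [0, 1) and 1 at x = 1, hence discontinuous.

```python
# Continuous functions f_n(x) = x**n on [0, 1] whose pointwise limit is
# discontinuous at x = 1 -- a counterexample to Cauchy's "theorem".

def f(n, x):
    return x ** n

def approx_limit(x, n=10_000):
    # For large n, x**n is numerically 0 for 0 <= x < 1 and exactly 1 at x = 1.
    return f(n, x)

values = [approx_limit(x) for x in (0.0, 0.5, 0.9, 1.0)]
# values: 0 at every sampled point below 1, but 1 at the endpoint.
```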
It was not hard to see that something was amiss with Cauchy’s proof. It contradicted some of Dirichlet’s
results. The first one not only to suspect that something was wrong with Cauchy's "theorem" but to see where
counter-examples might be found was Abel. But the precise nature of Cauchy's mistake was far from obvious.
The first one to pull the emergency brake was P.L. Seidel (1848), but even he could at first say only that “its proof
must basically rest on some hidden supposition.”
But what was this hidden assumption? What Cauchy assumed was that the members of a sequence of
functions f1(x), f2(x), … are all continuous and that they converge to f0(x). His definition of convergence was
correct and so was his definition of continuity. They were essentially (3) and (1) above. But it turned out that he
should have assumed something more of his sequence of functions than ordinary convergence. But what? The
great progress that Cauchy’s mistake unwittingly prompted was brought about by mathematicians’ efforts to
answer this question. In our contemporary terminology, the progress was essentially the acknowledgment and
definition of uniform convergence as distinguished from ordinary convergence. Analogous to uniform
convergence, mathematicians came to define a host of other uniformity concepts: uniform continuity, uniform
differentiability and so on.
But what precisely is this new concept? What was wrong with Cauchy’s “proof”? The joker here was an
additional factor that Cauchy had overlooked. It was the role of the variable x. For any one value of x, the only
choice one apparently has is between convergence and non-convergence. In later usage,
uniformity concepts are in fact often defined so as to be relative to a range of values of a variable analogous to x.
For instance, uniform continuity is defined as in (4), but relative to a range of values x1 ≤ x ≤ x2.
But this is not a full diagnosis of the problem, for the sought-for stronger convergence is after all a local
phenomenon. It could be characterized by speaking of what happens in the arbitrarily small neighborhood of x.
One had to introduce “distinctions between different modes of convergence relative to [a single value of] the
variable x", as Grattan-Guinness puts it (1970, p. 118). Seidel (1848) tried to do this by defining what he called
arbitrarily slow convergence. (See Grattan-Guinness, op.cit.) Stokes did the same with a different notion he
referred to as infinitely slow convergence. These terms should already warn you. These notions are very messy.
They help to expunge Cauchy’s mistake, but they do not yield an insight into what the logical (conceptual) gist of
the problem is.
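Numerically, the role of the "joker" x can be seen in the same counterexample fₙ(x) = xⁿ → 0 on [0, 1) (my illustration): the index k after which |fₙ(x)| < ε holds depends on x and grows without bound as x approaches 1, so no single k serves the whole interval. That is exactly the failure of uniform convergence.

```python
# For f_n(x) = x**n on [0, 1), the k needed for |f_n(x)| < eps (n > k)
# depends on x and blows up as x -> 1.

import math

def k_needed(x, eps):
    """Least k such that x**n < eps for all n > k (0 < x < 1, 0 < eps < 1)."""
    return math.ceil(math.log(eps) / math.log(x))

ks = [k_needed(x, 0.1) for x in (0.5, 0.9, 0.99, 0.999)]
# ks grows without bound as x approaches 1: no single k works uniformly.
```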
5. Uniformity concepts
The crucial distinction can be seen from the definition of any uniformity concept. The problem comes
down to the same conceptual unclarity as was seen to have bothered Hilbert and Bernays. When it is said in (1)
that "one can choose δ", it is left open what the choice depends on. Does it depend on ε alone, or does it also
depend on x? The latter answer yields the usual definition of plain pointwise continuity, the former a definition of
uniform continuity. In this precisely analogous way we can distinguish differentiability simpliciter and uniform
differentiability by spelling out whether the choice of δ in (2) depends on x or not. Likewise, in the similar
definition of the convergence of a sequence of functions fi(x) we can distinguish uniform convergence from the
ordinary variety by making the choice of k independent of x.
Thus the informal but accurate definition of uniform continuity is obtained from (1) by stipulating simply
that the choice of δ must be made independently of x, and a definition of uniform differentiability is obtained
similarly from (2). In the definition of convergence (3), uniformity is obtained by making the choice of k
independent of x, where x1 < x < x2 is the range of (uniform) convergence, and likewise for the other uniformity
concepts. Grattan-Guinness describes what happened vividly by reference to Bolzano’s earlier definition of
convergence along the lines of (3) by saying that Weierstrass restored x into Bolzano’s definition from which it
had been omitted by Cauchy.
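The analogous dependence for continuity can also be exhibited numerically (a sketch of mine): f(x) = 1/x is continuous at every point of (0, 1), but the δ answering a given ε at a point x must shrink as x → 0. No single δ works across the whole interval, so the choice of δ genuinely depends on x: f is continuous but not uniformly continuous on (0, 1).

```python
# For f(t) = 1/t on (0, 1), a delta that works for a given eps at the
# point x necessarily shrinks with x: the choice of delta depends on x.

def delta_at(x, eps):
    """A workable delta for f(t) = 1/t at x > 0: if |y - x| < delta (so
    y > x - delta > 0) then |1/y - 1/x| < eps.  With
    delta = eps * x**2 / (1 + eps * x) we get x - delta = x / (1 + eps*x)
    and delta / (x * (x - delta)) = eps, which bounds |1/y - 1/x|."""
    return eps * x * x / (1 + eps * x)

deltas = [delta_at(x, 0.5) for x in (0.5, 0.1, 0.01)]
# deltas shrink toward 0 as x does: no uniform choice is possible.
```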
The gist of the discovery of uniformity concepts is therefore the idea of independent choice. And this
idea is firmly part and parcel of the tradition of considering quantifiers as disguised choice functions. But the
precise logical and mathematical interpretation and implementation of the independent choice idea is far from
obvious. This distinction between pointwise (plain) and uniform continuity can be captured naturally in
game-theoretical semantics. The dependence of a variable on another is naturally represented by the informational
dependence of the choices of values for them in a semantical game.
It is not much of an exaggeration to say that neither the historians of mathematics nor the historians of
logic have told us what the Pudels Kern here is in explicit logical terms. For working mathematicians' purposes,
the right diagnosis of the conceptual situation was slowly worked out, mainly by Weierstrass. The fundamental
role played in analysis by the concept of the uniform convergence of a series was not explicitly emphasized by
Weierstrass until the early 1860s and was subsequently developed by him during the course of his long career.
(Bottazzini, 1980, p. 204.)
But even in our day and age the simple logical nature of uniformity concepts is not explained even in the
most careful expositions of analysis. For instance, in Brabenec (2004, pp. 74–79) uniformity concepts are used
even in elaborate exercises but never really explained! Thus, a thoughtful professional mathematician might ask a
logician to explain what uniformity concepts really mean, even when that mathematician is completely
comfortable with the “plain” epsilon-delta definitions that he or she might be teaching to students every year. (It
has happened to me.)
What can a logician say by way of an explanation, in the light of what has been found here? One
superficially tempting idea is to bring out the distinction between pointwise and uniform concepts by
manipulating the order of quantifiers. Other things being equal, values of variables bound to quantifiers are all
introduced in the left-to-right order of these quantifiers. For instance, the rare philosophers who have taken notice
of uniformity concepts routinely dismiss them as a matter of “quantifier ordering”. And in the authoritative text
by Gleason (1991, pp 245-249) uniformity concepts are explained in terms of quantifier ordering. According to
this idea, a definition of uniform continuity could be
(7) (∀ε)(∃δ)(∀x)(∀y)(|x − y| < δ → |f(x) − f(y)| < ε)
But this is not a correct definition. To see this, consider the negation of (7)
(8) (∃ε)(∀δ)(∃x)(∃y)(|x − y| < δ ∧ |f(x) − f(y)| ≥ ε)
When you unpack these, you can see that what (8) essentially says is that for some ε the function f(x) has a
discontinuity of the order ε somewhere in the interval x1<x<x2. But this is not the right denial of uniform
continuity. A function can fail to be uniformly continuous and yet be continuous. Hence the attempted definition
(7) just does not work. Gleason is wrong about uniformity concepts. A logically perceptive mathematician might
have anticipated this judgment on the basis of the fact that uniform continuity is a species of continuity and hence
a local property of functions.
The same remarks can be addressed to any one of the other uniformity concepts. Gleason’s explanation
of uniformity is flawed. We are not dealing only with quantifier ordering. How, then, can the nature of
uniformity concepts be explained in terms of formal logic?
6. The (in)expressibility of quantifier independence
Philosophers and philosophical logicians often seem to think that the refinement of the epsilon-delta technique
that led to uniformity concepts was a specifically mathematical achievement without major relevance to logic or
to the general foundations of mathematics. (This oversight may have been encouraged by the practice of some
mathematical writers to pidgeonhole uniformity concepts with other mathematical path-independence concepts as
e.g. in Forsyth (1893).) What has been found out here shows that such philosophers could not be more wrong.
The story of the notion of uniformity belongs to the history of logical thinking as much as to the history of
mathematics. The introduction of uniformity concepts enriched essentially the epsilon-delta logic, even though it
was not expressed in terms of formal logic. The key idea is nothing more and nothing less than quantifier
dependence, as revealed dramatically by the other side of the same conceptual coin, unsuspected quantifier
independence.
This independence is what is manifested by the behavior of uniformly convergent sequences of functions.
We can in fact obtain a definition of uniform convergence from (3) simply by stipulating that the choice of k must
be made independently of x. Likewise, from (1)–(2) we obtain definitions of the corresponding uniformity
concepts by making the choice of δ independent of x. (This was foreshadowed in the earlier observation that
notions like uniform convergence are local notions.) Thus it is the notion of quantifier independence that can
claim the credit of the enormous advance not only in rigor but in substantial mathematical theorizing brought
about by the work of Weierstrass and his followers. Its gist was an enrichment of the epsilon-delta logic of
quantifiers by the use of the notion of quantifier independence.
But this independence cannot be expressed by means of our everyday RFO logic. For the independence
of variables has to be expressed in RFO by the independence of the quantifiers they are bound to. Now in (3) as
reproduced by (6) the quantifier (∃k) must depend on x, for the definition should of course apply for all its values.
(There is an implicit quantifier (∀x) fronting the definition.) Likewise for (1)–(2) and (4)–(5). Hence the enriched
epsilon-delta logic used by Weierstrass and his ilk was much richer than the received logic of quantifiers, i.e. RFO
logic.
Since the first-order part of Frege’s logic is essentially equivalent to RFO logic, this shows that Frege’s
project failed abjectly. Far from being a universal notation for our concepts, his logic fails even to capture the
modes of reasoning of his fellow mathematicians at the time.
7. IF logic to the rescue
Needless to say, the flaws in RFO logic are reparable. The first main step is to introduce a notation to exempt an
existential (existential-force) quantifier (∃y) from its dependence on a universal (universal-force) quantifier (∀x)
within whose formal scope it occurs, by writing it (∃y/∀x). Likewise, the independence of (∃x) of the variable z
can be expressed by (∃x/z). By means of this notation we can express the "missing" independence relation in
(4)–(6) by writing the critical quantifiers (∃δ/∀x) and (∃k/∀x).
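On the reading described in section 5, the slash notation renders the pointwise/uniform distinction directly. The following is a sketch of my own rendering, with the positivity conditions on ε and δ suppressed as before:

```latex
% Plain (pointwise) continuity of f on an interval:
(\forall x)(\forall \varepsilon)(\exists \delta)(\forall y)
  \bigl(\, |x - y| < \delta \rightarrow |f(x) - f(y)| < \varepsilon \,\bigr)

% Uniform continuity: the choice of \delta is exempted from x.
(\forall x)(\forall \varepsilon)(\exists \delta / \forall x)(\forall y)
  \bigl(\, |x - y| < \delta \rightarrow |f(x) - f(y)| < \varepsilon \,\bigr)
```

The two formulas differ only in the slash; the informational independence it records is precisely what the quantifier-ordering attempt (7) failed to express.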
Hence, no criticism of RFO, as far as it goes, is intended here. However, this innovation was introduced
only in the nineteen-nineties. It took more than one hundred years for the symbolic logic tradition to catch up with
the Cauchy-Weierstrass tradition. It was not only Frege who failed to capture fully the epsilon-delta technique in
his logic. For a hundred years, other logicians did not do any better. Thus Frege’s failure to deal with dependent
quantifiers slowed down the development of logic by more than a century.
Furthermore, the improvement just mentioned meant replacing RFO by another richer logic. The first step
takes us to what is ill named as independence-friendly (IF) first-order logic. Thus IF first-order logic is not
(Stanford Philosophical Dictionary notwithstanding) a further development of RFO logic. It replaces RFO. It is
naturally based on game-theoretical semantics, and as such is an implementation (among other things) of the
epsilon-delta technique. Far from being a superstructure on RFO, IF logic was in effect used before RFO existed,
however informally. In a very real sense, IF logic is not a novelty. It is simply the logic that mathematicians like
Weierstrass were already using in the nineteenth century.
The replacement of RFO logic by IF first-order logic is not only architectonic, a question about how to
best formulate and formalize our logic. It has potentially important foundational consequences. In IF logic, the
law of excluded middle does not hold. We must allow in it predicates with truth-value gaps. For instance, it can
easily be shown (as in Hintikka 2011) that mathematical induction works only for fully defined predicates (i.e.
predicates without truth-value gaps). In such a logic, unrestricted use of mathematical induction can in principle
lead to paradoxes. Since both IF logic and Weierstrass’s implicit epsilon-delta logic are cases in point,
mathematicians must be on alert as to what kinds of predicates they apply mathematical induction to. Whether in
the actual history of mathematics negligence in this respect has led anyone into actual trouble does not seem to be
known.
The failure to catch up with the epsilon-delta tradition has not prevented symbolic logic from being
developed and applied in other directions. It has nevertheless distorted symbolic logicians’ perspective on the
foundations of mathematics, especially on what can be done in mathematics by logical means, including the
famous incompleteness and impossibility results by Gödel, Tarski and Paul Cohen that are often considered as the
major results in logical theory in the twentieth century. We have to realize that what these results reveal are
merely limitations of RFO logic, a logic that was flawed from the start, and not a limitation of logic as such or of
axiomatization. (See here Hintikka forthcoming (a).)
We can consider as a test case the claim that elementary arithmetic is not completely axiomatizable. If
this claim had been made a hundred years ago to a mathematician in the epsilon-delta tradition, he or she might
very well have countered by claiming that such a complete axiomatization is easily accomplished. Most of the
Peano axioms are unproblematic. The problem is to express the principle of induction in purely logical terms.
This can be done if we can express that natural numbers are well-ordered with zero as the only number that does
not have an immediate predecessor. This well-ordering can be expressed by saying that there are no infinite
descending chains of natural numbers. Now the instance of such a claim can be expressed by the epsilon-delta
technique as follows:
(9)	For any given natural numbers ε1 and ε2, one can choose δ1 depending on ε1 only and δ2 depending on ε2 only, such that δ1 = δ2 if and only if ε1 = ε2, and such that δ1 < ε1 and δ2 < ε2.

Here δ1 and δ2 are also natural numbers.
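In the slash notation of IF logic, (9) could be rendered as follows; this is a reconstruction in the style of Hintikka and Sandu (1996), not a formula given in the text:

```latex
\forall \varepsilon_1 \, \forall \varepsilon_2 \,
(\exists \delta_1 / \forall \varepsilon_2) \,
(\exists \delta_2 / \forall \varepsilon_1) \,
\bigl[ (\delta_1 = \delta_2 \leftrightarrow \varepsilon_1 = \varepsilon_2)
\;\land\; \delta_1 < \varepsilon_1 \;\land\; \delta_2 < \varepsilon_2 \bigr]
```

The slashes make the choice of δ1 independent of ε2 and the choice of δ2 independent of ε1, which is precisely the dependence pattern that ordinary first-order quantifier prefixes cannot express.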
Technically, this does not amount to a counter-example to Gödel’s first incompleteness (meta)theorem,
for several reasons. One of them is that Gödel in effect requires that the logic elementary arithmetic uses is
“axiomatizable” in the sense that there exists a recursive enumeration of all logical truths. In fact, in IF logic
there is no such “axiomatization”. But apart from that, (9) is not expressible by means of RFO logic, which is
the logic used in Gödel’s arithmetic. This is because the choice of δ1 is independent of ε2, and the choice of δ2
independent of ε1. But if this is the reason for the inexpressibility of a categorical axiomatization, Gödel’s
incompleteness results must be considered as showing the limitations of RFO logic as used in elementary arithmetic
rather than any limitations of the use of logical conceptualizations in arithmetic and elsewhere in mathematics. With the
rise of IF logic and its extensions, questions of complete axiomatizability appear in a new light. For a
philosopher it is instructive to realize that in principle Weierstrass could have formulated such an axiomatization
in a perfectly natural sense.
8. Frege’s failure
Historically, this use of too poor a logic of quantifiers goes back to Frege. Was it merely an oversight on Frege’s
part, an historical accident? No, it was a mistake waiting to happen. It stemmed from Frege’s inadequate
understanding of the meaning (semantics) of quantifiers.
Frege never betrays any awareness — or at least any appreciation — of the important development in
analysis that the uniformity concepts facilitated. Apparently the only reference to them in his writings is a mention
of the “arithmetization of analysis” in his review of Hermann Cohen (see Frege 1885). What is even more
striking, Frege never mentions epsilon-delta definitions. It would have been an impressive proof of the
significance of Frege’s logic if he had pointed out how it enables us to formulate the epsilon-delta technique.
Frege’s logic is often presented as having accomplished that. Yet Frege nowhere so much as mentions the
epsilon-delta method. What is more, he never even comments on the phenomenon of quantifier ordering, let alone
its semantical meaning as an ordering of subsequent givenness and choices.
But perhaps we should not give much weight to such evidence from silence. Be that as it may, a
comparison with contemporary logicians should offer a fair perspective on the scope of Frege’s active knowledge
and interests. An obvious object of such comparison is Frege’s co-inventor of modern symbolic logic, Charles S.
Peirce. The contrast is unexpectedly stark. Even though Peirce, unlike Frege, was not a professional
mathematician, he shows a firm grasp of the conceptual development of mathematics from Cauchy to Weierstrass
and comments on it in considerable detail. In particular, he is aware of the distinction between pointwise and
uniform concepts in the foundations of analysis. Peirce not only wrote a review of Forsyth’s 1893 treatise on
function theory, in which uniformity concepts are used (see Peirce 1894); he also pointed out that in certain
theorems Forsyth commits the same mistake as Cauchy, assuming only pointwise convergence where
uniform convergence is needed. He was cognizant of Weierstrass (1886) and praised Weierstrass for improving
the “logical clearness” of mathematics.
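The pointwise/uniform distinction that Peirce grasped can be illustrated numerically; the following is a standard textbook example of my own choosing (not one from Peirce or Forsyth), using f_n(x) = x^n on [0, 1):

```python
import math

# f_n(x) = x**n on [0, 1) converges to 0 pointwise but not uniformly:
# each fixed x eventually gets below any eps, but the required N depends on x
# and blows up as x approaches 1, so no single N works for all x at once.

def f(n, x):
    return x ** n

def pointwise_N(eps, x):
    """Smallest N with x**n < eps for all n >= N; it depends on x as well as
    on eps, which is exactly the mark of merely pointwise convergence."""
    if x == 0:
        return 1
    return math.ceil(math.log(eps) / math.log(x))

eps = 0.01
for x in (0.5, 0.9, 0.99):
    N = pointwise_N(eps, x)
    assert f(N, x) < eps                  # each x eventually gets below eps ...
    print(f"x = {x}: N(eps, x) = {N}")    # ... but N grows without bound as x -> 1

# Non-uniformity: for every n the point x_n = 0.5**(1/n) lies in [0, 1) and
# satisfies f(n, x_n) = 0.5, so the supremum of |f_n - 0| over [0, 1) never
# falls below 0.5, no matter how large n is taken.
for n in (10, 100, 1000):
    x_bad = 0.5 ** (1 / n)
    print(f"n = {n}: f(n, {x_bad:.6f}) = {f(n, x_bad):.3f}")
```

In the epsilon-delta idiom, the difference is exactly whether N may be chosen depending on x (pointwise) or must be chosen depending on eps only (uniform), i.e. a difference in quantifier dependence.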
In his own work, Peirce developed an explicitly game-theoretical account of quantifiers, complete with
two players, order of moves etc., not only as one possible illustration of the logic of quantifiers, but as revealing
their meaning. Clearly, Peirce was a more perceptive and more creative logician than Frege. The evidence
suggests that he might also have been a better informed logician and philosopher of mathematics. (I am here
indebted to Ahti-Veikko Pietarinen for information about Peirce’s work.)
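Peirce’s two-player idea can be made concrete in a few lines. The encoding below is my own toy illustration, not Peirce’s notation: the Verifier moves at existential quantifiers, the Falsifier at universal ones, and a sentence is true on a finite domain exactly when the Verifier has a winning strategy.

```python
# A toy game-theoretical evaluator for quantifiers over a finite domain.
# Formulas are nested tuples: ('exists', var, body), ('forall', var, body),
# and ('pred', test_function, var_names) at the atoms.

def wins(formula, env, domain):
    """True iff the Verifier has a winning strategy for `formula` under `env`."""
    op = formula[0]
    if op == 'pred':                       # atomic case: evaluate the predicate
        _, test, vars_ = formula
        return test(*(env[v] for v in vars_))
    if op == 'exists':                     # Verifier chooses a witness
        _, v, body = formula
        return any(wins(body, {**env, v: d}, domain) for d in domain)
    if op == 'forall':                     # Falsifier chooses a challenge
        _, v, body = formula
        return all(wins(body, {**env, v: d}, domain) for d in domain)
    raise ValueError(op)

domain = range(5)
# "For every x there is a strictly greater y": false on {0..4} (x = 4 fails).
phi = ('forall', 'x', ('exists', 'y', ('pred', lambda x, y: y > x, ('x', 'y'))))
# "There is a y no smaller than any x": true on {0..4} (take y = 4).
psi = ('exists', 'y', ('forall', 'x', ('pred', lambda x, y: y >= x, ('x', 'y'))))
print(wins(phi, {}, domain))
print(wins(psi, {}, domain))
```

Swapping the order of the two quantifiers changes who moves first with knowledge of the other’s move, which is the ordering of subsequent givenness and choices mentioned above.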
Frege’s mistake was in fact far deeper than an oversight. It is not only that he did not recognize quantifier
independence when it occurred. He did not understand quantifier dependence in the first place. For Frege,
quantifiers were higher-order predicates that expressed the non-emptiness or exceptionlessness of lower-order
predicates (see Frege 1893, sec. 8, pp. 11–14). As a consequence, quantified sentences behaved, on his account,
like long (possibly infinite) disjunctions and conjunctions. On such a view, quantifier (in)dependence
becomes (in)dependence of propositional connectives on each other. This idea remained unknown until our own day
and age, and would have been totally incomprehensible to Frege.
Frege simply failed to understand fully the meaning of quantifiers. He understood their semantical role in
expressing the non-emptiness or exceptionlessness of certain (lower-order) predicates. Indeed, he characterized
quantifiers as doing just that. But he never acknowledged the even more important semantical role of quantifiers
of expressing through their formal dependence on each other the actual dependence of the respective variables
bound to them.
This is not a matter of semantical interpretation only. I have shown (Hintikka forthcoming (c))
that a neglect of dependence relations between quantifiers is what caused the paradoxes of set theory and thereby
the entire crisis of foundations.
A way of putting Frege’s mistake in a historical perspective is to say that he restricted himself to the
symbolic logic tradition. (Yes, he helped to start it in the first place.) The choice function interpretation was
foreign to him, and it remained foreign to most mainstream logicians after him.
9. Frege’s anti-intuitionism as a source of his failure
The difference between the two traditions can be illustrated by a case study that is interesting in its own
right. The ‘axiom’ of choice serves to illustrate the systematic and historical issues involved here. That it has not
long ago been recognized as a first-order logical principle is due to a self-imposed restriction on the use of logical
operations. This restriction is merely notational, without any theoretical motivation other than a convenience in
manipulating symbols according to formal rules. In the usual formulations, the rules characterizing logical notions
like quantifiers can be applied to them only in a formula-initial position (or when they otherwise occur as the
principal connective or operator). In particular, existential instantiation is usually applicable only to a formula-initial existential quantifier (∃x)F[x], allowing us to replace it by F[b], where b is a new individual constant. But
we could equally well apply existential instantiation to any existential (existential-force) quantifier in context
(10)	S[ … (∃x)F[x] … ]

Then we would have to replace x, not by a constant term, but by a function term f(y1, y2, …). Here f is a new
function constant and (Q1y1), (Q2y2), … are all the quantifiers on which (∃x) depends in S.
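The generalized rule just described can be sketched mechanically. The following is a minimal Skolemization sketch under simplifying assumptions (prenex formulas, matrices represented as strings, no independence slashes); the AST encoding and function names are my own illustration:

```python
import itertools
import re

_fresh = itertools.count()  # supplies new function-constant names f0, f1, ...

def skolemize(prefix, matrix):
    """prefix: list of ('forall'|'exists', var); matrix: formula body as a string.
    Each existential variable is replaced throughout the matrix by a term
    f(y1, ..., yn) in the universal variables quantified before it."""
    universals, substitution = [], {}
    for quant, var in prefix:
        if quant == 'forall':
            universals.append(var)
        else:  # 'exists': here it depends on all universals quantified so far
            f = f"f{next(_fresh)}"
            args = ", ".join(universals)
            substitution[var] = f"{f}({args})" if args else f  # constant if none
    for var, term in substitution.items():
        # word-boundary match so e.g. the variable 'd' is not found inside 'and'
        matrix = re.sub(rf"\b{var}\b", term, matrix)
    return universals, matrix

# forall e1 forall e2 exists d: (d < e1 and d < e2)
# becomes forall e1 forall e2: (f0(e1, e2) < e1 and f0(e1, e2) < e2)
prefix = [('forall', 'e1'), ('forall', 'e2'), ('exists', 'd')]
print(skolemize(prefix, "d < e1 and d < e2"))
```

In IF logic the argument list of f would simply omit the universal variables from which (∃x) has been slashed as independent; this choice-function reading is how the “axiom” of choice enters already at the first-order level.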
This obvious generalization brings the full force of the “axiom” of choice to bear on the first-order level.
This generalization is trivially easy to explain and to motivate, as was just done, to anyone who appreciates the
crucial role of dependence relations between quantifiers, in other words to anyone relying on the epsilon-delta
logic. The “axiom” of choice is an integral part of this logic. In contrast, in the symbolic logic tradition it has
never been integrated into logic itself, but has been treated as an optional axiom in a special mathematical theory.
This is not accidental, for the axiom of choice is closely related to the basic ideas of IF logic and indeed to
the entire epsilon-delta tradition. Furthermore, the implementation of the axiom of choice through liberated
instantiation rules is suggestive of the sources of Frege’s way of thinking, including his mistake.
Frege’s failure to understand the dependence-indicating role of quantifiers has in fact interesting
philosophical roots. His avowed purpose in logical theory was to dispense with the use of intuition. He thought
of this as a refutation of Kant. Now what was this use? I have shown (Hintikka, forthcoming (b)) that
for Kant an appeal to intuition in mathematics meant (expressed in our contemporary jargon) an application of
instantiation rules. Hence a part of Frege’s project was to dispense with instantiation rules. This is not possible
absolutely, but in the ordinary first-order logic they can be limited to instantiations of sentence-initial quantifiers.
Such quantifiers can apparently be interpreted somehow as not involving intuition. Hence Frege could construe
his logic as being intuition-free.
However, in independence-friendly logic, and accordingly in the use of the unrestricted epsilon-delta
technique, instantiation of dependent quantifiers is indispensable. Hence, Frege could not incorporate unrestricted
instantiation rules in his logic and as a consequence this logic could not do justice to the unrestricted epsilon-delta
technique.
10. Formalization vs. logic of ordinary language
We are touching a great many important issues here. Among them is the rationale of formalization. Frege claims
as a merit of his formalized Begriffsschrift that it frees the study of logic from the fetters of ordinary language.
He says that in trying to achieve complete rigor in reasoning, he “found an obstacle in the inadequacy of language
[…] the more complex the relations became, the less precision […] could be obtained.” (Frege 1879, preface). But
one can ask what it was that was difficult for him to understand, the propositions of ordinary language that occur
in reasoning or the subject matter itself, that is, the structures one is reasoning about. Using as a test case the
episode in the history of mathematics that has been discussed here, where was the source of difficulty in
mastering notions like uniform convergence? Was it in the vagueness of the informal or semi-formal language
that mathematicians like Cauchy used? Is the difficulty in understanding the definition (3) due to the use of
ordinary language in it? Was it perhaps an ambiguity or unclarity of the words “choice” and “choose”? One
might suggest that. But the very same problems come up in the formalized version (6) of (3). Does the quantifier
(∃k) depend on x or not? The question is no clearer or less clear when asked about (6) than about (3). The fact that
we have introduced a formalism for logical concepts like connectives and quantifiers does not help to answer
the question. A formalization may fix the meaning of an ambiguous expression, but without awareness
of the ambiguity of the original ordinary-language expression, the same problem persists; it is only pushed to
another location. In the case of uniformity concepts, it became a problem of a meaning missing from the
formalization, which was solved only more than a century after Frege’s formalization.
Frege understood and formalized a wealth of logical concepts. But when it comes to his central concepts,
the two quantifiers, his difficulties in dealing with them are not problems of translation from ordinary language to
a formal notation. Rather, the formalism was for him a tool for trying to understand what it is that is expressed in
informal discourse. Admirable as Frege’s creation of a formal language in many ways is, a formalism is neither a
necessary nor a sufficient precondition for mastering logical reasoning.
Acknowledgements
This paper was written when Jaakko Hintikka was a Distinguished Visiting Fellow of the Collegium for
Advanced Studies of the University of Helsinki. He was assisted by Antti Kylänpää. This support is gratefully
acknowledged.
BIBLIOGRAPHY
Bibliographies of the original literature are found in Bottazzini (1986), Grattan-Guinness (1970) and Grabiner
(1981).
Alexander, Amir, 2010, Duel at Dawn: Heroes, Martyrs, and the Rise of Modern Mathematics, Harvard U.P.,
Cambridge.
Belhoste, Bruno, 1991, Augustin-Louis Cauchy: A Biography, Springer, Heidelberg and New York.
Bourbaki, Nicolas, 1938, Théorie des ensembles, Hermann, Paris.
Bottazzini, Umberto, 1986, The Higher Calculus: A History of Real and Complex Analysis from Euler to
Weierstrass, Springer, Heidelberg and New York.
Brabenec, Robert Z., 2004, Resources for the Study of Real Analysis, The Mathematical Association of America,
Washington, D.C.
Cauchy, Augustin-Louis, 1821, Cours d’analyse de l’Ecole Royale Polytechnique, Paris.
Dunham, William, 2005, The Calculus Gallery, Princeton U.P., Princeton.
Frege, Gottlob, 1879, Begriffsschrift, eine der arithmetischen nachgebildete Formelsprache des reinen Denkens,
L. Nebert, Halle.
___________ , 1885, “Review of H. Cohen, Das Prinzip der Infinitesimal-Methode und seine Geschichte”,
Zeitschrift für Philosophie und philosophische Kritik, vol. 87, pp. 324–329.
___________ , 1893, Grundgesetze der Arithmetik. Begriffsschriftlich abgeleitet, vol. 1, H. Pohle, Jena.
Forsyth, A.R., 1893, Theory of Functions of a Complex Variable, Cambridge U.P., Cambridge.
Gleason, Andrew M., 1991, Fundamentals of Abstract Analysis, Jones and Bartlett, Boston.
Grabiner, Judith V., 1981, The Origins of Cauchy’s Rigorous Calculus, MIT Press, Cambridge, MA
Grattan-Guinness, I, 1970, The Development of the Foundations of Mathematical Analysis from Euler to
Riemann, MIT Press, Cambridge, MA.
Hilbert, David, 1899 (many later editions), Grundlagen der Geometrie, Teubner, Leipzig.
Hilbert, David, 1918, “Axiomatisches Denken”, Mathematische Annalen, vol. 78, pp. 405–415.
Hilbert, David, and Paul Bernays, 1936–39, Grundlagen der Mathematik 1–2, Springer, Heidelberg and Berlin.
Hintikka, Jaakko, 2011, “What the bald man can tell us”, in Anat Biletzky, editor, Hues of Philosophy: Essays in
Memory of Ruth Manor, College Publications, London.
Hintikka, Jaakko, forthcoming (a), “On the significance of incompleteness results”
Hintikka, Jaakko, forthcoming (b), “Kant’s theory of mathematics: What theory? What mathematics?”
Hintikka, Jaakko, forthcoming (c), “IF logic, definitions and the Vicious Circle principle”, Journal of
Philosophical Logic, probably 2012.
Hintikka, Jaakko, and Gabriel Sandu, 1996, “Game Theoretical Semantics”, in J. van Benthem and Alice ter
Meulen, editors, Handbook of Logic and Language, Elsevier, Amsterdam, pp. 361–410.
Moore, Gregory H., 1982, Zermelo’s Axiom of Choice: Its Origins, Development and Influence, Springer,
Heidelberg and New York.
Peirce, Charles S., 1894, “A review of Forsyth, Harkness and Picard”, Nation, vol. 58, pp. 197–199.
Peirce, Charles S., 1931–1958, Collected Papers 1–8, edited by C. Hartshorne, P. Weiss and A. Burks, Harvard
U.P., Cambridge. (Referred to as CP.)
Pietarinen, Ahti-Veikko, 2006, Signs of Logic: Peircean Themes in the Philosophy of Logic, Springer, Dordrecht.
Seidel, Philipp L., 1900 (original 1848), “Note über eine Eigenschaft der Reihen, welche discontinuirliche
Functionen darstellen”, in Ostwalds Klassiker, vol. 116, pp. 35–45.
Tulenheimo, Tero, “Independence-Friendly Logic”, The Stanford Encyclopedia of Philosophy (Summer 2009
Edition), Edward N. Zalta (ed.), URL = <http://plato.stanford.edu/archives/sum2009/entries/logic-if/>.
Weierstrass, K.T.W., 1886, Abhandlungen aus der Functionenlehre, Berlin.
Weierstrass, K.T.W., 1894-1915 (reprinted 1967), Werke 1-7, Berlin.
Whittaker, E.T., and Watson, G.N., 1952, A Course of Modern Analysis (4th ed.), Cambridge U.P., Cambridge.