Mathematical Analysis I
University of Padova — ENSTP Yaoundé
Paolo Guiotto
Introduction
This material covers a first course in Mathematical Analysis for scientific degrees. The emphasis is on the development
of Calculus skills, where Calculus means differential and integral calculus. A clever use of Calculus requires a good
comprehension of the general theory, which is presented here in almost all its details. Some proofs are missing because
they are natural extensions of others (and are usually left as useful exercises for the reader), others are omitted because
they add nothing new to the comprehension, and finally some are omitted because they are too technical and beyond
the scope of the course (for instance the existence of the real number system). Sometimes the hypotheses are not the
most general ones, but the statements cover the majority of cases one meets in applications. Finally, with respect to a
Mathematical Analysis course for mathematicians, the topology is reduced to the essential concepts. Nonetheless the spirit
of Mathematical Analysis is intact.
As said above, the aim is to develop calculus skills. Many people study Analysis splitting the theory from the exercises,
as if the exercises referred to something different from the theory. Moreover, they look for standard recipes to work out
the problems in a mechanical way. Well, very often Mathematical Analysis doesn't have many standard methods apart from the ideas
suggested by the general theory. In a certain sense Mathematical Analysis appears somewhat schizophrenic, working
simultaneously with an abstract paradigm, some geometric intuition and some numerical sense. In the Anglo–Saxon
mathematical literature this dichotomy is named soft analysis versus hard analysis. For instance:
• the Weierstrass Theorem on the existence of the minimum and the maximum of a continuous function on a closed and
bounded interval is an example of soft analysis: it says that the minimum and the maximum exist, but it doesn't give
a practical method to find them!
• the definition of the derivative of a function at a point is a soft definition with lots of geometrical sense, giving the concept
of tangent to the graph of a function;
• the proof of existence of the Napier number e, that is of the limit lim_{n→+∞} (1 + 1/n)^n, is a masterwork of soft analysis
combined with hard computations.
These examples tell us that if we look at Mathematical Analysis as mere calculus (with a small "c") we miss the
true sense of this branch of Mathematics. Therefore, to have a good understanding of this discipline, we need to master
both: soft and hard. This is undoubtedly difficult work, but when it succeeds it is really satisfactory!
There's another myth that has to be dispelled. Often the student perceives Mathematical Analysis as an enormous handbook
of recipes without a precise sense and intuition. This is completely false! Mathematical Analysis is like a theme with
variations: the theme is the concept of limit, the variations are limits of sequences, infinite sums, continuity of functions,
derivatives, integrals, . . . . Moreover, this concept has an intuitive sense fundamental to describe practical situations, that
is: what happens to the behavior of some system when we change the system or some of its parameters a little. This is
a fundamental problem for the applied sciences because nothing is exact and everything is only approximate. Our calculus
would be useless if a little change in our data determined a drastic change in the behavior of the system. For instance:
you compute everything to build a bridge in such a way that it doesn't collapse, but when you finish the construction the
bridge falls down! So we need to know that our computations are in some sense "stable", and this is one of the aims of
Mathematical Analysis for the applied sciences.
Some advice on the study of Mathematical Analysis.
By what we said above, a good way to work is to solve exercises supported by a solid comprehension of the theory. A
strong knowledge of all the definitions and of the statements of the theorems is fundamental. To understand definitions and statements
of theorems better, it is good practice to try to complete the proofs left as exercises. This will help you to learn simultaneously the
idea behind the statement and the argument. The most important thing to do is to understand the strategy of
the proof. A good student, having the strategy in mind, is able to reconstruct the entire proof without resorting to memory
(which, often and especially during exams, is not a good ally. . . ).
The text includes, in the last section of each Chapter, several exercises (more than 600 in the whole text). Some
of them are more difficult: those marked with a (?) require a little more ability than the others, those with (??) require also some
more creativity.
Well, have a nice journey and. . . good luck!
Contents

0 Preliminaries: basic concepts . . . 3
   0.1 Sets . . . 3
   0.2 Propositions . . . 4
   0.3 Functions . . . 6

1 Real Numbers . . . 7
   1.1 Definition of R . . . 8
   1.2 Elementary functions . . . 11
      1.2.1 Powers . . . 12
      1.2.2 Exponentials and Logarithms . . . 14
      1.2.3 Trigonometric functions . . . 16
   1.3 Modulus . . . 18
   1.4 Information on the structure of R . . . 20
   1.5 Factorial and binomial coefficients . . . 20
   1.6 Exercises . . . 21

2 Sequences . . . 23
   2.1 Mathematical models . . . 23
      2.1.1 The Malthus model . . . 23
      2.1.2 Fibonacci numbers . . . 24
      2.1.3 Interest rates . . . 25
   2.2 Limit: the concept . . . 25
      2.2.1 Finite limit . . . 26
      2.2.2 Infinite limit . . . 28
      2.2.3 Non existence . . . 30
   2.3 Fundamental properties . . . 31
   2.4 The Bolzano–Weierstrass theorem . . . 34
   2.5 Rules of calculus . . . 35
      2.5.1 Infinite limits . . . 36
   2.6 Principal infinities . . . 39
   2.7 Indeterminate forms for powers . . . 42
   2.8 The Napier number . . . 43
   2.9 Exercises . . . 45

3 Numerical Series . . . 47
   3.1 Definition and examples . . . 48
   3.2 Constant sign terms series . . . 52
      3.2.1 Asymptotic comparison . . . 54
      3.2.2 Root and Ratio tests . . . 55
   3.3 Variable sign terms series . . . 57
      3.3.1 Alternate sign . . . 57
      3.3.2 Absolute convergence . . . 59
   3.4 Exercises . . . 61

4 Limits . . . 63
   4.1 Elementary topology on the real line . . . 63
   4.2 Definition of limit and of continuous function . . . 64
   4.3 Basic properties of limits and continuous functions . . . 67
   4.4 Rules of calculus . . . 71
      4.4.1 Change of variable . . . 73
   4.5 Fundamental limits . . . 75
      4.5.1 Comparison exponentials/powers/logarithms at +∞ . . . 75
      4.5.2 Trigonometric functions . . . 76
      4.5.3 Exponentials, logarithms and powers at 0 . . . 79
   4.6 Hyperbolic functions . . . 81
   4.7 Exercises . . . 82

5 Continuity . . . 87
   5.1 Weierstrass Theorem . . . 87
   5.2 Zeroes theorem and intermediate values theorem . . . 88
   5.3 The theorem of continuous inverse . . . 89
   5.4 Exercises . . . 92

6 Differential Calculus . . . 93
   6.1 Definition and first properties . . . 94
   6.2 Derivatives of elementary functions . . . 96
   6.3 Rules of calculus . . . 97
   6.4 Fundamental theorems of Differential Calculus . . . 99
   6.5 Hôpital’s rules . . . 102
   6.6 Derivative and monotonicity . . . 105
   6.7 Differentiable inverse mapping theorem . . . 106
   6.8 Convexity . . . 108
   6.9 Plot of the graph of a function . . . 111
   6.10 Applied Calculus . . . 115
   6.11 Taylor formula . . . 117
      6.11.1 Computing limits by using asymptotic expansion . . . 121
      6.11.2 Applications to convergence for numerical series . . . 125
   6.12 Exercises . . . 126

7 Primitives . . . 133
   7.1 Elementary and quasi–elementary primitives . . . 134
   7.2 Rules of calculus . . . 135
   7.3 Primitive of rational functions . . . 139
      7.3.1 Some standard changes of variable . . . 143
   7.4 Exercises . . . 145

8 Riemann Integral . . . 149
   8.1 Definition of integrable function . . . 149
   8.2 Classes of integrable functions . . . 151
   8.3 Properties of the Riemann Integral . . . 153
   8.4 Fundamental Theorem of Integral Calculus . . . 154
   8.5 Integration formulas . . . 156
   8.6 Generalized integrals . . . 157
   8.7 Convergence criteria for generalized integrals . . . 160
      8.7.1 Constant sign integrand . . . 160
      8.7.2 Non constant sign integrand . . . 162
   8.8 Functions of integral type . . . 163
   8.9 Exercises . . . 166

9 Basic Differential Equations . . . 169
   9.1 Motivating models . . . 169
      9.1.1 Decay phenomena . . . 169
      9.1.2 Newton equations . . . 170
      9.1.3 Bernoulli brachistochrone . . . 171
   9.2 First order linear equations . . . 172
   9.3 First order separable variables equations . . . 174
   9.4 Second Order Linear Equations . . . 177
      9.4.1 Applications . . . 180
      9.4.2 Cauchy problem . . . 182
   9.5 Exercises . . . 183
Chapter 0
Preliminaries: basic concepts
Mathematics is first of all a language. Like any language it has its own vocabulary and a grammar. As in any vocabulary,
the sense of some words has to be taken as primitive. Think about it: if you want to define what a chair is you could say it is
something on which we sit down. But this would be partially imprecise (there are lots of things on which we can sit down
that we wouldn't call chairs) and, more importantly, it would refer to another concept (to sit down) that we would then have to define.
In this backward process you easily arrive at some point that you need to take as "not defined", given by intuition. This happens
in any language and Mathematics is no different. The peculiarity of Mathematics is that the "mathematical
literature" must be abstract and precise, and it must not leave space for ambiguities.
The aim of this short Chapter is to introduce the "primitive concepts" of all Mathematics and the basic structures of the
"grammar".
0.1 Sets
The concept of set is a primitive concept. What does this mean? If we try to define what a set is, we should say
something like: "a set is a collection of elements" or "a set is a family of objects" and so on. We would pass from the
problem of defining a set to that of defining a collection or a family. In other words, we would define the concept of set by
a synonym, creating a never ending chain. For this reason we necessarily need to take some concepts as primitive, that
is without a definition.
Usually sets are denoted by capital letters A, B, C, X, Y, . . .. The notation x ∈ X means that x is an element of the set
X, that is it belongs to X, while x ∉ X means that x doesn't belong to X. We say that A is a subset of B, written
A ⊂ B, if every x ∈ A belongs to B, that is x ∈ B.
Many sets are defined through operations on other sets. The main operations are:
• union: given A, B ⊂ X, the union of A and B is the set of elements belonging to at least one between A and B:
A ∪ B := {x ∈ X : x ∈ A or x ∈ B} .
• intersection: given A, B ⊂ X, the intersection of A and B is the set of elements belonging to both A and B, that is
A ∩ B := {x ∈ X : x ∈ A and x ∈ B} .
In the case that A and B have no common elements, we write A ∩ B = ∅. It is convenient to think of ∅ as a set
without elements, called the empty set.
• difference: given A, B ⊂ X, the difference between A and B (in this order) is the set of elements of A that do not
belong to B, that is
A\B := {x ∈ X : x ∈ A and x ∉ B} .
Usually A\B ≠ B\A, as one can easily see by some example.
Often we work with subsets of a given set that is something like a universe (for instance, in all this course we will deal
with subsets of the real numbers). If X is the universe, we call the complement of A ⊂ X the set
Ac := X\A = {x ∈ X : x ∈ X and x ∉ A} = {x ∈ X : x ∉ A}.
It is easy to check that
if A, B ⊂ X, then A\B = A ∩ B c .
It is not difficult to check that the previously defined operations fulfill the following properties:
• (commutativity) A ∪ B = B ∪ A, A ∩ B = B ∩ A, for any A, B.
• (associativity) A ∪ (B ∪ C) = ( A ∪ B) ∪ C, A ∩ (B ∩ C) = ( A ∩ B) ∩ C, for any A, B, C.
• (distributivity) A ∩ (B ∪ C) = ( A ∩ B) ∪ ( A ∩ C), for any A, B, C.
• (neutral element for the union) A ∪ ∅ = A for any A.
In this way ∪ and ∩ behave like + and · for numbers, with ∅ in the role of 0. Actually not everything is similar. For instance
A ∪ A = A (while for numbers a + a = a iff a = 0) and A ∩ A = A (while for numbers a · a = a iff a = 0 or a = 1).
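These set operations have a direct computational counterpart. The following small sketch (an illustration added here, not part of the original text) uses Python's built-in set type to check some of the identities above on a toy universe:

```python
# Toy universe and subsets: the operations of this section in Python.
X = {1, 2, 3, 4, 5, 6}
A = {1, 2, 3}
B = {3, 4}
C = {4, 5}

print(A | B)   # union A ∪ B         -> {1, 2, 3, 4}
print(A & B)   # intersection A ∩ B  -> {3}
print(A - B)   # difference A \ B    -> {1, 2}
print(X - A)   # complement of A in X -> {4, 5, 6}

# distributivity: A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C)
assert A & (B | C) == (A & B) | (A & C)
# A \ B = A ∩ B^c
assert A - B == A & (X - B)
```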
0.2 Propositions
The phrases we write in Mathematics are statements that can be true or false. Such statements are called propositions. For
instance
2 divides 6 is a true proposition, 2 divides 3 is a false proposition.
On the other hand,
2 is a beautiful number or God exists
are not propositions. The first is nonsense, the second is true for the believer, false for the atheist.
Usually propositions are denoted by letters like p, q, . . .. In some cases the proposition may depend on some variable.
For instance
p(n) := 2 divides n is true if n is even, false if n is odd.
In these cases we say that p is a predicate. Predicates are very useful tools to describe sets. Indeed, it is often impossible
to list all the elements of a set (for instance when they are infinitely many) but it is possible to find a property
characterizing all the elements of the set. For instance
S = {n ∈ N : p(n) is true} = {n ∈ N : 2 divides n} = {even numbers} .
In general, if X is a universe set and p is a predicate over X, we will write
S = {x ∈ X : p(x) is true} ≡ {x ∈ X : p(x)} ,
where the latter is a shorthand for the former.
On propositions too we define operations, called logic operations. If p, q are propositions then
• the disjunction of p and q is the proposition
p or q (notation p ∨ q), true iff at least one between p and q is true;
• the conjunction of p and q is the proposition
p and q (notation p ∧ q), true iff p and q are both true;
• the negation of p is the proposition
not p (notation ¬p), true iff p is false.
For instance
(2 divides 6) ∨ (2 divides 3) is true, (2 divides 6) ∧ (2 divides 3) is false.
Conjunction and disjunction correspond, respectively, to intersection and union for sets in the sense that
if A = {x ∈ X : p(x)}, B = {x ∈ X : q(x)}, then
A ∪ B = {x ∈ X : p(x) ∨ q(x)} ,
A ∩ B = {x ∈ X : p(x) ∧ q(x)} .
Let's now consider a predicate p = p(x) where x ∈ X, X a universe. Often we encounter phrases of the type
there exists at least one x ∈ X such that p(x) is true ≡ ∃ x ∈ X : p(x),
or
for any x ∈ X, p(x) is true ≡ ∀x ∈ X : p(x).
The symbols ∃ (there exists) and ∀ (for all) are called quantifiers (existential and universal, respectively). We have also
there exists a unique x ∈ X such that p(x) is true ≡ ∃! x ∈ X : p(x).
The quantifiers are symbols that we encounter in any mathematical statement, so it is useful to have a good understanding
of them. This may not be very easy. For instance
¬ (∀x ∈ X : p(x)) ≡ ∃x ∈ X : p(x) is false ≡ ∃ x ∈ X : ¬p(x).
Moreover, often the quantifiers appear combined together. For instance, consider a predicate in two variables p(x, y) with
x, y ∈ X and the propositions
∀x ∈ X ∃y ∈ X : p(x, y), ∃x ∈ X, ∀y ∈ X : p(x, y).
Do they say the same thing? No! To see it consider a specific example with
p(x, y) := x loves y, on X = {human beings}.
Let’s see what do the two statements mean:
∀x ∈ X ∃y ∈ X : p(x, y) ≡ every human being loves at least one other human being,
∃x ∈ X, ∀y ∈ X : p(x, y) ≡ there exists at least one human being that loves all the others.
This should give an easy idea about the difference. Be careful: the most important concept of Mathematical Analysis, that
is the concept of limit, is just a combination of ∀, ∃.
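On a finite universe the two combined statements can be tested mechanically. Here is a small sketch (an added illustration with a toy "loves" relation; Python's all and any play the roles of ∀ and ∃):

```python
# Toy predicate p(x, y) on a finite set X: "x loves y".
X = ["anna", "bob", "carl"]
loves = {("anna", "bob"), ("bob", "anna"), ("carl", "anna")}

def p(x, y):
    return (x, y) in loves

# ∀x ∃y : p(x, y) — every element loves at least one other
forall_exists = all(any(p(x, y) for y in X if y != x) for x in X)

# ∃x ∀y : p(x, y) — some element loves all the others
exists_forall = any(all(p(x, y) for y in X if y != x) for x in X)

print(forall_exists)   # True: each of the three loves someone
print(exists_forall)   # False: nobody loves everybody else

# negation rule: ¬(∀x : q(x)) ⇔ ∃x : ¬q(x)
q = lambda n: n % 2 == 0
N = range(10)
assert (not all(q(n) for n in N)) == any(not q(n) for n in N)
```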
0.3 Functions
A function f : X −→ Y is a way to associate to any x ∈ X an element f (x) ∈ Y . The concept of function is also a primitive
concept. Usually X is called the domain whereas Y is called the co–domain. The element f (x) is called the image of x through f .
In general, if A ⊂ X, we set
f ( A) := { f (x) : x ∈ A} ,
and we call f ( A) the image of A through f . Some important properties of functions are:
• we say that f is injective if f (x) = f (y) iff (if and only if) x = y;
• we say that f is surjective if f (X ) = Y , that is ∀y ∈ Y , ∃x ∈ X such that f (x) = y.
• we say that f is bijective if it is injective and surjective.
Warning! 0.3.1. Never forget that a function is not only a rule, because the same "rule" may change the properties of the
function by changing the domain or co–domain (or both). For instance, consider f (n) = n². We may think of f in different contexts:
• f : N −→ N, in this case it is injective but not surjective;
• f : Z −→ Z, it is not injective nor surjective;
• f : N −→ f (N), it is injective and surjective (that is bijective).
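On finite truncations of N and Z these properties can be checked mechanically; the following sketch (an added illustration, using finite ranges as stand-ins for the infinite sets) replays the warning for f (n) = n²:

```python
# Check injectivity/surjectivity of f(n) = n^2 on finite truncations.
def is_injective(f, domain):
    images = [f(x) for x in domain]
    return len(images) == len(set(images))

def is_surjective(f, domain, codomain):
    return set(f(x) for x in domain) >= set(codomain)

f = lambda n: n * n
N = range(0, 10)                 # stand-in for N
Z = range(-9, 10)                # stand-in for Z

print(is_injective(f, N))        # True: on N, n^2 is injective
print(is_injective(f, Z))        # False: f(-2) = f(2)
print(is_surjective(f, N, N))    # False: e.g. 2 is not a square
fN = {f(n) for n in N}
print(is_surjective(f, N, fN))   # True: onto its own image
```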
If f is bijective, in particular we have that
∀y ∈ Y, ∃! x ∈ X : f (x) = y.
In other words, for any y ∈ Y there’s a unique x ∈ X such that f (x) = y. This (the way to associate to y the element x
such that f (x) = y) defines a function,
f −1 : Y −→ X, f −1 (y) := x, where x ∈ X : f (x) = y.
This function is called inverse of f . In particular:
f ( f −1 (y)) = y, ∀y ∈ Y, f −1 ( f (x)) = x, ∀x ∈ X .
Let’s introduce a useful and important operation on functions: if
f : X −→ Y, g : Y −→ Z,
we set
g ◦ f : X −→ Z, (g ◦ f )(x) := g( f (x)), ∀x ∈ X,
called the composition of f and g. In particular, calling the identity on X
IX : X −→ X, IX (x) := x, ∀x ∈ X,
if f is bijective
f ◦ f −1 = IY , f −1 ◦ f = IX .
To finish, let’s recall that a useful way to represent a function is associating to it a set, called graph, defined as
G( f ) := {(x, f (x)) ∈ X × Y : x ∈ X } .
Here X × Y is the cartesian product of X and Y , that is the set of pairs (x, y) with x ∈ X and y ∈ Y .
Chapter 1
Real Numbers
The set of real numbers R is the framework where we will develop all the concepts of Analysis. The concept of number has
a long history and only in recent times, that is at the end of the XIX century, did mathematicians find a satisfactory definition.
A set of numbers, indeed, is not simply a set, because some operations are defined on its elements. There are several
sets of numbers: naturals, integers, rationals, reals, complexes, . . . . Each of them corresponds to a specific need. The set
of natural numbers N = {0, 1, 2, 3, . . .} answers the need to count. On N two important operations (sum
and product) and an order are defined, fulfilling some elementary properties. Natural numbers are enough for most of the situations
we find in everyday life, but not for all. In some cases it is convenient to have the concept of negative number. For
instance, considering an economic enterprise, it is convenient to denote a debt as a negative profit, in such a way that summing
(algebraically) all the incomes with the (negative) outcomes we easily get the economic result of the enterprise. For this
reason negative numbers and the set of integers Z = {0, ±1, ±2, ±3, . . .} were historically introduced. On Z the
same operations we have on N are defined. Actually, N may be seen as a subset of Z but, differently from N, in Z we can always talk
about the opposite of any number.
When we pose the problem of dividing something into a certain number of parts we easily recognize that Z is not enough.
Sure, in some cases it is possible: for instance 6 may be divided into 2 or 3 parts, but not into 4 or 5 parts. For this reason
we are led to introduce the rational numbers Q = {p/q : p, q ∈ Z, q ≠ 0}. The Greeks have a fundamental role in the history of
knowledge in general and for mathematics in particular. They didn't have the concept of number in the abstract form we
have today and they treated numbers as geometric measures. In other words, numbers were lengths, areas or volumes.
The algebraic operations were defined geometrically: for instance, the number 1/5 was built considering a segment of length
(conventionally) 1 and dividing it into 5 equal parts. Being able to construct the division into any number n of parts, taking m
copies of one part they were able to construct the number m/n. In other words they knew rational numbers. One of the most
amazing discoveries of the Greeks was nevertheless that there were numbers beyond the rationals. If you consider a square of side
1, its diagonal has measure (by the Pythagorean Theorem) √2. The Greeks discovered that √2 is not a rational number and their proof
is a masterpiece of elegance and depth, a striking example of "proof by contradiction". Also the ratio between
the area of a circle and the square of its radius, that is π, was suspect, but they didn't prove that π is also irrational (the
proof came many centuries later).
So the question is: what kind of number is √2 or π? Over the centuries mathematicians used these numbers continuously,
but for their definition it was necessary to wait until the end of the 1800s, when Georg Cantor and Richard Dedekind,
independently and with original solutions, solved the puzzle and gave a construction of the set of real numbers. The
construction is too complex and beyond the scope of this course, so it will be omitted. Here we will just state the
properties of the real numbers and we will learn how to use this powerful machine. This should be faced without problems:
on one side, people like Newton, Euler or Gauss (to quote three of the most important mathematicians of all time) used real
numbers even without having a definition of them; more plainly, if you want to learn to drive a car you go to
driving school and you learn how to drive without knowing all the details about the construction of the car.
1.1 Definition of R
The set of real numbers is introduced by the following
Theorem 1.1.1. There exists a set R ⊃ Q, called the set of real numbers, with the following properties:
• sum: on R is defined a sum + (coinciding with the ordinary sum for rationals when we add rationals) such that:
i) (associativity): x + (y + z) = (x + y) + z, ∀x, y, z ∈ R.
ii) (commutativity): x + y = y + x, ∀x, y ∈ R.
iii) (zero): x + 0 = x, ∀x ∈ R.
iv) (opposite): ∀x ∈ R, ∃!y ∈ R : x + y = 0. The opposite is denoted by −x.
• product: on R is defined a product · (coinciding with the ordinary product for rationals when we multiply rationals)
such that:
i) (associativity): x · (y · z) = (x · y) · z, ∀x, y, z ∈ R.
ii) (commutativity): x · y = y · x, ∀x, y ∈ R.
iii) (unit): x · 1 = x, ∀x ∈ R.
iv) (reciprocal): ∀x ∈ R\{0}, ∃!y ∈ R : x · y = 1. The reciprocal is denoted by 1/x.
• distributivity: x · (y + z) = x · y + x · z, ∀x, y, z ∈ R.
• order: on R is defined an order < (coinciding with the order for rationals when we order rationals) such that:
i) (total order): ∀x, y ∈ R exactly one among x < y, x = y, y < x is true.
We write x ≤ y if x < y or x = y. Then
ii) (transitivity): if x ≤ y and y ≤ z then x ≤ z.
iii) (antisymmetry): if x ≤ y and y ≤ x then y = x.
iv) (invariance by sum): if x ≤ y then x + z ≤ y + z, ∀z ∈ R.
v) (invariance by product): if x ≤ y then x · z ≤ y · z, ∀z ∈ R, z ≥ 0.
Finally, the following property holds:
• (completeness): any lower bounded set S ⊂ R (that is, such that there exists m ∈ R for which m ≤ s for any s ∈ S)
has a best lower bound. That is: there exists a number α ∈ R such that
i) α is a lower bound for S, that is α ≤ s, ∀s ∈ S;
ii) α is the best lower bound for S, that is ∀β > α, ∃s ∈ S such that α ≤ s ≤ β.
The best lower bound is denoted by inf S.
Of course, the existence of the best lower bound is equivalent to the existence of the best upper bound:
(Figure: a set S on the real line together with the bounds α and β.)
Proposition 1.1.2. Any upper bounded set S ⊂ R (that is, such that there exists M ∈ R for which s ≤ M for any s ∈ S)
has a best upper bound, that is there exists α =: sup S ∈ R such that
i) α is an upper bound for S: α ≥ s, ∀s ∈ S;
ii) α is the best upper bound for S: ∀β < α, ∃s ∈ S such that β ≤ s ≤ α.
Proof. — Exercise.
Let’s introduce also the
Definition 1.1.3. Let S ⊂ R.
• If S is lower unbounded we write inf S := −∞. We have
inf S = −∞ ⇐⇒ ∀m ∈ R, ∃s ∈ S : s ≤ m.
• If S is upper unbounded we write sup S := +∞. We have
sup S = +∞ ⇐⇒ ∀M ∈ R, ∃s ∈ S : s ≥ M.
Warning! 1.1.4. ±∞ are not numbers! Just useful conventions.
Definition 1.1.5 (minimum, maximum). Let S ⊂ R. We say that m = min S if
i) m ≤ s, ∀s ∈ S; ii) m ∈ S.
Similar definition for max S.
Clearly
Proposition 1.1.6. If there exists min S then min S = inf S, if there exists max S then max S = sup S.
Proof. — Exercise.
Example 1.1.7. Find inf / sup and, if they exist, min / max for the set
S = { 1/n : n ∈ N, n ≥ 1 } .
Sol. — The elements of S are numbers of the form 1/n with n ∈ N, n ≥ 1 (warning: someone believes that the elements of S are the n ∈ N,
n ≥ 1: of course this is false). They are all between 0 and 1, that is 0 ≤ 1/n ≤ 1 for any n ∈ N, n ≥ 1. In particular S is lower and upper
bounded (respectively by 0 and 1).
inf / min. A natural candidate for inf is 0: it is indeed a lower bound, as noticed, and it seems intuitively clear that it is the best. Let's
check this:
∀β > 0, ∃n ∈ N, n ≥ 1 : 1/n ≤ β.
But
1/n ≤ β ⇐⇒ n ≥ 1/β,
so it is enough to take any n bigger than 1/β: but who knows if this is possible? Actually the answer is yes, but it is a little bit delicate (see
the Archimedean property). So inf S = 0. If min S exists it is necessarily equal to 0, so it is enough to check whether 0 ∈ S or not to know if
min S exists or not. Because 0 ∉ S, we conclude that min S doesn't exist.
sup / max. We have seen that 1 is an upper bound. Immediately 1 ∈ S, so max S = 1 exists, hence sup S = max S = 1.
Definition 1.1.8. Let a, b ∈ R, a < b. We set
[a, b] := {x ∈ R : a ≤ x ≤ b},   ]a, b] := {x ∈ R : a < x ≤ b},
[a, b[ := {x ∈ R : a ≤ x < b},   ]a, b[ := {x ∈ R : a < x < b},
] − ∞, b] := {x ∈ R : x ≤ b},   ] − ∞, b[ := {x ∈ R : x < b},
[a, +∞[ := {x ∈ R : x ≥ a},   ]a, +∞[ := {x ∈ R : x > a}.
All these sets are called intervals. We set also ] − ∞, +∞[ := R and
R+ := [0, +∞[, R− := ] − ∞, 0].
In the Example of the previous section we posed the question: given a positive number b, does there exist a natural
n ≥ b? This seems obvious, but from just the properties stated for R it is actually not evident: why couldn't there be real
numbers bigger than any integer?
Lemma 1.1.9 (Archimedean property).
∀b > 0, ∃n ∈ N : n > b.
Proof. — Suppose that the conclusion is false, that is there exists a real b such that
n ≤ b, ∀n ∈ N.
N would then be upper bounded in R, so by completeness sup N would exist:
R ∋ α := sup N.
Let's give a look at this weird situation with a picture:
(Figure: N on the real line, upper bounded by b, with α = sup N, a β with α − 1 < β < α, and the points n̄ and n̄ + 1.)
Now, taking any α − 1 < β < α, being α the best upper bound,
∃ n̄ ∈ N : β ≤ n̄ ≤ α.
But then N ∋ n̄ + 1 ≥ β + 1 > (α − 1) + 1 = α: we have found an element of N (namely n̄ + 1) strictly bigger than α = sup N: this is a
contradiction!
Let x ≥ 0. Sooner or later some integer n will be bigger than x, that is x ∈ [0, n[. Now
[0, n[ = [0, 1[ ∪ [1, 2[ ∪ [2, 3[ ∪ . . . ∪ [n − 1, n[.
And because the intervals [k, k + 1[ are disjoint, x belongs to exactly one of them.
Definition 1.1.10. Given x ≥ 0 we call integer part of x the number [x] ∈ N such that
[x] ≤ x < [x] + 1.
We call fractional part of x the number x − [x] =: {x}.
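For x ≥ 0 the integer part [x] is exactly what math.floor computes, and the fractional part is the remainder; a quick sketch (an added illustration):

```python
import math

def integer_part(x):
    # [x]: the unique integer with [x] <= x < [x] + 1 (here x >= 0)
    return math.floor(x)

def fractional_part(x):
    # {x} := x - [x], always in [0, 1)
    return x - math.floor(x)

x = 3.75
print(integer_part(x), fractional_part(x))   # 3 0.75
assert integer_part(x) <= x < integer_part(x) + 1
```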
Example 1.1.11. Find inf / sup and, if they exist, min / max of
S := { 2 − 1/√n : n ∈ N, n ≥ 1 }.
Sol. — sup / max. The elements of S are the numbers 2 − 1/√n, as n ∈ N, n ≥ 1. Clearly 2 − 1/√n < 2 for any n ≥ 1. We deduce that S is upper bounded (by 2 for instance), therefore sup S ∈ R by
completeness. Let's see that sup S = 2. Intuitively it is clear: when n is "big", the number 2 − 1/√n is close to 2 and lower than it. We have
already seen that 2 is an upper bound; it remains to check that it is the best, that is
∀β < 2, ∃n ≥ 1 : 2 − 1/√n ≥ β.
But
2 − 1/√n ≥ β ⇐⇒ 1/√n ≤ 2 − β ⇐⇒ √n ≥ 1/(2 − β) ⇐⇒ n ≥ (1/(2 − β))².
Therefore, if n ≥ (1/(2 − β))² (precisely, n ≥ [(1/(2 − β))²] + 1) we are done. About max: if it exists it coincides with sup S = 2. So we have to
check whether 2 ∈ S or not. But 2 − 1/√n < 2 for any n ≥ 1, so 2 ∉ S, therefore max S doesn't exist.
inf / min. Notice that if n ↗ (that is, n increases) then 1/√n ↘, hence 2 − 1/√n ↗. Therefore the value at n = 1 is the smallest:
S ∋ 2 − 1/√1 = 1 ≤ 2 − 1/√n, ∀n ≥ 1, ⟹ 1 = min S = inf S.

1.2 Elementary functions
Elementary functions are the basic functions of Analysis, on which all the others are constructed. Despite the name, the
construction of the elementary functions is not at all elementary and it is beyond our scope. The properties, and in particular
the graphs, of these functions have to be known very well.
1.2.1 Powers
The elementary power is the function x^n with n ∈ N, n ≥ 1: x, x², x³, . . .. These functions are defined for every x ∈ R.
Moreover
(−x)^n = x^n, ∀x ∈ R, n even;   (−x)^n = −x^n, ∀x ∈ R, n odd.
Definition 1.2.1. Let f : D ⊂ R −→ R. We say that f is
even if f (−x) = f (x), ∀x ∈ D; odd if f (−x) = − f (x), ∀x ∈ D.
An even function has a graph symmetric with respect to the y−axis; an odd function has a graph symmetric with respect to
the origin. It is implicitly clear that to be even/odd a function f must be defined on a domain symmetric w.r.t. 0, that is:
x ∈ D iff −x ∈ D.
Extending now to x^n, n ∈ Z, n < 0, by setting
x^n := 1/x^{−n},
we see that the power is well defined for every x ≠ 0. The first non trivial power is x^q with q = m/n ∈ Q. Of course
we assume q ∈ Q\Z, otherwise we would be in the previous case. In particular, considering q = 1/n, standard properties of
powers like
(x^{1/n})^n = x^{(1/n)·n} = x¹ = x
suggest to define x^{1/n} as the n−th root of the number x. In other words: the definition of the power x^{1/n} is based on the
existence of the n−th root of a number, a highly non trivial result:
Theorem 1.2.2.
∀n ∈ N, n ≥ 2, ∀x ≥ 0, ∃! y ≥ 0 : y^n = x.
We call such unique y the n−th root of x and we denote it by ⁿ√x. We have
ⁿ√x = inf{y ≥ 0 : y^n ≥ x}.
Proof. — Omitted.
If n is odd we can define the n−th root also for negative numbers, setting
ⁿ√x := −ⁿ√(−x), if x < 0 (n odd).   (1.2.1)
Indeed this definition works as the n−th root of x < 0 because, n being odd,
(ⁿ√x)^n = (−ⁿ√(−x))^n = −(ⁿ√(−x))^n = −(−x) = x.
Therefore
x^{1/n} is defined on [0, +∞[ if n > 0 is even, and on ] − ∞, +∞[ if n > 0 is odd.
To pass to x^q with q ∈ Q is now easy, suggested by the identity
x^{m/n} := (x^{1/n})^m.
There's just a little care to take: a rational number q = m/n can be written in infinitely many ways. We agree to consider
the fraction m/n reduced as much as possible, that is with m and n without common divisors; moreover we can always assume
n > 0. With this agreement, if m > 0,
x^{m/n} := (x^{1/n})^m, ∀x ∈ [0, +∞[ (n even), ∀x ∈ ] − ∞, +∞[ (n odd).
When m < 0 the definition is the same, except that in this case the power is not defined for x = 0. We have
• x^{p+q} = x^p x^q ;
• (x^p)^q = x^{pq} ;
• 0^p = 0, for any p > 0;
• 1^p = 1, for any p ∈ Q.
Moreover we have the following monotonicity properties:
if 0 < x < y, then x^p < y^p when p > 0, and x^p > y^p when p < 0;
if p < q, then x^p < x^q when x > 1, and x^p > x^q when 0 < x < 1.
As functions, the rational powers x^q with q = m/n ∈ Q\Z are:
x^q : [0, +∞[ −→ [0, +∞[, for q > 0, n even;
x^q : ]0, +∞[ −→ ]0, +∞[, for q < 0, n even;
x^q : ] − ∞, +∞[ −→ ] − ∞, +∞[, for q > 0, n odd;
x^q : ] − ∞, +∞[ \{0} −→ ] − ∞, +∞[ \{0}, for q < 0, n odd.
It is convenient to introduce the following
Definition 1.2.3. Let f : D ⊂ R −→ R. We say that f is increasing (notation f ↗) if
f (x) ≤ f (y), ∀x, y ∈ D : x ≤ y.
When f (x) < f (y) as x < y we say that f is strictly increasing. Similar definitions hold for decreasing and strictly decreasing. An
increasing/decreasing function is said to be monotone.
To define exponentials we need powers x^α with real exponent α. It is however not very clear what x^√2, for instance, should
mean: to which operation should it correspond? The definition is based on the properties of powers with rational
exponent introduced above. Indeed: take x > 1 and notice that
x^q < x^p, ∀q < α < p, q, p ∈ Q.
In other words we may expect that
x^α = inf{x^p : p ∈ Q, p > α}.
This definition turns out to work and fulfills the usual properties of powers:
Theorem 1.2.4. Let α ∈ R and x > 0. If x > 1 set
x^α := inf{x^p : p ∈ Q, p > α}.
If x = 1, 1^α := 1, while if x < 1 we pose
x^α := 1/x^{−α}.
Then
• x^{α+β} = x^α x^β.
• (x^α)^β = x^{αβ}, in particular x^{−α} = 1/x^α.
• x^α ↗ strictly if α > 0, x^α ↘ strictly if α < 0.
As a function, x^α is bijective between I = ]0, +∞[ and J = ]0, +∞[ (with (x^α)^{−1} = x^{1/α}).
(Figure: graphs of y = x^p for p < 0, 0 < p < 1, p = 1 and p > 1.)
1.2.2 Exponentials and Logarithms
Fix now a > 0 and call
exp_a : ] − ∞, +∞[ −→ ]0, +∞[, exp_a(x) := a^x, x ∈ R,
the exponential of base a. Clearly exp_1 ≡ 1. From the properties of powers we have
Theorem 1.2.5. Let a > 0, a ≠ 1. Then
• exp_a ↗ strictly if a > 1, exp_a ↘ strictly if a < 1.
• a^{x+y} = a^x a^y, for any x, y ∈ R.
• exp_a(0) = a^0 = 1, for any a > 0.
(Figure: graphs of y = a^x for 0 < a < 1, a = 1 and a > 1.)
Proof. — It is an easy exercise using the properties of powers.
Example 1.2.6. Solve
5^x − 1/5^{x−1} ≥ 4.
Sol. — Being 5^{x−1} > 0, the inequality makes sense for any x ∈ R. We have
5^x − 1/5^{x−1} ≥ 4 ⇐⇒ 5^{2x−1} − 1 ≥ 4·5^{x−1} ⇐⇒ (1/5)5^{2x} − (4/5)5^x − 1 ≥ 0 ⇐⇒ 5^{2x} − 4·5^x − 5 ≥ 0.
Therefore, setting y = 5^x, we have
5^{2x} − 4·5^x − 5 ≥ 0 ⇐⇒ y² − 4y − 5 ≥ 0 ⇐⇒ y ≤ (4 − √36)/2 = −1 ∨ y ≥ (4 + √36)/2 = 5,
iff 5^x ≤ −1 or 5^x ≥ 5. The first has no solutions. The second, being 5 > 1, gives 5^x ≥ 5 = 5¹ iff x ≥ 1. Therefore: the solutions
are x ≥ 1.
Let a > 0, a ≠ 1. The logarithm of base a is the inverse of exp_a. Precisely
Theorem 1.2.7. Let a > 0, a ≠ 1. Then
∀x > 0, ∃!y ∈ R : a^y = x.
We call log_a x := y. The function log_a : ]0, +∞[ −→ ] − ∞, +∞[ is the inverse of exp_a, that is
log_a(a^y) = y, ∀y ∈ R;   a^{log_a x} = x, ∀x ∈ ]0, +∞[,
and it fulfills the following properties:
i) log_a ↗ strictly if a > 1, log_a ↘ strictly if a < 1.
ii) log_a(xy) = log_a x + log_a y, for any x, y ∈ ]0, +∞[.
iii) log_a(x^y) = y log_a x, for any y ∈ R, x > 0.
iv) log_a x = (log_a b)(log_b x), for any x > 0, a, b ≠ 1.
v) log_a 1 = 0, for any a > 0, a ≠ 1.
vi) log_a a = 1, for any a > 0, a ≠ 1.
In particular: exp_a is bijective between I = ] − ∞, +∞[ and J = ]0, +∞[ whereas log_a is bijective between J and I, with
log_a^{−1} = exp_a.
(Figure: graphs of y = log_a x for a > 1 and a < 1.)
Proof. — Omitted.
It is useful to notice that
a^x ≥ y holds: for any x ∈ R, if y ≤ 0; for x ≥ log_a y, if y > 0 and a > 1; for x ≤ log_a y, if y > 0 and a < 1.
Example 1.2.8. Solve
log₂ √(x − 1) + 2 ≤ log₄(x + 4). (⋆)
Sol. — Let's first discuss the domain of existence of the inequality. We need
x − 1 > 0 and x + 4 > 0 ⇐⇒ x > 1 and x > −4 ⇐⇒ x > 1.
Therefore the domain is D = ]1, +∞[. Now, by the properties of logarithms,
(⋆) ⇐⇒ log₂ √(x − 1) + 2 ≤ (log₄ 2)(log₂(x + 4)).
Clearly log₄ 2 = log₄ 4^{1/2} = (1/2) log₄ 4 = 1/2, so
(⋆) ⇐⇒ log₂ √(x − 1) + 2 ≤ log₂ √(x + 4) ⇐⇒ log₂ √(x − 1) − log₂ √(x + 4) ≤ −2 ⇐⇒ log₂ (√(x − 1)/√(x + 4)) ≤ −2. (⋆₂)
The base being 2 > 1, by monotonicity,
(⋆₂) ⇐⇒ √(x − 1)/√(x + 4) ≤ 2^{−2} = 1/4 ⇐⇒ 4√(x − 1) ≤ √(x + 4) ⇐⇒ 16(x − 1) ≤ x + 4 ⇐⇒ x ≤ 4/3.
Hence, the solutions (in D) are the interval ]1, 4/3].
1.2.3 Trigonometric functions
The trigonometric functions are also called circular functions because of their geometric meaning. Consider a point (x, y)
on the unit circle of equation
x² + y² = 1.
Instead of the cartesian coordinates (x, y) we could characterize a point on the circle by the length θ of the arc of
circle between (conventionally) (1, 0) and (x, y). The total length being 2π, we have
θ = 0 ↔ (1, 0), θ = π/2 ↔ (0, 1), θ = π ↔ (−1, 0), θ = 3π/2 ↔ (0, −1), θ = π/4 ↔ (√2/2, √2/2).
In general, to any point (x, y) on the circle there corresponds a θ ∈ [0, 2π[. With this convention, given θ we call the
cartesian coordinates (x, y) as (cos θ, sin θ).
(Figure: the unit circle, with the point (x, y) at arc length θ from (1, 0) and coordinates (cos θ, sin θ).)
In this way we have defined two functions
sin, cos : [0, 2π[ −→ R.
For several practical reasons, it is convenient to define sin and cos for any θ ∈ R. There's a natural way to proceed. Imagine
a string fixed at (1, 0) winding around the circle. If the length 2π represents the entire circle, 2 · 2π = 4π would
be like two complete counterclockwise windings around the circle. In this way 2π + θ′ with θ′ ∈ [0, 2π[ represents
the same point as θ′. At the same time a length −2π would represent a complete clockwise winding, so −2π + θ′ (again
θ′ ∈ [0, 2π[) always represents the same point as 2π + θ′ and θ′. More in general,
θ + 2kπ, k ∈ Z,
is always the same point. Therefore
sin(θ + 2kπ) = sin θ, cos(θ + 2kπ) = cos θ, ∀k ∈ Z.
The properties of sin and cos are summarized by the
Theorem 1.2.9. There exist two functions sin, cos : R −→ R such that:
i) (cos 0, sin 0) = (1, 0), (cos π/2, sin π/2) = (0, 1).
ii) (fundamental identity): it holds
(sin θ)² + (cos θ)² = 1, ∀θ ∈ R.
In particular −1 ≤ sin θ ≤ 1, −1 ≤ cos θ ≤ 1, ∀θ ∈ R.
iii) (periodicity): sin(θ + 2π) = sin θ, cos(θ + 2π) = cos θ, for any θ ∈ R.
iv) (addition formulas)
sin(θ₁ + θ₂) = sin θ₁ cos θ₂ + cos θ₁ sin θ₂, cos(θ₁ + θ₂) = cos θ₁ cos θ₂ − sin θ₁ sin θ₂.
In particular: sin(2θ) = 2 sin θ cos θ, cos(2θ) = (cos θ)² − (sin θ)² = 2(cos θ)² − 1.
v) (symmetries): sin(−θ) = − sin θ and cos(−θ) = cos θ for any θ ∈ R.
Proof. — Omitted.
(Figure: graphs of y = sin x and y = cos x on [−π, 2π].)
Let's give a particular name to some of the properties fulfilled by sin and cos:
Definition 1.2.10. A function f : R −→ R is called T−periodic if
f (x + T ) = f (x), ∀x ∈ R,
and there is no S < T for which the same holds.
Connected to sin and cos we have the tangent and the cotangent:
tan : R\{π/2 + kπ : k ∈ Z} −→ R, tan θ := sin θ / cos θ;   cot : R\{kπ : k ∈ Z} −→ R, cot θ := cos θ / sin θ.
(Figure: graphs of y = tan x and y = cot x.)
It is easy to check that tan and cot are π−periodic.
1.3 Modulus
The set of real numbers R is usually represented graphically by a straight line (and we talk about the "real line"). This gives
a geometric representation of numbers. Geometry means the science of measure, so it is not surprising that it is natural to
introduce a concept of distance between points. This is done through the following
Definition 1.3.1 (modulus).
|x| := x, if x ≥ 0;   |x| := −x, if x < 0.
(Figure 1.1: at the left, |x| = x when x ≥ 0; at the right, |x| = −x when x < 0.)
Clearly |x| ≥ 0 for any x ∈ R. Here are the fundamental properties of the modulus:
Proposition 1.3.2. The following properties hold true:
• vanishing: |x| = 0 iff x = 0.
• homogeneity: |xy| = |x||y|, for any x, y ∈ R.
• triangular inequality: |x + y| ≤ |x| + |y|, for any x, y ∈ R.
Proof. — The first two are easy exercises. About the third: since
−|x| ≤ x ≤ |x|, −|y| ≤ y ≤ |y|,
summing we get
−(|x| + |y|) ≤ x + y ≤ |x| + |y|.
By this the conclusion is evident!
Remark 1.3.3. The name triangular inequality derives from the following argument: call
d(x, y) := |x − y|, x, y ∈ R (the euclidean distance between x and y).
Then
d(x, y) ≤ d(x, z) + d(z, y), ∀x, y, z ∈ R.
You may imagine (if x, y, z were points of the plane) x, y, z as the vertices of a triangle: the last inequality says that the length of
one side is less than the sum of the other two. The proof of this inequality for the distance is an immediate consequence of the
triangular inequality for the modulus: indeed
d(x, y) = |x − y| = |(x − z) + (z − y)| ≤ |x − z| + |z − y| = d(x, z) + d(z, y).
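Both inequalities are easy to spot-check numerically; the sketch below (an added illustration; random sampling illustrates, it does not prove) tests them on many random triples:

```python
import random

def d(x, y):
    # euclidean distance on the real line
    return abs(x - y)

random.seed(0)
for _ in range(10_000):
    x, y, z = (random.uniform(-100, 100) for _ in range(3))
    assert abs(x + y) <= abs(x) + abs(y)    # triangular inequality
    assert d(x, y) <= d(x, z) + d(z, y)     # distance version
print("no counterexample found")
```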
Notice again that
|x| = a ⇐⇒ never, if a < 0; x = 0, if a = 0; x = ±a, if a > 0.
Similarly, when a ≥ 0,
|x| ≤ a ⇐⇒ −a ≤ x ≤ a;   |x| ≥ a ⇐⇒ x ≤ −a ∨ x ≥ a.
(Figure: the graph of y = |x|.)
Example 1.3.4. Solve
||x| + 1| ≤ √(x + 2).
Sol. — The domain of existence of the inequality is {x ∈ R : x + 2 ≥ 0} = [−2, +∞[. Both members of the inequality being positive,
we can square:
||x| + 1| ≤ √(x + 2) ⇐⇒ ||x| + 1|² ≤ x + 2 ⇐⇒ (|x| + 1)² ≤ x + 2 ⇐⇒ |x|² + 2|x| + 1 ≤ x + 2
⇐⇒ x² + x − 1 ≤ 0 if x ≥ 0;   x² − 3x − 1 ≤ 0 if x < 0.
Now
as x ≥ 0, x² + x − 1 ≤ 0 ⇐⇒ (−1 − √5)/2 ≤ x ≤ (−1 + √5)/2,
and because x ≥ 0 it follows x ∈ [0, (−1 + √5)/2]. Moreover,
as −2 ≤ x < 0, x² − 3x − 1 ≤ 0 ⇐⇒ (3 − √13)/2 ≤ x ≤ (3 + √13)/2,
that is, being x < 0, −2 < (3 − √13)/2 ≤ x < 0. So, the solutions are [(3 − √13)/2, (−1 + √5)/2].
1.4 Information on the structure of R
As we said, R arises as an "extension" of the rationals. There are two qualitative questions we want to answer here. The first is:
how is Q placed with respect to R? In other words: are there intervals of R without any rational?
Theorem 1.4.1 (density of rationals).
∀[a, b] ⊂ R, ∃q ∈ Q : q ∈ [a, b].
Proof. — By the Archimedean property there exists an integer n such that n ≥ 1/(b − a), so
∃n ∈ N : 1/n ≤ b − a.
To proceed in the argument we assume a ≥ 0 (a slight adjustment is needed for a < 0). We will show that
∃m ∈ N : a ≤ m/n ≤ b.
Indeed: consider the numbers
0, 1/n, 2/n, 3/n, . . . , m/n, . . .
Sooner or later m/n ≥ a. Indeed, if this were false, that is if m/n ≤ a for every m, then m ≤ na for any m ∈ N and this would again
contradict the Archimedean property. Let's take the smallest m such that m/n ≥ a. Then
(m − 1)/n < a ≤ m/n.
We claim that this m is such that a ≤ m/n ≤ b. Indeed
m/n = (m − 1 + 1)/n = (m − 1)/n + 1/n < a + 1/n ≤ a + (b − a) = b.
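The proof is constructive, so it can be turned into a tiny program: pick n with 1/n ≤ b − a, then take the smallest m with m/n ≥ a. A sketch (an added illustration, using fractions.Fraction for exact rational arithmetic):

```python
from fractions import Fraction
import math

def rational_in(a, b):
    # Following the proof: choose n with 1/n <= b - a (Archimedean property),
    # then take the smallest m with m/n >= a; the proof shows a <= m/n <= b.
    assert a < b and a >= 0          # as in the text, assume a >= 0
    n = math.ceil(1 / (b - a))
    m = math.ceil(a * n)
    q = Fraction(m, n)
    assert a <= q <= b
    return q

print(rational_in(0.333, 0.334))             # some rational in [0.333, 0.334]
print(rational_in(math.pi, math.pi + 1e-6))  # a rational very close to pi
```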
Definition 1.4.2. We say that S ⊂ R is dense in R if
∀[a, b] ⊂ R, [a, b] ∩ S ≠ ∅.
We can now restate the previous Theorem as follows: Q is dense in R. In much the same way it is possible to show that
Theorem 1.4.3 (density of irrationals). R\Q is dense in R.
Proof. — Omitted (exercise: same idea as the previous proof but with a little trick. . . ).
1.5 Factorial and binomial coefficients
Consider the following problem: find a general formula for the binomial (a + b)^n. We have
(a + b)^n = (a + b)(a + b) · · · (a + b).
To compute the product we choose from each factor either a or b, obtaining something like a^k b^h with k + h = n, that is
h = n − k. We should do this in all the possible ways of choosing a or b from each factor, then sum up all the products.
So we arrive at something like
(a + b)^n = Σ_{k=0}^{n} c_{k,n} a^k b^{n−k},
where
c_{k,n} := number of ways of choosing a, k times among the n factors.
But how to compute c_{k,n}? Notice first that if we choose the first a among n factors, for the second a we have
n − 1 factors available, for the third n − 2 factors, and so on. So we may choose a, k times among n factors, in
n(n − 1)(n − 2) · · · (n − k + 1)
ways. Actually this over counts the choices, because it counts as different two choices done in a different order. So
c_{k,n} = n(n − 1)(n − 2) · · · (n − k + 1) / (number of possible orderings of the k choices).
This last number is easy to compute. If we have k objects (it doesn't matter whether they are choices or other things), the possible
orderings are
k(k − 1)(k − 2) · · · 3 · 2 · 1 =: k!,
where k! is called the factorial of the number k. It is useful to set
0! := 1.
Then
c_{k,n} = n(n − 1) · · · (n − k + 1) / k!,
and writing
n(n − 1) · · · (n − k + 1) = n(n − 1) · · · (n − k + 1)(n − k)(n − k − 1) · · · 3 · 2 · 1 / ((n − k)(n − k − 1) · · · 3 · 2 · 1) = n! / (n − k)!,
we get
c_{k,n} = n! / (k!(n − k)!) =: \binom{n}{k}.
For obvious reasons this quantity is called the binomial coefficient, and we have proved the formula
(a + b)^n = Σ_{k=0}^{n} \binom{n}{k} a^{n−k} b^k.   (1.5.1)
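Formula (1.5.1) can be tested numerically: Python's math.comb(n, k) computes exactly n!/(k!(n − k)!). A sketch (an added illustration):

```python
import math

def binomial_expand(a, b, n):
    # right-hand side of (1.5.1): sum over k of C(n, k) a^(n-k) b^k
    return sum(math.comb(n, k) * a**(n - k) * b**k for k in range(n + 1))

a, b, n = 3, 5, 7
assert binomial_expand(a, b, n) == (a + b) ** n
# C(7, 3) agrees with the factorial formula:
print(math.comb(7, 3),
      math.factorial(7) // (math.factorial(3) * math.factorial(4)))  # 35 35
```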
1.6 Exercises
Exercise 1.6.1 (souvenir de jeunesse. . . ). Solve the following inequalities.
1. √(x² + x − 1) > 1.
2. √(x + 2) − √(x − 1) < 1.
3. √(|x| + 1) > x − 1.
4. |x(x − 1)| ≤ x − 1.
5. √(x² + 1) > |x| − 2x.
6. √(x + 1) + x + 2 > 1.
7. √(x² + 3x + 2) ≤ x + 1.
8. 2^{x²+2x} > 4.
9. 2^{2x} + 2^{x−1} > 3.
10. 1/2^x − 2^x > 1.
11. 2/2^x + 2^x > 1.
12. 2^x + 2^{−x} ≤ 2.
13. 2^x − 2^{−x} > 2.
14. log₂(x²) + log₂(2x) > 0.
15. (x − 1)·2^x > 0.
16. 2^{|x−1|} < 4^{−x}.
17. log₁₀(|x| − 1)/x < 0.
18. log₂(2^{2x} − 2^x + 1) > 0.
19. 2^x + 2^{−x} ≤ 5/2.
20. 2^{3+x²} ≤ 2^{4x}.
21. log₂ |x + 1| + log₂ |x − 3| < 1.
Exercise 1.6.2. Find the domain of the following functions.
1. √(log₂ x + 2/3).
2. √(2^{|x−3|} − 8) + √(3^{x²+x+2} − 9).
3. log₁₀(2 − 1/8^x).
4. log₂(2 − 3/(1 + log₂(x − 2))) + √(log₂(x − 2)).
5. √(2^{(x²−4)/(2x²−5x+3)}).
6. log₄ √((x + 1)/(x − 1)).
Exercise 1.6.3.
1. Let S := { n/√(n² + 1) : n ∈ N, n ≥ 1 }; show that min S = 1/√2 and sup S = 1. What about max?
2. Let S := { n/√(n² − 1) : n ∈ Z, n ≤ −2 }; show that min S = −2/√3 and sup S = −1. What about max?
3. Let S := { n − √(n − 3) : n ∈ N, n ≥ 3 }; show that min S = 3 and sup S = +∞.
4. Let S := { 3 + √2/(√(n² + 7n) − √2) : n ∈ N, n > 0 }; show that inf S = 3 and max S = 4. What about min?
5. Let S := { −n + √(n − 2) : n ∈ N, n ≥ 2 }; show that max S = −2 and inf S = −∞.
6. Let S := { 2 − √2/(√(n² + 2n) − √2) : n ∈ N, n > 0 }; show that min S = 1 and sup S = 2. What about max?
7. Let S := { √(n + 1)/(√n + 1) : n ∈ N, n ≥ 1 }; show that min S = √2/2 and sup S = 1. What about max?
Exercise 1.6.4. For each of the following sets discuss inf, min, sup, max (answers in brackets, as [inf, min, sup, max]; ∄ means the value doesn't exist).
1. { (n − 1)/n : n ∈ N\{0} }   [0, 0, 1, ∄]
2. { (n + √n)/(n − √n) : n ∈ N, n > 1 }   [1, ∄, (2 + √2)/(2 − √2), (2 + √2)/(2 − √2)]
3. { √(1 + n/(n + 1)) : n ∈ N }   [1, 1, √2, ∄]
4. { (2n² + 1)/(2n² + 2n + 1) : n ∈ N\{0} }   [3/5, 3/5, 1, ∄]
5. { (n² + 3n + 4)/(n² + 3n + 3) : n ∈ N }   [1, ∄, 4/3, 4/3]
6. { (√n − 1)/(√n + 1) : n ∈ N, n ≥ 1 }   [0, 0, 1, ∄]
7. { √(log₂(1 + n/(n + 1))) : n ∈ N }   [0, 0, 1, ∄]
8. { (1 + 3^{2n})/3^n : n ∈ N }   [2, 2, +∞, ∄]
9. { (2n/(n + 1))^{1/n} : n ∈ N\{0} }   [1, 1, 2/√3, 2/√3]
10. { log_{1/2} √((n + 1)/n) : n ∈ N\{0} }   [−1/2, −1/2, 0, ∄]
11. { log₂ n/√(1 + (log₂ n)²) : n ∈ N\{0} }   [0, 0, 1, ∄]
12. { (4^n − 1)/(4^n + 2^n + 1) : n ∈ Z }   [−1, ∄, 1, ∄]
13. { n/(n + 1) : n ∈ N }   [0, 0, 1, ∄]
14. { (−1)^n n²/(n + 1) : n ∈ N }   [−∞, ∄, +∞, ∄]
15. { 2^{(−1)^n} sin(nπ/3) : n ∈ N }   [−√3, −√3, √3, √3]
16. { (n/(n + 1)) cos(nπ/2) : n ∈ N }   [−1, ∄, 1, ∄]
Chapter 2
Sequences
The concept of sequence arises in a natural way in the mathematical modeling of real world systems. A mathematical model is
a suitably simplified idealization of a real system described through the tools of Mathematics. Math allows a quantitative
treatment as well as the introduction of qualitative concepts useful to get a picture of the behavior of the modeled system under
certain circumstances. Among the qualitative concepts, the concept of limit deserves a special place and it will be the main
goal of this Chapter.
2.1 Mathematical models
2.1.1 The Malthus model
The Malthus model is a simplified model for the study of the evolution of a biological population reproducing itself with a
rate r. Let's denote by x_n the population at the n−th generation (here n ∈ N). Then
x_{n+1} = x_n + r x_n = (1 + r) x_n, ∀n ∈ N.
This is called the evolution equation. We can easily find the explicit dependence of x_n on n. Indeed
x_n = (1 + r) x_{n−1} = (1 + r)(1 + r) x_{n−2} = (1 + r)² x_{n−2} = (1 + r)³ x_{n−3} = . . . = (1 + r)^n x_0,
where x_0 > 0 is the initial population, which we may assume known. Intuitively it is clear that if r > 0 the population
increases without any upper bound, and it would be natural to write x_n −→ +∞. If r = 0 the population remains constant,
that is x_n ≡ x_0, while if r < 0 (and in this case r ≥ −1, otherwise we would get nonsense in our model, like x_n < 0, being
x_{n+1} = (1 + r) x_n) the population tends to extinction, and we would say x_n −→ 0. But how do we translate these
statements rigorously? For instance, in the first case we could say that x_n becomes arbitrarily big when n is big enough. Indeed,
since r > 0 (so 1 + r > 1),
x_n ≥ K ⇐⇒ (1 + r)^n x_0 ≥ K ⇐⇒ (1 + r)^n ≥ K/x_0 ⇐⇒ n ≥ log_{1+r}(K/x_0) =: N_0(K).
Therefore, no matter how big K is, we find an initial time N_0(K) such that x_n ≥ K for all the generations after N_0(K), that
is for any n ≥ N_0(K). Similarly we could say that x_n −→ 0 if x_n becomes arbitrarily small when n is big enough. Indeed, if
now −1 ≤ r < 0 (so 1 + r < 1), we have
0 ≤ x_n ≤ ε ⇐⇒ (1 + r)^n x_0 ≤ ε ⇐⇒ (1 + r)^n ≤ ε/x_0 ⇐⇒ n ≥ log_{1+r}(ε/x_0) =: N_0(ε),
which is similar to the previous one: 0 ≤ x_n ≤ ε for any n ≥ N_0(ε).
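The threshold N_0(K) is easy to observe in a simulation; here is a minimal sketch (an added illustration, with arbitrary parameter values):

```python
import math

def malthus(x0, r, n):
    # x_n = (1 + r)^n x_0, the explicit solution of x_{n+1} = (1 + r) x_n
    return (1 + r) ** n * x0

x0, r, K = 100.0, 0.05, 1_000.0
N0 = math.ceil(math.log(K / x0, 1 + r))   # N_0(K) = log_{1+r}(K / x_0)
print(N0)                                  # 48 with these values
assert malthus(x0, r, N0) >= K             # x_n >= K from n = N_0(K) on
assert malthus(x0, r, N0 - 1) < K          # and not one generation earlier
```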
2.1.2 Fibonacci numbers
This model describes the evolution of a (very idealized) population of rabbits reproducing according to the following rule:
every couple, beginning with the second month of life, generates a new couple of rabbits (male and female) each month.
Let's call x_n the number of couples after n months, starting with x_0 = 1. At the first month we still have the first couple,
so x_1 = 1, but at the second month the first couple generates a second couple, so x_2 = 2. At the third month we have the
previous couples plus a new one generated by the first couple again, so x_3 = 3. At the fourth month also the second couple
starts to reproduce, so we have to add 2 new couples, and x_4 = 5. And so on. The general rule is
x_{n+1} = x_n + x_{n−1}.
Also this model is explicitly "solvable". We agree that x_n increases very fast. Assume for a moment that this growth is of
exponential type, that is x_n = a^n, and let's look for a possible value of a. We have
x_{n+1} = x_n + x_{n−1} ⇐⇒ a^{n+1} = a^n + a^{n−1} ⇐⇒ a² = a + 1 ⇐⇒ a² − a − 1 = 0 ⇐⇒ a = (1 ± √5)/2.
In other words, both
x_n = ((1 + √5)/2)^n and x̃_n = ((1 − √5)/2)^n
satisfy the recurrence. Notice that both assume the value 1 as n = 0, but neither has value 1 as n = 1. How to do then? Notice that any
linear combination of the two sequences is a solution, that is
y_n := c₁ x_n + c₂ x̃_n = c₁ ((1 + √5)/2)^n + c₂ ((1 − √5)/2)^n
is such that y_{n+1} = y_n + y_{n−1}. This is easy to check. Now we determine c₁, c₂ in such a way that y_0 = 1 and y_1 = 1.
We have
c₁ + c₂ = 1 and c₁ (1 + √5)/2 + c₂ (1 − √5)/2 = 1 ⇐⇒ c₁ + c₂ = 1 and √5 (c₁ − c₂) = 1 ⇐⇒ c₁ = (1/2)(1 + 1/√5), c₂ = (1/2)(1 − 1/√5),
therefore
y_n = (1/2)(1 + 1/√5) ((1 + √5)/2)^n + (1/2)(1 − 1/√5) ((1 − √5)/2)^n.
As n −→ ∞ it is clear that, being (1 + √5)/2 > 1, we have ((1 + √5)/2)^n −→ +∞, while, being −1 ≤ (1 − √5)/2 ≤ 0, intuitively
((1 − √5)/2)^n −→ 0; hence y_n −→ +∞. By this we would say that
y_n ≈ (1/2)(1 + 1/√5) ((1 + √5)/2)^n,
which gives a precise idea of the growth: exponential with base (1 + √5)/2.
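The recurrence and the closed formula for y_n are easy to compare numerically; a sketch (an added illustration):

```python
import math

def fib_recurrence(n):
    # x_0 = x_1 = 1, x_{n+1} = x_n + x_{n-1}
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def fib_closed(n):
    # y_n = (1/2)(1 + 1/sqrt5) phi^n + (1/2)(1 - 1/sqrt5) psi^n
    s5 = math.sqrt(5)
    phi, psi = (1 + s5) / 2, (1 - s5) / 2
    return 0.5 * (1 + 1 / s5) * phi**n + 0.5 * (1 - 1 / s5) * psi**n

for n in range(20):
    assert fib_recurrence(n) == round(fib_closed(n))
print(fib_recurrence(10))   # 89
```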
2.1.3 Interest rates
A compound interest rate r is paid in the following way: fixed a time interval, the rate r/n is paid in each of the n parts into which the entire period is divided. For instance: a semestral rate r is paid 2 times, in the measure of r/2 each time. If s is the initial sum, after the first sub-period we will have the sum

s + (r/n) s = (1 + r/n) s.

After a second sub-period we will have

(1 + r/n) s + (r/n)(1 + r/n) s = (1 + r/n)(1 + r/n) s = (1 + r/n)² s.

You will easily see that after another sub-period we will have the sum (1 + r/n)³ s, and so on. After n sub-periods, therefore at the end of the entire period, we will have the sum

x_n := (1 + r/n)^n s.

A first natural question is: what number n of sub-periods maximizes the profit? Clearly everything depends on (1 + r/n)^n, so we may assume s = 1. We notice immediately that

x_2 = (1 + r/2)² = 1 + 2(r/2) + r²/4 = 1 + r + r²/4 > 1 + r = x_1.

Similarly

x_3 = (1 + r/3)³ = 1 + 3(r/3) + 3(r²/9) + r³/27 = 1 + r + r²/3 + r³/27 > 1 + r + r²/4 = x_2.

This leads immediately to the conjecture that x_{n+1} > x_n for any n. This is actually true, but it is not easy to prove in general. At this point, given that (1 + r/n)^n ↗ as n ↗, how big can this quantity become? Does it increase to +∞ or to some finite limit ℓ? It is clear that in the first case it would mean an infinite profit, and this sounds wrong. . . We will see that there exists a special number, e ∈ ]2, 3[, such that

(1 + r/n)^n −→ e^r, as n −→ +∞.

The number e is one of the most important constants of Mathematical Analysis: the Napier number.
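Numerically the convergence is easy to watch; a minimal sketch (r = 0.05 is an illustrative rate, not from the text):

    from math import exp

    def compound(r, n):
        """Value of 1 unit after one period, rate r paid in n sub-periods."""
        return (1 + r / n) ** n

    r = 0.05
    for n in (1, 2, 4, 12, 365, 10**6):
        print(n, compound(r, n))
    print("e^r =", exp(r))   # the values above increase toward e^r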
2.2
Limit: the concept
We start by formalizing the concept of sequence:
Definition 2.2.1 (sequence). A numerical sequence (shortly, a sequence) is a function a : N −→ R. We will denote it by
the symbol (an ) ⊂ R, where an := a(n), n ∈ N is called element of the sequence, n is called index.
A visual representation of a sequence is simply given by the plot of the graph of the function n ↦ a_n.
Figure 2.1: Graph of the sequence a_n = (n + (−1)^n)/(n + 1).

2.2.1
Finite limit
We want to make precise the idea a_n −→ ℓ ∈ ℝ as n becomes bigger and bigger. The idea is simple: the distance between a_n and ℓ must become smaller as n gets bigger.

Definition 2.2.2. Let (a_n) ⊂ ℝ. We say that a_n −→ ℓ ∈ ℝ (we read: (a_n) tends to ℓ as n tends to +∞) if

∀ε > 0, ∃N(ε) ∈ ℕ : |a_n − ℓ| ≤ ε, ∀n ≥ N(ε).   (2.2.1)

We also write lim_{n→+∞} a_n := ℓ.

The (2.2.1) has a precise geometrical meaning. Writing it as

∀ε > 0, ∃N(ε) ∈ ℕ : ℓ − ε ≤ a_n ≤ ℓ + ε, ∀n ≥ N(ε),

we read: for any fixed ε > 0 (intuitively "small") we find an initial index (think of it as a time) N(ε) such that, starting from the index N(ε), the point (n, a_n) lies in the strip between the heights ℓ − ε and ℓ + ε. As the picture suggests, as ε gets smaller, N(ε) gets bigger.
[Figure: the strip between ℓ − ε and ℓ + ε; the points (n, a_n) stay inside it from the index N(ε) on.]
Example 2.2.3. Show that

1/n −→ 0.

Sol. — It is clear that as n gets bigger, 1/n gets smaller and closer to 0. To show that (2.2.1) holds, we have to fix ε > 0 and find N(ε) such that

|1/n − 0| ≤ ε, ∀n ≥ N(ε).

Now,

|1/n − 0| ≤ ε ⇐⇒ 1/n ≤ ε ⇐⇒ n ≥ 1/ε =: N(ε).

Of course N(ε) = 1/ε ∉ ℕ in general, but with N(ε) = ⌊1/ε⌋ + 1 we are done.
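The computation of N(ε) in Example 2.2.3 is easily turned into code; a minimal sketch (the helper N below is ours, introduced for illustration):

    import math

    def N(eps):
        """Smallest integer index from which |1/n - 0| <= eps, i.e. ceil(1/eps)."""
        return math.ceil(1 / eps)

    for eps in (0.1, 0.01, 0.001):
        n0 = N(eps)
        # every n >= N(eps) satisfies the inequality; spot-check a stretch of them
        assert all(abs(1 / n - 0) <= eps for n in range(n0, n0 + 1000))
        print(eps, n0)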
Example 2.2.4. Show that

n/(n+1) −→ 1.

Sol. — Intuitively it is natural: as n gets bigger, n and n+1 are very "similar" numbers, so their ratio should be approximately 1. Let's check (2.2.1), which in this case reads

∀ε > 0, ∃N(ε) : |n/(n+1) − 1| ≤ ε, ∀n ≥ N(ε).

Let's look at the solutions of the inequality |n/(n+1) − 1| ≤ ε. We have

|n/(n+1) − 1| ≤ ε, ⇐⇒ 1 − n/(n+1) ≤ ε, ⇐⇒ (n+1) − n ≤ ε(n+1), ⇐⇒ 1 ≤ ε(n+1), ⇐⇒ n ≥ 1/ε − 1.

But then, if N(ε) := 1/ε − 1 (more precisely, N(ε) := ⌊1/ε − 1⌋ + 1), then (2.2.1) holds.
Example 2.2.5. Show that

n/(n²+1) −→ 0.

Sol. — Let ε > 0. We have to find N(ε) such that

|n/(n²+1) − 0| ≤ ε, ∀n ≥ N(ε).

Let's study the inequality. We have

|n/(n²+1) − 0| ≤ ε, ⇐⇒ n/(n²+1) ≤ ε, ⇐⇒ ε(n²+1) − n ≥ 0, ⇐⇒ εn² − n + ε ≥ 0.

This is a second degree inequality in n (here n ∈ ℕ). Recalling the fundamental facts about second order inequalities, with Δ = 1 − 4ε² we have the following cases:

• Δ < 0 (this happens iff 1 − 4ε² < 0, that is ε > 1/2): the inequality is fulfilled for any n ∈ ℕ. In this case we can take N(ε) = 0.

• Δ ≥ 0 (iff 0 < ε ≤ 1/2): the solutions of the inequality are

n ≤ (1 − √(1−4ε²))/(2ε),  n ≥ (1 + √(1−4ε²))/(2ε).

The first may have no integer solutions, but the second is solved by all integers bigger than N(ε) = (1 + √(1−4ε²))/(2ε) (as usual, one may take N(ε) := ⌊(1 + √(1−4ε²))/(2ε)⌋ + 1).
In any case (that is, for any ε > 0) we find N (ε) such that any n > N (ε) is a solution of the inequality.
Example 2.2.6. Show that

√(n+1) − √n −→ 0.

Sol. — Let ε > 0: we have to find N(ε) such that

|√(n+1) − √n| ≤ ε, ∀n ≥ N(ε).

Noticing that √(n+1) > √n, we have

|√(n+1) − √n| ≤ ε ⇐⇒ √(n+1) − √n ≤ ε, (squaring) ⇐⇒ n + 1 + n − 2√(n(n+1)) ≤ ε² ⇐⇒ 2n + 1 − ε² ≤ 2√(n(n+1)).

To eliminate the root we would like to square. To do it correctly it is necessary that both members have the same sign. To this aim the lhs should be positive (otherwise the inequality is trivially true), and 2n + 1 − ε² ≥ 0 iff n ≥ (ε²−1)/2. For such n we can square, and we get

(2n + (1−ε²))² ≤ 4n(n+1), ⇐⇒ 4(1−ε²)n + (1−ε²)² ≤ 4n, ⇐⇒ 4ε²n ≥ (1−ε²)², ⇐⇒ n ≥ (1−ε²)²/(4ε²).

Therefore, if

N(ε) := max{ (1−ε²)²/(4ε²), (ε²−1)/2 },

then for any n ≥ N(ε) we have |√(n+1) − √n| ≤ ε.
2.2.2
Infinite limit
As we have seen in the introduction, a second important situation is when an gets bigger without any bound:
Definition 2.2.7. Let (an ) ⊂ R. We say that an −→ +∞ if
∀K > 0, ∃N (K ) ∈ N : an > K, ∀n > N (K ).
(2.2.2)
We also write lim_{n→+∞} a_n := +∞. The limit lim_{n→+∞} a_n = −∞ is defined similarly:

∀K < 0, ∃N(K) ∈ ℕ : a_n ≤ K, ∀n ≥ N(K).
Also in this case it is useful to have a picture of what a_n −→ +∞ means: (2.2.2) says that for any fixed K > 0 (intuitively "big and positive") we find an initial time N(K) such that, from then on, the point (n, a_n) has height a_n > K.
Let’s see some examples.
Example 2.2.8. Show that

(n²+1)/(n+1) −→ +∞.

Sol. — Fix K > 0: we have to find N(K) such that

(n²+1)/(n+1) ≥ K, ∀n ≥ N(K).

Studying the inequality (n²+1)/(n+1) ≥ K with n ∈ ℕ, we have

(n²+1)/(n+1) ≥ K, ⇐⇒ (n ∈ ℕ, so n+1 > 0) n² + 1 ≥ K(n+1), ⇐⇒ n² − Kn + (1−K) ≥ 0,

a second degree inequality in n. Let Δ := K² − 4(1−K). We have:

• if Δ < 0 the inequality is fulfilled for every n ∈ ℕ: this means we can take N(K) = 0.

• if Δ ≥ 0, the solutions are

n ≤ (K − √Δ)/2,  n ≥ (K + √Δ)/2.

The first gives at most a finite number of natural solutions. But if we consider the second and set N(K) := (K + √Δ)/2, we see that any n ≥ N(K) is a solution.

In any case we are able to find an initial index N(K).
Example 2.2.9. Show that

n/(1−√n) −→ −∞.

Sol. — We have to show that

∀K < 0, ∃N(K) : n/(1−√n) ≤ K, ∀n ≥ N(K).

For n ≥ 2 (so that 1 − √n < 0),

n/(1−√n) ≤ K, ⇐⇒ n ≥ (1−√n)K, ⇐⇒ n − K ≥ −K√n.

Now: n − K > 0 (because K < 0) and also −K√n > 0. Squaring,

n/(1−√n) ≤ K, ⇐⇒ (n−K)² ≥ K²n, ⇐⇒ n² − 2Kn + K² − K²n ≥ 0, ⇐⇒ n² − (K² + 2K)n + K² ≥ 0.

We meet again a second degree inequality. Setting Δ := (K² + 2K)² − 4K², we have:

• if Δ < 0 every n is a solution; since we assumed n ≥ 2 we can take N(K) := 2;

• if Δ ≥ 0 the solutions of the inequality are

n ≤ ((K² + 2K) − √Δ)/2,  n ≥ ((K² + 2K) + √Δ)/2.

Taking N(K) := max{2, ((K² + 2K) + √Δ)/2}, any n ≥ N(K) is a solution.

So we are always able to find N(K) such that n/(1−√n) ≤ K for any n ≥ N(K).
Here's the behavior of some basic quantities:

n^α −→ +∞ (α > 0), 1 (α = 0), 0 (α < 0);
a^n −→ +∞ (a > 1), 1 (a = 1), 0 (0 < a < 1);
log_b n −→ +∞ (b > 1), −∞ (0 < b < 1).
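These behaviors can be glimpsed numerically; a minimal sketch (α = 1/2, a = 0.9, b = 2 and b = 1/2 are illustrative choices):

    import math

    for n in (10, 100, 1000, 10000):
        # n^alpha grows, a^n with 0 < a < 1 vanishes, log_b n diverges (sign depends on b)
        print(n, n**0.5, 0.9**n, math.log(n, 2), math.log(n, 0.5))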
2.2.3
Non existence
A sequence having a finite limit is called convergent, while if the limit is infinite it is called divergent. There is a third possible situation, because not all sequences are convergent or divergent. Consider the sequence

a_n := (−1)^n, that is +1, −1, +1, −1, . . .
It seems evident that such a sequence cannot have a limit. Instead of proving that both (2.2.1) and (2.2.2) fail, let's introduce a more effective instrument. To this aim we notice that if we look at the sequence only every two years (pretending that the index means years), that is if we consider the new sequence a_0, a_2, a_4, a_6, . . . , a_{2k}, . . . ≡ (a_{2k}), clearly a_{2k} = 1 −→ 1. At the same time, looking at the sequence formed by the elements with odd index, that is a_1, a_3, a_5, . . . , a_{2k+1}, . . . ≡ (a_{2k+1}), we have a_{2k+1} = −1 −→ −1. It seems clear that if the mother sequence a_n −→ ℓ then all its daughters a_{2k}, a_{2k+1} −→ ℓ, and if this doesn't happen (as in our case) it is because the mother has no limit. Let's make this circle of ideas precise: we start by formalizing what a daughter is.
Definition 2.2.10. Let (a_n) ⊂ ℝ and let (n_k) ⊂ ℕ be a strictly increasing sequence of indexes (in particular n_k < n_{k+1} for every k ∈ ℕ). The sequence

a_{n_1}, a_{n_2}, a_{n_3}, . . . , a_{n_k}, a_{n_{k+1}}, . . . =: (a_{n_k})

is called subsequence (or daughter) of (a_n). Notation: (a_{n_k}) ⊂ (a_n).

Therefore a subsequence is nothing but a free selection from a sequence, with the unique obligation of respecting the positions (or the "time order") in the sequence. So, for instance, a_4, a_1, a_3, a_8, a_20, a_15, . . . is not a subsequence of (a_n). Here are some examples:

• (a_0, a_2, a_4, . . . , a_{2k}, . . .) ≡ (a_{2k});
• (a_1, a_3, a_5, . . . , a_{2k+1}, . . .) ≡ (a_{2k+1});
• (a_0, a_3, a_6, a_9, a_12, . . . , a_{3k}, . . .) ≡ (a_{3k});
• (a_0, a_1, a_4, a_9, a_16, a_25, . . . , a_{k²}, . . .) ≡ (a_{k²});
• (a_1, a_2, a_4, a_8, a_16, a_32, . . . , a_{2^k}, . . .) ≡ (a_{2^k});
• (a_1, a_2, a_3, a_5, a_7, a_11, . . .) ≡ (a_prime).
As announced:

Proposition 2.2.11.

a_n −→ ℓ ∈ ℝ ∪ {±∞}, ⟹ a_{n_k} −→ ℓ, ∀(a_{n_k}) ⊂ (a_n).

Proof. — Consider the case ℓ ∈ ℝ (leaving ℓ = ±∞ to the reader as an exercise). We have:

a_n −→ ℓ, ⇐⇒ ∀ε > 0, ∃N(ε) : |a_n − ℓ| ≤ ε, ∀n ≥ N(ε).

Now, being (n_k) strictly increasing with n_k ∈ ℕ, it is clear that n_k ≥ k. Therefore

|a_{n_k} − ℓ| ≤ ε, ∀k ≥ N(ε).
Corollary 2.2.12 (non existence criterion). If there exist (a_{n_k}), (a_{m_k}) ⊂ (a_n) such that

a_{n_k} −→ ℓ_1, a_{m_k} −→ ℓ_2, with ℓ_1 ≠ ℓ_2,

then (a_n) cannot have a limit.
Example 2.2.13. The sequence (−1) n doesn’t have a limit. Indeed, a2k = 1 −→ 1 while a2k+1 = −1 −→ −1.
We quote two other sequences without limit, even if it is very hard to prove that this is the case: (sin n), (cos n).
2.3
Fundamental properties
Is it possible for a sequence to have two limits? Of course not.

Theorem 2.3.1. If lim_n a_n exists, it is unique.

Proof. — Suppose that a_n −→ ℓ_1 and a_n −→ ℓ_2 with ℓ_1, ℓ_2 ∈ ℝ ∪ {±∞} and ℓ_1 ≠ ℓ_2.

Case ℓ_1, ℓ_2 ∈ ℝ. The idea is simple: because a_n −→ ℓ_1, sooner or later a_n will be so close to ℓ_1 that it cannot be close to ℓ_2. Precisely: let d := |ℓ_1 − ℓ_2| be the distance between ℓ_1 and ℓ_2 and take ε := d/4. By definition,

∃N_1(ε) : |a_n − ℓ_1| ≤ ε, ∀n ≥ N_1(ε), and ∃N_2(ε) : |a_n − ℓ_2| ≤ ε, ∀n ≥ N_2(ε).

But then, if N(ε) := max{N_1(ε), N_2(ε)}, the two previous properties hold for any n ≥ N(ε), and

|ℓ_1 − ℓ_2| ≤ |ℓ_1 − a_n| + |a_n − ℓ_2| ≤ ε + ε = 2ε = 2·(d/4) = d/2 < d = |ℓ_1 − ℓ_2|,

which is clearly impossible. It follows that ℓ_1 = ℓ_2.

Cases ℓ_1 ∈ ℝ, ℓ_2 = ±∞ and ℓ_1 = −∞, ℓ_2 = +∞: exercise.
Basically, a sequence and its (possible) limit share the same sign.

Proposition 2.3.2. Assume a_n −→ ℓ ∈ ℝ ∪ {±∞}. Then

• If ℓ > 0 (including ℓ = +∞) there exists N ∈ ℕ such that a_n > 0 for any n ≥ N.
• If there exists N ∈ ℕ such that a_n ≥ 0 for any n ≥ N, then ℓ ≥ 0.

Warning! 2.3.3. Despite their similarity, be careful: the two statements are slightly different! If in the second statement we write the strict inequality (that is a_n > 0 for any n ≥ N), the conclusion ℓ > 0 is false: a_n = 1/n > 0 for every n, but a_n −→ 0.

Proof. — First statement: suppose ℓ ∈ ℝ and ℓ > 0 (the case ℓ = +∞ is similar and is left as an exercise). Take ε := ℓ/2 in (2.2.1): there exists then N(ε) =: N such that

ℓ − ε ≤ a_n ≤ ℓ + ε, ∀n ≥ N, ⟹ a_n ≥ ℓ − ε = ℓ − ℓ/2 = ℓ/2 > 0, ∀n ≥ N.

Second statement: suppose, by contradiction, that ℓ < 0. By the first statement it follows that there exists Ñ such that a_n < 0 for every n ≥ Ñ. But this is a contradiction, because for n ≥ max{N, Ñ} we would have a_n < 0 ≤ a_n.
The previous Thm emphasizes the following important concept: given a sequence (a_n) ⊂ ℝ, we say that a certain property p(a_n) is definitively true if

∃N ∈ ℕ : p(a_n), ∀n ≥ N.

In this case we will write, shortly, p(a_n) definitively. For instance, the permanence of sign may be restated in the following form: if a_n −→ ℓ then

• if ℓ > 0 then a_n > 0 definitively;
• if a_n ≥ 0 definitively, then ℓ ≥ 0.
If you are between two policemen you have no alternative but to follow them. This is the sense of the

Theorem 2.3.4 (Two policemen). Let (a_n), (b_n), (c_n) ⊂ ℝ be such that

i) a_n ≤ b_n ≤ c_n definitively;
ii) a_n −→ ℓ, c_n −→ ℓ, ℓ ∈ ℝ.

Then also b_n −→ ℓ.

Proof. — We have to prove that

∀ε > 0, ∃N(ε) : |b_n − ℓ| ≤ ε, ∀n ≥ N(ε), ⇐⇒ ℓ − ε ≤ b_n ≤ ℓ + ε, ∀n ≥ N(ε).

By assumption

a_n −→ ℓ, ⟹ ∃N_1(ε) : ℓ − ε ≤ a_n ≤ ℓ + ε, ∀n ≥ N_1(ε),
c_n −→ ℓ, ⟹ ∃N_2(ε) : ℓ − ε ≤ c_n ≤ ℓ + ε, ∀n ≥ N_2(ε).

Moreover, by i) there exists N such that a_n ≤ b_n ≤ c_n for any n ≥ N. Then, if N(ε) := max{N_1(ε), N_2(ε), N}, the previous properties hold for any n ≥ N(ε). We deduce that

ℓ − ε ≤ a_n ≤ b_n ≤ c_n ≤ ℓ + ε, ∀n ≥ N(ε).
Example 2.3.5. Show that

(−1)^n/n −→ 0.

Sol. — Indeed, being −1 ≤ (−1)^n ≤ 1 for any n ∈ ℕ, we have

−1/n ≤ (−1)^n/n ≤ 1/n, ∀n ≥ 1.

Now: −1/n and 1/n are the two policemen going to 0. Therefore (−1)^n/n −→ 0.
This example suggests a general rule. Let's first introduce the following
Definition 2.3.6. We say that (an ) ⊂ R is
• bounded if there exists M such that |a_n| ≤ M for any n ∈ ℕ.
• null if an −→ 0.
Corollary 2.3.7 (bounded×null=null). If (an ) is bounded and (bn ) is null, then (an bn ) is null.
Proof. — Exercise.
Example 2.3.8. Compute

lim_{n→+∞} (sin n)/n.

Sol. — Write (sin n)/n = (sin n)·(1/n): because (sin n) is bounded (being |sin n| ≤ 1 for any n ∈ ℕ) and 1/n is null, their product is null; therefore the limit is 0.
By evident modifications we have also the
Theorem 2.3.9. Suppose that
i) a_n ≤ b_n definitively;
ii) an −→ +∞.
Then also bn −→ +∞.
Proof. — Exercise.
Example 2.3.10. Show that

lim_{n→+∞} (n + sin n) = +∞.

Sol. — It is enough to notice that n + sin n ≥ n − 1 −→ +∞.
Definition 2.3.11. We say that a sequence (a_n) is increasing (notation: a_n ↗) if

a_n ≤ a_{n+1}, ∀n ∈ ℕ.

Decreasing sequences (a_n ↘) are defined similarly. An increasing or decreasing sequence is also called monotone.
It is intuitive: any monotone sequence has a limit (possibly infinite). This is actually one of the consequences of completeness. Here's a precise statement:

Theorem 2.3.12.

If a_n ↗, ⟹ ∃ lim_n a_n = sup{a_n : n ∈ ℕ} ∈ ℝ ∪ {+∞}.
If a_n ↘, ⟹ ∃ lim_n a_n = inf{a_n : n ∈ ℕ} ∈ ℝ ∪ {−∞}.

Proof. — Consider the case a_n ↗ and let

ℓ := sup{a_n : n ∈ ℕ}.

There are two possibilities: i) ℓ ∈ ℝ, ii) ℓ = +∞.

i) ℓ ∈ ℝ. Let ε > 0. By the characteristic properties of the sup, there exists N = N(ε) ∈ ℕ such that

ℓ − ε ≤ a_N ≤ ℓ.

Being a_n ↗ we have

ℓ − ε ≤ a_N ≤ a_n ≤ ℓ, ∀n ≥ N, ⟹ |a_n − ℓ| ≤ ε, ∀n ≥ N,

and this is nothing but the definition of a_n −→ ℓ.

ii) ℓ = +∞. In this case {a_n : n ∈ ℕ} is unbounded above. Therefore, for any K > 0, we find an N = N(K) such that

a_N ≥ K.

Being a_n ↗ we have

a_n ≥ a_N ≥ K, ∀n ≥ N,

and this is nothing but the definition of a_n −→ +∞.
2.4
The Bolzano–Weierstrass theorem
In this subsection we lay a first important brick for future developments, a very important soft Analysis tool: the Bolzano–Weierstrass Thm. Basically it says that any bounded sequence has at least one convergent subsequence. This is evident in the case of the sequence (−1)^n, but in general it is not evident at all (think of the sequence (sin n)).
Proposition 2.4.1. Any convergent sequence is bounded.
Proof. — Exercise.
Theorem 2.4.2 (Bolzano–Weierstrass). Any bounded sequence has a convergent subsequence.
Proof. — The idea is easy: consider the "trace" left by the sequence on R, that is the set
S := {an : n ∈ N}.
For instance: if an ≡ ξ (constant sequence), S = {ξ}; if an = (−1) n then S = {−1, +1}. And so on. Because (an ) is bounded, by
assumption, we can say that
∃ α_0, β_0 ∈ ℝ : α_0 ≤ a_n ≤ β_0, ∀n ∈ ℕ, ⟹ S ⊂ [α_0, β_0].
Now the point is the following:

• if S is finite: at least one of its elements must be equal to a_n for infinitely many n (otherwise, if every s ∈ S were equal to a_n only for a finite number of n, there would be only finitely many indexes n in total, which is absurd). In this case there exists (a_{n_k}) ⊂ (a_n) with a_{n_k} ≡ s, hence a_{n_k} −→ s, and we have the thesis.

• if S is infinite we construct the subsequence in the following way. First: divide [α_0, β_0] into two halves. At least one of them must contain infinitely many elements of S (otherwise S would be finite). Let's call

[α_1, β_1] ⊂ [α_0, β_0] the half such that [α_1, β_1] ∩ S =: S_1 is infinite,

and pick n_1 such that a_{n_1} ∈ [α_1, β_1] ∩ S ≡ S_1. Now repeat the argument with S_1: divide [α_1, β_1] into two halves; because S_1 is infinite, at least one of the two halves must contain infinitely many points of S_1. Let's call

[α_2, β_2] ⊂ [α_1, β_1] the half such that [α_2, β_2] ∩ S_1 =: S_2 is infinite.

It is clear that we can find a_{n_2} ∈ [α_2, β_2] ∩ S_1 =: S_2 with n_2 > n_1. Iterating this procedure we get:

– intervals [α_k, β_k], each one being one half of the previous [α_{k−1}, β_{k−1}];
– elements a_{n_k} ∈ [α_k, β_k] with n_k > n_{k−1}.

Then (a_{n_k}) ⊂ (a_n). We claim that (a_{n_k}) converges. To this aim notice that, by construction,

α_k ≤ a_{n_k} ≤ β_k.

Moreover α_k ↗ while β_k ↘: being monotone sequences, α_k −→ α = sup_k α_k and β_k −→ β = inf_k β_k, and because

0 ≤ β − α ≤ β_k − α_k ≤ (β_0 − α_0)/2^k −→ 0, ⟹ α = β.

But then, by the two policemen theorem, it follows that a_{n_k} −→ α.
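The bisection in this proof is effectively an algorithm. Below is a minimal sketch in Python, specialized to the bounded sequence a_n = sin n. One caveat: the code greedily keeps the half containing the next available term, which is justified for sin n because every subinterval of [−1, 1] contains sin n for infinitely many n (an equidistribution fact not proved in these notes); in general one must keep the half containing infinitely many terms, as in the proof.

    import math

    def bw_subsequence(a, steps=15, alpha=-1.0, beta=1.0):
        """Bisection construction of a convergent subsequence a(n_1), a(n_2), ...
        Returns the chosen indices and the common limit of alpha_k, beta_k."""
        indices, last = [], 0
        for _ in range(steps):
            mid = (alpha + beta) / 2
            m = last + 1
            while not (alpha <= a(m) <= beta):   # next term inside the interval
                m += 1
            if a(m) <= mid:                      # keep the half containing a(m)
                beta = mid
            else:
                alpha = mid
            indices.append(m)
            last = m
        return indices, (alpha + beta) / 2

    idx, lim = bw_subsequence(math.sin)
    print(idx[:6], lim)   # sin(n_k) clusters around lim along these indices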
2.5
Rules of calculus
We need some rules to compute limits efficiently. These rules exist and are quite intuitive. However, there are cases that cannot be covered by rules, the so called indeterminate forms. These will be the main problem in the treatment of limits.
Proposition 2.5.1. Suppose that

a_n −→ ℓ_1 ∈ ℝ, b_n −→ ℓ_2 ∈ ℝ.

Then

i) a_n ± b_n −→ ℓ_1 ± ℓ_2;
ii) a_n · b_n −→ ℓ_1 · ℓ_2;
iii) if ℓ_2 ≠ 0, a_n/b_n −→ ℓ_1/ℓ_2.

Proof. — Let's prove only i). We have to prove that

∀ε > 0, ∃N(ε) : |(a_n + b_n) − (ℓ_1 + ℓ_2)| ≤ ε, ∀n ≥ N(ε).

Now, by the triangle inequality,

|(a_n + b_n) − (ℓ_1 + ℓ_2)| = |(a_n − ℓ_1) + (b_n − ℓ_2)| ≤ |a_n − ℓ_1| + |b_n − ℓ_2|.

Fixed ε > 0 we have

a_n −→ ℓ_1, ⟹ ∃N_1(ε/2) : |a_n − ℓ_1| ≤ ε/2, ∀n ≥ N_1(ε/2),
b_n −→ ℓ_2, ⟹ ∃N_2(ε/2) : |b_n − ℓ_2| ≤ ε/2, ∀n ≥ N_2(ε/2).

Therefore, if

N(ε) := max{N_1(ε/2), N_2(ε/2)}, ⟹ |a_n − ℓ_1| ≤ ε/2, |b_n − ℓ_2| ≤ ε/2, ∀n ≥ N(ε),

so

|(a_n + b_n) − (ℓ_1 + ℓ_2)| ≤ |a_n − ℓ_1| + |b_n − ℓ_2| ≤ ε/2 + ε/2 = ε, ∀n ≥ N(ε).

Example 2.5.2. Compute

lim_{n→+∞} (√n − 1)/(√n + 1).

Sol. — We have

(√n − 1)/(√n + 1) = [√n (1 − 1/√n)]/[√n (1 + 1/√n)] = (1 − 1/√n)/(1 + 1/√n).

Now, clearly

1 − 1/√n −→ 1, 1 + 1/√n −→ 1, ⟹ (1 − 1/√n)/(1 + 1/√n) −→ 1/1 = 1.
2.5.1
Infinite limits
Treating infinite limits is much more delicate. However some intuitive facts still hold.
Proposition 2.5.3. Suppose

a_n −→ ℓ_1, b_n −→ ℓ_2.

Then

i) if ℓ_1 = ±∞ and ℓ_2 ∈ ℝ, then a_n + b_n −→ ±∞ (same sign as ℓ_1);
ii) if ℓ_1 = ℓ_2 = ±∞ (same sign), then a_n + b_n −→ ±∞ (with the common sign of ℓ_1 and ℓ_2).

Proof. — Exercise.

We use the following notation to summarize the content of the previous proposition:

(±∞) + ℓ = ±∞ (ℓ ∈ ℝ), (+∞) + (+∞) = +∞, (−∞) + (−∞) = −∞.
Warning! 2.5.4 (Important!). In the case ℓ_1 = +∞ and ℓ_2 = −∞ nothing can be said in general. One can find examples showing very different behaviors that cannot be described by an a priori rule. For this reason we say that (+∞) + (−∞) (or (+∞) − (+∞)) is an indeterminate form. To understand better, let's see some examples:

• a_n = n² −→ +∞, b_n = −n −→ −∞. Then a_n + b_n = n² − n. It is however easy to show that n² − n −→ +∞. For instance: n² ≥ 2n as n ≥ 2, so

n² − n ≥ 2n − n = n −→ +∞.

• a_n = n −→ +∞, b_n = −n² −→ −∞. Here a_n + b_n = n − n² −→ −∞.

• a_n = n + 1 −→ +∞, b_n = −n −→ −∞. Here a_n + b_n = (n + 1) − n = 1 −→ 1.

• a_n = n + (−1)^n ≥ n − 1 −→ +∞, b_n = −n −→ −∞. Here

a_n + b_n = (n + (−1)^n) − n = (−1)^n,

and the limit doesn't exist.
Similarly:

Proposition 2.5.5. Suppose

a_n −→ ℓ_1, b_n −→ ℓ_2.

Then

i) if ℓ_1 = ±∞ and ℓ_2 ∈ ℝ\{0}, then a_n b_n −→ sgn(ℓ_2)(±∞);
ii) if ℓ_1, ℓ_2 ∈ {±∞}, then a_n b_n −→ sgn(ℓ_1) sgn(ℓ_2) ∞.

Proof. — Exercise.

Like for the sum, we use the shortened notations

(+∞)·ℓ = sgn(ℓ)∞ (ℓ ≠ 0), (−∞)·ℓ = −sgn(ℓ)∞ (ℓ ≠ 0), (+∞)·(+∞) = +∞, (+∞)·(−∞) = −∞, (−∞)·(−∞) = +∞.
Warning! 2.5.6. The indeterminate form for the product is ±∞ · 0. Here are some examples:

• a_n = n −→ +∞, b_n = 1/n −→ 0: a_n b_n = n·(1/n) = 1 −→ 1;
• a_n = n² −→ +∞, b_n = 1/n −→ 0: a_n b_n = n²·(1/n) = n −→ +∞;
• a_n = n −→ +∞, b_n = (−1)^n/n −→ 0: a_n b_n = n·((−1)^n/n) = (−1)^n, which doesn't have a limit.
Finally, for the ratio we have the

Proposition 2.5.7. Suppose

a_n −→ ℓ_1, b_n −→ ℓ_2.

Then

i) if ℓ_1 ∈ ℝ and ℓ_2 ∈ {±∞}, then a_n/b_n −→ 0;
ii) if ℓ_1 ∈ {±∞} and ℓ_2 ∈ ℝ\{0}, then a_n/b_n −→ sgn(ℓ_1) sgn(ℓ_2) ∞.

Proof. — Exercise.

The mnemonic forms are

ℓ/(±∞) = 0 (ℓ ∈ ℝ), (+∞)/ℓ = sgn(ℓ)∞ (ℓ ≠ 0), (−∞)/ℓ = −sgn(ℓ)∞ (ℓ ≠ 0).

The case ∞/0 is not an indeterminate form if we know the "sign" of 0. To this aim, let's introduce the
Definition 2.5.8. We say that a_n −→ ℓ+ (respectively ℓ−) if a_n −→ ℓ and a_n ≥ ℓ (respectively a_n ≤ ℓ) definitively.

Then

(+∞)/0+ = (−∞)/0− = +∞, (+∞)/0− = (−∞)/0+ = −∞.

Warning! 2.5.9. The indeterminate forms for the ratio are 0/0 and ∞/∞. Summarizing, the indeterminate forms are

(±∞) + (∓∞) (opposite signs), (±∞)·0, 0/0, (±∞)/(±∞).

Let's see some examples of application of the previous rules.
Example 2.5.10. Compute

lim_{n→+∞} (n³ − n² + 2)/(n³ − n − 5).

Sol. — At first sight there are several indeterminate forms. At the numerator we have (+∞) − (+∞) + 2 = (+∞) − (+∞). It is however easy to eliminate the problem: clearly, when n is big, n³ should be much bigger than n², driving the numerator to +∞. For a precise argument notice that, writing

n³ − n² + 2 = n³ (1 − 1/n + 2/n³) −→ +∞,

by the rule (+∞)·(1 − 0 + 0) = (+∞)·1 = +∞. Similarly, at the denominator, n³ − n − 5 = n³ (1 − 1/n² − 5/n³) −→ +∞. Done this, we meet another indeterminate form: the fraction appears as +∞/+∞. However, using the previous factorizations,

(n³ − n² + 2)/(n³ − n − 5) = [n³ (1 − 1/n + 2/n³)]/[n³ (1 − 1/n² − 5/n³)] = (1 − 1/n + 2/n³)/(1 − 1/n² − 5/n³) −→ 1.
Example 2.5.11. Compute

lim_{n→+∞} ((n+2)! + (n+1)!)/((n+2)! − (n+1)!).

Sol. — Of course n! −→ +∞ and similarly (n+2)!, (n+1)! −→ +∞. Therefore (n+2)! + (n+1)! −→ +∞, while (n+2)! − (n+1)! is an indeterminate form (+∞) − (+∞). However it seems evident that (n+2)! is much bigger than (n+1)!; noticing that (n+2)! = (n+2)(n+1)!, we have

(n+2)! − (n+1)! = (n+2)! (1 − (n+1)!/(n+2)!) = (n+2)! (1 − 1/(n+2)) −→ +∞,

by the rule (+∞)·1 = +∞. Now we have a form +∞/+∞ but, using the previous trick,

((n+2)! + (n+1)!)/((n+2)! − (n+1)!) = (1 + 1/(n+2))/(1 − 1/(n+2)) −→ 1.
In these first examples an idea emerges: "bigger" terms dominate. But in which sense "bigger"? Notice, for instance, that in a difference of type a_n − b_n we have done the following transformation:

a_n − b_n = a_n (1 − b_n/a_n).

Now: a_n was "bigger" because b_n/a_n was small! This is a crucial idea:

Definition 2.5.12. Given (a_n), (b_n) ⊂ ℝ, a_n, b_n ≠ 0, we say that

• b_n is of lower order w.r.t. a_n (notations: b_n = o(a_n) or, equivalently, b_n ≪ a_n, or again a_n ≫ b_n) if

b_n/a_n −→ 0;

• b_n is asymptotic to a_n (notation: b_n ∼ a_n) if

b_n/a_n −→ 1.

Therefore: if b_n = o(a_n) (that is b_n ≪ a_n),

a_n − b_n = a_n (1 − b_n/a_n)

is no longer an indeterminate form. However, if a_n ∼ b_n, the previous transformation changes the indeterminate form (+∞) − (+∞) into the form (+∞)·0, so it is useless.
Example 2.5.13. Compute

lim_{n→+∞} (√(n+1) − √n).

Sol. — Clearly √(n+1), √n −→ +∞, so we have the indeterminate form (+∞) − (+∞). Notice that, even if √(n+1) > √n, it is not true that √(n+1) ≫ √n. Indeed

√n/√(n+1) = √(n/(n+1)) = √(1 − 1/(n+1)).

It seems evident that, being 1 − 1/(n+1) −→ 1, then √(1 − 1/(n+1)) −→ √1 = 1. This is actually a property of the root and of lots of functions, called continuity. For the moment we don't worry about it and we assume it is true. Therefore

√n/√(n+1) −→ 1, ⇐⇒ √n ∼ √(n+1).

So neither of the two terms is bigger and the factorization is useless. Then? An algebraic trick allows us to proceed:

√(n+1) − √n = (√(n+1) − √n) · (√(n+1) + √n)/(√(n+1) + √n) = ((n+1) − n)/(√(n+1) + √n) = 1/(√(n+1) + √n) −→ 0,

by the rule 1/(+∞) = 0.
In the last example we met the following situation:

a_n −→ ℓ, ⟹ (?) √a_n −→ √ℓ.

Calling f(x) := √x, the previous property reads as follows:

a_n −→ ℓ, ⟹ (?) f(a_n) −→ f(ℓ).

As we will see in a later Chapter, this is one of the fundamental concepts of Analysis: that of a function continuous at some point ℓ. For the moment we accept the validity of the following result, which will be used here and in the next Chapter and proved only later. Of course, we assure the reader that there are no logical problems in anticipating this fact!

Proposition 2.5.14. Elementary functions (powers, exponentials, logarithms, trigonometric functions, modulus) are continuous where defined, that is, the following property holds: if D is the domain of the function f,

if (a_n) ⊂ D, a_n −→ ℓ ∈ D, ⟹ f(a_n) −→ f(ℓ).
2.6
Principal infinities
As we have seen, it is important to compare (in the sense of Definition 2.5.12) different quantities in order to find out the biggest one. Among the fundamental quantities going to +∞ there are exponentials (with base > 1), powers (with exponent > 0) and logarithms (with base b > 1). It is clear that

a^n ≫ b^n, if a > b > 1;   n^α ≫ n^β, if α > β > 0.

Logarithms are not "ordered", because

log_b n = (log_b a)(log_a n),

therefore it is not true that log_b n ≪ log_a n. The comparison among these quantities is not at all trivial, and it is the content of the

Theorem 2.6.1.

a^n ≫ n^α ≫ log_b n, ∀a > 1, α > 0, b > 1.
Proof. — We limit ourselves to proving that a^n ≫ n^α for any α > 0 and a > 1.

First step: case α < 1. The proof is based on the following

Lemma 2.6.2 (Bernoulli inequality).

(1 + h)^n ≥ 1 + nh, ∀h ≥ 0, ∀n ∈ ℕ, n ≥ 1.   (2.6.1)

Proof. — By the Newton binomial formula we have

(1+h)^n = Σ_{k=0}^n \binom{n}{k} h^k 1^{n−k} = \binom{n}{0} h^0 + \binom{n}{1} h^1 + Σ_{k=2}^n \binom{n}{k} h^k = 1 + nh + Σ_{k=2}^n \binom{n}{k} h^k ≥ 1 + nh,

being h ≥ 0, so that Σ_{k=2}^n \binom{n}{k} h^k ≥ 0 (recall that \binom{n}{k} ≥ 0).

Applying the Bernoulli inequality to a^n = (1+h)^n with h = a − 1 > 0 (being a > 1), we have

a^n/n^α ≥ (1 + nh)/n^α = 1/n^α + h n^{1−α} −→ +∞,

because 1 − α > 0.

Second step: case α ≥ 1. We reduce to the case α < 1 by writing

a^n/n^α = ((a^{1/(2α)})^n / n^{1/2})^{2α} =: (β^n / n^{1/2})^{2α},

where β := a^{1/(2α)} > 1, being a > 1. But then β^n ≫ n^{1/2} by the previous case, so

β^n/n^{1/2} −→ +∞, ⟹ a^n/n^α = (β^n/n^{1/2})^{2α} −→ +∞.
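Numerically, the dominations a^n ≫ n^α ≫ log_b n become visible only for large n; here is a small sketch (a = 1.1 and α = 5 are illustrative assumptions, chosen to show that even a base barely above 1 eventually beats a large power):

    import math

    # Both ratios n^alpha / a^n and log(n) / n^alpha should tend to 0.
    a, alpha = 1.1, 5.0
    for n in (10, 100, 1000, 5000):
        print(n, n**alpha / a**n, math.log(n) / n**alpha)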
Example 2.6.3. Compute

lim_{n→+∞} (2^n − n²)/(3^n − n^100 2^n).

Sol. — Consider the numerator first. Being 2^n ≫ n², we have

N := 2^n − n² = 2^n (1 − n²/2^n) = 2^n · 1_n,

where, for shortness, we set 1_n := 1 − n²/2^n −→ 1. At the denominator, apparently n^100 2^n is bigger than 3^n. This is false, because

n^100 2^n / 3^n = n^100/(3/2)^n −→ 0, ⟹ 3^n − n^100 2^n = 3^n (1 − n^100 2^n/3^n) = 3^n · 1_n.

Therefore

N/D = (2^n · 1_n)/(3^n · 1_n) = (2/3)^n · 1_n −→ 0.
Example 2.6.4. Compute, for a > 0,

lim_{n→+∞} (a^n − n² a^{−n} + (−1)^n n⁴)/(a^{2n} + n³).

Sol. — It is better to treat numerator and denominator separately. At the numerator it seems evident that the first exponential dominates when a > 1 while, noticing that n² a^{−n} = n² (1/a)^n, it should be the second term that dominates when a < 1 (because 1/a > 1). So we treat three cases: a < 1, a = 1, a > 1. If a < 1 we can write

N = −n² a^{−n} (1 − a^n/(n² a^{−n}) − (−1)^n n⁴/(n² a^{−n})) = −n² a^{−n} (1 − a^{2n}/n² − (−1)^n n² a^n).

Now:

a^{2n} −→ 0 (a < 1), ⟹ a^{2n}/n² −→ 0.

Moreover, being a < 1, that is 1/a > 1, we have

(1/a)^n ≫ n², ⟹ n² a^n = n²/(1/a)^n −→ 0, ⟹ (−1)^n n² a^n −→ 0 (bounded·null),

hence N = −n² a^{−n} · 1_n. In the case a = 1 we have

N = 1 − n² + (−1)^n n⁴ = (−1)^n n⁴ (1 + (−1)^n/n⁴ − (−1)^n/n²) = (−1)^n n⁴ · 1_n,

again by the rule bounded·null = null. Finally, if a > 1 the dominating term is a^n, and indeed

N = a^n (1 − n² a^{−n}/a^n + (−1)^n n⁴/a^n) = a^n (1 − n²/a^{2n} + (−1)^n n⁴/a^n) = a^n · 1_n,

being a^{2n} = (a²)^n ≫ n² (a > 1) and a^n ≫ n⁴, while (−1)^n n⁴/a^n −→ 0 by the rule bounded·null = null. Summarizing,

N = −n² a^{−n} · 1_n, if a < 1;  N = (−1)^n n⁴ · 1_n, if a = 1;  N = a^n · 1_n, if a > 1.

For the denominator we have the same three cases:

D = n³ (1 + a^{2n}/n³) = n³ · 1_n, if a < 1 (a^{2n} −→ 0);
D = n³ (1 + 1/n³) = n³ · 1_n, if a = 1;
D = a^{2n} (1 + n³/a^{2n}) = a^{2n} · 1_n, if a > 1 (a^{2n} = (a²)^n ≫ n³).

Finally,

N/D = (−n² a^{−n} · 1_n)/(n³ · 1_n) = −(1/a)^n/n · 1_n −→ −∞, for a < 1;
N/D = ((−1)^n n⁴ · 1_n)/(n³ · 1_n) = (−1)^n n · 1_n, which doesn't have a limit, for a = 1;
N/D = (a^n · 1_n)/(a^{2n} · 1_n) = (1/a)^n · 1_n −→ 0, for a > 1.
Other interesting infinities are n^n and n!. We have

Proposition 2.6.5.

n^n ≫ n! ≫ a^n, ∀a > 1.

Proof. — Let's prove that n!/n^n −→ 0. We have

0 ≤ n!/n^n = (n(n−1)⋯3·2·1)/(n·n⋯n·n·n) = (n/n)·((n−1)/n)⋯(3/n)·(2/n)·(1/n) ≤ 1·1⋯1·1·(1/n) = 1/n,

that is 0 ≤ n!/n^n ≤ 1/n. The conclusion follows now by the two policemen thm.

Let's pass to the second relation and prove that a^n/n! −→ 0. Arguing similarly as before,

0 ≤ a^n/n! = (a·a⋯a·a)/(n·(n−1)⋯2·1) = (a/n)·(a/(n−1))⋯(a/2)·(a/1).

Now, as n > a, let's say n ≥ N := [a] + 1, we have

(a/n)·(a/(n−1))⋯(a/2)·(a/1) ≤ (a/n)·1⋯1·(a/(N−1))·(a/(N−2))⋯(a/2)·(a/1) =: b·(a/n), where b := (a/(N−1))·(a/(N−2))⋯(a/2)·(a/1)

is fixed (because N doesn't depend on n). In other words

0 ≤ a^n/n! ≤ b·(a/n), n ≥ N,

and now the conclusion follows by the two policemen theorem.
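A quick numerical sketch of both ratios (a = 10 is an illustrative choice, not from the text):

    from math import factorial

    # n!/n^n and a^n/n! both tend to 0
    a = 10
    for n in (5, 20, 50, 100):
        print(n, factorial(n) / n**n, a**n / factorial(n))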
2.7
Indeterminate forms for powers
There is another indeterminate form that appears often. Consider a limit of type

lim_{n→+∞} a_n^{b_n}.

Here (a_n) ⊂ ]0, +∞[ and (b_n) ⊂ ℝ. Notice that

a_n^{b_n} = a^{log_a(a_n^{b_n})} = a^{b_n log_a a_n}, (a > 0, a ≠ 1).

Choosing for instance a > 1, by continuity of exponentials we have that

if lim_{n→+∞} b_n log_a a_n =: ℓ, ⟹ a_n^{b_n} −→ +∞ when ℓ = +∞;  a^ℓ when ℓ ∈ ℝ;  0 when ℓ = −∞.

The quantity b_n log_a a_n may lead to an indeterminate form of type ∞·0 or 0·∞. We have

b_n log_a a_n = ∞·0, ⇐⇒ b_n −→ ±∞ and log_a a_n −→ 0, ⇐⇒ b_n −→ ±∞ and a_n = a^{log_a a_n} −→ 1;

b_n log_a a_n = 0·∞, ⇐⇒ b_n −→ 0 and log_a a_n −→ ±∞, ⇐⇒ b_n −→ 0 and (a_n −→ 0+ ∨ a_n −→ +∞).

Conclusion:

1^{±∞}, (0+)^0, (+∞)^0

are indeterminate forms. The identity

a_n^{b_n} = a^{b_n log_a a_n}

is the way to reduce these forms to products.
Example 2.7.1. Compute

lim_{n→+∞} (1/n)^{(log_2 n)/n}.

Sol. — We know that n ≫ log_2 n, so (log_2 n)/n −→ 0, while 1/n −→ 0+: we have the indeterminate form (0+)^0. Now,

(1/n)^{(log_2 n)/n} = 2^{((log_2 n)/n) log_2(1/n)} = 2^{−(log_2 n)²/n} = 2^{−((log_2 n)/n^{1/2})²} −→ 2^0 = 1,

because n^{1/2} ≫ log_2 n.
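A numerical sketch of this limit (the sample values of n are arbitrary):

    import math

    for n in (10, 10**3, 10**6, 10**9):
        print(n, (1 / n) ** (math.log2(n) / n))   # the values approach 1 from below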
2.8
The Napier number
In the introduction of this Chapter we met the limit

lim_{n→+∞} (1 + r/n)^n.

The case r = 1 is very important for all of Mathematical Analysis!

Theorem 2.8.1.

∃ lim_{n→+∞} (1 + 1/n)^n =: e ∈ ]2, 3[.   (2.8.1)
Proof. — We first prove that (1 + 1/n)^n ↗. To do this, let's recall the Newton formula and write

(1 + 1/n)^n = Σ_{k=0}^n \binom{n}{k} 1/n^k = Σ_{k=0}^n (1/k!) · n(n−1)⋯(n−k+1)/n^k.   (2.8.2)

Now,

n(n−1)⋯(n−k+1)/n^k = (1 − 1/n)(1 − 2/n)⋯(1 − (k−1)/n) ≤ (1 − 1/(n+1))(1 − 2/(n+1))⋯(1 − (k−1)/(n+1)) = (n+1)n⋯(n+1−k+1)/(n+1)^k,

so

(1 + 1/n)^n ≤ Σ_{k=0}^n \binom{n+1}{k} 1/(n+1)^k ≤ Σ_{k=0}^{n+1} \binom{n+1}{k} 1/(n+1)^k = (1 + 1/(n+1))^{n+1},

that is, a_n ↗.

Let e be the limit. We now show that e ∈ ]2, 3[. First notice that

a_n ↗, ⟹ a_n ≥ a_1 = 2, ∀n ≥ 1.

The difficult task is to prove that a_n < 3. Looking carefully at (2.8.2) and noticing that

n(n−1)⋯(n−k+1)/n^k = (1 − 1/n)⋯(1 − (k−1)/n) ≤ 1,

we can write

a_n ≤ Σ_{k=0}^n 1/k! = 1 + 1 + Σ_{k=2}^n 1/k!.

Now: if k ≥ 2, k! = 1·2·3⋯(k−1)·k ≥ 1·2·2⋯2·2 = 2^{k−1}, so

a_n ≤ 2 + Σ_{k=2}^n 1/2^{k−1} = 2 + Σ_{k=1}^{n−1} (1/2)^k.

This last sum is easily computed by the remarkable identity

1 − q^n = (1 − q)(1 + q + q² + ⋯ + q^{n−1}), ⇐⇒ (q ≠ 1) 1 + q + q² + ⋯ + q^{n−1} = (1 − q^n)/(1 − q).

In particular

Σ_{k=1}^{n−1} (1/2)^k = Σ_{k=0}^{n−1} (1/2)^k − 1 = (1 − (1/2)^n)/(1 − 1/2) − 1 = 2(1 − 1/2^n) − 1 = 1 − 1/2^{n−1},

so

a_n ≤ 2 + 1 − 1/2^{n−1} = 3 − 1/2^{n−1} < 3, ∀n ∈ ℕ.

From now on we will denote by log the logarithm in base e (that is, log_e), also called the natural logarithm (sometimes denoted by ln).
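A quick numerical sketch of (2.8.1): the terms increase toward e.

    import math

    for n in (1, 10, 100, 10**4, 10**6):
        print(n, (1 + 1 / n) ** n)
    print("e =", math.e)   # the sequence increases toward e = 2.71828...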
Example 2.8.2. Compute

lim_{n→+∞} (1 − 1/n)^n.

Sol. — We have

(1 − 1/n)^n = ((n−1)/n)^n = 1/(n/(n−1))^n = 1/(1 + 1/(n−1))^n = 1/[(1 + 1/(n−1))^{n−1} (1 + 1/(n−1))] −→ 1/(e·1) = 1/e.
2.9
Exercises
Exercise 2.9.1. Using the definition, show that

1. lim_{n→+∞} (1 + 1/n²) = 1.
2. lim_{n→+∞} n/(n+1) = 1.
3. lim_{n→+∞} (n+2)/(n+1) = 1.
4. lim_{n→+∞} (n+2)/(n²+3n) = 0.
5. lim_{n→+∞} (n+√n)/(n−√n) = 1.
6. lim_{n→+∞} (2n²+1)/(n²+n+1) = 2.
7. lim_{n→+∞} (n − √(n−1)) = +∞.
8. lim_{n→+∞} (1−n)/(1+√n) = −∞.
9. lim_{n→+∞} (n²+1)/(n+1) = +∞.
10. lim_{n→+∞} (log_10(n+1) − log_10 n) = 0.
11. lim_{n→+∞} n/(1−√n) = −∞.
12. lim_{n→+∞} (n + 1/n) = +∞.
13. lim_{n→+∞} 1/n^α = 0, (α > 0).
14. lim_{n→+∞} (2^n − 2^{−n})/(2^n + 2^{−n}) = 1.
15. lim_{n→+∞} (3n−1)/(2−2n) = −3/2.
Exercise 2.9.2. By using suitable subsequences, show that the following sequences don't have a limit.

1. (−1)^n n². 2. sin²(nπ/4). 3. (−1)^n − (−1)^{n+1}. 4. (−1)^n (3n+1)/2. 5. (−2)^n. 6. (−1)^n (1+(−1)^n)/2.
Exercise 2.9.3. For each of the following sequences find two policemen and compute the limit:

1. (√n + cos n)/(n+1). 2. (n² + (−1)^n n)/(n+1). 3. (−1)^n n/(n²+1). 4. (n sin n)/(n²+1). 5. (n sin(n!))/(n²+1).
Exercise 2.9.4. Compute

1. lim_{n→+∞} ((2−3n)/(2n+1))³.
2. lim_{n→+∞} (√n − 3n).
3. lim_{n→+∞} (√n + 5n)/(√n − 1).
4. lim_{n→+∞} (∛(n+a) − ∛n), (a > 0).
5. lim_{n→+∞} (√(2n²+n−1) − n√2).
6. lim_{n→+∞} ((n+1)!)² (2n)!/((2(n+1))! (n!)²).
7. lim_{n→+∞} n!/((n+1)! − n!).
8. lim_{n→+∞} (n + sin n)/(n − cos n).
9. lim_{n→+∞} (n + (−1)^n √n).
10. lim_{n→+∞} n² (1 − ∛((n²−1)/(n²+1))).
11. lim_{n→+∞} (n³ + n² sin(1/n))/(n³+1).
Exercise 2.9.5. Discuss, as a function of the parameter α ∈ ℝ, the existence and the value of

1. lim_{n→+∞} (n^α − √(n+1))/(n²+1). 2. lim_{n→+∞} (√(n^α+5) − √(n+1)). 3. lim_{n→+∞} n^α (√(n+1) − √n).
Exercise 2.9.6. Compute

1. lim_{n→+∞} 2^n/(n+2)^n.
2. lim_{n→+∞} (2^n + n⁵)/(3^n − n²).
3. lim_{n→+∞} (n² 2^n + n^10 − 5^n)/(n² 5^n + 10^n).
4. lim_{n→+∞} (n⁷ + 2^n − 5^n)/(n^5000 − 3^n).
5. lim_{n→+∞} 2^n/(n⁴+1).
6. lim_{n→+∞} (2^n + (sin n)(log_2 n))/n.
7. lim_{n→+∞} ((log_2 n)⁴ + (log_4 n)²)/ⁿ√̅... (√[10]{n+1}).
Exercise 2.9.7. Discuss, as a function of the parameter a > 0, the existence and the value of

1. lim_{n→+∞} (n² a^n − n 2^n + n³)/(n⁸ 3^{−n} + n² 2^n + n³ (sin n)^n).
2. lim_{n→+∞} (2^n n^a − 2^n n³ + 2^{−n} cos(n^n))/(2^n n⁴ − 2^{−n} n⁸ + n^10).
3. lim_{n→+∞} (a^n + n² 3^n)/((−3)^n + n² 2^n − a^{2n}).
4. lim_{n→+∞} (a^n − n⁴ 3^{2n} + 3^n cos(n!))/(n⁴ 9^n + 9^{−n} 4^{2n} − n⁹).
5. lim_{n→+∞} (3^n 2^{−n} + a^{−n} n)/(n³ a^{−n} − (2a)^n + a^{2n}).
6. lim_{n→+∞} ((−3)^n + n² 4^n − (a+1)^n)/(n² 4^n + (3a)^n).
7. lim_{n→+∞} (a^n − n 2^n + 4^n sin(n!))/(a^n + n 2^n − 3^n).
8. lim_{n→+∞} (n⁴ 4^n − (4a)^n + 4^n n² cos(n!))/(n^{−n} − n⁴ 2^{3n} + 12^n).
9. lim_{n→+∞} (n^{−n} − 3^{2n} n^12 + 12^n)/(6^n n² sin(n!) − (3a)^n + n³ 6^n).
10. lim_{n→+∞} (2^n n^α − 2^n n³ + (2 cos n)^{−n})/(2^n n − 2^{−n} n³ + n³).
Exercise 2.9.8. Compute, as a function of the parameter a ∈ ℝ,

1. lim_{n→+∞} (2a²/(a²+1) − n^{−5a})^n. 2. lim_{n→+∞} (a^n − log n)/(a^n + 2^n).
Exercise 2.9.9. Put the right symbol (∼, ≪, ≫):

1. n . . . 2^n. 2. √(n + (log n)³) . . . √(n+1). 3. ∛(n + sin n) . . . √(n²). 4. n log n . . . n².
Exercise 2.9.10. Order the following quantities with respect to the symbol ≪:

n^{√n}, n², 2^{log_2 n + log_4 n}, 2^{2^n}, n^{log_2 n}, n^{1 + 1/√n}.
Exercise 2.9.11. Reducing to the limit defining e, compute

1. lim_{n→+∞} (1 + 1/n)^{3n}.
2. lim_{n→+∞} (1 + 1/n²)^{n³}.
3. lim_{n→+∞} (1 + 1/(n+k))^n, (k ∈ ℕ).
4. lim_{n→+∞} (n+1)^n/n^{n+1}.
5. lim_{n→+∞} ((n+3)/(n+2))^n.
Exercise 2.9.12. Compute

1. (?) lim_{n→+∞} (n−1)^{n + 1/log n}/(n+1)^{√(1+n²)}. 2. (??) lim_{n→+∞} n!/n^{n/2}.
Chapter 3
Numerical Series
In some problems (like the exhaustion methods used to compute the area of plane figures), we meet the following problem: to sum infinitely many numbers,

a_0 + a_1 + . . . + a_k + a_{k+1} + . . . ≡ Σ_{k=0}^∞ a_k, where (a_k) ⊂ ℝ.

When this problem was first posed, Mathematical Analysis was far from having solid foundations and there was no rigorous definition of what an infinite sum means. We could think that it behaves basically like a finite sum, but this is not at all true. For instance, consider the following formal operations:

S = 1 + 2 + 4 + 8 + 16 + . . . = 1 + 2(1 + 2 + 4 + 8 + . . .) = 1 + 2S, ⇐⇒ 1 + 2 + 4 + 8 + . . . = S = −1,

which seems a little bit weird. So finite and infinite sums don't seem to follow the same rules (at least not always), and the problem of giving a meaning to the concept of infinite sum is not at all trivial. Indeed, since we were children we have learned to sum two numbers. Hence, thanks to the associative and commutative properties of the sum (which we usually use without any mention), we can sum any finite quantity of numbers. For instance we could write

a_0 + a_1 + . . . + a_n = ((. . . ((a_0 + a_1) + a_2) + . . .) + a_{n−1}) + a_n.

Clearly, if n is too big, our life, or the life of the Universe, won't be enough to actually do the computation, but at least in principle it is clear how to proceed. But, it seems, if the sum is infinite we will never end, so: what could be the sense of this operation? Actually, this opened lots of speculations, most of them outside Mathematics, as arguments to prove the existence of God (like the abbot Guido Grandi in the XVII century) or to construct extraordinary paradoxes that still today represent formidable puzzles. The best known is undoubtedly Zeno's paradox, also called the paradox of Achilles and the turtle.

In a run, the faster runner, starting behind the slower one, will never reach him: he will first have to reach the position of the slower one, and by the time he has done so the slower will always be ahead.

This paradox clearly contradicts our experience. Where is the trouble? To fix ideas, suppose that Achilles has to cover 1000 m and the turtle initially is at the middle, that is at 500 m from the end. For simplicity we assume that Achilles is quite slow and his speed is 1000 m/h, while the turtle is particularly fast, having a speed of around 500 m/h. It seems obvious that after 1 h both will be at the end of the road together. However, Zeno suggests the following argument: first Achilles has to reach the initial position of the turtle, that is he has to run 500 m, and this will be done in 1/2 h. When he has done this, the turtle will be 250 m ahead. To cover these 250 m Achilles will run for another 1/4 h, but then the turtle will be 125 m ahead. And so on, in such a way that Achilles will always be behind the turtle.
The paradox is just in the word always. Indeed, the total time Achilles spends behind the turtle is given by

(1/2) h + (1/4) h + (1/8) h + (1/16) h + (1/32) h + . . .

The point is: is this sum finite? Because if this is the case, we have no contradiction (and the paradox is solved). To solve the mystery recall that

1 + q + q² + . . . + q^{n−1} = (1 − q^n)/(1 − q), (q ≠ 1),

therefore

1/2 + 1/2² + . . . + 1/2^n = (1/2)(1 + 1/2 + (1/2)² + . . . + (1/2)^{n−1}) = (1/2) · (1 − (1/2)^n)/(1 − 1/2) = 1 − 1/2^n,

hence it seems natural to conclude that

1/2 + 1/2² + 1/2³ + 1/2⁴ + 1/2⁵ + . . . = 1,

which is nothing but the expected value! In other words, we interpret the infinite sum

Σ_{k=1}^∞ 1/2^k as lim_{n→+∞} Σ_{k=1}^n 1/2^k,

and this makes perfect sense because we have no problem with finite sums.

The aim of this Chapter is to introduce the concepts and methods of infinite sums, mathematically called series. This actually doesn't seem a nice name, because it sounds like a synonym of sequence, leading some to confuse series with sequences. The confusion is twofold because, as the last formula points out, we will use limits of suitable sequences to define convergent sums! Because this terminology is universally accepted we won't change it, so we advise the student to pay particular attention, especially when he/she begins this Chapter.
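A minimal numerical sketch of Zeno's sum: the partial sums 1 − 1/2^n approach 1 hour.

    # Partial sums of 1/2 + 1/4 + 1/8 + ... ; each equals 1 - 1/2^n
    s, term = 0.0, 0.5
    for n in range(1, 11):
        s += term
        term /= 2
        print(n, s)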
3.1
Definition and examples
The central definition is the following:

Definition 3.1.1. Let (a_n) ⊂ ℝ. If the limit

lim_{N→+∞} Σ_{n=0}^N a_n

• exists finite, we say that the series Σ_n a_n converges;
• exists infinite, we say that the series Σ_n a_n diverges;
• doesn't exist, we say that the series Σ_n a_n doesn't converge (or is indeterminate).

In the case of a convergent series we call sum of the series

Σ_{n=0}^∞ a_n := lim_{N→+∞} Σ_{n=0}^N a_n ∈ ℝ ∪ {±∞}.

The finite sums s_N := Σ_{n=0}^N a_n, N ∈ ℕ, are called partial sums of the series.
Remark 3.1.2. In other words: the series is convergent/divergent/indeterminate according as the sequence of partial sums has a finite limit / has an infinite limit / has no limit.
Example 3.1.3 (geometric series).

Σ_{n=0}^∞ q^n = 1/(1−q) ∈ ℝ (converges), if |q| < 1;  = +∞ (diverges), if q ≥ 1;  indeterminate, if q ≤ −1.   (3.1.1)

Sol. — Let's compute the partial sums. Recalling the remarkable identity

1 − q^{N+1} = (1 − q)(1 + q + q² + . . . + q^N), ⟹ s_N = Σ_{n=0}^N q^n = (1 − q^{N+1})/(1 − q), if q ≠ 1.

If q = 1,

s_N = Σ_{n=0}^N 1^n = Σ_{n=0}^N 1 = N + 1.

Therefore

s_N = (1 − q^{N+1})/(1 − q) if q ≠ 1;  s_N = N + 1 if q = 1.

Clearly, if q = 1, s_N = N + 1 −→ +∞. Let's now study the case q ≠ 1. We have to compute

lim_{N→+∞} s_N = lim_{N→+∞} (1 − q^{N+1})/(1 − q) = 1/(1−q) − (1/(1−q)) lim_{N→+∞} q^{N+1}.

But q^{N+1} is an exponential, and

q^{N+1} −→ 0 if |q| < 1;  −→ +∞ if q > 1;  doesn't have a limit if q ≤ −1.

By this the conclusion easily follows.
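A small numerical sketch of the partial sums (q = 0.3 is an illustrative choice):

    def geometric_partial(q, N):
        """s_N = sum_{n=0}^N q^n, from the closed form above."""
        return N + 1 if q == 1 else (1 - q ** (N + 1)) / (1 - q)

    q = 0.3
    for N in (1, 5, 10, 40):
        print(N, geometric_partial(q, N))
    print("1/(1-q) =", 1 / (1 - q))   # the partial sums approach the sum of the series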
50
Example 3.1.4 (snow flake von Koch curve). The von Koch curve is a plane curve constructed as follows. Take an
equilateral triangle with side conventionally assumed = 1. Divide each side into three equal parts and in the middle one
construct an equilateral triangle. Apply again the same procedure on each side of the new picture, and repeat infinitely
many times the procedure. We get a closed line in the plane delimitating a plane figure. We ask: what is the area of this
picture and what is the length of its perimeter.
√
Sol. — Let’s compute the area. For an equilateral triangle of side ` the area is 43 ` 2 . Now
• initially, we have 1 triangle of side ` = 1;
• at the first iteration we add 3 triangles of side 31 ;
• at the second iteration we add 12 triangles of side 19 ;
• ...
Let’s look for a general rule. The number of triangles added at the nth step is equal to the number of sides at the n − 1th step. The number
of sides arise in the following way: on each side 4 new sides arise. Therefore the number of successive sides is 3 · 4, 3 · 4 · 4, . . . , 3 · 4n−1
at step n. Moreover, at the step n the side is 31n . Therefore, at the n1th step we increase the total area of
3·4
n−1
√
√
√
!
!
3 1 2 3 3 4n
3 3 4 n
×
=
.
=
4 3n
16 32n
16 9
Therefore the total area will be
√
√
√
√
√
√
!
!
!
∞ √
∞
3
2 3
3 X3 3 4 n
3 3 34X 4 n
3*
1 1 +
3
1+
=
=
=
1
+
=
+
+
.
4
16 9
4
16 9
9
4 ,
31− 44
5
5
n=1
n=0
9
n
Let’s see the perimeter: each time the total perimeter increase of a factor 43 , so at the step n we have 34 −→ +∞.
Example 3.1.5 (Mengoli's series).

Σ_{n=1}^∞ 1/(n(n+1)) = 1.

Sol. — Let's compute the Nth partial sum:

s_N = Σ_{n=1}^N 1/(n(n+1)) = Σ_{n=1}^N (1/n − 1/(n+1)) = (1 − 1/2) + (1/2 − 1/3) + . . . + (1/N − 1/(N+1)) = 1 − 1/(N+1) −→ 1.

These examples may induce the idea that studying the convergence and the sum of a series is relatively easy: according to the definition, we have to compute the partial sum s_N and then compute its limit as N −→ +∞. The problem is that it is almost never possible to compute s_N in a form useful for taking the limit.
Example 3.1.6 (harmonic series).

Σ_{n=1}^∞ 1/n = +∞.

Sol. — Here it is not at all evident how to find a useful expression for

s_N = Σ_{n=1}^N 1/n = 1 + 1/2 + 1/3 + 1/4 + . . . + 1/N.

Useful to compute the limit, of course! The problem is that, differently from the previous examples, it is not possible to simplify the sum s_N so as to reduce it to a fixed number of operations on N. If we look at some values of the partial sums, the situation is not easy to foresee:

s_10 = 2.92897, s_100 = 5.18738, s_1000 = 7.48547, s_10000 = 9.78561, s_100000 = 12.0901, . . .

The sum always increases (this is evident, because it is a sum of positive numbers!), but very slowly. We will actually prove that s_N −→ +∞, thanks to a striking idea due to Cauchy. Suppose for a moment that N = 2^m and consider s_{2^m}:

s_{2^m} = 1 + 1/2 + (1/3 + 1/4) + (1/5 + . . . + 1/8) + (1/9 + . . . + 1/16) + . . . + (1/(2^{m−1}+1) + . . . + 1/2^m).

Notice that:

• the first parenthesis has 2 terms, both bigger than 1/4;
• the second parenthesis has 4 terms bigger than 1/8;
• the third parenthesis has 8 terms bigger than 1/16;
• (if we have understood the game) the last parenthesis has 2^{m−1} terms bigger than 1/2^m.

Therefore

s_{2^m} ≥ 1 + 1/2 + 2·(1/4) + 4·(1/8) + 8·(1/16) + . . . + 2^{m−1}·(1/2^m) = 1 + 1/2 + 1/2 + . . . + 1/2 = 1 + m/2.

Now we can start to believe that s_N −→ +∞. Let's find an estimate for s_n with generic n. Clearly we can find m such that 2^m ≤ n ≤ 2^{m+1}, that is m ≤ log_2 n ≤ m+1. Then, being all terms positive,

s_n ≥ s_{2^m} ≥ 1 + m/2 ≥ 1 + (log_2 n − 1)/2 −→ +∞.
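Cauchy's grouping can be checked numerically; a minimal sketch: the partial sums grow without bound, but slowly, always staying above the guaranteed 1 + m/2.

    def harmonic(N):
        return sum(1 / n for n in range(1, N + 1))

    for m in range(1, 18):
        N = 2 ** m
        # the grouping argument guarantees s_{2^m} >= 1 + m/2
        print(N, round(harmonic(N), 5), ">=", 1 + m / 2)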
This example helps to dispel a (false) common belief: that in order for a sum Σ_n a_n to be finite it is necessary and sufficient to be summing small terms, that is a_n −→ 0. As the example shows, this is false. However the necessity is true:

Proposition 3.1.7 (necessary condition of convergence).

If Σ_n a_n converges, ⟹ a_n −→ 0.

Proof. — Very easy: let s_N := Σ_{n=0}^N a_n. Then s_N −→ s ∈ ℝ, so

a_N = s_N − s_{N−1} −→ s − s = 0.

In general, infinite sums don't fulfill the ordinary properties of finite sums (like associativity or commutativity, as shown in the introduction). It is however possible to show that in certain circumstances these are true. For instance:
Proposition 3.1.8 (linearity). Let Σ_n a_n and Σ_n b_n be convergent series. Then Σ_n (αa_n + βb_n) converges for any α, β ∈ ℝ, and

Σ_n (αa_n + βb_n) = α Σ_n a_n + β Σ_n b_n.

Proof. — Just notice that the partial sum S_N of Σ_n (αa_n + βb_n) is

S_N := Σ_{n=0}^N (αa_n + βb_n) = α Σ_{n=0}^N a_n + β Σ_{n=0}^N b_n −→ α Σ_n a_n + β Σ_n b_n.

3.2
Constant sign terms series
Constant sign terms series are series

Σ_{n=0}^∞ a_n, with (a_n) ⊂ [0, +∞[ ≡ ℝ₊, or (a_n) ⊂ ]−∞, 0] ≡ ℝ₋.

They play a special role in the theory. This is due, once again, to a consequence of monotonicity (hence of completeness):

Proposition 3.2.1. Every constant sign terms series is either convergent or divergent. That is: it cannot be indeterminate!

Proof. — Consider the case (a_n) ⊂ ℝ₊. Then the partial sums are increasing, because

s_{N+1} = Σ_{n=0}^{N+1} a_n = Σ_{n=0}^N a_n + a_{N+1} = s_N + a_{N+1} ≥ s_N,

being a_{N+1} ≥ 0 for any N ∈ ℕ. But then, by Thm 2.3.12, there exists lim_N s_N ∈ ℝ ∪ {+∞}.
It should be intuitive: bigger numbers have a bigger sum, so if the bigger sum is finite the same holds for the smaller.

Theorem 3.2.2 (comparison test). Assume that

0 ≤ a_n ≤ b_n, definitively.

Then

Σ_n b_n converges ⟹ Σ_n a_n converges.

We call (b_n) a summable dominant of (a_n).

Proof. — For simplicity, in the proof we assume that

a_n ≤ b_n, ∀n ∈ ℕ.

Then

s_N = Σ_{n=0}^N a_n ≤ Σ_{n=0}^N b_n =: s̃_N, ∀N.

By the previous Prop., s_N ↗. If s_N −→ +∞ then also s̃_N −→ +∞, but this contradicts the convergence of Σ_n b_n.
Let's see some examples of application of this result. We emphasize the sense of the comparison test: we don't compute the sum, we just say whether it is finite or not.

Example 3.2.3.

Σ_{n=1}^∞ 1/n² converges.

Actually it is possible to prove (but it is very difficult) that Σ_{n=1}^∞ 1/n² = π²/6 (Euler).

Sol. — This series too belongs to the family of harmonic-type series (we will see the entire general family shortly), and it presents the same difficulty as the harmonic series Σ_n 1/n: no manageable formula for its partial sums. However it can easily be compared with a Mengoli-type series, noticing that

1/n² = 1/(n·n) ≤ 1/(n(n−1)), ∀n ≥ 2.

The sequence 1/(n(n−1)) is a summable dominant, because

Σ_{n=2}^∞ 1/(n(n−1)) = Σ_{n=1}^∞ 1/(n(n+1)) = 1.

By the comparison test we deduce that Σ_n 1/n² converges.
Example 3.2.4.

Σ_{n=1}^∞ 1/n^α converges ∀α ≥ 2.

Sol. — Let α ≥ 2. Then

1/n^α ≤ 1/n², ∀n ≥ 1.

By the previous example Σ_n 1/n² converges, so Σ_n 1/n^α converges by comparison.
The comparison test may be applied in the opposite direction: if a series dominates a divergent series, it cannot be convergent (otherwise the dominated one would be convergent by comparison!).

Example 3.2.5.

Σ_{n=1}^∞ 1/n^α diverges ∀α ≤ 1.

Sol. — Let α ≤ 1. Notice that n^α ≤ n for any n ≥ 1, so

1/n ≤ 1/n^α, ∀n ≥ 1.

From this it follows that Σ_n 1/n^α dominates Σ_n 1/n, which is divergent, and this forces the dominant to be divergent too.

Notice that we proved

Σ_{n=1}^∞ 1/n^α converges if α ≥ 2; diverges if α ≤ 1.

What can be said in the cases 1 < α < 2? It is not easy to prove the following:
Theorem 3.2.6.

Σ_{n=1}^∞ 1/n^α converges, ⇐⇒ α > 1.   (3.2.1)

Proof. — This could be done by adapting the Cauchy trick, or with a more powerful tool, generalized integrals, that we will see later (see Example ??).
The difficulty in applying the comparison test is to look for summable dominants. It is a good idea to have a more versatile instrument, like the following

Corollary 3.2.7.

If 0 ≤ a_n ≪ b_n, and Σ_n b_n converges, ⟹ Σ_n a_n converges.

Proof. — Indeed, a_n ≪ b_n means a_n/b_n −→ 0, therefore

0 ≤ a_n/b_n ≤ 1, definitively, ⇐⇒ 0 ≤ a_n ≤ b_n, definitively.

By this the conclusion follows.
Example 3.2.8. Say whether the following series converges:

Σ_{n=0}^∞ e^{−√n}.

Sol. — Of course, it is not a geometric series! If a_n := e^{−√n}, it is clear that a_n > 0, so we have a constant sign terms series (hence it either converges or diverges). The necessary condition a_n −→ 0 is clearly fulfilled but, being not sufficient, it doesn't help us here. It is however natural to think that

e^{−√n} ≪ 1/n².

Indeed

e^{−√n}/(1/n²) = n² e^{−√n} = e^{2 log n − √n} −→ 0, being 2 log n − √n = −√n (1 − 2 log n/√n) −→ −∞,

since n^{1/2} ≫ log n. Therefore the series is convergent.
3.2.1
Asymptotic comparison
It seems intuitive that if a_n ∼ b_n the sums Σ_n a_n and Σ_n b_n have the same behavior:

Theorem 3.2.9. Let (a_n), (b_n) ⊂ ℝ₊ be such that a_n ∼ b_n. Then

Σ_n a_n converges ⇐⇒ Σ_n b_n converges.

Proof. — We know that

a_n/b_n −→ 1.

Therefore there exists an N such that

1/2 ≤ a_n/b_n ≤ 2, ∀n ≥ N, ⇐⇒ (1/2) b_n ≤ a_n ≤ 2 b_n, ∀n ≥ N.   (3.2.2)

But then: 2b_n is a dominant for a_n, and 2a_n is a dominant for b_n. From this, easily: if Σ_n a_n converges, of course Σ_n (2a_n) converges (linearity), hence Σ_n b_n converges (comparison), and vice versa.
Remark 3.2.10. Notice that if a_n ∼ b_n then, definitively, a_n and b_n have the same sign. In particular: if a_n ∼ b_n > 0, then a_n > 0 definitively. Indeed:

a_n/b_n −→ 1, ⟹ a_n/b_n ≥ 1/2 > 0, definitively,

and by this it is clear that a_n and b_n must share the same sign.

In particular:

• if a_n ∼ C/n^α for some C ≠ 0 and α ∈ ℝ, then Σ_n a_n converges iff α > 1;
• if a_n ∼ C q^n for some C ≠ 0 and q > 0, then Σ_n a_n converges iff q < 1.
Example 3.2.11. Discuss the convergence of the series

Σ_{n=0}^∞ n/(n^α + 1), α ∈ ℝ.

Sol. — Let a_n := n/(n^α + 1). Clearly (a_n) ⊂ ℝ₊, so we have a constant sign terms series. Moreover

a_n = n/(n^α + 1) = n/(n^α · 1_n) = (1/n^{α−1}) · (1/1_n) ∼ 1/n^{α−1}.

Therefore, by asymptotic comparison, Σ_n a_n converges iff α − 1 > 1, that is α > 2.

With the tools of differential calculus we will widely extend the use of asymptotic comparison.
3.2.2
Root and Ratio tests
If a_n ∼ C q^n then, assuming for instance C > 0,

a_n^{1/n} ∼ C^{1/n} q −→ q,   a_{n+1}/a_n ∼ C q^{n+1}/(C q^n) = q −→ q,

so that

q = lim_n a_n^{1/n} = lim_n a_{n+1}/a_n,

and according as q < 1 or q > 1 we could say whether the series converges or not. This is essentially the content of the two following tests.

Theorem 3.2.12 (root test). Let (a_n) ⊂ ℝ₊ and suppose

∃ lim_{n→+∞} ⁿ√a_n ≡ lim_{n→+∞} a_n^{1/n} =: q (∈ ℝ₊ ∪ {+∞}).

Then
• if q < 1, the series Σ_n a_n converges;
• if q > 1 (including q = +∞), the series Σ_n a_n diverges, and a_n −→ +∞.

In the case q = 1 nothing can be said on the basis of the test (that is: the test fails).

Proof. — Let's start with the case q < 1. By definition of limit,

∀ε > 0, ∃N = N(ε) : q − ε ≤ a_n^{1/n} ≤ q + ε, ∀n ≥ N(ε).   (3.2.3)

Let ε > 0 be small enough that q + ε < 1 (this is possible because q < 1). Therefore, by (3.2.3),

a_n^{1/n} ≤ q + ε, definitively, ⇐⇒ a_n ≤ (q + ε)^n, definitively.

Being q + ε < 1, the series Σ_n (q + ε)^n converges, and by comparison Σ_n a_n converges too.

Consider now the case q > 1, q ∈ ℝ (we leave q = +∞ to the reader). The (3.2.3) is still true: but now we choose ε > 0 such that q − ε > 1 (this is possible because q > 1). Then

a_n^{1/n} ≥ q − ε, definitively, ⇐⇒ a_n ≥ (q − ε)^n, definitively.

Now, being q − ε > 1, we know that (q − ε)^n −→ +∞, therefore a_n −→ +∞. This means that the necessary condition is not fulfilled and the proof is finished.
Example 3.2.13. Discuss the convergence of

Σ_{n=1}^∞ (n−1)^{n²}/n^{n²+1/n}.

Sol. — Clearly, if a_n = (n−1)^{n²}/n^{n²+1/n}, then (a_n) ⊂ ℝ₊ (n ≥ 1), so we have a constant sign terms series. Let's apply the root test:

a_n^{1/n} = ((n−1)^{n²}/n^{n²+1/n})^{1/n} = (n−1)^n/n^{n+1/n²} = ((n−1)/n)^n · (1/n^{1/n²}) = [1/(1 + 1/(n−1))^n] · (1/n^{1/n²}).

Now

(1 + 1/(n−1))^n = (1 + 1/(n−1))^{n−1} (1 + 1/(n−1)) −→ e·1 = e,   n^{1/n²} = e^{(log n)/n²} −→ e^0 = 1,

so a_n^{1/n} −→ 1/e < 1 and we conclude that the series converges.
Warning! 3.2.14. What does "the test fails" mean? It means that there are convergent series with q = 1 and divergent series with, again, q = 1. In other words: q = 1 doesn't distinguish between a convergent and a divergent series. For instance:

• the series Σ_n 1/n diverges, and

a_n^{1/n} = (1/n)^{1/n} = n^{−1/n} = e^{−(log n)/n} −→ 1;

• the series Σ_n 1/n² converges, and

a_n^{1/n} = (1/n²)^{1/n} = n^{−2/n} = e^{−2(log n)/n} −→ 1.
A twin test is the

Theorem 3.2.15 (ratio test). Let (a_n) ⊂ ]0, +∞[ and suppose that

∃ lim_{n→+∞} a_{n+1}/a_n =: q (∈ ℝ₊ ∪ {+∞}).

Then

• if q < 1, the series Σ_n a_n converges;
• if q > 1 (including q = +∞), the series Σ_n a_n diverges, and a_n −→ +∞.

In the case q = 1 nothing can be said on the basis of the test (that is: the test fails).

Proof. — Similar to that of the root test, therefore omitted.
Example 3.2.16. Discuss the convergence of

Σ_{n=1}^∞ 2^n n!/n^n.

Sol. — Clearly a_n := 2^n n!/n^n > 0 for any n ≥ 1. Applying the ratio test:

a_{n+1}/a_n = [2^{n+1} (n+1)!/(n+1)^{n+1}] · [n^n/(2^n n!)] = 2 (n+1) n^n/(n+1)^{n+1} = 2 (n/(n+1))^n = 2/(1 + 1/n)^n −→ 2/e < 1,

from which we conclude that the series converges.
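As a numerical check of this example (a sketch, just to watch the ratios settle near 2/e < 1):

    from math import e, factorial

    def a(n):
        return 2**n * factorial(n) / n**n

    for n in (5, 20, 100):
        print(n, a(n + 1) / a(n))
    print("2/e =", 2 / e)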
Remark 3.2.17. Basically, root and ratio tests are equivalent. It can be proved that if the ratio test gives the answer q then the root test also gives q, but the converse doesn't always hold. For instance, consider the series

Σ_{n=0}^∞ 2^{(−1)^n − n}.

Then

a_{n+1}/a_n = 2^{(−1)^{n+1} − (n+1)}/2^{(−1)^n − n} = 2^{−2(−1)^n − 1} = 1/8, 2, 1/8, 2, . . . ,

which doesn't have a limit. On the other hand,

a_n^{1/n} = 2^{((−1)^n − n)/n} = 2^{(−1)^n/n − 1} −→ 2^{−1} = 1/2,

so the root test works correctly and we can deduce the convergence.
3.3
Variable sign terms series
3.3.1
Alternate sign
We start the analysis of variable sign terms series with series of the form

Σ_{n=0}^∞ (−1)^n a_n = a_0 − a_1 + a_2 − a_3 + a_4 − a_5 + . . . ,

where (a_n) ⊂ ℝ₊. These series are called alternate sign terms series. The main tool is the
Theorem 3.3.1 (Leibniz test). Consider an alternate sign terms series

Σ_{n=0}^∞ (−1)^n a_n, a_n ≥ 0, ∀n.

If a_n ↘ 0, then the series converges. Moreover, if s is its sum and s_N the Nth partial sum, we have

|s − s_N| ≤ a_{N+1}.

Proof. — The point is that even partial sums decrease while odd ones increase. Indeed

s_{2n+2} = s_{2n} − a_{2n+1} + a_{2n+2} = s_{2n} + (a_{2n+2} − a_{2n+1}) ≤ s_{2n},

because a_n ↘, hence a_{2n+2} ≤ a_{2n+1}. Similarly

s_{2n+3} = s_{2n+1} + (a_{2n+2} − a_{2n+3}) ≥ s_{2n+1}.

Moreover

s_{2n+1} = s_{2n} − a_{2n+1} ≤ s_{2n} = s_{2n−1} + a_{2n}.

Now, being monotone, s_{2n+1} and s_{2n} converge: say s_{2n+1} ↗ σ and s_{2n} ↘ σ̃. Being also a_{2n+1} −→ 0, passing to the limit in s_{2n+1} = s_{2n} − a_{2n+1} we get

σ = σ̃ =: s.

From this it follows easily that s_n −→ s. Let's finally come to the estimate: because s_{2n} ↘ s, we have

s_{2n} − a_{2n+1} = s_{2n+1} ≤ s, ⟹ 0 ≤ s_{2n} − s ≤ a_{2n+1},

and similarly

s_{2n+1} + a_{2n+2} = s_{2n+2} ≥ s, ⟹ 0 ≤ s − s_{2n+1} ≤ a_{2n+2}.
Example 3.3.2. Discuss the convergence of the series

Σ_{n=1}^∞ (−1)^n/n.

Sol. — It is clearly an alternate sign terms series of type Σ_n (−1)^n a_n with a_n := 1/n > 0. Clearly a_n ↘ 0, so the series converges by the Leibniz test.
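A quick numerical check of the Leibniz estimate |s − s_N| ≤ a_{N+1}; this is a sketch that uses the known value Σ_{n≥1} (−1)^n/n = −log 2 (not proved in these notes) only to measure the error:

    from math import log

    s_exact = -log(2)
    s = 0.0
    for n in range(1, 10001):
        s += (-1) ** n / n
        if n in (10, 100, 1000, 10000):
            # the Leibniz bound a_{N+1} = 1/(n+1) must dominate the error
            print(n, abs(s - s_exact) <= 1 / (n + 1))   # True each time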
Example 3.3.3. Discuss the convergence of the series

Σ_{n=3}^∞ (−1)^n log n/((log n)² − 1).

Sol. — Let a_n := log n/((log n)² − 1), n ≥ 3. Being log n > log e = 1 for n ≥ 3, we have (log n)² − 1 > 0 for n ≥ 3, so a_n > 0. This means that we have an alternate sign terms series. Let's apply the Leibniz test. First notice that

a_n = log n/((log n)² − 1) = [log n/(log n)²] · 1/(1 − 1/(log n)²) = (1/log n) · 1_n −→ 0.

Now we want to check that a_n ↘. This is not evident at a glance: as n increases, log n ↗ but also (log n)² − 1 ↗ (faster than the numerator, so the conclusion seems reasonable). We have

a_{n+1} ≤ a_n, ⇐⇒ log(n+1)/((log(n+1))² − 1) ≤ log n/((log n)² − 1),
⇐⇒ log(n+1) ((log n)² − 1) ≤ log n ((log(n+1))² − 1),
⇐⇒ (log n)(log(n+1)) (log(n+1) − log n) ≥ log(n+1) − log n,
⇐⇒ (log n)(log(n+1)) ≥ 1... more precisely, dividing by log(n+1) − log n > 0, ⇐⇒ (log n)(log(n+1)) ≥ −1.

This is evident, because n ≥ 3 implies log n, log(n+1) > 0. Therefore the Leibniz test applies and the series converges.
3.3.2
Absolute convergence
In the general case the most important tool is the following

Theorem 3.3.4 (sufficient condition for convergence).

If Σ_{n=0}^∞ |a_n| converges ⟹ Σ_{n=0}^∞ a_n converges.

A series such that Σ_n |a_n| converges is called absolutely convergent. Hence: absolutely convergent implies convergent (and for this reason, convergence is sometimes renamed simple convergence).

Proof. — Let's call

a_n⁺ := max{a_n, 0}, a_n⁻ := max{−a_n, 0}.

Clearly a_n^± ≥ 0, a_n⁺ + a_n⁻ = |a_n|, while a_n⁺ − a_n⁻ = a_n. In particular

0 ≤ a_n^± ≤ |a_n|.

By comparison, then, Σ_k a_k^± are both convergent; hence, by linearity, Σ_k (a_k⁺ − a_k⁻) = Σ_k a_k converges too.

Remark 3.3.5. The sufficient condition is not necessary: for instance, the series Σ_{n=1}^∞ (−1)^{n+1}/n is simply convergent by the Leibniz test but not absolutely convergent, because Σ_{n=1}^∞ |(−1)^{n+1}/n| = Σ_n 1/n = +∞.
The technique to study variable sign terms series combine the necessary and the sufficient conditions according the
following scheme.
The "lightning" is the disgrace when the sufficient condition is not fulfilled while the necessary condition it is. In these
cases, however, sometimes a little bit of craftsmanship helps to solve troubles.
Example 3.3.6. Discuss in function of x ∈]0, +∞[ simple and absolute convergence for
∞
X
(log x) n
.
√
n=1 n(n + 1)
60
!"#$%&'()*+')$'(%'$,
#
!!"- " !
./0
#
!!"- $" !$%&##
)))))))))12
)))))))
#
!!"- " !
))34$"56*'57)8"#9'(:'#*)
)))))./0
"! ' - #
)))));+'#8')$%<=57>
))))))))12
#
!!"- " !
#"#)8"#9'(:'#*
;&%9'(:'#*)"()%#&'*'(<%#3*'>
x) n
√
Sol. — Let an := (arctan
. Let’s start by absolute convergence. This means to study the series
n(n+1)
∞
X
|an | =
n=1
∞
X
| log x| n
.
√
n(n + 1)
n=1
Applying the root test
1
|an | n =
being
n1/n
=e
log n
n
| log x|
(n(n + 1))
1
2n
−→ 1 and similarly (n + 1) 1/n = e

converges,



|an |, 


 diverges and |an | → +∞,
n

X
= | log x|
log(n+1)
n
1
−→ | log x|
(n1/n ) 2 ((n + 1) 1/n ) 2
nlog(n+1) 0
−→
e
= 1. Therefore
if | log x| < 1, ⇐⇒ −1 < log x < 1, ⇐⇒ e−1 < x < e,
if | log x| > 1, ⇐⇒ log x < −1, ∨ log x > 1, ⇐⇒ x < e−1, ∨ x > e.
In the cases | log x| = 1 (that is x = e−1, e) the test fails. For the moment we can say that

absolutely (hence simply) convergent,



a n, 


 non convergent (simply or absolutely) being a 6−→ 0,
n
n

X
if e1 < x < e,
if x < e1 , ∨ x > e.
To finish we have to study the cases x = e1 , e. Replacing directly these values,
• if x = e1 = e−1 the series is
∞
∞
X
X
(−1) n
an =
,
√
n(n + 1)
n=1
n=1
that is an alternate sign terms series. Being √ 1
& 0 the series converges by Leibniz test. Does it converges also absolutely?
n(n+1)
We have
∞ ∞
n X
X
1
1
1
1 asymp. comp.
(−1)
=
and √
= q
=⇒
diverges.
∼ ,
√
√
n(n + 1) n
n(n
+
1)
n(n
+
1)
n=1
n=1
n 1+ 1
n
• if x = e the series becomes
∞
X
n=1
√
1
,
n(n + 1)
which is of course simply and absolutely (it’s the same!) divergent by previous point.
f
f
g
f
Conclusion: the series is simply convergent as x ∈ e1 , e , absolutely convergent as x ∈ e1 , e .
61
3.4
Exercises
Exercise 3.4.1 (?). For any of the following series compute the partial sums and discuss convergence (computing the eventual sum) of
the series:
!
∞
∞
∞ p
X
X
X
p
1
1
log 1 − 2 .
1.
n(n + 1) − n(n − 1) − 1 . 3.
. 2.
(2n − 1)(2n + 1)
n
n=0
n=2
n=0
P
P
Exercise 3.4.2. For any of the following series n an find bn such that an bn and n bn converges.
1.
∞
X
n=1
1
n
3+(−1) n
. 2.
∞
∞ √
∞
X
X
X
1
1 + sin n
n −n
2
, (α > 1, β > 0).
.
3.
3
.
4.
α (log n) β
2n
n
n=0
n=2
n=0
Exercise 3.4.3. Applying the asymptotic comparison test, discuss the convergence of the following series
1.
∞
X
(n + 2) n
.
nn+2
n=1
!
∞
X
1 n+2 n
5.
.
n n+3
n=1
∞
X
n + log n
.
(n
− log n) 3
n=1
2.
3.
∞ √
X
√
3
( n + 1 − 3 n).
4.
n=0
∞
X
1
6.
√
√
√ .
2
n=2 n − n( n + 1 − n)
7.
s
∞
X
nβ *.1 −
n=0
,
∞
X
1
.
√
n
n
n
n=1
n3 +
/ , ( β ∈ R).
n3 + 1
-
Exercise 3.4.4. Applying root or ratio tests discuss the convergence of the following series:
1.
∞
X
n
5 (−1) −n .
2.
n=0
5.
∞ nx
X
n
, (x ∈ R).
n!
6.
n=1
∞
X
(n!) 2
.
n2 (2n)!
n=1
3.
∞
X
(n!) x
, (x ∈ R).
nn
7.
∞ k
X
n
.
n!
4.
n=0
n=1
∞
X
n=1
x n!, (x > 0).
n=1
9.
∞
X
(−1) n
n=1
n+1
.
4n
∞
X
r
1
6.
(−1)
1 + − 1+ .
n
,
n=1
10.(?)
n*
∞
X
(n!) 2
.
(−1) n 2
n (2n)!
n=1
7.
8.
n=1
∞
X
n=1
1
(−1) sin .
n
11.(?)
n
∞
X
∞
X
nn+1
.
3n (n + 1)!
n! x n, (x > 0).
n=0
Exercise 3.4.5. Applying the Leibniz test discuss convergence of the following series:
√
√
∞
∞
∞
X
X
X
√
n
n
n
n n+1− n
1.
(−1) 2
. 2.
(−1)
.
3.
(−1) n ( 3 − 1).
n
n
+
1
n=1
n=1
n=1
1
∞
X
en −1
.
5.
cos(nπ)
∞
X
(−1) n
n=1
4.
∞
X
(−1) n 1 − cos √
n=0
8.
∞
X
n=1
(−1)
n
1
!
n+1
!
1 n
1−
.
n
log n
.
n
Exercise 3.4.6. For any of the following series, discuss simple and absolute convergence.
1.
∞
X
n=1
∞
X
(−1) n
x2 + n
, (x ∈ R).
n2
xn
, (x ∈ R).
1 + x 2n
n=1
3
∞ √
X
1 − x 2n
7.
(x ∈ R).
3n
4.
n=1
2.
5.
8.
∞
X
xn
, (x ∈ R).
n2 + e x
n=0
∞
X
2
n
(x 2 − 1) n , (x ∈ R).
2
n +1
n=1
∞
X
n=1
xn
, (x ∈ R)
n + arctan n
2
3.
6.
9.
! 2n+n
∞
X
1 x2 + 1
, (x ∈ R\{±2}).
n x2 − 4
n=1
∞
X
(sin x) n
, (x ∈ R).
√
n + n log n
n=1
∞
X
n=0
!
1
ex + 1 n
, x , 0.
2n (n + 1) e x − 1
.
62
Chapter 4
Limits
Once we are acquainted with limits for sequences we can extend this concept, moving from the discrete variable n ∈ N to
the continuous variable x ∈ R. While the possibilities of movement for n are basically reduced to go to +∞, a real variable
can move through any point x 0 ∈ R, besides going to +∞ and to −∞. In other words, we want to give a meaning to
lim f (x) = `,
x→x0
where f is a real variable function. The idea is the same: the value f (x) gets closer to ` as x gets closer to x 0 . We will
translate this by using sequences in the following way:
lim f (x) = `, ⇐⇒ ∀an −→ x 0, f (an ) −→ `.
x→x0
In other words: no matter you move through x 0 , f will go to `.
In this Chapter we will introduce the concept of limit. Because most of the properties of limits are the same of those
for limits for sequences, we will reduce proofs only to that one that introduce something really new. The reader is invited
to develop an his/her own intuitive sense.
4.1
Elementary topology on the real line
To define limx→x0 f (x) we understand immediately that x 0 cannot be any point. We need that, in a suitable sense, x 0 be
arbitrarily close to the domain D of f . The key concept is the following
Definition 4.1.1 (accumulation point). Let S ⊂ R and x 0 ∈ R ∪ {±∞}. We say that x 0 is an accumulation point for S if
∃ (an ) ⊂ S\{x 0 }, : an −→ x 0 .
Acc(S) will denote the sets of accumulation points of a set S.
x0 an
a1
a0
S
Figure 4.1: The set S is colored in red.
Remark 4.1.2 (Important!). It is a good thing to reflect a little on this definition.
63
(4.1.1)
64
• To be an accumulation point has in general nothing to do with to be in S: we may have x 0 ∈ S as well as x 0 < S. For instance
Acc([a, b]) = [a, b], (here a ∈ S and a ∈ Acc(S)); Acc(]a, b[) = [a, b], (here a < S and a ∈ Acc(S)).
• The definition says that x 0 ∈ Acc(S) iff there exists a sequence an −→ x 0 lying in S of points distinct by x 0 : this is to avoid the
case, for instance
S = {x 0 }, Acc(S) = ∅.
If we would allow the approximating sequence to be simply (an ) ⊂ S such that an −→ x 0 , then an ≡ x 0 would be such a
sequence, so x 0 ∈ Acc(S): but in this case we cannot say that x 0 is an accumulation of (infinitely many) points of S!
Example 4.1.3. We have
(
(
)
)
• 0 ∈ Acc n1 : n ∈ N, n > 1 : indeed, n1 ⊂ S := n1 : n ∈ N, n > 1 , n1 −→ 0 and n1 , 0 for any n. More difficult is to
show that 0 is the unique accumulation point for S (even if it should be intuitively clear!).
• +∞ ∈ Acc(R), +∞ ∈ Acc([a, +∞[).
• +∞ ∈ Acc(N), −∞ ∈ Acc(Z).
• 0 < Acc([1, 2]), 0 < Acc([1, 2] ∪ {0}), 1 < Acc
(
1
n
)
: n ∈ N, n > 1 .
A second fundamental concept is that one of neighborhood:
Definition 4.1.4. Any open interval centered in x 0 ∈ R is called neighborhood of x 0 . Any half line ]a, +∞[ is called
neighborhood of +∞ and similarly any half line of type ] − ∞, b[ is called neighborhood of −∞.
We will denote neighborhoods of x 0 ∈ R ∪ {±∞} with Ix0 . A very important fact is the following
Proposition 4.1.5.
x 0 ∈ Acc(S), ⇐⇒ S ∩ (Ix0 \{x 0 }) , ∅, ∀Ix0 .
That is: x 0 is an accumulation point for S iff in every neighborhood of x 0 there’s a point of S differente by x 0 .
Proof. — Let’s consider the case x 0 ∈ R (the remaining cases are left to the reader).
=⇒ Let x 0 ∈ Acc(S) and take Ix0 = [x 0 − ε, x 0 + ε]. By hypothesis, there exists (an ) ⊂ S\{x 0 }, an −→ x 0 . Therefore |an − x 0 | 6 ε
definitively, that is an ∈ Ix0 , and because an , x 0 , an ∈ S ∩ (Ix0 \{x 0 }) definitively. Here’s a picture of the situation.
I x0
x0 an
a1
a0
S
f
g
⇐= Let’s construct an approximating sequence of x 0 in S. Take Ix0 = x 0 − n1 , x 0 + n1 (here n ∈ N). By assumption
1
∀n : ∃an ∈ S ∩ Ix0 \{x 0 } , ⇐⇒ an ∈ S, an , x 0, |an − x 0 | 6 .
n
But then an −→ x 0 and this finishes the proof.
4.2
Definition of limit and of continuous function
We are now ready for the
Definition 4.2.1. Let f : D ⊂ R −→ R, x 0 ∈ Acc(D). We say that f (x) −→ ` ∈ R ∪ {±∞} as x −→ x 0 (in words: f has
limit ` as x goes to x 0 ) if
∀ (an ) ⊂ D\{x 0 }, an −→ x 0, =⇒ f (an ) −→ `.
(4.2.1)
Notation: limx→x0 f (x) := `.
65
y
{
f Han L
f Ha1 L
f Ha0 L
x
x0
an
a1
a0
D
Remark 4.2.2 (Important!). Notice few but important details in the (4.2.1). First: we consider sequences (an ) ⊂ D; in this way
f (an ) makes sense. Moreover, because we take sequences (an ) ⊂ D\{x 0 } it means that the (eventual) value of f at x 0 is not
important for the limit. This is to include cases where f is not defined at x 0 like, for instance,
sin x
.
x
lim
x→0
On the other side, f could be defined at x 0 but the value f (x 0 ) could have nothing to do with lim x→x0 f (x). For instance consider the
function
1, x , 0,



f (x) = 


 0, x = 0.
y
1
x
Let’s see that lim x→0 f (x) = 0. Indeed: if (an ) ⊂ D\{0} and an −→ 0, then f (an ) = 0 (being an , 0), so f (an ) = 0 −→ 0. This is
true for any (an ). On the other side f (0) = 1, so lim x→0 f (x) , f (0).
Notice that if we would replace the (4.2.1) by the following
∀ (an ) ⊂ D, an −→ x 0, =⇒ f (an ) −→ `.
(4.2.2)
we wouldn’t include cases where f is not defined at x 0 and, what is even worse, in the cases like the previous one, the limit wouldn’t exists,
even if it is intuitively clear that lim x→0 f (x) must exists and to be 0. Indeed: taking an ≡ 0 we would have f (an ) = f (0) = 1 −→ 1
whereas, if for instance an = n1 then f (an ) = 0 −→ 0. The two gives that the (4.2.2) is false!
The phenomenon described just now, that is limx→x0 f (x) , f (x 0 ) describes a "discontinuity" in the graph. For this
reason we will introduce the
Definition 4.2.3 (continuous function). Let f : D ⊂ R −→ R, x 0 ∈ D ∩ Acc(D) ( 1 ) . We say that f is continuous at point
x 0 if
lim f (x) = f (x 0 ).
x→x0
If f is continuous at any x 0 ∈ D we will write f ∈ C (D).
It is not difficult to show that
1That is x0 belongs to D and to compute the lim x→x0 f (x) makes sense as well as f (x0 ).
66
Proposition 4.2.4. Let f : D ⊂ R −→ R and x 0 ∈ D ∩ Acc(D). Then
f continuous at x 0, ⇐⇒
∀(an ) ⊂ D, : an −→ x 0, =⇒ f (an ) −→ f (x 0 ).
Proof. — Exercise.
Continuity is an important qualitative property of functions. Of course not all the functions are continuous, but as we will
see
Theorem 4.2.5. All the elementary functions (powers, exponentials, logarithms, trigonometric and hyperbolic functions
with their inverses) are continuous where defined.
This is a good thing because it simplify what we need to memorize. We will use repeatedly this fact along this Chapter.
In some cases (powers with integer exponent, trigonometric functions) we will prove continuity in this Chapter, most of
cases (power with rational or real exponent, exponentials, logarithms, inverse of elementary functions) will be treated in
the next Chapter. We will anticipate here the use of these properties in order to have much material to work with, but
there’s no logical contradiction in the exposition.
In R it is possible to move leftwards or rightwards: this gives the definition of unilateral limit.
Definition 4.2.6. Let f : D ⊂ R −→ R, x 0 ∈ Acc(D ∩ [x 0, +∞[) ( 2 ) . We say that f (x) −→ ` ∈ R ∪ {±∞} as x −→ x 0 + if
∀ (an ) ⊂ D\{x 0 }, an −→ x 0 +, =⇒ f (an ) −→ `.
Notations: f (x 0 +) ≡ limx→x0 + f (x) := `.
A similar definition holds for
f (x 0 −) ≡ lim f (x).
x→x0 −
Warning! 4.2.7. Of course, x 0 ± are not points! Correspondently, f (x 0 ±) is not the value of f at x 0 ±!
Proposition 4.2.8. Let f : D ⊂ R −→ R, x 0 ∈ Acc(D ∩ [x 0, +∞[), Acc(D∩] − ∞, x 0 ]). Then
∃ lim f (x) = `, ⇐⇒ ∃ lim f (x) = `, ∃ lim f (x) = `.
x→x0 +
x→x0
x→x0 −
Proof. — Evident. Exercise.
Correspondingly, introducing the concept of continuity from the left/right at x 0 as
lim f (x) = f (x 0 ),
x→x0 −
lim f (x) = f (x 0 )
x→x0 +
we have: f is continuous at x 0 iff it is continuous from the right and from the left at x 0 .
2Hence, necessarily, x0 ∈ R. The assumption means that there exists (a n ) ⊂ D\{x0 } such that a n −→ x0 +.
(4.2.3)
67
4.3
Basic properties of limits and continuous functions
In this section the proofs will be given only in few cases to show the method to reduce everything to properties for limits
of sequences (on which everything is based). We start by uniqueness:
Theorem 4.3.1. Let f : D ⊂ R −→ R, x 0 ∈ Acc(D). If limx→x0 f (x) exists, it is unique.
Proof. — Suppose f has two limits ` 1, ` 2 . By (4.2.1) this means
∀ (an ) ⊂ D\{x 0 }, an −→ x 0, =⇒ f (an ) −→ ` 1 and f (an ) −→ ` 2 .
But limits for sequences are unique, so ` 1 = ` 2 .
Directly by (4.2.1) an important non existence criterion for the limit is the
Proposition 4.3.2. Let f : D ⊂ R −→ R, x 0 ∈ Acc(D). If
∃ (an ), (bn ) ⊂ D\{x 0 } : f (an ) −→ ` 1, f (bn ) −→ ` 2, ` 1 , ` 2,
then limx→x0 f (x) doesn’t exist.
Proof. — Evident.
Example 4.3.3.
@ lim sin x, @ lim cos x.
x→+∞
x→+∞
Sol. — Take an = 2nπ and bn = π2 + 2nπ. Clearly an, bn −→ +∞ but
sin an = sin(2nπ) = 0 −→ 0, sin bn = sin
π
2
+ 2nπ = 1 −→ 1.
Similarly for cos.
Example 4.3.4.
1
1
@ lim sin , @ lim cos .
x→0+
x→0+
x
x
1 −→ 0+, and 1 = π + 2nπ, that is
Sol. — Let see for cosine: take an in such a way that a1n = 2nπ, that is an = 2nπ
2
bn
1
bn := π +2nπ
−→ 0+. Then
2
cos
π
1
1
= cos(2nπ) = 1 −→ 1, cos
= cos
+ 2nπ = 0 −→ 0.
an
bn
2
The limit (and the continuity) depends on the behavior of the function in a neighborhood of a point.
Proposition 4.3.5. If f ≡ g on some Ix0 \{x 0 } (that is: f (x) = g(x) for any x ∈ Ix0 \{x 0 }) then
∃ lim f (x) = `, ⇐⇒ ∃ lim g(x) = `.
x→x0
x→x0
In particular, under the same hypotheses, f is continuous at x 0 iff g it is.
Proof. — Exercise.
68
y
I x0
x
x0
Figure 4.2: Two functions locally equal in x 0 .
Another way to see locality is the following: existence of a limit (or continuity) doesn’t propagate.
Example 4.3.6. Let f : R −→ R defined as


 x,
−x,
f (x) := 

 0,

x , 0, x ∈ Q,
x , 0, x < Q,
x = 0.
Then f is continuous only at x = 0.
Sol. — First: f is continuos at 0. Take (an ) ⊂ R\{0} with an −→ 0. Then f (an ) = ±an according to an ∈ Q or not. In any case,
| f (an )| = |an | −→ 0, =⇒ f (an ) −→ 0.
This proves that lim x→0 f (x) = 0 = f (0).
Now, consider x 0 , 0 and let see that @ lim x→x0 f (x). Indeed, take two sequences (an ), (bn ) in the following way
(an ) ⊂ Q, an −→ x 0, (bn ) ⊂ Qc, bn −→ x 0 .
The existence for both of them is guaranteed by the density of rationals and irrationals. So
f (an ) = an −→ x 0, f (bn ) = −bn −→ −x 0 , x 0 .
An important problem that connect, again, limit with continuity is the following:
Let f : D\{x 0 } −→ R, x 0 ∈ Acc(D). Under which conditions is it possible to define f also at x 0 in such a way that be a
continuous function at x 0 ? In other words: which value ` ∈ R we have to take in such a way that


 f (x),
fH : D −→ R, fH(x) := 

 `,

x ∈ D\{x 0 },
x = x 0,
be continuous at x 0 ?
The function fHis called continuous extension of f in x 0 . A typical example is the function
f : R\{0} −→ R, f (x) :=
sin x
, x , 0.
x
In general, in order that fHbe continuous at x 0 we need
lim fH(x) = fH(x 0 ) = `.
x→x0
69
But because fH ≡ f as x , x 0 , by locality the previous becomes
lim f (x) = `.
x→x0
Therefore: there’s a unique possible value for f to extend it continuously at x 0 , that is limx→x0 f (x). In the previous case,
because we will show that
sin x
lim
= 1,
x→0 x
the continuous extension of sinx x in x = 0 is possible giving the value 1.
Similarly for sequences, it holds
Proposition 4.3.7. Let f : D ⊂ R −→ R, x 0 ∈ Acc(D) such that there exists limx→x0 f (x). Then
• lim f (x) > 0, =⇒ ∃Ix0 : f (x) > 0, ∀x ∈ D ∩ (Ix0 \{x 0 }).
x→x0
• ∃Ix0 : f (x) > 0, ∀x ∈ D ∩ (Ix0 \{x 0 }), =⇒
lim f (x) > 0.
x→x0
Proof. — Let’s prove the case x 0 ∈ R (leaving x 0 = ±∞ as exercise).
First statement. Suppose it is false: then
∀ Ix0 , ∃x ∈ D ∩ (Ix0 \{x 0 }) : f (x) 6 0.
g
Taking, as usual, Ix0 = x 0 −
1
n , x0
+
1
n
f
, we can write
#
∀n ∈ N, ∃an ∈ D ∩
"
!
1
1
x0 − , x0 +
\{x 0 } , : f (an ) 6 0.
n
n
This means (an ) ⊂ D\{x 0 } and an −→ x 0 : therefore f (an ) −→ ` = lim x→x0 f (x) > 0. Because of permanence of sign for sequences,
we should have f (an ) > 0 definitively. But this is a contradiction!
Second statement. Suppose lim x→x0 f (x) < 0. Then, by the first statement, there exists Jx0 such that
f (x) < 0, ∀x ∈ D ∩ (Jx0 \{x 0 }).
But then, taking x ∈ Ix0 ∩ Jx0 (non empty: why?) we would have f (x) > 0 and f (x) < 0, which is impossible!
Remark 4.3.8. The two statements, as for sequences, are not completely specular. For instance, in the second we cannot write the
strong sing > because, for instance, f (x) = x 2 > 0 in I0 \{0} but lim x→0 x 2 = 0.
Corollary 4.3.9. Let f : D ⊂ R −→ R be continuous at x 0 ∈ D. The
• if f (x 0 ) > 0 there exists Ix0 such that f (x) > 0, ∀x ∈ D ∩ Ix0 .
• if there exists Ix0 such that f (x) > 0 ∀x ∈ D ∩ Ix0 then f (x 0 ) > 0.
Proof. — Just recall that lim x→x0 f (x) = f (x 0 ).
Theorem 4.3.10 (two policemen). Let f , g, h : D ⊂ R −→ R, x 0 ∈ Acc(D). Suppose that
i) f (x) 6 g(x) 6 h(x) ∀x ∈ D ∩ (Ix0 \{x 0 }).
70
ii) f (x) −→ `, h(x) −→ `, per x −→ x 0
Then g(x) −→ ` as x −→ x 0 .
Proof. — Exercise.
Example 4.3.11. Show that
lim
x→+∞
sin x
= 0.
x
Sol. — We have −1 6 sin x 6 1 for any x, therefore
−
1
sin x
1
6
6 , ∀x ∈]0, +∞[= I+∞ .
x
x
x
Now: − x1 and x1 are two policemen going to 0 as x −→ +∞ (evident). Therefore sinx x −→ 0.
As for sequences we deduce the rule bounded×null = null. Let’s introduce the
Definition 4.3.12. Se way that f : D ⊂ R −→ R is bounded on S ⊂ D if
∃ M > 0, : | f (x)| 6 M, ∀x ∈ S.
Definition 4.3.13. We say that f : D ⊂ R −→ R, is null at x 0 ∈ Acc(D) if
lim f (x) = 0.
x→x0
Corollary 4.3.14 (bounded×null=null). Let f , g : D ⊂ R −→ R, x 0 ∈ Acc(D), such that
i) f be bounded in some Ix0 \{x 0 };
ii) g be null at x 0 .
Then f · g is null at x 0 .
Proof. — Exercise.
Example 4.3.15. Show that
lim x sin
x→0+
1
= 0.
x
Sol. — Let f (x) := sin x1 , defined on R\{0} and g(x) := x defined on R, therefore both defined on D := R\{0}. Clearly 0 ∈ Acc(D), f
is bounded on D (hence in any I0 \{0}) and g is null at 0. Therefore f · g is null at 0, and this gives the conclusion.
Finally, as for sequences, monotone functions have always unilateral limit.
71
Theorem 4.3.16. Let f : D ⊂ R −→ R be an increasing function and x 0 ∈ Acc(D). Then
∃ f (x 0 −) = sup{ f (y) : y ∈ D, y < x 0 }, ∃ f (x 0 +) = inf{ f (y) : y ∈ D, y > x 0 },
and f (x 0 −) 6 f (x 0 +).
Proof. — Let’s prove the first. Set α := sup{ f (y) : y ∈ D, y < x 0 } and consider the case α ∈ R (α = ±∞ is left to the reader as
exercise). We have to check that
∀(an ) ⊂ D∩] − ∞, x 0 [, an −→ x 0, =⇒ f (an ) −→ α.
Fix ε > 0. Then, by definition of sup, there exists an element of { f (y) : y ∈ D, y < x 0 }, that is a value f (x ε ) for some x ε < x 0 , such
that
f (x ε ) > α − ε.
Now, being an −→ x 0 , definitively an > x ε , therefore, being f %,
f (an ) > f (x ε ) > α − ε, definitively.
But an < x, f (an ) 6 α, so we deduce | f (an ) − α| 6 ε, definitively, that is f (an ) −→ α.
Of course a dual statement holds for decreasing functions.
4.4
Rules of calculus
The rules of calculus for limits of functions works like the rules for limits of sequences.
Proposition 4.4.1. Let f , g : D ⊂ R −→ R, x 0 ∈ Acc(D) be such that
lim f (x) = ` 1 ∈ R, lim g(x) = ` 2 ∈ R.
x→x0
x→x0
Then
i) limx→x0 ( f (x) ± g(x)) = ` 1 ± ` 2 .
ii) limx→x0 f (x)g(x) = ` 1 ` 2 .
iii) if ` 2 , 0, limx→x0
f (x)
g(x)
=
`1
`2 .
Proof. — Exercise.
In terms of continuity we have immediately the
Corollary 4.4.2. Let f , g : D ⊂ R −→ R be continuous at x 0 ∈ D. Then f ± g and f · g are continuous at x 0 . Also
continuous at x 0 if g(x 0 ) , 0.
Proof. — Exercise.
Let’s see an immediate consequence of the Corollary.
f
g
is
72
Proposition 4.4.3. Polynomials are continuous functions on all R. Rational functions (ratios of polynomials) are
continuous functions where defined.
Proof. — Consider first powers: because x n = x · · · x n−times and of course being x ∈ C (R), clearly x n ∈ C (R). Again, cx n = c · x n
is the product between a constant c (which is of course C (R) and x n ∈ C (R), hence it is C (R). Finally, a polynomial is a sum of
p
monomials. If f = q where p, q are polynomials, we have f is continuous where q , 0, that is where f is defined.
In some cases it is possible to have rules also for infinite limits, as for sequence. Just like in that case, we will use directly
the short notations to present the rules:
Proposition 4.4.4.
(±∞) + ` = ±∞, (+∞) + (+∞) = +∞, (−∞) + (−∞) = −∞,
(+∞) · ` = sgn(`)∞, (` , 0), (−∞) · ` = −sgn(`)∞, (` , 0),
(+∞) · (+∞) = +∞, (+∞) · (−∞) = −∞, (−∞) · (−∞) = +∞,
`
= 0, (` ∈ R),
±∞
+∞
= sgn(`)∞, (` , 0),
`
+∞ −∞
=
= +∞,
0+
0−
+∞ −∞
=
= −∞.
0−
0+
−∞
= −sgn(`)∞, (` , 0).
`
Proof. — Exercise.
The sense of f (x) −→ 0+ is given by the
Definition 4.4.5. Let f : D ⊂ R −→ R, x 0 ∈ Acc(D). We say that f (x) −→ `+ (here, of course, ` ∈ R) as x −→ x 0 if
f (x) −→ `, x −→ x 0, and f > `, in some Ix0 \{x 0 }.
Are indeterminate forms:
(±∞) + (∓∞), (opposite signs), (±∞) · 0,
0 ±∞
,
.
0 ±∞
In addition, we also the following forms for the powers
1±∞, (0+) 0, (0+) +∞,
that are reduced to 0 · ∞ through
f (x) g(x) = eg(x) log f (x) .
Example 4.4.6. Compute
√
lim
x→0
√
1+x− 1−x
.
x
Sol. — As x −→ 0 clearly 1 + x, 1 − x −→ 1 (why? because they are polynomials so, they are
√ quantities and we can compute
√ continuous
limits just by taking the value at x = 0). Now: it seems natural that if 1 + x −→ 1 then 1 + x −→ 1 = 1. This is nothing but the
73
√
√
continuity of the square root at the point 1. We accept it, it will be proved in the next Chapter. Similarly 1 − x −→ 1 = 1, so that we
have a form 00 . As for sequences, in limits the important quantities are the biggest, and the way to measure "sizes" is the ratio. Now
r
r
√
1+x
1+x
1 √
=
−→
= 1, x −→ 0.
√
1−x
1
1−x
√
This should mean that none among 1 ± x is bigger than the other. To solve the problem we recur to an algebraic trick:
√
√
√
√
√
√
1+x− 1−x
1+x− 1−x 1+x+ 1−x
(1 + x) − (1 − x)
2x
= √
=
= √
√
√
√
√
x
x
1+x− 1−x
x 1+x+ 1−x
x 1+x+ 1−x
= √
4.4.1
2
2
−→ √
√
√ = 1.
1+x+ 1−x
1+ 1
Change of variable
A new important option for limits of functions is the change of variable. Often, computing a limit
lim f (x),
x→x0
it seems natural to introduce a new variable y = h(x). Generally, the new variable is connected with the old one by a
complete 1 − 1 mechanism, that is the relation y = h(x) is invertible
y = h(x), x = h−1 (y) =: g(y).
Suppose that
x −→ x 0, ⇐⇒ y −→ y0 .
Can we say
?
lim f (x) = lim f (g(y)).
x→x0
y→y0
Provided g(y) , x 0 as y ∈ Iy0 the answer is yes and it is consequence of the following
Proposition 4.4.7. Assume that
i) x 0 ∈ Acc(D( f )) and limx→x0 f (x) = `;
ii) y0 ∈ Acc(D(g)) and limy→y0 g(y) = x 0 ;
iii) g(y) ∈ Ix0 \{x 0 } for any y ∈ Iy0 \{y0 } for some neighborhood Iy0 .
Then ∃ limy→y0 f (g(y)) = `.
Proof. — Let’s check the definition: take (yn ) ⊂ D(g)\{y0 }, yn −→ y0 . By ii)
g(yn ) −→ x 0 .
Moreover, by iii), g(yn ) , x 0 definitively (because yn ∈ Iy0 \{y0 } definitively). But then, by i),
f (g(yn )) −→ `.
74
Remark 4.4.8. The iii) in the previous statement is fundamental. Indeed, let
0,



f : R −→ R, f (x) := 


 1,
x , 0,
g : R −→ R, g(y) = 0, ∀y ∈ R.
x = 0.
Clearly
lim f (x) = 0, ma lim f (g(y)) = lim f (0) = lim 1 = 1.
x→0
y→0
y→0
Example 4.4.9. Compute
y→0
√
lim
x→+∞
e
log x
√
x
.
√
p
∞ . Be careful:
Sol. — Clearly log x −→ +∞, so log x −→ +∞ hence e log x −→ +∞. In other words we have an indeterminate form ∞
√
√
√
√
√
e log x , x (someone could think that e log x = elog x = x,which is wrong!!). Let’s change variable setting
√
e log x
√
x
√
y:= log x−→+∞, y 2 =log x, x=e y
2
=
y2
ey
ey
= 2 = ey− 2 .
p
y
2
ey
e2
+∞·1
y2
y2
Because y − 2 = 2 1 − y2 −→ +∞,
√
√
y2
e log x y= log x−→+∞
=
lim ey− 2 −→ e−∞ = 0.
lim
√
y→+∞
x→+∞
x
The consequence of change of variable for continuity is that (roughly) composing continuous functions we get still
continuous functions. Precisely
Theorem 4.4.10. Let g be continuous at y0 , f be continuous at g(y0 ). Then f ◦ g is continuous at y0 .
Proof. — We have
lim f (g(y))
y→y0
x=g(y), y−→y0, x−→g(y0 )
=
lim
x→g(y0 )
f (x) = f (g(y0 )).
The requirement iii) is not required here because there’s no matter with values at the points y0 and x 0 .
Warning! 4.4.11. Be careful: the Thm gives only a sufficient condition for continuity of f ◦ g. It may happens, however,
that f ◦ g is continuous without f and g continuous. Consider
1



y,

g(y) := 


 0,

y , 0,
y = 0,
1


x,

f (x) := 


 0,
x , 0,
x = 0.
y,



=⇒ f ◦ g(y) = 


 0,
Clearly, none among f and g is continuous at 0. However f ◦ g is clearly continuous at 0.
y , 0,
y = 0,
75
4.5
Fundamental limits
As for sequences, treating with limits (and in particular with indeterminate forms) requires to compare different quantities.
The dominant one are the bigger, not strictly respect to the order but respect to the ratio. For this reason we will introduce
the
Definition 4.5.1. We say that
• f is lower order respect to g at x 0 (notations: f (x) = o(g(x)) or f (x) x0 g(x)) if
f (x)
−→ 0, x −→ x 0 .
g(x)
• f is asymptotic to g at x 0 (notation: f (x) ∼x0 g(x)) if
f (x)
−→ 1, x −→ x 0 .
g(x)
4.5.1
Comparison exponentials/powers/logarithms at +∞
It is clear that
x α −→ +∞, x −→ +∞, ∀α > 0, a x −→ +∞, x −→ +∞, ∀a > 1.
It is also easy to check that
x α +∞ x β, ∀α > β > 0, a x +∞ bx, ∀a > b > 1.
Indeed
!x
xβ
1
bx
b
b
= α−β −→ 0, (because α − β > 0), x =
−→ 0, (because 0 < < 1).
xα
a
a
a
x
More difficult is the mixed comparison. Similarly to sequences we have the
Theorem 4.5.2.
x x +∞ a x +∞ x α +∞ logb x, ∀a > 1, α > 0, b > 1.
Proof. — It is obtained by technical argument by the analogous one for sequences, omitted.
Example 4.5.3. Compute
lim
x→+∞
Sol. — Being x +∞ log x,
log x
x
1
log x
! logx x
.
−→ 0, and log1 x −→ 0+, we have the indeterminate form (0+) 0 . Notice that
1
log x
! log x
x
=e
log x
x
log
1
log x
= e−
(log x)(log(log x))
x
,
so we have to compute
lim
x→+∞
(log x)(log log x)
,
x
∞ . Changing variable,
which is a form ∞
(log x)(log log x) y=log x−→+∞ y log y
y 2 log y
=
= y
−→ 0, because ey +∞ y 2, y +∞ log y.
y
x
e
e
y
76
A classical limit often encountered in applications is the following
Proposition 4.5.4.
lim x α | log x| β = 0, ∀α, β > 0.
x→0+
Proof. — The limit presents as a form 0 · ∞. Changing variable,
α
lim x | log x|
β y=log x−→−∞
x→0+
=
lim e
y→−∞
αy
zβ
β
zβ
|y| β z=−αy−→+∞
1
=
lim αz = β lim z = 0.
|y| = lim −αy
z→+∞ e
y→−∞ e
α z→+∞ e
β
Example 4.5.5. Compute
1
x
lim
x→0+
!x
.
Sol. — We have a form (+∞) 0 , and, by continuity of the exponential,
!
1
1 x
= e x log x = e−x log x −→ e−0 = 1.
x
4.5.2
Trigonometric functions
In this subsection we will derive continuity of sin and cos and two fundamental limits associated to these functions. The
key tool is an inequality which evidence is the content of the next figure:
sin x
Precisely,
x
tan x
π
0 6 sin x 6 x 6 tan x, ∀x ∈ 0,
.
2
(4.5.1)
lim sin x = 0, lim cos x = 1.
(4.5.2)
Proposition 4.5.6. sin, cos ∈ C (R).
Proof. — First step: let’s prove that
x→0
x→0
The first follows immediately by (4.5.1). Indeed
0 6 sin x 6 x,
two policemen
=⇒
lim sin x = 0.
x→0+
On the other side
lim sin x
x→0−
therefore lim x→0 sin x = 0.
y=−x−→0+
=
lim sin(−y) = lim − sin y = 0,
y→0+
y→0+
77
x
1
cos x
2
The second is basically reduced to the first. Let’s write
1 − cos x = (1 − cos x)
1 − (cos x) 2
(sin x) 2
1 + cos x
=
=
.
1 + cos x
1 + cos x
1 + cos x
(4.5.3)
√
Now, as x ∈]0, π4 ] we have cos x > cos π4 = 22 = √1 (see figure).
2
Therefore, by (4.5.3) we have
√
π
(sin x) 2
2
2
.
0 6 1 − cos x 6
=
(sin
x)
,
∀x
∈
0,
√
4
1 + √1
2+1
2
By two policemen,
lim (1 − cos x) = 0, =⇒
x→0+
lim cos x = 1.
x→0+
Now,
lim cos x
x→0−
y=−x−→0+
=
lim cos(−y) = lim cos y = 1.
y→0+
y→0+
Conclusion: basically se proved the continuity at 0 of sin and cos. Take a generic x 0 now. By addition formula we have
sin x = sin(x 0 + (x − x 0 )) = sin x 0 cos(x − x 0 ) + sin(x − x 0 ) cos x 0 −→ sin x 0, x −→ x 0,
and similarly for cos.
Example 4.5.7.
π
tan ∈ C R\
+ kπ : k ∈ Z , cot ∈ C (R\ {kπ : k ∈ Z}) .
2
sin , so tan is continuous where cos , 0, that is on R\{ π + kπ : k ∈ Z}. Similarly for cot.
Sol. — We have tan := cos
2
Example 4.5.8. Find the domain D and the continuity set of
1
h(x) = cot .
x
1 , k ∈ Z, that is
Sol. — Clearly D(h) = {x ∈ R : x , 0, x1 , kπ, k ∈ Z}. So x , 0 and x , kπ
(
)!
1
D(h) = R\ {0} ∪
: k ∈ Z, k , 0 .
kπ
About continuity: h is the composition of
f :=]−1
1 g:=cot
1
7−→ cot .
x
x
Therefore, by the composition rule: if f is continuous at x 0 (and this requires x 0 , 0) and g = cot is continuous at f (x 0 ) = x10 (and
here we need x10 , kπ) then h = g ◦ f is continuous at x 0 . We deduce that h is continuous on its domain.
x 7−→
78
In particular: sin is null at 0, as well as 1−cos. But how do they vanish? The point is to compare them with the fundamental
quantities vanishing at 0, that is powers x n . It turns out that
Theorem 4.5.9.
sin x
1 − cos x 1
= .
= 1, lim
x→0 x
x→0
2
x2
lim
In other language: sin x ∼0 x and cos x ∼0 1 −
(4.5.4)
x2
2 .
g
f
Proof. — By (4.5.1) we have, as x ∈ 0, π2 ,
0 6 sin x 6 x 6 tan x, ⇐⇒ 0 6 sin x 6 x 6
that is
cos x 6
x
1
sin x sin x>0
, ⇐⇒ 0 6 1 6
6
,
cos x
sin x
cos x
π
sin x
6 1, ∀x ∈ 0,
.
x
2
As x −→ 0+, by two policemen we have
lim
x→0+
On the other side
sin x
= 1.
x
sin x y=−x−→0+
sin(−y)
sin y
=
lim
= lim
= 1,
x
−y
x→0−
y→0+
y→0+ y
lim
and this proves the first of (4.5.4). About the second we have
sin x 2
1 − (cos x) 2
1
1
1 − cos x
1 − cos x 1 + cos x
=
−→ , x −→ 0.
=
=
1 + cos x
x
1 + cos x
2
x2
x2
x 2 (1 + cos x)
Finally
cos x
1−
x2
2
=
− cos x
x2
2
−1
=
1 − cos x − 1
x2
2
−1
=
1 − cos x x 2
1
1 0
1
− 2
−→
−
= 1.
x2 − 1
x −1
2 −1 −1
x2
2
2
Example 4.5.10. Compute
lim
x→0
(1 − cos(3x)) 2
.
x 2 (1 − cos(x))
Sol. — It is easy to recognize a form 00 (use continuity of cos). We manipulate the expression to reduce it to known limits:
(1 − cos(3x)) 2
x 2 (1 − cos(x))
#
"
# "
#
"
1 − cos(3x) 2 1
1
(3x) 4 1 − cos(3x) 2 1 − cos(x) −1
= (3x) 2
=
(3x) 2
x 2 x 2 1−cos(x)
x4
(3x) 2
x2
x2
"
= 34
# "
#
1 − cos(3x) 2 1 − cos(x) −1
.
(3x) 2
x2
Now
lim
x→0
so
1 − cos(3x) y=3x−→0
1 − cos y 1
= ,
=
lim
2
y→0
(3x) 2
y2
! 2 ! −1
(1 − cos(3x)) 2
1
81
4 1
=
3
=
.
2
2
2
x→0 x 2 (1 − cos(x))
lim
(4.5.5)
79
4.5.3
Exponentials, logarithms and powers at 0
We start by the extension of the limit for the Napier number e:
Proposition 4.5.11.
lim
x→+∞
1
1+
x
!x
= e,
lim
x→−∞
In particular
lim
x→+∞
1+
1
1+
x
!x
= e, lim (1 + x) 1/x = e.
x→0
a x
= e a, ∀a ∈ R.
x
(4.5.6)
(4.5.7)
Proof. — The first is obtained by technical arguments on the limit (2.8.1), we omit this part because it doesn’t add any new
comprehension. For the second
!
!
!
!
y − 1 −y
y y
1 x y=−x−→+∞
1 −y
= lim
= lim
=
lim 1 −
lim 1 +
y→+∞
y→+∞ y − 1
y→+∞
x→−∞
x
y
y
= lim
y→+∞
1+
!
!
!
!
1 y−1
1
1 z
1
z=y−1−→+∞
1+
=
lim 1 +
1+
= e · 1 = e.
z→+∞
y−1
y−1
z
z
For the third
lim (1 + x)
1
1/x y= x −→+∞
x→0+
lim (1 + x) 1/x
x→0−
=
y= x1 −→−∞
=
!
1 y
= e,
lim 1 +
y→+∞
y
lim
y→−∞
1+
!
1 y
= e,
y
e da ciò segue la conclusione. Infine, se a > 0 per esempio
!
! !a
1 y
a x y= ax −→+∞
1 ay
=
lim 1 +
.
= lim
1+
lim 1 +
y→+∞
y→+∞
x→+∞
x
y
y
y
Now: 1 + y1 −→ e. By continuity of powers (see next Chapter, of course the proof won’t depend on this result)
! !a
1 y
1+
−→ e a .
y
Assuming continuity for powers, exponentials and logarithms (see next Chapter) we have
Corollary 4.5.12.
log(1 + x)
ex − 1
(1 + x) α − 1
= 1, lim
= 1, lim
= α, ∀α , 0.
x→0
x→0
x→0
x
x
x
In other words: log(1 + x) ∼0 x, e x ∼0 1 + x and (1 + x) α ∼0 1 + αx.
lim
Proof. — For the first
lim
x→0
For the second
log(1 + x)
1
= lim log(1 + x) = lim log(1 + x) 1/x = log e = 1.
x
x→0 x
x→0
e x − 1 y=e x −1−→0
y
1
=
lim
= lim log(1+y) = 1.
x
x→0
y→0 log(1 + y)
y→0
lim
y
Finally,
(1 + x) α − 1
eα log(1+x) − 1 y=α log(1+x)−→0
ey − 1
ey − 1 y/α
= lim
=
lim y/α
= lim
α = 1 · 1 · α = α.
x
x
x→0
x→0
y→0 e
− 1 y→0 y ey/α − 1
lim
(4.5.8)
80
Example 4.5.13. Compute
log(1 + (sin x) 2 )
.
x→0
1 − cos x
lim
Sol. — As x −→ 0 sin x −→ 0, so 1 + (sin x) 2 −→ 1, and log(1 + (sin x) 2 ) −→ log 1 = 0 (continuity of logarithm). Also the
denominator goes to 0, so we have a form 00 . The numerator is of type log(1 + y) with y −→ 0. Recalling (4.5.8) and (4.5.4) it’s natural
to write
log(1 + (sin x) 2 ) log(1 + (sin x) 2 ) (sin x) 2
x2
=
.
1 − cos x
(sin x) 2
x 2 1 − cos x
Now
log(1 + y)
log(1 + (sin x) 2 ) y=(sin x) 2 −→0
=
lim
lim
= 1,
y
y→0
x→0
(sin x) 2
therefore
x2
1
log(1 + (sin x) 2 ) log(1 + (sin x) 2 ) (sin x) 2
=
−→ 1 · 12 ·
= 2.
2
2
1 − cos x
1
−
cos
x
1/2
(sin x)
x
Example 4.5.14. Compute
2
lim (cos x) 1/x .
x→0
Sol. — We have a form 1+∞ . Transforming into an exponential
2
1
(cos x) 1/x = e x 2
log(cos x)
,
so we are reduced to compute
log (1 + (cos x − 1))
log (1 + (cos x − 1)) cos x − 1
log(cos x) 00
= lim
= lim
.
cos x − 1
x→0
x→0
x→0
x2
x2
x2
lim
Noticed that
log 1 + y
log (1 + (cos x − 1)) y=cos x−1−→0
=
lim
= 1,
cos x − 1
y
x→0
y→0
lim
we have
!
log (1 + (cos x − 1)) cos x − 1
1
1
−→
1
·
−
=− .
cos x − 1
2
2
x2
By continuity of the exponential we have
2
1
(cos x) 1/x = e x 2
Example 4.5.15. Compute
log(cos x)
−→ e−1/2 .
√5
√
1 + 3x 4 − 1 − 2x
lim √3
.
√
x→0
1+x− 1+x
Sol. — Easily we have o form 00 . Writing roots as powers
√5
√
1 + 3x 4 − 1 − 2x
√
√3
1+x− 1+x
(1 + 3x 4 ) 1/5 − 1 − (1 − 2x) 1/2 − 1
(1 + 3x 4 ) 1/5 − (1 − 2x) 1/2
=
= (1 + x) 1/3 − (1 + x) 1/2
(1 + x) 1/3 − 1 − (1 + x) 1/2 − 1
=
1/2 −1
(1+3x 4 ) 1/5 −1 4
3x − (1−2x)
(−2x)
−2x
3x 4
1/3
1/2
(1+x) −1
x − (1+x)x −1 x
x
=
(1+3x 4 ) 1/5 −1 3
3x
3x 4
1/3
(1+x) −1
x
1/2 −1
− (1−2x)
(−2)
−2x
1/2
− (1+x)x −1
81
Now
(1 + y) 1/5 − 1 1
(1 + 3x 4 ) 1/5 − 1 y=3x 4
=
lim
= ,
y
5
y→0
x→0
3x 4
lim
and similarly
(1 − 2x) 1/2 − 1
1 (1 + x) 1/3 − 1
1 (1 + x) 1/2 − 1
1
−→ ,
−→ ,
−→ .
−2x
2
x
3
x
2
Gluing all togheter
4.6
√5
√
1 · 0 − 1 (−2)
1 + 3x 4 − 1 − 2x
−→ 5 1 21
= 6.
√
√3
1+x− 1+x
3 − 2
Hyperbolic functions
It’s time to introduce two new functions:
sinh x :=
e x − e−x
e x + e−x
, x ∈ R, (hyperbolic sin), cosh x :=
, x ∈ R, (hyperbolic cosine).
2
2
Despite they do not seem to be relatives of sin and cos, there’re several analogies that suggest the contrary. Actually, all
these functions are the same when the variable is a complex number. Without to into this level, we can anyway see the
analogies. First: sinh and cosh are called hyperbolic functions because of the following easy remarkable identity
(cosh x) 2 − (sinh x) 2 = 1, ∀x ∈ R.
(4.6.1)
This means that the point (cosh x, sinh x) belongs to the hyperbola ξ 2 − η 2 = 1 in the plane ξη. The first important
properties of sinh and cosh are contained into the
Proposition 4.6.1. It holds
i) cosh 0 = 1, sinh 0 = 1.
ii) cosh is even, sinh is odd.
iii) cosh and sinh fulfill addition formulas similar to that one for sin and cos.
sinh(x + y) = sinh x cosh y + sinh y cosh x, cosh(x + y) = cosh x cosh y − sinh x sinh y, ∀x, y ∈ R.
iv) cosh x, sinh x ∼+∞
ex
2 ,
cosh x ∼−∞
e−x
2 ,
−x
sinh x ∼−∞ − e 2 .
v) remarkable limits:
lim
x→0
sinh x
cosh x − 1 1
= 1, lim
= .
x→0
x
2
x2
In particular: sinh x ∼0 x and cosh x ∼0 1 +
x2
2 .
Moreover sinh, cosh ∈ C (R).
Proof. — i),ii) and iii) are easy checked by direct inspection. About iv) we have
ex
ex 


1 + e−2x ∼ , because 1 + e−2x −→ 1, x −→ −∞.


2
 2
e x + e−x 
cosh x =
=

2

e−x

e−x 


1 + e2x ∼
, because 1 + e2x −→ 1 x −→ −∞.
 2
2
82
cosh x
sinh x
1
Similarly for sinh. About v),
sinh x
e x − e−x
e2x − 1
=
= e−x
−→ 1, x −→ 0, by (4.5.8).
x
2x
2x
Inoltre
2
cosh x − 1 cosh x − 1 cosh x + 1 (cosh x) 2 − 1
1
1
1
(4.6.1) (sinh x)
=
=
=
−→ .
2
2
2
2
cosh x + 1
cosh x + 1
cosh x + 1
2
x
x
x
x
About continuity, finally, this depends on the continuity of exponentials. Indeed:
!
1 x
1
e x + e−x
=
e + x .
cosh x =
2
2
e
Accepting e x ∈ C (R), then e1x ∈ C (R), hence also cosh. Similar argument for sinh.
Example 4.6.2. Compute
log(2 − cos x)
x→0 cosh x − 1
lim
Sol. — The limit is a form 00 : cos x −→ 1, hence 2 − cos x −→ 1, so log(2 − cos x) −→ log 1 = 0; the denominator goes to
cosh 0 − 1 = 1 − 1 = 0.
log(2 − cos x) log(1 + (1 − cos x))) log(1 + (1 − cos x)) 1 − cos x
x2
1
=
=
−→ 1 · · 2 = 1.
2
cosh x − 1
cosh x − 1
1 − cos x
cosh x − 1
2
x
4.7
Exercises
Exercise 4.7.1. For any of the following statements say if they are true or false, explaining you answer.
1. Acc(N) = ∅.
2. Acc(R) = R.
3. Let (an ) ⊂ R such that an −→ ` and let S := {an : n ∈ N} be the set formed with points of the sequence. Then ` ∈ Acc(S).
4. 0 ∈ Acc(Q).
5. if x ∈ Acc(S) then x < Acc(S c ).
6. if S is finite then Acc(S) = ∅.
7. if S is infinite then Acc(S) , ∅.
Exercise 4.7.2. Let S := {2n : n ∈ Z}. Then Acc(S) =
∅. 2n : n ∈ Z, n < 0 . {0}. {0, +∞}.
Justify carefully your answer, explaining why one is true and the other are false.
83
(
)
Exercise 4.7.3. Let S := (−1) n n1 : n ∈ Z . Then Acc(S) =
∅. [0, 1]. {0}. {−1, 0, 1}.
Justify carefully your answer, explaining why one is true and the other are false.
Exercise 4.7.4. Let S := [a, b] with a < b. Then Acc(S) =
[a, b]. ]a, b[. R. none of the previous.
Justify carefully your answer, explaining why one is true and the other are false.
(
)
Exercise 4.7.5. Let S := (1 + cos(nπ)) n+1
n−1 : n ∈ N, n > 2 . Then Acc(S) =
{0, 1, 2}. {0, 1}. {0, 2}. {2}.
Justify carefully your answer, explaining why one is true and the other are false.
Exercise 4.7.6. Let S := {n + 2n : n ∈ Z}. Then Acc(S) =
{−∞, +∞}. {0, +∞}. {−∞, 0, +∞}. {−∞, 0}.
Justify carefully your answer, explaining why one is true and the other are false.
Exercise 4.7.7. Let x 0 ∈ R be an accumulation point for S. Then
if x 0 ∈ S then x 0 < Acc(S c ).
if x 0 ∈ S c then x 0 ∈ Acc(S c ).
if [x 0 − r, x 0 + r] ⊂ S for some r > 0 then x 0 < Acc(S c ).
none of previous answers.
Justify carefully your answer, explaining why one is true and the other are false.
Exercise 4.7.8. Classify eventual indeterminate forms and compute:
1.
x 3 − 5x + 1
.
x→−∞
x+2
lim
4.
lim
x→−∞
p
2x + 4x 2 + x .
√
2x 2 + 3
7. lim
x→−∞ 4x + 2
10.
|x − 2|
.
x→2+ (x 2 + 1)(x − 2)
lim
√
√
1 + sin x − 1 − sin x
13. lim
.
x→π
sin x
2.
5.
3x − 2
lim √
.
√
x→+∞ 4x + 1 + x + 1
x+1
lim √
.
6x 2 + 3 + 3x
√3
√
x4 + 1 + x5 + x2
.
8. lim √4
√
x→+∞ x 9 − 2x − 1 − 5 x 7 − x 5
11.
14.
x→−1
1
.
x→0− 1 + x + |x |
x
lim
lim
x→+∞
√
3.
6.
√
√
2− x
.
x→2+ x − 2
lim
p
lim ( 9x 2 + 1 − 3x).
x→+∞
√
9.
lim r
x→+∞
x
.
q
√
x+ x+ x
x2 − 1
.
x→1 x 3 − 1
12. lim
√ x + 1 − x x.
Exercise 4.7.9. Determine a, b ∈ R in such a way that f ∈ C (R) where
sin x,



1. f (x) := 
ax 2 + b,


 3,
x < π2 ,
π 6 x < 2,
2
x > 2.
−2 sin x,



a sin x + b,
2. f (x) := 


 cos x,
x < − π2 ,
− π2 6 x < π2 ,
x > π2 .
84
Exercise 4.7.10. For any of the following functions i) find the domain D; ii) the subset S ⊂ D where the function is continuous; iii) if
S ( D, is it possible to extend f to all D in a continuous way?
cos π2 x
x2 − 1
sin x
tan(2x)
1.
.
,
2.
.
3.
.
4.
x−1
x
tan x
x2 − 1
|x|
.
x
5.
√
9.
1
.
1 + x + |xx |
6.
x 4 − 3x 2 + 2
.
x
10.
x log x
.
x−1
7. sin π
11. log
x
.
|x|
p
8. x log |x|.
1 + x2 − x .
x
12. xe x−1 .
!
ex
.
e2x − e x + 1
√
Exercise 4.7.11. Discuss continuity of f : R −→ R, f (x) := [x] + x − [x].
√
13.
x + 1 − 2x + 1.
14. sin
Exercise 4.7.12. Is the function
f (x) := e1/x sin x, x , 0,
extendable by continuity at x = 0?
Exercise 4.7.13. Let f : R\{0} −→ R defined as

a cos x + log(1 − x),




f (x) := 

 sinh 2x − a cos √1 ,
x +1
x

x < 0,
x > 0.
Are there values of a ∈ R such that f is extendable by continuity at x = 0?
Exercise 4.7.14 (?). Find a, b ∈ R such that
lim
x→+∞
p
x 4 + x 2 + 1 − (ax 2 + bx) ∈ R.
Exercise 4.7.15. Order in increasing way (with respect to +∞ ) the following quantities:
1+ √ 1
√
2x
x
log x
22 , x 2 , x x, 2log x+log(log x), x
.
Exercise 4.7.16. Compute
1.
4.
7.
lim
x→+∞
e x + e2x
2x + e x
lim √
x→+∞
2
.
2.
log x + x 2
.
x + x log x + x 2
log x − 1
lim √
x→0+ x log4 x
5.
8.
x
xx
10. lim
.
x→0+ x
lim x x .
x→0+
lim
x→+∞
x log x
.
(log x − 1) 2
2
x log(x ) + cos x
.
x→+∞
4x + π x
lim
2
1
11. lim e−x log x
x→0+ x
√
xe x − e2 x +1
13. lim
x→+∞
e2x + x 4
2
3.
6.
9.
lim x 1/x .
x→+∞
lim
x→+∞
log(log(log x))
, (α > 0).
(log(log x)) α
(log x) x + sin(x x )
.
x→+∞
6 x + x 2 log x
lim
!
log x 1/x
12. lim
.
x→+∞
x
85
Exercise 4.7.17. Reducing to fundamental limits compute the following limits:
1 − cos x
x.
x→0 1 − cos 2
1. lim
√
4. lim
x→0
x2 − x + 1 − 1
sin x
esin(3x) − 1
.
x→0 sinh(2x)
7. lim
10. lim x
2
x−1
x→1
16.
lim
.
1
lim e x tan x.
x→0+
19.
lim
x→+∞
1+
arctan x x
.
x
log(1 + x)
.
tan x
x→0
6. lim
cos x − cos(2x)
.
1 − cos x
x→0
9.
!
x+2 x
.
x→+∞ x + 1
12.
!
1 tan x
.
x→0+ x
14.
17.
x→+∞
!
x+1 x
.
x2
xx − 1
.
x→1+ (log x − x + 1) 2
31.(?) lim
.
x
18.
x→ 2
x→1
!
x+1 x
.
x2
√
1 + sin(3x) − 1
.
x→0 log(1 + tan(2x))
24. lim
√4
cos x − 1
x→0 log(1 + x sin x − x 3 )
!x
lim
32.(?) lim (1 − e x ) sin x .
x→0−
x→+∞
1+
21. lim
log(2 − cos(2x))
.
x→0
x2
x2 + x + 1
x→+∞
x2 + 1
lim
√
1 + x + x2 − 1
.
sin(4x)
x→0
log cos x
20. lim √4
.
x→0 1 + x 2 − 1
29.
lim
limπ (1 + cos x) tan x .
x→0+
26. lim
1+
x
15. lim x x−1 .
√
1 + x2 − 1
.
x→0 1 − cos x
25. lim
! log x
lim
lim (log x)(log(1 − x)).
23. lim
lim
x3
.
x→0 tan x − sin x
5. lim
esin(3x) − 1
.
x
x→0
22. lim
28.
sin(πx 2 )
.
x→1 x − 1
8. lim
.
log x
x→+∞
3. lim
1
11. lim
x→+∞ log x
1
x xx −1
13.
1 − (cos x) 3
.
x→0 (sin x) 2
2. lim
27.
log(1 + 2x − 3x 2 + 4x 3 )
.
x→0 log(1 − x + 2x 2 − 3x 3 )
lim
(e2x +x − 1) sin x
.
√3
x→0
1 + x3 − 1
2
.
30. lim
3
86
Chapter 5
Continuity
Continuity arises as a local property that connect the behavior of a function f in a neighborhood of a point x 0 ∈ D( f ) to
the value of f at x 0 , that is
f continuous at x 0, ⇐⇒ lim f (x) = f (x 0 ).
x→x0
If a function f is continuous at any point of its domain, then it turns out that this has global consequences, that is it implies
property concerning the full graph of a function. These properties concern important problems as the existence of extrema
(min/max) for a function, zeroes, invertibility, and this is the main goal of this Chapter.
5.1
Weierstrass Theorem
Weiestrass Thm concerns minimum and maximum of a continuous functions on a closed and bounded interval. Let first
introduce the formal
Definition 5.1.1. Let f : D ⊂ R −→ R. We say that x max ∈ D is a global maximum point for f on D if
f (x max ) > f (x), ∀x ∈ D.
Similarly we define global minimum points.
To say that f has global min/max points is equivalent to say that the set of values assumed by f
f (D) := { f (x) : x ∈ D} ,
has min/max as subset of R. In this case we pose
min f := min { f (x) : x ∈ D} , max f := max { f (x) : x ∈ D} ,
D
D
Of course, if x min is a global minimum, f (x min ) = minD f .
One of the most beautiful and important results of the entire Mathematics is undoubtedly the
Theorem 5.1.2 (Weierstrass). Any continuous function on a closed and bounded interval has global min/max on it.
Proof. — The proof is subtle and based on Bolzano–Weiestrass Thm 2.4.2.
First step: f ([a, b]) is bounded. Suppose, on the contrary, that f ([a, b]) is unbounded, for instance from above,
@ K > 0, : f (x) 6 K, ∀x ∈ [a, b], ⇐⇒ ∀K > 0, ∃x ∈ [a, b], : f (x) > K .
87
88
Take K = n ∈ N and call x n ∈ [a, b] such that f (x n ) > n. Then (x n ) ⊂ [a, b] and by the Bolzano–Weierstrass Thm 2.4.2 there exists
(x nk ) ⊂ (x n ) such that x nk −→ ξ. But then
+∞ ←− f (x nk ) −→ f (ξ), (by continuity), =⇒ f (ξ) = +∞,
which is a nonsense being f (ξ) ∈ R.
Second step: existence of min / max. Let’s consider the case of max. Call
M := sup { f (x) : x ∈ [a, b]} .
Being f bounded, M < +∞. Let’s prove that there exists x max ∈ [a, b] such that f (x max ) = M: if this is true we are done! Because
M is the sup,
1
∀n ∈ N, ∃x n ∈ [a, b], : M − 6 f (x n ) 6 M.
n
Again: (x n ) ⊂ [a, b], by Bolzano–Weierstrass there exists (x nk ) ⊂ (x n ) such that x nk −→ x max ∈ [a, b]. Then f (x nk ) −→ f (x max )
so
M 6 f (x max ) 6 M, =⇒ f (x max ) = M.
It is easy to convince that all the hypotheses ( f continuous, the interval being closed and bounded) play a fundamental
role and if some of them is lacking the Thm is false. Convince yourself by graphic examples! Another thing worth to be
mentioned is the fact that the Weierstrass Thm doesn’t give a real method to find min / max. To do this we will need to
have the Differential Calculus.
5.2
Zeroes theorem and intermediate values theorem
Walking from the top of a mountain to the bottom of the sea we will have, sooner or later, to enter in the water.
Theorem 5.2.1 (Bolzano). Let f ∈ C ([a, b]) such that f (a) and f (b) have opposite signs. Then
∃ c ∈ [a, b] : f (c) = 0.
Proof. — This proof, differently by that one for the Weierstrass Thm, is constructive, in the sense that it gives a method to look for
zeroes. Suppose, for instance, f (a) < 0 < f (b). Consider the mean point between a and b, that is a+b
2 . We have the following
alternative:
i) f a+b
= 0, we are lucky and the proof is finished.
2
a+b
ii) f
> 0: we restrict our search to the interval [a, a+b
2
2 ] =: [a1, b1 ].
< 0: we restrict our search to the interval [ a+b
iii) f a+b
2
2 , b] =: [a1, b1 ].
In cases ii) iii), setting a0 = a and b0 = b, we have
b − a0
a0 6 a1 6 b1 6 b0, b1 − a1 = 0
, f (a1 ) < 0 < f (b1 ).
2
Let’s repeat the argument on the interval [a1, b1 ]. Consider again the mean point: either we found a zero, or there’s a new interval
[a2, b2 ] such that
b − a1
b −a
a0 6 a1 6 a2 6 b2 6 b1 6 b0, b2 − a2 = 1
= 0 2 0 , f (a2 ) < 0 < f (b2 ).
2
2
Iterating the procedure we have the following alternative: either we stop after a certain number of steps because we found a zero, or we
will never end, constructing and infinite family of intervals [an, bn ] such that
b −a
a0 6 . . . 6 an−1 6 an 6 bn 6 bn−1 6 . . . 6 b0, bn − an = 0 n 0 , f (an ) < 0 < f (bn ).
2
89
Being an % and bn & they both have a limit (resp. a 6 b) and because
b −a
0 6 b − a 6 bn − an 6 0 n 0 −→ 0, =⇒ a = b.
2
Moreover, by continuity and permanence of sign
f (a) = lim f (an ) 6 0, f (b) = lim f (bn ) > 0,
n
n
and because f (a) = f (b) it must be f (a) = f (b) = 0.
Immediately we have the
Corollary 5.2.2 (intermediate values thm). Continuous functions maps intervals into intervals, that is: f (I) is an interval
for any I interval. In particular
if α, β ∈ f (I), I interval, =⇒ [α, β] ⊂ f (I).
Proof. — Let f ∈ C (D) I ⊂ D interval. Suppose, by contradiction, that J := f (I) is not an interval. There exists α < y < β with
α, β ∈ J but y < J. Because α, β ∈ J = f (I), α = f (a) and β = f (b) for some a, b ∈ I. Suppose for instance a < b (otherwise
exchange them) and let
g : [a, b] ⊂ I −→ R, g(x) := f (x) − y.
Clearly g ∈ C ([a, b]), g(a) = f (a) − y = α − y < 0, g(b) = f (b) − y = β − y > 0: by zeroes Thm there exists c ∈ [a, b] ⊂ I such that
g(c) = 0, that is f (c) = y: contradiction!
5.3
The theorem of continuous inverse
We know that monotonicity plays a special role in Analysis. Also in relation with continuity we have important properties.
The most important is the following
Theorem 5.3.1. Any monotone and surjective function between intervals is necessarily continuous. Precisely: if f : I −→
J = f (I) with I, J intervals is monotone, then f ∈ C (I).
Proof. — Let f : I −→ J, for instance f %, I, J ⊂ R intervals with f (I) = J. Suppose that f is not continuous at some x 0 ∈ I, that
is
lim f (x) , f (x 0 ).
x→x0
Notice that, being f %
∃ f (x 0 −) = lim f (x) = sup{ f (x) : x < x 0 }, f (x 0 +) = lim f (x) = inf{ f (x) : x > x 0 },
x→x0 +
x→x0 −
and
f (x 0 −) 6 f (x 0 ) 6 f (x 0 +).
To assume that f is discontinuous means that at least one of the two previous inequalities is strict. The next figure shows the situation.
It is clear that a break should be present into the graph and this should discover points on the y axis which are not ordinates for points
on the graph. In other words also the image f (I) should be disconnected, but this contradicts the assumption that f (I) is an interval.
Let’s argument precisely. To assume f discontinuous implies
f (x 0 −) < f (x 0 +).
Now, consider the interval ] f (x 0 −), f (x 0 +)[. If x < x 0 , by monotonicity f (x) 6 f (x 0 −) and as x > x 0 we have f (x) > f (x 0 +). It
follows that
] f (x 0 −), f (x 0 +)[⊂ J = f (I).
90
f Hx0 L
y
f Hx0 -L
J
I
x0
But J is supposed to be interval, so any value of ] f (x 0 −), f (x 0 +)[ has to be f (x) for some x. But the unique x possible, by what we
have seen above, is x = x 0 . So there’s just a value in ] f (x 0 −), f (x 0 +)[ belonging to f (I), and this is a contradiction!
Corollary 5.3.2. Recalling properties of elementary functions we have that: powers, exponentials and logarithms are
continuous where defined.
Proof. — For instance: expa :] − ∞, +∞[−→]0, +∞[ is increasing for a > 1, decreasing for a < 1, surjective in any case: it is therefore
continuous. And so on.
Combining intermediate values thm with monotone functions on intervals thm we get the
Corollary 5.3.3. Let f : I −→ J = f (I) strictly monotone and continuous on I interval. Then:
∃ f −1 : J −→ I, f −1 ∈ C (J).
Proof. — Being f continuous and I interval, f (I) =: J is an interval. Moreover f is strictly monotone, hence injective. Therefore
f : I −→ f (I) = J is bijective, so there exists
f −1 : J −→ I.
Clearly f −1 has the same monotonicity of f (easy), and by definition is surjective among intervals: by Thm 5.3.1 it is continuous.
On this result is based the construction of inverse functions for elementary functions.
Arcsin and arccos
Globally sin and cos are not invertible functions. But when we restrict them to specific intervals we can give a meaning
to inversion. Consider for instance
π π
f := sin : − ,
−→ [−1, 1].
2 2
f
g
Looking at the geometrical meaning of sin we see that f % strictly and f − π2 , π2 = [−1, 1]: by the continuous inverse
mapping thm,
π π
arcsin := sin−1 : [−1, 1] −→ − ,
, arcsin ∈ C ([−1, 1]).
2 2
Similarly is defined arccos, inverting cos : [0, π] −→ [−1, 1].
91
y
y
Π
Π
2
1
-1
x
Π
2
-
Π
2
-1
1
x
Figure 5.1: arcsin (left) and arccos (right).
Arctan
Also the tangent, like sine and cosine, is not an invertible function. However, if we restrict to a period here we get a nice
function for the inversion. Precisely, let
π π
tan : − ,
−→] − ∞, +∞[.
2 2
f
f
sin
% strictly. Being tan(−x) = − tan x we have
On 0, π2 we have sin % strictly and cos & strictly: it is clear that tan = cos
g
g
g
f
π
π π
the same behavior on − 2 , 0 . The moral is that tan % strictly on − 2 , 2 . Setting
π π J := tan − ,
=] − ∞, +∞[,
2 2
by the continuous inverse mapping thm
π π
arctan := tan−1 :] − ∞, +∞[−→ − ,
, arctan ∈ C (R).
2 2
Clearly arctan 0 = tan−1 0 = 0, and
lim arctan x
y=arctan x, x=tan y
x→−∞
=
π
limπ y = − ,
2
y→− 2
lim arctan x =
x→+∞
π
.
2
y
Π
2
x
-
Π
2
Figure 5.2: arctan.
Arcsh and arccosh
The function sinh : R −→ R is strictly increasing how can be easily seen. Its inverse is called hyperbolic arcsin
arcsinh : R −→ R. By the continuous inverse mapping thm it is continuous. We can also derive an analytic expression
for arcsinh :
arcsinh y = x, ⇐⇒ sinh x = y, ⇐⇒
e x − e−x
1
= y, ⇐⇒ e x − x = 2y, ⇐⇒ e2x − 2ye x − 1 = 0.
2
e
92
This gives
p
q
2y ± 4y 2 + 4
= y ± y 2 + 1.
e =
2
x
Now, y −
p
y 2 + 1 is always negative (evident for y < 0, easy for y > 0) so the unique possibility is
!
q
2
arcsinh y = log y + y + 1 .
(5.3.1)
Similarly, the hyperbolic cosine is not globally invertible on R, but cosh : [0, +∞[−→ [1, +∞[ is strictly increasing and
surjective (easy): by the continuous inverse mapping thm it is well defined and continuous the function hyperbolic arc
cosine arccosh : [1, +∞[−→ [0, +∞[. In this case
!
q
(5.3.2)
arccosh y = log y + y 2 − 1 .
x
x
y
y
Figure 5.3: arcsinh (left) and arccosh (right).
5.4
Exercises
Exercise 5.4.1. Say if the equation x 2 + 2 = e x has solutions and in which number.
Exercise 5.4.2. Consider the equation 3x 3 − 8x 2 + x + 3 = 0. Show that there’re three solution respectively in ] − ∞, 0[, ]0, 1[ and
1 .
]1, +∞[. Determine that one in ]0, 1[ with an approximation of 10
Exercise 5.4.3. Let f ∈ C (R). Suppose that sin f (x) = 1 for all x ∈ R. What can you deduce on f ? And if you know that f (0) = π2 ?
Exercise 5.4.4. Let p be an n−th degree polynomial with n odd. Show that the equation p(x) = 0 has at least one solution.
Exercise 5.4.5 (?). Let f : R −→ R be continuous on R. Suppose that lim x→±∞ f (x) = 0. Show that if f is not identically 0 then f
has at least one among max f or min f different by 0. Is it always true that both are , 0? Prove or exhibit a counterexample.
Exercise 5.4.6 (?). Let f : R −→ R be continuous on R. Suppose that lim x→±∞ f (x) = +∞. Show that f has global min over R.
Exercise 5.4.7 (?). Let f : [a, b] −→ R be continuous and injective. Show that if f (a) < f (b) then f (a) < f (x) < f (b) for all
x ∈ [a, b]. Is it true or false that f is strictly increasing?
Chapter 6
Differential Calculus
The Differential Caculus may be correctly considered one of major inventions of the human thought. Its huge amount of
applications shows its importance: of course to the entire Mathematics, but also to Physics and all applied science, from
Engineering, Natural Sciences, Economy. Our goal is to introduce to principal concepts and techniques, emphasizing
intuitive sense and geometrical significance.
A classical way to introduce the concept of derivative is through the problem to determine the tangent to a plane curve.
Imagining the curve as the graph of a certain function f : D ⊂ R −→ R, we want to find a method to define the tangent
to the graph at some point (x 0, f (x 0 )). A generic (not vertical) straight line passing by (x 0, f (x 0 )) is described by the
equation
y = m(x − x 0 ) + f (x 0 ), x ∈ R.
So the problem is: how could we compute the angular coefficient m in order that the straight line be tangent to the graph
of f ? Of course we should define the concept of tangence. We will proceed in the following way: we will first find a
candidate then we will check that in a suitable sense is what we are looking for. The geometrical idea is very easy: to
describe a straight line we need two points, so consider a second point along the graph of f , let say (x 0 + h, f (x 0 + h))
with h , 0 (in such a way we have two points≡a unique straight line).
y
Hx0 +h,f Hx0 +hLL
Hx0 ,f Hx0 LL
x0
x0 +h
x
Then, the angular coefficient for this cord is
m=
ordinates variation
f (x 0 + h) − f (x 0 )
=
.
abscissas variation
h
Clearly, such m depends on h and the corresponding line won’t be tangent but it will "cut" the graph as shown in the figure.
However, is natural to think that as h −→ 0 the straight line will move up to a "limit position", that is should be that one
of a tangent line. Therefore, the corresponding slope should be given by
lim
h→0
f (x 0 + h) − f (x 0 )
.
h
93
94
This limit, if it exists, is called derivative of f at point x 0 and it is usually denoted by one of the following notations
df
(x 0 ), (Leibniz).
dx
Working on geometrical intuition we may expect that f 0 > 0 indicates an increasing function while f 0 < 0 a decreasing
function. In other words, to solve the inequality f 0 (x) > 0 should give important information about monotonicity, hence a
method to find min/max for a function f . This property makes differential calculus a powerful tool to study optimization
problems. But not only to this, differential calculus shows several interesting other tools applied to many other problems.
For instance the Taylor formula, an approximation formula for any function through a polynomial, is the base of numeric
automatic calculus.
f 0 (x 0 ), (Newton),
6.1
Definition and first properties
Since now we will say that x 0 is an interior point to D ⊂ R if there exists Ix0 ⊂ D. We will write x 0 ∈ Int(D).
Definition 6.1.1 (derivative). Let f : D ⊂ R −→ R, x 0 ∈ Int(D). We say that f is differentiable at x 0 if
∃ lim
h→0
f (x 0 + h) − f (x 0 )
h
x=x0 +h
≡
lim
x→x0
f (x) − f (x 0 )
=: f 0 (x 0 ) ∈ R.
x − x0
The number f 0 (x 0 ) is called derivative of f at x 0 . The straight line of equation
y = f (x 0 ) + f 0 (x 0 )(x − x 0 ),
is called tangent to f at (x 0, f (x 0 )).
As we said in the introduction, once we have a definition of tangent straight line we have to find a method to check that it
is indeed tangent. The qualitative geometrical idea is to call tangent a straight line such that it approximates better than
any other the graph of f . First of all, for a generic non vertical straight line passing through (x 0, f (x 0 ))
y = m(x − x 0 ) + f (x 0 ),
we define the gap between the function and the straight line by
ε(x) := f (x) − (m(x − x 0 ) + f (x 0 )) .
We may expect that ε(x) −→ 0 as x −→ x 0 . The point is that the error is smaller when m = f 0 (x 0 ). The precise sense is
Theorem 6.1.2. f is differentiable at x 0 iff there exists m ∈ R such that
f (x) − (m(x − x 0 ) + f (x 0 )) = o(x − x 0 ).
In such a case m = f 0 (x 0 ). In particular, if f is differentiable at x 0 the Taylor formula holds
f (x) = f (x 0 ) + f 0 (x 0 )(x − x 0 ) + o(x − x 0 ).
Proof. — Notice that
f (x) − (m(x − x 0 ) + f (x 0 )) = o(x − x 0 ), ⇐⇒
But
f (x) − (m(x − x 0 ) + f (x 0 ))
−→ 0.
x − x0
f (x) − (m(x − x 0 ) + f (x 0 ))
f (x) − f (x 0 )
f (x) − f (x 0 )
=
− m −→ 0, ⇐⇒ ∃ lim
= m,
x→x
x − x0
x − x0
x − x0
0
that is iff f is differentiable and m = f 0 (x 0 ).
(6.1.1)
95
y
f HxL-Hf Hx0 L+f 'Hx0 LHx-x0 LL=oHx-x0 L
x0
x
x
An easy remark is the following
Proposition 6.1.3.
f differentiable at x 0, =⇒ f continuous at x 0 .
Proof. — Easy: by Taylor formula
x−→x0
f (x) = f (x 0 ) + f 0 (x 0 )(x − x 0 ) + o(x − x 0 ) −→ f (x 0 ).
Warning! 6.1.4. Although all the advices, there’s nothing to do: there’s always someone that believe that continuity
implies differentiability and not vice versa. This is wrong! A classical example is the function f (x) := |x|. It is easy
to check that @ f 0 (0).
y
x
Indeed,
lim
h→0
that it doesn’t exists because
f (h) − f (0)
|h|
= lim
,
h
h→0 h
h
|h|
−h
|h|
= lim
= 1, lim
= lim
= −1.
h
h
h
h→0+
h→0−
h→0− h
h→0+
lim
This example shows the utility to define the unilateral derivative.
Definition 6.1.5. Let f : D ⊂ R −→ R, x 0 ∈ Int(D ∩ [x 0, +∞[). We say that f is differentiable from the right at x 0 if
∃ lim
h→0+
f (x 0 + h) − f (x 0 )
=: f +0 (x 0 ) ∈ R.
h
Similarly is defined the left derivative f −0 (x 0 ).
Proposition 6.1.6. A function f is differentiable at x 0 iff it is differentiable from the left and from the right at x 0 and
f −0 (x 0 ) = f +0 (x 0 ) = f 0 (x 0 ).
Proof. — Evident.
96
6.2
Derivatives of elementary functions
Basically, elementary functions are differentiable where defined. We say that a function f is differentiable on a set D if it
is differentiable at any point x ∈ D. The function
f 0 : D ⊂ R −→ R
is called derivative. For elementary functions the discussion of differentiability depends on the fundamental limits we
have seen at Section 4.5.
Proposition 6.2.1. We have
• constant functions are differentiable on R with null derivative.
• e x is differentiable on R and (e x ) 0 = e x .
• log x is differentiable on ]0, +∞[ and log0 (x) = x1 .
• sin x and cos x are differentiable R and sin0 (x) = cos x while cos0 (x) = sin x.
• sinh x and cosh are differentiable R and sinh0 (x) = cosh x while cosh0 = sinh.
• x n , n ∈ N is differentiable on R and (x n ) 0 = nx n−1 .
• x m , m ∈ Z, m < 0 is differentiable on R\{0} and (x m ) 0 = mx m−1 .
• x α , α ∈ R, is differentiable on ]0, +∞[ (and also at 0 from the right if α > 1) and (x α ) 0 = αx α−1 .
• |x| is differentiable on R\{0} and |x| 0 = sgnx.
Proof. — Let f (x) ≡ C. Then
lim
h→0
C −C
0
f (x + h) − f (x)
= lim
= lim = 0.
h
h
h→0
h→0 h
Passing to the exponential,
e x+h − e x
eh − 1
= e x lim
= ex .
h
h
h→0
h→0
lim
For the logarithm we have
log x 1 + hx − log x
log 1 + hx 1
log(x + h) − log x
1
lim
= lim
= lim
= .
h
h
h/x
x
x
h→0
h→0
h→0
Let’s pass to trigonometric functions. For sine we have
!
sin(x + h) − sin x
sin x cos h + sin h cos x − sin x
cos h − 1
sin h
= lim
= lim sin x
h
+
cos
x
= cos x.
h
h
h
h→0
h→0
h→0
h2
lim
A similar computation shows that cos 0 = − sin. Also sinh and cosh works similarly. For powers let’s start with n ∈ N:
n
1 X
(x + h) n − x n
= *.
h
h
,k=0
n
k
!
n
1X
x n−k hk − x n +/ =
h
k=1
-
n
k
!
x n−k hk =
As k > 2 se have hk−1 −→ 0 as h −→ 0. So
(x + h) n − x n
=
h
h→0
lim
n
1
!
x n−1 = nx n−1 .
n
X
k=1
n
k
!
x n−k hk−1 .
97
If m ∈ Z, m = −n with n > 0, then, for x , 0,
(x + h) m − x m
h
=
!
1
1
1
1 x n − (x + h) n
1
x n − (x + h) n
−
=
=
n
n
n
n
n
n
h (x + h)
x
h (x + h) x
(x + h) x
h
1 −→ 2n −nx n−1 = −nx −n−1 = mx m−1 .
x
More complex the case of real exponent. Taking x > 0
α
1 + hx − 1 t= hx , h=t x
(x + h) α − x α
(1 + t) α − 1
lim
= lim x α
=
x α lim
= x α−1 α.
h
h
tx
t→0
h→0
h→0
If x = 0 and α > 1 then the incremental ratio becomes
hα
= lim hα−1 = 0,
h→0+
h→0+ h
lim
that confirm the rule αx α−1 for x = 0. Finally, for modulus it is an easy exercise.
The derivative of tan, arcsin, arccos, arctan, arcsinh and arccosh will be compute later.
6.3
Rules of calculus
We need efficient rules to compute derivatives of complex functions composed either by algebraic operation or compositions
by elementary functions. We start by algebraic rules:
Proposition 6.3.1. Let f , g be differentiable at x 0 . Then
i) f ± g is differentiable at x 0 and ( f ± g) 0 (x 0 ) = f 0 (x 0 ) ± g 0 (x 0 ).
ii) f · g is differentiable at x 0 and ( f · g) 0 (x 0 ) = f 0 (x 0 )g(x 0 ) + f (x 0 )g 0 (x 0 ).
0
0
0
0 )− f (x0 )g (x0 )
iii) if g(x 0 ) , 0 then f /g is differentiable at x 0 and gf (x 0 ) = f (x0 )g(xg(x
.
)2
0
In particular we have the formulas
(α f + βg) (x 0 ) = α f (x 0 ) + βg (x 0 ), (linearity),
0
0
0
1
g
!0
(x 0 ) = −
g 0 (x 0 )
, (if g(x 0 ) , 0).
g(x 0 ) 2
Proof. — The proof is easy. For the sum we have:
( f + g)(x 0 + h) − ( f + g)(x 0 )
h
=
f (x 0 + h) + g(x 0 + h) − f (x 0 ) + g(x 0 )
h
=
f (x 0 + h) − f (x 0 ) g(x 0 + h) − g(x 0 )
+
−→ f 0 (x 0 ) + g 0 (x 0 ).
h
h
For the product we have
( f · g)(x 0 + h) − ( f · g)(x 0 )
h
=
f (x 0 + h)g(x 0 + h) − f (x 0 )g(x 0 )
h
g(x 0 + h) − g(x 0 )
f (x 0 + h) − f (x 0 )
g(x 0 + h) + f (x 0 )
−→ f 0 (x 0 )g(x 0 ) + f (x 0 )g 0 (x 0 ).
h
h
In the last passage we used g(x 0 + h) −→ g(x 0 ) because g is continuous being differentiable. Similarly works the formula for the ratio.
The particular cases are immediate by the general formulas.
=
98
Let’s pass now to the composition. Leibniz’s notations suggest the rule:
g( f (x)) 0 =
d(g( f ))
dg d f
=
= g 0 ( f (x)) f 0 (x).
dx
d f dx
On course this is nothing but a formal suggestion, but the rule is true!
Theorem 6.3.2 (chain rule). Assume that ∃ f 0 (x 0 ) and ∃ g 0 ( f (x 0 )). Then
∃ (g ◦ f ) 0 (x 0 ) = g 0 ( f (x 0 )) f 0 (x 0 ).
(6.3.1)
Proof. — A natural computations would be
g( f (x 0 + h)) − g( f (x 0 ))
g( f (x 0 + h)) + g( f (x 0 )) f (x 0 + h) − f (x 0 )
g ◦ f (x 0 + h) − g ◦ f (x 0 )
=
=
.
h
h
f (x 0 + h) − f (x 0 )
h
To be a true identity we need f (x 0 + h) − f (x 0 ) , 0 because if f (x 0 + h) − f (x 0 ) = 0 the identity doesn’t make sense. However, in this
g( f (x0 +h))+g( f (x0 ))
f (x0 +h)− f (x0 )
= 0 and even if
doesn’t make sense, it seems
case we could notice that the left hand side is 0, that
h
f (x0 +h)− f (x0 )
reasonable that its natural value should be g 0 ( f (x 0 )). To make rigorous this argument, let’s introduce the auxiliary function
g( f (x 0 + h)) + g( f (x 0 ))


,



f (x 0 + h) − f (x 0 )

φ(h) := 


 0
 g ( f (x 0 )),
f (x 0 + h) − f (x 0 ) , 0,
f (x 0 + h) − f (x 0 ) = 0,
Then
g ◦ f (x 0 + h) − g ◦ f (x 0 )
f (x 0 + h) − f (x 0 )
= φ(h)
, ∀h , 0.
h
h
0
To finish, just notice that φ(h) −→ g ( f (x 0 )), and by this (6.3.1) easily follows.
Remark 6.3.3 (Important!). Be careful: the chain rule is a sufficient condition in order that g ◦ f is differentiable. The
composition may be differentiable even if the components are not! For instance: consider the composition
x 7−→ |x| 7−→ |x| 2, that is f (x) = |x|, g(y) = y 2 .
Then g ◦ f (x) = |x| 2 = x 2 which is clearly differentiable on R. Nevertheless, hypothese for the chain rule are not fulfilled. Indeed f is
not differentiable at 0.
√
Example 6.3.4. Discuss differentiability and compute the derivative of h(x) := 1 + sin x.
Sol. — The domain is {x ∈ R : 1 + sin x > 0} = R. The function is a composition of two functions
f
g √
√
x 7−→ 1 + sin x 7−→ 1 + sin x, f (x) = 1 + sin x, g(y) = y.
By the chain rule, if i) f is differentiable at x and ii) g is differentiable at f (x), by the chain rule it follows that g ◦ f is differentiable
at x. Now: i) f is differentiable at any x ∈ R (sum of functions differentiable on R); ii) g is differentiable at f (x) iff f (x) > 0, that is
1 + sin x > 0 or sin x > −1. Now: sin > −1 and sin x = −1 iff x = 23 π + k2π, k ∈ Z. It follows that we can apply the chain rule on all
x ∈ R\{ 23 π + k2π : k ∈ Z}. Moreover, being
1
f 0 (x) = cos x, g 0 (y) = √ ,
2 y
it follows that
(
)
1
3
∃ (g ◦ f ) 0 (x) = √
cos x, ∀x ∈ R\
π + k2π : k ∈ Z .
2
2 1 + sin x
99
What about points 23 π + k2π? Here the chain rule doesn’t apply, but as the previous remark shows it could be (in principle) possible
that the function is differentiable. Let’s consider for instance the point x 0 = 32 π. We have to compute
q
√
1 + sin 32 π + h
f 23 π + h − f 23 π
1 − cos h
lim
= lim
= lim
.
h
h
h
h→0
h→0
h→0
For instance
r
r
√
√
1 − cos h
1 − cos h
1 − cos h
1 − cos h
1
1
=
,
lim
= −√
= lim
=
−
lim
√
h
h
h→0+
h→0+
h→0−
h2
h2
2 h→0−
2
lim
so ∃h+0 ( 23 π) = √1 while ∃h−0 ( 32 π) = − √1 .
2
6.4
2
Fundamental theorems of Differential Calculus
In this Section we will present the most important theorems of Differential Calculus: Fermat, Rolle, Lagrange and Cauchy.
Actually, the last two are just corollaries of the second which depends strongly on the first. Nevertheless each of them has
an his own specificity and their consequences will be the content of the rest of the Chapter and more.
Plotting a graph of a smooth function (that is with tangent in any point of the domain) we recognize the following property:
at any minimum or maximum the tangent is horizontal, that is f 0 (x) = 0. Actually this happens only if the extreme point
is in the interior of the domain and regards not only global min/max but also local points: let’s first introduce this concept.
Definition 6.4.1. Let f : D ⊂ R −→ R. A point x min ∈ D is called local minimum for f on D if
∃ Ixmi n : f (x min ) 6 f (x), ∀x ∈ D ∩ Ixmi n .
Similarly is defined a local maximum. Local min/max are called local extreme points.
Of course a global extreme is also local, but not vice versa.
Theorem 6.4.2 (Fermat). Let f : D ⊂ R −→ R, x 0 ∈ Int(D) be a local extreme point. If f is differentiable at x 0 then
f 0 (x 0 ) = 0.
Proof. — Consider the case of a local minimum x min ∈ Int(D). Because x 0 ∈ Int(D) we may assume that
∃ Ix mi n ⊂ D : f (x min ) 6 f (x), ∀x ∈ Ix mi n =]x min − r, x min + r[
Then,
if |h| < r, =⇒ f (x min + h) > f (x min ), ⇐⇒ f (x min + h) − f (x min ) > 0.
Therefore
f (x min + h) − f (x min )

> 0, 



h


 =⇒ f 0 (x min ) = 0.



f
(x
+
h)
−
f
(x
)

min
min

f 0 (x min ) = f +0 (x min ) = lim
6 0, 
h

h→0+
f 0 (x min ) = f −0 (x min ) = lim
h→0−
Remark 6.4.3. If x min is not in the interior of D things are slightly different. For instance: if D = [a, b] and we have a
minimum at x = a then f +0 (a) > 0, while if the minimum is at x = b we have f −0 (b) 6 0.
Definition 6.4.4. Values x 0 such that f 0 (x 0 ) = 0 are called stationary points for f .
100
y
I x0
a
Ξ
Ξ
b
x
Figure 6.1: Lagrange theorem.
Remark 6.4.5. So Fermat Thm says: any local extreme point in the interior of the domain of a differentiable function is
a stationary point. But, notice, a stationary point is not necessarily a local extreme point. Take f (x) = x 3 and consider the
point x = 0. We have f 0 (x) = 3x 2 so f 0 (0) = 0, but 0 is not a local extreme point.
The Rolle theorem express an intuitive fact: if a smooth function f on an interval has equal values at the extremes, then
somewhere there’s a point with horizontal tangent.
Theorem 6.4.6 (Rolle). Let f : [a, b] −→ R be continuous on [a, b] and differentiable on ]a, b[ ( 1 ) . Then
if f (a) = f (b), =⇒ ∃ξ ∈]a, b[ : f 0 (ξ) = 0.
Proof. — Intuition suggests that f should have at least one of two global extreme in ]a, b[. Let’s see this: because f ∈ C ([a, b]), by
Weierstrass thm there’re global min/max. Let’s call them x min and x max . If f (x min ) = f (x max ) it means that f is constant, and then
f 0 (ξ) = 0 for any ξ ∈]a, b[. Otherwise, if f (x min ) < f (x max ) at least one among x min, x max will lie in ]a, b[ (being f (a) = f (b)).
That is: one of the extreme points is in the interior of [a, b]. By Fermat Thm we conclude that in that point f 0 = 0.
Consider now a smooth function f : [a, b] −→ R. If you connect the initial and the final points of the graph, that is
(a, f (a)) and (b, f (b)), with a straight line there’s at least a tangent to the graph parallel to that line.
Theorem 6.4.7 (Lagrange). Let f : [a, b] −→ R be continuous on [a, b] and differentiable on ]a, b[. Then
∃ ξ ∈]a, b[ :
Equivalently:
f (b) − f (a)
= f 0 (ξ).
b−a
∃ ξ ∈]a, b[ : f (b) − f (a) = f 0 (ξ)(b − a).
(6.4.1)
(6.4.2)
This last is called finite increment formula.
Proof. — It’s just a consequence of the Rolle theorem. Indeed, let’s "lower" f to make extreme points at the same quote. To this aim
consider the auxiliary function
h : [a, b] −→ R, h(x) := f (x) −
f (b) − f (a)
(x − a), x ∈ [a, b].
b−a
Clearly h is continuous on [a, b] and differentiable on ]a, b[. Moreover
h(a) = f (a), h(b) = f (b) −
f (b) − f (a)
(b − a) = f (b) − ( f (b) − f (a)) = f (a).
b−a
Therefore, by Rolle thm, there exists ξ ∈]a, b[ such that h 0 (ξ) = 0. But
f (b) − f (a)
0 = h 0 (ξ) = f 0 (ξ) −
, ⇐⇒
b−a
f (b) − f (a)
= f 0 (ξ).
b−a
1This means: it is not required that f is differentiable at a and b. Of course, if it is, it is better!
101
As immediate consequence of the Lagrange formula we have an often very useful test for the existence of the derivative.
We will state it for the right derivative, it holds of course for the left derivative and, combining the two version, for the
derivative.
Proposition 6.4.8. Let f : D ⊂ R −→ R, x 0 ∈ D be such that [x 0, x 0 + r] ⊂ D for some r > 0. Assume that f is right
continuous at x 0 and differentiable for x > x 0 .
` ∈ R, =⇒ ∃ f +0 (x 0 ) = `.




Suppose ∃ lim f (x) =: ` ∈ R ∪ {±∞}, =⇒ 
x→x0 +
 ` = ±∞, =⇒ @ f 0 (x ),
+ 0

0
Proof. — Recall that
f (x 0 + h) − f (x 0 )
.
h
h→0+
Now, applying Lagrange thm to f on the interval [x 0, x 0 + h] (the hypotheses are fulfilled) there exists ξh ∈]x 0, x 0 + h[ such that
f +0 (x 0 ) = lim
f (x 0 + h) − f (x 0 )
= f 0 (ξh ).
h
Being x 0 < ξh < x 0 + h we have ξh −→ x 0 + by two policemen, so f (ξh ) −→ ` and this means
lim
h→0+
f (x 0 + h) − f (x 0 )
= lim f 0 (ξh ) = `.
h
h→0+
Warning! 6.4.9. As we said in the statement, nothing can be said in the case the limit limx→x0 + f 0 (x) doesn’t exist. This
means, precisely: this limit could not exists, but the right derivative could exists. Consider the function

x 2 sin x1 ,



f (x) := 

 0,
x , 0,
x = 0.
Esasily (bounded×null) we have f continuous at x = 0. Moreover
h2 sin h1
1
f (h) − f (0)
= lim
= lim h sin = 0,
h
h
h
h→0
h→0
h→0
f 0 (0) = lim
always by the same rule. On the other side
!
1
1
1
lim f 0 (x) = lim 2x sin − cos
= − lim cos ,
x
x
x
x→0
x→0
x→0
doesn’t exist (even unilateral)!
Example 6.4.10. Let f : R\{0} −→ R defined as

a arctan x1 + (b + 1) log(1 − x),



f (x) := 


 sinh bx
− a cos √1x ,
x 2 +1

x < 0,
x > 0.
Are there values a, b ∈ R such that f is extends continuously at x = 0? Is the extension differentiable at x = 0?
Sol. — f extends continuously at x = 0 iff f (0−) = f (0+), the common value defining also the value to be given to the extension of
f . We have
!
1
π
f (0−) = lim a arctan + (b + 1) log(1 − x) = −a ,
x
2
x→0−
102
while, being lim x→0+ sinh xbx
2 +1 = 0, we have
!
1
bx
− a cos √ , ⇐⇒
∃ lim sinh 2
x→0+
x
x +1
a = 0,
because @ lim x→0+ cos √1 . In such case f (0+) = f (0−) = 0, so f is extendable continuously at x = 0 for any b ∈ R. So we have the
x
unique possible extension is
(b + 1) log(1 − x), x < 0,








0,
x = 0,
f (x) = 







bx
x > 0,
 sinh x 2 +1 ,
As x , 0 we have
b+1 ,

x < 0,
− 1−x




0

f (x) =  b(x 2 +1)−2bx 2


 cosh bx
, x > 0.
x 2 +1
(x 2 +1) 2

By this it follows that
!
bx
b(x 2 + 1) − 2bx 2
b+1
= −b − 1, f 0 (0+) = lim cosh 2
= b.
f 0 (0−) = lim −
x→0+
x→0− 1 − x
x +1
(x 2 + 1) 2
Therefore, applying the test for the existence of the derivative, ∃ f −0 (0) = −b − 1 and ∃ f +0 (0) = b. But then ∃ f 0 (0) iff f −0 (0) = f + (0).
This means −b − 1 = b, that is b = − 12 .
The Cauchy thm is just a generalization of the Lagrange one.
Theorem 6.4.11 (Cauchy). Let f , g : [a, b] −→ R be continuous on [a, b] and differentiable on ]a, b[, with g(a) , g(b)
and g 0 (x) , 0 for any x ∈]a, b[. Then
∃ ξ ∈]a, b[ :
f (b) − f (a)
f 0 (ξ)
= 0
.
g(b) − g(a)
g (ξ)
(6.4.3)
f (b)− f (a)
Proof. — The proof follows applying Rolle’s thm to h(x) := f (x) − g(b)−g(a) (g(x) − g(a)). Exercise.
6.5
Hôpital’s rules
As we know, a typical problem computing limits is to compute
lim
x→x0
f (x)
,
g(x)
with an indeterminate form 00 or ∞
∞ . Hôpital’s rules are a very useful tool to treat this type of limits, transforming them
∞
into other and (hopefully!) easier limits. There’re four rules ( 00 at finite or at infinity, ∞
at finite or infinity), basically with
the same philosophy, but with different proofs.
Theorem 6.5.1 ( 00 at finite). Let f , g : D ⊂ R −→ R, null at x 0 ). Suppose moreover that
i) f , g are differentiable for x , x 0 ;
103
ii) g(x), g 0 (x) , 0 for x , x 0 ;
iii) there exists
lim
x→x0
f 0 (x)
= ` ∈ R ∪ {±∞}.
g 0 (x)
Then
∃ lim
x→x0
f (x)
= `.
g(x)
Proof. — Let’s do the proof for the limit as x −→ x 0 +. Being f , g null at x 0 we can extend both at x 0 by continuity (assigning value
0). Therefore
f (x)
f (x) − f (x 0 )
=
.
g(x)
g(x) − g(x 0 )
Let’s apply the Cauchy thm (our assumptions on f , g assure that we can do it): there exists ξ x ∈]x 0, x[ such that
f (x) − f (x 0 ) Cauchy 6.4.11 f 0 (ξ x )
=
.
g(x) − g(x 0 )
g 0 (ξ x )
As x −→ x 0 +, by two policemen thm, ξ x −→ x 0 +. Therefore, by iii),
lim
x→x0 +
f (x)
f 0 (ξ x )
= lim
= `.
g(x) x→x0 + g 0 (ξ x )
Example 6.5.2. Compute
lim
x→0
sinh x − x
.
sin(x 3 )
Sol. — Let f (x) := sinh x − x, g(x) := sin(x 3 ). Clearly f and g null at 0, are differentiables on R and because
g 0 (x) = 3x 2 cos(x 3 ),
we have g 0 = 0 iff x = 0 or x 3 = π2 + kπ, k ∈ Z. In particular g 0 , 0in a neighborhood of 0 without x = 0, that is ii) holds. Moreover
f 0 (x)
cosh x − 1
1
cosh x − 1 1 1 1
lim 0
= lim
= lim
= · = .
3 2 6
x→0 g (x)
x→0 3x 2 cos(x 3 )
x→0 3 cos(x 3 )
x2
Therefore, by first Hôpital’s rule, the proposed limit exists and has value 61 .
Remark 6.5.3. In practical applications, Hôpital’s rules are applied according the following notation:
f (x) H
f 0 (x)
.
= lim 0
x→x0 g(x)
x→x0 g (x)
lim
f0
This means that equality holds iff hypotheses of the rule holds. Never forget that the existence for the limit g0 implies the existence
f
f0
f
(with same value) for the limit g but not vice versa: the limit g0 couldn’t exists while the limit g exists! For instance:
1
x 2 sin x1
x x 2 sin x
1
= lim
= lim 1 x · x sin = 0
x
x
x→0 sin x
x→0 sin x
x→0
by the bounded×null rule. Applying the Hôpital’s rule,
lim
x 2 sin x1 H
2x sin x1 − cos x1
1
= lim
= − lim cos ,
cos x
x
x→0 sin x
x→0
x→0
that doesn’t exist! This, of course, it doesn’t contradict the Hôpital thm because this gives just a sufficient condition in order that
f (x)
f 0 (x)
lim x→x0 g(x) exists through existence of lim x→x0 g0 (x) .
lim
104
Hôpital’s rules may be iterated in their application.
Example 6.5.4. Compute
!
1
1
−
.
lim
x→0 log x
x−1
Sol. — Clearly
!
1
x − 1 − log x
1
= lim
−
x−1
x→0 log x
x→0 (x − 1) log x
lim
where we recognize a form 00 . By the first Hôpital’s rule,
0
1 − x1
1
x − 1 − log x H
x−1
1
1
0, H
= lim
= lim
= lim
= lim
= .
1
2
x→0 log x + 1 + 1
x→0 (x − 1) log x
x→0 log x + (x − 1)
x→0 x log x + x − 1
x→0 2 + log x
x
lim
Theorem 6.5.5 ( 00 at infinity). Let f , g differentiable on [a, +∞[, null at +∞ with g, g 0 , 0. Then
∃ lim
x→+∞
f 0 (x)
f (x)
= ` ∈ R ∪ {±∞}, =⇒ ∃ lim
= `.
x→+∞ g(x)
g 0 (x)
Proof. — We will reduce to the first rule:
f y1
f (x) y:= x1 , x→+∞, y→0+
F (y)
=: lim
lim
=
lim
,
x→+∞ g(x)
y→0+ g 1
y→0+ G(y)
y
where F (y) := f y1 , G(y) := g y1 . Let’s apply the first rule to the couple F, G: both are null at 0+, differentiable on ]0, a1 [ and being
!
1
1
, 0, ∀y ∈]0, 1/a[.
G 0 (y) = − 2 g 0
y
y
Moreover
− y12 f 0 y1
f 0 y1 x= y1
F 0 (y)
= lim
= lim
lim
= lim
x→+∞
y→0+ G 0 (y)
y→0+ − 1 g 0 1
y→0+ g 0 1
y
y
y2
by hypothesis: the conclusion now follows.
f 0 (x)
= `,
g 0 (x)
0
Theorem 6.5.6 ( ∞
∞ ). Let f , g be infinite functions at x 0 , differentiable in a neighborhood of x 0 with g, g , 0. Then
∃ lim
x→x0 +
f 0 (x)
f (x)
= ` ∈ R ∪ {±∞}, =⇒ ∃ lim
= `.
x→x0 + g(x)
g 0 (x)
Proof. — Omitted.
Warning! 6.5.7. Differently to what it could seems, Hôpital’s rules should never be applied in a blind mechanical way
because they could arise a useless never ending iteration. For instance, consider
e−1/x
x
x→0+
lim
0
0,
H
=
1
−1/x
2e
lim x
1
x→0+
e−1/x
x→0+ x 2
= lim
On the other hand
1
e−1/x
x
= lim 1/x
x
x→0+
x→0+ e
lim
∞
∞,
H
=
lim
0
0,
H
=
1
−1/x
2e
lim x 2
x→0+
x
− x12
x→0+ − 1 e1/x
x2
= lim
e−1/x
= ...
x→0+ x 4
= lim
1
x→0+ e1/x
1
+∞
= 0.
105
6.6
Derivative and monotonicity
Intuitively f % iff f 0 > 0. With some suitable prescriptions this is actually what happens and the interesting part (that is
f 0 > 0 implies f %) it is an immediate consequence of the Lagrange finite increment formula.
Theorem 6.6.1. Let f : I ⊂ R −→ R be differentiable on I interval. Then
f % ( f &) on I ⇐⇒ f 0 > 0, ( f 0 6 0) on I.
Moreover, if f 0 > 0 (or f 0 < 0) on I the monotonicity is strict.
Proof. — =⇒ Suppose f (x) 6 f (y) for any x, y ∈ I, x 6 y. Then
f (x + h) > f (x), ∀h > 0, =⇒ f 0 (x) = f +0 (x) = lim
h→0+
f (x + h) − f (x)
> 0,
h
f (x + h) − f (x)
>0
f (x + h) 6 f (x), ∀h < 0, =⇒ f 0 (x) = f −0 (x) = lim
h
h→0−
by permanence of sign.
⇐= Suppose f 0 > 0 on I and take x, y ∈ I, x < y. The interval [x, y] ⊂ I (because I is interval itself). Being f differentiable on I
it is continuous. In other words, f is continuous on [x, y] and differentiable on ]x, y[: applying the Lagrange finite increment formula
(6.4.2) on [x, y], there exists ξ ∈]x, y[ such that
f (y) − f (x) = f 0 (ξ)(y − x) > 0.
This means f (x) 6 f (y) and because x 6 y are arbitrary, we proved f %. If f 0 > 0 on I then f 0 (ξ) > 0 and we obtain f (x) < f (y),
that is f % strictly.
Remark 6.6.2. Be careful! f could be strictly increasing without f 0 > 0. For instance: take f (x) = x 3 . This is a clearly
strictly increasing function on R but f 0 (0) = 0.
In particular we have the
Corollary 6.6.3. Let f : I −→ R, be differentiable on I\{x 0 } interval, x 0 ∈ Int(I), continuous at x 0 . Suppose that
f 0 (x) 6 0, x < x 0, f 0 (x) > 0, x > x 0, =⇒ x 0 is a minimum for f on I.
Similar statement for maximum.
Remark 6.6.4. It is not required that ∃ f 0 (x 0√). This is to include cases where the function has an angle point (like
modulus at x = 0) or a cusp (like the function |x| at x = 0) that are clearly minimum (or maximum with other examples)
points. Of course: if ∃ f 0 (x 0 ), never mind!
Proof. — If f 0 6 0 on I∩] − ∞, x 0 [ (which is of course an interval) then, by previous thm, f & on it. Therefore
f (x) > f (y), ∀ x < y < x 0, =⇒ f (x) > lim f (x) = f (x 0 ), by continuity at x 0 .
y→x0 −
Similarly, f (x) > f (x 0 ) as x > x 0 . We conclude that
f (x) > f (x 0 ), ∀x ∈ I,
that is x 0 is a minimum point for f on I.
106
Figure 6.2: Angle point (left) and cusp (right).
We finish with a curiosity. At 99% of times, intuition works. But there’s still that 1% of subtleties which makes our
apparently clear theory something at all trivial. The question is the following: suppose that f 0 (x 0 ) > 0. The intuition
suggest that f % at least on a small neighborhood of x 0 . This is false!
Example 6.6.5. Let

x + 2x 2 cos x1 , x , 0,


f (x) := 


x = 0.
 0,
0
Check that ∃ f (0) = 1 but f is not monotone in any neighborhood of 0.
Sol. — Easily f ∈ C (R) and differentiable clearly as x , 0. For x = 0 we may notice that
!
f (h) − f (0)
1
f 0 (0) = lim
= lim 1 + 2h cos
= 1.
h
h
h→0
h→0
As x , 0,
1
1
− 2 sin .
x
x
Here you notice that as x −→ 0, 4x cos x1 −→ 0 while 2 sin x1 oscillates between −2 and 2 so, reasonably, f 0 will assume always positive
and negative values when x is arbitrarliy close to 0. For instance
f 0 (x) = 1 + 4x cos
!
3,

π


1
f0 π
= 1 − 2 sin
+ kπ = 1 + 2(−1) k = 

2

2 + kπ
 −1,
k even,
k odd.
This shows that we cannot find an I0 where f %, because otherwise it should be f 0 > 0 in I0 but points
any I0 when k is big enough and in those points
6.7
f0
< 0.
π
2
1
+kπ
with k odd belongs to
Differentiable inverse mapping theorem
Recall the continuous inverse mapping thm: any continuous strictly monotone function f : I −→ J with I, J intervals has
a continuous inverse. We want now to replace continuous with differentiable. Let’s first introduce an important class of
functions:
107
Definition 6.7.1. We say that f ∈ C 1 (D) if f is continuous and differentiable on D and f 0 ∈ C (D).
Of course, continuity in the previous definition is just pleonastic because implied by differentiability.
Theorem 6.7.2. Let f ∈ C 1 (I) on I interval with f 0 , 0 on I. Then f is invertible between I and J := f (I), f −1 ∈ C 1 (R)
and
1
( f −1 ) 0 (y) = 0 −1
, ∀y ∈ J.
(6.7.1)
f ( f (y))
Proof. — First: being f 0 ∈ C (I) and f 0 , 0, necessarily f 0 > 0 on I or f 0 < 0 on I. Indeed: if f 0 would assume both positive and
negative values, being continuous by zeroes thm it should take also value 0 somewhere (because we are working on intervals). The
conclusion is that f % stricly or f & strictly. In particular
f : I −→ J := f (I), is strictly monotone continuous function.
By the continuous inverse mapping thm 6.6.1 we have
∃ f −1 : J −→ I, f −1 ∈ C (J).
Let’s compute its derivative. We have to compute
f −1 (y + h) − f −1 (y)
.
h
h→0
lim
Setting
x := f −1 (y), x + k := f −1 (y + h), ⇐⇒ k = f −1 (y + h) − f −1 (y) −→ 0, as h −→ 0
being f −1 continuous. So, changing variable in the limit we get
f −1 (y + h) − f −1 (y)
k
1
1
1
= lim
= lim f (x+k)− f (x) = 0
= 0 −1
.
h
f (x)
h→0
k→0 f (x + k) − f (x)
k→0
f ( f (y))
( f −1 ) 0 (y) = lim
k
By this, in particular,
∈ C (J) (composition of continuous functions) and because f 0 , 0 by hypothesis, it follows immediately
−1
0
−1
that ( f ) ∈ C (J), so f ∈ C 1 (J).
f 0 ◦ f −1
Arcsin, arccos
Let f = sin. On ] − π2 , π2 [ we have sin0 = cos , 0, so f ∈ C 1 (] − π2 , π2 [) with f 0 , 0 on such interval. Therefore, by the
differentiable inverse mapping thm 6.7.2 the inverse, arcsin, is differentiable on J = sin(] − π2 , π2 [) =] − 1, 1[ and
arcsin0 (y) =
1
1
=
.
sin0 (arcsin y) cos(arcsin y)
Now: as y ∈] − 1, 1[, arcsin y ∈] − π2 , π2 [ so cos(arcsin y) > 0. But then, by the remarkable identity for sin and cos,
q
1
1
cos x = 1 − (sin x) 2, =⇒ arcsin0 (y) = p
=p
, ∀y ∈] − 1, 1[.
2
1 − (sin(arcsin y))
1 − y2
Similarly we proceed with arccos. The final result is
arccos0 (y) = − p
1
1 − y2
, ∀y ∈] − 1, 1[.
108
Arctan
Consider tan :=
sin
cos
g
f
: − π2 , π2 −→ R. On such interval tan ∈ C 1 and being
tan0 (x) =
π π
1
cos x cos x + sin x sin x
2
=
≡
1
+
(tan
x)
>
0,
∀x
∈
− ,
,
2 2
(cos x) 2
(cos x) 2
we can apply the differentiable inverse mapping thm: arctan := tan−1 ∈ C 1 and
arctan0 (y) =
1
1
1
=
=
, ∀y ∈ R.
tan0 (arctan y) 1 + (tan(arctan y)) 2 1 + y 2
Arcsinh, arccosh
We could compute derivatives for the function just deriving their analytical expression (5.3.1) and (5.3.2). Alternatively,
recalling that cosh2 − sinh2 = 1
arcsinh 0 (y) =
1
1
1
1
=p
=p
,
=
sinh (arcsinh y) cosh(arcsinh y)
(sinh(arcsinh y)) 2 + 1
y2 + 1
0
where we have chosen the positive root because cosh > 0. Similarly,
arccosh 0 (y) = p
6.8
1
y2 − 1
.
Convexity
Looking at the graph of a a smooth function we may notice another geometrical property that we could call curvature of
the graph. Intuitively it seems clear when the graph has the curvature upward or downward, like parabolas. We want now
to translate into a precise mathematical concept this property. The key is the following remark: when the curvature is
upward, connecting any two points (x, f (x)) and (y, f (y)) on the graph, this one turns out to stay below to the segment.
f HyL - f HxL
Hz,f HxL+
Hz-xLL
y-x
Hz,f HzLL
x
z
y
This property will be called convexity. The analytical definition is the following
Definition 6.8.1. Let f : I ⊂ R −→ R, I interval. We say that f is convex on I if
∀x, y ∈ I, f (z) 6 f (x) +
f (y) − f (x)
(z − x), ∀z ∈ [x, y].
y−x
(6.8.1)
If > holds for any couples x, y ∈ I we say that f is concave on I.
Convexity is a property with a rich geometrical sense. For instance, looking the previous picture, we may notice that for a
convex function the slopes of cords on the graph are increasing moving through the right. Here’s a precise statement:
109
x
z
y
Proposition 6.8.2. Let f : I ⊂ R −→ R, I interval. Then, f is convex on I iff
∀x, z, y ∈ I, x < z < y,
f (y) − f (z)
f (z) − f (x)
6
.
z−x
y−z
(6.8.2)
Proof. — It is not difficult to show that (6.8.1) and (6.8.2) are just the same: taking x < z < y,
f (z) 6 f (x) +
Now, being
we have
that is
f (y) − f (x)
(z − x), ⇐⇒
y−x
f (z) − f (x)
f (y) − f (x)
6
.
z−x
y−x
f (y) − f (x)
f (y) − f (z) + f (z) − f (x)
f (y) − f (z) y − z
f (z) − f (x) z − x
=
=
+
,
y−x
y−x
y−z
y−x
z−x
y−x
f (z) − f (x)
f (y) − f (x)
6
, ⇐⇒
z−x
y−x
f (z) − f (x)
f (y) − f (z) y − z
f (z) − f (x) z − x
6
+
,
z−x
y−z
y−x
z−x
y−x
!
f (z) − f (x)
z−x
f (y) − f (z) y − z
1−
6
, ⇐⇒
z−x
y−x
y−z
y−x
f (z) − f (x)
f (y) − f (z)
6
.
z−x
y−z
Consider now the points x < x + h < y + k < y and look at the following picture.
x
By previous proposition
x+h
y+k
y
f (x + h) − f (x)
f (y + k) − f (x + h)
f (y + k) − f (y)
6
6
.
h
(y + k) − (x + h)
k
Forgetting the middle term and letting h −→ 0+, k −→ 0− we have the
Theorem 6.8.3. Let f : I ⊂ R −→ R, I interval, f convex on I. The
if x < y, and ∃ f +0 (x), f −0 (y), =⇒ f +0 (x) 6 f −0 (y).
In particular:
if f is differentiable on I then f is convex iff f 0 % on I.
Proof. — The first statement has been deduced in the introductory considerations to this thm. By this it follows immediately that if f
is differentiable on I and convex then f 0 %.
110
Let’s prove the vice versa. Assume that f 0 %: to prove convexity we prove that (6.8.2) holds. Fix x < z < y. By Lagrange thm
applied on intervals [x, z] and [z, y] we have
∃ ξ ∈]x, z[ : f (z) − f (x) = f 0 (ξ)(z − x), ∃ η ∈]z, y[ : f (y) − f (z) = f 0 (η)(y − z).
But then
f (z) − f (x)
f (y) − f (z)
= f 0 (ξ) 6 f 0 (η) =
.
z−x
y−z
Remark 6.8.4. Of course: f differentiable is concave on I intervall iff f 0 &.
For differentiable functions we have a further characterization of convexity. Look again at the next picture. You’ll notice
that f convex is above all its tangent straight lines.
y
x
Precisely:
Proposition 6.8.5. Let f : I ⊂ R −→ R be differentiable on I interval. Then f is convex on I iff
f (y) > f (x) + f 0 (x)(y − x), ∀y ∈ I, ∀x ∈ I.
(6.8.3)
Proof. — =⇒ Take the (6.8.2) with points x < x + h < y. We have
f (x + h) − f (x)
f (y) − f (x + h)
6
.
x
y−x−h
Letting h −→ 0+ and recalling that f being differentiable is also continuous, we get
f 0 (x) = f +0 (x) = lim
h→0+
f (x + h) − f (x)
f (y) − f (x + h)
f (y) − f (x)
6 lim
=
,
h
y−x−h
y−x
h→0+
that is nothing but the (6.8.3).
⇐= Let’s prove the (6.8.2). Let x < z < y. By assumption (x − z < 0)
f (x) > f (z) + f 0 (z)(x − z),
f (y) > f (z) + f 0 (z)(y − z),
=⇒
f (x) − f (z)
f (y) − f (z)
6 f 0 (z) 6
,
x−z
y−z
that is, again, the (6.8.2).
How can we practically check if f is convex? Assuming f differentiable, we have seen that
f convex ⇐⇒ f 0 % on I.
Thinking to f 0 as a new function, if this function would be again differentiable on intervals we would have f 0 % iff
( f 0 ) 0 > 0. The derivative of the derivative will be called second derivative. To have a sense at some point x 0 it is necessary
that f 0 be defined at least in a neighborhood of x 0 (recall that we defined the derivative of a function that has to be in the
interior of the domain).
111
Definition 6.8.6 (second derivative). Let f : D ⊂ R −→ R be a differentiable function on some Ix0 ⊂ D in such a way
that the derivative f 0 : Ix0 −→ R. We say that f is two times differentiable at x 0 if
∃ ( f 0 ) 0 (x 0 ) =: f 00 (x 0 ). (called second derivative at x 0 ).
The combination of the thms 6.6.1 and 6.8.3 produces
Theorem 6.8.7. Let f : I ⊂ R −→ R be a two times differentiable function on I interval. Then
f is convex on I ⇐⇒ f 00 > 0 on I.
Proof. — Evident.
Definition 6.8.8 (flex point). Let f : I ⊂ R −→ R be differentiable on I interval. If f is convex (or concave) on x < x 0
and concave (convex) on x > x 0 we say that x 0 is a flex point for f .
Corollary 6.8.9. If f is two times differentiable on I interval and changes sign at point x 0 , then x 0 is a flex point.
6.9
Plot of the graph of a function
A classical application of Differential Calculus is to the problem to plot a graph of a given function. It is in general
a complex problem done by several steps that involve lots of different operations. The major difficulty is then to get a
coherent result. Usually we proceed by the following steps:
• preliminaries — with this we intend: to find the domain of the function, sign (but not always it is possible to solve
the inequality f > 0), the behavior at the extremes of the domain (limits and asymptotes, straight lines that will be
introduced in a moment), continuity and eventual points where the function is extendable with continuity;
• studying the first derivative — differentiability, behavior of the derivative at the extremes of its domain of definition
(it is better to remind: even if the analytic expression could be defined on a set bigger than the domain of the initial
function, the domain of the derivative is always a subset of the domain of the function! Just to be concrete:
log0 (x) = x1 it doesn’t mean that log0 (−1) makes sense!); hence the sign of the derivative, monotonicity of the
function and eventual local/global extreme points;
• studying the first derivative — differentiability of f 0, sign of f 00, convexity or concavity and eventual flexes.
Once we have all this informations we can plot a graph to have a qualitative idea of how is done f . We taught about
asymptotes: these are eventual straight lines that resemble the function. There’re three type of asymptotes: vertical,
horizontal and oblique:
Figure 6.3: Vertical, horizontal and oblique asymptotes.
112
• if for some x 0 ∈ R we have
lim f (x) = ±∞, (even unilateral).
x→x0
f becomes vertical at x 0 , similarly to x = x 0 . We say that x = x 0 is a vertical asymptote for f .
• if
lim f (x) = ` ∈ R,
x→±∞
f becomes horizontal at ±∞, like y = ` at ±∞. We say that y = ` is an horizontal asymptote.
• if
lim f (x) = ±∞,
x→±∞
f may be similar to y = mx + q with m ∈ R\{0} and q ∈ R. Precisely, we say that y = mx + q is an oblique
asymptote if
lim f (x) − (mx + q) = 0.
x→+∞
How can be found, in this case, m and q? Notice that
f (x) − (mx + q) = 0, =⇒
lim
x→±∞
lim
x→±∞
f (x) − (mx + q)
= 0.
x
But
f (x) − (mx + q)
f (x)
f (x)
= lim
− m, ⇐⇒ m = lim
.
x→±∞
x→±∞
x
x
x
Such limit must be , 0, otherwise lim x→±∞ f (x) − (mx + q) = lim x→±∞ ( f (x) − q) = ±∞. Once we found m we can find q
0 = lim
x→±∞
q = lim
x→±∞
f (x) − mx .
Therefore, y = mx + q is an oblique asymptote ±∞ iff
∃ lim
x→±∞
Example 6.9.1. Given the function
f (x)
=: m ∈ R\{0}, ∃ lim f (x) − mx =: q ∈ R.
x→±∞
x
(6.9.1)
f (x) = arctan(e x − 1) + log |e x − 4|
find its domain, the behavior at the extremes of the domain (limits and eventual asymptotes), continuity, differentiability,
monotonicity, eventual extreme points and plot a qualitative graph.
Sol. — Domain: clearly D( f ) = {x ∈ R : e x − 4 , 0} = {x , log 4} =] − ∞, log 4[∪] log 4, +∞[.
Limits and asymptotes: We have to check the behavior at ±∞ and log 4±.
f (−∞) = lim
x→−∞
π
arctan(e x − 1) + log |e x − 4| = arctan(−1) + log | − 4| = − + log 4,
4
by which we see also that the line y = − π4 + log 4 is horizontal asymptote at −∞. At +∞ easily we have f (+∞) = +∞, so it there may
be an oblique asymptote y = mx + q at +∞. Let’s determine the eventual m and q:
m = lim
f (x)
log(e x − 1) y=e x −1
log y
= lim
=
lim
= 1.
x→+∞
y→+∞
x
x
log(y + 1)
q = lim
f (x) − x = lim
x→+∞
x→+∞
x→+∞
π
ex − 4 π
arctan(e x − 1) + log |e x − 4| − x = + lim log
= .
2 x→+∞
ex
2
113
Therefore, the straight line y = x + π2 is oblique asymptote +∞. It remains the behavior at log 4±. Being |e x − 4| −→ 0+ as x → log 4
we have immediately that
lim f (x) = −∞.
x→log 4
Therefore, x = log 4 is vertical asymptote for f .
Continuity and differentiability: being composition of continuous and differentiable functions on their domain f is continuous and
differentiable. About f 0 we have
!
ex
1
1
1
x
x
x
f 0 (x) =
+
sgn(e
−
4)e
=
e
+
1 + (e x − 1) 2 |e x − 4|
1 + (e x − 1) 2 e x − 4
= ex
e x − 4 + 1 + (e x − 1) 2
e x − 4 + 1 + e2x − 2e x + 1
e2x − e x − 2
= ex
= ex
x
2
x
x
2
x
(1 + (e − 1) )(e − 4)
(1 + (e − 1) )(e − 4)
(1 + (e x − 1) 2 )(e x − 4)
There’re no points where it is interesting to compute limits for f 0 .
Monotonicity: We have
e2x − e x − 2
> 0.
ex − 4
f 0 (x) > 0, ⇐⇒
1+3
x
x
Now, setting y = e x , we have y 2 − y − 2 > 0, iff y 6 1−3
2 = −1 or y > 2 = 2, therefore iff e 6 −1 (never) or e > 2, that is
x
x > log 2. Moreover e − 4 > 0 iff x > log 4. Therefore we get the following table:
−∞
sgn(e2x − e x − 2)
sgn(e x − 4)
sgn( f 0 )
f
log 2
+
%
log 2
log 4
+∞
log 4
+
−
&
+
+
+
%
Being f continuous at x = log 2 we deduce that this is a maximum on ] − ∞, log 4[. Because f is upper unbounded (for instance) the
point is only local maximum. There’re not minimum points. We may conclude with the following graph:
y
log 2
log 4
x
Example 6.9.2. Let
f (x) =
x log x
.
(log x − 1) 2
Determine: the domain of f , sign, the behavior at the extremes of the domain (limits and asymptotes), continuity and
differentiability, eventual points where f can be extended by continuity/differentiability, monotonicity and extreme points.
Finally plot a qualitative graph.
Sol. — Domain: Clearly D( f ) = x ∈ R : x > 0, log x , 1 = {x ∈ R : x > 0, x , e} =]0, e[∪]e, +∞[.
Sign: We have
f > 0, ⇐⇒ log x > 0, ⇐⇒ x > 1,
and f = 0 iff log x = 0, that is x = 1.
114
Limits and asymptotes: We have to check the behavior of f at 0+, e± and +∞. We have
0−
x log x
+∞
= 0−, (because lim x→0+ x α | log x| β = 0),
2
x→0+ (log x − 1)
f (0+) = lim
e
x log x
0+
=
+∞, =⇒ x = e vertical asymptote,
x→e± (log x − 1) 2
f (e±) = lim
+∞
x log x
x
+∞
= lim
x→+∞ (log x − 1) 2
x→+∞ log x f (+∞) = lim
1
= +∞,
2
1 − log1 x
−→1
At +∞ we could have an oblique asymptote y = mx + q. Being
m = lim
x→+∞
f (x)
log x
= lim
= 0,
x→+∞ (log x − 1) 2
x
it follows that the asymptote doesn’t exist.
Continuity and differentiability: It is immediate that f is continuous and differentiable on its domain and
!0
log x + x x1 (log x − 1) 2 − x log x 2(log x − 1) x1
x log x
0
f (x) =
=
(log x − 1) 2
(log x − 1) 4
=
log x + 1 (log x − 1) − 2 log x
(log x) 2 − 2 log x − 1
=
.
3
(log x − 1)
(log x − 1) 3
By the previous point, being f (0+) = 0 we may extend f by continuity from the right at 0. Let’s see if such extension is also
differentiable. To this aim notice that
ξ 2 − 2ξ − 1
(log x) 2 − 2 log x − 1 ξ:=log x
=
lim
= 0−,
ξ→−∞ (ξ − 1) 3
x→0+
(log x − 1) 3
lim f 0 (x) = lim
x→0+
so, by a well known result, the extension of f at 0 it is alto differentiable.
Monotonicity, extreme points: We have
√
√
√
√
(log x) 2 − 2 log x − 1 > 0, ⇐⇒ log x 6 1 − 2, ∨ log x > 1 + 2, ⇐⇒ x 6 e1− 2, ∨ x > e1+ 2,
(log x − 1) 3 > 0, ⇐⇒ log x − 1 > 0, ⇐⇒ log x > 1, ⇐⇒ x > e.
By this we get the following table:
√
0
sgn(N )
sgn(D)
sgn( f 0 )
f
e1− 2
+
−
−
&
√
e1− 2
−
−
+
%
√
e
e
e1+ 2
−
+
−
&
√
√
e1+ 2
+
+
+
%
+∞
√
Now: f is continuous at √x = e1− 2 so this√ is a minimum point for f on√]0, e[. Similarly x = e1+ 2 is a minimum for f on ]e, +∞[.
Moreover because f (e1− 2 ) < 0 < f (e1+ 2 ) we conclude that x = e1− 2 is also a global minimum. There are not maximum points
because f is upper unbounded.
These kind of methods can be applied fruitfully to the study of inequalities.
115
y
x
1
ã
Example 6.9.3. Solve the inequality
ã1+
2
2x log x + 1 > x 2 .
Sol. — Consider the function f (x) := 2x log x + 1 − x 2 . Clearly D( f ) =]0, +∞[. Moreover
f (0+) = lim 2x log x + 1 − x 2 = 1,
x→0+
!
2
1
2x log x + 1 − x 2 = lim −x 2 1 − 2 − log x = −∞
x→+∞
x→+∞
x
x
f (+∞) = lim
being log x +∞ x. Now, f is continuous and differentiable on ]0, +∞[ and
f 0 (x) = 2 log x + 2x
1
− 2x = 2 log x + 2 − 2x = 2 log x + 1 − x .
x
Therefore
f 0 > 0, ⇐⇒ g(x) := log x − x + 1 > 0.
Let’s study the sign of g considering again g as function. We have g(0+) = −∞ and g(+∞) − ∞. Moreover g is continuous and
differentiable on ]0, +∞[ and
1
g 0 (x) = − 1.
x
Therefore
(x>0)
1
1
g 0 (x) > 0, ⇐⇒
− 1 > 0, ⇐⇒
> 1, ⇐⇒ x 6 1.
x
x
Being g continuous at x = 1 this is a global maximum for g. Being g(1) = 0 this means g 6 0 for any x ∈]0, +∞[. Then f 0 6 0 on
]0, +∞[ and f 0 = 0 iff g = 0 iff x = 1. Being finally f (0+) = 1 and f (+∞) = −∞ we deduce f (x) > 0 iff x ∈]0, 1[.
y
y
1
x
1
x
Figure 6.4: g and f .
6.10
Applied Calculus
It is a good idea now to have a break from pure maths and travel a little bit into some applied problem to show all the
versatility of differential calculus. Of course the choice of examples could be infinite, but we hope to give an idea at least!
116
Example 6.10.1. Determine radius r and height h of a cylindrical can having fixed volume V in such a way that the
surface S be minimum.
Sol. — The surface of the can is S = 2πr 2 + 2πr h, its volume is V := πr 2 h. Being V fixed
h=
V
2V
V
, =⇒ S = 2πr 2 + 2πr 2 = 2πr 2 +
.
r
πr 2
πr
Now
r
2V
4πr 3 − 2V
V
3 V
3
3
S (r) = 4πr − 2 =
> 0, ⇐⇒ 4πr − 2V > 0, ⇐⇒ r >
, ⇐⇒ r >
.
2
2π
2π
r
r
q
q
q
V ], S % on [ 3 V , +∞[. It follows that r = 3 V is a minimum point for S. In such case
Therefore S & on r ∈]0, 3 2π
2π
2π
0
r
r=
3
V
V
, h= 2 =
2π
πr
r
3
4V
.
π
A curiosity: if V = 0, 33 dm3 ≡ 333 cm3 (as in the case of cans used for drinks), we found r = 3, 75 cm and h = 7, 51 cm. These are
sizes which minimize the cost of the can (and you can check by yourself how this goal is attained by drinks producers. . . ).
Theorem 6.10.2 (Snellius’ law of refraction). The ratio of the sines of the angles of incidence and refraction of a light ray
traveling into two different homogeneous media is equal to the ratio of propagation speed into the respective media. In
symbols
sin α v1
= .
sin β v2
A
Α
X
Β
B
Proof. — The basic assumption is
Basic Axiom of Light Theory: light moves along paths minimizing the travel time.
For instance, assuming that we are in an homogeneous media, the Basic Axiom gives that light paths are just straight lines from one
point to another into the media.
With this in our hands, let’s formalize the problem. We have a point A in the first media, where the propagation speed is v1 and we
want to reach a second point B in the second media, where the propagation speed is v2 . The ray leaving from A go straight up to some
boundary point X where it enters into the second media. Therefore the total traveling time is
T=
AX
XB
+
.
v1
v2
For simplicity we assume that the two media occupy each one one half of the plane. Hence, if A = (a1, a2 ), B = (b1, b2 ) and X = (x, 0)
we have
q
q
(a1 − x) 2 + a22
(x − b1 ) 2 + b22
T (x) =
+
.
v1
v2
117
It is easy to see that such T has a unique minimum (T (−∞) = T (+∞) = +∞, T ∈ C (R) assures that T has a minimum). At a minimum
point, by Fermat Thm T 0 (x) = 0,
a −x
x − b1
T 0 (x) = q 1
+ q
= 0.
2
v1 (a1 − x) 2 + a2 v2 (x − b1 ) 2 + b22
Translating everything we may assume such point is x = 0. So,
a1
0 = T 0 (0) =
q
v1
On the other side
a1
q
a12
+
so
T 0 (0) = 0, ⇐⇒
6.11
a22
a12
−
+
b1
q
b1
= sin α,
q
.
v2 b21 + b22
a22
b21
+ b22
= sin β,
v
sin α sin β
sin α
= 1.
−
= 0, ⇐⇒
v1
v2
sin β
v2
Taylor formula
Recall that since the initial definition we have seen that
f is differentiable at x 0 ⇐⇒ f (x) = f (x 0 ) + f 0 (x 0 )(x − x 0 ) + o(x − x 0 ).
This formula has an interesting numerical interpretation: in first approximation, any (differentiable) function f is a first
order polynomial. Of course we have only a qualitative information about "how small" is the approximation: is o(x − x 0 )
that is, when x − x 0 is small, then o(x − x 0 ) is much smaller. We want now to investigate two questions:
• first, is it possible to increase the quality of the approximation with higher degree polynomials?
• second, is it possible to give a precise (quantitative) estimate of the approximation?
Both questions are connected with automatic calculus problems. How to say to a computer to compute sin x for any
x? We cannot imagine to miniaturize a man inside the computer such that, with a suitable unit circle, he measures x
radiants, hence he measures the coordinates of the corresponding point and gives us the answer! If we could say sin x
is approximatively a polynomial this would change things: polynomials are typical functions that can be computed by a
machine involving only elementary operations like products and sums.
To translate in mathematical formalism the first question, we are looking for some formula of type
f (x) =
n
X
ck (x − x 0 ) k + o((x − x 0 ) n ).
k=0
To guess the cn assume that f be exactly a polynomial, that is
f (x) =
n
X
ck (x − x 0 ) k .
k=0
Then c0 = f (x 0 ). Deriving,
f 0 (x) =
n
X
k=1
kck (x − x 0 ) k−1, =⇒ f 0 (x 0 ) = c1 .
118
Deriving once more
f 00 (x) =
X
k (k − 1)ck (x − x 0 ) k−2, =⇒ f 00 (x 0 ) = 2 · 1 · c2, ⇐⇒ c2 =
k=2
f 00 (x 0 )
f 00 (x 0 )
=
.
2·1
2!
Deriving again and calling f 000 = ( f 00 ) 0,
f 000 (x) =
X
k (k − 1)(k − 2)ck (x − x 0 ) k−3, =⇒ f 00 (x 0 ) = 3 · 2 · 1 · c3, ⇐⇒ c3 =
k=3
We may now guess that c4 =
f 0000 (x0 )
4!
f 000 (x 0 )
.
3!
and so on. To proceed let’s introduce the
Definition 6.11.1. Let f : D ⊂ R −→ R, x 0 ∈ Int(D), k ∈ N, k > 2. Suppose that
f (1) := f 0, f (2) := ( f 0 ) 0, f (3) := (( f 0 ) 0 ) 0, . . . , f (k−1) := (. . . ( f 0 ) 0 . . .) 0
(where the last derivative is repeated k − 1 times) exist in a neighborhood Ix0 . We say that there exists the k−th derivative
of f at x 0 if
f (k−1) (x 0 + h) − f (k−1) (x 0 )
f (k) (x 0 ) := lim
∈ R.
h→0
h
By definition we set f (0) ≡ f .
With this notations, the above argument shows that
if f (x) =
n
X
ck (x − x 0 ) k , =⇒ f (x) =
k=0
n
X
f (k) (x 0 )
(x − x 0 ) k .
k!
k=0
We are now ready for the
Theorem 6.11.2 (Peano). Let f be differentiable n − 1 times in Ix0 and such that exists f (n) (x 0 ). Then
f (x) =
n
X
f (k) (x 0 )
(x − x 0 ) k + o((x − x 0 ) n ).
k!
k=0
(Taylor formula)
The point x 0 is called center . If x 0 = 0 the formula is called McLaurin asymptotic expansion.
Proof. — We will limit to the case n = 2. We have to prove that
f 00 (x )
f (x) − f (x 0 ) + f 0 (x 0 )(x − x 0 ) + 2 0 (x − x 0 ) 2
lim
= 0.
x→x0
(x − x 0 ) 2
This is nothing but a form 00 . Applying the Hôpital’s rule we have
f 00 (x )
f (x) − f (x 0 ) + f 0 (x 0 )(x − x 0 ) + 2 0 (x − x 0 ) 2
f 0 (x) − f 0 (x 0 ) + f 00 (x 0 )(x − x 0 )
H
lim
=
lim
x→x0
x→x0
2(x − x 0 )
(x − x 0 ) 2
!
1
f 0 (x) − f 0 (x 0 )
lim
− f 00 (x 0 ) = 0,
=
2 x→x0
x − x0
because of the definition of f 00 (x 0 ) = ( f 0 ) 0 (x 0 ).
(6.11.1)
119
The McLaurin expansions
f (x) =
n
X
f (k) (0) k
x + o(x n ).
k!
k=0
are very important for the applications. Let’s see these formulas for the main elementary functions.
Example 6.11.3 (exponential).
ex =
n
X
xk
+ o(x n ), ∀n ∈ N.
k!
k=0
(6.11.2)
Sol. — Let f (x) := e x . Clearly f is derivable infinitely many times being f 0 = f . This means that the McLaurin formula can be
written up to any order n. We have
f (k) (x)
f (k) (0)
k
x
0 e
1
1 (e x ) 0 = e x
1
2 (e x ) 0 = e x
1
3 (e x ) 0 = e x
1
..
..
..
.
.
.
Therefore f (k) (0) = 1 for every k, hence the (6.11.2) follows.
Example 6.11.4 (sinh, cosh).
sinh x =
m
X
j=0
m
X x 2j
x 2j+1
+ o(x 2m+1 ), cosh x =
+ o(x 2m ), ∀m ∈ N
(2 j + 1)!
(2
j)!
j=0
(6.11.3)
Sol. — Let f = sinh, g = cosh. Being f 0 = g, g 0 = f we have that f and g are derivable infinitely many times, hence the McLaurin
expansions holds up to any order n. We have
k
0
1
2
3
..
.
f (k) (x)
sinh x
(sinh x) 0 = cosh x
(cosh x) 0 = sinh x
(sinh x) 0 = cosh x
..
.
f (k) (0)
0
1
0
1
..
.
g (k) (x)
cosh x
(cosh x) 0 = sinh x
(sinh x) 0 = cosh x
(cosh x) 0 = sinh x
..
.
k
0
1
2
3
..
.
g (k) (0)
1
0
1
0
..
.
We see that f (k) (0) = 0, 1 according to k even or odd. By this follows that the McLaurin expansion contains only powers with odd
exponent. By choosing n = 2m + 1 we get easily the first of the (6.11.3). Similar for g.
Example 6.11.5 (sin, cos).
sin x =
m
X
j=0
m
(−1) j
X
x 2j+1
x 2j
+ o(x 2m+1 ), cos x =
(−1) j
+ o(x 2m ), ∀m ∈ N.
(2 j + 1)!
(2
j)!
j=0
(6.11.4)
120
Sol. — Let f = sin, g = cos. Also in this case f and g are both derivable infinitely many times being f 0 = (sin x) 0 = cos x = g and
g 0 = (cos x) 0 = − sin x = − f . To compute the coefficients notice that
f (k) (x)
sin x
(sin x) 0 = cos x
(cos x) 0 = − sin x
(− sin x) 0 = − cos x
(− cos x) 0 = sin x
(sin x) 0 = cos x
..
.
k
0
1
2
3
4
5
..
.
f (k) (0)
0
1
0
−1
0
1
..
.
k
0
1
2
3
4
5
..
.
g (k) (x)
cos x
(cos x) 0 = − sin x
(− sin x) 0 = − cos x
(− cos x) 0 = sin x
(sin x) 0 = cos x
(cos x) 0 = − sin x
..
.
g (k) (0)
1
0
−1
0
1
−1
..
.
Therefore f (k) (0) = 0 for k even, f (k) (0) = ±1 for k odd. Precisely we see that writing k = 2 j + 1 we have f (2j+1) (0) = (−1) j . By
this the first of the (6.11.4) follows and similarly the second is obtained.
Example 6.11.6 (logarithm).
log(1 + x) =
n
X
k=1
(−1) k−1
xk
+ o(x n ).
k
(6.11.5)
Sol. — Let f (x) = log(1 + x). Clearly f is defined and derivable for x = 0. By looking at derivatives of f it is easy to check that it
admits derivatives of any order. To compute the coefficients notice that
k
0
1
2
3
4
5
..
.
f (k) (x)
log(1 + x)
1 = (1 + x) −1
(log(1 + x)) 0 = 1+x
((1 + x) −1 ) 0 = −(1 + x) −2
(−(1 + x) −2 ) 0 = 2(1 + x) −3
(2(1 + x) −3 ) 0 = −3 · 2(1 + x) −4
(−3 · 2(1 + x) −4 ) 0 = 4 · 3 · 2(1 + x) −4
..
.
f (k) (0)
0
1
−1
2
−3 · 2
4·3·2
..
.
Apart for the first coefficient (null) the others have alternate sign +, −, +, −, . . . and absolute value 1, 1, 2, 3 · 2, 4 · 3 · 2, . . .. Precisely:
f (k) (0) = (−1) k−1 (k − 1)! hence
log(1 + x) =
n
n
X
X
(−1) k−1 (k − 1)! k
(−1) k−1 k
x + o(x n ) =
x + o(x n ).
k!
k
k=1
k=1
Warning! 6.11.7. Be careful: with the McLaurin formula for the logarithm we don’t mean the McLaurin formula for
the function log x: this is not even defined at x = 0, hence f (0) and any of its derivatives don’t make sense at all! As
consequence, the McLaurin formula for log x is meaningless.
Example 6.11.8 (power).
(1 + x) α = 1 +
n
X
k=1
α
k
!
x k + o(x n ), dove
α
k
!
:=
α(α − 1)(α − 2) · · · (α − k + 1)
, ∀α ∈ R, ∀n ∈ N.
k!
(6.11.6)
121
Sol. — Let f (x) = (1 + x) α . We have
f (k) (x)
(1 + x) α
((1 + x) α ) 0 = α(1 + x) α−1
(α(1 + x) α−1 ) 0 = α(α − 1)(1 + x) α−2
0
α(α − 1)(1 + x) α−2 = α(α − 1)(α − 2)(1 + x) α−3
..
.
k
0
1
2
3
..
.
f (k) (0)
1
α
α(α − 1)
α(α − 1)(α − 2)
..
.
Now the conclusion follows easily.
Warning! 6.11.9. As for the logarithm,! with the McLaurin expansion of the power we don’t mean the McLaurin formula for x α but
of (1 + x) α . Moreover the notation
α
k
is used without confusion with the binomial coefficient, coinciding with it when α ∈ N.
Example 6.11.10. Compute the McLaurin asymptotic expansion to the fourth order of the function
f (x) = log(1 + e x )
Sol. — We need the first four derivatives of f :
f 0 (x)
=
ex
,
1 + ex
f 00 (x)
=
e x (1 + e x ) − e x e x
ex
=
,
x
2
(1 + e )
(1 + e x ) 2
f 000 (x)
=
x
x
e x − e2x
e x (1 + e x ) 2 − e x 2(1 + e x )e x
x 1 + e − 2e
=
e
=
,
(1 + e x ) 4
(1 + e x ) 3
(1 + e x ) 3
f 0000 (x)
=
(1 − 2e x )(1 + e x ) − 3(1 − e x )
(e x − 2e2x )(1 + e x ) 3 − (e x − e2x )3(1 + e x ) 2 e x
= ex
x
6
(1 + e x ) 4
(1 + e )
By this
1
1
1
f (0) = log 2, f 0 (0) = , f 00 (0) = , f 000 (0) = 0, f 0000 (0) = − .
2
4
8
Therefore
log(1 + e x ) = log 2 +
6.11.1
1
1/4 2 0 3 −1/8 4
1
1
1 4
x+
x + x +
x + o(x 4 ) = x + x 2 −
x + o(x 4 ).
2
2!
3!
4!
2
8
224
Computing limits by using asymptotic expansion
In this subsection we present a powerful method in computing limits. We will introduce it by a first example:
Example 6.11.11. Compute
lim
x→0
(1 − cos(3x)) 2
.
x 2 (1 − cos x)
122
2
Sol. — Call N and D numerator and denominator. Because 3x −→ 0 and cos t = 1 − t2 + o(t 2 ) as t −→ 0 (we choose the "shortest"
not trivial expansion to write the minor number of terms), we have
(3x) 2
N (x) = (1 − cos(3x)) = 1 − 1 −
+ o (3x) 2
2
2
!! 2
!2
81 4
9 2
2
=
x + o(9x ) =
x + 9x 2 o(9x 2 ) + o(9x 2 ) 2 .
2
4
Now, look at this expression with some intuitive numerical sense. You’ll remind that, computing limits, the biggest term (with respect
4
2
2
2 2
to ratios) gives the behavior (like in a sidecar: is the motor cycle the driving term). Look at terms 81
4 x , 9x o(x ) and o(9x ) . The
first is like x 4 ; the second is x 2 times something smaller than x 2 . It sounds reasonable that
9x 2 o(9x 2 ) = o(x 4 ).
But, is it true? Recall that f = o(g) means f /g −→ 0. So we have to check if
9x 2 o(9x 2 )
o(9x 2 )
o(9x 2 )
9x 2 o(9x 2 )
−→
0.
But:
=
9
=
81
−→ 0,
x4
x4
x2
9x 2
because o(♥)
♥ −→ 0 if ♥ −→ 0. By the same intuition it seems reasonable that
o(9x 2 ) 2 = o(x 4 ), Indeed,
o(9x 2 ) 2
o(9x 2 ) o(9x 2 )
o(9x 2 ) o(9x 2 )
=
=
81
−→ 0 · 0 = 0.
x2
x2
x2
9x 2
9x 2
We conclude that
81 4
81 4
x + o(x 4 ) + o(x 4 ) =
x + o(x 4 ),
4
4
4
because it is evident that o(x 4 ) + o(x 4 ) = o(x 4 ). We can now say that N (x) ∼ 81
4 x because
N (x) =
N (x) =
!
4 o(x 4 )
81 4
81 4
x 1+
=
x · 1x .
4
81 x 4
4
In other words: we reduced the numerator to some power! If we can do the same for the denominator we are done because it is much
easier to compare powers than complicate expressions.
!!
!
x2
1
1
x4
x2
D(x) = x 2 (1 − cos x) = x 2 1 − 1 −
+ o(x 2 ) = x 2
+ o(x 2 ) = x 4 + x 2 o(x 2 ) = x 4 + o(x 4 ) =
· 1x .
2
2
2
2
2
Therefore
81 x 4 · 1
x
N (x)
81
= 44
=
· 1 x −→ 1.
x ·1
D(x)
2
x
2
In the previous example we met some of the rules of calculus with infinitesimal quantities. They are easy to understand
and the reader is invited to develop some intuitive numerical sense on them.
Proposition 6.11.12. As x −→ 0:
• o(x) + o(x) = o(x);
• co(x) = o(cx) = o(x) for any c ∈ R\{0};
• x n = o(x m ) if n > m (also n, m reals, in this case x −→ 0+ to make sense);
• o(x n ) = o(x m ) if n > m (also n, m reals, in this case x −→ 0+ to make sense);
• x n o(x m ) = o(x n+m );
123
• o(x n )o(x m ) = o(x n+m );
• (x + o(x)) n = x n + o(x n ) (also n real, in this case x −→ 0+ to make sense);
• o(x + o(x)) = o(x).
Warning! 6.11.13. All these properties have the form ♦ = o(♥). This means ♥♦ −→ 0 as ♥ −→ 0. The order is important
and it cannot be inverted! For instance o(x 2 ) = o(x) is true but of course o(x) = o(x 2 ) is false!
n
Proof. — i) and ii) are easy (exercise). iii): x n = o(x m ) as n > m iff xxm −→ 0. But
xn
= x n−m −→ 0, (n > m).
xm
iv), v) and vi) are similar. vii) We will limit to the case n ∈ N. By Newton binomial formula
(x + o(x)) n =
n
X
k=0
n
k
!
x k o(x n−k ) = x n +
n−1
X
k=0
n
k
!
o(x n ) = x n + o(x n ),
by previous properties.
vii) is more delicate. We have to prove that o(x+o(x))
−→ 0. It would be natural to write
x
o(x + o(x))
o(x + o(x)) x + o(x)
=
−→ 0,
x
x + o(x)
x
but we have to be careful with division by 0. However, this is easily solved noticing that
!
o(x)
= x · 1x,
x + o(x) = x 1 +
x
and because 1 x −→ 1, 1 x , 0 in some I0 \{0}: but then x + o(x) = x · 1 x , 0 as x ∈ I0 \{0}, and this authorizes previous passages.
Example 6.11.14. Compute
lim √
x→0+
√4
2
cos x − e−x
.
√
x log(1 + x sin x) − x 3 + x(e x − 1)
Sol. — Immediately we recognize a form 00 . Recalling that
cos t = 1 −
we have
N (x)
=
t2
+ o(t 2 ), et = 1 + t + o(t), (1 + t) α = 1 + αt + o(t),
2
! 1/2
!
√4
2
x2
x2
cos x − e−x = 1 −
+ o(x 2 )
− 1−
+ o(x 2 )
2
2
= 1+
!
!!
1
x2
x2
x2
− + o(x 2 ) + o − + o(x 2 ) − 1 +
+ o(x 2 ).
2
2
2
2
!
x2
x2
+ o(x 2 ) + o − + o(x 2 ) .
4
2
2
2
2
By rules of calculus with o, o − x2 + o(x 2 ) = o(x 2 ), so N (x) = x2 + o(x 2 ) = x2 · 1 x .
=
124
Passing to the denominator,
sin t = t + o(t), log(1 + t) = t + o(t),
therefore
D(x)
=
√
√
√ x log 1 + x x + o( x) − x 3 + x (1 + x + o(x) − 1) = x 1/2 log 1 + x 3/2 + o(x 3/2 ) − x 3 + x 2 + o(x 2 )
= x 1/2 x 3/2 + o(x 3/2 ) + o x 3/2 + o(x 3/2 ) − x 3 + x 2 + o(x 2 )
= x 2 + o(x 2 ) − x 3 + x 2 = 2x 2 + o(x 2 ) = 2x 2 · 1 x ,
being x 3 = o(x 2 ). In conclusion
2
x ·1
x
N (x)
1
= · 1 x −→ 1 x .
= 22
D(x)
4
2x · 1 x
Example 6.11.15. Compute, as α > 0, the limit
lim+
x→0
log (1 + x α ) − sin(x 8 )
e x 2α − 1
Sol. — Being α > 0, x α −→ 0+ as x −→ 0+, so we recognize a form 00 . Recalling that
log(1 + ξ) = ξ −
sin ξ = ξ −
eξ = ξ +
we have
ξ2
ξn
+ . . . + (−1) n+1
+ o(ξ n ),
2
n
ξ3
ξ 2n+1
+ . . . + (−1) n
+ o(ξ 2n+1 ),
3!
(2n + 1)!
ξ2
ξn
+...+
+ o(ξ n ),
2!
n!















 as ξ → 0.















N (x) = x α + o(x α ) − (x 8 + o(x 8 )) = x α − x 8 + o(x α ) + o(x 8 ).
There’re three cases: α < 8, α = 8 and α > 8. In the first one
x 8 + o(x 8 ) = o(x α ), =⇒ N (x) = x α + o(x α ).
If α = 8 we have N (x) = o(x 8 ) which is too vague to know the precise behavior of the numerator. To solve the impasse, we extend the
asymptotic expansion for log and sin to get
!
(x 8 ) 3
x 16
(x 8 ) 2
+ o((x 8 ) 2 ) − x 8 −
+ o((x 8 ) 3 ) = −
+ o(x 16 ).
N (x) = x 8 −
2
3!
2
Finally, as α > 8, we have x α + o(x α ) = 0(x 8 ), so
Summarizing:
N (x) = x 8 + o(x 8 ).
x α · 1x,







 1 16
− 2 x · 1x,
N (x) = 






 8
 x · 1x,
α < 8,
α = 8,
α > 8.
125
About the denominator the discussion is easier:
D(x) = x 2α + o(x 2α ) = x 2α · 1 x .
Therefore










N (x) 
=
D(x) 










6.11.2
x α ·1 x
x 2α ·1 x
= x −α · 1 x −→ +∞,
− 1 x 16 ·1
α < 8,
= x2 16 ·1 x = − 21 · 1 x −→ − 12 ,
x
α = 8,
8
= xx2α·1·1x = x 8−2α · 1 x −→ +∞,
x
α > 8.
Applications to convergence for numerical series
The method we introduced in the previous subsection is quite flexible and can be used fruitfully for convergence of series.
Let’s see this with an example.
Example 6.11.16. Determine α > 0 such that the series
∞
X
n=1
n
2
1
1 − cos
n
!
!
1
1
− sin
,
nα
n
converges.
Sol. — Notice that the two parentheses are null as n −→ +∞. But how much? Let’s use the asymptotic expansion to answer: recall
first that
x2
x3
cos x = 1 −
+ o(x 2 ), sin x = x + o(x) = x −
+ o(x 3 ), x → 0,
2
6
so
!
!
1
(1/n) 2
1 2+
1
1
1
= 2 + o 2 ∼ 2,
+ o*
1 − cos =
n
2
n
2n
n
2n
,
while
1
1
1
1
1
− sin = α −
+o
nα
n n
n
n
!!
!
1
1
1
= α − +o
.
n
n
n
Here we have three cases: 0 < α < 1, α = 1 and α > 1. If 0 < α < 1 then
!
!
!
!
1
1
1
1
1
1
1
1
1
=o α , o
= o α , =⇒ α − sin = α + o α ∼ α .
n
n
n
n
n
n n
n
n
Therefore, if 0α < 1 we have
1 1
1
1n ∼ α .
α
2
n
2n
2n
In the case α = 1 the expansion for sin is not enough because we have
!
1
1
1
− sin = o
,
n
n
n
a n = n2
which is almost useless. Extending the expansion for sin we have
!
!
1
1 1
1 (1/n) 3
1 3 ++
1
1
1
− sin = − * −
+ o*
= 3 + o 3 ∼ 3,
n
n n ,n
6
n
n
6n
,
-- 6n
126
so
a n = n2
1
1 1
1n ∼
.
2n2 6n3
12n3
Finally, if α > 1, we have
!
!
1
1
1
1
1
1
1
=
o
,
=⇒
−
sin
=
−
+
o
∼− ,
nα
n
nα
n
n
n
n
hence
a n = n2
!
1
1
1
−
1n ∼ − .
2
n
2n
2n
Summarizing:









an ∼ 









6.12
1/2
nα ,
0<α<1
the series converges iff α > 1 by asymptotic comparison, hence never in this case;
1/12
,
n3
α=1
the series converges by asymptotic comparison;
− n1 ,
α>1
the series diverges by asymptotic comparison.
Exercises
Exercise 6.12.1. Let
1



x sin , x ∈ R\{0},


x
f (x) := 



 0,
x = 0.

Show that i) f is continuous at x = 0; ii) is not differentiable at x = 0.
Exercise 6.12.2. Let
1



x 2 sin ,


x
f (x) := 



 0,

i) Show that f is continuous at x = 0. ii) Is it differentiable at x = 0?
x ∈ R\{0},
x = 0.
Exercise 6.12.3. Let

sin x,
−1 6 x < 0,





f (x) := 

(sin x 2 ) 5



, 0 < x 6 1.

 (tan x 3 ) 2
Say if f is extendable continuously at x = 0. In this case, is the extension differentiable at x = 0?
Exercise 6.12.4. Using carefully the rules of calculus, say where the following functions are differentiable:
x 2 − 2x − 1
.
x−1
q
6. |x| 3 + 1.
1.
2. sin(log x).
x
7. |x + 1| x−1 .
3. (1 + e x ) 3 .
8.
q
√
sin x.
4. (sin x) cos x .
5. log 1 + | log |x|| .
9. cosh |x|.
10. sinh |x|.
Exercise 6.12.5. For any of the following functions say if they are differentiable at x = 0:
1. e− |x | . 2. x|x|. 3. |x sin x|. 4. x[x − 1]. 5. [x](x − 1).
127
Exercise 6.12.6. Compute the derivatives:
2. x43 + 5x 4 − x75 + x18 .
1. x 6 − 2x 3 + 6x.
√
5.
x 2 + 1 − x.
6.
√
√ x+1−1 .
x+1+1
4
x 3 − x13 + 3 .
1+x 2
1+x
5
x
7. √
√ .
3
√
1+ x
2
.
r q
√
4
8.
x 3 x x.
3.
4.
9. (sin x) 3 − 3 sin x.
x .
10. cos
x
11. tan x + cos1 x .
12. x 2 tan(x 2 + x + 1).
x2
x
13. 1+cos
x + tan 2 .
14. x arcsin x.
15. sin(2 arctan x).
x − arctan x.
16. 1+x
2
17. arcsin(sin x).
x
18. arctan 1−cos
sin x .
√
19. arctan x − 1 + x 2 .
20. x(log x − 1).
21. log tan x2 .
22. log log x.
23. x tan x + log(cos x).
24. arctan log x12 .
25. ee .
26. xe1−cos x .
27. 2 log x .
29. cosh(sinh x).
30. √ 1
31.
x
1+e−
x
33. x x .
√
x
.
34. (x x ) x .
x
28. log(sinh x − 1).
q
arctan sinh x3 .
35. sin x log x .
32. x x .
1
36. (cos x) x .
Exercise 6.12.7. Given the following f : [−1, 1] → R find a, b ∈ R such that ∃ f 0 (0):
(a + 1) arcsin x − 6(b + 3) sin x,



f (x) = 

√

4
 2a(x + x) − (b + 3)( x + tan x),
if − 1 6 x 6 0,
if 0 < x 6 1,
Exercise 6.12.8. Given the following f : [−1, 1] → R find a, b ∈ R such that ∃ f 0 (0):
1
4


 b(x + 3x) + (2 − a) cos x ,
f (x) = 


x
 (2b + 3)(e − 1) + (2 − a) tan 3x,
se − 1 6 x < 0,
se 0 6 x 6 1,
Exercise 6.12.9. Determine a ∈ R such that
π
cos 2x



log x ,

f (x) := 



 a,
0 < x < 1, x > 1,
x = 1,
be continuous at x = 1. For such value, discuss differentiability at x = 0.
Exercise 6.12.10. Compute
2 − 2 cos x − (sin x) 2
.
x→0
x4
2. lim
cos(2x) − cos x
.
x→+∞
x2
x 2 sin x1
.
x→0 sin x
1. lim
5.
lim
9. lim
e x − e−x − 2x
.
x − sin x
x→0
3. lim
6. lim
tan x − sin x
.
x→0
x3
7.
10. lim log x (e x − 1).
x→0+
11. lim x 1−x .
x→0
sin x − log cos x
.
x sin x
x 3 (log x) 2
.
x→+∞
ex
lim
1
x→1
4. lim
x→0
8.
x − sin x
.
x3
1
lim (e x + 1) x .
x→+∞
!
1
1
.
−
x−1
x→1 log x
12. lim
128
Exercise 6.12.11 (?). Compute
1
e − (1 + x) x
.
x
x→0
2. lim
!
1
2
− 2 .
x→0+ 1 − cos x
x
5. lim (1 + x 2 ) (sin x)2 .
1. lim
4. lim
7.
lim
x→+∞
x 2 log
x
+x .
x+1
−x x + x
.
x→1+ log x − x + 1
√
3. lim √
x→0+
− 1 log(x + 1)
6. lim
2
(1 + x) x − e2
.
x
x→0+
8. lim
x sin x + x 2
1 − log(x + 1)
1 + sin x
x→0
1
x→0
ex
9.
.
!1
x
.
!


2x + 1 6x
− e3  .
lim x 
x→+∞ 
2x


Studying functions
Exercise 6.12.12. For any of the following functions find: domain, eventual symmetries, behavior at the boundary of the domain
(included eventual asymptotes), sign (if possible), continuity, differentiability, limits of f 0 at the boundaries of the domain of f 0 , sign
of f 0 , monotonicity, local and global extreme points, convexity and flexes. Finally plot the graph.
1. f (x) := x 5 + x 4 − 2x 3 .
2. f (x) := x 2 log |x|.
q
3. f (x) := x x−1
x+1 .
4. f (x) := log 1 − log1|x | .
√
5. f (x) := arcsin 2x 1 − x 2 .
6. f (x) := 2x + arctan x 2x−1 .
sinh x .
7. f (x) := 2x + sinh
x−1
p
8. f (x) := x | log x|.
9. f (x) := x x .
Exercise 6.12.13. For each of the following functions find: domain, sign (only for numbers 6,7,8,9), limits and asymptotes, continuity
and differentiability, compute f 0 , limits of f 0 at the boundary of its domain, eventual points where f could be extended continuously
and with derivative, monotonicity, extreme points and plot a graph:
1 sinh x
−
.
1. arctan 3
sinh x 2. arctan(e x − 1) + log |e x − 4|.
x
4. 2x + arctan 2
.
x −1
5. 2 arcsin
7.
p
√
3x − 3x 2 + 2x
1
+ x.
cosh(x − 1)
8. log sinh2 x + 2 sinh x + 4
1
3. arctan (x + 1)e x .
6.
(x + 1) 1/3
log3 (x + 1)
q
9. arcsin 1 − (log x) 2 .
Exercise 6.12.14. Study the following inequalities
1. log x <
√
x. 2. log x −
x−1
x2
x − 2 log |x + 1|
> 0, 3. e x > 1 + x +
. 4. √
6 0.
x
2
e x − 2x − 1
Exercise 6.12.15 (?). Study, in function of the parameter α > 0, solutions of
ey = αy, y ∈ R,
129
Exercise 6.12.16 (?). Find all the possible values α ∈ R such that the following function be monotone increasing:
f (x) := αx −
x3
.
1 + x2
Exercise 6.12.17 (??). Find all the possible values α ∈ R such that the function f α (x) := eαx − α 2 x be monotone on [0, +∞[.
Exercise 6.12.18 (?). Let p > 1. Prove the inequality
21−p 6
Deduce by this the inequality
21−p 6
tp + 1
6 1, ∀t > 1.
(t + 1) p
xp + yp
6 1, ∀x, y > 0.
(x + y) p
Exercise 6.12.19 (??). For which values α ∈ R the function f α (x) := e x − αx 3 is convex on R?
Exercise 6.12.20. Compute
e x + e−x − 2
.
x→0 1 − cos x
2. lim
1. lim
3.
lim
x→+∞
x→0
1
x
x − x 2 log 1 +
x cos x − sin x
1 + x 2 − e x + (sin x) 3
.
4. lim
2
log(1 + x 2 ) + x 2 + (tan x) 2 + sin x
.
x→0
x 3 + log(1 + x)
6. lim √
log(2 − cos(2x))
.
x→0 log(sin(3x) + 1) 2
8. lim
x→0+
(sin x) 3 + x 3
1 − cos x + (tan x) 2 + arctan x
.
esin x − 1 − x
.
x→0
x2
7. lim
2
3
log(1 + x arctan x) − e x + 1
.
√
x→0
1 + 2x 4 − 1
e (sin x) − 1 − (tan x) 3
.
x→0 x 3 e x 2 − e (sin x) 2
10. lim
9. lim
2
log(1 + sin x) − x + x2
(tan x) 3 + x 5
x→0
.
x 3 + x 2 (sin x) 2 + sin x 2
.
x→0
x 4 + x 3 + x sin x
!!
5. lim
11. lim
2
7
.
xe x − cos(x 4 ) + 1 − x
.
x→0+ sinh x 4 − log(1 + x 4 )
12. lim
x2
e 2 − cosh x + (sinh x) 3
13. lim
.
x→0+ x log cosh x + (x log x) 4
Exercise 6.12.21. Compute, in function of α > 0, the following limits:
ln(1 + x α ) − sin x
.
1 − cos(x α )
x→0
3. lim
x 2 − arctan(x 2 )
.
x→0 e x 2 − cos x 2α
6. lim+
log(cos x)
.
xα
x→0
2. lim+
cos(x α ) − e x
.
x→0+ x log(1 + x α ) − x α
5. lim+
sin(x α ) − sinh(x 2 )
.
x→0+ log(1 + 2x 2 ) − (cos(2x) − 1)
8.
1. lim
4. lim
7.
lim
xe x − cos(x 2 ) + 1 − x
.
x→0+ sinh x α − log(1 + x 4 )
lim
11.
cos x − e−x
.
x→0+ cosh x + cos(x α ) − 2
lim
2
e−αx − cos x + log(1 + x) 2
.
x→0+
x3
lim
ln(1 + x) − x
.
α
e x − 1 + x log x
.
x→0 sin x 2α + 1 − cos x 2
α
2
3
10.
x→0+
α
e x − 1 − x α − 21 sin x
9.
log(1 + x 3 ) − e x − 1
.
x→0+ x sinh(x 2 ) − sin x(cos x − 1)
lim
130
Exercise 6.12.22 (?). Compute
q
√
√
x
3
1 − sin x − e x + 3
lim
.
arctan x
x→0+
Exercise 6.12.23 (?). Find α > 0 (if they exist) such that the following limit is finite and different by 0:
lim
√ √
log x + e x − sin x − x
xα
x→0+
.
Exercise 6.12.24. Find a, b ∈ R such that the following limit exists finite and not 0:
lim
a sin x − 2b log(1 + x) + 23 (a − 2)x 3
x2
x→0
Exercise 6.12.25 (?). Compute
!
√
3
2 log 2
lim (sin x) −2 2 1+sin x − 2 −
sin x .
3
x→0
Exercise 6.12.26. Discuss convergence for each of the following series:
1.
+∞
X
n=1
4.
1
sin .
n
2.
+∞ X
1
1
1 + e n − 2e 2n .
n=1
+∞
X
log(n + 1) − log n
.
√
n + log n
n=1
5.
+∞
X
+∞
X
√
3.
n log
n=1
e
√1
n
n=1
!
!
1
− 1 log 1 + √ .
n
∞
X
6.
e
√1
n
!
2n2 + 3
.
2n2 + 2
!
−1
sin
n=1
!
1 1
−
.
n n
Exercise 6.12.27. Determine for which α > 0 the following series are convergent:
1.
3.
√
1
2
n *cos
− e− n α + .
n
n=1 ,
+∞
X
∞
X
log cos
n=1
5.
+∞
X
√
n sinh
n=1
7.
2.
(n + sin n)
n=1
!
!
1
1
− α .
n
n
1
1
− log 1 + α
n
n
∞
X
4.
+∞
X
1
n e 2n α − cosh
n=1
!!
.
+∞
X
!
1
1
n3 sin 4 − e n α + 1 .
n
n=1
6.
8.
!
1
1
−
sin
.
nα
nα
!
1
.
n
+∞
X
!
1
1
n2 sin α − sinh 3 .
n
n
n=1
∞
X
nα
n=1
1
1
− arctan
n
n
!
Exercise 6.12.28 (?). Find α, β ∈ R such that the following series is convergent:
∞
X
!! 
 β
1
1 2 
nα e n2 − cos + log 1 +
 .
n
n


n=0
Exercise 6.12.29 (?). Discuss simple and absolute convergence for
1.
+∞
X
n=1
sin
!
+∞
∞
X
X
(−1) n
(−1) n
. 2.
log 1 +
. 3.
(−1) n
n
n
n=1
n=1
1+
!
!
∞
X
1 n
1
− e . 4.
(−1) n
n
n + (−1) n
n=2
131
Exercise 6.12.30 (??). Let a > 0. Discuss convergence for
∞
X
p
sin π n2 + a2 .
n=1
Exercise 6.12.31. Find radius r and height h of a cylindrical can with fixed surface S and volume maximum as possible.
Exercise 6.12.32. Among all the rectangles inscribed in a circumference of radius r, find those with maximum area.
Exercise 6.12.33. Find the maximum area for an isoscele triangle inscribed in a circumference of radius r.
Exercise 6.12.34. Find the minimum surface cone with circular base and height perpendicular to the base inscribed into a sphere of
radius r.
Exercise 6.12.35. Consider the part of the parabola y = −x 2 + 4 in the first quarter. The tangent to the parabola at some point form a
triangle with axes x and y. Find those with minimum and maximum area (if they exists).
Exercise 6.12.36. Among all the convex polygon with vertex on a circumference of radius r, find (if they exist) those with maximum
perimeter.
Exercise 6.12.37. A cylindrical can with base radiu r and height h costs C per cm2 of aluminium and C/3 per cm2 to paint the lateral
surface. Determine, in function of C and of the volume V fixed, r and h such that the cost of the can is minimum.
Exercise 6.12.38. A plane course connects two airports A and B. The plane take off by A ascending along a straight line up to height
h, then it flight at speed vC up the beginning of the descent to B. If p ∈] − 1, 1[ is the slope of the ascent/descent the speed of the plane
is v(p) = vC (1 − p). Find p in such a way that the flight is shorter as possible.
Exercise 6.12.39. A sail boat go back up on a regatta field long `. The wind has intensity F and constant direction opposite to the
direction of the boat. If θ is the angle between the boat course and the wind direction, the pressure on the sail is F sin θ. We suppose
that the course is done by two straight parts and the initial angle with the wind is α. Determine the time the boat has to turn in such a
way that the total time is minimum. Which is the angle α that minimizes the total time?
Exercise 6.12.40. In a fabric of boxes someone poses the following problem: what is the maximum volume of a box that can be
constructed by a suitably plied piece of cardboard which is a cut as a unique piece by a square cardboard of side L?
132
Chapter 7
Primitives
In few words, we want now to invert the operation of derivative:
given f on D find F differentiable on D such that F 0 = f on D.
The function F is called primitive of f . To solve this problem is very important. First of all it will allows, see the next
two Chapters, to treat in a satisfactory way the problem to compute plane areas. Hence, it is useful to solve differential
equations. Despite the mechanical nature of derivative, to compute primitives is at all an easy problem. Often it is not
possible to conclude the computation in a finite number of steps. There’re however several classes of primitives for which
there’re standardized methods.
Let’s start by the
Definition 7.0.1. Given f : D ⊂ R −→ R, a function F : D ⊂ R −→ R differentiable on D such that F 0 = f on D (that is
F 0 (x) = f (x) for any x ∈ D) is called primitive of f . We use the notation
Z
Z
F (x) =
f (x) dx, or, shortly, F =
f.
It is clear that the problem has not a unique solution: if F 0 = f then (F + c) 0 = f for any c ∈ R. These are actually the
unique typo of solutions on D intervals:
Proposition 7.0.2. Two primitives of f on D interval differ by an additive constant.
Proof. — Let F, G two primitives of f on D interval. Then
F 0 = f , G 0 = f , on D, =⇒ F 0 − G 0 = (F − G) 0 = 0, on D.
The conclusion follows now by the following
Lemma 7.0.3. Let H differentiable such that H 0 = 0 on D interval. Then H is constant.
Proof. — It s still a consequence of the Lagrange’s finite increment formula. taken x, y ∈ D, x < y, then [x, y] ⊂ D (because D is
interval), hence H is differentiable on ]x, y[ and continuous [x, y]. Therefore
∃ξ ∈]x, y[ : H (y) − H (x) = H 0 (ξ)(y − x) = 0.
So H (x) = H (y) for any x, y ∈ D, that is H is constant.
By the proposition, once we know a primitive
R
f (x) dx on an interval, we know all the primitives.
133
134
7.1
Elementary and quasi–elementary primitives
Reading the derivatives of elementary functions we have a first table of primitives:
Z
e x dx = e x,
x ∈] − ∞, +∞[
Z
sin x dx = − cos x,
x ∈] − ∞, +∞[,
Z
cos x dx = sin x,
x ∈] − ∞, +∞[,
Z
sinh x dx = cosh x,
x ∈] − ∞, +∞[,
Z
cosh x dx = sinh x,
x ∈] − ∞, +∞[,
Z
x α dx =
Z
1
dx = log |x|,
x
x ∈ R\{0},
1
dx = arctan x,
1 + x2
x ∈] − ∞, +∞[,
Z
Z
√
Z
1
1 − x2
dx = arcsin x,
1
dx =
(cos(x)) 2
Z
√
Z
√
1
x2
+1
1
x2
−1
x ∈ R, se α ∈ N,



 x ∈ R\{0}, se α ∈ Z, α < −1,

 x ∈]0, +∞[, se α ∈ R, α , −1,

x α+1
,
α+1
Z
x ∈] − 1, 1[,
(1 + (tan x) 2 ) dx = tan x,
x ∈] − ∞, +∞[\
dx = arcsinh x,
x ∈ R,
dx = arccosh x,
x ∈ [1, +∞[.
(
π
2
)
+ kπ, k ∈ Z ,
Notice that some functions (log, arcsin, arctan), that are elementary for us, don’t appear in this table. This is because we
don’t have in the table functions whose derivative is respectively log, arcsin, arctan. We will see later how to compute
them.
Applying the chain rule we have an extension of the previous table. For instance
e f (x)
0
= e f (x) f 0 (x), ⇐⇒
Z
e f (x) f 0 (x) dx = e f (x) .
135
At the same way
Z
e f (x) f 0 (x) dx = e f (x),
Z
f 0 (x)
dx = log | f (x)|,
f (x)
Z
sin( f (x)) f (x) dx = − cos( f (x)),
Z
cos( f (x)) f 0 (x) dx = sin( f (x)),
Z
sinh( f (x)) f 0 (x) dx = cosh( f (x)),
Z
cosh( f (x)) f 0 (x) dx = sinh( f (x)),
Z
f (x) α f 0 (x)dx =
Z
f 0 (x)
dx = arctan f (x),
1 + f (x) 2
f 0 (x)
dx = tan f (x),
(cos f (x)) 2
0
f (x) α+1
,
α+1
f 0 (x)
Z
dx = arcsin f (x),
p
1 − f (x) 2
Z
Z
f 0 (x)
dx = arcsinh f (x),
p
1 + f (x) 2
Z
f 0 (x)
p
f (x) 2 − 1
dx = arccosh f (x).
Basically, any computation for a primitive reduces to one of previous cases: it is therefore important to be quite able to
recognize these particular forms.
7.2
Rules of calculus
Rules of calculus for primitive are the reverse reading of rules of calculus for derivative. The first follows immediately by
the linearity of derivatives. Suppose that F and G are primitives of f and g respectively: then
Z
(αF + βG) 0 = αF 0 + βG 0 = α f + βg, ⇐⇒ αF + βG =
(α f + βg).
But F =
R
f and G =
Z
R
g so
(α f (x) + βg(x)) dx = α
Z
f (x) dx + β
Z
g(x) dx, ∀α, β ∈ R. (linearity)
(7.2.1)
The rule for the derivative of the product gives an important formula, often helpful in computations.
Z
Z
Z
linearity
( f g) 0 = f 0 g + f g 0, ⇐⇒ f g =
f 0 g + f g 0 , ⇐⇒
f 0g = f g −
f g 0,
that is
Z
f 0 (x)g(x) dx = f (x)g(x) −
Z
f (x)g 0 (x) dx. (calculus by parts)
(7.2.2)
0
The aim of this formula is to transform the calculus
of the primitive
R
R of some function on which we recognize the form f g
0
0
0
into the primitive of f g . The hope is that f g is easier than f g. There’re of course several situations, presented in
the following examples.
136
Example 7.2.1. Compute
Z
x log x dx.
2 0
Sol. — Notice that x = x2 , while we don’t have nothing to write like log x = (. . .) 0 . So the choice is "obliged": applying the part
formula
!0
Z
Z
Z
Z 2
Z 2
x2
x2
x2
1
x2
x
x 1
log x dx =
log x 0 dx =
x dx
x log x dx =
log x −
log x −
dx =
log x −
2
2
2
2
2 x
2
2
x2
x2
log x −
.
2
4
This shows well what does it means "to transform in to a simpler problem": after the application of the formula we have to compute an
elementary primitive!
=
Example 7.2.2. Compute
Z
log x dx.
Sol. — Let’s write log x = 1 · log x = (x) 0 · log x. Then
Z
Z
Z
Z
1
1 dx = x log x − x.
log x dx =
(x) 0 log x dx = x log x −
x · dx = x log x −
x
Sometimes we have more choices for the derivative factor, but the choices are not equivalent.
Example 7.2.3. Compute
Z
x sin x dx.
2 0
Sol. — We could write x sin x = x2 sin x = x(− cos x) 0 . Using the first identity
Z
x sin x dx =
Z
x2
2
!0
sin x dx =
x2
sin x −
2
Z
x2
x2
1
cos x dx =
sin x −
2
2
2
Z
x 2 cos x dx,
and we could
0 that the new problem appears more difficult than the initial one. Of course we could iterate the method writing
3 say
x 2 cos x = x3 cos x = x 2 (sin x) 0 . The second choice bring us to the starting point!
Z
x sin x dx
=
x2
1
sin x −
2
2
=
x2
1 2
sin x −
x sin x −
2
2
Z
x 2 cos x dx =
Z
x2
1
sin x −
2
2
Z
x 2 (sin x) 0 dx
! Z
2x sin x dx =
x sin x dx,
3 0
which is a correct but useless identity. If we choose the option x 2 cos x = x3 cos x we get a still complicate primitive
Z
x sin x dx =
1
x2
sin x −
2
2
Z
x 2 cos x dx =
1 x3
x2
sin x −
cos x +
2
2 3
Z
!
x3
sin x dx = . . .
3
Let’s come back to the starting point and let’s take the other road: x sin x = x(− cos x) 0 . We have
Z
Z
x sin x dx = −x cos x +
cos x dx = −x cos x − sin x.
137
Sometimes an iteration gives an equation for the desired primitive.
Example 7.2.4. Compute
Z
e2x sin(3x) dx.
Sol. — We have
Z
e2x sin(3x) dx
=
Z
e2x
2
!0
e2x
3
sin(3x) −
2
2
sin(3x) dx =
!0
Z
e2x cos(3x) dx
=
e2x
3
sin(3x) −
2
2
=
!
Z
3
9
e2x
sin(3x) − cos(3x) −
e2x sin(3x) dx,
2
2
4
e2x
2
Z
cos(3x) dx =
e2x
3 e2x
3
sin(3x) −
cos(3x) +
2
2 2
2
Z
!
e2x sin(3x) dx
and by this
Z
e
2x
!
!
3
2 2x
3
4 e2x
sin(3x) − cos(3x) =
e
sin(3x) − cos(3x) .
sin(3x) dx =
13 2
2
13
2
The second important "technique", after parts, is the change of variable. Let’s start by considering an example, to compute
Z √
e x dx.
It would be natural to set y =
√
x. So
√
Z
e
x
dx
√
y= x, x=y 2
=
Z
??? dy
What have we to write in place of ??? ? Let’s take the question from a general point of view: we want to compute
Z
f (x) dx
and to do this we want to introduce a new variable y = ϕ(x). If this is a real change of variable we expect that x ←→ y,
that y = ϕ(x) is invertible in x = ψ(y). Then
Z
F (x) =
f (x) dx, =⇒ (F (ψ(y))) 0 = F 0 (ψ(y))ψ 0 (y) = f (ψ(y))ψ 0 (y),
that is
F (ψ(y)) =
And because F =
R
f , we obtained the following
Z
f (x) dx y=ϕ (x),
Z
f (ψ(y))ψ 0 (y) dy.
=
Z
f (ψ(y))ψ 0 (y) dy.
x=ψ (y)
This is called change of variable formula. An easy way to remind it is to observe that if
Z
Z
y = ϕ(x), x = ψ(y), =⇒ dx = ψ 0 (y) dy, =⇒
f (x) dx =
f (ψ(y))ψ 0 (y) dy.
(7.2.3)
138
√
√
Let’s see in the initial example how does it works: we set y = x (this means ϕ(x) = x), hence x = y 2 (this means
ψ(y) = y 2 ). Therefore dx = 2y dy, so
!
Z √
Z
Z
Z
√ √
parts
e x dx =
ey 2y dy = 2
y(ey ) 0 dy = 2 ey y −
ey dy = 2ey (y − 1) = 2e x ( x − 1).
Easy to play, isn’t it?
Example 7.2.5. Compute
Z p
e2x − 1 dx.
√
2y
y
Sol. — Set y = e2x − 1, that is y 2 = e2x − 1, 2x = log(1 + y 2 ), x = 12 log(1 + y 2 ). We have dx = 21 1+y 2 dy = 1+y 2 dy, so
√
Z p
Z
Z
Z 2
y= e2x −1, x= 12 log(1+y 2 )
y
y2
y +1−1
e2x − 1 dx
=
y·
dy
=
dy
=
dy
1 + y2
1 + y2
1 + y2
Z
Z
p
p
1
=
1 dy −
dy = y − arctan y = e2x − 1 − arctan e2x − 1.
2
1+y
In some cases the new variable is given directly in the relation x = ψ(y). We don’t need to invert to proceed into the change
of variables because we have dx = ψ 0 (y) dy. Of course to finish the computation, sometimes to know the connection
y = ϕ(x) is necessary. Here’s some interesting examples based upon identities
cos2 + sin2 = 1, cosh2 − sinh2 = 1.
Example 7.2.6. Compute
Z p
1 − x 2 dx.
Sol. — Set x = sin y (that is y = arcsin x). Then dx = cos y dy so
Z p
Z q
Z q
Z
(∗)
1 − x 2 dx =
1 − (sin y) 2 cos y dy =
(cos y) 2 cos y dy =
| cos y| cos y dy =
Now, being y = arcsin x ∈ [− π2 , π2 ], it follows that cos y > 0, so
Z
Z
Z
Z
(∗)
=
(cos y) 2 dy =
(cos y)(sin y) 0 dy = cos y sin y − (− sin y)(sin y) dy = cos y sin y + (1 − (cos y) 2 ) dy,
that is
hence
Z
(cos y) 2 dy =
1
cos y sin y − y ,
2
Z p
1 p
1
1 − x 2 dx = (cos(arcsin x)x − arcsin x) =
x 1 − x 2 − arcsin x .
2
2
Example 7.2.7. Compute
Z p
1 + x 2 dx.
Sol. — Set x = sinh y (that is y = arcsinh x). Then dx = cosh y dy so
Z p
Z q
Z q
Z
Z
2
2
2
1 + x dx =
1 + (sinh y) cosh y dy =
(cosh y) cosh y dy =
| cosh y| cosh y dy =
(cosh y) 2 dy,
139
being cosh y > 1 > 0. Now
Z
Z
Z
(cosh y) 2 dy = cosh y sinh y − (sinh y) 2 dy = cosh y sinh y − (cosh y) 2 − 1 dy
by which we get
Z
(cosh y) 2 dy =
1
cosh y sinh y + y
2
and finally
Z p
1
1 p
x 1 + x 2 + arcsinh x .
1 + x 2 dx = (x cosh(arcsinh x) + arcsinh x) =
2
2
7.3
Primitive of rational functions
This is a class of functions for which there exists an algorithm for the calculus of the primitive of any rational function. This algorithm
is based on few particular cases:
Z
Z
1
αx + β
dx,
dx, (a , 0, n ∈ N).
(ax + b) n
(ax 2 + bx + c) n
We discourage the reader to memorize formulas. It is enough to have clear the method.
Case
R
1
(ax+b) n
dx
This is basically a quasi elementary primitive:
Z
Case
R
1
1
dx =
(ax + b) n
a
αx+β
(ax 2 +bx+c) n
Z







a

dx
=
n

(ax + b)






1
log |ax + b|,
a
n = 1,
1 (ax + b) −n+1
1
1
=−
,
a
−n + 1
a (n − 1)(ax + b) n−1
n > 1.
dx.
We divide this into few steps. Let’s start with the special case
Z
1
dx.
ax 2 + bx + c
Of course we will have three alternatives according that ∆ := b2 − 4ac be > 0, = 0 or < 0.
• ∆ > 0. Then
ax 2 + bx + c = a(x − ξ)(x − η), con ξ , η.
We can write the decomposition
1
1
1
1
A
B
=
=
+
a (x − ξ)(x − η)
a x−ξ x−η
ax 2 + bx + c
!
where A and B can be easily determined in such a way that the identity holds. Imposing it we need

B = −A,



⇐⇒ 


 A= 1 .
ξ−η

It is clear that this system has always a unique solution A, B when ∆ > 0. But then
!
Z
Z
1
1
A
B
1
dx =
dx
=
+
A log |x − ξ | + B log |x − η| .
a
x−ξ x−η
a
ax 2 + bx + c
A + B = 0,



A(x − η) + B(x − ξ) ≡ 1, ⇐⇒ 


 −Aη − Bξ = 1,
140
• ∆ = 0. In this case
Z
Z
1 −1
1
1
dx =
=−
.
a x−ξ
a(x − ξ)
a(x − ξ) 2
R
• ∆ < 0. In this case we have two complexes roots. The idea is to reduce to the special case x 21+1 dx = arctan x. Let’s see how:
we first rewrite our polynomial in the form 1+square. Precisely
!
!
!
b 2
b2
b 2 4ac − b2 +
b
c
c
= a* x +
− 2 + + = a* x +
+
ax 2 + bx + c = a x 2 + 2 x +
.
2a
a
2a
a2a
4a
4a2 ,
,
ax 2 + bx + c = a(x − ξ) 2, =⇒
1
dx =
ax 2 + bx + c
2
b then
Notice that α := 4ac−b
> 0, and if we write β := 2a
4a2
!
x+β 2
ax 2 + bx + c = a (x + β) 2 + α = aα * √
+ 1+ ,
α
,
so
Z
1
1
dx =
aα
ax 2 + bx + c
Z
1
1+
1
2 dx = √
a
α
x+β
√
Z
α
√
1
x+β
1/ α
2 dx = √ arctan √ .
a
α
α
x+β
1+ √
α
Let’s pass to the case
αx + β
dx.
ax 2 + bx + c
This is easily reduced to the previous one. First we adjust the numerator to make the appearance of the derivative of the denominator:
Z
(ax 2 + bx + c) 0 = 2ax + b,
then
Z
2aβ
2ax + b
α
α −b
dx
+
dx
2
2
2a
ax + bx + c
ax + bx + c
!Z
α
α 2a β
1
dx,
=
log |ax 2 + bx + c| +
−b
2a
2a α
ax 2 + bx + c
and now we are reduced to the previous case.
Z
αx + β
dx
ax 2 + bx + c
Example 7.3.1. Compute
=
α
2a
Z
2aβ
2ax + α
α
dx =
2a
ax 2 + bx + c
Z
Z
x+3
dx.
x 2 + 2x + 2
Sol. — Reduce first the numerator to the derivative of the denominator (x 2 + 2x + 2) 0 = 2x + 2. We have
!
Z
Z
Z
Z
1
2x + 6
1
2x + 2
4
x+3
dx
=
dx
=
dx
+
dx
2
2
x 2 + 2x + 2
x 2 + 2x + 2
x 2 + 2x + 2
x 2 + 2x + 2
Z
1
1
= log |x 2 + 2x + 2| + 2
dx.
2
x 2 + 2x + 2
Now: ∆ = 22 − 4 · 1 · 2 = 4 − 8 = −4 < 0, so we have to complete the square. This is easy because
Z
Z
1
1
x 2 + 2x + 2 = x 2 + 2x + 1 + 1 = (x + 1) 2 + 1, =⇒
dx
=
dx = arctan(x + 1),
2
x + 2x + 2
1 + (x + 1) 2
therefore
Z
x2
x+3
1
1
dx = log |x 2 + 2x + 2| + 2 arctan(x + 1) ≡ log(x 2 + 2x + 2) + 2 arctan(x + 1).
2
2
+ 2x + 2
141
It would remain the case
αx + β
dx, n > 2,
(ax 2 + bx + c) n
that can be treated by parts. We omit this case, and we will show how to proceed in one of next examples.
Z
General case
We are now ready for the general case, that is to compute
Z
p(x)
dx, dove p, q polynomials.
q(x)
The first remark is that we can assume that the degree of p is strictly less than the degree of q. Otherwise we can divide p by q obtaining
something like
p(x)
pH(x)
= r (x) +
, with r polynomial and deg pH < deg q.
q(x)
q(x)
Hence, let’s assume that deg p < deg q. Now, any polynomial q can be always rewritten in the following canonical form
q(x) = a(x − x 1 ) m1 · · · (x − x k ) mk (x 2 + b1 x + c1 ) n1 · · · (x 2 + bh x + ch ) nh
where a ∈ R, x j ∈ R and x i , x j , m j ∈ N, b j , c j ∈ R with ∆ j := b2j − 4c < 0, nh ∈ N. Then, the following Hermite decomposition
holds:
mk
nh
m1
n1
X
X
X
Ak, j
Bh, j x + Ch, j +
A1, j
B1, j x + C1, j
1 X
p(x)
/.
= *.
+
.
.
.
+
+
+
.
.
.
+
j
j
2
j
2
q(x)
a
(x − x 1 )
(x − x k )
(x + b1 x + c1 )
(x + bh x + ch ) j
j=1
j=1
j=1
, j=1
R p(x)
In particular, the calculus of q(x) dx is reduced to the particular cases. About the coefficients A, B, Cs they can be determined
imposing that the previous decomposition holds. Let’s see with some examples how this works.
Example 7.3.2. Compute
Z
x2 + 1
dx.
x3 + 1
Sol. — Notice that
x 3 + 1 = (x + 1)(x 2 − x + 1),
and this is just the canonical form being x 2 − x + 1 irreducible. Then the Hermite decomposition is
x2 + 1
x2 + 1
A
Bx + C
+
=
=
.
x + 1 x2 − x + 1
x 3 + 1 (x + 1)(x 2 − x + 1)
To determine A, B, C, first notice that
A
Bx + C
A(x 2 − x + 1) + (Bx + C)(x + 1)
( A + B)x 2 + (B + C − A)x + ( A + C)
+
=
=
,
3
x + 1 x2 − x + 1
x +1
x3 + 1
therefore
A+ B = 1



A
Bx + C
x2 + 1
2
2
 B + C − A = 0,
+ 2
=
,
⇐⇒
(
A
+
B)x
+
(B
+
C
−
A)x
+
(
A
+
C)
=
x
+
1,
⇐⇒

x + 1 x − x + 1 (x + 1)(x 2 − x + 1)

 A + C = 1,
that is A = 31 , B = 23 , C = 23 . By this
Z 2
Z
Z
Z
x +1
1
1
2
x+1
1
1
2x − 1 + 3
dx
=
+
=
log
|x
+
1|
+
dx
3
2
3
x
+
1
3
3
3
x +1
x −x+1
x2 − x + 1
Z
1
1
1
2
= log |x + 1| + log |x − x + 1| +
dx.
3
3
x2 − x + 1
142
About the last, noticed that ∆ < 0, we have to complete the square:
x2 − x + 1 = x −
so
Z
In conclusion:
Z
!
!
!
1
1 2
1 2 3 3 * 2x − 1 2
+ 1+ ,
+1− = x−
+ =
√
2
4
2
4 4,
3
-
1
4
dx =
3
x2 − x + 1
Z
!
2
2x − 1
=
arctan
.
√
√
2
3
3
2x−1
√
1
1+
3
!
x2 + 1
1
2x − 1
1
2
2
dx
=
arctan
.
log
|x
+
1|
+
log
|x
−
x
+
1|
+
√
√
3
3
x3 + 1
3
3
Example 7.3.3. Compute
Z
2x 4 + x 3 + 3x 2 + 1
dx.
x(x 2 + 1) 2
Sol. — The Hermite decomposition is
A Bx + C
Dx + E
2x 4 + x 3 + 3x 2 + 1
= + 2
+ 2
.
x
x(x 2 + 1) 2
x +1
(x + 1) 2
Imposing the identity we get easily the system

















Therefore
Z
2x 4 + x 3 + 3x 2 + 1
dx
x(x 2 + 1) 2
A + D = 2,
E = 1,
2A + B + D = 3, , ⇐⇒
C + E = 0,
A = 1,
=
x+1
1
1
+
−
x x 2 + 1 (x 2 + 1) 2
Z
= log |x| +
Resta da calcolare
R
Z
1
dx.
(x 2 +1) 2
A = 1, B = 0, C = −1, D = 1, E = 1.
!
dx = log |x| +
1
log(x 2 + 1) + 2 arctan x +
2
Z
Z
x
dx + arctan x −
x2 + 1
Z
1 + x2 − x2
dx
(x 2 + 1) 2
x2
dx.
(x 2 + 1) 2
Operando l’artificio visto sopra si ha
1
dx =
2
(x + 1) 2
x2 + 1 − x2
dx =
(x 2 + 1) 2
Z
Z
1
−
2
x +1
Z
x2
dx = arctan x −
(x 2 + 1) 2
Z
x2
dx.
(x 2 + 1) 2
About the last primitive we proceed by parts:
Z
x2
dx
(x 2 + 1) 2
=
Z
=
1 x
− arctan x .
2
2 x +1
x
1
x dx =
2
(x 2 + 1) 2
Gluing with previous computations we get the final answer.
Z
!0
!
Z
1
1
1
1
x
dx
=
x
−
dx
2 x2 + 1
x2 + 1
x2 + 1
143
7.3.1
Some standard changes of variable
Consider a primitive of type
√
n
Z
R(x,
ax + b) dx
p(x,y)
where R is rational (that is R(x, y) = q(x,y) , with p e q polynomials). Then setting
y=
so
√
n
Z
R(x,
yn − b
n
, dx = y n−1 dy,
a
a
ax + b, x =
√
n
ax + b)dx =
Z
R
!
yn − b
n n−1
,y
y
dy,
a
a
is a primitive of a rational function of y as easily seen.
Example 7.3.4. Compute
√
Z
√
Sol. — Set y =
√
x
dx.
x+1
x, x = y 2 , so dx = 2y dy. Hence
!
√
Z 2
Z
Z
Z
Z
y −1+1
1
x
y
2y dy = 2
dy = 2
(y − 1)dy +
dy
dx =
√
y+1
y+1
y+1
x+1
=2
!
!
√
√
(y − 1) 2
( x − 1) 2
+ log |1 + y| = 2
+ log |1 + x| .
2
2
Consider now the case
Z
R(e x )dx,
where again R is a rational function. It is then natural to set y = e x , that is x = log y, dx = y1 dy, and by this it is easy to check that
Z
Z
1
R(e x )dx =
R(y) dy,
y
is the primitive of a rational function.
Example 7.3.5. Compute
Z
cosh x + sinh x
dx.
1 + 2 cosh x
x
−x
x
−x
Sol. — Notice that writing cosh x = e +e
, sinh x = e −e
, we have
2
2
Z
Z e x +e−x e x −e−x
Z
Z
+
cosh x + sinh x
ex
e2x
2
2
dx =
dx
=
dx
=
dx,
x +e−x
e
1
2x
x
1 + 2 cosh x
e + ex + 1
1+2 2
1 + e + ex
which is, clearly, the primitive of a rational function of e x . Setting y = e x , x = log y, dx = y1 dy, we have
Z
Z
Z
Z
e2x
y2
1
y
1
2y + 1 − 1
dy
=
dx
=
dy
=
dy
2
e2x + e x + 1
y2 + y + 1 y
y2 + y + 1
y2 + y + 1
Z
Z
Z
1
2y + 1
1
1
1
1
1
2
=
dy
−
dy
=
log
|y
+
y
+
1|
−
dy.
2
2
2
2
y2 + y + 1
y2 + y + 1
y2 + y + 1
144
Now, ∆ = 12 − 4 · 1 · 1 = −3 < 0, therefore we have to complete the square:
!
!
!
1 2 1
1 2 3 3*
2y + 1 2 +
− +1= y+
+ =
1+
y2 + y + 1 = y +
√
2
4
2
4 4,
3
hence
Z
In conclusion
Z
1
4
dy =
2
3
y +y+1
Z
4
2y + 1
2 dy = √ arctan √ .
3
3
2y+1
√
1
1+
3
2e x + 1
1
2
cosh x + sinh x
.
dx = log(e2x + e x + 1) − √ arctan √
1 + 2 cosh x
2
3
3
Here we consider the case of a primitive of type
Z
R(sin x, cos x)dx,
where again R is a rational function. Here the change of variable is more technically involved being
x
y = tan , x = 2 arctan y.
2
This is because of the following parametric formulas
2
1 − tan x2
2 tan x2
2y
1 − y2
sin x =
=
,
cos
x
=
.
2 =
2
1 + y2
1 + y2
1 + tan x2
1 + tan x2
1 dy hence
Therefore dx = 2 1+y
2
Z
R(sin x, cos x)dx =
Z
!
2y 1 − y 2
2
R
,
dy,
1 + y2 1 + y2 1 + y2
and this is again the primitive of a rational function.
Example 7.3.6. Compute
Z
1
dx.
sin x − cos x
Sol. — Operating with the standard change of variable we get
Z
Z
Z
1
1
2
1
dx =
dy
=
2
dy.
2
2
2y
1−y
sin x − cos x
1+y
2y − 1 + y 2
−
2
2
1+y
1+y
√
√
Because y 2 + 2y − 1 = (y − (−1 − 2))(y − (−1 + 2)) we have
β
1
α
=
√ +
√ ,
2y − 1 + y 2
y+1+ 2 y+1− 2
where α = − √1 and β = √1 . Hence
2
Z
2
1
dx
sin x − cos x
=2
Z
√
√
√
√
√
√
1
|y + 1 − 2|
dy
=
−
2
log
|y
+
1
+
2|
+
2
log
|y
+
1
−
2|
=
2
log
√
2y − 1 + y 2
|y + 1 + 2|
√
| tan
= 2 log
| tan
x
2
x
2
√
+ 1 − 2|
√ .
+ 1 + 2|
(7.3.1)
145
7.4
Exercises
Exercise 7.4.1.
Z √
1 + tan x
dx,
(cos x) 2
1.
Z
3.
Z
5.
Z
7.
Z
9.
Z
11.
Z
13.
Z
15.
g
1 x2
2e
1
− 3(x−cos
x) 3
log |x 2 + 3x + 4|
g
8.
− 12 e−2x+5
g
10.
2sin x
log 2
log(e x + 1)
2
3
2
xe x dx,
1 + sin x
dx,
(x − cos x) 4
2x + 3
dx,
x 2 + 3x + 4
f
f
e−2x+5 dx,
2sin x cos x dx,
ex
dx,
ex + 1
1
dx,
x(log x) 2
Z
dx,
q
2.
e x − e−x
dx,
e x + e−x
Z
Z
e x arctan e x
dx,
1 + e2x
Z
x x x(2 log x + 1)
dx,
p
2
1 − xx
6.
Z
(cos x) 3 dx,
Z
(sinh x) 3 dx,
Z √
arcsin(log x)
x2
2
− log | cos x|
log(e x + e−x )
f
1 (arctan e x ) 2
2
g
arcsin x x
sin x − 31 (sin x) 3
g
2
12.
tan x dx,
4.
− log1 x
1
17.
Z
(1 + tan x) 3
f p
14.
Z
f
f
1
3
3 (cosh x)
− cosh x
g
f √
g
2 x − 12 (log x) 2
x − log x
dx,
x
sin x − cos x
dx,
sin x + cos x
16.
− log | sin x + cos x|
x 1 − (log x) 2
Exercise 7.4.2 (by parts).
Z
1.
x log x dx,
Z
3.
f
arcsin x dx,
Z
5.
Z
7.
f
13.
1
2 (x
√
f
sin(log x) dx,
eαx cos( βx) dx,
Z p
a2
−
− sin x cos x)
x2
x
2
e α x (α cos( βx)
α2 +β 2
√
dx,
sin log x − cos log x
g
x
2
a2
−
x2
g
+ β sin( βx))
+
a2
2
arcsin
x
a
Z
x α log x dx, (α , −1),
Z
xe x dx,
2.
f √ √
g
2e x x − 1
e x dx,
Z
11.
g
√
x arcsin x + 1 − x 2
(sin x) 2 dx,
Z
9.
2
log x − x4
4.
f
arctan x dx,
Z
Z
10.
14.
(log x) 2
dx,
x2
x arctan x − 12 log(1 + x 2 )
g
−
√
arcsin x dx,
f
Z
p
log(x + 1 + x 2 ) dx,
f
f
Z
12.
1
log x − α+1
xe x − x
x arcsin x
dx,
√
1 − x2
8.
Z
6.
x α+1
α+1
g
√
x − 1 − x 2 arcsin x
(log x) 2
x
log x
− 2 x − x2
g
√
√
x − 12 arcsin x + 12 1 − x 2
g
√
√
x log(x + 1 + x 2 ) − 1 + x 2
146
Exercise 7.4.3 (by change of variable).
Z
1
1.
dx,
√
x 2x − 1
f
2.
√ g
f √
2( x + e x )
4.
g
√
2 log | x + 1|
6.
√
Z
3.
Z
5.
Z √
g
√
2 arctan 2x − 1
1+e x
dx,
√
x
1
√ dx,
x+ x
f
Z
f √
g
√
2 e x − 1 − arctan e x − 1
e x − 1 dx,
√
arcsin x dx,
xe2x
Z
√
e2x
dx,
−1
f
f
g
√
√
x − 12 arcsin x + 12 1 − x 2
g
√
√
(x − 1) e2x − 1 + arctan e2x − 1
√
Z
7.
e x log(1 + e x ) dx,
Exercise 7.4.4 (by change of variable).
Z p
a2 − x 2 dx.
1.
Z
5
√
2.
Z p
Z
1
x2
−
Z
x
(e − 1) log(1 + e x ) − e x
a2
6.
dx
Exercise 7.4.5.
a2 + x 2 dx.
3.
1
dx.
√ √
x 2x + 1
7.
5x + 9
dx,
x 2 + 2x + 3
Z
1.
Z
2.
x2
8.
3x − 1
dx,
− 2x + 5
5
2
Z p
Z
4.
x2
Z
4.
1
dx.
x(2 − x)
8.
Z
√
√
Z
√ g
√
x 3 − 2 x + 2 arctan x
1
x2
+ a2
dx.
1
dx.
√
3 + 2x − x 2
√
√
log(x 2 + 2x + 3) + 2 2 arctan x+1
2
f
3
2
log(x 2 − 2x + 5) + arctan x−1
2
f
x2
dx,
− 7x + 10
2
3
x 2 − a2 dx.
x 2 − 5x + 9
dx,
x 2 − 5x + 6
Z
3.
f √
x3
dx,
1+x
f
g
g
x + 3 log x−3
x−2 4
x + 25
3 log |x − 5| − 3 log |x − 2|
g
Exercise 7.4.6.
Z
1.
Z
2.
15x 2 − 4x − 81
dx,
(x − 3)(x + 4)(x − 1)
x6
Z
4.
x
(x
+ 2) 2 (x
(x 2
− 1)
x
2 log |x − 1| − log |x| − (x−1)
2
f
x−1
dx,
x 2 (x 2 + x + 1)
Z
6.
1
dx,
+ x4
Z
5.
3 log |x − 3| + 5 log |x + 4| + 7 log |x − 1|
x3 + 1
dx,
x(x − 1) 3
Z
3.
x2 + 2
dx,
+ x + 1)(x + 1) 2
g
√
3 log |x| − 32 log(x 2 + x + 1) − √1 arctan 2x+1
f
arctan x + x1 − 3x1 3
3
dx,
3
1 + 1 log |x − 1|
− 12 log |x + 2| − 23 x+2
9
g
√
3 − 3 arctan 2x+1
√
log |x + 1| − 12 log(x 2 + x + 1) − x+1
3
147
Exercise 7.4.7.
Z
1.
√
Z
1
dx,
√
x + 2x + 3
√
Z
1+ x+1
3.
dx,
√
x+1−1
r
Z
1 1−x
4.
dx,
x 1+x
2.
Z
r
5.
Exercise 7.4.8.
g
f √
√
√
2 x − 4 4 x + log( 4 x + 1)
1
√ dx,
x + 4x
x
1−x
f
cosh x + sinh x
dx,
1 + 2 cosh x
Z
3.
Exercise 7.4.9.
2 arctan
1
2
Z
Z
2.
x
log(e2x + e x + 1) − 13 arctan 2e√ +1
2 tanh x2 +1
√
3
1 e2x
2 1+e2x
3
√2
3
arctan
f
sin x
dx,
1 + sin x
2 + cos x
dx,
(1 + 2 cos x) sin x
Z
5 + cos x
dx,
(5 + 3 cos x) cos x
f
1+x+ 1−x
1
dx,
sin x
Z
3.
+ log √1+x−√1−x
+ 12 arctan e x
√
√
1−x
1+x
q
q
x − 3 arctan
x
(3 − x) 1−x
1−x
1
dx,
sinh x + 2 cosh x
1.
4.
q
Z
g
g
√
√
x + 1 + 4 x + 1 + 4 log( x + 1 − 1)
dx,
ex
,
(1 + e2x ) 2
2.
f
!3
Z
1.
√
√
= 12 log | 2x + 3 − 1| + 32 log( 2x + 3 + 3)
1+tan
log 1−tan
x
2
x
2
log tan
2
x + 1+tan
x 2
g
x
2
g
sin x log 1+2
cos x 1
x
− arctan 2 tan 2
148
Chapter 8
Riemann Integral
The main goal of integration theory is to find a general method to compute areas of plane figures. To fix ideas, let’s
consider a function f : [a, b] −→ [0, +∞[. Suppose we wish to measure the area of the region delimited above by the the
graph of f and below by the x−axis, that is the ares of the set (called trapezoid)
Trap( f ) := {(x, y) : a 6 x 6 b, 0 6 y 6 f (x)} .
AHf L
a
b
The main idea is quite natural and goes back to Archimede: to fill the trapezoid by rectangles. The formal definition is,
however, a little bit complex. This makes proofs of the properties of the area not very easy and because their comprehension
wont affect our goals we will just skip.
8.1
Definition of integrable function
Let’s begin with the
Definition 8.1.1 (subdivision). A subdivision of an interval [a, b] is a finite set of points π := {x 0, x 1, . . . , x N } such that
a = x 0 < x 1 < . . . < x N = b.
We pose
|π| :=
max
k=0,..., N −1
|x k+1 − x k |.
The set of all the subdivisions of [a, b] will be denoted by Π[a, b].
149
150
Definition 8.1.2 (inferior and superior sum). Let f : [a, b] −→ R be a bounded function. If π ∈ Π[a, b] we call inferior
sum of f over π the quantity
N
−1
X
mk (x k+1 − x k ), mk :=
inf
f (x).
S(π) :=
x ∈[x k , x k+1 ]
k=0
Similarly, the superior sum of f over π is
S(π) :=
N
−1
X
Mk (x k+1 − x k ), Mk :=
k=0
a=x0
x1
x2
xk
xk+1
xN-1
b=xN
a=x0
x1
sup
x ∈[x k , x k+1 ]
x2
xk
f (x).
xk+1
xN-1
b=xN
Figure 8.1: At left, the area of an inferior sum, at right the same for a superior sum
Proposition 8.1.3. Let f : [a, b] −→ R be a bounded function. Then
i) S(π) 6 S(π), ∀π ∈ Π[a, b];
ii) S(π1 ) 6 S(π2 ), S(π2 ) 6 S(π1 ), ∀π1, π2 ∈ Π[a, b] with π1 ⊂ π2 .
Proof. — Exercise.
For a positive function, an inferior sum represents an approximation by defect of the area of Trap( f ) while a superior sum
represents an approximation by excess of the same quantity. The next step is to define the best approximations by defect
and by excess of the area of Trap( f ):
Definition 8.1.4. Let f : [a, b] −→ R be bounded. We call inferior area and superior area
A( f ) :=
sup
π ∈Π[a,b]
S(π), A( f ) :=
inf
π ∈Π[a,b]
S(π).
Proposition 8.1.5. Let f : [a, b] −→ R be bounded. Then
−∞ 6 A( f ) 6 A( f ) 6 +∞.
(8.1.1)
Proof. — Exercise.
When the best approximation by defect coincides with the bast approximation by excess we call the common value integral
of f :
151
Definition 8.1.6 (Riemann integral). Let f : [a, b] −→ R be a bounded function. We say that f is Riemann integrable
on [a, b] if A( f ) = A( f ) ∈ R. In that case we set
Z b
f (x) dx := A( f ) ≡ A( f ).
a
The set of Riemann integrable functions over [a, b] is denoted by R ([a, b]).
Example 8.1.7. Constants are integrable and
b
Z
C dx = C(b − a).
a
Sol. — Indeed, in this case
S(π) =
X
mk (x k+1 − x k ) =
k
X
C(x k+1 − x k ) = C(b − a),
k
and simlarly S(π) = C(b − a), hence A( f ) = A( f ) = C(b − a).
8.2
Classes of integrable functions
Not every bounded function is integrable:
Example 8.2.1 (Dirichlet function). The function


 0,
f (x) := 

 1,

x ∈ Q ∩ [0, 1],
x ∈ Qc ∩ [0, 1]
is not integrable on [0, 1].
Sol. — Let π = {x 0, x 1, . . . , x N } be a subdivision of [0, 1]. Because of the density of rationals/irrationals in the real line
mk =
Therefore
S(π) =
N
−1
X
inf
x ∈[x k , x k+1 ]
f (x) = 0, Mk :=
mk (x k+1 − x k ) = 0, S(π) =
k=0
N
−1
X
sup
x ∈[x k , x k+1 ]
f (x) = 1.
Mk (x k+1 − x k ) =
k=0
N
−1
X
(x k+1 − x k ) = 1.
k=0
Hence
A( f ) =
sup
π ∈Π[0,1]
S(π) = 0, A( f ) =
inf
π ∈Π[0,1]
S(π) = 1.
It is not easy to furnish a characterization of an integrable function. It is however possible to find good sufficient conditions
that allow a function to be Riemann integrable. A first important example is given by the
Theorem 8.2.2. Let f : [a, b] −→ R be monotone. Then f ∈ R ([a, b]). Moreover, if (πn ) ⊂ Π[a, b] is a sequence of
subdivisions such that |πn | −→ 0 then
b
Z
a
f (x) dx = lim
n→+∞
NX
n −1
k=0
n
− x kn ).
f (x kn )(x k+1
(8.2.1)
152
Proof. — Suppose, for instance, f %. If π is a subdivision, then
mk :=
inf
x ∈[x k ,x k+1 ]
hence
S(π) =
N
−1
X
f (x) = f (x k ), Mk :=
f (x k )(x k+1 − x k ), S(π) =
sup
x ∈[x k , x k+1 ]
N
−1
X
k=0
f (x) = f (x k+1 ),
f (x k+1 )(x k+1 − x k ).
k=0
Recalling the defs of A( f ), A( f ), we have
0 6 A( f ) − A( f )
6 S(π) − S(π) 6
N
−1
X
N
−1
X
f (x k+1 ) − f (x k ) (x k+1 − x k ) 6 |π|
f (x k+1 ) − f (x k )
k=0
k=0
= |π| f (b) − f (a) .
Now, taking a subdivision with |π| small (for instance, dividing [a, b] in equal N parts, |π| = b−a
N ), we have that A( f ) = A( f ). Finally,
the (8.2.1) is nothing but S(πn ).
Example 8.2.3 (Archimede).
b
Z
x 2 dx =
0
b3
, ∀b > 0.
3
Sol. — Let f (x) := x 2 , x ∈ [0, b]. f is increasing hence integrable. By the (8.2.1) and taking the subdivision of [0, b] that divides it in
n equal parts, that is
b
x k = k , k = 0, 1, . . . , n,
n
we have
Z b
n−1
n−1
X b !2 b
b3 X 2
b3 (n − 1)n(2(n − 1) + 1)
b3
2
k
x dx = lim
= lim 3
=
.
k = lim 3
n→+∞
n→+∞ n
n n n→+∞ n
6
3
0
k=0
k=0
Example 8.2.4.
b
Z
e x dx = eb − 1, ∀b > 0.
0
Sol. — Let f (x) := e x x ∈ [0, b], increasing hence integrable. Proceeding as in the previous example we get
Z b
0
n−1
n−1
b X kb
b X b/n k
b 1 − eb
1 − eb
e n = lim
e
= lim
= lim
= e b − 1.
b/n
n→+∞ n
n→+∞ n
n→+∞ 1−e b/n
n→+∞ n 1 − e
k=0
k=0
e x dx = lim
b/n
The class of monotone functions is wide but of course not every function is monotone. Another interesting class is the
following:
Theorem 8.2.5. If f ∈ C ([a, b]) then f ∈ R ([a, b]).
Proof. — We will prove the statement with the slightly more restrictive assumption that f ∈ C 1 ([a, b]), that is f , f 0 ∈ C ([a, b]). Let
π ∈ Π[a, b] be a subdivision of [a, b]. We have
0 6 A( f ) − A( f ) 6 S(π) − S(π) =
N
−1
X
k=0
Mk (x k+1 − x k ) −
N
−1
X
k=0
mk (x k+1 − x k ) =
N
−1
X
k=0
(Mk − mk )(x k+1 − x k ).
153
By Weierstrass theorem, there exist ξk , η k ∈ [x k , x k+1 ] such that
Mk =
sup
x ∈[x k , x k+1 ]
f (x) =
max
x ∈[x k , x k+1 ]
f (x) = f (η k ), mk =
hence
0 6 A( f ) − A( f ) 6
N
−1
X
inf
x ∈[x k ,x k+1 ]
f (x) =
min
x ∈[x k , x k+1 ]
f (x) = f (ξk ),
f (η k ) − f (ξk ) (x k+1 − x k ).
k=0
By Lagrange formula
f (η k ) − f (ξk ) = f 0 (ck )(η k − ξk ) 6 K |π|,
where K := max[a,b]
| f 0|
(finite thanks to Weierstrass theorem). We conclude that
0 6 A( f ) − A( f ) 6 K |π|
N
−1
X
(x k+1 − x k ) = K |π|(b − a),
k=0
that is small as we wish taking |π| small, hence A( f ) = A( f ).
8.3
Properties of the Riemann Integral
The main natural properties of the Riemann integral are summarized by the
Proposition 8.3.1. The following properties hold:
i) (linearity) if f , g ∈ R ([a, b]) then α f + βg ∈ R ([a, b]), ∀α, β ∈ R and
Z b
Z b
Z
(α f (x) + βg(x)) dx = α
f (x) dx + β
a
a
In particular: if f > 0 on [a, b] then
R
b
a
g(x) dx.
b
(8.3.2)
g(x) dx.
a
f (x) dx > 0.
iii) (triangular inequality) if f ∈ R ([a, b]) then | f | ∈ R ([a, b]) and
Z b
Z b
f (x) dx 6
| f (x)| dx.
a
a
iv) (decomposition) if f ∈ R ([a, b]), R ([b, c]) then f ∈ R ([a, c]) and
Z c
Z b
Z
f (x) dx =
f (x) dx +
a
a
(8.3.3)
c
f (x) dx.
b
v) (restriction) if f ∈ R ([a, b]) then f ∈ R ([c, d]) for every [c, d] ⊂ [a, b] and
d
Z
c
f (x) dx =
b
Z
a
(8.3.1)
a
ii) (isotonicity) if f , g ∈ R ([a, b]) and f 6 g on [a, b] then
Z b
Z
f (x) dx 6
a
b


 1,
f (x) χ[c,d] (x) dx, where χ[c,d] (x) = 


 0,
x ∈ [c, d],
x < [c, d].
154
vi) if f , g ∈ R ([a, b]) athen f g ∈ R ([a, b]) ( 1 )
vii) if f ∈ R ([a, b]) then setting f + := max{ f , 0} and f − := max{− f , 0} (called positive part and negative part of f )
we have f +, f − ∈ R ([a, b]).
For future convenience, we extend the operation of integral in the following way:
Definition 8.3.2. If f ∈ R ([a, b]) we set
a
Z
b
Z
f (x) dx := −
f (x) dx.
b
a
A very important property of the Riemann integral is the
Theorem 8.3.3 (mean value theorem). Let f ∈ C ([a, b]). There exists then c ∈ [a, b] such that
Z b
f (x) dx = f (c)(b − a).
a
The quantity
1
b−a
R
b
a
f (x) dx is called integral mean of f over [a, b].
Proof. — The proof is very easy: by Weierstrass thm f has min/max over [a, b], let’s say
m = min f (x), M = max f (x).
x ∈[a,b]
x ∈[a,b]
Then
m 6 f (x) 6 M, ∀x ∈ [a, b],
that is
Z b
isotonicity
=⇒
a
Z b
m dx 6
Z b
m(b − a) 6
a
f (x) dx 6 M (b − a), ⇐⇒ m 6
a
1
b−a
Z b
f (x) dx 6
M dx,
a
Z b
a
f (x) dx 6 M.
By the intermediate values thm there exists c ∈ [a, b] such that
f (c) =
8.4
1
b−a
Z b
f (x) dx.
a
Fundamental Theorem of Integral Calculus
In this section we will present the most important and deep result concerning Riemann integrals. Let’s start by the
Definition 8.4.1. Let f ∈ R ([a, b]) and c ∈ [a, b] be fixed. We call integral function of f centered at c the function
Z x
Fc : [a, b] −→ R, Fc (x) :=
f (y) dy, x ∈ [a, b].
c
Notice that if f ∈ R ([a, b]) then (restriction) f ∈ R ([c, x]) for every x ∈ [a, b] hence the integral function is properly
defined. An integral function is in a sense a natural object that measures the area (with sign) choosing c as point of area
0. To understand the connection between Fc and f let’sRconsider a function (in blue in the next picture) and its integral
c
function centered at some point c. First of all Fc (c) = c f (y) dy = 0. Moreover, increasing x from c, being initially
f > 0 we’ll have Fc %.
1But be careful! It is not true that
R
b
a
f (x)g(x) dx =
R
b
a
f (x) dx
R
b
a
g(x) dx .
155
f HxL
x
Fc HxL
à f HyLdy
c
c
x
c
As soon as f becomes negative, Fc starts to decrease, while when f returns to be > 0 Fc return to increase. In other
words:
Fc % ⇐⇒ f > 0.
Hence f works as Fc0 . It turns out that f = Fc0 :
Theorem 8.4.2. Let f ∈ C ([a, b]). Then, any of its integral functions Fc is a primitive of f , that is
Fc0 (x) = f (x), ∀x ∈ [a, b], ∀c.
Proof. — Let h > 0 and notice that
Fc (x + h) − Fc (x)
h
Z
Z x
Z x
Z x+h
Z x
1 * x+h
1
f (y) dy −
f (y) dy + = *
f (y) dy +
f (y) dy −
f (y) dy +
h, c
c
x
c
- h, c
Z x+h
1
=
f (y) dy.
h x
=
This is an integral mean (of f over [x, x + h]): being f continuous, by the mean value theorem there exists ξh ∈ [x, x + h] such that
Z x+h
1
Fc (x + h) − Fc (x)
=
f (y) dy = f (ξh ).
h
h x
As h −→ 0+, ξh −→ x+ hence, being f continuous,
lim
h→0+
Fc (x + h) − Fc (x)
= lim f (ξh ) = f (x).
h
h→0+
This shows that the right derivative of Fc at x is f (x). Similarly we can proceed with the left derivative hence to conclude.
By the fundamental theorem of integral calculus it follows a formula that connects the calculus of an integral with the
calculus of primitives:
Corollary 8.4.3 (fundamental formula of integral calculus). Let f ∈ C ([a, b]) and F be any primitive of f on [a, b]. Then
Z b
x=b
f (x) dx = F (b) − F (a) =: F (x) .
(8.4.1)
x=a
a
Proof. — Indeed: take the integral function Fa . By the fundamental theorem of integral calculus, Fa0 = f . Being F also a primitive
on [a, b], Fa − F is constant by Proposition 7.0.2, that is Fa − F ≡ k ∈ R. Therefore
Fa (a) − F (a) = k, Fa (b) − F (b) = k.
On the other hand
Fa (a) =
so
Z b
a
Z a
a
f (x) dx = 0, Fa (b) =
Z b
f (x) dx,
a
f (x) dx = Fa (b) = F (b) + k = F (b) + (Fa (a) − F (a)) = F (b) − F (a).
156
Example 8.4.4. Compute
log 4
Z
0
√
√
ex − 1
dx.
8e−x + 1
x
Sol. — Let f (x) := 8ee−x−1
+1 . Clearly f ∈ C ([0, +∞[) hence f ∈ C ([0, log 4]) ⊂ R ([0, log 4]). To compute the integral let’s apply the
fundamental formula. To this aim let’s compute a primitive for f :
Z √ x
Z
Z
Z 2
2y
e −1 y
2y 2
y +9−9
√
dx
=
dy
=
dy
=
2
dy
8e−x + 1 y= e x −1, x=log(1+y 2 )
y2 + 9
y2 + 9
8 1 2 + 1 1 + y2
1+y
"Z
=2
Z
dy − 9


√
#
Z
x
√


1
y
1
x − 1 − 6 arctan e − 1 .
dy
=
2
=
2
y
−
3
arctan
e
dy
=
2
y
−


y 2
3
3

y2 + 9
+ 1 
3


Quindi, per la formula fondamentale del calcolo integrale, avremo che
√
Z log 4 √ x
x=log 4
√
√
e −1
e x − 1 x
= 2 3 − π.
−x + 1 dx = 2 e − 1 − 6 arctan
8e
3
0
x=0
8.5
Integration formulas
Another way to review the (8.4.1) is the following: if f ∈ C 1 ([a, b]) then
b
Z
f 0 (x) dx = f (b) − f (a).
(8.5.1)
a
By this some remarkable formulas follow:
Proposition 8.5.1 (integration by parts). Let f , g ∈ C 1 ([a, b]). Then
b
Z
a
x=b
f (x)g 0 (x) dx = f (x)g(x) −
x=a
b
Z
f 0 (x)g(x) dx.
(8.5.2)
a
Proof. — Just notice that
( f g) 0 (x) = f 0 (x)g(x) + f (x)g 0 (x), =⇒
Z b
a
f g 0 (x) dx =
Z b
a
f 0 (x)g(x) + f (x)g 0 (x) dx.
By the (8.5.1) we have
Z b
a
x=b
f g 0 (x) dx = f (x)g(x) ,
x=a
and now the conclusion follows.
Proposition 8.5.2 (change of variable). Let f ∈ C ([a, b]), ψ : [c, d] −→ [a, b], ψ ∈ C 1 , bijection with inverse ψ −1 ∈ C 1 .
Then
Z b
Z d
f (x) dx =
f (ψ(y))|ψ 0 (y)| dy.
(8.5.3)
a
c
157
Proof. — First, notice that ψ 0 > 0 or ψ 0 < 0 on [c, d]. Indeed: if ψ 0 would change sign, then by continuity it should vanish somewhere.
But being
ψ −1 (ψ(y)) = y, ∀y ∈ [c, d], =⇒ (ψ −1 ) 0 (ψ(y))ψ 0 (y) = 1, ∀y ∈ [c, d]
this is impossible. Suppose then that ψ 0 > 0 on [c, d], in particular ψ be strictly increasing. Then, if F is a primitive of f
Z d
f (ψ(y))|ψ 0 (y)| dy
=
c
Z d
c
f (ψ(y))ψ 0 (y) dy =
Z d
F 0 (ψ(y))ψ 0 (y) dy =
c
= F (ψ(d)) − F (ψ(c)) = F (b) − F (a) =
Z b
a
Z d
F (ψ(y)) 0 dy
c
F 0 (x) dx =
Z b
f (x) dx.
a
Example 8.5.3 (Area of a disk). Compute the area of a disk of radius r.
Sol. — Clearly
Z rp
Area x 2 + y 2 6 r 2 = 2
r 2 − x 2 dx.
−r
√
The function f (x) := r 2 − x 2 is continuous on [−r, r], hence integrable. We have
Z 1q
Z rp
Z rr
Z 1q
x 2
y:= rx , x=r y=:ψ(y)
dx
=
r
r 2 − x 2 dx = r
1−
1 − y 2 r dy = r 2
1 − y 2 dy.
r
−1
−r
−r
−1
Setting y = sin θ =: ψ(θ), ψ is a change of variable C 1 and increasing between [− π2 , π2 ] and [−1, 1]. Therefore
Z 1q
−1
1 − y 2 dy =
π
2
Z
q
− π2
1 − (sin θ) 2 cos θ dθ =
π
2
Z
| cos θ| cos θ dθ =
− π2
Z
π
2
− π2
(cos θ) 2 dθ.
Integrating by parts
Z
π
2
− π2
(cos θ) 2 dθ
=
Z
=
Z
π
2
− π2
π
2
− π2
θ= π
(cos θ)(sin θ) 0 dθ = [(cos θ)(sin θ)]θ=−2 π −
2
1 − (cos θ) 2 dθ = π −
whence
π
2
Z
2
− π2
Z
π
2
− π2
Z
π
2
− π2
(− sin θ)(sin θ) dθ =
Z
π
2
− π2
(sin θ) 2 dθ
(cos θ) 2 dθ
(cos θ) 2 dθ = π, ⇐⇒
π
2
Z
− π2
(cos θ) 2 dθ =
π
.
2
In conclusion, Area(x 2 + y 2 6 r 2 ) = 2 · π2 r 2 = πr 2 .
8.6
Generalized integrals
For many applications the restrictions of the Riemann integral are too strong. In particular, the definition demands to
consider bounded functions on closed and bounded sets (intervals [a, b]). This excludes two important cases:
R +∞
• the case when the integration interval is unbounded as, for instance, for 0 e−x dx;
• the case when the function is unbounded as, for instance, for
R
1 1
√
0
x
dx.
158
Let’s consider first the problem to give a sense to
R
+∞
a
f (x) dx. It is natural to consider the partial area
R
Z
f (x) dx
a
and send R −→ +∞. If the limit exists we will say that the integral
R
+∞
a
f (x) dx exists.
Definition 8.6.1. Let f ∈ C ([a, +∞[). We pose
+∞
Z
R
Z
f (x) dx := lim
R→+∞
a
f (x) dx
a
if the limit exists finite. In this case we say that f is integrable in generalized sense on [a, +∞[.
RR
Notice that being f ∈ C ([a, +∞[) in particular f ∈ C ([a, R]) for any R > a, hence the integral a f exists (and it can be
eventually computed by the method of the fundamental formula of integral calculus). Of course an analogous definition
Rb
holds for −∞ f (x) dx. Let’s see some examples.
a
R
Figure 8.2: In red
Example 8.6.2.
+∞
Z
∃
a
R
R
a
f (x) dx.
1
dx, ∀a > 0, ⇐⇒ α > 1.
xα
Sol. — Of course f (x) := x1α ∈ C (]0, +∞[) so f ∈ C ([a, +∞[) for any a > 0. If R > a,
x −α+1 x=R = R −α+1 − a−α+1 ,


Z R
Z R

−α+1 x=a
−α+1
−α+1


1
−α
dx =
x dx = 
α


a x
a

 log x x=R = log R − log a,

x=a
By this is evident that
a1−α ,

Z R


α−1

1

lim
dx = 

R→+∞ a x α
 +∞,

Similarly we have that
R
b
1
−∞ |x | α
α > 1,
α 6 1.
dx < +∞ iff α > 1 (exercise).
Example 8.6.3.
+∞
Z
∃
a
eβx dx, ⇐⇒ β < 0.
α , 1,
α = 1.
159
Sol. — Here f (x) := eβx is continuous on R, therefore on [a, +∞[. Moreover
Z R
e
a
βx
e β x x=R = e β R − e β a ,



β x=a
β
β

dx = 



 R − a,
β , 0,
β = 0.
RR
It is now clear that lim R→+∞ 0 eβx dx ∈ R iff β < 0. In such case
!
Z +∞
eβa
eβR eβa
=−
−
.
eβx dx = lim
β
β
β
R→+∞
a
Similarly
R
b βx
e
−∞
dx < +∞ iff β > 0 (exercise).
Definition 8.6.4. Let f ∈ C(R). We pose
Z +∞
Z
a
f (x) dx :=
f (x) dx +
f (x) dx,
−∞
−∞
+∞
Z
a
if both generalized integrals exist (it is easy to check that this doesn’t depend on a).
Let’s consider now the case of an unbounded function f :]a, b] −→ R, for instance f (a+) = +∞.
Definition 8.6.5. Let f ∈ C (]a, b]). We pose
Z b
b
Z
f (x) dx := lim
r→a+
a
f (x) dx
r
if the limit exists finite. In this case we say that f is integrable in generalized sense on ]a, b].
a
r
Figure 8.3: In red the integral
b
Rb
r
f (x) dx.
Rb
Notice that in the previous Definition we used the same notation of the Riemann integral, namely a f (x) dx, to define
the generalized integral. This could potentially lead to some confusion if both are defined. There’s no confusion at all: it
turns out that
Proposition 8.6.6. If f ∈ C ([a, b]) then f is integrable in generalized sense on ]a, b] and the generalized integral coincides
with the Riemann integral.
Proof. — Let F be a primitive of f (there exists such a primitive in virtue of the fundamental theorem of integral calculus: any integral
function of f is a primitive indeed!). Then
Z b
f (x) dx = F (b) − F (r).
r
But F, being primitive, is derivable, hence continuous. Therefore
Z b
Z b
f (x) dx = F (b) − F (r) −→ F (b) − F (a) =
f (x) dx.
r
a
160
Example 8.6.7.
b
Z
∃
a
1
dx, ⇐⇒ α < 1.
(x − a) α
1
Sol. — Indeed f (x) := (x−a)
α ∈ C (]a, b]). Moreover
Z b
r
−α+1
−α+1
(x−a) −α+1 x=b


− (r−a)
,
= (b−a)

−α+1
−α+1
−α+1

x=r

1

dx
=

(x − a) α


 log(x − a) x=b = log(b − a) − log(r − a),

x=r
α , 1,
α = 1.
By this it is immediate to deduce that
Z b
lim
r→a+
A similar definition holds for
R
b
a
r
(b−a) −α+1




−α+1 ,
1

dx = 
α
(x − a)

 +∞,

b
c
Z
f (x) dx :=
8.7
c
a
and
R
b
c
f (x) dx +
a
a
R
α > 1.
f (x) dx with f ∈ C ([a, b[). Finally, if f ∈ C (]a, b[) we pose
Z
if both generalized integrals
α < 1,
b
Z
f (x) dx.
c
exist. It is easy to check that this definition doesn’t depend on c ∈]a, b[.
Convergence criteria for generalized integrals
Differently
R +∞than Riemann integrals, the only knowledge that f ∈ C ([a, +∞[) is not enough to say that the generalized
integral a f converges. Therefore the question of convergence is a non trivial problem. Unfortunately it is not always
RR
possible to apply the definition because it is impossible to compute explicitly the integral a f . This situation reminds
the case of numerical series, where to discuss the convergence of the sum
X
an
n
we look for some test able to say if the sum will converge or less without computing explicitly the sum. The analogy with
series is actually striking because, as we will see here, most of the results valid for series find a suitable translation for
generalized integrals.
8.7.1
Constant sign integrand
As for series we start considering the case of generalized integrals for constant sign functions. We will treat here the case
f > 0, being evident that suitable results hold for f 6 0.
Proposition 8.7.1. Let f ∈ C ([a, +∞[) be such that f > 0 on [a, +∞[. Then
R
Z
∃ lim
R→+∞
f (x) dx ∈ R ∪ {+∞}.
a
161
R]
Proof. — It is enough to notice that a f (x) dx % because if R2 > R1 then
Z R2
Z R1
Z R2
Z R1
isotonicity
f (x) dx =
f (x) dx +
f (x) dx
>
f (x) dx.
a
a
R1
a
RR
So the integral function R 7−→ a f (x) dx is increasing and the conclusion follows by limits of monotone functions.
Therefore to say if the generalized integral is convergent it is enough to show that it cannot be +∞.
Theorem 8.7.2 (comparison). Let f , g ∈ C ([a, +∞[) be such that
0 6 f (x) 6 g(x), ∀x ∈ [a, +∞[.
Then
+∞
Z
∃
g(x) dx ∈ R, =⇒ ∃
+∞
Z
f (x) dx ∈ R.
a
a
We call g an integrable dominant for f on [a, +∞[.
Proof. — By isotonicity,
Z R
Z R
f (x) dx 6
a
so
Z +∞
a
f (x) dx = lim
R→+∞
Z R
g(x) dx,
a
Z R
f (x) dx 6 lim
R→+∞
a
a
g(x) dx =
Z +∞
a
g(x) dx < +∞.
Example 8.7.3. Discuss convergence for the Gauss integral
Z +∞
2
e−x dx
−∞
R +∞
2
2
Sol. — We study convergence of 0 e−x dx. Clearly f (x) := e−x ∈ C ([0, +∞[) and f > 0. It is evident that
2
e−x 6 e−x , x > a,
for some a. Indeed:
2
2
e−x 6 e−x , ⇐⇒ e x −x > 1, ⇐⇒ x 2 − x > 0, ⇐⇒ x 6 0, ∨ x > 1.
R +∞
R1
2
2
Therefore a = 1 works. On [1, +∞[ g(x) = e−x is a integrable dominant, hence 1 e−x dx < +∞ and because of course 0 e−x dx
R +∞
2
we conclude that 0 e−x dx exists.
y
2
y=e-x
y=e-x
1
More flexible for applications is the
x
162
Corollary 8.7.4 (asymptotic comparison test). Let f , g ∈ C ([a, +∞[), f , g > 0 on [a, +∞[. Then, if f ∼+∞ g we have
Z +∞
Z +∞
∃
f (x) dx ⇐⇒ ∃
g(x) dx.
a
a
Proof. — Same idea of asymptotic comparison for series. Exercise.
Example 8.7.5. Discuss in function of α > 0 the convergence of the integral
Z +∞
1
(x − 1) arctan α dx.
x
1
Sol. — The function f α (x) := (x − 1) arctan x1α ∈ C ([1, +∞[). Clearly, f > 0 on [1, +∞[. Let’s see the asymptotic behavior at +∞.
Being α > 0, x1α −→ 0+ as x −→ +∞ and because arctan t ∼0 t, we have
1
x
1
f α (x) ∼+∞ (x − 1) α ∼+∞ α = α−1 ,
x
x
x
R +∞ 1
R +∞
f α iff ∃
therefore, by asymptotic comparison, ∃
dx iff α − 1 > 1 that is iff α > 2.
x α−1
Remark 8.7.6 (useful!). It is often impossible and useless to know exactly the sign of a function. When we apply the asymptotic
comparison we can say that if f ∼+∞ g and g > 0 then f > 0 in a neighborhood of +∞.
Here are the versions of previous tests for generalized integrals of unbounded functions.
Theorem 8.7.7 (comparison). Let f, g ∈ C(]a,b]) be such that 0 ⩽ f(x) ⩽ g(x), ∀x ∈ ]a,b]. Then, if g is an integrable dominant,

∃ ∫_a^b g(x) dx ∈ ℝ ⟹ ∃ ∫_a^b f(x) dx ∈ ℝ.
Corollary 8.7.8 (asymptotic comparison). Let f, g ∈ R_loc(]a,b]), f, g ⩾ 0 on ]a,b]. If f ∼_{a+} g then

∃ ∫_a^b f(x) dx ⟺ ∃ ∫_a^b g(x) dx.

8.7.2 Non constant sign integrand
As for series, when we want to integrate a function of non constant sign things are much harder. As in that case, an important concept is the following.
Definition 8.7.9. Let f ∈ R_loc([a,+∞[). We say that f is absolutely integrable on [a,+∞[ if

∃ ∫_a^{+∞} |f(x)| dx.

An analogous definition holds for all the other generalized integrals.
As for series, we have the following.
Proposition 8.7.10. Any absolutely integrable function is integrable.
Proof. — Same as that of Thm 3.3.4.
Similarly as for series, it is possible to show that absolute integrability is stronger than integrability. For instance it is possible to prove that

∫_0^{+∞} (sin x)/x dx ∈ ℝ, but ∫_0^{+∞} |sin x|/x dx = +∞.
Example 8.7.11. Discuss convergence for

∫_0^{+∞} (cos x)/(x² + 1) dx.
Sol. — The integrand is continuous on [0,+∞[, hence locally integrable. Moreover

|cos x/(x² + 1)| ⩽ 1/(x² + 1) =: g(x), and ∫_0^{+∞} g(x) dx = arctan x |_{x=0}^{x=+∞} = π/2.

Therefore ∫_0^{+∞} |f(x)| dx < +∞ by comparison, that is, the proposed integral is absolutely convergent.
8.8 Functions of integral type

Let's consider now a function of the type

Φ(x) := ∫_{α(x)}^{β(x)} f(y) dy.
Notice that α(x) ≡ c and β(x) = x corresponds to Φ(x) ≡ F_c(x). In general, Φ is an extension of the concept of integral function, and for this reason it is called a function of integral type. This kind of function is used in several contexts and it is important to have calculus tools for it.
Proposition 8.8.1. Let f ∈ C([a,b]), α, β : I ⊂ ℝ → ℝ be such that α(x), β(x) ∈ [a,b] for every x ∈ I. Suppose furthermore that α and β are derivable on I. Then

Φ(x) := ∫_{α(x)}^{β(x)} f(y) dy, x ∈ I,

is derivable and

Φ'(x) = f(β(x))β'(x) − f(α(x))α'(x), x ∈ I.
Proof. — Let c ∈ [a,b] be fixed. Then

Φ(x) = ∫_{α(x)}^{β(x)} f(y) dy = ∫_{α(x)}^c f(y) dy + ∫_c^{β(x)} f(y) dy = −∫_c^{α(x)} f(y) dy + ∫_c^{β(x)} f(y) dy = −F_c(α(x)) + F_c(β(x)).

Being f continuous, by the fundamental theorem of calculus and the chain rule

Φ'(x) = −F_c'(α(x))α'(x) + F_c'(β(x))β'(x) = −f(α(x))α'(x) + f(β(x))β'(x).
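The formula is easy to test numerically. Below is a minimal sketch (Python with SciPy; the choices f(y) = cos y, α(x) = x², β(x) = sin x are arbitrary sample data, not from the text) comparing a centered difference quotient of Φ with the formula of Proposition 8.8.1.

    # Check Phi'(x) = f(beta(x)) beta'(x) - f(alpha(x)) alpha'(x)
    # for the sample choice f(y) = cos(y), alpha(x) = x^2, beta(x) = sin(x).
    import math
    from scipy.integrate import quad

    f = math.cos
    alpha = lambda x: x**2          # alpha'(x) = 2x
    beta = lambda x: math.sin(x)    # beta'(x)  = cos(x)

    def Phi(x):
        value, _err = quad(f, alpha(x), beta(x))
        return value

    x, h = 0.7, 1e-6
    numeric = (Phi(x + h) - Phi(x - h)) / (2 * h)       # centered difference
    formula = f(beta(x)) * math.cos(x) - f(alpha(x)) * 2 * x
    print(numeric, formula)   # the two values agree to high precision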
Example 8.8.2. Compute

lim_{x→0} (1/x) ∫_x^{2x} (sin y)/y dy.
Sol. — We can think of f(y) := (sin y)/y as defined and continuous on ℝ by setting f(0) = 1. Clearly

∫_x^{2x} (sin y)/y dy = ∫_x^0 (sin y)/y dy + ∫_0^{2x} (sin y)/y dy → 0,

as x → 0, whence the limit is an indeterminate form 0/0. By the Hôpital rule

lim_{x→0} ( ∫_x^{2x} (sin y)/y dy ) / x  =(H)=  lim_{x→0} ( (sin(2x)/(2x))·2 − (sin x/x)·1 ) / 1 = lim_{x→0} (sin(2x) − sin x)/x  =(H)=  lim_{x→0} (2 cos(2x) − cos x)/1 = 1.
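As a sanity check (a Python/SciPy sketch, assuming those tools are available; not part of the solution), one can evaluate (1/x)∫_x^{2x} (sin y)/y dy for small x and watch it approach 1:

    # Numerical check that (1/x) * integral_x^{2x} sin(y)/y dy -> 1 as x -> 0.
    import math
    from scipy.integrate import quad

    for x in (0.5, 0.1, 0.01, 0.001):
        value, _err = quad(lambda y: math.sin(y) / y, x, 2 * x)
        print(f"x = {x:6}:  (1/x) * integral = {value / x:.8f}")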
Example 8.8.3. Let

F(x) := ∫_x^{2x} e^{−y²} dy.

Find the domain of F, its sign, symmetries, limits and possible asymptotes at the boundary of the domain, continuity, differentiability, monotonicity, extreme points, convexity. With all this information plot a graph of F.
Sol. — Domain: The integrand f(y) := e^{−y²} ∈ C(ℝ), therefore f ∈ R([x,2x]) for any x ∈ ℝ (of course if x < 0 the interval should be [2x,x]). We conclude that D(F) = ]−∞,+∞[.

Sign: Because f(y) := e^{−y²} > 0 for any y, if x < 2x then the integral is positive by isotonicity; otherwise (if 2x < x) the integral is negative, because ∫_x^{2x} f = −∫_{2x}^x f and in this case ∫_{2x}^x f > 0 by isotonicity. We conclude that F ⩾ 0 iff x ⩽ 2x, iff x ⩾ 0. Clearly F(0) = 0.
Symmetries: Let's see what happens to F(−x). We have

F(−x) = ∫_{−x}^{−2x} e^{−y²} dy  [z = −y, y = −z]  = −∫_x^{2x} e^{−(−z)²} dz = −F(x).

Therefore F is odd. It is therefore enough to study F on [0,+∞[.
Limits and asymptotes: Let's compute F(+∞). We may notice that

F(x) = ∫_x^{2x} e^{−y²} dy = ∫_x^0 e^{−y²} dy + ∫_0^{2x} e^{−y²} dy = −∫_0^x e^{−y²} dy + ∫_0^{2x} e^{−y²} dy → −∫_0^{+∞} e^{−y²} dy + ∫_0^{+∞} e^{−y²} dy = 0,

because, as we know, the integral ∫_0^{+∞} e^{−y²} dy converges. We deduce that the straight line y = 0 is a horizontal asymptote at +∞.
Continuity and differentiability: because the integrand is continuous and F is a function of integral type with endpoints x, 2x ∈ C¹(ℝ), it follows that F is continuous and differentiable, and

F'(x) = (2x)' e^{−y²}|_{y=2x} − (x)' e^{−y²}|_{y=x} = 2e^{−4x²} − e^{−x²} = e^{−x²}(2e^{−3x²} − 1).
Monotonicity and min/max: by the previous point we have

F'(x) ⩾ 0, ⟺ 2e^{−3x²} − 1 ⩾ 0 ⟺ −3x² ⩾ log(1/2) = −log 2 ⟺ x² ⩽ (log 2)/3 ⟺ |x| ⩽ √(log 2 / 3).
We can summarize this in the following table:

              ]−∞, −√(log 2/3)[   ]−√(log 2/3), √(log 2/3)[   ]√(log 2/3), +∞[
   sgn(F')           −                        +                       −
   F                 ↘                        ↗                       ↘

It is then clear that x = −√(log 2/3) is a global minimum and x = √(log 2/3) is a global maximum.
Convexity: by F' we see that clearly F is twice differentiable and

F''(x) = 2e^{−4x²}(−8x) − e^{−x²}(−2x) = 2xe^{−x²}(1 − 8e^{−3x²}).

By this

F''(x) ⩾ 0 ⟺ x(1 − 8e^{−3x²}) ⩾ 0.

Because

1 − 8e^{−3x²} ⩾ 0, ⟺ e^{−3x²} ⩽ 1/8 ⟺ −3x² ⩽ −log 8 = −3 log 2 ⟺ x² ⩾ log 2 ⟺ |x| ⩾ √(log 2),

we have
                        ]−∞, −√(log 2)[   ]−√(log 2), 0[   ]0, √(log 2)[   ]√(log 2), +∞[
   sgn(x)                      −                −                +                +
   sgn(1 − 8e^{−3x²})          +                −                −                +
   sgn(F'')                    −                +                −                +
   F                           ↓                ↑                ↓                ↑

Therefore x = ±√(log 2), as well as x = 0, are flexes. The graph is the following.

[Figure: graph of F, odd, with global minimum at −√(log 2/3), global maximum at √(log 2/3), flexes at 0 and ±√(log 2), and horizontal asymptote y = 0.]
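The qualitative study can be double-checked numerically. The sketch below (plain Python, illustrative only) evaluates F'(x) = 2e^{−4x²} − e^{−x²} at the predicted maximum x = √(log 2/3) and F'' at the predicted flexes x = 0, √(log 2):

    # Numerical check of the critical points of F(x) = integral_x^{2x} e^{-y^2} dy.
    import math

    def Fprime(x):
        return 2 * math.exp(-4 * x * x) - math.exp(-x * x)

    def Fsecond(x):
        return 2 * x * math.exp(-x * x) * (1 - 8 * math.exp(-3 * x * x))

    c = math.sqrt(math.log(2) / 3)   # predicted global maximum point
    d = math.sqrt(math.log(2))       # predicted positive flex
    print("F'(c)  =", Fprime(c))     # ~ 0
    print("F''(d) =", Fsecond(d))    # ~ 0
    print("F''(0) =", Fsecond(0))    # = 0 (flex at the origin)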
8.9 Exercises
Exercise 8.9.1. Compute

1. ∫_0^{π/3} 1/(cos x)³ dx.
2. ∫_0^1 1/(1 + x⁴) dx.
3. ∫_1^3 (√(x+1) + 1)/(√(x+1) − 1) dx.
4. ∫_e^{e²} ((log x)/x) √((log x)² − 1) e^{√((log x)² − 1)} dx.
5. ∫_0^{log 4} √(e^x − 1)/(8e^{−x} + 1) dx.
6. ∫_1^4 (log x)/(x√x + 2x + √x) dx.
7. ∫_{−log 3}^{−log 2} (e^{2x} + e^x)/(e^{2x} − 4e^x + 3) dx.
8. ∫_{1/4}^1 (x − 1)/((√x + 1)(√x − 2)) dx.
9. ∫_{log 2}^{2 log 2} e^x arctan( 1/√(e^x − 1) ) dx.
10. ∫_1^2 sinh(√(log x²))/x dx.
Exercise 8.9.2. Compute the area interior to the ellipse

x²/a² + y²/b² = 1.
Exercise 8.9.3. Compute

1. ∫_2^{+∞} 1/(x (log x)³) dx.
2. ∫_1^{+∞} 1/((1 + x)√x) dx.
3. ∫_1^{+∞} 2x log x/(1 + x²)² dx.
4. ∫_1^2 1/(x(2 − x)) dx.
5. ∫_0^2 1/(√x √(2x + 1)) dx.
6. ∫_0^3 1/√(3 + 2x − x²) dx.
7. ∫_0^1 (3x² + 2)/x^{3/2} dx.
8. ∫_{−∞}^{+∞} 1/(x² + 3x + 4) dx.
9. ∫_0^{1/2} 1/(x (log x)²) dx.
Exercise 8.9.4. Let α > 0 and

f_α(x) := (e^{2x} + e^x)/(e^{2x} − 2e^x + 2)^α, x ∈ ℝ.

Compute a primitive of f_1 and use it to compute ∫_{−∞}^0 f_1(x) dx. In general, and without computing the integrals, for which values α > 0 does ∫_0^{+∞} f_α(x) dx converge?
Exercise 8.9.5. Let α > 0 and

f_α(x) := (x − 1) arctan(1/x^α), x ∈ ]0,+∞[.

Compute, if it exists, ∫_0^1 f_α(x) dx. Without computing any integral, determine the set of all α > 0 such that ∫_1^{+∞} f_α(x) dx is finite. Are there values α > 0 such that ∫_0^{+∞} f_α(x) dx converges?
Exercise 8.9.6. Let

f_α(x) = x^α log(1 + 1/x).

Discuss integrability at +∞ for f_α and compute, if it exists, ∫_1^{+∞} f_{−1/2}(x) dx.
Exercise 8.9.7. Determine the α ∈ ℝ such that

∫_0^1 (arctan x)^{α+1/2} / √((1 − x)^α) dx

converges, and compute its value for α = −1/2.
Exercise 8.9.8. Determine the α, β ∈ ℝ such that the following integral converges:

∫_0^{1/2} arctan(x^α)/(1 − cos x)^β dx.
Exercise 8.9.9. Let

f_α(x) := e^x arctan( 1/(e^x − 1)^α ), x ∈ ]0,+∞[.

Determine the values α > 0 such that f_α is integrable at 0+, at +∞, and such that the integral ∫_0^{+∞} f_α(x) dx exists. Compute, if it exists, ∫_0^1 f_{1/2}(x) dx.
Exercise 8.9.10. Let

f_α(x) := (1/x^α) log(1 + ∛x), x ∈ [0,+∞[.

Compute, if it exists, ∫_0^1 f_1(x) dx. Then find all possible values of α such that f_α is integrable at +∞ (?).
Exercise 8.9.11 (?). Compute

1. lim_{x→0} ( x/(1 − e^{x²}) ) ∫_0^x e^{y²} dy.

2. lim_{x→0} ( ∫_0^x (e^{t²} − 7 sin(t²)) dt − 12 sin x + 11x ) / x⁷.

3. lim_{x→0} (1/x²) ( ∫_0^x e^{√y} cos(√y) dy − x cos(√x) + x ).
Exercise 8.9.12 (?). Let

f(x) := ∫_0^{sin x} 1/(1 + t) dt, g(x) := ∫_0^x (cos t)/(2 + e^t) dt.

Say if f/g is continuously extendable at x = 0. In such a case, is the extension differentiable?
Exercise 8.9.13 (??). Find a, b, c ∈ ℝ such that

∫_x^{2x} (log t)/(1 + t) dt ∼_{+∞} c x^a (log x)^b.
Exercise 8.9.14 (??). Find C ≠ 0 and α ∈ ℝ such that

∫_x^{+∞} e^{−y²} dy ∼_{+∞} C e^{−x²}/x^α.
Exercise 8.9.15 (?). Let

Φ(x) := ∫_0^x log( 1 − e^{(y−1)/y²} ) dy.

Determine: the domain of Φ, sign, limits (finite or not) and asymptotes, continuity, differentiability, limits of Φ', monotonicity, min/max, and plot a graph of Φ.
Exercise 8.9.16 (?). Let

Φ(x) := log(4/3) + ∫_0^x log( 1 + (t − 1)/(t² + 4) ) dt.

Determine: the domain of Φ, limits (finite or not) and asymptotes, continuity, differentiability, limits of Φ', monotonicity, min/max, convexity, and plot a graph of Φ. Deduce the sign of Φ. Extra: compute Φ explicitly.
Exercise 8.9.17 (?). Let

Φ(x) := ∫_0^x (sin y)/(1 + y²) dy.

Determine: the domain of Φ, symmetries, sign, limits (finite or not) and asymptotes, continuity, differentiability, limits of Φ', monotonicity, min/max, and plot a graph of Φ. Find the values α ∈ ℝ such that

lim_{x→0+} Φ(x)/x^α ∈ ℝ\{0}.
Exercise 8.9.18 (?). Show that

∫_0^x (sin t)/t dt ⩽ 1 + log x, ∀x ⩾ 1.
Exercise 8.9.19 (??). Show that

| ∫_0^x (sin t)/(1 + t²) dt | ⩽ x²/2, ∀x ∈ ℝ.
Chapter 9
Basic Differential Equations
Ordinary Differential Equations (ODEs) are a wide topic of Mathematical Analysis. Their relevance is due to the importance of their applications in Physics, Engineering, Biology, Economy, etc. An ODE is first of all an equation whose unknown is a function of one variable, say y = y(t) (t ∈ ℝ). The equation is a relation between y and some of its derivatives up to a certain order, which is called the order of the equation. This explains the D and the E of ODE. The O is to distinguish these equations from the same type of equations where the unknown is a function of several variables (those are called Partial Differential Equations, PDEs). Instead of giving a formal definition of what an ODE is, let's see how they arise in concrete examples.
9.1 Motivating models

9.1.1 Decay phenomena
Let's consider the following

Example 9.1.1. A water pipe loses by percolation a fraction ν of its volume per unit of time. What happens if a constant flux φ > 0 per unit of time is added to the pipe?
Sol. — Let's call V(t) the volume of the water inside the pipe, V_max the maximum volume and V(0) = V₀ the initial volume. The variation of the volume V(t) on a small time interval, that is V(t + dt) − V(t), is given by a reduction −νV(t) dt due to percolation and by an increase φ dt due to the flux. Therefore

V(t + dt) − V(t) = −νV(t) dt + φ dt.

Dividing by dt and letting dt → 0 we get the equation

V'(t) = −νV(t) + φ.

Leaving apart the formal details, we could notice that the equation is equivalent to

V'(t)/(−νV(t) + φ) = 1.

At the left hand side we recognize, more or less, a derivative. Indeed

( log|−νV(t) + φ| )' = −νV'(t)/(−νV(t) + φ) = −ν,

by the equation. So we could say that

log|−νV(t) + φ| = −νt + C,
where C is some constant that can be found by imposing V(0) = V₀, that is C = log|−νV₀ + φ|. Now, we would have

|−νV(t) + φ| = e^{−νt+C}, ⟺ V(t) = φ/ν ± (1/ν)e^{−νt+C}.

But + or −? According to the value of C, V₀ = φ/ν ± (1/ν)|−νV₀ + φ| = φ/ν ± |φ/ν − V₀|. We see that if φ/ν > V₀ we have to take −, otherwise +.
Therefore

V(t) = φ/ν − ((φ − νV₀)/ν) e^{−νt} = (φ/ν)(1 − e^{−νt}) + V₀e^{−νt}, if φ/ν > V₀,

V(t) = φ/ν + ((νV₀ − φ)/ν) e^{−νt} = (φ/ν)(1 − e^{−νt}) + V₀e^{−νt}, if φ/ν < V₀,

that is, in both cases,

V(t) = (φ/ν)(1 − e^{−νt}) + V₀e^{−νt}.

As time goes on, V stabilizes towards the limit level V(+∞) = φ/ν.
The equation V'(t) = −νV(t) + φ is an example of a first order linear equation. Linear means, roughly, that the dependence on the unknown is through a first degree polynomial. As we will see, for an ODE this means a particularly simple structure for the set of solutions and explicit formulas for the solution.
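The explicit formula can be compared with a numerical integration of the ODE. Here is a minimal sketch (Python with SciPy's solve_ivp; the values ν = 0.5, φ = 1, V₀ = 0.2 are arbitrary sample data, not from the text):

    # Compare V(t) = (phi/nu)(1 - e^{-nu t}) + V0 e^{-nu t}
    # with a numerical solution of V' = -nu V + phi.
    import math
    from scipy.integrate import solve_ivp

    nu, phi, V0 = 0.5, 1.0, 0.2     # hypothetical sample data

    sol = solve_ivp(lambda t, V: -nu * V[0] + phi, (0, 10), [V0],
                    dense_output=True, rtol=1e-8, atol=1e-10)

    for t in (0.0, 1.0, 5.0, 10.0):
        exact = (phi / nu) * (1 - math.exp(-nu * t)) + V0 * math.exp(-nu * t)
        print(f"t = {t:4}:  numeric = {sol.sol(t)[0]:.6f}   exact = {exact:.6f}")
    print("limit level phi/nu =", phi / nu)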
9.1.2 Newton equations
The most classical example is given by the Newton equations of motion. Precisely, the Newton second law says that if a particle of mass m is moving under the effect of some force F then

ma = F,

where a is the acceleration of the particle. Denoting by x(t) the position at time t (we assume for simplicity that the particle is moving along a straight line, so we can describe its position with a function x(t)), the Newton law translates into the equation

mx''(t) = F.

Usually forces depend on the position of the particle and possibly on the velocity. They could also depend on time, so the second principle assumes the form

mx''(t) = F(t, x(t), x'(t)).
A classical example is the following. Consider a mass m moving along a line under the action of a spring and of a friction. If κ > 0 is the elastic constant of the spring, at rest when the particle is in the origin, the first force is

−κx(t).

The minus means that when the particle is out of the origin, the force tends to move the particle towards the origin. On the other side, the friction depends on the velocity (assuming that the line is homogeneous): the faster the particle is, the stronger the friction is. So we could describe the action of friction as a decelerating action of the form

−νx'(t)

(ν > 0 is called viscosity). Putting all together we obtain the equation

mx''(t) = −κx(t) − νx'(t).

If the viscosity vanishes, we expect a perfect infinite oscillatory motion, that is x(t) like a sinusoid, whereas if we switch off the spring and we leave only the viscosity, we expect that, if the mass has some initial velocity, it will tend to stop sometime later. Combining the two we expect an attenuated oscillatory motion. But: how to prove it? And what if we want a precise measure of the attenuation?
If an external force f(t) (that is, independent of the mass) acts on the mass, the equation assumes the form

mx''(t) = −κx(t) − νx'(t) + f(t).

With this simple equation we describe lots of phenomena, like forced oscillations. This is a second order linear equation: linear because the dependence on x, x', x'' is through a so called linear function, that is a first degree polynomial.
9.1.3 Bernoulli brachistochrone
A point mass m starts from a point A and has to reach a point B at a lower height than A. Supposing that it moves only by effect of gravity (without friction), what is the path connecting A to B such that the travel time is minimum?

Clearly we are assuming here that the mass m moves along a plane curve. The problem was well known to Galileo, who showed that between a straight line and a circular profile the second is better. But this doesn't give the complete answer. We look for a plane curve γ(t) = (x(t), y(t)) connecting A to B, such that if the body moves along γ the travel time is minimum. How to formalize this? To simplify things we assume that the curve is actually the graph of a function y = y(x).
Here's the ingenious argument by Bernoulli to derive an equation fulfilled by y. To start, suppose we divide the path γ into small, approximately straight pieces, and consider two consecutive pieces. On each of them we may assume that the velocity is approximately constant. Now, having minimum time recalls a natural phenomenon: the way light travels. Light, indeed, travels in such a way as to employ the minimum time (this is the Fermat postulate). A little analysis shows then that if v_i is the modulus of the speed and θ_i is the angle with the vertical, then the Snellius law must hold:

v_{i+1}/v_i = sin θ_{i+1}/sin θ_i.

[Figure: the path from A to B approximated by straight pieces.]

By this we deduce that

sin θ_i / v_i ≡ C.
Now, passing to the limit, we get the relation

sin θ(x) / v(x) ≡ C.
Now it is not difficult to see what this means in terms of y. Indeed, y'(x) represents the tangent to the curve, that is

y'(x) = tan(π/2 − θ(x)) = 1/tan θ(x) = √(1 − (sin θ(x))²)/sin θ(x), ⟹ sin θ(x) = 1/√(1 + y'(x)²).
Moreover, by the conservation of energy,

E_kin + E_pot = constant, ⟺ (1/2)mv² − mgy = constant.

Assuming that initially the mass is at rest at height 0, we have

(1/2)mv² = mgy, ⟺ v² = 2gy,
that is

sin θ/v = C, ⟺ 1/( √(2gy(x)) √(1 + y'(x)²) ) = C, ⟺ y(x)(1 + y'(x)²) = 1/(2C²g) =: K.
This is called the Bernoulli equation. As you can see, this equation involves y and y', but the connection is nonlinear in y, y'. As we will see, this equation can be explicitly solved, being a particular case of a separable variables equation.
9.2 First order linear equations
The first type of ODE we consider is the following:

y'(t) = a(t)y(t) + b(t), t ∈ I.   (9.2.1)

Here a, b : I ⊂ ℝ → ℝ are known functions (called coefficients). If b ≡ 0 we say that the equation is homogeneous. In this case the set of solutions has a linear structure: we can notice that if ϕ and ψ are solutions, then αϕ + βψ is also a solution (here α, β ∈ ℝ). Indeed

(αϕ + βψ)'(t) = αϕ'(t) + βψ'(t) = αa(t)ϕ(t) + βa(t)ψ(t) = a(t)(αϕ + βψ)(t), t ∈ I.
We will now prove a general theorem which gives the solutions of (9.2.1):

Theorem 9.2.1. Let a, b ∈ C(I), I ⊂ ℝ an interval. Then, the solutions of

y'(t) = a(t)y(t) + b(t), t ∈ I,

are

y(t) = e^{∫a(t)dt} [ ∫ e^{−∫a(t)dt} b(t) dt + c ], t ∈ I,   (9.2.2)

where c ∈ ℝ is a constant. The (9.2.2) is called the general integral.

Proof. — Assume that y ∈ C¹(I) is a solution. Then

y'(t) − a(t)y(t) = b(t).

Now, if A(t) = ∫ a(t) dt, that is A'(t) = a(t), then

[ e^{−A(t)} y(t) ]' = e^{−A(t)}(−A'(t))y(t) + e^{−A(t)}y'(t) = e^{−A(t)}( y'(t) − a(t)y(t) ) = e^{−A(t)} b(t).

That is, e^{−A(t)}y(t) is a primitive of e^{−A(t)}b(t), or

e^{−A(t)}y(t) = ∫ e^{−A(t)}b(t) dt + c, ⟺ y(t) = e^{A(t)} ( ∫ e^{−A(t)}b(t) dt + c ),

which is (9.2.2). Now, let's check that this is actually a solution. We have

y'(t) = [ e^{A(t)} ]' [ ∫ e^{−A(t)}b(t) dt + c ] + e^{A(t)} [ ∫ e^{−A(t)}b(t) dt + c ]'
      = a(t)e^{A(t)} [ ∫ e^{−A(t)}b(t) dt + c ] + e^{A(t)} e^{−A(t)}b(t)
      = a(t)y(t) + b(t).
Example 9.2.2. Find the general integral of the equation

y'(t) − (2/t)y(t) = 1, t ∈ ]0,+∞[.

Sol. — We have

y'(t) = (2/t)y(t) + 1 = a(t)y(t) + b(t), where a(t) = 2/t, b(t) = 1.

Therefore

y(t) = e^{∫(2/t)dt} ( ∫ e^{−∫(2/t)dt} dt + C ) = e^{2 log t} ( ∫ e^{−2 log t} dt + C ) = t² ( ∫ (1/t²) dt + C ) = t² ( −1/t + C ) = −t + Ct².
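One can let a computer algebra system confirm the result. A sketch, assuming Python with SymPy is available:

    # Symbolic check that y(t) = -t + C t^2 solves y' - (2/t) y = 1 on ]0,+oo[.
    import sympy as sp

    t, C = sp.symbols('t C', positive=True)
    y = -t + C * t**2
    print(sp.simplify(sp.diff(y, t) - 2 * y / t))   # prints 1

    # SymPy's own solver returns the same family of solutions:
    Y = sp.Function('y')
    print(sp.dsolve(sp.Eq(Y(t).diff(t) - 2 * Y(t) / t, 1), Y(t)))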
You shouldn't be surprised that we don't have uniqueness. Even the simplest equation

y' = 0

has infinitely many solutions (all the constants). To fix one specific solution we need to impose further conditions on the solution. A classical problem is the so called Cauchy Problem or passage problem:

CP(t₀, y₀):  { ϕ'(t) = a(t)ϕ(t) + b(t), t ∈ I,
             { ϕ(t₀) = y₀.
Here, of course, t₀ ∈ I. It is clear that there's a unique solution of this problem. Indeed, calling

U(t) := e^{∫a(t)dt} ∫ e^{−∫a(t)dt} b(t) dt,

by the previous Thm

ϕ(t) = U(t) + ce^{A(t)}, ⟹ y₀ = ϕ(t₀) = U(t₀) + ce^{A(t₀)}, ⟺ c = (y₀ − U(t₀))/e^{A(t₀)}.
Example 9.2.3. Solve the Cauchy Problem

{ y'(t) − 2y(t)/(1 − t²) = t, t > 1,
{ y(2) = 0.
Sol. — Rewriting the equation in the canonical form

y'(t) = ( 2/(1 − t²) )y(t) + t, ⟹ y(t) = e^{∫ 2/(1−t²) dt} ( ∫ e^{−∫ 2/(1−t²) dt} t dt + C ).   (9.2.3)

Now

∫ 2/(1 − t²) dt = ∫ 2/((1 − t)(1 + t)) dt = ∫ ( 1/(1 − t) + 1/(1 + t) ) dt = −log|1 − t| + log|1 + t| = log| (t + 1)/(t − 1) |.

Because t ∈ ]1,+∞[, (t + 1)/(t − 1) > 0, therefore

y(t) = e^{log((t+1)/(t−1))} ( ∫ e^{−log((t+1)/(t−1))} t dt + C ) = ((t + 1)/(t − 1)) ( ∫ t (t − 1)/(t + 1) dt + C ).
Now

∫ t (t − 1)/(t + 1) dt = ∫ t (t + 1 − 2)/(t + 1) dt = ∫ t dt − 2 ∫ t/(t + 1) dt = t²/2 − 2 ∫ dt + 2 ∫ 1/(t + 1) dt = t²/2 − 2t + 2 log|t + 1|,

and finally

ϕ(t) = ((t + 1)/(t − 1)) ( t²/2 − 2t + 2 log(1 + t) + C ), t ∈ ]1,+∞[.

Imposing ϕ(2) = 0 we have

3(2 − 4 + 2 log 3 + C) = 0, ⟺ C = 2(1 − log 3).

9.3 First order separable variables equations
An equation of the type

y'(t) = a(t) f(y(t))

is called a separable variables equation. The point is that this kind of equation can be solved (at least in principle) by integrations. Let's see how and why. To better understand the argument, let's consider the Cauchy problem CP(t₀, y₀). We have basically two cases:

• Suppose f(y₀) = 0. Then clearly y(t) ≡ y₀ is a solution, because

y'(t) = 0, a(t) f(y(t)) = a(t) f(y₀) = 0.

• Suppose f(y₀) ≠ 0. Assume that f(y(t)) ≠ 0 for any t where y is defined. We can separate the variables, that is rewrite the equation as

y'(t) = a(t) f(y(t)), ⟺ y'(t)/f(y(t)) = a(t).

Integrating,

∫ a(t) dt + c = ∫ y'(t)/f(y(t)) dt  [u := y(t), du = y'(t) dt]  = ∫ (1/f(u)) du |_{u=y(t)} =: G(y(t)).

Assuming G = ∫ 1/f known, we have a sort of algebraic equation for y. Because y is not yet explicit, we call this the implicit form of the solution. If G is invertible we may go on to obtain

y(t) = G^{−1}( ∫ a(t) dt + c ).

This is the explicit form. The value of c is determined by imposing the passage condition y(t₀) = y₀.
This argument could be made formally precise (where we assumed f(y(t)) ≠ 0 for any t) as

Theorem 9.3.1. Let a ∈ C([a,b]) and f ∈ C¹(I) with I ⊂ ℝ. Then, for any (t₀, y₀) ∈ [a,b] × I there exists a solution y of

{ y'(t) = a(t) f(y(t)),
{ y(t₀) = y₀.

In particular
• if f(y₀) = 0 then y(t) ≡ y₀ is a (stationary) solution;

• if f(y₀) ≠ 0 then, setting G(z) := ∫ (1/f(z)) dz a primitive of 1/f, we have

G(y(t)) = ∫ a(t) dt + c,   (9.3.1)

for a suitable c. If G is invertible we have

y(t) = G^{−1}( ∫ a(t) dt + c ).   (9.3.2)

The (9.3.1) is called the implicit form of the solution, the (9.3.2) the explicit form of the solution.

Proof. — Omitted.
Example 9.3.2. Solve the Cauchy problem

{ y'(t) = 1 + y(t)²,
{ y(0) = y₀.
Sol. — This is a separable variables equation y' = a(t)f(y) with a(t) ≡ 1 and f(y) = 1 + y², which clearly fulfills the hypotheses of the previous Thm. Of course, because f(y₀) = 1 + y₀² > 0, we don't have stationary solutions, and we can separate variables:

y'(t) = 1 + y(t)², ⟺ y'(t)/(1 + y(t)²) = 1, ⟺ ∫ y'(t)/(1 + y(t)²) dt = ∫ 1 dt + C = t + C.

Now,

∫ y'(t)/(1 + y(t)²) dt = ∫ 1/(1 + u²) du |_{u=y(t)} = arctan u|_{u=y(t)} = arctan y(t),

so

arctan(y(t)) = t + C, ⟺ y(t) = tan(t + C).

Imposing y(0) = y₀ we get y₀ = tan C, that is C = arctan y₀, and finally

y(t) = tan(t + arctan y₀).

Evidently this solution "lives" up to the time when t + arctan y₀ = π/2 (in the future) and t + arctan y₀ = −π/2 (in the past), that is, in this case,

I_{t₀} = ] −π/2 − arctan y₀, π/2 − arctan y₀ [.
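Again a numerical cross-check is easy. A Python/SciPy sketch, with the sample datum y₀ = 1 (so the blow-up time is π/2 − arctan 1 = π/4):

    # Compare the explicit solution y(t) = tan(t + arctan(y0)) of y' = 1 + y^2
    # with a numerical integration, for the sample datum y0 = 1.
    import math
    from scipy.integrate import solve_ivp

    y0 = 1.0
    sol = solve_ivp(lambda t, y: 1 + y[0]**2, (0, 0.7), [y0],
                    dense_output=True, rtol=1e-10, atol=1e-12)

    for t in (0.0, 0.3, 0.6, 0.7):
        exact = math.tan(t + math.atan(y0))
        print(f"t = {t}:  numeric = {sol.sol(t)[0]:.8f}   exact = {exact:.8f}")
    # the solution blows up at t = pi/2 - arctan(y0) = pi/4 ~ 0.785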
Solution of the Bernoulli brachistochrone

Recall the equation of the Bernoulli brachistochrone:

y(x)(1 + y'(x)²) = K.
This equation is, in fact, a separable variables equation. Indeed, first notice that

y(1 + y'²) = K, ⟺ y'² = K/y − 1, ⟺ y' = √(K/y − 1) = √((K − y)/y),

where we choose the positive root because y' > 0 means "the trajectory descends" (hence y > 0 in our setting). In this form the equation becomes a separable variables equation. Looking for a non constant solution we have
√(y/(K − y)) y' = 1, ⟺ x + c = ∫ 1 dx = ∫ √(y/(K − y)) y' dx  [u = y(x)]  = ∫ √(u/(K − u)) du.
Setting

v := √(u/(K − u)), ⟺ u = Kv²/(1 + v²), ⟹ du = ( 2Kv(1 + v²) − Kv²·2v )/(1 + v²)² dv = 2Kv/(1 + v²)² dv,
so

∫ √(u/(K − u)) du = ∫ v · 2Kv/(1 + v²)² dv = K ∫ v · ( −1/(1 + v²) )' dv = K [ −v/(1 + v²) + ∫ 1/(1 + v²) dv ]

= −Kv/(1 + v²) + K arctan v = −√(u(K − u)) + K arctan √(u/(K − u)),
by which it follows that

∫ √(y/(K − y)) dy = x + c, ⟺ −√(y(K − y)) + K arctan √(y/(K − y)) = x + c.   (9.3.3)
This is nothing but the implicit form of the solution. To extract y we use the following trick. Introduce the new variable θ as

θ = arctan √(y/(K − y)), ⟺ y/(K − y) = (tan θ)², ⟺ y = K (tan θ)²/(1 + (tan θ)²) = K (sin θ)²,
and inserting this into (9.3.3) we get

−√( K(sin θ)² ( K − K(sin θ)² ) ) + Kθ = x + c, ⟺ −(K/2) sin(2θ) + Kθ = x + c,

which may be rewritten as

x = (K/2)(2θ − sin(2θ)) − c ≡ (K/2)(ϑ − sin ϑ) − c,

where we set ϑ := 2θ. In other words, we get that the point (x, y(x)) has coordinates

{ x = (K/2)(ϑ − sin ϑ) − c,
{ y = K (sin(ϑ/2))².
This describes a curve in the plane (x, y) (an arc of a cycloid) whose trace is just the graph of the function we are looking for. The constants K and c are determined by imposing the passage conditions at the origin (assumed to coincide with A) and at the final point B.
[Figure: the brachistochrone curve joining A and B.]
9.4 Second Order Linear Equations
More involved is the theory for second order linear ODEs, that is for equations of the type

y''(t) = a(t)y'(t) + b(t)y(t) + f(t).   (9.4.1)

If f ≡ 0, the equation is said homogeneous, and if a(t) ≡ a, b(t) ≡ b the equation is said to have constant coefficients. For simplicity we will actually limit ourselves to this case, that is we will consider an equation of the type

y'' + ay' + by = f(t).

To begin we will consider the homogeneous case

y''(t) + ay'(t) + by(t) = 0.

To solve this equation in general, notice the following: if we call D the derivative, the previous equation can be rewritten as

(D² + aD + b)y = 0.

The polynomial

λ² + aλ + b

is called the characteristic polynomial, and basically it contains all the information needed to look for solutions.
Theorem 9.4.1. Let λ² + aλ + b be the characteristic polynomial associated to the equation y'' + ay' + by = 0. Then the general integral is

c₁w₁(t) + c₂w₂(t), c₁, c₂ ∈ ℝ,

where the couple (w₁, w₂) is

• (e^{λ₁t}, e^{λ₂t}), where λ_{1,2} are the zeros of the char. pol., if ∆ > 0;

• (e^{λ₁t}, te^{λ₁t}), where λ₁ is the zero of the char. pol., if ∆ = 0;

• (e^{αt}cos(βt), e^{αt}sin(βt)), where λ_{1,2} = α ± iβ are the complex zeros of the char. pol., if ∆ < 0.

The couple (w₁, w₂) is called a fundamental system of solutions.
Proof. — We have three cases.

• Case ∆ > 0: the characteristic polynomial can be factorized as

λ² + aλ + b = (λ − λ₁)(λ − λ₂),

so we may expect that

D² + aD + b = (D − λ₁)(D − λ₂),

hence

(D² + aD + b)y = 0, ⟺ (D − λ₁)(D − λ₂)y = 0.

Now call ψ = (D − λ₂)y. Then

(D − λ₁)ψ = 0, ⟺ ψ' = λ₁ψ, ⟺ ψ = c₁e^{λ₁t}.

But then

(D − λ₂)y = c₁e^{λ₁t}, ⟺ y' = λ₂y + c₁e^{λ₁t}.

This is a first order linear equation that may be easily solved by the general formula (9.2.2), obtaining

y(t) = e^{λ₂t} ( ∫ e^{−λ₂t} c₁e^{λ₁t} dt + c₂ ) = e^{λ₂t} ( c₁ ∫ e^{(λ₁−λ₂)t} dt + c₂ ) = (c₁/(λ₁ − λ₂)) e^{λ₁t} + c₂e^{λ₂t},

and being c₁, c₂ arbitrary, we finally get

y(t) = c₁e^{λ₁t} + c₂e^{λ₂t}.
• Case ∆ = 0: we can repeat the same computations as before, up to the point

y(t) = e^{λ₂t} ( c₁ ∫ e^{(λ₁−λ₂)t} dt + c₂ ),

but now λ₁ = λ₂, therefore

y(t) = e^{λ₁t} ( c₁ ∫ dt + c₂ ) = c₁te^{λ₁t} + c₂e^{λ₁t}.
• Case ∆ < 0: in this case the characteristic polynomial is irreducible. However

λ² + aλ + b = (λ + a/2)² + (4b − a²)/4 = (λ − α)² + β² = 0, ⟺ (λ − α)² = −β².

Therefore

(D² + aD + b)y = 0, ⟺ (D − α)²y = −β²y.

Now, notice that

(D − α)y = y' − αy = e^{αt} D(e^{−αt}y), ⟹ (D − α)²y = e^{αt} D( e^{−αt} e^{αt} D(e^{−αt}y) ) = e^{αt} D²(e^{−αt}y),

and by this

(D − α)²y = −β²y, ⟺ e^{αt} D²(e^{−αt}y) = −β²y, ⟺ D²(e^{−αt}y) = −β² e^{−αt}y.

Finally, setting for a while φ = e^{−αt}y, the previous equation becomes

D²φ = −β²φ,

and two solutions of this are φ₁(t) = cos(βt), φ₂(t) = sin(βt). This means that

y₁(t) = e^{αt}cos(βt), y₂(t) = e^{αt}sin(βt)

are two solutions of the initial equation, and the general integral is in this case

c₁e^{αt}cos(βt) + c₂e^{αt}sin(βt).
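The case analysis translates directly into a small routine. The sketch below (plain Python, for illustration only) returns a fundamental system (w₁, w₂) of y'' + ay' + by = 0 following the theorem:

    # Fundamental system of y'' + a y' + b y = 0 from the characteristic
    # polynomial lambda^2 + a*lambda + b, following Theorem 9.4.1.
    import math

    def fundamental_system(a, b):
        delta = a * a - 4 * b
        if delta > 0:
            l1 = (-a + math.sqrt(delta)) / 2
            l2 = (-a - math.sqrt(delta)) / 2
            return (lambda t: math.exp(l1 * t), lambda t: math.exp(l2 * t))
        if delta == 0:
            l1 = -a / 2
            return (lambda t: math.exp(l1 * t), lambda t: t * math.exp(l1 * t))
        alpha, beta = -a / 2, math.sqrt(-delta) / 2    # roots alpha +- i beta
        return (lambda t: math.exp(alpha * t) * math.cos(beta * t),
                lambda t: math.exp(alpha * t) * math.sin(beta * t))

    # Example: y'' + y' - 6y = 0 has the fundamental system (e^{2t}, e^{-3t}).
    w1, w2 = fundamental_system(1, -6)
    print(w1(1), math.exp(2), w2(1), math.exp(-3))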
Once we have solved the homogeneous equation, we just need one particular solution (of the non homogeneous equation):

Proposition 9.4.2. Let (w₁, w₂) be a fundamental system of solutions of the homogeneous equation

y'' + ay' + by = 0,   (9.4.2)

and U a particular solution of the equation

y'' + ay' + by = f(t).   (9.4.3)

Then, the general integral of (9.4.3) is

y(t) = c₁w₁(t) + c₂w₂(t) + U(t), c₁, c₂ ∈ ℝ.

Proof. — Just notice that if y solves (9.4.3), then y − U solves (9.4.2). Indeed

(y − U)'' + a(y − U)' + b(y − U) = y'' + ay' + by − (U'' + aU' + bU) = f − f = 0, ⟹ y − U = c₁w₁ + c₂w₂.
A general, ingenious method to look for this particular solution was provided by Lagrange, and it is called the variation of arbitrary constants formula. Let's see how it works. His idea was to look for a function U of the type

U(t) = c₁(t)w₁(t) + c₂(t)w₂(t), t ∈ I.

Of course c₁ and c₂ must be variable unless f ≡ 0 (why?). The idea is to find c₁, c₂ by imposing that such a U is a solution of the equation.
Theorem 9.4.3 (Lagrange). Let (w₁, w₂) be a fundamental system of solutions of (9.4.2). Then

U(t) = −( ∫ (w₂(t)/W(t)) f(t) dt ) w₁(t) + ( ∫ (w₁(t)/W(t)) f(t) dt ) w₂(t), t ∈ I,   (9.4.4)

is a particular solution of (9.4.3) (W is the wronskian of (w₁, w₂)).
Proof. — Let U = c₁w₁ + c₂w₂. Then

U' = c₁'w₁ + c₁w₁' + c₂'w₂ + c₂w₂'.

To simplify the computations we impose (for the moment arbitrarily) the condition

c₁'w₁ + c₂'w₂ = 0.

Then

U'' = c₁'w₁' + c₁w₁'' + c₂'w₂' + c₂w₂''.

Hence, using that w₁, w₂ solve the homogeneous equation,

U'' + aU' + bU = f, ⟺ c₁'w₁' + c₂'w₂' = f.

We may conclude that U is a solution iff

{ c₁'w₁ + c₂'w₂ = 0,
{ c₁'w₁' + c₂'w₂' = f.   (9.4.5)

This can be seen as a 2×2 linear system in the unknowns (c₁', c₂') with coefficient matrix

( w₁    w₂  )
( w₁'   w₂' ).

Now, to find c₁', c₂' we apply the Cramer rule. Call W(t) the determinant of the previous matrix,

W(t) := w₁w₂' − w₂w₁'  (the wronskian of (w₁, w₂)).
It is easy to check that in all the cases W(t) ≠ 0 for any t:

det [ e^{λ₁t}       e^{λ₂t}
      λ₁e^{λ₁t}    λ₂e^{λ₂t} ] = e^{(λ₁+λ₂)t}(λ₂ − λ₁),

det [ e^{λt}     te^{λt}
      λe^{λt}    (1 + λt)e^{λt} ] = e^{2λt},

and

det [ e^{αt}cos(βt)                       e^{αt}sin(βt)
      e^{αt}(α cos(βt) − β sin(βt))       e^{αt}(α sin(βt) + β cos(βt)) ] = βe^{2αt}.
Therefore, by the Cramer rule,

c₁'(t) = −w₂(t)f(t)/W(t), c₂'(t) = w₁(t)f(t)/W(t),   (9.4.6)

that is

c₁(t) = −∫ (w₂(t)/W(t)) f(t) dt, c₂(t) = ∫ (w₁(t)/W(t)) f(t) dt,

so we get just the (9.4.4).
Example 9.4.4. Find the general integral of the equation

y''(t) + y'(t) − 6y(t) = 2e^{−t}, t ∈ ℝ.

Sol. — We start by computing the fundamental system of solutions of the homogeneous equation. The characteristic polynomial is

λ² + λ − 6 = 0, ∆ = 1 + 24 = 25 > 0, λ± = (−1 ± √25)/2 = (−1 ± 5)/2 = 2, −3.

Therefore the fundamental solutions are w₁(t) = e^{2t}, w₂(t) = e^{−3t}, with wronskian

W(t) = (−3 − 2)e^{−t} = −5e^{−t}.

By the Lagrange formula (9.4.4) we have

U(t) = −( ∫ (e^{−3t}/(−5e^{−t})) 2e^{−t} dt ) e^{2t} + ( ∫ (e^{2t}/(−5e^{−t})) 2e^{−t} dt ) e^{−3t}
     = (2/5) ( ∫ e^{−3t} dt ) e^{2t} − (2/5) ( ∫ e^{2t} dt ) e^{−3t}
     = −(2/15) e^{−t} − (2/10) e^{−t} = −(1/3) e^{−t}.

Therefore, the general integral is

ϕ(t) = c₁e^{2t} + c₂e^{−3t} − (1/3)e^{−t}, c₁, c₂ ∈ ℝ.
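A symbolic check of the particular solution, as a sketch assuming SymPy is available:

    # Check that U(t) = -e^{-t}/3 solves y'' + y' - 6y = 2 e^{-t}.
    import sympy as sp

    t = sp.symbols('t')
    U = -sp.exp(-t) / 3
    print(sp.simplify(U.diff(t, 2) + U.diff(t) - 6 * U - 2 * sp.exp(-t)))  # 0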
9.4.1 Applications

Damped oscillations
Consider the equation of motion of a mass m subjected to an elastic force (of elastic constant κ) in a viscous medium (viscosity ν):

mx''(t) = −κx(t) − νx'(t), ⟺ mx''(t) + νx'(t) + κx(t) = 0.

As we said in the introduction, this is a model for a point mass m moving along a straight line (position x(t)) subject to an elastic force F = −κx and to a friction F = −νv = −νx' (here we are assuming that the mass is light; for heavy masses we should consider a force of the type F = −νv²). The equation is just a linear second order equation with constant coefficients. Its characteristic equation is

mλ² + νλ + κ = 0.

Because ∆ = ν² − 4mκ, we have that if ∆ > 0, that is if ν² > 4mκ, i.e. ν > √(4mκ), the fundamental solutions are of exponential type, so we have no oscillations. The same happens when ∆ = 0. To have oscillations we need ∆ < 0, that is ν < √(4mκ). In this case

λ± = −ν/(2m) ± i √(−∆)/(2m),

and the fundamental system of solutions is

w₊(t) = e^{−νt/(2m)} cos( (√(−∆)/(2m)) t ), w₋(t) = e^{−νt/(2m)} sin( (√(−∆)/(2m)) t ).

In this case the oscillations are attenuated by the exponential e^{−νt/(2m)}, which goes to 0 as t → +∞: the larger the viscosity ν relative to the mass m, the stronger the attenuation.
Resonance

Consider the equation

y''(t) = −k²y(t) + sin(kt).   (9.4.7)
This equation is often used as a model for the so-called phenomenon of resonance. For instance, it was used in the case of the Tacoma bridge, replacing the (non linear) equation for the angle θ(t) with its linear approximation. Just to give an idea of the argument, recall the equation

θ'' = −(κ/(mℓ²)) sin(2θ) − (µ/(mℓ²)) θ' + (1/(mℓ²)) f(t).

The idea is that for small oscillations, that is for θ(t) small, sin(2θ) ∼ 2θ, so the previous equation "linearizes" to

θ'' = −(2κ/(mℓ²)) θ − (µ/(mℓ²)) θ' + (1/(mℓ²)) f(t).

Using an oscillating force, rescaling, and setting the friction to 0, we get something like (9.4.7). The interesting aspect of this equation is that it presents unbounded solutions. To show this, let's first compute the general integral. The characteristic equation is λ² = −k², that is λ = ±ik, therefore the fundamental system of solutions for the homogeneous equation is w₁(t) = cos(kt), w₂(t) = sin(kt). The wronskian is W(t) ≡ k and a particular solution is

U(t) = −( ∫ (sin(kt)/k) sin(kt) dt ) cos(kt) + ( ∫ (cos(kt)/k) sin(kt) dt ) sin(kt).
Now

∫ sin(kt)² dt = −(1/k) ∫ sin(kt) (cos(kt))' dt = −(1/k) [ sin(kt)cos(kt) − k ∫ cos(kt)² dt ]

= −(1/(2k)) sin(2kt) + ∫ (1 − sin(kt)²) dt = −(1/(2k)) sin(2kt) + t − ∫ sin(kt)² dt,

and by this we have

∫ sin(kt)² dt = t/2 − sin(2kt)/(4k).
Moreover

(1/k) ∫ cos(kt) sin(kt) dt = (1/(2k)) ∫ sin(2kt) dt = −cos(2kt)/(4k²).

In conclusion

U(t) = −( t/(2k) − sin(2kt)/(4k²) ) cos(kt) − ( cos(2kt)/(4k²) ) sin(kt).

By this the conclusion is evident: the term −(t/(2k))cos(kt) produces oscillations of linearly growing amplitude, so U is unbounded.
[Figure 9.1: Plot of U.]
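To see the unboundedness concretely, here is a sketch (plain Python, illustrative only; k = 1 is a sample value) sampling U(t): the amplitude grows like t/(2k).

    # Sampling the particular solution U of y'' = -k^2 y + sin(kt) for k = 1:
    # the oscillation amplitude grows like t/(2k), so U is unbounded.
    import math

    k = 1.0
    def U(t):
        return (-(t / (2 * k) - math.sin(2 * k * t) / (4 * k * k)) * math.cos(k * t)
                - math.cos(2 * k * t) / (4 * k * k) * math.sin(k * t))

    for t in (10, 100, 1000):
        print(f"t = {t:5}:  U(t) = {U(t):12.4f}   t/(2k) = {t / (2 * k):10.1f}")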
9.4.2 Cauchy problem
We have seen that the general solution of the equation

y'' + ay' + by = f(t)   (9.4.8)

has the form

y = c₁w₁ + c₂w₂ + U,

where c₁, c₂ ∈ ℝ are constants, (w₁, w₂) is a fundamental system of solutions and U is a particular solution of the equation. To determine a unique solution means to determine a unique couple c₁, c₂, and for this it is reasonable that two conditions will be needed. An interesting problem for many applications is the so called initial values problem or Cauchy problem:

CP(t₀, y₀, y₀'):  { y'' + ay' + by = f(t),
                  { y(t₀) = y₀,
                  { y'(t₀) = y₀'.
In physical terms, this corresponds to looking for a trajectory passing at a certain time t₀ through y₀ with a fixed velocity y₀'. Now, it turns out that

Theorem 9.4.5. Let a, b, f ∈ C(I), and let (w₁, w₂) be a fundamental system of solutions for the homogeneous equation associated to (9.4.8) (that is, with f ≡ 0). Then the Cauchy Problem CP(t₀, y₀, y₀') has a unique solution for any t₀ ∈ I and y₀, y₀' ∈ ℝ.
Proof. — It is easy: we have to prove that there exists a unique couple c₁, c₂ such that

y = c₁w₁ + c₂w₂ + U

is a solution of CP(t₀, y₀, y₀'). Just impose the two conditions: we get

{ c₁w₁(t₀) + c₂w₂(t₀) + U(t₀) = y₀,
{ c₁w₁'(t₀) + c₂w₂'(t₀) + U'(t₀) = y₀',

⟺

{ c₁w₁(t₀) + c₂w₂(t₀) = y₀ − U(t₀),
{ c₁w₁'(t₀) + c₂w₂'(t₀) = y₀' − U'(t₀).

Now, look at this as a 2×2 system. The coefficient matrix is the dear old wronskian matrix, which is, under our assumptions, invertible. Therefore the system has a unique solution c₁, c₂.
Example 9.4.6. Find the solution of the Cauchy Problem

{ y''(t) + y(t) = e^t, t ∈ ℝ,
{ y(0) = 0,
{ y'(0) = 1.
Sol. — The characteristic equation is λ² + 1 = 0, that is λ = ±i. Therefore w₁(t) = cos t, w₂(t) = sin t is a fundamental system of solutions for the homogeneous equation. The wronskian is

W(t) = det [  cos t   sin t
             −sin t   cos t ] = (cos t)² + (sin t)² = 1.

Therefore a particular solution, by the Lagrange formula, is

U(t) = −( ∫ e^t sin t dt ) cos t + ( ∫ e^t cos t dt ) sin t = −(e^t/2)(sin t − cos t) cos t + (e^t/2)(cos t + sin t) sin t = e^t/2.
Hence the general integral is

ϕ(t) = c₁cos t + c₂sin t + e^t/2.

Now, imposing the initial conditions we get the system

{ c₁ + 1/2 = 0,
{ c₂ + 1/2 = 1,

⟺ c₁ = −1/2, c₂ = 1/2, ⟹ ϕ(t) = (1/2)( sin t − cos t + e^t ).
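A final numerical cross-check (Python/SciPy sketch): integrating the equivalent first order system y' = v, v' = e^t − y from (y, v)(0) = (0, 1) reproduces ϕ(t) = (sin t − cos t + e^t)/2.

    # Check phi(t) = (sin t - cos t + e^t)/2 against a numerical solution of
    # y'' + y = e^t, y(0) = 0, y'(0) = 1, written as a first order system.
    import math
    from scipy.integrate import solve_ivp

    def rhs(t, z):
        y, v = z
        return [v, math.exp(t) - y]          # y' = v, v' = e^t - y

    sol = solve_ivp(rhs, (0, 2), [0.0, 1.0], dense_output=True,
                    rtol=1e-10, atol=1e-12)

    for t in (0.5, 1.0, 2.0):
        exact = (math.sin(t) - math.cos(t) + math.exp(t)) / 2
        print(f"t = {t}:  numeric = {sol.sol(t)[0]:.8f}   exact = {exact:.8f}")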
9.5 Exercises
Exercise 9.5.1. Find the general integral of the following equations:

1. y' + (cos t)y = (1/2) sin(2t), t ∈ ℝ.
2. y' − (t/(1 − t²)) y = t, t ∈ ]−1,1[.
3. y' + 2t y = 2t³, t ∈ ℝ.
4. y' − (1/t) y + (log t)/t = 0, t ∈ ]0,+∞[.
5. y' + (tan t)y = t³, t ∈ ]−π/2, π/2[.
6. y' + 2t y = te^{−t²}, t ∈ ℝ.
7. y' + y = sin t, t ∈ ℝ.
8. y' + (cos t)y = (cos t)², t ∈ ℝ.
9. y' = (2t/(t² + 1)) y + 2t(t² + 1), t ∈ ℝ.

Exercise 9.5.2. Solve the Cauchy Problem

{ y'(t) + (3t²/(t³ + 5)) y(t) = ∛t,
{ y(0) = 1.

Exercise 9.5.3. Consider the equation
π
1
, t ∈ 0,
.
sin t
2
i) Find the general integral. ii) Is it true that for every solution limt→0+ y(t) = −∞ ? iii) Are there solutions such that ∃ limt→ π − y(t) ∈ R.
2
In this case, what is the value of the limit?
y 0 − (tan t)y =
Exercise 9.5.4. Consider the equation

y' + (sin t)y = sin t, t ∈ ℝ.

i) Find its general integral. ii) Are there solutions y such that ∃ lim_{t→+∞} y(t) ∈ ℝ? iii) Find the solution of the Cauchy Problem y(π/2) = 1.
Exercise 9.5.5. Consider the equation

y'(t) = −(1/t) y(t) + arctan t.

Find its general integral on ]−∞,0[ and on ]0,+∞[. Does there exist a y : ℝ → ℝ which is a solution on both ]−∞,0[ and ]0,+∞[? In this case, what is y(0)?
Exercise 9.5.6. Solve the Cauchy problems

1. { y' = ((y² − y − 2)/3) arcsin t,  y(0) = 3.
2. { y' = y√(2y − 1)/cosh t,  y(0) = 1.
3. { y' = cos²(2y)/( t(2 − log²t) ),  y(1) = π/2.
4. { y' = (e^t + 1) y √(1 − y)/(e^t + 2),  y(0) = 1/2.
Exercise 9.5.7. Find, as a function of the initial condition y(0) = y₀, the solution of the Cauchy problem

{ y' = 4y(1 − y),
{ y(0) = y₀.

Quickly plot a qualitative graph of the various solutions.
Exercise 9.5.8. Consider the Cauchy problem

{ y' = y(1 − y²),
{ y(0) = 1/2.

Determine the implicit form of the solution. Is it true that the solution is defined for all times t ∈ ℝ?
Exercise 9.5.9. For each of the following equations find a fundamental system of solutions and write the general integral.
1. y'' − 3y' + 2y = 0.  2. y'' − 2y' + 2y = 0.  3. y'' − 4y' + 3y = 0.  4. y'' + y' = 0.  5. y'' − y' + y = 0.
Exercise 9.5.10. Find the general integral of the following equations:

1. y''(t) + y'(t) − 6y(t) = 2e^{−t}.  2. y'' − y' + y = e^t.  3. y'' + 4y' + 2y = t².  4. y'' + 2y' = e^t.
5. y'' − y = cos t.  6. y'' + y = 1/cos t.  7. y'' + 2y' + 2y = 2t + 3 + e^{−t}.  8. y'' − 2y' + 2y = e^t cos t.
Exercise 9.5.11. For each of the following equations find the general integral and the solution of the Cauchy Problem with initial conditions y(0) = y'(0) = 0.

1. y'' − y = t.  2. y'' + 4y = e^t.  3. y'' + y = t.  4. y'' + y' − 6y = −4e^t.  5. y'' − 8y' + 17y = 2t + 1.  6. y'' + y = 1/cos t.
Exercise 9.5.12. Consider the following differential equation:

y''(t) − y'(t) = te^t, t ∈ ℝ.

i) Find its general integral. ii) Are there solutions such that lim_{t→+∞} y(t) ∈ ℝ? iii) Find the solution of the Cauchy Problem y(0) = 1, y'(0) = 0.
Exercise 9.5.13. Find the general integral of the equation

y''(t) − 5y'(t) − 6y(t) = 16e^{−2t}, t ∈ ℝ.

Hence, say if there exists a solution such that y(0) = 0 and lim_{t→+∞} y(t) = 0.
Exercise 9.5.14. Consider the equation

y''(t) + y'(t) = t + cos t, t ∈ ℝ.

Find its general integral. Say if there are solutions y such that ∃ lim_{t→+∞} y(t) ∈ ℝ. Say if there are solutions y such that y(0) = 0 and lim_{t→−∞} y(t) = +∞.
Exercise 9.5.15. Consider the equation

y''(t) + y(t) = 1/cos t, t ∈ ]−π/2, π/2[.

Find its general integral. Is it true that any solution of the equation is such that lim_{t→(π/2)−} y(t) = +∞? Say if there are solutions such that y(t) ∼ Ct² as t → 0 (for some C ≠ 0).
Exercise 9.5.16. Find the general integral of the equation

y''(t) + 4y'(t) + 4y(t) = e^{−2t}/t², t ∈ ]0,+∞[.

Are there solutions of the equation such that ∃ lim_{t→0+} y(t)?
Modeling Problems
Exercise 9.5.17. A radioactive material decays by 20% in 10 days. Find its halving time.
Exercise 9.5.18. In a hospital a radioactive substance accumulates in a vessel at a rate of 2 m³ per month. The radioactivity decays at a rate estimated to be proportional to the quantity present in the vessel, according to a constant of proportionality k = −1. Knowing that initially the vessel is empty, find the total amount of radioactive substance contained when the vessel is full.
Exercise 9.5.19. The water in a pool with square base of side 10 m and depth 2 m evaporates at a rate of 5 liters per hour. The bottom of the pool is porous and leaks at a rate ph, expressed in liters/minute, where p is a constant and h is the level of the water in the pool. Starting with the pool full of water, it takes 24 hours for the pool to become completely empty. The problem is: determine p.
Exercise 9.5.20. A vessel has a capacity of 10 liters. A valve is opened on the bottom of the vessel, with hourly flow proportional, with constant 3, to the cube root of the total volume present inside the vessel. If initially the vessel is full, compute how long it takes for the vessel to be completely empty. Suppose now that a constant flux F of fluid is infused into the vessel. Are there values of F such that the fluid in the vessel reaches, for long times, an equilibrium?
Exercise 9.5.21. The queue formed after a car accident on a highway shortens at a rate inversely proportional to the square root of the length of the queue, with some constant c of proportionality. Knowing that it takes 10 min to reduce a queue of 1 km to half its length, how long does it take to halve a queue of 2 km? Does a queue reduce to 0 in finite time?
Exercise 9.5.22. In a fish farm the population of fish is assumed to follow a logistic evolution

y'(t) = 0.1 y(t) − b y(t)²,

where b is to be determined. Knowing that initially there are 500 kg of fish and after one year there are 1250 kg of fish, determine b.
Exercise 9.5.23 (?). A particle of mass m falls under the action of gravity and air friction, in such a way that the equation of motion is

ma(t) = −mg − mkv(t).

Find an equation for v as a function of the height x and find v = v(x). This is the case of a light particle. For a heavy particle the friction changes to −mkv(t)². What can you say in this case?
Exercise 9.5.24 (??). A swimmer wants to cross a river of width ℓ. His starting point and arrival point are aligned orthogonally to the direction of the river. The water in the river flows at constant speed v. The swimmer wants to follow a trajectory always directed towards his destination, with constant speed V < v. Will he reach the other side of the river? Describe the trajectory of the swimmer through a suitable differential equation and find under which conditions the swimmer will be successful.
Exercise 9.5.25 (?). A ship of mass m moves from rest under a constant propelling force mf and against a resistance mkv². Determine the speed v = v(a) as a function of the covered distance a. Suppose that, after a fixed distance a, the engines are reversed. What is the distance necessary to stop the ship?
Exercise 9.5.26. A particle of unit mass moves along the x-axis under the attraction of a force of magnitude 4x towards the point x = 0 and a resistance equal in magnitude to twice the velocity. The particle is released at rest at x = a. Determine all the positions of instantaneous rest.
Exercise 9.5.27. A mass m is attached to two vertical springs with the same elastic constant k. Initially the mass is at the rest position of the springs. Determine the motion of the mass, taking gravity into account.