Real Analysis

1. State and prove the generalized Mean Value Theorem.
In calculus, the mean value theorem states, roughly: given an arc between two endpoints,
there is at least one point at which the tangent to the arc is parallel to the secant through its
endpoints.
The theorem is used to prove global statements about a function on an interval starting from local
hypotheses about derivatives at points of the interval.
More precisely, if a function f is continuous on the closed interval [a, b], where a < b, and
differentiable on the open interval (a, b), then there exists a point c in (a, b) such that

f'(c) = (f(b) − f(a)) / (b − a).
This theorem can be understood intuitively by applying it to motion: If a car travels one
hundred miles in one hour, then its average speed during that time was 100 miles per hour.
To get at that average speed, the car either has to go at a constant 100 miles per hour during
that whole time, or, if it goes slower at one moment, it has to go faster at another moment as
well (and vice versa), in order to still end up with an average of 100 miles per hour.
Therefore, the Mean Value Theorem tells us that at some point during the journey, the car
must have been traveling at exactly 100 miles per hour; that is, it was traveling at its average
speed.
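The theorem can be checked concretely. A minimal numerical sketch, with f(x) = x³ on [0, 2] as our own choice of example:

```python
# Numerical illustration of the (ordinary) mean value theorem, using our
# own choice of function f(x) = x**3 on the interval [0, 2].
a, b = 0.0, 2.0
f = lambda x: x ** 3
fprime = lambda x: 3 * x ** 2

avg_slope = (f(b) - f(a)) / (b - a)   # (8 - 0) / 2 = 4, the secant slope

# For this f we can solve f'(c) = avg_slope exactly: 3c^2 = 4, c = 2/sqrt(3)
c = (avg_slope / 3) ** 0.5
assert a < c < b                      # c lies strictly inside (a, b)
assert abs(fprime(c) - avg_slope) < 1e-9
```

Here c = 2/√3 ≈ 1.155 is the point where the instantaneous slope equals the average slope over the interval.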
A special case of this theorem was first described by Parameshvara (1370–1460) from
the Kerala school of astronomy and mathematics in his commentaries
on Govindasvāmi and Bhaskara II.[2] The mean value theorem in its modern form was later
stated by Augustin Louis Cauchy (1789–1857). It is one of the most important results
in differential calculus, as well as one of the most important theorems in mathematical
analysis, and is essential in proving the fundamental theorem of calculus. The mean value
theorem follows from the more specific statement of Rolle's theorem, and can be used to
prove the more general statement of Taylor's theorem (with Lagrange form of the remainder
term).
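Since the question asks for the generalized (Cauchy) mean value theorem, here is a sketch of its usual statement and proof via Rolle's theorem:

```latex
% Generalized (Cauchy) Mean Value Theorem -- statement and proof sketch.
\textbf{Theorem.} Let $f,g$ be continuous on $[a,b]$ and differentiable on
$(a,b)$. Then there exists $c \in (a,b)$ such that
\[
  \bigl(f(b)-f(a)\bigr)\,g'(c) \;=\; \bigl(g(b)-g(a)\bigr)\,f'(c).
\]
If $g'$ is never zero on $(a,b)$ and $g(a) \neq g(b)$, this can be written
\[
  \frac{f'(c)}{g'(c)} \;=\; \frac{f(b)-f(a)}{g(b)-g(a)}.
\]
\textbf{Proof sketch.} Define
\[
  h(x) = \bigl(f(b)-f(a)\bigr)\,g(x) - \bigl(g(b)-g(a)\bigr)\,f(x).
\]
Then $h$ is continuous on $[a,b]$, differentiable on $(a,b)$, and
$h(a) = f(b)g(a) - f(a)g(b) = h(b)$, so Rolle's theorem gives a point
$c \in (a,b)$ with $h'(c) = 0$, which is exactly the claimed identity.
Taking $g(x) = x$ recovers the ordinary mean value theorem stated above.
```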
2.
If f and g are functions of bounded variation on [a, b], prove that f + g is of
bounded variation.
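One standard argument: the triangle inequality bounds the variation of f + g over any partition. A sketch:

```latex
% f + g has bounded variation when f and g do: a proof sketch.
Let $P : a = x_0 < x_1 < \dots < x_n = b$ be any partition of $[a,b]$.
By the triangle inequality,
\[
  \sum_{i=1}^{n} \bigl| (f+g)(x_i) - (f+g)(x_{i-1}) \bigr|
  \;\le\;
  \sum_{i=1}^{n} \bigl| f(x_i) - f(x_{i-1}) \bigr|
  + \sum_{i=1}^{n} \bigl| g(x_i) - g(x_{i-1}) \bigr|
  \;\le\; V_a^b(f) + V_a^b(g).
\]
Taking the supremum over all partitions $P$ gives
$V_a^b(f+g) \le V_a^b(f) + V_a^b(g) < \infty$,
so $f+g$ is of bounded variation on $[a,b]$.
```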
3. State and prove the Cauchy condition for infinite products.
Convergence condition for an infinite product.
Consider the sequence of partial products u_1, u_1u_2, u_1u_2u_3, ... In complex analysis,
one often uses the definition of the convergence of an infinite product ∏_{k=1}^∞ u_k
in which the case lim_{k→∞} u_1u_2···u_k = 0 is excluded. Then one has the

Theorem 1. The infinite product ∏_{k=1}^∞ u_k of the non-zero complex
numbers u_1, u_2, ... is convergent iff for every positive number ε there
exists a positive integer n_ε such that the condition

|u_{n+1}u_{n+2}···u_{n+p} − 1| < ε

is true for every positive integer p as soon as n ≥ n_ε.

Corollary. If the infinite product converges, then we necessarily
have lim_{k→∞} u_k = 1. (Cf. the necessary condition of convergence of a
series.)

When the infinite product converges, we say that the value of the
infinite product is equal to lim_{k→∞} u_1u_2···u_k.
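The Cauchy condition can be illustrated numerically. A minimal sketch, with the convergent product ∏(1 + 1/k²) as our own choice of example (its value, sinh(π)/π, is a classical identity):

```python
import math

# Numerical illustration of the Cauchy condition, using the convergent
# infinite product prod_{k>=1} (1 + 1/k**2); this particular product and
# its known value sinh(pi)/pi are our own choice of example.

def u(k):
    return 1.0 + 1.0 / k**2

def partial_product(n):
    p = 1.0
    for k in range(1, n + 1):
        p *= u(k)
    return p

# The partial products tend to the non-zero limit sinh(pi)/pi.
P = partial_product(100000)

# Cauchy condition: for large n, every block u_{n+1}...u_{n+p} is close
# to 1, uniformly in the block length p.
n = 1000
for p_len in (1, 10, 1000, 100000):
    block = 1.0
    for k in range(n + 1, n + p_len + 1):
        block *= u(k)
    assert abs(block - 1.0) < 1e-2
```

The loop checks exactly the theorem's condition: the tail blocks stay near 1 no matter how many factors they contain, while the partial products themselves stay away from 0.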
4. Prove that every absolutely continuous function is the indefinite integral of its derivative.
5. State and prove Bernstein's theorem.
This proof is attributed to Julius König.[1]
Assume without loss of generality that A and B are disjoint. For any a in A or b in B we can form a
unique two-sided sequence of elements that are alternately in A and B, by repeatedly
applying f and g to go right and g^{-1} and f^{-1} to go left (where defined).
For any particular a, this sequence may terminate to the left or not, at a point
where f^{-1} or g^{-1} is not defined.
Call such a sequence (and all its elements) an A-stopper, if it stops at an element of A, or
a B-stopper if it stops at an element of B. Otherwise, call it doubly infinite if all the elements
are distinct or cyclic if it repeats.
Because f and g are injective, each a in A and b in B belongs to exactly one such
sequence up to identity (if an element occurs in two sequences, all elements to its left
and to its right must be the same in both, by definition).
By the above observation, the sequences form a partition of the whole of the disjoint union
of A and B, hence it suffices to produce a bijection between the elements of A and B in each
of the sequences separately.
For an A-stopper, the function f is a bijection between its elements in A and its elements in B.
For a B-stopper, the function g is a bijection between its elements in B and its elements in A.
For a doubly infinite sequence or a cyclic sequence, either f or g will do.
In the alternate proof, C_n can be interpreted as the set of n-th elements of A-stoppers (starting from 0).
Indeed, C_0 is the set of elements for which g^{-1} is not defined, which is the set of starting
elements of A-stoppers; C_1 is the set of elements x for which f^{-1}(g^{-1}(x)) is defined
but g^{-1}(f^{-1}(g^{-1}(x))) is not, i.e. the set of second elements of A-stoppers, and so on.
The bijection h is defined as f on C and g^{-1} everywhere else, which means f on A-stoppers
and g^{-1} everywhere else, consistently with the proof above.
Visualization
The definition of h can be visualized with the following diagram.
Displayed are parts of the (disjoint) sets A and B together with parts of the mappings f and g.
If the set A ∪ B, together with the two maps, is interpreted as a directed graph, then this
bipartite graph has several connected components.
These can be divided into four types: paths extending infinitely to both directions, finite cycles
of even length, infinite paths starting in the set A, and infinite paths starting in the set B (the
path passing through the element a in the diagram is infinite in both directions, so the
diagram contains one path of every type). In general, it is not possible to decide in a finite
number of steps which type of path a given element of A or B belongs to.
The set C defined above contains precisely the elements of A which are part of an infinite
path starting in A. The map h is then defined in such a way that for every path it yields a
bijection that maps each element of A in the path to an element of B directly before or after it
in the path. For the path that is infinite in both directions, and for the finite cycles, we choose
to map every element to its predecessor in the path.
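The chain construction above can be traced concretely. A minimal sketch in Python, assuming the toy injections f(a) = 2a and g(b) = 2b + 1 on the natural numbers (our own choice of maps and encoding; for these maps every chain is a stopper):

```python
# A concrete illustration of the chain construction, with A = B = the
# natural numbers and the toy injections f(a) = 2a (A -> B) and
# g(b) = 2b + 1 (B -> A); these maps and the tagged-pair encoding are
# our own choices, not part of the original proof.
# Elements of A are tagged ("A", n), elements of B are tagged ("B", n).

def f(a):                        # injective A -> B
    return ("B", 2 * a[1])

def g(b):                        # injective B -> A
    return ("A", 2 * b[1] + 1)

def f_inv(b):                    # partial inverse of f (defined on even B)
    return ("A", b[1] // 2) if b[1] % 2 == 0 else None

def g_inv(a):                    # partial inverse of g (defined on odd A)
    return ("B", (a[1] - 1) // 2) if a[1] % 2 == 1 else None

def walk_left(x):
    """Follow g_inv / f_inv leftwards until a step is undefined.

    For these particular f and g the walk always terminates (the integer
    label never increases and cannot cycle), so every chain is a stopper.
    """
    while True:
        step = g_inv(x) if x[0] == "A" else f_inv(x)
        if step is None:
            return x
        x = step

def h(a):
    """The bijection of the proof: f on A-stoppers, g_inv on B-stoppers."""
    return f(a) if walk_left(a)[0] == "A" else g_inv(a)
```

For instance, h(("A", 0)) = ("B", 0) since its chain stops in A, so f applies, while h(("A", 3)) = ("B", 1) since its chain stops in B, so g^{-1} applies; by the theorem, h is a bijection, which can be spot-checked on an initial segment.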
6. State and prove Littlewood's third principle.
Most of his work was in the field of mathematical analysis. He began research under the
supervision of Ernest William Barnes, who suggested that he attempt to prove the Riemann
hypothesis: Littlewood showed that if the Riemann hypothesis is true then the Prime Number
Theorem follows and obtained the error term. This work won him his Trinity fellowship.
He coined Littlewood's law, which states that individuals can expect miracles to happen to them,
at the rate of about one per month.
He continued to write papers into his eighties, particularly in analytical areas of what would
become the theory of dynamical systems.
He is also remembered for his book of reminiscences, A Mathematician's Miscellany (new edition
published in 1986).
Among his own Ph.D. students were Sarvadaman Chowla, Harold Davenport and Donald C.
Spencer. Spencer reported that in 1941 when he (Spencer) was about to get on the boat that
would take him home to the United States, Littlewood reminded him: "n, n alpha, n beta!"
(referring to Littlewood's conjecture).
His collaborative work, carried out by correspondence, covered fields in Diophantine
approximation and Waring's problem, in particular. In his other work Littlewood collaborated
with Raymond Paley on Littlewood–Paley theory in Fourier theory, and with Cyril Offord in
combinatorial work on random sums, in developments that opened up fields still intensively
studied. He worked with Mary Cartwright on problems in differential equations arising out of early
research on radar: their work foreshadowed the modern theory of dynamical
systems. Littlewood's inequality on bilinear forms was a forerunner of the
later Grothendieck tensor norm theory.
With Hardy
He collaborated for many years with G. H. Hardy. Together they devised the first Hardy–
Littlewood conjecture, a strong form of the twin prime conjecture, and the second Hardy–
Littlewood conjecture.
In a 1947 lecture, the Danish mathematician Harald Bohr said, "To illustrate to what extent Hardy
and Littlewood in the course of the years came to be considered as the leaders of recent English
mathematical research, I may report what an excellent colleague once jokingly said: 'Nowadays,
there are only three really great English mathematicians: Hardy, Littlewood, and Hardy–
Littlewood.'" [2]:xxvii
There is a story (related in the Miscellany) that at a conference Littlewood met a German
mathematician who said he was most interested to discover that Littlewood really existed, as he
had always assumed that Littlewood was a name used by Hardy for lesser work which he did not
want to put out under his own name; Littlewood apparently roared with laughter. There
are versions of this story involving both Norbert Wiener and Edmund Landau, who, it is claimed,
"so doubted the existence of Littlewood that he made a special trip to Great Britain to see the
man with his own eyes".[3]
7. State and prove chain rule for differentiation.
Elementary rules of differentiation
Unless otherwise stated, all functions are functions of real numbers (R) that return real values,
although, more generally, the formulae below apply wherever they are well defined [1][2], including the case of complex numbers (C) [3].
Differentiation is linear
Main article: Linearity of differentiation
For any functions f and g and any real numbers a and b, the derivative of the function h(x) = a f(x)
+ b g(x) with respect to x is

h'(x) = a f'(x) + b g'(x).

In Leibniz's notation this is written as:

d(af + bg)/dx = a (df/dx) + b (dg/dx).
Special cases include:
- The constant multiple rule: (a f)' = a f'
- The sum rule: (f + g)' = f' + g'
- The subtraction rule: (f − g)' = f' − g'
The product rule (Leibniz rule)
Main article: Product rule
For the functions f and g, the derivative of the function h(x) = f(x) g(x) with
respect to x is

h'(x) = f'(x) g(x) + f(x) g'(x).

In Leibniz's notation this is written

d(fg)/dx = (df/dx) g + f (dg/dx).
The chain rule
Main article: Chain rule
The derivative of the function of a function h(x) = f(g(x)) with
respect to x is

h'(x) = f'(g(x)) g'(x).

In Leibniz's notation this is written as:

dh/dx = (df/dg)(dg/dx).

However, by relaxing the interpretation of h as a function,
this is often simply written

dh/dx = (dh/dg)(dg/dx).
The inverse function rule
Main article: Inverse functions and differentiation
If the function f has an inverse function g, meaning
that g(f(x)) = x and f(g(y)) = y, then

g'(y) = 1 / f'(g(y)).

In Leibniz notation, this is written as

dx/dy = 1 / (dy/dx).
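The rules above can be spot-checked numerically with finite differences. A minimal sketch, using our own toy functions f(x) = sin x and g(x) = x² + 1:

```python
import math

# Finite-difference spot-check of the differentiation rules above, with
# our own toy functions f(x) = sin(x) and g(x) = x**2 + 1.

def num_deriv(h, x, eps=1e-6):
    """Central-difference approximation of h'(x)."""
    return (h(x + eps) - h(x - eps)) / (2 * eps)

f, fp = math.sin, math.cos                     # f and its derivative f'
g = lambda x: x ** 2 + 1
gp = lambda x: 2 * x                           # g'

x = 0.7

# product rule: (f g)'(x) = f'(x) g(x) + f(x) g'(x)
assert abs(num_deriv(lambda t: f(t) * g(t), x)
           - (fp(x) * g(x) + f(x) * gp(x))) < 1e-5

# chain rule: (f o g)'(x) = f'(g(x)) g'(x)
assert abs(num_deriv(lambda t: f(g(t)), x) - fp(g(x)) * gp(x)) < 1e-5

# inverse function rule: the inverse of g on x > 0 is G(y) = sqrt(y - 1),
# and G'(y) = 1 / g'(G(y)) = 1 / g'(x) at y = g(x)
y = g(x)
assert abs(num_deriv(lambda t: math.sqrt(t - 1), y) - 1 / gp(x)) < 1e-5
```

Each assertion compares a numerically differentiated composite against the closed form given by the corresponding rule; agreement to five decimal places is what the central-difference scheme delivers at this step size.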
8. Give an example to show that a sequence of continuous functions need not converge to a continuous function.
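One standard example is f_n(x) = x^n on [0, 1]: each f_n is continuous, but the pointwise limit is 0 on [0, 1) and 1 at x = 1, so the limit function is discontinuous at x = 1. A quick numerical check:

```python
# The classic example: f_n(x) = x**n on [0, 1]. Every f_n is continuous,
# but the pointwise limit is 0 for 0 <= x < 1 and 1 at x = 1, hence
# discontinuous at x = 1.
def f(n, x):
    return x ** n

assert f(1000, 0.5) < 1e-100   # interior points are driven to 0
assert f(1000, 1.0) == 1.0     # the endpoint stays at 1
```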