AN EXTENSION OF CHEBYSHEV’S INEQUALITY AND ITS CONNECTION WITH JENSEN’S INEQUALITY

CONSTANTIN P. NICULESCU
Abstract. The aim of this paper is to show that Jensen’s Inequality and an extension of Chebyshev’s Inequality complement one another, so that they can both be formulated in a pairing form that includes a second inequality, one providing an estimate for the classical one.
1. Introduction
The well-known fact that the derivative and the integral are inverse to each other has many interesting consequences, one of them being the duality between convexity and monotonicity. The purpose of the present paper is to relate on this basis two basic inequalities of Classical Analysis, namely those due to Jensen and Chebyshev.
Both refer to mean values of integrable functions. Restricting ourselves to the case of finite measure spaces $(X, \Sigma, \mu)$, let us recall that the mean value of any $\mu$-integrable function $f : X \to \mathbb{R}$ can be defined as
$$M(f) = \frac{1}{\mu(X)} \int_X f \, d\mu.$$
A useful remark is that the mean value of an integrable function belongs to any interval that includes its image; see [5], page 202.
In the particular case of an interval $[a, b]$, endowed with the Lebesgue measure, the two aforementioned inequalities read as follows:

Jensen’s Inequality. Suppose that $f : [a, b] \to \mathbb{R}$ is an integrable function and $\varphi$ is a convex function defined on an interval containing the image of $f$, such that $\varphi \circ f$ is integrable too. Then
$$\varphi(M(f)) \le M(\varphi \circ f).$$

Chebyshev’s Inequality. If $g, h : [a, b] \to \mathbb{R}$ are two nondecreasing functions, then
$$M(g)\,M(h) \le M(gh).$$
Our goal is to show that Jensen’s Inequality and an extension of Chebyshev’s Inequality complement one another, so that they can both be formulated in a pairing form that includes a second inequality, one providing an estimate for the classical one.
1991 Mathematics Subject Classification. Primary: 26A51, 26D15.
Key words and phrases. Convex functions, subdifferential, mean value.
Partially supported by MEN Grant 39683/1998.
Published in J. of Inequal. & Appl., 6 (2001), nr. 4, pp. 451-462. MR1888436.
2. Preliminaries
Before entering into the details we shall need some preparation on the smoothness properties of the two types of functions involved: the convex and the nondecreasing ones.
Suppose that $I$ is an interval (with interior $\operatorname{Int} I$) and $f : I \to \mathbb{R}$ is a convex function. Then $f$ is continuous on $\operatorname{Int} I$ and has finite left and right derivatives at each point of $\operatorname{Int} I$. Moreover,
$$(*) \qquad x < y \text{ in } \operatorname{Int} I \;\Rightarrow\; D^- f(x) \le D^+ f(x) \le D^- f(y) \le D^+ f(y),$$
which shows that both $D^- f$ and $D^+ f$ are nondecreasing on $\operatorname{Int} I$. It is not difficult to prove that $f$ must be differentiable except at most countably many points. See [5], pp. 271-272. On the other hand, simple examples show that at the endpoints of $I$ problems may appear even with continuity. Notice that every discontinuous convex function $f : [a, b] \to \mathbb{R}$ comes from a continuous convex function $f_0 : [a, b] \to \mathbb{R}$ whose values at $a$ and/or $b$ were enlarged. In fact, any convex function $f$ defined on an interval $I$ is either monotonic or admits a point $c$ such that $f$ is nonincreasing on $(-\infty, c] \cap I$ and nondecreasing on $[c, \infty) \cap I$.
In the case of convex functions, the role of the derivative is played by the subdifferential, a mathematical object which for a function $f : I \to \mathbb{R}$ is defined as the set $\partial f$ of all functions $\varphi : I \to [-\infty, \infty]$ such that $\varphi(\operatorname{Int} I) \subset \mathbb{R}$ and
$$f(x) \ge f(a) + (x - a)\,\varphi(a) \quad \text{for all } x, a \in I.$$
Geometrically, the subdifferential gives us the slopes of the supporting lines for the graph of $f$.
The well-known fact that a differentiable function is convex if (and only if) its derivative is nondecreasing has the following generalization in terms of the subdifferential:

Lemma 1. Let $I$ be an interval. The subdifferential of a function $f : I \to \mathbb{R}$ is non-empty if and only if $f$ is convex. Moreover, if $\varphi \in \partial f$, then
$$D^- f(x) \le \varphi(x) \le D^+ f(x)$$
for every $x \in \operatorname{Int} I$. In particular, $\varphi$ is a nondecreasing function.
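As a quick numerical sketch (our addition, not part of the original paper), the support-line inequality defining the subdifferential can be checked for the nonsmooth convex function $f(x) = |x|$, for which the sign function is an admissible selection $\varphi \in \partial f$:

```python
# Check of the support-line inequality f(x) >= f(a) + (x - a) * phi(a)
# for f(x) = |x|, with phi = sign (at 0 any value in [-1, 1] belongs to
# the subdifferential; we take 0).  A tiny tolerance absorbs float rounding.

def f(x):
    return abs(x)

def phi(x):
    return (x > 0) - (x < 0)

points = [i / 10 for i in range(-20, 21)]
ok = all(f(x) >= f(a) + (x - a) * phi(a) - 1e-12
         for x in points for a in points)
print(ok)
```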
Proof. Necessity. Suppose first that $f$ is a convex function defined on an open interval $I$. We shall prove that $D^+ f \in \partial f$. For, let $x, a \in I$ with $x \ge a$. Then
$$\frac{f((1-t)a + tx) - f(a)}{t} \le f(x) - f(a)$$
for each $t \in (0, 1]$, which yields
$$f(x) \ge f(a) + D^+ f(a)(x - a).$$
If $x \le a$, then a similar argument leads us to $f(x) \ge f(a) + D^- f(a)(x - a) \ge f(a) + D^+ f(a)(x - a)$, because $x - a \le 0$.
Analogously, we can argue that $D^- f \in \partial f$. Then from $(*)$ we infer that any $\varphi \in \partial f$ is necessarily nondecreasing.
If $I$ is not open, say $I = [a, \infty)$, we can complete the definition of $\varphi$ by letting $\varphi(a) = -\infty$.
Sufficiency. Let $x, y \in I$, $x \ne y$, and let $t \in (0, 1)$. Then
$$f(x) \ge f((1-t)x + ty) + t(x - y)\,\varphi((1-t)x + ty)$$
$$f(y) \ge f((1-t)x + ty) - (1-t)(x - y)\,\varphi((1-t)x + ty).$$
By multiplying the first inequality by $1 - t$, the second by $t$, and then adding them side by side, we get
$$(1-t)f(x) + tf(y) \ge f((1-t)x + ty),$$
i.e., $f$ is convex.
Let us consider now the case of nondecreasing functions. It is well known that each nondecreasing function $\varphi : I \to \mathbb{R}$ has at most countably many discontinuities (each of the first kind); less known is that $\varphi$ admits primitives (also called antiderivatives). According to Dieudonné [2], a primitive of $\varphi$ means any function $\Phi : I \to \mathbb{R}$ which is continuous on $I$, differentiable at each point of continuity of $\varphi$, and such that $\Phi' = \varphi$ at all those points. An example of a primitive of $\varphi$ is
$$\Phi(x) = \int_a^x \varphi(t)\,dt, \quad x \in I,$$
$a$ being arbitrarily fixed in $I$.
Because $\varphi$ is nondecreasing, an easy computation shows that $\Phi$ is a convex function. In fact, $\Phi$ is continuous, so it suffices to show that
$$\Phi\left(\frac{x+y}{2}\right) \le \frac{\Phi(x) + \Phi(y)}{2} \quad \text{for all } x, y \in I;$$
the last inequality is equivalent to
$$\int_x^{(x+y)/2} \varphi(t)\,dt \le \int_{(x+y)/2}^y \varphi(t)\,dt \quad \text{for all } x, y \in I,\; x \le y,$$
the latter being clear because $\varphi$ is nondecreasing.
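A small numerical illustration (our addition): take the nondecreasing step function $\varphi(t) = \lfloor t \rfloor$; its primitive $\Phi(x) = \int_0^x \varphi(t)\,dt$ is piecewise linear with slopes $0, 1, 2, \dots$, and midpoint convexity can be tested directly from the closed form used below.

```python
import math
import random

# phi(t) = floor(t) is nondecreasing on [0, +inf); its primitive
# Phi(x) = integral_0^x floor(t) dt has the closed piecewise-linear form:
def Phi(x):
    n = math.floor(x)
    return n * (n - 1) / 2 + n * (x - n)

random.seed(0)
ok = True
for _ in range(10_000):
    x, y = random.uniform(0, 10), random.uniform(0, 10)
    # midpoint convexity, exactly as in the argument above
    if Phi((x + y) / 2) > (Phi(x) + Phi(y)) / 2 + 1e-12:
        ok = False
print(ok)
```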
On the other hand, by the Denjoy–Bourbaki Theorem (the Generalized Mean Value Theorem), any two primitives of $\varphi$ differ by a constant. See [2], 8.7.1. Consequently, all primitives of $\varphi$ must be convex too!
3. The main results
We can now state our …rst result, the complete form of Jensen’s Inequality:
Theorem A. Let $(X, \Sigma, \mu)$ be a finite measure space and let $g : X \to \mathbb{R}$ be a $\mu$-integrable function. If $f$ is a convex function given on an interval $I$ that includes the image of $g$ and $\varphi \in \partial f$ is a function such that $\varphi \circ g$ and $g \cdot (\varphi \circ g)$ are integrable, then the following inequalities hold:
$$0 \le M(f \circ g) - f(M(g)) \le M(g \cdot (\varphi \circ g)) - M(g)\,M(\varphi \circ g).$$
Proof. The first inequality is that of Jensen, for which we give the following simple argument: If $M(g) \in \operatorname{Int} I$, then
$$f(g(x)) \ge f(M(g)) + (g(x) - M(g))\,\varphi(M(g)) \quad \text{for all } x \in X,$$
and Jensen’s inequality follows by integrating both sides over $X$. The case where $M(g)$ is an endpoint of $I$ is straightforward because in that case $g = M(g)$ a.e.
The second inequality can be obtained from
$$f(M(g)) \ge f(g(x)) + (M(g) - g(x))\,\varphi(g(x)) \quad \text{for all } x \in X,$$
by integrating both sides over $X$.
Corollary 1 (see [3] for the case where $f$ is a smooth convex function)¹. Let $f$ be a convex function defined on an open interval $I$ and let $\varphi \in \partial f$. Then
$$0 \le \sum_{k=1}^n \lambda_k f(x_k) - f\left(\sum_{k=1}^n \lambda_k x_k\right) \le \sum_{k=1}^n \lambda_k x_k \varphi(x_k) - \left(\sum_{k=1}^n \lambda_k x_k\right)\left(\sum_{k=1}^n \lambda_k \varphi(x_k)\right)$$
for every $x_1, \dots, x_n \in I$ and every $\lambda_1, \dots, \lambda_n \in [0, 1]$ with $\sum_{k=1}^n \lambda_k = 1$.
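The double inequality of Corollary 1 can be probed numerically; the following sketch (our addition) takes $f = \exp$ with $\varphi = \exp \in \partial f$ and random weights:

```python
import math
import random

random.seed(1)

def check(xs, lams):
    # Corollary 1 with f = exp, phi = exp (its derivative, hence in the
    # subdifferential): 0 <= Jensen gap <= Chebyshev-type estimate.
    m = sum(l * x for l, x in zip(lams, xs))
    jensen_gap = sum(l * math.exp(x) for l, x in zip(lams, xs)) - math.exp(m)
    estimate = (sum(l * x * math.exp(x) for l, x in zip(lams, xs))
                - m * sum(l * math.exp(x) for l, x in zip(lams, xs)))
    return -1e-12 <= jensen_gap <= estimate + 1e-12

ok = True
for _ in range(1000):
    n = random.randint(2, 6)
    xs = [random.uniform(-2, 2) for _ in range(n)]
    w = [random.random() for _ in range(n)]
    s = sum(w)
    lams = [v / s for v in w]
    ok = ok and check(xs, lams)
print(ok)
```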
Corollary 1 above allows us to say something more even in the case of the most familiar inequalities. Here are three examples, all involving concave functions; Theorem A and Corollary 1 both work in that case, simply by reversing the inequality signs.
The first example concerns the sine function and improves on a well-known inequality from Trigonometry: If $A, B, C$ are the angles of a triangle (expressed in radians), then
$$0 \le \frac{3\sqrt{3}}{2} - \sum \sin A \le \sum \left(\frac{\pi}{3} - A\right) \cos A,$$
i.e.,
$$\frac{3\sqrt{3}}{2} - \sum \left(\frac{\pi}{3} - A\right) \cos A \le \sum \sin A \le \frac{3\sqrt{3}}{2}.$$
To get a feeling for the lower estimate for $\sum \sin A$, just test the case of the triangle with angles $A = \pi/2$, $B = \pi/3$ and $C = \pi/6$!
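The displayed chain can be tested numerically; the sketch below (our addition) uses the concave version of Corollary 1 with $f = \sin$ and $\varphi = \cos$ on random triangles:

```python
import math
import random

random.seed(2)
ok = True
for _ in range(10_000):
    # random triangle: cut (0, pi) at two points, so A + B + C = pi
    u, v = sorted(random.uniform(0, math.pi) for _ in range(2))
    A, B, C = u, v - u, math.pi - v
    if min(A, B, C) <= 1e-9:          # skip degenerate triangles
        continue
    gap = 3 * math.sqrt(3) / 2 - (math.sin(A) + math.sin(B) + math.sin(C))
    est = sum((math.pi / 3 - t) * math.cos(t) for t in (A, B, C))
    if not (-1e-12 <= gap <= est + 1e-12):
        ok = False
print(ok)
```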
Our second example concerns the function $\ln$ and improves on the AM–GM Inequality:
$$\sqrt[n]{x_1 \cdots x_n} \le \frac{x_1 + \cdots + x_n}{n} \le \sqrt[n]{x_1 \cdots x_n} \cdot \exp\left(\left(\frac{1}{n}\sum_{k=1}^n x_k\right)\left(\frac{1}{n}\sum_{k=1}^n \frac{1}{x_k}\right) - 1\right).$$
The exponent in the right-hand side can be further evaluated via a classical inequality due to P. Schweitzer [7] (as strengthened by V. Cîrtoaje [1]), so we can state the AM–GM Inequality as follows: If $0 < m \le x_1, \dots, x_n \le M$, then
$$\frac{x_1 + \cdots + x_n}{n} \le \sqrt[n]{x_1 \cdots x_n} \cdot \exp\left(\frac{(M+m)^2}{4Mm} - \frac{[1 + (-1)^{n-1}](M-m)^2}{8Mmn^2} - 1\right).$$
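The whole chain — geometric mean, arithmetic mean, and the two exponential upper bounds — can be stress-tested numerically. The following sketch is our addition; in particular, the sign of the $[1 + (-1)^{n-1}]$ correction term is taken as reconstructed above (it vanishes for even $n$ and strengthens Schweitzer's bound for odd $n$):

```python
import math
import random

random.seed(3)
ok = True
for _ in range(5000):
    n = random.randint(2, 8)
    xs = [random.uniform(0.5, 4.0) for _ in range(n)]
    A = sum(xs) / n                      # arithmetic mean
    G = math.prod(xs) ** (1 / n)         # geometric mean
    R = sum(1 / x for x in xs) / n       # mean of the reciprocals
    m, M = min(xs), max(xs)
    bound1 = G * math.exp(A * R - 1)     # Theorem A estimate for ln
    corr = (1 + (-1) ** (n - 1)) * (M - m) ** 2 / (8 * M * m * n * n)
    bound2 = G * math.exp((M + m) ** 2 / (4 * M * m) - corr - 1)
    if not (G <= A + 1e-9 and A <= bound1 + 1e-9 and bound1 <= bound2 + 1e-9):
        ok = False
print(ok)
```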
Stirling’s Formula suggests the possibility of further improvement, a subject
which will be considered elsewhere.
Theorem A also allows us to estimate tricky integrals such as
$$I = \frac{4}{\pi} \int_0^{\pi/4} \ln(1 + \tan x)\,dx = \frac{1}{2}\ln 2 = 0.34657\ldots$$
By Jensen’s Inequality,
$$\frac{4}{\pi} \int_0^{\pi/4} \ln(1 + \tan x)\,dx \le \ln\left(\frac{4}{\pi} \int_0^{\pi/4} (1 + \tan x)\,dx\right),$$
¹ After the publication of this paper we learned that the case of smooth convex functions was first considered by S. S. Dragomir and N. M. Ionescu in their paper Some converse of Jensen’s inequality and applications, Rev. Anal. Numér. Théor. Approx. 23 (1994), 71-78.
which yields a pretty good upper bound for $I$ because the difference between the two sides is (approximately) $1.8952 \times 10^{-2}$. Notice that an easy computation shows that
$$\int_0^{\pi/4} (1 + \tan x)\,dx = \frac{\pi}{4} + \frac{1}{2}\ln 2.$$
Theorem A allows us to indicate a valuable lower bound for $I$; precisely,
$$I \ge \ln\left(\frac{4}{\pi}\int_0^{\pi/4} (1 + \tan x)\,dx\right) + 1 - \left(\frac{4}{\pi}\int_0^{\pi/4} (1 + \tan x)\,dx\right) \cdot \frac{4}{\pi}\int_0^{\pi/4} (1 + \tan x)^{-1}\,dx.$$
In fact, Maple V4 shows that $I$ exceeds this lower bound by $1.9679 \times 10^{-2}$.
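Since all integrals involved here have closed forms, the two estimates can be reproduced without a computer algebra system (our sketch, using $\int_0^{\pi/4} \tan x\,dx = \frac{1}{2}\ln 2$ and $\int_0^{\pi/4} \frac{dx}{1+\tan x} = \frac{\pi}{8} + \frac{1}{4}\ln 2$):

```python
import math

# Closed forms: I = (1/2) ln 2,
# (4/pi) * int_0^{pi/4} (1 + tan x) dx   = 1 + 2 ln 2 / pi,
# (4/pi) * int_0^{pi/4} dx / (1 + tan x) = 1/2 + ln 2 / pi.
I = 0.5 * math.log(2)
Mg = 1 + 2 * math.log(2) / math.pi       # mean value of g = 1 + tan x
Minv = 0.5 + math.log(2) / math.pi       # mean value of 1/g
upper = math.log(Mg)                     # Jensen upper bound for I
lower = math.log(Mg) + 1 - Mg * Minv     # Theorem A lower bound for I
print(round(upper - I, 6), round(I - lower, 6))
```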
Using the aforementioned duality between the convex functions and the nondecreasing ones, we can infer from Theorem A the following result:

Theorem B (The extension of Chebyshev’s Inequality). Let $(X, \Sigma, \mu)$ be a finite measure space, let $g : X \to \mathbb{R}$ be a $\mu$-integrable function and let $\varphi$ be a nondecreasing function given on an interval that includes the image of $g$ and such that $\varphi \circ g$ and $g \cdot (\varphi \circ g)$ are integrable functions. Then for every primitive $\Phi$ of $\varphi$ such that $\Phi \circ g$ is integrable, the following inequalities hold true:
$$0 \le M(\Phi \circ g) - \Phi(M(g)) \le M(g \cdot (\varphi \circ g)) - M(g)\,M(\varphi \circ g).$$
In order to show how Theorem B yields Chebyshev’s Inequality we have to consider two cases. The first one concerns the situation where $g : [a, b] \to \mathbb{R}$ is increasing and $h : [a, b] \to \mathbb{R}$ is nondecreasing. In that case we apply Theorem B to $g$ and $\varphi = h \circ g^{-1}$. When both $g$ and $h$ are nondecreasing, we shall consider increasing perturbations of $g$, e.g., $g + \varepsilon x$ for $\varepsilon > 0$. By the previous case,
$$M((g + \varepsilon x)h) \ge M(g + \varepsilon x)\,M(h)$$
for each $\varepsilon > 0$, and it remains to take the limit as $\varepsilon \to 0+$.
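The conclusion is the classical Chebyshev inequality for similarly ordered data, which is easy to test in discrete form (our sketch):

```python
import random

random.seed(4)
ok = True
for _ in range(1000):
    n = 100
    # two nondecreasing sequences, playing the roles of g and h
    g = sorted(random.uniform(-1, 1) for _ in range(n))
    h = sorted(random.uniform(-1, 1) for _ in range(n))
    mg = sum(g) / n
    mh = sum(h) / n
    mgh = sum(a * b for a, b in zip(g, h)) / n
    # Chebyshev: M(gh) >= M(g) M(h)
    if mgh < mg * mh - 1e-12:
        ok = False
print(ok)
```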
The following two inequalities are consequences of Theorem B:
$$(1) \quad \int_0^1 \sin\frac{1}{x}\left(\sin\frac{1}{x} + \cos\sin\frac{1}{x}\right) dx - \int_0^1 \sin\frac{1}{x}\,dx \cdot \int_0^1 \left(\sin\frac{1}{x} + \cos\sin\frac{1}{x}\right) dx >$$
$$> \int_0^1 \left(\frac{1}{2}\sin^2\frac{1}{x} + \sin\sin\frac{1}{x}\right) dx - \frac{1}{2}\left(\int_0^1 \sin\frac{1}{x}\,dx\right)^2 - \sin\left(\int_0^1 \sin\frac{1}{x}\,dx\right) > 0;$$
$$(2) \quad \int_0^1 \frac{\sin x}{x}\,e^{\sin x/x}\,dx - \int_0^1 \frac{\sin x}{x}\,dx \cdot \int_0^1 e^{\sin x/x}\,dx >$$
$$> \int_0^1 \exp\left(\frac{\sin x}{x}\right) dx - \exp\left(\int_0^1 \frac{\sin x}{x}\,dx\right) > 0.$$
The first one corresponds to the case where $g(x) = \sin\frac{1}{x}$ and $\varphi(x) = x + \cos x$, while the second one to $g(x) = \frac{\sin x}{x}$ and $\varphi(x) = e^x$; the fact that the inequalities above are strict is straightforward. In both cases, the integrals involved cannot be computed exactly (i.e., via elementary functions).
Using Maple V4, we can estimate the different integrals and obtain that (1) looks like
$$0.37673 > 0.16179 > 0,$$
while (2) looks like
$$5.7577 \times 10^{-3} > 2.8917 \times 10^{-3} > 0.$$
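Inequality (2) can be reproduced without Maple; the sketch below (our addition) approximates the three integrals by the composite Simpson rule and checks the ordering reported above:

```python
import math

def simpson(f, a, b, n=1000):
    # composite Simpson rule on n (even) subintervals
    h = (b - a) / n
    s = (f(a) + f(b)
         + 4 * sum(f(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
         + 2 * sum(f(a + 2 * k * h) for k in range(1, n // 2)))
    return s * h / 3

def g(x):
    return math.sin(x) / x if x else 1.0   # continuous extension at 0

Ig   = simpson(g, 0.0, 1.0)                         # int g
Ieg  = simpson(lambda x: math.exp(g(x)), 0.0, 1.0)  # int e^g
Igeg = simpson(lambda x: g(x) * math.exp(g(x)), 0.0, 1.0)

lhs = Igeg - Ig * Ieg        # extended-Chebyshev gap (Theorem B, upper term)
mid = Ieg - math.exp(Ig)     # Jensen gap for the primitive Phi = exp
print(lhs > mid > 0)
```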
4. Another estimate of Jensen’s Inequality
The following result complements Theorem A and yields a valuable upper estimate of Jensen’s Inequality:
Theorem C. Let $f : [a, b] \to \mathbb{R}$ be a continuous convex function, and let $[m_1, M_1], \dots, [m_n, M_n]$ be compact subintervals of $[a, b]$. Given $\lambda_1, \dots, \lambda_n \in [0, 1]$ with $\sum_{k=1}^n \lambda_k = 1$, the function
$$E(x_1, \dots, x_n) = \sum_{k=1}^n \lambda_k f(x_k) - f\left(\sum_{k=1}^n \lambda_k x_k\right)$$
attains its supremum on $\Omega = [m_1, M_1] \times \cdots \times [m_n, M_n]$ at a boundary point (i.e., at a point of $\partial\Omega = \{m_1, M_1\} \times \cdots \times \{m_n, M_n\}$).
Moreover, the conclusion remains valid for every compact convex domain in $[a, b]^n$.
The proof of Theorem C depends upon the following refinement of the Lagrange Mean Value Theorem:

Lemma 2. Let $h : [a, b] \to \mathbb{R}$ be a continuous function. Then there exists a point $c \in (a, b)$ such that
$$\underline{D}h(c) \le \frac{h(b) - h(a)}{b - a} \le \overline{D}h(c).$$
Here the lower and, respectively, the upper derivative of $h$ at $c$ are defined by
$$\underline{D}h(c) = \liminf_{x \to c} \frac{h(x) - h(c)}{x - c} \quad \text{and} \quad \overline{D}h(c) = \limsup_{x \to c} \frac{h(x) - h(c)}{x - c}.$$
Proof. As in the smooth case, we consider the function
$$H(x) = h(x) - \frac{h(b) - h(a)}{b - a}(x - a), \quad x \in [a, b].$$
Clearly, $H$ is continuous and $H(a) = H(b)$. If $H$ attains its supremum at $c \in (a, b)$, then $\underline{D}H(c) \le 0 \le \overline{D}H(c)$ and the conclusion of Lemma 2 is immediate. The same is true when $H$ attains its infimum at an interior point of $[a, b]$. If both extrema are attained at the endpoints, then $H$ is constant and the conclusion of Lemma 2 works for every $c$ in $(a, b)$.
Proof of Theorem C. It suffices to show that
$$E(x_1, \dots, x_k, \dots, x_n) \le \sup\{E(x_1, \dots, m_k, \dots, x_n),\; E(x_1, \dots, M_k, \dots, x_n)\}$$
for every $x_k \in [m_k, M_k]$, $k \in \{1, \dots, n\}$.
By reductio ad absurdum, we may assume that
$$E(x_1, x_2, \dots, x_n) > \sup\{E(m_1, x_2, \dots, x_n),\; E(M_1, x_2, \dots, x_n)\}$$
for some $x_1, x_2, \dots, x_n$ with $x_k \in [m_k, M_k]$ for each $k \in \{1, \dots, n\}$.
Keeping $x_2, \dots, x_n$ fixed, we consider the function
$$h : [m_1, M_1] \to \mathbb{R}, \quad h(x) = E(x, x_2, \dots, x_n).$$
According to Lemma 2, there exists a $\xi \in (m_1, x_1)$ such that $h(x_1) - h(m_1) \le (x_1 - m_1)\,\overline{D}h(\xi)$. As $h(x_1) > h(m_1)$, it follows that $\overline{D}h(\xi) > 0$; equivalently,
$$\overline{D}f(\xi) > \overline{D}f(\lambda_1 \xi + \lambda_2 x_2 + \cdots + \lambda_n x_n).$$
Now, $\overline{D}f$ is a nondecreasing function on $(a, b)$ (actually $\overline{D}f = D^+ f$), which leads to $\xi > \lambda_1 \xi + \lambda_2 x_2 + \cdots + \lambda_n x_n$, i.e., to
$$\xi > \frac{\lambda_2 x_2 + \cdots + \lambda_n x_n}{\lambda_2 + \cdots + \lambda_n}.$$
A new appeal to Lemma 2 (applied this time to $h\,|\,[x_1, M_1]$) yields an $\eta \in (x_1, M_1)$ such that
$$\eta < \frac{\lambda_2 x_2 + \cdots + \lambda_n x_n}{\lambda_2 + \cdots + \lambda_n}.$$
But the latter contradicts the fact that $\xi < \eta$.
The following application of Theorem C is due to L. G. Khanin [6]: Let $p > 1$, $x_1, \dots, x_n \in [0, M]$ and $\lambda_1, \dots, \lambda_n \in [0, 1]$, with $\sum_{k=1}^n \lambda_k = 1$. Then
$$\sum_{k=1}^n \lambda_k x_k^p \le \left(\sum_{k=1}^n \lambda_k x_k\right)^p + (p - 1)\,p^{p/(1-p)}\,M^p.$$
Particularly,
$$\frac{x_1^2 + \cdots + x_n^2}{n} \le \left(\frac{x_1 + \cdots + x_n}{n}\right)^2 + \frac{M^2}{4},$$
which represents an additive converse to the Cauchy–Schwarz Inequality.
In fact, according to Theorem C, the function
$$E(x_1, \dots, x_n) = \sum_{k=1}^n \lambda_k x_k^p - \left(\sum_{k=1}^n \lambda_k x_k\right)^p$$
attains its supremum on $[0, M]^n$ at a boundary point, i.e., at a point whose coordinates are either $0$ or $M$. Therefore
$$\sup E(x_1, \dots, x_n) = M^p \sup\{s - s^p;\; s \in [0, 1]\} = (p - 1)\,p^{p/(1-p)}\,M^p.$$
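Khanin's bound can be stress-tested numerically (our sketch, random data):

```python
import random

random.seed(5)
ok = True
for _ in range(5000):
    p = random.uniform(1.1, 4.0)
    M = random.uniform(0.5, 3.0)
    n = random.randint(2, 6)
    xs = [random.uniform(0, M) for _ in range(n)]
    w = [random.random() for _ in range(n)]
    s = sum(w)
    lams = [v / s for v in w]          # normalized weights
    lhs = sum(l * x ** p for l, x in zip(lams, xs))
    rhs = (sum(l * x for l, x in zip(lams, xs)) ** p
           + (p - 1) * p ** (p / (1 - p)) * M ** p)
    if lhs > rhs + 1e-9:
        ok = False
print(ok)
```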
Another immediate consequence of Theorem C is the following fact, which improves on a special case of a classical inequality due to G. Hardy, J. E. Littlewood and G. Pólya (cf. [4], p. 89): Let $f : [a, b] \to \mathbb{R}$ be a continuous convex function. Then
$$\frac{f(a) + f(b)}{2} - f\left(\frac{a + b}{2}\right) \ge \frac{f(c) + f(d)}{2} - f\left(\frac{c + d}{2}\right)$$
for every $a \le c \le d \le b$; in [4], one restricts to the case where $a + b = c + d$. The problem of extending this statement to longer families of numbers is left open.
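For $f(x) = x^2$ the displayed inequality reduces to $(b - a)^2/4 \ge (d - c)^2/4$, which is clear since $[c, d] \subset [a, b]$; a random test for this choice of $f$ (our sketch):

```python
import random

random.seed(6)

def f(x):
    # any continuous convex function will do; we take f(x) = x^2
    return x * x

ok = True
for _ in range(10_000):
    a, c, d, b = sorted(random.uniform(-5, 5) for _ in range(4))
    left  = (f(a) + f(b)) / 2 - f((a + b) / 2)   # gap on the larger interval
    right = (f(c) + f(d)) / 2 - f((c + d) / 2)   # gap on the nested interval
    if left < right - 1e-9:
        ok = False
print(ok)
```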
References
[1] V. Cîrtoaje, On some inequalities with restrictions, Revista de matematica din Timisoara, 1 (1990), 3-7. (Romanian)
[2] J. Dieudonné, Foundations of Modern Analysis, Academic Press, 1960.
[3] M. O. Drimbe, A connection between the inequalities of Jensen and Chebyshev, Gazeta Matematica, CI (1996), nr. 5-6, pp. 270-273. (Romanian)
[4] G. H. Hardy, J. E. Littlewood and G. Pólya, Inequalities, Cambridge Mathematical Library, 2nd ed., 1952; reprinted 1988.
[5] E. Hewitt and K. Stromberg, Real and Abstract Analysis, Springer-Verlag, 1965.
[6] L. G. Khanin, Problem M 1083, Kvant, 18 (1988), no. 1, p. 35 and Kvant, 18 (1988), no. 5, p. 35.
[7] G. Pólya and G. Szegö, Aufgaben und Lehrsätze aus der Analysis, vol. I, Springer-Verlag, 1925; English edition, Springer-Verlag, 1972.
University of Craiova, Department of Mathematics, Craiova 1100, ROMANIA