isid/ms/2013/12
October 20, 2013
http://www.isid.ac.in/estatmath/eprints
Inertia of the matrix [(pi + pj)^r]
Rajendra Bhatia and Tanvi Jain
Indian Statistical Institute, Delhi Centre
7, SJSS Marg, New Delhi–110 016, India
INERTIA OF THE MATRIX [(pi + pj)^r]
RAJENDRA BHATIA* AND TANVI JAIN**
Abstract. Let p1, . . . , pn be positive real numbers. It is well known that for every r < 0 the matrix [(pi + pj)^r] is positive definite. Our main theorem gives a count of the number of positive and negative eigenvalues of this matrix when r > 0. Connections with some other matrices that arise in Loewner's theory of operator monotone functions and in the theory of spline interpolation are discussed.
1. Introduction
Let p1, p2, . . . , pn be distinct positive real numbers. The n × n matrix C = [1/(pi + pj)] is known as the Cauchy matrix. The special case pi = i gives the Hilbert matrix H = [1/(i + j)]. Both matrices have been studied by several authors in diverse contexts and are much used as test matrices in numerical analysis.
The Cauchy matrix is known to be positive definite. It possesses a stronger property: for each r > 0 the entrywise power C^{∘r} = [1/(pi + pj)^r] is positive definite. (See [4] for a proof.) The object of this paper is to study positivity properties of the related family of matrices
Pr = [(pi + pj)^r],   r ≥ 0.     (1)
The inertia of a Hermitian matrix A is the triple
In(A) = (π(A), ζ(A), ν(A)) ,
in which π(A), ζ(A) and ν(A) stand for the number of positive, zero,
and negative eigenvalues of A, respectively. Our main result is the
following.
2000 Mathematics Subject Classification. 15A18, 15A48, 15A57, 42A82.
Key words and phrases. Positive definite matrix, conditionally positive definite matrix, inertia, Sylvester’s law, matrix monotone function, Cauchy matrix,
Vandermonde matrix.
Acknowledgements: We thank Dr. Srijanani Anurag Prasad for making the
Figure. The first author is a J. C. Bose National Fellow.
Theorem 1. Let p1 , . . . , pn be distinct positive real numbers, and let
Pr be the n × n matrix defined in (1). Then
(i) Pr is singular if and only if r is a nonnegative integer smaller
than n − 1.
(ii) If r is an integer and 0 ≤ r ≤ n − 1, then
In Pr = (⌈(r + 1)/2⌉, n − (r + 1), ⌊(r + 1)/2⌋).
(iii) Suppose r is not an integer, and 0 < r < n − 2. If ⌊r⌋ = 2k for some integer k, then
In Pr = (k + 1, 0, n − (k + 1)),
and if ⌊r⌋ = 2k + 1 for some integer k, then
In Pr = (n − (k + 1), 0, k + 1).
(iv) For every real number r > n − 2,
In Pr = In P_{n−1}.
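Theorem 1 is easy to test on examples. The following numerical sketch is our addition, not part of the paper (it assumes Python with NumPy; the helper inertia and its tolerance are ours). It counts the positive, zero and negative eigenvalues of Pr for one fixed choice of p and several exponents r, and the printed triples can be compared with the statement of the theorem, up to the usual floating point tolerance.

import numpy as np

def inertia(A, rtol=1e-9):
    # Count (positive, zero, negative) eigenvalues of a Hermitian matrix,
    # using a tolerance relative to the largest eigenvalue magnitude.
    w = np.linalg.eigvalsh(A)
    tol = rtol * max(1.0, float(np.abs(w).max()))
    return int((w > tol).sum()), int((np.abs(w) <= tol).sum()), int((w < -tol).sum())

p = np.array([1.0, 2.0, 3.0, 5.0, 7.0, 11.0])   # distinct positive numbers, n = 6
for r in [0.5, 1.5, 2.0, 2.5, 3.5, 4.5, 6.0]:
    Pr = (p[:, None] + p[None, :]) ** r         # the matrix Pr = [(pi + pj)^r]
    print(r, inertia(Pr))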
[Figure 1: plot of the eigenvalues of Pr against r; vertical axis from −15 to 15, horizontal axis from 0 to 6.]
Figure 1 is a schematic representation of the eigenvalues of a 6 × 6 matrix Pr, when the pi have been fixed and r varies. This kind of behaviour has been observed in other problems. The Loewner matrix is defined as
Lr = [(pi^r − pj^r)/(pi − pj)],   r ≥ 0.
It is a famous theorem of C. Loewner [2, 3] that for 0 < r < 1, the
matrix Lr is positive definite. R. Bhatia and J. Holbrook [5] showed
that for 1 < r < 2, the matrix Lr has only one positive eigenvalue.
This encouraged them to speculate what might happen for other values
of r. They made a conjecture that, in the light of our Theorem 1, may
be rephrased as
Conjecture 1. For all r > 0, In Pr = In Lr+1 .
The conjecture of Bhatia and Holbrook remains unproved, except
that it was shown by R. Bhatia and T. Sano [7] that for 2 < r < 3 the
matrix Lr has only one negative eigenvalue.
The matrix
Br = [|pi − pj|^r],   r ≥ 0
has been studied widely in connection with interpolation of scattered
data and spline functions. In [8] N. Dyn, T. Goodman and C. A.
Micchelli establish inertia properties of this matrix as r varies. Some
of our proofs can be adapted to achieve substantial simplifications of
those in [8].
Closely related to Loewner matrices is the matrix
Kr = [(pi^r + pj^r)/(pi + pj)],   r ≥ 0.
M. K. Kwong [9] showed that Kr is positive definite when 0 < r < 1.
Bhatia and Sano [7] showed that Kr has only one positive eigenvalue
when 1 < r < 3. In our ongoing work [6] we have carried this analysis
further, and are led to
Conjecture 2. For all r > 0, In Br = In Kr+1 .
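Neither conjecture is proved here, but both can be probed numerically. The sketch below is again our addition (NumPy assumed; inertia is the same ad hoc eigenvalue counter as in the sketch after Theorem 1). It prints In Pr next to In Lr+1 and In Br next to In Kr+1 for one choice of p; the conjectures predict that the two triples in each pair coincide. Such a computation is evidence only, not a proof.

import numpy as np

def inertia(A, rtol=1e-9):
    # (positive, zero, negative) eigenvalue counts of a Hermitian matrix
    w = np.linalg.eigvalsh(A)
    tol = rtol * max(1.0, float(np.abs(w).max()))
    return int((w > tol).sum()), int((np.abs(w) <= tol).sum()), int((w < -tol).sum())

p = np.array([0.7, 1.3, 2.1, 3.4, 4.2, 5.9])    # distinct positive numbers, n = 6
P, Q = p[:, None], p[None, :]
for r in [0.6, 1.7, 2.3, 3.6]:                  # a few non-integer exponents
    s = r + 1.0
    Pr = (P + Q) ** r                           # [(pi + pj)^r]
    Br = np.abs(P - Q) ** r                     # [|pi - pj|^r]
    with np.errstate(divide="ignore", invalid="ignore"):
        Ls = (P ** s - Q ** s) / (P - Q)        # Loewner matrix L_{r+1}, off-diagonal part
    np.fill_diagonal(Ls, s * p ** (s - 1.0))    # diagonal entries are the limits s*pi^(s-1)
    Ks = (P ** s + Q ** s) / (P + Q)            # the matrix K_{r+1}
    print(r, inertia(Pr), inertia(Ls), inertia(Br), inertia(Ks))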
The rest of the paper is devoted to the proof of Theorem 1 followed
by some remarks.
2. The case r ≤ n − 1, r an integer
The Sylvester Law of Inertia says that if A and X are n×n matrices,
and A is Hermitian and X nonsingular, then In X ∗ AX = In A. We
need a small generalisation of this given in the next proposition.
Proposition 2. Let n ≥ r. Let A be an r × r Hermitian matrix, and
X an r × n matrix of rank r. Then
In X ∗ AX = In A + (0, n − r, 0).
Proof. The matrix X has a singular value decomposition X = UΣV*, in which U ∈ C^{r×r}, V ∈ C^{n×n}, Σ ∈ C^{r×n}; U and V are unitary, and Σ can be partitioned as Σ = [S, O], where S is an r × r positive diagonal matrix, and O is the null matrix of order r × (n − r). Then
X*AX = VΣ*U*AUΣV*.
By Sylvester's Law,
In X*AX = In (Σ*U*AUΣ) = In [[SU*AUS, O], [O, O]].
In this last 2 × 2 block matrix, the top left block is an r × r matrix. So
In X*AX = In (SU*AUS) + (0, n − r, 0) = In A + (0, n − r, 0).
Now let W be the (r + 1) × n Vandermonde matrix
W = [ 1     1     1     · · ·   1
      p1    p2    p3    · · ·   pn
      ·     ·     ·     · · ·   ·
      p1^r  p2^r  p3^r  · · ·   pn^r ],
and let V1 be the (r + 1) × (r + 1) antidiagonal matrix with the entries \binom{r}{0}, \binom{r}{1}, . . . , \binom{r}{r} on its sinister diagonal and 0's elsewhere; i.e., with rows and columns indexed from 0, the (j, r − j) entry of V1 is \binom{r}{j} for 0 ≤ j ≤ r, and every other entry is 0.
It can be seen that for r ≤ n − 1
Pr = W*V1W.
So by Proposition 2
In Pr = In V1 + (0, n − (r + 1), 0).     (2)
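The identity Pr = W*V1W is the binomial theorem written entrywise: the k-th row of W is (p1^k, . . . , pn^k) and V1 couples the k-th row with the (r − k)-th, so for every pair (i, j)
(W*V1W)_{ij} = Σ_{k=0}^{r} pi^k \binom{r}{k} pj^{r−k} = (pi + pj)^r.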
The inertia of V1 can be computed as follows. When r + 1 = 2k, the entries on the sinister diagonal of V1 are
\binom{r}{0}, \binom{r}{1}, . . . , \binom{r}{k−1}, \binom{r}{k−1}, . . . , \binom{r}{1}, \binom{r}{0}.
The eigenvalues of V1 are readily seen to be ±\binom{r}{j}, 0 ≤ j ≤ k − 1. So
In V1 = (k, 0, k) = ((r + 1)/2, 0, (r + 1)/2).
When r + 1 = 2k + 1, the entries on the sinister diagonal of V1 are
\binom{r}{0}, \binom{r}{1}, . . . , \binom{r}{k−1}, \binom{r}{k}, \binom{r}{k−1}, . . . , \binom{r}{1}, \binom{r}{0}.
In this case the eigenvalues of V1 are ±\binom{r}{j}, 0 ≤ j ≤ k − 1, together with \binom{r}{k}. Thus
In V1 = (k + 1, 0, k) = (⌈(r + 1)/2⌉, 0, ⌊(r + 1)/2⌋).
So part (ii) of Theorem 1 follows from (2).
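For a concrete illustration (an example we add for definiteness): when r = 2 and n ≥ 3, the sinister diagonal of V1 is \binom{2}{0}, \binom{2}{1}, \binom{2}{2} = 1, 2, 1, so V1 is the 3 × 3 matrix with rows (0, 0, 1), (0, 2, 0), (1, 0, 0), and its eigenvalues are 2, 1, −1. Thus In V1 = (2, 0, 1), and by (2), In P2 = (2, n − 3, 1) = (⌈3/2⌉, n − 3, ⌊3/2⌋), as part (ii) asserts.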
3. The cases 0 < r < 1 and 1 < r < 2
Let H = C^n and let H1 be its subspace
H1 = { x : Σ_{j=1}^n xj = 0 }.
Let e = (1, 1, . . . , 1) and E the matrix with all its entries equal to 1. Then
H1 = e^⊥ = {x : Ex = 0}.
A Hermitian matrix A is said to be conditionally positive definite (cpd) if ⟨x, Ax⟩ ≥ 0 for all x ∈ H1. It is said to be conditionally negative definite (cnd) if −A is cpd. Basic facts about such matrices can be found in [1]. A cpd matrix is nonsingular if ⟨x, Ax⟩ > 0 for all nonzero vectors x in H1.
If t is a positive number and 0 < r < 1, then
t^r = (sin rπ/π) ∫_0^∞ [t/(λ + t)] λ^{r−1} dλ.     (3)
See [2, p.116]. We write this briefly as
t^r = ∫_0^∞ [t/(λ + t)] dµ(λ),     (4)
where µ is a positive measure on (0, ∞), depending on r.
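Formula (3) is the classical Beta integral in disguise: the substitution λ = ts gives
∫_0^∞ [t/(λ + t)] λ^{r−1} dλ = t^r ∫_0^∞ [s^{r−1}/(1 + s)] ds = t^r π/(sin rπ),   0 < r < 1,
and multiplying by (sin rπ)/π gives (3).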
Theorem 3. The matrix Pr is cnd and nonsingular for 0 < r < 1,
and it is cpd and nonsingular for 1 < r < 2.
Proof. Let 0 < r < 1 and use (4) to write
(pi + pj)^r = ∫_0^∞ [(pi + pj)/(pi + pj + λ)] dµ(λ).     (5)
Then use the identity
(pi + pj)/(pi + pj + λ) = 1 − λ/(pi + pj + λ),
to see that the matrix
Gλ = [(pi + pj)/(pi + pj + λ)],   λ > 0,
can be expressed as
Gλ = E − λCλ,     (6)
where Cλ = [1/(pi + pj + λ)] is a Cauchy matrix. This matrix is positive definite, and Ex = 0 for all x ∈ H1. It follows from (6) that Gλ is cnd, so Pr = ∫_0^∞ Gλ dµ(λ) is also cnd.
Now let 1 < r < 2. Using (4) we can express
t^r = ∫_0^∞ [t²/(λ + t)] dµ(λ).     (7)
So,
(pi + pj)^r = ∫_0^∞ [(pi + pj)²/(pi + pj + λ)] dµ(λ).
Use the identity
(pi + pj)²/(pi + pj + λ) = pi + pj − λ(pi + pj)/(pi + pj + λ),
to see that the matrix
Hλ = [(pi + pj)²/(pi + pj + λ)],   λ > 0,
can be expressed as
Hλ = DE + ED − λGλ,     (8)
where D = diag(p1 , . . . , pn ) and Gλ is the matrix in (6). If x ∈ H1 ,
then
⟨x, (DE + ED)x⟩ = ⟨x, DEx⟩ + ⟨DEx, x⟩ = 0.
So ⟨x, Hλ x⟩ ≥ 0. In other words, the matrix Hλ is cpd, and hence so is Pr, 1 < r < 2.
It remains to show that Pr is nonsingular. The Cauchy matrix Cλ is positive definite. This can be seen by writing
1/(pi + pj + λ) = ∫_0^∞ e^{−t(pi + pj + λ)} dt,
which shows that Cλ is the Gram matrix corresponding to the functions ui(t) = e^{−t(pi + λ/2)} in L²(0, ∞). Since the pi are distinct, the vectors ui are linearly independent, and Cλ is nonsingular. This shows that ⟨x, Gλ x⟩ < 0 for all nonzero vectors x in H1 and all λ > 0. Hence ⟨x, Pr x⟩ < 0 for 0 < r < 1. So Pr is nonsingular. In the same way, we see that ⟨x, Hλ x⟩ > 0 for all nonzero x ∈ H1 and all λ > 0. So Pr is nonsingular for 1 < r < 2.
Each entry of Pr is positive. So, Pr has at least one positive eigenvalue. For r > 0 the matrix Pr is not positive semidefinite as its top
2 × 2 subdeterminant is negative. So Pr has at least one negative
eigenvalue. The space H1 has dimension n − 1. So using the minmax
principle [2, Ch.III] and Theorem 3, we see that
In Pr = (1, 0, n − 1), 0 < r < 1,
and
In Pr = (n − 1, 0, 1), 1 < r < 2.
This establishes the statement (iii) of Theorem 1 for these values of r.
(In fact, Theorem 3 says a little more, in that Pr is cnd/cpd.)
4. Nonsingularity
In this section we show that if n ≥ 2, and r > n − 2, then Pr is
nonsingular. This is a consequence of the following.
Theorem 4. Let c1 , . . . , cn be real numbers not all zero, and for r >
n − 2 let fr be the function defined on (0, ∞) as
fr(x) = Σ_{j=1}^n cj (x + pj)^r.     (9)
Then fr has at most n − 1 zeros.
Proof. We denote by Z(f ) the number of zeros of a function f on
(0, ∞), and by V (c1 , . . . , cn ) the number of sign changes in the tuple
c1 , . . . , cn . (We follow the terminology and conventions of the classic
[10, Part V].)
Let s be any positive real number and (c1 , . . . , cn ) any n-tuple with
V (c1 , . . . , cn ) < s + 1. We will show that
Z(fs ) ≤ V (c1 , . . . , cn ).
(10)
We use induction on V (c1 , . . . , cn ).
Clearly, if V (c1 , . . . , cn ) = 0, then Z(fs ) = 0. Assume that the
assertion is true for all n-tuples (c1 , . . . , cn ) with V (c1 , . . . , cn ) = k −
1 < s. Let (c1 , . . . , cn ) be any n-tuple with V (c1 , . . . , cn ) = k, 0 < k <
s + 1. Without loss of generality assume ci ≠ 0 for i = 1, 2, . . . , n.
There exists an index j, 1 < j ≤ n such that cj−1 cj < 0. We may
assume that p1 < p2 < · · · < pn . Choose any number u such that
pj−1 < u < pj and let
ϕ(x) = Σ_{j=1}^n cj (pj − u)(x + pj)^{s−1}.
Note that
V (c1 (p1 − u), . . . , cn (pn − u)) = k − 1 < s.
So, by the induction hypothesis Z(ϕ) ≤ k − 1. We have
ϕ(x) = Σ_{j=1}^n cj (pj − u)(x + pj)^{s−1}
     = Σ_{j=1}^n cj (x + pj)^s − (x + u) Σ_{j=1}^n cj (x + pj)^{s−1}
     = −((x + u)^{s+1}/s) [ (−s/(x + u)^{s+1}) fs(x) + (1/(x + u)^s) fs′(x) ],
where fs′ is the derivative of fs. Let
h(x) = fs(x)/(x + u)^s.
Then the last equality above says
ϕ(x) = −((x + u)^{s+1}/s) h′(x).
So Z(ϕ) = Z(h′). From the definition of h, it is clear that Z(fs) = Z(h). By Rolle's Theorem Z(h) ≤ Z(h′) + 1. Putting these relations together we see that Z(fs) ≤ Z(ϕ) + 1 ≤ k. This establishes (10).
Now let r > n − 2, and let c1 , . . . , cn be any n-tuple. Then
V (c1 , . . . , cn ) ≤ n − 1. It follows that Z(fr ) ≤ n − 1, and that proves
the theorem.
Corollary 5. The matrix Pr is nonsingular for all r > n − 2.
Proof. Pr is singular if and only if there exists a nonzero vector c =
(c1 , . . . , cn ) in Rn such that Pr c = 0; i.e.,
Σ_{j=1}^n cj (pi + pj)^r = 0 for i = 1, 2, . . . , n.
That implies that the function fr (x) has at least n zeros (the distinct
points p1 , . . . , pn ). This is not possible by Theorem 4.
As a consequence, the eigenvalues of Pr, which vary continuously with r, never vanish for r > n − 2, and so the inertia of Pr remains unchanged on this interval. This establishes part (iv) of Theorem 1.
5. Completing the proof of Theorem 1
We are left with the case 2 < r < n − 2, r not an integer. We will
consider in detail the two cases 2 < r < 3 and 3 < r < 4. The essential
features of the pattern given in part (iii) of the theorem, and of the
proof are seen in these two cases.
Let p = (p1, . . . , pn). In Section 3 we introduced the space
H1 = { x : Σ_j xj = 0 } = {x : Ex = 0} = e^⊥.
Let
H2 = { x : Σ_j xj = 0, Σ_j pj xj = 0 }.
Then
H2 = {x : Ex = 0, EDx = 0} = {e, p}^⊥,
where D = diag(p1, . . . , pn), and {e, p}^⊥ stands for the orthogonal complement of the span of the vectors e and p. For 1 ≤ k ≤ n − 1, let p^k = (p1^k, . . . , pn^k), and let
Hℓ = { x : Σ_j pj^k xj = 0, 0 ≤ k ≤ ℓ − 1 }.
Then
Hℓ = { x : ED^k x = 0, 0 ≤ k ≤ ℓ − 1 } = {e, p, p², . . . , p^{ℓ−1}}^⊥.
Evidently, H1 ⊃ H2 ⊃ · · · ⊃ Hℓ, and dim Hℓ = n − ℓ.
Let m be any nonnegative integer, and let m < r < m + 1. From (4) we have
(pi + pj)^r = ∫_0^∞ [(pi + pj)^{m+1}/(λ + pi + pj)] dµ(λ).     (11)
Let
Gm,λ = [(pi + pj)^{m+1}/(λ + pi + pj)].     (12)
Use the identity
(pi + pj)³/(λ + pi + pj) = (pi + pj)² − λ(pi + pj)²/(λ + pi + pj)     (13)
to see that
G2,λ = D²E + 2DED + ED² − λG1,λ.     (14)
If x ∈ H2, then
⟨x, (D²E + 2DED + ED²)x⟩ = 0.
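Indeed, writing E = ee*, the three terms vanish separately: for x ∈ H2 we have Ex = 0 and Σ_j pj xj = 0, so
⟨x, D²Ex⟩ = (D²x)*(Ex) = 0,   ⟨x, DEDx⟩ = (Dx)*E(Dx) = |Σ_j pj xj|² = 0,   ⟨x, ED²x⟩ = (Ex)*(D²x) = 0.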
In Section 3, we saw that ⟨x, G1,λ x⟩ > 0 for all x ∈ H1, x ≠ 0. (The matrix G1,λ was called Hλ there.) So it follows from (14) that
⟨x, G2,λ x⟩ < 0 for all x ∈ H2, x ≠ 0, and λ > 0.
This, in turn, implies that for 2 < r < 3,
⟨x, Pr x⟩ < 0 for all x ∈ H2, x ≠ 0.
Since dim H2 = n−2, the minmax principle implies that for 2 < r < 3,
Pr has at least n − 2 negative eigenvalues. We show that its remaining
two eigenvalues are positive.
Consider the matrix Pr when n = 3. We have established in Section
3 that when 1 < r ≤ 2, Pr has two positive and one negative eigenvalue. In Section 4 we have established that this remains unchanged
for r > 2. Now consider any n > 3. Any 3 × 3 principal submatrix
of Pr has two positive eigenvalues, by what we have just said. So, by
Cauchy’s interlacing principle [2, Ch.III], Pr has at least two positive
eigenvalues. The conclusion, then, is Pr has exactly two positive and
n − 2 negative eigenvalues, for all 2 < r < 3.
Next consider the case 3 < r < 4. Use the identity
(pi + pj)⁴/(λ + pi + pj) = (pi + pj)³ − λ(pi + pj)³/(λ + pi + pj),
to see that
G3,λ = D³E + 3D²ED + 3DED² + ED³ − λG2,λ.     (15)
Again, if x ∈ H2, then one can see that
⟨x, (D³E + 3D²ED + 3DED² + ED³)x⟩ = 0.
We have proved that for x ∈ H2, x ≠ 0, we have ⟨x, G2,λ x⟩ < 0 for all λ > 0. Then (15) shows that ⟨x, G3,λ x⟩ > 0, and hence ⟨x, Pr x⟩ > 0 for all x ∈ H2, x ≠ 0, and 3 < r < 4. So, Pr has at least n − 2 positive eigenvalues. We have to show that the remaining two of its eigenvalues are negative.
The argument we gave earlier can be modified to show that when
n = 4, and r > 2, then Pr has two positive and two negative eigenvalues. So for n > 4, Pr has at least two negative eigenvalues. Hence, it
has exactly two negative eigenvalues.
We have established the assertion of the theorem for 2 < r < 3, and
for 3 < r < 4. The argument can be extended to the next interval.
We leave this to the reader. Some remarks are in order here.
1. The proof for the cases covered in Section 3 was simpler because
of the available criterion for the nonsingularity of a cpd/cnd matrix.
More arguments are needed for r > 2.
2. The expressions (14) and (15) display G2,λ and G3,λ as G2,λ = S − λG1,λ and G3,λ = T − λG2,λ. Though S and T are quite different, the first being quadratic in D and the second cubic, it is a happy coincidence that both ⟨x, Sx⟩ and ⟨x, Tx⟩ vanish for all x ∈ H2. This allows us to conclude that ⟨x, G2,λ x⟩ is negative and ⟨x, G3,λ x⟩ is positive on H2. The same argument carried to the next stage will give
G4,λ = D⁴E + 4D³ED + 6D²ED² + 4DED³ + ED⁴ − λG3,λ.     (16)
Now ⟨x, D²ED²x⟩ need not vanish for x ∈ H2, but it does for x ∈ H3. So, we can conclude that Pr has at least n − 3 negative eigenvalues for 4 < r < 5. The successor of (16), G5,λ = W − λG4,λ, again has W satisfying ⟨x, Wx⟩ = 0 for x ∈ H3. So, the inertia of Pr just changes
sign when we go from 4 < r < 5 to 5 < r < 6. This explains some
features of the theorem.
6. The matrix Br
The arguments used in our analysis of Pr can be applied to other
matrices, one of them being
Br = [|pi − pj|^r],   r ≥ 0.
Inertias of these matrices have been computed in [8]. We summarise the results of that paper in a succinct form parallel to our Theorem 1:
(i) Br is singular if and only if r is an even integer smaller than
n − 1.
(ii) Let r be an even integer r = 2k < n. Then
In Br = (⌈(r + 1)/2⌉, n − (r + 1), ⌊(r + 1)/2⌋) if k is even,
and
In Br = (⌊(r + 1)/2⌋, n − (r + 1), ⌈(r + 1)/2⌉) if k is odd.
(iii) Suppose r is not an even integer and 0 < r < n − 2. If 2k <
r < 2(k + 1), then
In Br = (k + 1, 0, n − (k + 1)) if k is even,
and
In Br = (n − (k + 1), 0, k + 1) if k is odd .
(iv) For every real number r > n − 2,
In Br = (n/2, 0, n/2) if n is even,
and
In Br = ((n + 1)/2, 0, (n − 1)/2) if n is odd.
We briefly indicate how the proofs in [8] can be considerably simplified
using our arguments.
1. Let r = 2k be an even integer. Then
Br = [(pi − pj)^{2k}].
Let W be the (r + 1) × n Vandermonde matrix introduced in Section 2. Let V2 be the (r + 1) × (r + 1) antidiagonal matrix obtained by multiplying V1 on the left by the diagonal matrix Γ = diag(1, −1, 1, −1, . . . , −1, 1). Then one can see that
Br = W*V2W.
Therefore, by Proposition 2, if r + 1 ≤ n, then
In Br = In V2 + (0, n − (r + 1), 0).
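As with Pr, the factorization Br = W*V2W is the binomial theorem written entrywise: the sinister diagonal of V2 = ΓV1 carries the entries (−1)^m \binom{r}{m}, so
(W*V2W)_{ij} = Σ_{m=0}^{r} (−1)^m \binom{r}{m} pi^m pj^{r−m} = (pj − pi)^r = (pi − pj)^r,
the last equality because r = 2k is even.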
When k is even, the first k entries on the sinister diagonal of V2 are \binom{r}{0}, . . . , \binom{r}{k−1} with alternating signs ±; the (k + 1)th entry is \binom{r}{k}; the next k entries are \binom{r}{k−1}, . . . , \binom{r}{0} with alternating signs ∓. A little argument shows that the eigenvalues of V2 are \binom{r}{k} and ±\binom{r}{j}, 0 ≤ j ≤ k − 1. So
In V2 = (⌈(r + 1)/2⌉, 0, ⌊(r + 1)/2⌋).
When k is odd, the (k + 1)th entry on the sinister diagonal of V2 is −\binom{r}{k}. The eigenvalues of V2 are −\binom{r}{k} and ±\binom{r}{j}, 0 ≤ j ≤ k − 1. So
In V2 = (⌊(r + 1)/2⌋, 0, ⌈(r + 1)/2⌉).
From the three equations displayed above we get statement (ii). This includes the assertion that Br is singular when r is an even integer smaller than n − 1.
2. Let 0 < r < 2. Then
|pi − pj|^r = ((pi − pj)²)^{r/2}.
So, using (4) we can write
|pi − pj|^r = ∫_0^∞ [(pi − pj)²/(λ + (pi − pj)²)] dµ(λ).
Arguing as before, we can express Br as
Br = ∫_0^∞ T0,λ dµ(λ),
where T0,λ = E − λS0,λ, and
S0,λ = [1/(λ + (pi − pj)²)],   λ > 0.
This last matrix is positive definite. A simple proof of this goes as follows:
1/(λ + (pi − pj)²) = ∫_0^∞ e^{−t(λ + (pi − pj)²)} dt = ∫_0^∞ e^{−tλ/2} e^{−t(pi − pj)²} e^{−tλ/2} dt;
and it is well-known that [e^{−(pi − pj)²}] is a positive definite matrix. See [3, p.146].
Since S0,λ is positive definite, for all x ∈ H1, x ≠ 0, we have ⟨x, T0,λ x⟩ < 0. Hence, the same is true for Br. So, Br is cnd and nonsingular for 0 < r < 2. (This is a well-known fact. We have given a proof to ease the passage to the next argument.)
3. Let 2 < r < 4. Then |pi − pj|^r = ((pi − pj)²)^s, where 1 < s < 2. So, using (7) we can write
|pi − pj|^r = ∫_0^∞ [(pi − pj)⁴/(λ + (pi − pj)²)] dµ(λ).
We leave it to the reader to check that using this we have
Br = ∫_0^∞ T1,λ dµ(λ),
where T1,λ = D²E − 2DED + ED² − λT0,λ. If x ∈ H2, then ⟨x, (D²E − 2DED + ED²)x⟩ = 0. So, ⟨x, T1,λ x⟩ > 0 for all nonzero vectors x in H2. Thus Br has at least n − 2 positive eigenvalues. Further arguments are needed to show it has two negative eigenvalues.
4. Let n be an odd integer, n ≥ 3. It is shown in [8] that Br is
nonsingular for r ≥ n − 1. (We have no essential simplification of this
part of the proof.) This is a crucial ingredient needed for the rest of
the cases.
5. Our arguments for Pr (using interlacing etc.) can be adapted to
show that for 2 < r < 4, Br has two negative eigenvalues.
6. We can then argue in the same way for the cases 2k < r < 2(k + 1), k = 2, 3, . . . . This gives statement (iii). Statement (i) is included in this.
7. As in the case of Pr , the inertia of Br stabilises after some stage.
Statement (iv) is a consequence of Remark 4 above.
References
[1] R. B. Bapat and T. E. S. Raghavan, Nonnegative Matrices and Applications,
Cambridge University Press, 1997.
[2] R. Bhatia, Matrix Analysis, Springer, 1997.
[3] R. Bhatia, Positive Definite Matrices, Princeton University Press, 2007.
[4] R. Bhatia, Infinitely Divisible Matrices, Amer. Math. Monthly, 113 (2006) 221-235.
[5] R. Bhatia and J. A. Holbrook, Fréchet derivatives of the power function, Indiana Univ. Math. J., 49 (2000) 1155-1173.
[6] R. Bhatia and T. Jain, Inertia of Loewner matrices, in preparation.
[7] R. Bhatia and T. Sano, Loewner matrices and operator convexity, Math. Ann.,
344 (2009) 703-716.
[8] N. Dyn, T. Goodman and C. A. Micchelli, Positive powers of certain conditionally negative definite matrices, Indag. Math., 48 (1986) 163-178.
[9] M. K. Kwong, Some results on matrix monotone functions, Linear Algebra
Appl., 118 (1989) 129-153.
[10] G. Polya and G. Szego, Problems and Theorems in Analysis, Volume II, 4th
ed., Springer, 1971.
Indian Statistical Institute, New Delhi
E-mail address: [email protected]
Indian Statistical Institute, New Delhi
E-mail address: [email protected]