
PROCEEDINGS OF THE
AMERICAN MATHEMATICAL SOCIETY
Volume 00, Number 0, Pages 000–000
S 0002-9939(XX)0000-0
EXPECTED NUMBER OF REAL ZEROS FOR RANDOM LINEAR COMBINATIONS
OF ORTHOGONAL POLYNOMIALS
D. S. LUBINSKY, I. E. PRITSKER, AND X. XIE
(Communicated by Walter Van Assche)
Abstract. We study the expected number of real zeros for random linear combinations of orthogonal
polynomials. It is well known that Kac polynomials, spanned by monomials with i.i.d. Gaussian coefficients,
have only $(2/\pi + o(1))\log n$ expected real zeros in terms of the degree $n$. On the other hand, if the basis
is given by Legendre (or more generally by Jacobi) polynomials, then random linear combinations have
$n/\sqrt{3} + o(n)$ expected real zeros. We prove that the latter asymptotic relation holds universally for a large
class of random orthogonal polynomials on the real line, and also give more general local results on the
expected number of real zeros.
1. Background
Zeros of polynomials with random coefficients have been intensively studied since the 1930s. The early work
concentrated on the expected number of real zeros $E[N_n(\mathbb{R})]$ for polynomials of the form $P_n(x) = \sum_{k=0}^{n} c_k x^k$,
where $\{c_k\}_{k=0}^{n}$ are independent and identically distributed random variables. Apparently the first paper
that initiated this line of study is due to Bloch and Pólya [2]. They gave the upper bound $E[N_n(\mathbb{R})] = O(\sqrt{n})$
for polynomials with coefficients selected from the set {−1, 0, 1} with equal probabilities. Further results
generalizing and improving that estimate were obtained by Littlewood and Offord [21]-[22], Erdős and Offord
[8] and others. Kac [17] established the important asymptotic result
\[ E[N_n(\mathbb{R})] = (2/\pi + o(1)) \log n \quad \text{as } n \to \infty, \]
for polynomials with independent real Gaussian coefficients. More precise forms of this asymptotic were
obtained by many authors, including Kac [18], Wang [32], Edelman and Kostlan [7]. It appears that the
sharpest known version is given by the asymptotic series of Wilkins [33]. Many additional references and
further directions of work on the expected number of real zeros may be found in the books of Bharucha-Reid
and Sambandham [1], and of Farahmand [9]. In fact, Kac [17]-[18] found the exact formula for E[Nn (R)] in
the case of standard real Gaussian coefficients:
\[ E[N_n(\mathbb{R})] = \frac{4}{\pi} \int_0^1 \frac{\sqrt{A(x)C(x) - B^2(x)}}{A(x)}\, dx, \]
where
\[ A(x) = \sum_{j=0}^{n} x^{2j}, \qquad B(x) = \sum_{j=1}^{n} j x^{2j-1} \qquad \text{and} \qquad C(x) = \sum_{j=1}^{n} j^2 x^{2j-2}. \]
In a subsequent paper, Kac [19] extended the asymptotic result on the number of real zeros to the
case of coefficients uniformly distributed on [−1, 1]. Erdős and Offord [8] generalized the Kac asymptotic
to Bernoulli distribution (uniform on {−1, 1}), while Stevens [28] considered a wide class of distributions.
Finally, Ibragimov and Maslova [15, 16] extended the result to all mean-zero distributions in the domain of
attraction of the normal law.
2010 Mathematics Subject Classification. Primary: 30C15; Secondary: 30B20, 60B10.
Research of D. S. Lubinsky was partially supported by NSF grant DMS136208 and US-Israel BSF grant 2008399.
Research of I. E. Pritsker was partially supported by the National Security Agency (grant H98230-12-1-0227) and by the
AT&T Foundation.
We state a result on the number of real zeros for the random linear combinations of rather general
functions. It originated in the papers of Kac [17]-[19], who used the monomial basis, and was extended to
trigonometric polynomials and other bases, see Farahmand [9] and Das [4]-[5]. We are particularly interested
in the bases of orthonormal polynomials, which is the case considered by Das [4]. For any set E ⊂ C, we use
the notation Nn (E) for the number of zeros of random functions (1.1) (or random orthogonal polynomials of
degree at most n) located in E. The expected number of zeros in E is denoted by E[Nn (E)], with E[Nn (a, b)]
being the expected number of zeros in (a, b) ⊂ R.
Proposition 1.1. Let [a, b] ⊂ R, and consider real valued functions gj (x) ∈ C 1 ([a, b]), j = 0, . . . , n, with
g0 (x) being a nonzero constant. Define the random function
\[ G_n(x) = \sum_{j=0}^{n} c_j g_j(x), \tag{1.1} \]
where the coefficients $c_j$ are i.i.d. random variables with Gaussian distribution $N(0, \sigma^2)$, $\sigma > 0$. If there is
$M \in \mathbb{N}$ such that $G_n'(x)$ has at most $M$ zeros in $(a,b)$ for all choices of coefficients, then the expected number
of real zeros of $G_n(x)$ in the interval $(a,b)$ is given by
\[ E[N_n(a,b)] = \frac{1}{\pi} \int_a^b \frac{\sqrt{A(x)C(x) - B^2(x)}}{A(x)}\, dx, \tag{1.2} \]
where
\[ A(x) = \sum_{j=0}^{n} g_j^2(x), \qquad B(x) = \sum_{j=1}^{n} g_j(x)\, g_j'(x) \qquad \text{and} \qquad C(x) = \sum_{j=1}^{n} [g_j'(x)]^2. \tag{1.3} \]
Clearly, the original formula of Kac follows from this proposition for $g_j(x) = x^j$, $j = 0, 1, \ldots, n$. We sketch
a proof of Proposition 1.1 in Section 3, as we could not find a suitable reference with a complete proof in
this general form. We note that multiple zeros are counted only once, by the standard convention in all of
the above results on real zeros. However, the probability that a polynomial with Gaussian coefficients has a
multiple zero is equal to 0, so the expected number of zeros is the same regardless of whether zeros are
counted with or without multiplicities.
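As a sanity check on Proposition 1.1 (our own illustration, not part of the paper), one can take the cosine basis $g_j(x) = \cos(jx)$ on $[0,\pi]$, which satisfies the hypotheses since $g_0 \equiv 1$ is a nonzero constant, and compare the formula (1.2) against a Monte Carlo count of sign changes; the degree, sample size, and grid below are arbitrary choices of ours.

```python
import numpy as np
from scipy.integrate import quad

n, sigma = 20, 1.0
j = np.arange(n + 1)  # basis g_j(x) = cos(j x); g_0 = 1 is a nonzero constant

def expected_zeros_formula(a=0.0, b=np.pi):
    # (1.2): E[N_n(a,b)] = (1/pi) int_a^b sqrt(A C - B^2)/A dx, with A, B, C as in (1.3)
    def integrand(x):
        g = np.cos(j * x)
        dg = -j * np.sin(j * x)
        A, B, C = np.dot(g, g), np.dot(g, dg), np.dot(dg, dg)
        return np.sqrt(max(A * C - B * B, 0.0)) / A
    value, _ = quad(integrand, a, b, limit=400)
    return value / np.pi

def expected_zeros_monte_carlo(trials=400, grid=4001):
    rng = np.random.default_rng(1)
    x = np.linspace(0.0, np.pi, grid)
    basis = np.cos(np.outer(j, x))                 # basis values, shape (n+1, grid)
    counts = []
    for _ in range(trials):
        G = rng.normal(0.0, sigma, n + 1) @ basis  # a random G_n sampled on the grid
        counts.append(int(np.sum(np.sign(G[:-1]) != np.sign(G[1:]))))
    return float(np.mean(counts))

print(expected_zeros_formula(), expected_zeros_monte_carlo())
```

Since (1.2) is an exact (not asymptotic) identity, the two printed numbers should agree up to Monte Carlo noise.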
2. Random orthogonal polynomials
Let µ denote a positive Borel measure compactly supported on the real line, with infinitely many points
in its support, and with finite power moments of all orders. For n ≥ 0, let
\[ p_n(x) = \gamma_n x^n + \cdots \]
denote the $n$th orthonormal polynomial for $\mu$, with $\gamma_n > 0$, so that
\[ \int p_n p_m \, d\mu = \delta_{mn}. \]
Using the orthonormal polynomials $\{p_j\}_{j=0}^{\infty}$ as the basis, we consider the ensemble of random polynomials
of the form
\[ P_n(x) = \sum_{j=0}^{n} c_j p_j(x), \qquad n \in \mathbb{N}, \tag{2.1} \]
where the coefficients $c_0, c_1, \ldots, c_n$ are i.i.d. random variables. Such a family is often called random
orthogonal polynomials. If the coefficients have Gaussian distribution, one can apply Proposition 1.1 to study the
expected number of real zeros of random orthogonal polynomials. In particular, Das [4] considered random
Legendre polynomials, and found that $E[N_n(-1,1)]$ is asymptotically equal to $n/\sqrt{3}$. Wilkins [34] improved
the error term in this asymptotic relation by showing that $E[N_n(-1,1)] = n/\sqrt{3} + o(n^{\varepsilon})$ for any $\varepsilon > 0$. For
random Jacobi polynomials, Das and Bhatt [6] concluded that $E[N_n(-1,1)]$ is asymptotically equal to $n/\sqrt{3}$
too. They also provided estimates for the expected number of real zeros of random Hermite and Laguerre
polynomials, but those arguments contain significant gaps. Farahmand [9, 10, 11] considered various generalizations of these results for the level crossings of random sums of Legendre polynomials with coefficients
that may have different distributions. Interesting computations and pictures of zeros of random orthogonal
polynomials may be found on the chebfun web page of Trefethen [31].
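The $n/\sqrt{3}$ asymptotic for random Legendre polynomials is also easy to probe by simulation. The sketch below is our own (the degree, trial count, and random seed are arbitrary choices); it draws i.i.d. standard Gaussian coefficients on the orthonormal basis $p_j = \sqrt{j + 1/2}\, P_j$ and counts real zeros in $(-1,1)$.

```python
import numpy as np
from numpy.polynomial import legendre

def count_real_zeros(coeffs, tol=1e-6):
    # roots of the Legendre series via its colleague matrix;
    # keep the numerically real ones lying in (-1, 1)
    r = legendre.legroots(coeffs)
    return int(np.sum((np.abs(np.imag(r)) < tol) & (np.abs(np.real(r)) < 1.0)))

rng = np.random.default_rng(0)
n, trials = 60, 300
scale = np.sqrt(np.arange(n + 1) + 0.5)  # orthonormal Legendre: p_j = sqrt(j + 1/2) P_j

mean_zeros = np.mean([count_real_zeros(rng.standard_normal(n + 1) * scale)
                      for _ in range(trials)])
print(mean_zeros, n / np.sqrt(3))  # the sample mean should track n/sqrt(3)
```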
For the orthonormal polynomials $\{p_j(x)\}_{j=0}^{\infty}$ associated with the measure $\mu$, define the reproducing kernel
by
\[ K_n(x,y) = \sum_{j=0}^{n-1} p_j(x)\, p_j(y), \]
and the differentiated kernels by
\[ K_n^{(k,l)}(x,y) = \sum_{j=0}^{n-1} p_j^{(k)}(x)\, p_j^{(l)}(y), \qquad k, l \in \mathbb{N} \cup \{0\}. \]
The strategy is to apply Proposition 1.1 with $g_j = p_j$, so that
\[ A(x) = K_{n+1}(x,x), \qquad B(x) = K_{n+1}^{(0,1)}(x,x) \qquad \text{and} \qquad C(x) = K_{n+1}^{(1,1)}(x,x). \tag{2.2} \]
We use universality limits for the reproducing kernels of orthogonal polynomials (see Lubinsky [23]-[24] and
Totik [29]-[30]), and asymptotic results on zeros of random polynomials (cf. Pritsker [25]) to give asymptotics
for the expected number of real zeros for a wide class of random orthogonal polynomials.
Theorem 2.1. Let K ⊂ R be a finite union of closed and bounded intervals, and let µ be a positive Borel
measure supported on K such that dµ(x) = w(x)dx and w > 0 a.e. on K. If for every ε > 0 there is a closed
set S ⊂ K of Lebesgue measure |S| < ε, and a constant C > 1 such that C −1 < w < C a.e. on K \ S, then
the expected number of real zeros of random orthogonal polynomials (2.1) with Gaussian coefficients satisfies
\[ \lim_{n\to\infty} \frac{1}{n}\, E[N_n(\mathbb{R})] = \frac{1}{\sqrt{3}}. \tag{2.3} \]
A simple example of the orthogonality measure µ satisfying the above conditions is given by the density w
that is continuous on $K$ except for finitely many points, and has finitely many zeros on $K$. More specifically,
one may consider the generalized Jacobi weight of the form $w(x) = v(x) \prod_{j=1}^{J} |x - x_j|^{\alpha_j}$, where $v(x) > 0$ for
$x \in K$, and $\alpha_j > -1$, $j = 1, \ldots, J$.
Theorem 2.1 is a consequence of more precise and general local results given below. In order to state them,
we need the notion of the equilibrium measure νK of a compact set K ⊂ C. This is the unique probability
measure supported on K that minimizes the energy
\[ I[\nu] = -\iint \log|z - t| \, d\nu(t)\, d\nu(z) \]
amongst all probability measures $\nu$ with support on $K$. The logarithmic capacity of $K$ is
\[ \mathrm{cap}(K) = \exp\left( -I[\nu_K] \right). \]
When we say that a compact set K is regular, this means regularity in the sense of Dirichlet problem (or
potential theory). See Ransford [26] for further orientation.
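For the model case $K = [-1,1]$, everything is explicit: $\nu_K$ is the arcsine distribution $d\nu_K(x) = dx/(\pi\sqrt{1-x^2})$, $I[\nu_K] = \log 2$, and $\mathrm{cap}([-1,1]) = 1/2$. The sketch below is our own numerical check (not from the paper) of Frostman's characterization: the logarithmic potential of $\nu_K$ is the constant $\log 2$ on the interval.

```python
import numpy as np
from scipy.integrate import quad

def arcsine_potential(x):
    # U(x) = -int log|x - t| dnu_K(t), nu_K = arcsine measure on [-1, 1];
    # the substitution t = cos(theta) maps dnu_K to d(theta)/pi on [0, pi]
    f = lambda theta: -np.log(np.abs(x - np.cos(theta))) / np.pi
    value, _ = quad(f, 0.0, np.pi, points=[np.arccos(x)], limit=400)
    return value

# Frostman: U is constant on K, equal to I[nu_K] = log 2,
# hence cap([-1, 1]) = exp(-log 2) = 1/2
for x in (0.0, 0.3, -0.7):
    print(x, arcsine_potential(x), np.log(2.0))
```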
We also need the notion of a measure µ regular in the sense of Stahl, Totik, and Ullman [27]. If K = supp µ
and
\[ \lim_{n\to\infty} \gamma_n^{1/n} = \frac{1}{\mathrm{cap}(K)}, \]
where $\gamma_n$ is the leading coefficient of $p_n$, then we say that $\mu$ is STU-regular. A sufficient condition for this
is that $K$ consists of finitely many intervals and $\mu' = w > 0$ a.e. in those intervals.
Theorem 2.2. Let µ be an STU regular measure with compact support K ⊂ R, which is regular in the sense
of potential theory. Let O be an open set in which µ is absolutely continuous, and such that for some C > 1
\[ C^{-1} \le \mu' \le C \quad \text{a.e. in } O. \tag{2.4} \]
Then given any compact subinterval $[a,b]$ of $O$, we have
\[ \lim_{n\to\infty} \frac{1}{n}\, E[N_n([a,b])] = \frac{1}{\sqrt{3}}\, \nu_K([a,b]), \tag{2.5} \]
where $\nu_K$ is the equilibrium measure of $K$.
This is a special case of the following result, where µ does not need to be STU regular. The asymptotic
lower bound requires very little of µ.
Theorem 2.3. Let µ be a measure on the real line with compact support K.
(a) Assume that $\mu' > 0$ a.e. in the interval $[a,b]$. Then
\[ \liminf_{n\to\infty} \frac{1}{n}\, E[N_n([a,b])] \ge \frac{1}{\sqrt{3}}\, \nu_K([a,b]). \tag{2.6} \]
(b) Suppose in addition that (2.4) holds, and that $[a,b] \subset O$. Then
\[ \limsup_{n\to\infty} \frac{1}{n}\, E[N_n([a,b])] \le \frac{1}{\sqrt{3}}\, \inf_L \nu_L([a,b]), \tag{2.7} \]
where the inf is taken over all regular compact sets $L \subset K$ such that $L \supset [a,b]$, and the restriction $\mu|_L$ of $\mu$
to $L$ is STU-regular.
It is plausible that the right hand sides of (2.6) and (2.7) are equal under mild assumptions such as the
one of part (a). An interesting open problem is to find rates of convergence in the limit relations (2.3) and
(2.5).
3. Proofs
Proof of Proposition 1.1. This proof is based on the discussions of Kac [17, p. 5-10] and Das [5]. The joint
probability density of $c = (c_0, c_1, \ldots, c_n)$ is
\[ dP(c) = (2\pi)^{-(n+1)/2} \sigma^{-(n+1)} e^{-\frac{\|c\|^2}{2\sigma^2}}\, dc_0\, dc_1 \cdots dc_n, \]
where $\|c\|^2 = c_0^2 + c_1^2 + \cdots + c_n^2$. Since $G_n(x)$ has at most $M+1$ zeros in $(a,b)$ for all $c$ by Rolle's theorem,
$N_n(a,b)$ is integrable over $\mathbb{R}^{n+1}$ with respect to $dP(c)$. Define
\[ N_n^*(a,b) = N_n(a,b) - (\kappa(a) + \kappa(b))/2, \]
where
\[ \kappa(x) = \begin{cases} 1 & \text{if } G_n(x) = 0, \\ 0 & \text{otherwise.} \end{cases} \]
Since $G_n(a)$ and $G_n(b)$ are continuous random variables, we have
\[ E[N_n(a,b)] = \int_{\mathbb{R}^{n+1}} N_n^*(a,b)\, dP(c). \]
We state the following result from Kac [18, Theorem 1].
Lemma 3.1. If $f(x)$ is continuous for $\alpha \le x \le \beta$ and continuously differentiable for $\alpha < x < \beta$, and $f'(x)$
vanishes only at a finite number of points in $\alpha < x < \beta$, then the number of zeros of $f(x)$ in $\alpha < x < \beta$
(multiple zeros are counted once, and if either $\alpha$ or $\beta$ is a zero, it is counted as $1/2$) is equal to
\[ \frac{1}{2\pi}\, \mathrm{P.V.} \int_{-\infty}^{\infty} \int_{\alpha}^{\beta} \cos(yf(x))\, |f'(x)|\, dx\, dy. \]
In our notation, this gives
\[ N_n^*(a,b) = \frac{1}{2\pi}\, \mathrm{P.V.} \int_{-\infty}^{\infty} \int_{a}^{b} \cos(yG_n(x))\, |G_n'(x)|\, dx\, dy. \]
Thus
\begin{align*}
E[N_n(a,b)] &= (2\pi)^{-\frac{n+1}{2}} \sigma^{-(n+1)} \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} N_n^*(a,b)\, e^{-\frac{\|c\|^2}{2\sigma^2}}\, dc_0\, dc_1 \cdots dc_n \\
&= \frac{\sigma^{-(n+1)}}{2\pi} \int_a^b \int_{-\infty}^{\infty} R_n(x,y)\, dy\, dx, \tag{3.1}
\end{align*}
where
\[ R_n(x,y) = (2\pi)^{-\frac{n+1}{2}} \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} e^{-\frac{\|c\|^2}{2\sigma^2}} \cos(yG_n(x))\, |G_n'(x)|\, dc_0\, dc_1 \cdots dc_n. \tag{3.2} \]
The interchange of the integration order is justified by the fact that the integrand is dominated by
\[ e^{-\frac{\|c\|^2}{2\sigma^2}} \sum_{j=0}^{n} |c_j|\, |g_j'(x)|, \]
which is exponentially small outside bounded sets in $\mathbb{R}^{n+1}$. We use the known relation
\[ \frac{1}{\pi} \int_{-\infty}^{\infty} \frac{1 - \cos(uv)}{u^2}\, du = |v| \tag{3.3} \]
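Relation (3.3) is elementary: the integrand extends continuously by $v^2/2$ at $u = 0$, so the integral is proper. A quick numerical check (our own, with arbitrary test values), using evenness of the integrand in $u$:

```python
import numpy as np
from scipy.integrate import quad

def lhs(v):
    # (1/pi) int_R (1 - cos(u v))/u^2 du, computed as (2/pi) int_0^inf by evenness;
    # the integrand tends to v^2/2 as u -> 0, so no principal value is needed here
    f = lambda u: (1.0 - np.cos(u * v)) / (u * u)
    value, _ = quad(f, 0.0, np.inf, limit=500)
    return 2.0 * value / np.pi

for v in (0.5, 1.0, -2.0):
    print(lhs(v), abs(v))  # the two columns should agree
```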
to write (3.2) as
\[ R_n(x,y) = \mathrm{P.V.}\, \frac{1}{\pi} \int_{-\infty}^{\infty} \frac{du}{u^2} \times (2\pi)^{-\frac{n+1}{2}} \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} e^{-\frac{\|c\|^2}{2\sigma^2}} \big( \cos(yG_n(x)) - \cos(yG_n(x)) \cos(uG_n'(x)) \big)\, dc_0\, dc_1 \cdots dc_n, \tag{3.4} \]
where the interchange of orders of the integration can be justified as above, and (3.4) is interpreted as
\[ \lim_{N\to\infty} \lim_{\epsilon\to 0} \frac{1}{\pi} \left( \int_{-N}^{-\epsilon} + \int_{\epsilon}^{N} \right) (\cdots)\, \frac{du}{u^2}. \tag{3.5} \]
Noting that
\[ \cos(yG_n(x)) \cos(uG_n'(x)) = \frac{1}{2}\, \mathrm{Re}\left( e^{\,iyG_n(x) + iuG_n'(x)} + e^{\,iyG_n(x) - iuG_n'(x)} \right), \]
we obtain with the help of [13, 3.323(2) on p. 337] that
\begin{align*}
&(2\pi)^{-\frac{n+1}{2}} \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} e^{-\frac{\|c\|^2}{2\sigma^2}} \cos(yG_n(x)) \cos(uG_n'(x))\, dc_0 \cdots dc_n \\
&\quad = \frac{(2\pi)^{-\frac{n+1}{2}}}{2}\, \mathrm{Re} \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} e^{-\frac{\|c\|^2}{2\sigma^2}} \left( e^{\,i\sum_{j=0}^{n} [y c_j g_j(x) + u c_j g_j'(x)]} + e^{\,i\sum_{j=0}^{n} [y c_j g_j(x) - u c_j g_j'(x)]} \right) dc_0 \cdots dc_n \\
&\quad = \frac{(2\pi)^{-\frac{n+1}{2}}}{2}\, \mathrm{Re} \left( \prod_{j=0}^{n} \int_{-\infty}^{\infty} e^{-\frac{c_j^2}{2\sigma^2} + i[y g_j(x) + u g_j'(x)] c_j}\, dc_j + \prod_{j=0}^{n} \int_{-\infty}^{\infty} e^{-\frac{c_j^2}{2\sigma^2} + i[y g_j(x) - u g_j'(x)] c_j}\, dc_j \right) \\
&\quad = \frac{(2\pi)^{-\frac{n+1}{2}}}{2} \left( \prod_{j=0}^{n} (2\pi)^{\frac{1}{2}} \sigma\, e^{-\frac{1}{2}[y g_j(x) + u g_j'(x)]^2 \sigma^2} + \prod_{j=0}^{n} (2\pi)^{\frac{1}{2}} \sigma\, e^{-\frac{1}{2}[y g_j(x) - u g_j'(x)]^2 \sigma^2} \right) \\
&\quad = \frac{\sigma^{n+1}}{2}\, e^{-\frac{\sigma^2}{2} \sum_{j=0}^{n} [y g_j(x) + u g_j'(x)]^2} + \frac{\sigma^{n+1}}{2}\, e^{-\frac{\sigma^2}{2} \sum_{j=0}^{n} [y g_j(x) - u g_j'(x)]^2}.
\end{align*}
For $u = 0$, we have
\[ (2\pi)^{-\frac{n+1}{2}} \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} e^{-\frac{\|c\|^2}{2\sigma^2}} \cos(yG_n(x))\, dc_0 \cdots dc_n = \sigma^{n+1}\, e^{-\frac{\sigma^2}{2} \sum_{j=0}^{n} [y g_j(x)]^2}. \]
Using the abbreviations $A = A(x)$, $B = B(x)$ and $C = C(x)$, we rewrite
\begin{align*}
R_n(x,y) &= \frac{\sigma^{n+1}}{\pi} \int_{-\infty}^{\infty} e^{-\frac{\sigma^2}{2}Ay^2}\, \frac{du}{u^2} - \frac{\sigma^{n+1}}{2\pi} \int_{-\infty}^{\infty} \frac{e^{-\frac{\sigma^2}{2}\left(Ay^2 + Cu^2 + 2yuB\right)} + e^{-\frac{\sigma^2}{2}\left(Ay^2 + Cu^2 - 2yuB\right)}}{u^2}\, du \\
&= \frac{\sigma^{n+1}}{\pi}\, e^{-\frac{\sigma^2}{2}Ay^2} \int_{-\infty}^{\infty} \frac{1 - e^{-\frac{\sigma^2}{2}Cu^2 + yuB\sigma^2}}{u^2}\, du,
\end{align*}
where the integral exists as a principal value, in the sense indicated in (3.5). If $C(x) = 0$ for some $x$, then
$B(x) = 0$ and $R_n(x,y) = 0$ for the same $x$ and all $y$. Thus we set $By\sigma^2 = t$ and $\sigma^2 C = h > 0$, so that
\[ R_n(x,y) = \frac{\sigma^{n+1}}{\pi}\, e^{-\frac{\sigma^2}{2}Ay^2} \int_{-\infty}^{\infty} \frac{1 - e^{-\frac{1}{2}hu^2 + tu}}{u^2}\, du. \]
The Taylor expansion
\[ e^{tu} = 1 + tu + \sum_{m=2}^{\infty} \frac{t^m u^m}{m!}, \]
together with the known identities
\[ \int_{-\infty}^{\infty} \frac{1 - e^{-\frac{1}{2}hu^2}}{u^2}\, du = \sqrt{2\pi h} \qquad \text{and} \qquad \mathrm{P.V.} \int_{-\infty}^{\infty} \frac{tu\, e^{-\frac{1}{2}hu^2}}{u^2}\, du = 0, \]
gives that
\[ R_n(x,y) = \frac{\sigma^{n+1}}{\pi}\, e^{-\frac{\sigma^2}{2}Ay^2} \left( \sqrt{2\pi h} - \int_{-\infty}^{\infty} \left( \sum_{m=2}^{\infty} \frac{t^m u^m}{m!} \right) \frac{e^{-\frac{1}{2}hu^2}}{u^2}\, du \right). \]
Assuming that $h > 0$, we further obtain that
\begin{align*}
\int_{-\infty}^{\infty} \left( \sum_{m=2}^{\infty} \frac{t^m u^m}{m!} \right) \frac{e^{-\frac{1}{2}hu^2}}{u^2}\, du
&= \sum_{m=1}^{\infty} \frac{t^{2m}}{(2m)!} \int_{-\infty}^{\infty} u^{2(m-1)}\, e^{-\frac{1}{2}hu^2}\, du \\
&= \sum_{m=1}^{\infty} \frac{t^{2m}}{(2m)!}\, \frac{(2(m-1))!}{2^{m-1}(m-1)!}\, \sqrt{2\pi}\, h^{-(m-1)-\frac{1}{2}} \qquad \text{(by [13, 3.461(2) on p. 364])} \\
&= \sqrt{2\pi h} \sum_{m=1}^{\infty} \frac{1}{m!(2m-1)} \left( \frac{t^2}{2h} \right)^m.
\end{align*}
Hence
\begin{align*}
R_n(x,y) &= \frac{\sigma^{n+1}}{\pi}\, e^{-\frac{\sigma^2}{2}Ay^2} \left( \sqrt{2\pi h} - \sqrt{2\pi h} \sum_{m=1}^{\infty} \frac{1}{m!(2m-1)} \left( \frac{t^2}{2h} \right)^m \right) \\
&= \sqrt{\frac{2C}{\pi}}\, \sigma^{n+2}\, e^{-\frac{\sigma^2}{2}Ay^2} \left( 1 - \sum_{m=1}^{\infty} \frac{1}{m!(2m-1)} \left( \frac{B^2\sigma^2}{2C} \right)^m y^{2m} \right).
\end{align*}
Applying [13, 3.461(2) on p. 364] again, we obtain that
\begin{align*}
\int_{-\infty}^{\infty} R_n(x,y)\, dy &= \sqrt{\frac{2C}{\pi}}\, \sigma^{n+2} \int_{-\infty}^{\infty} e^{-\frac{\sigma^2}{2}Ay^2} \left( 1 - \sum_{m=1}^{\infty} \frac{1}{m!(2m-1)} \left( \frac{B^2\sigma^2}{2C} \right)^m y^{2m} \right) dy \\
&= \sqrt{\frac{2C}{\pi}}\, \sigma^{n+2} \left( \sqrt{\frac{2\pi}{A\sigma^2}} - \sqrt{\frac{2\pi}{A\sigma^2}} \sum_{m=1}^{\infty} \frac{1}{m!(2m-1)}\, \frac{(2m)!}{m!\,2^m}\, \frac{\left( \frac{B^2\sigma^2}{2C} \right)^m}{(A\sigma^2)^m} \right) \\
&= -2\sqrt{\frac{C}{A}}\, \sigma^{n+1} \sum_{m=0}^{\infty} \frac{(2m)!}{(m!)^2(2m-1)4^m} \left( \frac{B^2}{AC} \right)^m \\
&= 2\sqrt{\frac{C}{A}}\, \sqrt{1 - \frac{B^2}{AC}}\; \sigma^{n+1}.
\end{align*}
Then (3.1) gives us the desired formula
\[ E[N_n(a,b)] = \frac{1}{\pi} \int_a^b \frac{\sqrt{AC - B^2}}{A}\, dx, \]
where $AC - B^2 \ge 0$ by (1.3) and the Cauchy-Schwarz inequality.
In addition to the reproducing kernels in (2.2), we also use their weighted versions in the proofs below:
\[ \tilde{K}_n^{(k,\ell)}(x,y) = \mu'(x)^{1/2}\, \mu'(y)^{1/2} \sum_{j=0}^{n-1} p_j^{(k)}(x)\, p_j^{(\ell)}(y). \]
Lemma 3.2. Let $\mu$ be a measure with compact support and with infinitely many points in its support. Let
$O$ be an open set in which $\mu$ is absolutely continuous, and such that (2.4) holds for some $C > 1$. Then given
any compact subinterval $[a,b]$ of $O$, we have
\[ \frac{1}{n}\, E[N_n([a,b])] = \frac{1 + o(1)}{\sqrt{3}} \int_a^b \frac{1}{n} K_{n+1}(x,x)\, d\mu(x). \tag{3.6} \]
Proof. First note that the hypothesis that $\mu' \ge C^{-1}$ in $O$ gives [12, Theorem 3.3, p. 104]
\[ C_1 = \sup_{n \ge 1}\, \sup_{x \in [a,b]} \frac{1}{n} K_{n+1}(x,x) < \infty. \]
Next, we use Corollary 1.4 in [24, p. 224]. It gives for all $j, k \ge 0$,
\[ \lim_{n\to\infty} \int_a^b \left| \frac{\tilde{K}_{n+1}^{(j,k)}(x,x)}{\tilde{K}_{n+1}(x,x)^{j+k+1}} - \pi^{j+k}\tau_{j,k} \right| dx = 0. \tag{3.7} \]
Here
\[ \tau_{j,k} = \begin{cases} 0, & j+k \text{ odd}, \\[4pt] \dfrac{(-1)^{(j-k)/2}}{j+k+1}, & j+k \text{ even}. \end{cases} \]
Applying (1.2) in a modified form, we obtain that
\[ \frac{1}{n}\, E[N_n([a,b])] = \frac{1}{\pi} \int_a^b \sqrt{ \frac{\tilde{K}_{n+1}^{(1,1)}(x,x)}{\tilde{K}_{n+1}(x,x)^{3}} - \left( \frac{\tilde{K}_{n+1}^{(0,1)}(x,x)}{\tilde{K}_{n+1}(x,x)^{2}} \right)^{2} }\; \frac{1}{n}\, \tilde{K}_{n+1}(x,x)\, dx. \tag{3.8} \]
Since $\frac{1}{n}\tilde{K}_{n+1}(x,x)$ is bounded uniformly in $n$ and in $x \in [a,b]$, we can use (3.7) above to obtain
\begin{align*}
\frac{1}{n}\, E[N_n([a,b])] &= \frac{1}{\pi} \left( \sqrt{\pi^2 \tau_{1,1} - (\pi \tau_{0,1})^2} + o(1) \right) \int_a^b \frac{1}{n}\, \tilde{K}_{n+1}(x,x)\, dx \\
&= \frac{1 + o(1)}{\sqrt{3}} \int_a^b \frac{1}{n}\, \tilde{K}_{n+1}(x,x)\, dx.
\end{align*}
Proof of Theorem 2.2. Note that since $\mu' > 0$ a.e. in $[a,b]$, this interval is contained in $\mathrm{supp}\, \nu_K$. In [29, p.
287, Theorem 1], under weaker conditions, Totik proved that for a.e. $x \in [a,b]$,
\[ \lim_{n\to\infty} \frac{1}{n} K_{n+1}(x,x) = \frac{d\nu_K}{d\mu}(x). \]
Since
\[ \lim_{n\to\infty} \frac{1}{n} \tilde{K}_{n+1}(x,x) = \frac{d\nu_K}{d\mu}(x)\, \mu'(x) = \nu_K'(x), \]
the uniform boundedness of $\left\{ \frac{1}{n} \tilde{K}_{n+1}(x,x) \right\}_{n=1}^{\infty}$ and Lemma 3.2 then give the result.
Proof of Theorem 2.3. We start with part (a). Given $r > 0$ and $j, k \ge 0$, with $\tau_{j,k}$ as above, it follows from
[24, p. 250, Proof of Corollary 1.4] that
\[ \left| \frac{\tilde{K}_{n+1}^{(j,k)}(x,x)}{\tilde{K}_{n+1}(x,x)^{j+k+1}} - \pi^{j+k}\tau_{j,k} \right| \le \frac{j!\,k!}{r^{j+k}} \sup_{|u|,|v|\le r} \left| \frac{K_{n+1}\!\left( x + \frac{u}{\tilde{K}_{n+1}(x,x)},\, x + \frac{v}{\tilde{K}_{n+1}(x,x)} \right)}{K_{n+1}(x,x)} - \frac{\sin(\pi(u-v))}{\pi(u-v)} \right|. \]
Next, using that $\mu' > 0$ a.e. in $[a,b]$, we have from [24, p. 223, Theorem 1.1] that
\[ \mathrm{meas}\left\{ x \in [a,b] : \sup_{|u|,|v|\le r} \left| \frac{K_{n+1}\!\left( x + \frac{u}{\tilde{K}_{n+1}(x,x)},\, x + \frac{v}{\tilde{K}_{n+1}(x,x)} \right)}{K_{n+1}(x,x)} - \frac{\sin(\pi(u-v))}{\pi(u-v)} \right| \ge \varepsilon \right\} \to 0 \]
as $n \to \infty$, for any given $\varepsilon, r > 0$. Thus also
\[ \mathrm{meas}\left\{ x \in [a,b] : \left| \frac{\tilde{K}_{n+1}^{(j,k)}(x,x)}{\tilde{K}_{n+1}(x,x)^{j+k+1}} - \pi^{j+k}\tau_{j,k} \right| \ge \varepsilon \right\} \to 0 \]
as $n \to \infty$. Now let $\varepsilon > 0$, and for $n \ge 1$, let
\[ E_n = \left\{ x \in [a,b] : \sqrt{ \frac{\tilde{K}_{n+1}^{(1,1)}(x,x)}{\tilde{K}_{n+1}(x,x)^{3}} - \left( \frac{\tilde{K}_{n+1}^{(0,1)}(x,x)}{\tilde{K}_{n+1}(x,x)^{2}} \right)^{2} } \le \sqrt{\pi^2/3 - \varepsilon} \right\}. \]
Then it follows that $\mathrm{meas}(E_n) \to 0$ as $n \to \infty$.
Using [30, p. 118, Thm. 2.1], we have for a.e. $x \in [a,b]$ that
\[ \liminf_{n\to\infty} \frac{1}{n} \tilde{K}_{n+1}(x,x) \ge \nu_K'(x). \]
It then follows that, given $\varepsilon > 0$,
\[ F_n = \left\{ x \in [a,b] : \frac{1}{n} \tilde{K}_{n+1}(x,x) \le \nu_K'(x) - \varepsilon \right\} \]
has
\[ \mathrm{meas}(F_n) \to 0 \quad \text{as } n \to \infty. \tag{3.9} \]
Indeed, if we set
\[ f_n(x) = \min\left( \frac{1}{n} \tilde{K}_{n+1}(x,x) - \nu_K'(x),\ 0 \right), \]
then by Totik's result,
\[ \lim_{n\to\infty} f_n(x) = 0 \quad \text{a.e. in } [a,b], \]
while $f_n$ is bounded below by $-\nu_K'$, so Lebesgue's Dominated Convergence Theorem gives
\[ 0 = \lim_{n\to\infty} \int_a^b f_n \le \liminf_{n\to\infty}\, (-\varepsilon)\, \mathrm{meas}(F_n). \]
Thus (3.9) holds. Then by (1.2), (3.8) and the definitions of $E_n$ and $F_n$, we have
\begin{align*}
\frac{1}{n}\, E[N_n([a,b])] &= \frac{1}{\pi} \int_a^b \sqrt{ \frac{\tilde{K}_{n+1}^{(1,1)}(x,x)}{\tilde{K}_{n+1}(x,x)^{3}} - \left( \frac{\tilde{K}_{n+1}^{(0,1)}(x,x)}{\tilde{K}_{n+1}(x,x)^{2}} \right)^{2} }\; \frac{1}{n}\, \tilde{K}_{n+1}(x,x)\, dx \\
&\ge \frac{1}{\pi} \int_{[a,b]\setminus(E_n \cup F_n)} \sqrt{\pi^2/3 - \varepsilon}\, \left( \nu_K'(x) - \varepsilon \right) dx \\
&\to \frac{1}{\pi} \int_a^b \sqrt{\pi^2/3 - \varepsilon}\, \left( \nu_K'(x) - \varepsilon \right) dx \quad \text{as } n \to \infty.
\end{align*}
Now we can let $\varepsilon \to 0$.
We pass to the proof of part (b). Let $L \subset K$ be a regular compact set such that the restriction $\mu|_L$ of $\mu$ to
$L$ is STU-regular, and $L$ contains $[a,b]$ in its interior. By monotonicity of the reproducing kernel (Christoffel
function), if $K_n(\mu|_L, \cdot, \cdot)$ denotes the reproducing kernel of the measure $\mu|_L$, then for a.e. $x \in [a,b] \subset L$,
Totik's result [29, p. 287, Theorem 1] gives
\[ \limsup_{n\to\infty} \frac{1}{n} K_{n+1}(x,x)\, \mu'(x) \le \limsup_{n\to\infty} \frac{1}{n} K_{n+1}(\mu|_L, x, x)\, \mu'(x) = \nu_L'(x). \]
Moreover, $\left\{ \frac{1}{n} K_{n+1}(\mu|_L, x, x)\, \mu'(x) \right\}_{n=1}^{\infty}$ is uniformly bounded in $[a,b]$. Then Lemma 3.2 implies that
\[ \limsup_{n\to\infty} \frac{1}{n}\, E[N_n([a,b])] \le \frac{1}{\sqrt{3}} \int_a^b \nu_L'(x)\, dx. \]
Finally, taking the inf over all $L$ gives the result.
Lemma 3.3. Let µ be an STU regular measure on the real line with compact support K, and let νK be the
equilibrium measure of K. Suppose that the coefficients of random orthogonal polynomials (2.1) are complex
i.i.d. random variables such that E[| log |c0 ||] < ∞. If E ⊂ C is any compact set satisfying νK (∂E) = 0, then
\[ \lim_{n\to\infty} \frac{1}{n}\, E[N_n(E)] = \nu_K(E). \tag{3.10} \]
Proof. Consider the normalized counting measure $\tau_n = \frac{1}{n} \sum_{k=1}^{n} \delta_{z_k}$ for a polynomial (2.1), where $\{z_k\}_{k=1}^{n}$
are the zeros of that polynomial, and $\delta_z$ denotes the unit point mass at $z$. Theorem 2.2 of [25] implies that
measures τn converge weakly to νK with probability one. Since νK (∂E) = 0, we obtain that τn |E converges
weakly to νK |E with probability one by Theorem 0.50 of [20] and Theorem 2.1 of [3]. In particular, we have
that the random variables τn (E) → νK (E) a.s. Hence this convergence holds in Lp sense by the Dominated
Convergence Theorem, as τn (E) are uniformly bounded by 1, see Chapter 5 of [14]. It follows that
\[ \lim_{n\to\infty} E\left[\, |\tau_n(E) - \nu_K(E)|\, \right] = 0 \]
for any compact set $E$ such that $\nu_K(\partial E) = 0$, and
\[ \left| E[\tau_n(E) - \nu_K(E)] \right| \le E\left[\, |\tau_n(E) - \nu_K(E)|\, \right] \to 0 \quad \text{as } n \to \infty. \]
But E[τn (E)] = E[Nn (E)]/n and E[νK (E)] = νK (E), which immediately gives (3.10).
Proof of Theorem 2.1. Given any ε > 0, we find a closed set S satisfying the assumptions, and obtain from
Theorem 2.2 that
\[ \lim_{n\to\infty} \frac{1}{n}\, E[N_n([a,b])] = \frac{1}{\sqrt{3}}\, \nu_K([a,b]) \]
for any interval [a, b] ⊂ K ◦ \ S, where K ◦ is the interior of K. Note that both E [Nn (H)] and νK (H) are
additive functions of the set H. Moreover, they both vanish when H is a single point by (3.10), because
νK is absolutely continuous with respect to Lebesgue measure on K, see [27, Lemma 4.4.1, p. 117]. Hence
(3.10) gives that
\[ \lim_{n\to\infty} \frac{1}{n}\, E[N_n(\mathbb{R}\setminus S)] = \frac{1}{\sqrt{3}}\, \nu_K(\mathbb{R}\setminus S). \]
We can find finitely many open intervals $I_k \subset \mathbb{R}$, $k = 1, \ldots, m$, covering $S$, with total length $\sum_{k=1}^{m} |I_k| < 2\varepsilon$.
Let $R_k = \{x + iy : x \in I_k,\ |y| < 1\}$, $k = 1, \ldots, m$, so that for $R = \cup_{k=1}^{m} R_k$ we have $S \subset R$ and $\nu_K(\partial R) = 0$.
Applying Lemma 3.3 again, we obtain that
\[ \limsup_{n\to\infty} \frac{1}{n}\, E[N_n(S)] \le \limsup_{n\to\infty} \frac{1}{n}\, E\big[ N_n\big( \overline{R} \big) \big] = \nu_K\big( \overline{R} \cap \mathbb{R} \big) = \nu_K\big( \cup_{k=1}^{m} \overline{I}_k \big). \]
Absolute continuity of $\nu_K$ with respect to $dx$ implies that the last term in the above estimate tends to 0 as
$\varepsilon \to 0$. Thus (2.3) follows.
References
1. A. T. Bharucha-Reid and M. Sambandham, Random Polynomials, Academic Press, Orlando, 1986.
2. A. Bloch and G. Pólya, On the roots of certain algebraic equations, Proc. London Math. Soc. 33 (1932), 102–114.
3. P. Billingsley, Convergence of Probability Measures, John Wiley & Sons, Inc., New York, 1999.
4. M. Das, Real zeros of a random sum of orthogonal polynomials, Proc. Amer. Math. Soc. 27 (1971), 147–153.
5. M. Das, The average number of real zeros of a random trigonometric polynomial, Proc. Camb. Phil. Soc. 64 (1968), 721–729.
6. M. Das and S. S. Bhatt, Real roots of random harmonic equations, Indian J. Pure Appl. Math. 13 (1982), 411–420.
7. A. Edelman and E. Kostlan, How many zeros of a random polynomial are real?, Bull. Amer. Math. Soc. 32 (1995), 1–37.
8. P. Erdős and A. C. Offord, On the number of real roots of a random algebraic equation, Proc. London Math. Soc. 6 (1956), 139–160.
9. K. Farahmand, Topics in Random Polynomials, Pitman Res. Notes Math. 393 (1998).
10. K. Farahmand, Level crossings of a random orthogonal polynomial, Analysis 16 (1996), 245–253.
11. K. Farahmand, On random orthogonal polynomials, J. Appl. Math. Stochastic Anal. 14 (2001), 265–274.
12. G. Freud, Orthogonal Polynomials, Akademiai Kiado/Pergamon Press, Budapest, 1971.
13. I. S. Gradshteyn and I. M. Ryzhik, Table of Integrals, Series and Products, Academic Press, Burlington, 2007.
14. A. Gut, Probability: A Graduate Course, Springer, New York, 2005.
15. I. A. Ibragimov and N. B. Maslova, The average number of zeros of random polynomials, Vestnik Leningrad University 23 (1968), 171–172.
16. I. A. Ibragimov and N. B. Maslova, The mean number of real zeros of random polynomials. I. Coefficients with zero mean, Theory Probab. Appl. 16 (1971), 228–248.
17. M. Kac, On the average number of real roots of a random algebraic equation, Bull. Amer. Math. Soc. 49 (1943), 314–320.
18. M. Kac, On the average number of real roots of a random algebraic equation. II, Proc. London Math. Soc. 50 (1948), 390–408.
19. M. Kac, Nature of probability reasoning, Probability and Related Topics in Physical Sciences (Proceedings of the Summer Seminar, Boulder, Colo., 1957), Vol. I, Interscience Publishers, London-New York, 1959.
20. N. S. Landkof, Foundations of Modern Potential Theory, Springer-Verlag, New York-Heidelberg, 1972.
21. J. E. Littlewood and A. C. Offord, On the number of real roots of a random algebraic equation, J. Lond. Math. Soc. 13 (1938), 288–295.
22. J. E. Littlewood and A. C. Offord, On the number of real roots of a random algebraic equation. II, Proc. Camb. Philos. Soc. 35 (1939), 133–148.
23. D. S. Lubinsky, A new approach to universality limits involving orthogonal polynomials, Ann. Math. 170 (2009), 915–939.
24. D. S. Lubinsky, Bulk universality holds in measure for compactly supported measures, J. Anal. Math. 116 (2012), 219–253.
25. I. E. Pritsker, Zero distribution of random polynomials, preprint, arXiv:1409.1631.
26. T. Ransford, Potential Theory in the Complex Plane, Cambridge Univ. Press, Cambridge, 1995.
27. H. Stahl and V. Totik, General Orthogonal Polynomials, Cambridge Univ. Press, Cambridge, 1992.
28. D. C. Stevens, The average number of real zeros of a random polynomial, Comm. Pure Appl. Math. 22 (1969), 457–477.
29. V. Totik, Asymptotics for Christoffel functions for general measures on the real line, J. Anal. Math. 81 (2000), 283–303.
30. V. Totik, Asymptotics of Christoffel functions on arcs and curves, Adv. Math. 252 (2014), 114–149.
31. N. Trefethen, Roots of random polynomials on an interval, chebfun example page, http://www.chebfun.org/examples/roots/RandomPolys.html
32. Y. J. Wang, Bounds on the average number of real roots of a random algebraic equation, Chinese Ann. Math. Ser. A 4 (1983), 601–605.
33. J. E. Wilkins, Jr., An asymptotic expansion for the expected number of real zeros of a random polynomial, Proc. Amer. Math. Soc. 103 (1988), 1249–1258.
34. J. E. Wilkins, Jr., The expected value of the number of real zeros of a random sum of Legendre polynomials, Proc. Amer. Math. Soc. 125 (1997), 1531–1536.
School of Mathematics, Georgia Institute of Technology, Atlanta, GA 30332, USA
E-mail address: [email protected]
Department of Mathematics, Oklahoma State University, Stillwater, OK 74078, USA
E-mail address: [email protected]
Department of Mathematics, Oklahoma State University, Stillwater, OK 74078, USA
E-mail address: [email protected]