Asymptotic expansion of oscillatory integrals
satisfying Varchenko’s condition
Maxim Gilula
October 25, 2015
Abstract
We consider scalar oscillatory integrals with real analytic phase φ satisfying the analytic condition used by Varchenko in [15]. We first show
Varchenko’s condition implies a decay rate for ∇φ close to the origin. This
decay rate allows us to integrate by parts away from the singularities of
∇φ. We decompose our integral into dyadic boxes close enough to the
origin, estimate each piece using the decay rate, and apply linear programming to obtain Varchenko’s estimates. The techniques in this proof
allow us to compute the exponents appearing in the asymptotic expansion
of scalar oscillatory integrals satisfying Varchenko’s condition. Moreover,
we show the asymptotic expansion holds for all λ > 2.
1 Introduction
In 1976, Varchenko proved what is now a very well known result quantifying
decay of scalar oscillatory integrals
$$I(\lambda) = \int_{\mathbb{R}^d} e^{i\lambda\phi(x)}\psi(x)\,dx$$
without assuming a uniform lower bound on any derivative of the phase, nor on the Hessian [15]. In his revolutionary paper, Varchenko used techniques from
complex analysis and algebraic geometry to show that under certain analytic
conditions on a real-valued phase φ, for any smooth amplitude ψ supported in
a sufficiently small neighborhood of the origin, there is a positive constant C
independent of λ such that
$$|I(\lambda)| \le C\lambda^{-1/t}\log^{d-1-k}(\lambda) \tag{1}$$
as λ → ∞, where t > 0 and 0 ≤ k ≤ d − 1 are read off from the Newton
polyhedron of φ, and the exponent of λ is sharp (over all ψ) if t > 1. The
importance of Varchenko’s discovery inspired new proofs of these estimates,
e.g., Greenblatt[5] via resolution of singularities, and Kamimoto-Nose[9], with
both papers including some generalizations. The first aim of this paper is to
provide a proof of (1) using only integration by parts, linear programming, and
the linear algebraic structure of $\mathbb{R}^d$. Sharpness of the bounds will not be proved.
The second is to develop an asymptotic expansion for I(λ) for large λ. One can
find, for example in Malgrange[12], that as λ → ∞,
$$I(\lambda) \sim \sum_{p}\sum_{k=0}^{d-1} a_{p,k}(\psi)\,\lambda^{-p}\log^{d-1-k}(\lambda),$$
where p runs through finitely many arithmetic progressions, independent of
ψ, constructed from positive rational numbers. We will compute the powers
explicitly from the Newton polyhedron of φ without making use of previously
known results about the expansion: there is a simple geometric description of
the arithmetic progressions. We are also able to be more precise about which exponents of log appear, and we show the expansion holds for all λ > 2. Since we do not
have sharp estimates for the case t ≤ 1, we cannot be precise about when the
coefficients of the expansion are nonzero in general.
Most results in this paper rely on examining the Newton polyhedron of a
real analytic function. Over the past few decades, this combinatorial object has
been an important tool related to oscillatory integrals. For example, in 2001,
Phong, Stein, and Sturm[13] used the Newton polyhedron to find decay rates
of oscillatory integral operators with polynomial phases and interpreted their
multilinear operators as the analytic notions corresponding to the geometric
notion of Newton distance.
In addition to [13], there have been many inspirational results in the study
of van der Corput’s lemma and multilinear operators in higher dimensions, e.g.,
by Carbery, Christ and Wright in [2], Gressman’s geometric perspective of these
two papers in [7], by Carbery-Wright in [3], assuming only a smooth phase in
R2 by Rychkov in [14], a simple proof assuming convex functions and domains
by Carbery in [1], and many more. There are even results still being discovered
in one dimension, e.g., by Do-Gressman in [4].
2 Terminology
Without loss of generality, we work towards estimating
$$\int_{\mathbb{R}^d} e^{i\lambda\phi(x)}\psi(x)\chi_{\mathbb{R}^d_{\ge}}(x)\,dx, \tag{2}$$
since estimating each orthant above is a symmetric problem for phases satisfying Varchenko’s condition. The estimate of $I(\lambda)$ can be deduced by summing the estimates over all $2^d$ orthants. Also by symmetry, we assume that $\phi(x) = \sum_\alpha c_\alpha x^\alpha$ has a uniformly and absolutely convergent power series in $[0,4]^d$, although 4 is for convenience: it can be replaced by any positive real number we wish. Without
loss of generality we assume that $\phi(0) = 0$, or else we could factor out $e^{i\lambda\phi(0)}$. We also assume $\phi$ is not identically zero in $[0,4]^d$.
To estimate (2), we use a partition of unity and reduce to estimating
$$I(\lambda,\varepsilon) = \int_{[\varepsilon,4\varepsilon]} e^{i\lambda\phi(x)}\psi(x)\eta_\varepsilon(x)\,dx$$
where $\varepsilon = (\varepsilon_1,\dots,\varepsilon_d)\in(0,1)^d$ is small enough, $[\varepsilon,4\varepsilon]$ is the box $\prod_{j=1}^d[\varepsilon_j,4\varepsilon_j]$, $\eta_\varepsilon$ is smooth with support in $[\varepsilon,4\varepsilon]$, $\psi$ is our smooth amplitude, and $\phi$ satisfies
Varchenko’s condition. We choose {[ε, 4ε]} to be a set of dyadic boxes and
decompose our amplitude into a sum of amplitudes supported in [ε, 4ε] to prove
the final estimate. In order to discuss the main results, we require some more
terminology and notation.
2.1 Basic notation
We write $\mathbb{N}$ for the set of nonnegative integers and $\mathbb{R}_{\ge}$ for the set of nonnegative reals. The convention for elements $x\in\mathbb{R}^d$ is $x = (x_1,\dots,x_d)$, i.e., $x_i$ is the $i$-th component of $x$. Next, some algebraic conventions are introduced. In addition to the standard notation $\partial^\alpha = \frac{\partial^{\alpha_1}}{\partial x_1^{\alpha_1}}\cdots\frac{\partial^{\alpha_d}}{\partial x_d^{\alpha_d}}$, $y^\alpha = y_1^{\alpha_1}\cdots y_d^{\alpha_d}$, and $|y| = y_1+\cdots+y_d$ for $y\in\mathbb{R}^d_{\ge}$ and multi-indices $\alpha$, we make use of some less standard notation for $c\in\mathbb{R}$ and $y,z\in\mathbb{R}^d_{\ge}$:
• $yz = (y_1z_1,\dots,y_dz_d)$;
• if $c>0$, denote the vector $(c^{y_1},\dots,c^{y_d})$ by $c^y$;
• $[y,4y]$ is defined to be the box $\prod_{j=1}^d[y_j,4y_j]$;
• if the components of $y$ are positive, $f_y(x) = f(y_1^{-1}x_1,\dots,y_d^{-1}x_d)$;
• $c$ is the vector $(c,\dots,c)$.
In particular, note that $(c^y)^z = c^{\langle y,z\rangle}$ and $(c^yx)^z = c^{\langle y,z\rangle}x^z$. Lastly, we write
$$f(x) \lesssim g(x)$$
for positive real-valued functions $f$ and $g$ to express that there is a positive constant $C$ independent of $x$ such that $f(x)\le Cg(x)$ for all $x$ wherever this expression makes sense.
2.2 Newton polyhedron
Now we move on to some definitions involving the Newton polyhedron, a key object of study in the upcoming sections. Note: one can find the facts we assume about polyhedra in, e.g., Grünbaum[8].
We assume throughout that $\phi:\mathbb{R}^d\to\mathbb{R}$ is analytic around the origin.
Definition 1 (Newton polyhedron). Denote the set of indices of the nonzero coefficients in the expansion $\phi(x) = \sum_\alpha c_\alpha x^\alpha$ by
$$\mathrm{supp}_+(\phi) = \{\alpha\in\mathbb{N}^d : c_\alpha\neq 0\}.$$
We define the Newton polyhedron of $\phi$ to be the convex hull of the union
$$\bigcup_{\alpha\in\mathrm{supp}_+(\phi)}\big(\alpha + \mathbb{R}^d_{\ge}\big),$$
and we denote the Newton polyhedron of $\phi$ by $N_+(\phi)$.
The Newton polyhedron is a slightly bulkier object than we require: most
of the time we will only refer to the compact faces of the Newton polyhedron,
so we define the Newton diagram of φ as the union of all compact faces of
N+ (φ) and denote it by N (φ); furthermore, we define the finite set supp(φ) =
N (φ) ∩ supp+ (φ). We remind the reader that a subset F of a polyhedron P is
a face if there is some supporting hyperplane H of P satisfying H ∩ P = F.
Moreover, we say F is a face of N (φ) if it is a face of N+ (φ), and say H is a
supporting hyperplane of N (φ) if H is a supporting hyperplane of N+ (φ) and
if H ∩ N+ (φ) is compact.
Definition 2 (Newton distance). The Newton distance of $\phi$ is defined by the infimum
$$t = \inf\{s\in(0,\infty) : (s,\dots,s)\in N_+(\phi)\}.$$
One can check t ≥ 1/d if φ(0) = 0.
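As a running illustration (a standard model example we add for concreteness; it is not discussed in the text), take $d = 2$ and $\phi(x) = x_1^2+x_2^3$. Then $\mathrm{supp}_+(\phi) = \{(2,0),(0,3)\}$, the Newton polyhedron is
$$N_+(\phi) = \mathrm{conv}\big(((2,0)+\mathbb{R}^2_{\ge})\cup((0,3)+\mathbb{R}^2_{\ge})\big),$$
and the Newton diagram $N(\phi)$ consists of the vertices $(2,0)$, $(0,3)$ and the edge $\{\xi\in\mathbb{R}^2_{\ge} : \xi_1/2+\xi_2/3 = 1,\ 0\le\xi_1\le2\}$. The Newton distance solves $s/2+s/3 = 1$, so $t = 6/5$; this is consistent with the classical decay rate $\lambda^{-1/2}\cdot\lambda^{-1/3} = \lambda^{-1/t}$ for this phase.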
Definition 3 (The polynomials $\phi_F$). For any face $F\subset N(\phi)$, denote by $\phi_F$ the polynomial
$$\phi_F(x) = \sum_{\alpha\in F\cap\,\mathrm{supp}(\phi)} c_\alpha x^\alpha.$$
We can finally define the analytic condition we impose on our phase, originally used by Varchenko[15] to prove (1):
Definition 4 (Varchenko’s condition). We say that $\phi$ satisfies Varchenko’s condition if for all faces $F\subset N(\phi)$ the polynomials $\phi_F$ satisfy
$$\|x\nabla\phi_F(x)\| \neq 0$$
for all $x$ such that $x_1\cdots x_d\neq0$, where $\|\cdot\|$ is the $\ell^\infty(\mathbb{R}^d)$ norm (of the vector $x\nabla\phi_F(x)$ for fixed $x$). We say a scalar oscillatory integral $\int_{\mathbb{R}^d}e^{i\lambda\phi(x)}\psi(x)\,dx$ satisfies Varchenko’s condition if $\phi$ does.
Varchenko’s condition is equivalent to the property that for every face $F\subset N(\phi)$ and every $x$ away from the coordinate hyperplanes, some component of $\nabla\phi_F(x)$ is nonzero; the phrasing used in the definition is preferred because working with $x\nabla\phi_F(x)$ (resp. $x\nabla\phi(x)$) is easier than working with $\nabla\phi_F(x)$ (resp. $\nabla\phi(x)$), since the Newton polyhedron of each component of $x\nabla\phi(x)$ is contained in the Newton polyhedron of $\phi(x)$. We see later on why this is important.
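Continuing the model example above (ours, not Varchenko’s): for $\phi(x) = x_1^2+x_2^3$ and the edge $F$ of $N(\phi)$ we have $\phi_F = \phi$ and
$$x\nabla\phi_F(x) = (2x_1^2,\ 3x_2^3),$$
which has a nonzero component whenever $x_1x_2\neq0$; the two vertices are checked the same way, so $\phi$ satisfies Varchenko’s condition. By contrast, $\phi(x) = (x_1-x_2)^2$ fails the condition: its Newton diagram is the single edge joining $(2,0)$ and $(0,2)$, and $x\nabla\phi_F(x) = (2x_1(x_1-x_2),\,-2x_2(x_1-x_2))$ vanishes on the line $x_1 = x_2\neq0$.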
3 Main results
A crucial tool used to prove the main results is quantifying how ∇φ behaves
near the origin:
Lemma 1. Assume $\phi$ satisfies Varchenko’s condition. For all $\varepsilon\in(0,1)^d$ small enough, for all $x$ in the box $[\varepsilon,4\varepsilon]$, and for all $\alpha\in N_+(\phi)$, we have the lower bound
$$\|x\nabla\phi(x)\| \gtrsim \varepsilon^\alpha,$$
where the implicit constant is independent of $\varepsilon$.
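A sanity check we include, using the model example from section 2.2: for $\phi(x) = x_1^2+x_2^3$ we have $x\nabla\phi(x) = (2x_1^2, 3x_2^3)$, so on $[\varepsilon,4\varepsilon]$ we get $\|x\nabla\phi(x)\| \ge \max(\varepsilon_1^2,\varepsilon_2^3)$, while every $\alpha\in N_+(\phi)$ dominates componentwise some convex combination $\theta(2,0)+(1-\theta)(0,3)$, so that
$$\varepsilon^\alpha \le (\varepsilon_1^2)^\theta(\varepsilon_2^3)^{1-\theta} \le \max(\varepsilon_1^2,\varepsilon_2^3).$$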
So we see Varchenko’s condition implies a very useful growth rate for $\nabla\phi$ around the origin. “Small enough” will be made explicit in (11), as we don’t yet have all
of the necessary information to state it here. With this lemma we are able to
prove the most useful result in the paper:
Lemma 2. Assume $\phi$ satisfies Varchenko’s condition. Let $\psi:\mathbb{R}^d\to\mathbb{R}$ be smooth and supported in $[1,4]^d$. For all $\varepsilon\in(0,1)^d$ small enough, we have the estimate
$$\Big|\int_{\mathbb{R}^d}e^{i\lambda\phi(x)}\psi_\varepsilon(x)\,dx\Big| \lesssim \lambda^{-N}\varepsilon^{-(N\alpha-1)} \tag{3}$$
for all $\lambda>0$, all $N\in\mathbb{N}$, and all $\alpha\in N_+(\phi)$, where the implicit constant above is independent of $\varepsilon$ and $\lambda$.
Using induction and techniques from the proof of lemma 2, we are able to
prove theorem 1. Next, to introduce theorem 1 we need to briefly discuss another
convention. From now on, when we talk about supporting hyperplanes $H$ of $N_+(\phi)$ we mean only those not containing the origin, and we use a normalization convention for normal vectors to $H$: we pick the unique vector $w\in\mathbb{R}^d_{\ge}$ satisfying $H = \{\xi : \langle\xi,w\rangle = 1\}$. Any supporting hyperplane $H$ of $N_+(\phi)$ (not containing the origin) can be defined this way, and we write $H_w$ for such $H$, namely $H_w = \{\xi : \langle\xi,w\rangle = 1\}$. It is a simple exercise in linear algebra to show such normals exist and have rational components, along with other properties; we discuss these facts in more detail later on. We say $w$ is a normal of the face $F$ of $N_+(\phi)$ if $H_w\cap N_+(\phi) = F$, and say $w$ is a normal of the face $F$ of $N(\phi)$ if $F$ is a compact face of $N_+(\phi)$ with normal $w$. Note that we can say the normal of $F$ if $F$ is codimension 1. We also say $w$ is a normal of $N(\phi)$ if $w$ is a normal of any $F\subset N(\phi)$. It is important to remember that we only refer to normals $w$ of supporting hyperplanes of $N_+(\phi)$ not containing the origin. Such normals are guaranteed to exist since $\phi(0) = 0$ implies $N_+(\phi)$ doesn’t contain the origin.
We use the convention of writing $w(\beta)$ for the set $\{w : \langle\beta,w\rangle\ \text{is minimal}\}$, where the minimum is taken over all (finitely many) normals $w$ of codimension 1 faces $F$ of $N_+(\phi)$. We write $\langle\beta,w(\beta)\rangle$ for the scalar $\min_w\langle\beta,w\rangle$. This convention
will be used mainly in section 7.
Theorem 1. Assume $\phi$ satisfies Varchenko’s condition. If $\psi:\mathbb{R}^d\to\mathbb{R}$ is smooth and supported close enough to the origin, then for $\lambda > 2$,
$$\int_{\mathbb{R}^d_{\ge}}e^{i\lambda\phi(x)}\psi(x)\,dx \sim \sum_{j=0}^{\infty}\sum_{k=0}^{d_j-1}a_{j,k}(\psi)\,\lambda^{-p_j}\log^{d_j-1-k}(\lambda), \tag{4}$$
where $p_0 < p_1 < \cdots$ is the ordering of the set $\{\langle\alpha+1,w(\alpha+1)\rangle\}_{\alpha\in\mathbb{N}^d}$ and $1\le d_j\le d$ is the greatest codimension over all faces intersecting the lines
$$\{s\cdot(\alpha+1) : s\in\mathbb{R},\ \alpha\in\mathbb{N}^d,\ \langle\alpha+1,w(\alpha+1)\rangle = p_j\}.$$
If we rewrite the sum (4) as $\sum_{\ell=0}^{\infty}a_\ell F_\ell(\lambda)$, where $a_n\in\{a_{j,k}(\psi)\}_{j,k}$ and $F_{n+1}(\lambda)\lesssim F_n(\lambda)$ for all $n\in\mathbb{N}$, the asymptotic expansion (4) holds in the sense that there is an implicit constant independent of $\lambda$ such that
$$\Big|\int_{\mathbb{R}^d_{\ge}}e^{i\lambda\phi(x)}\psi(x)\,dx - \sum_{\ell=0}^{N}a_\ell F_\ell(\lambda)\Big| \lesssim F_{N+1}(\lambda).$$
Some points of theorem 1 require clarification. First, we show that $\{\langle\alpha+1,w(\alpha+1)\rangle\}_{\alpha\in\mathbb{N}^d}$ runs through finitely many arithmetic progressions of positive rationals (and therefore can be ordered as claimed in the theorem). Each normal $w$ of a codimension 1 face $F$ of $N_+(\phi)$ can be uniquely defined by $d$ linearly independent vectors $\alpha^i$ in $\mathrm{supp}_+(\phi)\cap F$. If $A$ is the matrix with rows $\alpha^i$, then $w$ must satisfy $Aw = 1$, by definition. Hence, $w = A^{-1}1$. The matrix $A^{-1}$ must have rational entries, since $A$ has rational entries; therefore $w\in\mathbb{Q}^d$. In fact, each component of $w$ must be nonnegative because $w$ is oriented towards the interior of the Newton polyhedron, so in particular $w\in\mathbb{R}^d_{\ge}$. The Newton polyhedron has finitely many codimension 1 faces, so there are finitely many such $w$ and therefore finitely many rational components $w_j$. The arithmetic progressions come from these components.
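For instance (an illustration we add, with the model phase $\phi(x) = x_1^3+x_1x_2+x_2^3$): the codimension 1 face through $(3,0)$ and $(1,1)$ gives
$$A = \begin{pmatrix}3 & 0\\ 1 & 1\end{pmatrix},\qquad w = A^{-1}1 = (1/3,\,2/3)\in\mathbb{Q}^2_{\ge},$$
and the other face has normal $(2/3,1/3)$ by symmetry. The corresponding exponents $\langle\alpha+1,w\rangle = 1+(\alpha_1+2\alpha_2)/3$ indeed run through the arithmetic progression $1+\mathbb{N}/3$.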
Varchenko showed that the first term of the expansion (4) with nonzero coefficient is $\lambda^{-1/t}\log^{d-1-k}(\lambda)$, where $k$ is the smallest dimension over all faces containing $t$ (so that $d-k$ is the largest codimension), assuming the Newton distance $t$ of $N_+(\phi)$ satisfies $t > 1$. It is easy to see for any positive scalar $c$ that $w(c(\alpha+1)) = w(\alpha+1)$. Therefore $w(1) = w(t)$, and since $\langle t,w(t)\rangle = 1$, we conclude $\langle1,w(1)\rangle = 1/t$. Clearly $p_0 = \langle1,w(1)\rangle = 1/t$, because over all $w$ with nonnegative components and all $\alpha\in\mathbb{N}^d$, $\langle\alpha+1,w\rangle \ge \langle1,w\rangle$, which is bounded below by $1/t$ for all normals $w$ of $N_+(\phi)$: $\langle t,w\rangle \ge 1$ implies $\langle1,w\rangle \ge 1/t$.
Note: there is an easy geometric way to describe the exponents in theorem 1. First, write $\langle\alpha+1,w\rangle = c\langle\frac{\alpha+1}{c},w\rangle$. If we take $c$ to be such that $\frac{\alpha+1}{c}\in\partial N_+(\phi)$, then $\langle\frac{\alpha+1}{c},w\rangle = 1$ for some normal $w$ of $N_+(\phi)$ (of a codimension 1 face whose affine hull does not contain the origin), so $w\in w(\alpha+1)$ and $c = \langle\alpha+1,w(\alpha+1)\rangle$. Hence, the set of powers $\{-p_j : j\in\mathbb{N}\}$ of $\lambda$ is equal to the set $\{-c\in\mathbb{Q} : \frac{\alpha+1}{c}\in\partial N_+(\phi)\ \text{for some}\ \alpha\in\mathbb{N}^d\}$. The power of log multiplying $\lambda^{-c}$ is equal to the largest codimension over all faces containing $\frac{\alpha+1}{c}$. Varchenko’s estimate is the case $\alpha = 0$ ($c = 1/t$).
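To make the geometric description concrete with the running example $\phi(x) = x_1^2+x_2^3$ (again ours): the only codimension 1 normal is $w = (1/2,1/3)$, so $w(\beta) = \{w\}$ for every $\beta$ and the exponents of $\lambda$ are
$$-\langle\alpha+1,w\rangle = -\big(5/6 + \alpha_1/2 + \alpha_2/3\big),\qquad \alpha\in\mathbb{N}^2,$$
with $p_0 = \langle1,w\rangle = 5/6 = 1/t$. Equivalently, for $\alpha = 0$ the ray through $1 = (1,1)$ meets $\partial N_+(\phi)$ at $(6/5,6/5)$, so $c = 5/6$.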
4 Proof of lemma 1

4.1 Motivation
To prove lemma 2, we integrate the left side of (3) by parts $N$ times. Lemma 1 will be used for this purpose, although it is an interesting result in its own right, reminiscent of Lojasiewicz’s theorem 17 in [11] (an English version can be found in [10]). Lemma 1 has much stronger assumptions, but gives a much stronger result. Greenblatt also proved a very nice version of this lemma in [6] (lemma 3.6, under an assumption on the order of the zero, but not on the zero locus).
From now on, $x$ always lies in $[1,4]^d$ and we scale by $\varepsilon$ when talking about elements outside the box $[1,4]^d$. To integrate by parts, we use Varchenko’s condition to prove a lower bound on $\|y\nabla\phi(y)\|$ for $y$ close to the origin. We now illustrate the ideas used to approach this problem. The $j$-th component of $y\nabla\phi(y)$ is equal to
$$\sum_\alpha c_\alpha\alpha_jy^\alpha. \tag{5}$$
For each $F\subset N(\phi)$, we can write (5) as
$$\sum_{\alpha\in F}c_\alpha\alpha_jy^\alpha + \sum_{\alpha\notin F}c_\alpha\alpha_jy^\alpha. \tag{6}$$
The left sum of (6) equals the $j$-th component of $y\nabla\phi_F(y)$, and the vector $y\nabla\phi_F(y)$ is nonzero by Varchenko’s condition. The goal is to show the right sum is very small for all $y$ small enough, so that there are $F$ and $1\le j\le d$ for which the significant contribution comes from the polynomial we know. If for all $y = \varepsilon x\in[\varepsilon,4\varepsilon]$ we can find $F\subset N(\phi)$ such that we only have to worry about the left sum, and if we can scale so that $\varepsilon^\alpha = S$ for all $\alpha\in F$, we will be able to conclude that for some $1\le j\le d$, (5) is bounded below by a uniform constant times
$$\Big|\sum_{\alpha\in F}c_\alpha\alpha_jy^\alpha\Big| = \Big|\sum_{\alpha\in F}c_\alpha\alpha_jx^\alpha\varepsilon^\alpha\Big| = \Big|S\sum_{\alpha\in F}c_\alpha\alpha_jx^\alpha\Big| \gtrsim S = \varepsilon^\alpha$$
for all $\alpha\in F$. The compact face $F$ is chosen so that the terms $\varepsilon^\alpha$ contribute most when $\alpha\in F$, so we conclude (5) is bounded below by $\varepsilon^\alpha$ for all $\alpha\in N_+(\phi)$.
The difficulty is in showing the right sum of (6) is negligible for appropriate $F$. We will recursively define finitely many boxes $[b,4b^{-1}]^d$, where $0 < b < 1$, on which to apply Varchenko’s condition, because the right sum of (6) is not always negligible if we naively try to use the logic presented above. We might need to move to larger faces $F_{d-1}\supseteq\cdots\supseteq F_0 = F$ by moving relatively large summands of the right-hand sum of (6) to the left-hand sum, checking whether all the summands in the right-hand side are negligible, and applying Varchenko’s condition on larger and larger boxes depending only on $\phi$. This is the content of the main proposition below: proposition 3.
4.2 Supporting hyperplanes of $N_+(\phi)$ and scaling
The following proposition is used to define some constants necessary for applying Varchenko’s condition to (5). It basically says that we can move from one hyperplane not containing all vectors from some set to a new hyperplane that does contain them, such that a certain scaling holds with respect to the new hyperplane. The rest of the statement has to do with the specific case in which we apply this proposition, but really we are just proving a basic statement in linear algebra together with some bound on a vector.
Proposition 1. Let $0 < S, C < 1$. Let $v^1,\dots,v^m\in\mathbb{R}^d$ be linearly independent points satisfying $\langle v^i,w\rangle \ge 1$ for all $1\le i\le m\le d$. Let $\eta_i = \langle v^i,w\rangle - 1 \ge 0$ and assume that $1 \ge S^{\eta_i} \ge C$ for all $i$. Let $x\in[1,4]^d$. There is some $b(v^1,\dots,v^m,C)\in(0,1)$ and some $y\in[b,4b^{-1}]^d$ satisfying the equalities
$$y^{v^i} = S^{\eta_i}x^{v^i}. \tag{7}$$
Furthermore, there is some $d$-tuple $\sigma\in\mathbb{R}^d$ satisfying the bound $\|\sigma\|_\infty \le \rho(v^1,\dots,v^m)\|\eta\|_1$ such that $y = S^\sigma x$, so $C^{d\rho} \le S^{\sigma_i}x_i \le 4C^{-d\rho}$, and therefore we can take $b = C^{d\rho}$.
Proof. Let $V$ be the $m\times d$ matrix with rows $v^1,\dots,v^m$. Let $\sigma_i\in\mathbb{R}$ for $1\le i\le m$ be indeterminate. Without loss of generality, assume that the first $m$ columns of $V$ are linearly independent and define the $d$-tuples $\sigma = (\sigma_1,\dots,\sigma_m,0,\dots,0)$ and $\eta = (\eta_1,\dots,\eta_m,0,\dots,0)$. Solving the equation $V\sigma = \eta$ can be reduced to solving $\tilde V\tilde\sigma = \tilde\eta$, where $\tilde\sigma = (\sigma_1,\dots,\sigma_m)$, $\tilde\eta = (\eta_1,\dots,\eta_m)$ and $\tilde V = \{v^i_j\}_{1\le i,j\le m}$. Since $\tilde V$ has full rank, we can solve $\tilde\sigma = \tilde V^{-1}\tilde\eta$. Denoting $\|\tilde V^{-1}\|_\infty = \rho$, we bound
$$\|\tilde\sigma\|_\infty \le \|\tilde V^{-1}\|_\infty\|\tilde\eta\|_1 = \rho\|\tilde\eta\|_1.$$
Since the $\eta_i$ are nonnegative,
$$-\rho(\eta_1+\cdots+\eta_m) \le \sigma_i \le \rho(\eta_1+\cdots+\eta_m).$$
We can use these bounds to estimate each $S^{\sigma_i}x_i$ and find precisely which bigger box we are looking for. We use the inequalities $C\le S^{\eta_i}\le1$ to bound
$$C^{d\rho} \le (S^{\eta_1}\cdots S^{\eta_m})^\rho \le 1 \le (S^{\eta_1}\cdots S^{\eta_m})^{-\rho} \le C^{-d\rho}.$$
Therefore
$$C^{d\rho} \le S^{\sigma_i}x_i \le 4C^{-d\rho}.$$
Hence, letting $b = C^{d\rho}\in(0,1)$, we see that $y\in[b,4b^{-1}]^d$ defined by $y = S^\sigma x$ satisfies the system of equations (7) because
$$y^{v^i} = (S^\sigma x)^{v^i} = S^{\langle v^i,\sigma\rangle}x^{v^i} = S^{\eta_i}x^{v^i}.$$
Since the sum on the left side of (6) is over all lattice points $v\in\mathrm{supp}(\phi)\cap F$, we cannot only consider the linearly independent ones as the proposition suggests. Therefore, another proposition is required to make sure the scaling works over all points in the face we are considering.

Proposition 2. Let $x\in[1,4]^d$, $\eta_1,\dots,\eta_m\in\mathbb{R}$, and $S>0$. Assume $\eta$ is the linear combination $\eta = \sum_{i=1}^m\lambda_i\eta_i$. If $v^i\in\mathbb{R}^d$ satisfy $y^{v^i} = S^{\eta_i}x^{v^i}$ for all $1\le i\le m$, then $y^v = S^\eta x^v$ where $v = \sum_{i=1}^m\lambda_iv^i$.
Proof. This is simply because
$$y^v = \prod_{i=1}^m y^{\lambda_iv^i} = \prod_{i=1}^m\big(S^{\eta_i}x^{v^i}\big)^{\lambda_i} = \prod_{i=1}^m S^{\lambda_i\eta_i}x^{\lambda_iv^i} = S^\eta x^v.$$

4.3 More notation and the main proposition
Motivated by proposition 1, we define constants required to talk about scaling over faces $F\subset N(\phi)$ in order to apply Varchenko’s condition.
For any codimension 1 face $F$ of $N(\phi)$ and linearly independent $v^1,\dots,v^m\in\mathrm{supp}(\phi)\cap F$, define $V$ to be the $m\times d$ matrix with rows $v^i$, and for each $V$ pick $\tilde V$, a full rank $m\times m$ matrix defined by taking $m$ independent columns of $V$. Define the constant
$$\rho = \max_V\|\tilde V^{-1}\|_\infty\in(0,\infty),$$
where the maximum is taken over all finitely many $V$ ($\mathrm{supp}(\phi)$ is finite).
Since $\phi$ has absolutely and uniformly convergent Taylor series in $[0,4]^d$, there is a nonzero $a\in\mathbb{R}$ such that
$$a = 2\max_{1\le i\le d}\sum_{\alpha\in\mathbb{N}^d}\alpha_i|c_\alpha|4^\alpha.$$
We define the constant $C_1$:
$$C_1 = \min_{F\subset N(\phi)}\ \inf_{x\in[1,4]^d}\|x\nabla\phi_F(x)\|_{\ell^\infty(\mathbb{R}^d)}.$$
By Varchenko’s condition, over each compact face $F$, the infimum over $[1,4]^d$ defines some positive constant. Since there are finitely many compact faces, $C_1$ must exist and is nonzero. We make the observation that $C_1 < a$ because
$$C_1 \le \max_{1\le i\le d}\sum_{\alpha\in F}\alpha_i|c_\alpha|4^\alpha \le a/2.$$
Now for $1\le i\le d-1$, recursively define the constants
$$b_{i+1} = \frac{C_i^{d\rho}}{a},\qquad C'_{i+1} = \min_{F\subset N(\phi)}\ \inf_{x\in[b_{i+1},4b_{i+1}^{-1}]^d}\|x\nabla\phi_F(x)\|_{\ell^\infty(\mathbb{R}^d)}, \tag{8}$$
and finally,
$$C_{i+1} = \min\{C'_{i+1},\ C_i/a\}. \tag{9}$$
Using the convention $b_1 = 1$, it is easy to see that $C_1 > C_2 > \cdots > C_d > 0$ and therefore $b_1 > b_2 > \cdots > b_d > 0$.
We define one last constant used in the proof of the main proposition. Let
$$\delta = \inf_{\alpha^1,\alpha^2,w}\ \Big\langle\frac{\alpha^1+\alpha^2}{2},\,w\Big\rangle - 1,$$
where the infimum is taken over all $\alpha^1,\alpha^2\in\mathrm{supp}_+(\phi)$ not contained in the same codimension 1 face, and all normals $w$ of $N_+(\phi)$ (corresponding to hyperplanes $H_w$). In the case where there exist such $\alpha^1,\alpha^2$, we claim that $\delta > 0$. First notice all $\alpha\in N_+(\phi)$ and all such $w$ satisfy $\langle\alpha,w\rangle\ge1$. Since $N_+(\phi)$ is convex, $\alpha^1$ and $\alpha^2$ must lie on some nontrivial line segment contained in $N_+(\phi)$. If $\delta = 0$ then $\langle\alpha^1,w\rangle = \langle\alpha^2,w\rangle = 1$ for some $w$ (we can reduce the infimum to be taken over boundedly many $\alpha$), so any convex combination of $\alpha^1$ and $\alpha^2$ satisfies $\langle\lambda_1\alpha^1+\lambda_2\alpha^2,w\rangle = 1$. This implies $\alpha^1,\alpha^2$ lie on the same codimension 1 face. The fact that $\delta > 0$ is intuitive because the average of $\alpha^1,\alpha^2$ not lying on the same codimension 1 face lies in the interior of $N_+(\phi)$, and we expect all points $\xi$ in the interior to satisfy $\langle\xi,w\rangle > 1$. If all elements of $\mathrm{supp}_+(\phi)$ are contained in the same codimension 1 face, we use the convention $\delta = 1$.
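As a quick sanity check (our model example again): for $\phi(x) = x_1^3+x_1x_2+x_2^3$ the only pair of points of $\mathrm{supp}_+(\phi)$ not sharing a codimension 1 face is $\alpha^1 = (3,0)$, $\alpha^2 = (0,3)$; their average is $(3/2,3/2)$, and both normals $(1/3,2/3)$ and $(2/3,1/3)$ give $\langle(3/2,3/2),w\rangle - 1 = 1/2$, so $\delta = 1/2$.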
Now we are ready to set up the main proposition required to estimate $y\nabla\phi(y)$.

Proposition 3. Let $\phi$ have absolutely and uniformly convergent power series in $[0,4]^d$ satisfying Varchenko’s condition. Let $x\in[1,4]^d$. Fix a face $F_0\subset N(\phi)$ and a corresponding normal $w$. Define the constants $b_i$, $C_i$ as in (8) and (9) for $1\le i\le d$. Let $S\in(0,(C_d/a)^{1/(2\delta)})$. There is a vector $\sigma\in\mathbb{R}^d$, a compact face $F'\supseteq F_0$, and $0\le j'\le d-1$ such that
(i) for all $v\in F'$ we have the scaling $S^{\langle v,w\rangle-1}x^v = (S^\sigma x)^v$, where $S^\sigma x\in[b_{j'+1},4b_{j'+1}^{-1}]^d$, and
(ii) for all $u\in\mathrm{supp}(\phi)-F'$ we have the upper bound $S^{\langle u,w\rangle-1}\le C_{j'+1}/a$.
Proof. First, if every $u\notin F_0$ satisfies $S^{\langle u,w\rangle-1}\le C_1/a$, we are done with (ii) by letting $\sigma = 0$ and $j' = 0$; (i) is also easy to see since $\langle v,w\rangle = 1$ for all $v\in F_0$.
Otherwise, for $1\le j\le d-1$ define $\Lambda_j = \{u\in\mathrm{supp}(\phi) : S^{\langle u,w\rangle-1} > C_j/a\}$. Let us first show that each $\Lambda_j$ is contained in $N(\phi)$. If some $u\in\Lambda_j$ is not in $N(\phi)$, then there is some normal $w$ to $N_+(\phi)$ such that $u\notin H_w\supset F\cap\mathrm{supp}(\phi)$. Therefore, for any $v\in F$,
$$\langle u,w\rangle - 1 = \langle u+v,w\rangle - 2 > 2\delta.$$
Since $S^{2\delta} < C_d/a$, the vector $u$ cannot lie in any $\Lambda_j$. This implies $\Lambda_j$ is contained in $N(\phi)$. By a similar computation, one sees that $\Lambda_j$ must actually lie in some codimension 1 face. Because $\Lambda_j$ is contained in a codimension 1 face of $N(\phi)$, the affine hull of $\Lambda_j$ must contain some face $F_j\supset F$ of $N(\phi)$ of maximal dimension, which implies there is an affine basis $\{v^1,\dots,v^{\dim(F_j)+1}\}\subset F_j\cap\Lambda_j$ for the affine hull of $\Lambda_j$. By proposition 1, we know there is some $d$-tuple $\sigma^j$ such that for all $1\le i\le\dim(F_j)+1$ we have the equalities
$$S^{\langle v^i,w\rangle-1}x^{v^i} = (S^{\sigma^j}x)^{v^i},$$
where $S^{\sigma^j}x\in[b_j,4b_j^{-1}]^d$. By the definition of $b_j$, we can apply proposition 1, since $S^{\langle v^i,w\rangle-1} > C_j/a$. Proposition 2 tells us that for all $v\in F_j$ we have
$$S^{\langle v,w\rangle-1}x^v = (S^{\sigma^j}x)^v.$$
Since this holds for all $0\le j\le d-1$, we are left with claim (ii). Notice that $\dim(F_0),\dim(F_1),\dots$ is an increasing list of natural numbers strictly bounded above by $d$, and in particular $\dim(F_j)\ge j$. That means there is some $0\le j\le d-1$ such that $F_j = F_{j+1}$, as there can be no $d$-dimensional face, so let $j' = \min\{1\le j\le d-1 : F_j = F_{j+1}\}$. Letting $F' = F_{j'}$ and $\sigma = \sigma^{j'}$ completes the proof, as (i) was shown for all $j$.
With proposition 3 we can finish the proof of lemma 1. It turns out the scaling we want to consider over the face $F = H_w\cap N(\phi)$ is $S^w$. The way we think about this is that $y = S^wx$ for some $S > 0$ and some $w$ normal to $N(\phi)$ where $H_w\cap N(\phi) = F$. We will prove shortly that every $y\in(0,\,4(C_d/a)^{d/(2\delta)})^d$ can be written this way. We now return to the sums (6). For all $1\le i\le d$ we can write the $i$-th component of $y\nabla\phi(y)$ for $y = S^wx$ as
$$\sum_{\alpha\in F_{j'}}\alpha_ic_\alpha(S^wx)^\alpha + \sum_{\alpha\notin F_{j'}}\alpha_ic_\alpha(S^wx)^\alpha = \sum_{\alpha\in F_{j'}}\alpha_ic_\alpha S^{\langle\alpha,w\rangle}x^\alpha + \sum_{\alpha\notin F_{j'}}\alpha_ic_\alpha S^{\langle\alpha,w\rangle}x^\alpha$$
$$\overset{\text{prop. 1+2}}{=}\ S\Big(\sum_{\alpha\in F_{j'}}\alpha_ic_\alpha(S^{\sigma^{j'}}x)^\alpha + \sum_{\alpha\notin F_{j'}}\alpha_ic_\alpha S^{\langle\alpha,w\rangle-1}x^\alpha\Big). \tag{10}$$
Applying the triangle inequality, the bounds on $S^{\langle\alpha,w\rangle-1}$ guaranteed by proposition 3, and Varchenko’s condition on the box $[b_{j'},4b_{j'}^{-1}]^d$, for some $1\le i\le d$ we can bound (10) from below by
$$S\big(C'_{j'} - (C_{j'}/a)\cdot(a/2)\big) \ge S(C_{j'}/2) \gtrsim S.$$
We now show that for all $\varepsilon\in(0,(C_d/a)^{d/(2\delta)})^d$ there is some $S\in(0,(C_d/a)^{1/(2\delta)})$ and some supporting hyperplane $H_w$ of $N(\phi)$ such that $S^w = \varepsilon$, and therefore $S = S^{\langle\alpha,w\rangle} = (S^w)^\alpha = \varepsilon^\alpha$ for all $\alpha\in H_w$, completing the proof of lemma 1.
First note that the $d$-tuple $(1/d,\dots,1/d)$ lies on or below $N_+(\phi)$. Therefore, for all $\alpha\in N(\phi)$ there is some $1\le i\le d$ such that $\alpha_i\ge1/d$. If $H_w$ is a supporting hyperplane of $N(\phi)$ containing $\alpha$, then
$$1 = \langle\alpha,w\rangle \ge \alpha_iw_i \ge w_i/d,$$
since every component of $\alpha$ and $w$ is nonnegative. Hence, for every supporting hyperplane $H_w$ there is some $1\le i\le d$ such that $w_i\le d$.
Next, let $\varepsilon\in(0,(C_d/a)^{d/(2\delta)})^d$. Without loss of generality, assuming that $\varepsilon_1$ is the largest component, we can solve the equations $\varepsilon_1^{q_i} = \varepsilon_i$ where $q_i\ge1$. For all $q\in\mathbb{R}^d$ with positive components there is some supporting hyperplane $H_w$ of $N(\phi)$ and a positive constant $c$ such that $q = cw$: we can just take a hyperplane with normal $q$ not intersecting the first orthant, and translate it so that it intersects only $\partial N(\phi)$. Since $q_1 = 1\le q_i$, we see that $w_1\le w_i$ and therefore $w_1\le d$. Hence, $1 = q_1 = cw_1\le cd$, so that $c\ge1/d$. Now we can solve for $S$ in the required interval:
$$\varepsilon_i = \varepsilon_1^{q_i} = \varepsilon_1^{cw_i} = (\varepsilon_1^c)^{w_i},$$
so that $S = \varepsilon_1^c \le (C_d/a)^{dc/(2\delta)} \le (C_d/a)^{1/(2\delta)}$.
This finishes the proof of lemma 1, summarizing again that $\varepsilon$ small enough means
$$\varepsilon\in\big(0,\,(C_d/a)^{d/(2\delta)}\big)^d. \tag{11}$$

5 Proof of lemma 2

5.1 Estimating an integration by parts operator
For $\psi$ supported in $[1,4]^d$, we want to integrate
$$I(\lambda,\varepsilon) = \int_{[1,4]^d}e^{i\lambda\phi(\varepsilon x)}\psi(x)\,dx$$
by parts. Let $f(x) = \frac{\nabla\phi(x)}{\|\nabla\phi(x)\|^2}$. We define the operator $D = D_{\varepsilon,\phi}$ on smooth functions $g:\mathbb{R}^d\to\mathbb{R}$ by
$$D(g)(x) = \frac{\nabla g(x)\cdot f(\varepsilon x)}{i\lambda}. \tag{12}$$
We can check that $e^{i\lambda\phi(\varepsilon x)}$ is an eigenfunction of $D$, one of the main reasons $D$ is considered. We estimate $(D^t)^N(g)(x)$, where the adjoint $D^t$ of $D$ is given by the divergence
$$D^t(g)(x) = -\nabla\cdot\frac{g(x)f(\varepsilon x)}{i\lambda}. \tag{13}$$
To proceed in estimating $(D^t)^N(g)$, consider the components $f_k$ of $f$. If we could show $\partial^\beta f_k(x)$ is a linear combination of terms of the form
$$\frac{\partial^{\gamma^1}\phi(x)\cdots\partial^{\gamma^{2^n-1}}\phi(x)}{\|\nabla\phi(x)\|^{2^n}} = \frac{\partial^{\gamma^1}\phi(x)}{\|\nabla\phi(x)\|}\cdots\frac{\partial^{\gamma^{2^n-1}}\phi(x)}{\|\nabla\phi(x)\|}\cdot\|\nabla\phi(x)\|^{-1}, \tag{14}$$
where $\gamma^i\in\mathbb{N}^d$, we could deduce an upper bound on the derivatives of $f_k$ at $\varepsilon x$, namely
$$|\partial^\beta f_k(\varepsilon x)| \lesssim \varepsilon^{-\alpha} \tag{15}$$
for any $\alpha\in N_+(\phi)$, for $\varepsilon$ small enough: (14) implies $\partial^\beta f_k(\varepsilon x)$ is a linear combination of products of
$$\varepsilon^{\gamma^i}\partial^{\gamma^i}\phi(\varepsilon x)\,\|\varepsilon\nabla\phi(\varepsilon x)\|^{-1} \tag{16}$$
for $1\le i\le 2^n-1$, times $\|\varepsilon\nabla\phi(\varepsilon x)\|^{-1}$. We claim the first $2^n-1$ terms can be bounded above by a uniform constant, while the last term, $\|\varepsilon\nabla\phi(\varepsilon x)\|^{-1}$, we know is bounded above by $\varepsilon^{-\alpha}$ for any $\alpha\in N_+(\phi)$ by lemma 1. The claim is easy to verify, since each function $x^{\gamma^i}\partial^{\gamma^i}\phi(x)$ has uniformly and absolutely convergent power series wherever $\phi(x)$ does. Therefore, for any $\varepsilon$ small enough, $\varepsilon^{\gamma^i}\partial^{\gamma^i}\phi(\varepsilon x) = x^{-\gamma^i}(\varepsilon x)^{\gamma^i}\partial^{\gamma^i}\phi(\varepsilon x)$ is bounded above by a uniform constant times $\varepsilon^\eta$ for some $\eta\in\mathrm{supp}_+(\phi)$ (expand the power series and use the triangle inequality). Lemma 1 then guarantees the first $2^n-1$ terms of (14) evaluated at $\varepsilon x$, namely the terms (16), are indeed bounded above by a constant independent of $\varepsilon$, since $\|\varepsilon\nabla\phi(\varepsilon x)\|^{-1} \lesssim \|\varepsilon x\nabla\phi(\varepsilon x)\|^{-1} \lesssim \varepsilon^{-\alpha}$ for all $\alpha\in N_+(\phi)$, in particular $\alpha = \eta$.
We proceed by examining some derivatives necessary to prove (14). Consider $\|\nabla\phi(x)\|^{2^n}$, a sum of products of $2\cdot2^{n-1} = 2^n$ functions, each equal to some derivative of $\phi$. Its partial derivative in the $j$ direction is
$$\partial^{e_j}\|\nabla\phi(x)\|^{2^n} = 2^n\|\nabla\phi(x)\|^{2^n-2}\sum_{i=1}^d\phi'_{x_i}(x)\,\phi''_{x_ix_j}(x),$$
which is a sum of products of $(2^n-2)+2 = 2^n$ functions, each equal to some partial derivative of $\phi$: more precisely, $2(2^{n-1}-1)$ from the norm and 2 more from the chain rule. Writing $\gamma = \gamma^1+\cdots+\gamma^{2^n-1}$, the function
$$\partial^{e_j}\sum_\gamma a_\gamma\,\partial^{\gamma^1}\phi(x)\cdots\partial^{\gamma^{2^n-1}}\phi(x) = \sum_\gamma\sum_{m=1}^{2^n-1}a_\gamma\,\partial^{\gamma^1}\phi(x)\cdots\partial^{\gamma^m+e_j}\phi(x)\cdots\partial^{\gamma^{2^n-1}}\phi(x)$$
is again a sum of products of $2^n-1$ functions, each equal to some partial derivative of $\phi$. Therefore the numerator of
$$\partial^{e_j}\frac{\sum_\gamma a_\gamma\,\partial^{\gamma^1}\phi(x)\cdots\partial^{\gamma^{2^n-1}}\phi(x)}{\|\nabla\phi(x)\|^{2^n}}$$
is equal to
$$\|\nabla\phi(x)\|^{2^n}\sum_\gamma\sum_{m=1}^{2^n-1}a_\gamma\,\partial^{\gamma^1}\phi(x)\cdots\partial^{\gamma^m+e_j}\phi(x)\cdots\partial^{\gamma^{2^n-1}}\phi(x) - \sum_\gamma a_\gamma\,\partial^{\gamma^1}\phi(x)\cdots\partial^{\gamma^{2^n-1}}\phi(x)\cdot2^n\|\nabla\phi(x)\|^{2^n-2}\sum_{i=1}^d\phi'_{x_i}(x)\,\phi''_{x_ix_j}(x).$$
After reorganizing, we see that we get a sum of products of $2^n+2^n-1 = 2^{n+1}-1$ functions, each equal to some partial derivative of $\phi$. We are left with the denominator of the partial derivative in the $j$ direction: $\|\nabla\phi(x)\|^{2^{n+1}}$. So by induction, the proof of (15) is complete: we let $|\beta| = n$ above and wrote $\beta = \beta'+e_j$ for arbitrary $j$ (the base case $\beta = 0$ holds trivially).
We can now compute by induction without much work that for $\beta^j\in\mathbb{N}^d$ there are $a_\beta = a_{\beta^0,\dots,\beta^N}\in\{0,1\}$ such that
$$(D^t)^N(g)(x) = (i\lambda)^{-N}\sum_{\substack{1\le j_1,\dots,j_N\le d\\|\beta^0+\beta^1+\cdots+\beta^N| = N}}a_\beta\,\partial^{\beta^0}g(x)\,(\partial^{\beta^1}f_{j_1})(\varepsilon x)\cdots(\partial^{\beta^N}f_{j_N})(\varepsilon x).$$
By (15),
$$|(D^t)^N(g)(x)| \le \lambda^{-N}\sum_{\substack{1\le j_1,\dots,j_N\le d\\|\beta^0+\beta^1+\cdots+\beta^N| = N}}a_\beta\,|\partial^{\beta^0}g(x)|\cdot|(\partial^{\beta^1}f_{j_1})(\varepsilon x)|\cdots|(\partial^{\beta^N}f_{j_N})(\varepsilon x)| \lesssim \lambda^{-N}\sum_{j_1,\dots,j_N=1}^{d}|\partial^{\beta^0}g(x)|\,\varepsilon^{-\alpha^1}\cdots\varepsilon^{-\alpha^N}$$
for all $\alpha^1,\dots,\alpha^N\in N_+(\phi)$. In particular, for all $\alpha\in N_+(\phi)$,
$$|(D^t)^N(g)(x)| \lesssim \lambda^{-N}\varepsilon^{-N\alpha}, \tag{17}$$
where the implicit constant is independent of $\varepsilon$ and $\lambda$.
5.2 Final estimate for lemma 2
We now put everything together for $\varepsilon$ small enough ($\varepsilon_i\le(C_d/a)^{d/(2\delta)}$):
$$I(\lambda,\varepsilon) = \int_{[\varepsilon,4\varepsilon]}e^{i\lambda\phi(x)}\psi_\varepsilon(x)\,dx = \varepsilon^{1}\int_{[1,4]^d}e^{i\lambda\phi(\varepsilon x)}\psi(x)\,dx = \varepsilon^{1}\int_{[1,4]^d}D^N(e^{i\lambda\phi})(\varepsilon x)\psi(x)\,dx = \varepsilon^{1}\int_{[1,4]^d}e^{i\lambda\phi(\varepsilon x)}(D^t)^N(\psi)(x)\,dx.$$
By (17),
$$\int_{[1,4]^d}|(D^t)^N(\psi)(x)|\,dx \lesssim \int_{[1,4]^d}\lambda^{-N}\varepsilon^{-N\alpha}\,dx \lesssim \lambda^{-N}\varepsilon^{-N\alpha}.$$
Therefore we have proved lemma 2:
$$|I(\lambda,\varepsilon)| \lesssim \lambda^{-N}\varepsilon^{-(N\alpha-1)}.$$
6 Proof of Varchenko’s upper bounds
We now use lemma 2 and linear programming to prove Varchenko’s upper bounds. We can sum over the indices $j_i\ge0$ to get a bound on the integral
$$\int_{[0,1]^d}e^{i\lambda\phi(x)}\psi(x)\,dx,$$
where $\psi$ is supported in a sufficiently small neighborhood of the origin. This is achieved by decomposing $\psi(x) = \sum_{j_1,\dots,j_d=0}^{\infty}\psi(x)f_j(x)$, where $\{f_j\}$ is a partition of unity subordinate to the cover $\{(2^{-j},2^{-j+2})\}_{j\in\mathbb{N}^d}$ of $(0,4)^d$. One should choose a family $\{f_j\}$ for which there exists a uniform constant $C>0$ such that
$$\Big|\frac{\partial^\alpha f_j(x)}{\partial x^\alpha}\Big| \le Cx^{-\alpha},$$
so that when one scales the support of $(\psi f_j)(x)$ to $[1,4]^d$ by $2^{-j}$, all derivatives of $(\psi f_j)(2^{-j}x)$ are bounded above by a uniform constant independent of $j$. Since it is sufficient to find some $f_0(x)$ such that the functions $f_j(x) = f_0(2^jx)$ define our partition of unity, it is not difficult to prove that we can indeed choose a family $\{f_j\}_{j\in\mathbb{N}^d}$ as required.
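A minimal sketch of one such construction (a standard Littlewood–Paley-type choice we spell out; the text only asserts existence): pick a nonnegative $\theta\in C_c^\infty$ supported in $(1,4)$ with $\theta>0$ on $[3/2,3]$, and set
$$\chi(s) = \frac{\theta(s)}{\sum_{m\in\mathbb{Z}}\theta(2^ms)},\qquad f_j(x) = \prod_{i=1}^d\chi(2^{j_i}x_i).$$
Since every $s>0$ satisfies $2^ms\in[3/2,3]$ for some $m$, the denominator is positive, and $\sum_{m\in\mathbb{Z}}\chi(2^ms) = 1$ for all $s>0$, with $\chi(2^ms) = 0$ for $m\le0$ whenever $s\le1$. Hence $\mathrm{supp}\,f_j\subset\prod_{i=1}^d(2^{-j_i},2^{-j_i+2})$ and $\sum_{j\in\mathbb{N}^d}f_j = 1$ on $(0,1]^d$. Moreover, on the support of $f_j$ we have $2^{j_i}\le4x_i^{-1}$, so
$$|\partial^\alpha f_j(x)| \lesssim 2^{j\cdot\alpha} \lesssim x^{-\alpha}$$
with constants depending only on $\alpha$, which is exactly the bound required above.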
Now we apply lemma 2. For the rest of the section, we use $i$ as an index. Fix $\lambda>2$. Let $t$ be the Newton distance of $\phi$ and assume $k$ is the smallest integer such that $t$ lies in a face $F\subset N_+(\phi)$ of dimension $k-1$ ($d-k+1$ the greatest codimension). By lemma 2, it is enough to show
$$\sum_{j_1,\dots,j_d=0}^{\infty}\min_{N\ge0,\ \alpha\in N_+(\phi)}\{\lambda^{-N}2^{j\cdot(N\alpha-1)}\} \lesssim \lambda^{-1/t}\log^{d-k}(\lambda).$$
First, setting $N = 0$, we see
$$\sum_{j_1=\log(\lambda)/t}^{\infty}\ \sum_{j_2,\dots,j_d=0}^{\infty}2^{-|j|} \lesssim \lambda^{-1/t}.$$
Hence, it is enough to bound
$$\sum_{j_1,\dots,j_d=0}^{\log(\lambda)/t}\min_{N\ge0,\ \alpha\in N_+(\phi)}\{\lambda^{-N}2^{j\cdot(N\alpha-1)}\} \tag{18}$$
above by a uniform positive constant times $\lambda^{-1/t}\log^{d-k}(\lambda)$. Since $t$ lies in a face of dimension $k-1$ that cannot lie in a coordinate hyperplane ($t>0$), there are linearly independent $\alpha^1,\dots,\alpha^k\in F$ whose convex hull contains $t$, so we write
$$t = \sum_{i=1}^k\lambda_i\alpha^i.$$
For the rest of the proof we fix $N > 1/t$. For $1\le i\le k$ let $\theta_i = \frac{\lambda_i}{Nt}$ and $\theta_0 = 1-\frac{1}{Nt}$. Then all $\theta_i$ are positive and sum to 1. Moreover, we can check
$$\theta_0(-1) + \sum_{i=1}^k\theta_i(N\alpha^i-1) = 0.$$
For $x\in\mathbb{R}^d$, denote $x_+ = (x_1,\dots,x_k,0,\dots,0)$ and $x_- = x-x_+$. Without loss of generality assume that $\{\alpha^i_+\}_{i=1}^k$ is a linearly independent set, as some $k$ columns of the $k\times d$ matrix with rows $\alpha^i$ must be linearly independent, so we simply assume it is the first $k$ columns. We estimate (18) by fixing $j_-$, i.e., we consider over $j_1,\dots,j_k$ the sum
$$\sum_{j_1,\dots,j_k=0}^{\log(\lambda)/t}\min_{1\le i\le k}\{J_02^{-|j_+|},\ J_i2^{j_+\cdot(N\alpha^i-1)}\}, \tag{19}$$
where $J_0 = 2^{-|j_-|}$ and the coefficients $J_i$ equal $\lambda^{-N}2^{j_-\cdot(N\alpha^i-1)}$ for $1\le i\le k$; (19) clearly bounds (18) above, since we fixed $N$. Letting $A$ be the matrix $\{\alpha^i_\ell\}_{1\le i,\ell\le k}$, we can solve $Az = 1\in\mathbb{R}^k$. Write the solution as $z = (z_1,\dots,z_k)$. Since the convex hull of $\{\alpha^1,\dots,\alpha^k\}$ contains $t\in\mathbb{R}^k$, we conclude $\langle t,z\rangle = 1$, hence $\langle1,z\rangle = 1/t$. Denoting the $d$-tuple $\log(\lambda)(z_1,\dots,z_k,0,\dots,0)$ by $j'$, we compute
$$J_02^{-|j'|} = \lambda^{-1/t} = J_i2^{j'\cdot(N\alpha^i-1)}.$$
Hence, by reindexing and factoring out $\lambda^{-1/t}$, we see
$$\sum_{j_1,\dots,j_k=0}^{\log(\lambda)/t}\min_{1\le i\le k}\{J_02^{-|j_+|},\ J_i2^{j_+\cdot(N\alpha^i-1)}\} \lesssim \lambda^{-1/t}\sum_{j_1=-\log(\lambda)z_1}^{\log(\lambda)/t-\log(\lambda)z_1}\cdots\sum_{j_k=-\log(\lambda)z_k}^{\log(\lambda)/t-\log(\lambda)z_k}\min_{1\le i\le k}\{2^{-|j_+|},\ 2^{j_+\cdot(N\alpha^i-1)}\}. \tag{20}$$
Now notice that the vectors $(\alpha^i_1,\dots,\alpha^i_k)$ and $-1\in\mathbb{R}^k$ do not all lie in the same hyperplane: $\langle-1,z\rangle = -|z| = -1/t\neq1$. This finishes the claim, since $\{\xi\in\mathbb{R}^k : \langle\xi,z\rangle = 1\}$ is the unique hyperplane in $\mathbb{R}^k$ containing the $k$ linearly independent vectors $(\alpha^i_1,\dots,\alpha^i_k)$. Therefore,
$$\sup_{\|x\|_\infty=1}\ \min_{1\le i\le k}\{-1_+\cdot x,\ (N\alpha^i-1)_+\cdot x\} < 0.$$
By homogeneity, there is some $c>0$ such that for all $x$,
$$\min_{1\le i\le k}\{-1_+\cdot x,\ (N\alpha^i-1)_+\cdot x\} \le -c\|x\|_\infty.$$
Apply this fact to bound the sum in (20) by
$$\sum_{j_1,\dots,j_k\in\mathbb{Z}}2^{\min_{1\le i\le k}\{-|j_+|,\ j_+\cdot(N\alpha^i-1)\}} \lesssim \sum_{n=0}^{\infty}\sum_{\|j_+\|_\infty=n}2^{-cn} \lesssim \sum_{n=0}^{\infty}n^{k-1}2^{-cn} \lesssim 1.$$
Now taking the sum over the remaining indices $0\le j_{k+1},\dots,j_d\le\log(\lambda)/t$, we see that (18) is bounded above by a uniform constant times
$$\sum_{j_{k+1},\dots,j_d=0}^{\log(\lambda)/t}\lambda^{-1/t} \lesssim \lambda^{-1/t}\log^{d-k}(\lambda),$$
which is exactly the estimate we were looking for, as $F$ is codimension $d-k$.
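To see the mechanics of this linear programming in the simplest possible setting (a one-dimensional illustration we add, with the hypothetical phase $\phi(x) = x^m$, $m\ge2$): here $N_+(\phi) = [m,\infty)$, $t = m$, $k = 1$, and the claimed bound is $\lambda^{-1/m}$ with no logarithm. Choosing $N = 1 > 1/t$ and $\alpha = m$ for $j\le\log(\lambda)/m$, and $N = 0$ for larger $j$,
$$\sum_{j=0}^{\log(\lambda)/m}\lambda^{-1}2^{j(m-1)} \lesssim \lambda^{-1}2^{\frac{\log(\lambda)}{m}(m-1)} = \lambda^{-1/m},\qquad \sum_{j>\log(\lambda)/m}2^{-j} \lesssim \lambda^{-1/m},$$
so the analogue of (18) is indeed $\lesssim\lambda^{-1/m} = \lambda^{-1/t}$.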
7 Theorem 1: Asymptotic expansion of I(λ)

7.1 Corollary to lemma 2
Under the same assumptions as lemma 2, we can show
$$\Big|\int_{[\varepsilon,4\varepsilon]}e^{i\lambda\phi(x)}x^\beta\psi_\varepsilon(x)\,dx\Big| \lesssim \lambda^{-N}\varepsilon^{-(N\alpha-\beta-1)} \tag{21}$$
for all $\lambda>0$, all $N\in\mathbb{N}$, all $\alpha\in N_+(\phi)$, and all $\beta\in\mathbb{N}^d$, where the implicit constant above is independent of $\varepsilon$ and $\lambda$. The proof is a simple change of variables and an application of lemma 2. It is not necessary for $\beta$ to be a $d$-tuple of nonnegative integers, but this is the setting we are interested in for theorem 1. More importantly, one can show the following corollary by using the inequality (21) together with the same proof of Varchenko’s upper bounds in the previous section.
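To indicate the change of variables behind (21) (a short verification we include; notation as in section 2.1): writing $\tilde\psi(x) = x^\beta\psi(x)$, which is smooth and supported in $[1,4]^d$, the scaling convention $f_y(x) = f(y_1^{-1}x_1,\dots,y_d^{-1}x_d)$ gives $\tilde\psi_\varepsilon(x) = \varepsilon^{-\beta}x^\beta\psi_\varepsilon(x)$, so by lemma 2 applied to $\tilde\psi$,
$$\Big|\int_{[\varepsilon,4\varepsilon]}e^{i\lambda\phi(x)}x^\beta\psi_\varepsilon(x)\,dx\Big| = \varepsilon^\beta\Big|\int_{[\varepsilon,4\varepsilon]}e^{i\lambda\phi(x)}\tilde\psi_\varepsilon(x)\,dx\Big| \lesssim \varepsilon^\beta\lambda^{-N}\varepsilon^{-(N\alpha-1)} = \lambda^{-N}\varepsilon^{-(N\alpha-\beta-1)},$$
which is (21). The corollary reads as follows.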
Corollary 1. Under the same assumptions as lemma 2,
$$\Big|\int_{[0,1]^d}e^{i\lambda\phi(x)}x^\beta\psi(x)\,dx\Big| \lesssim \lambda^{-\langle\beta+1,w(\beta+1)\rangle}\log^{k-1}(\lambda),$$
where $1\le k\le d$ is the maximum codimension over all faces $F\subset N_+(\phi)$ intersecting the line $\{s(\beta+1) : s\in\mathbb{R}\}$.
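As a concrete instance (our running example): for $\phi(x) = x_1^2+x_2^3$ and $\beta = (1,0)$ we have $w(\beta+1) = \{(1/2,1/3)\}$ and $\langle\beta+1,w(\beta+1)\rangle = 4/3$, while the line $s(\beta+1)$ meets only the edge of the diagram, so $k = 1$ and the corollary predicts a bound of order $\lambda^{-4/3}$. This matches the direct computation: $\int_0^1e^{i\lambda x^2}x\,dx = O(\lambda^{-1})$ by the substitution $u = x^2$, and $\int_0^1e^{i\lambda y^3}\,dy = O(\lambda^{-1/3})$ by van der Corput’s lemma.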
We make heavy use of the corollary in the proof of theorem 1 below.
7.2 Derivatives of I(λ)
Denoting the integral $\int_{\mathbb{R}^d}e^{i\lambda\phi(x)}\psi(x)\,dx$ by $I(\lambda)$, where $\psi$ is supported close enough to the origin, we want to prove that $I(\lambda)$ has an asymptotic expansion for large $\lambda$ of the form
$$I(\lambda) \sim \sum_{j=0}^{\infty}\sum_{k=0}^{d_j-1}a_{j,k}(\psi)\lambda^{-p_j}\log^{d_j-1-k}(\lambda),$$
where $p_0 < p_1 < \cdots$ is the ordering of $\{\langle\beta+1,w(\beta+1)\rangle\}_{\beta\in\mathbb{N}^d}$ and $d_j$ is the greatest codimension over all faces intersecting the lines $\{s(\beta+1) : s\in\mathbb{R},\ \beta\in\mathbb{N}^d,\ \langle\beta+1,w(\beta+1)\rangle = p_j\}$.
The first step in proving theorem 1 is to estimate for $\beta\in\mathbb{N}^d$ the quantity
$$\lambda\frac{d}{d\lambda}I_\beta(\lambda) = \lambda\frac{d}{d\lambda}\int_{\mathbb{R}^d}e^{i\lambda\phi(x)}x^\beta\psi(x)\,dx.$$
We write $\phi(x)$ as the series
$$\phi(x) = \sum_\alpha c_\alpha x^\alpha = \sum_\alpha\sum_{j=1}^d\alpha_jv_jc_\alpha x^\alpha,$$
where we are free to choose any $v = v^\alpha\in\mathbb{R}^d_{\ge}$ satisfying $\langle\alpha,v\rangle = 1$; we suppress the dependence on $\alpha$ for notational convenience. Next, let $w\in w(\beta+1)$ be arbitrary. Recall: geometrically, $F = H_w\cap N_+(\phi)$ is a codimension 1 face hit by the line $\{s(\beta+1) : s\in\mathbb{R}\}$, so in particular $F$ does not lie in a coordinate hyperplane. We can rewrite $\phi$ as
$$\sum_\alpha\sum_{j=1}^d\alpha_j(v_j-w_j)c_\alpha x^\alpha + \sum_\alpha\sum_{j=1}^d\alpha_jw_jc_\alpha x^\alpha. \tag{22}$$
It is easy to see that both sums in (22) converge uniformly in $[0,4]^d$ since the quantities $|v|$ and $|w|$ are bounded above by $d$, by their definitions. Letting $v = w$ for all $\alpha\in F$, (22) simplifies to
$$\sum_{\alpha\notin F}\sum_{j=1}^d\alpha_j(v_j-w_j)c_\alpha x^\alpha + \sum_\alpha\sum_{j=1}^d\alpha_jw_jc_\alpha x^\alpha.$$
Denote by $\Phi(x)$ the sum on the left. Applying the operator $\lambda\frac{d}{d\lambda}$, using the identity $\sum_\alpha\sum_{j=1}^d\alpha_jw_jc_\alpha x^\alpha = (wx)\cdot\nabla\phi(x)$, and integrating by parts,
$$\lambda\frac{d}{d\lambda}\int e^{i\lambda\phi(x)}x^\beta\psi(x)\,dx = \int e^{i\lambda\phi(x)}\,i\lambda\phi(x)\,x^\beta\psi(x)\,dx$$
$$= \int e^{i\lambda\phi(x)}\,i\lambda\Phi(x)\,x^\beta\psi(x)\,dx + \int e^{i\lambda\phi(x)}\,i\lambda\sum_\alpha\sum_{j=1}^d\alpha_jw_jc_\alpha x^\alpha\,x^\beta\psi(x)\,dx$$
$$= \int e^{i\lambda\phi(x)}\,i\lambda\Phi(x)\,x^\beta\psi(x)\,dx - \int e^{i\lambda\phi(x)}\,\nabla\cdot\big(x^\beta\psi(x)\,wx\big)\,dx$$
$$= \lambda\int e^{i\lambda\phi(x)}\,i\Phi(x)\,x^\beta\psi(x)\,dx - \langle\beta+1,w\rangle\int e^{i\lambda\phi(x)}x^\beta\psi(x)\,dx - \sum_{j=1}^d\langle e_j,w\rangle\int e^{i\lambda\phi(x)}x^{\beta+e_j}\psi'_{x_j}(x)\,dx.$$
Letting $D_\beta$ be the operator $(\lambda\frac{d}{d\lambda}+\langle\beta+1,w\rangle)$, we estimate
$$D_\beta I_\beta(\lambda) = \lambda\int e^{i\lambda\phi(x)}\,i\Phi(x)\,x^\beta\psi(x)\,dx - \sum_{j=1}^d\langle e_j,w\rangle\int e^{i\lambda\phi(x)}x^{\beta+e_j}\psi'_{x_j}(x)\,dx = \lambda I_1(\lambda) + I_2(\lambda).$$
We use corollary 1 to conclude
$$|I_2(\lambda)| \lesssim \sum_{j=1}^d\langle e_j,w\rangle\,\lambda^{-\langle\beta+e_j+1,\,w(\beta+e_j+1)\rangle}\log^{k_j}(\lambda). \tag{23}$$
If $\langle\beta+e_j+1,w(\beta+e_j+1)\rangle > \langle\beta+1,w\rangle$, we are done with the estimate. Otherwise,
$$\langle\beta+1,w\rangle = \langle\beta+e_j+1,w(\beta+e_j+1)\rangle = \langle\beta+1,w(\beta+e_j+1)\rangle + \langle e_j,w(\beta+e_j+1)\rangle,$$
a quantity strictly greater than $\langle\beta+1,w\rangle$ unless $\langle e_j,w(\beta+e_j+1)\rangle = 0$ and $\langle\beta+1,w(\beta+e_j+1)\rangle = \langle\beta+1,w(\beta+1)\rangle$. In this case, either $\langle e_j,w\rangle = 0$ or else $w(\beta+e_j+1)\subsetneq w(\beta+1)$. So if $\langle e_j,w\rangle\neq0$, we conclude that $\beta+e_j+1$ lies in an intersection of a strictly smaller collection of codimension one faces. Since $\beta+e_j+1\neq\beta+1$, we can from there conclude that the face containing $\beta+e_j+1$ has strictly smaller codimension than the face containing $\beta+1$. Therefore, the power $k_j$ of log in (23) is strictly smaller than in the estimate of $I_\beta$ guaranteed by corollary 1. If $\langle e_j,w\rangle = 0$, the $j$-th term in (23) vanishes. No matter what, we get a strictly better estimate consistent with corollary 1 (and theorem 1).
To estimate $I_1(\lambda)$, we use the fact that $\sum_{\alpha\notin F}\sum_{j=1}^d\alpha_j(v_j-w_j)c_\alpha x^\alpha$ converges uniformly, so we just have to estimate $\int e^{i\lambda\phi(x)}x^{\alpha+\beta}\psi(x)\,dx$ for $\alpha\notin F$. Applying corollary 1, the estimate is
$$|I_1(\lambda)| \lesssim \max_{\alpha\notin F}\lambda^{-\langle\alpha+\beta+1,\,w(\alpha+\beta+1)\rangle}\log^{k_\alpha}(\lambda) \tag{24}$$
for some $0\le k_\alpha\le d-1$. Note that
$$\langle\beta+1,w\rangle + \langle\alpha,w(\alpha+\beta+1)\rangle \le \langle\alpha+\beta+1,w(\alpha+\beta+1)\rangle.$$
Moreover, $\langle\alpha,w(\alpha+\beta+1)\rangle \ge 1$ since $\alpha$ must lie on or above $H_{w'}$ for $w'\in w(\alpha+\beta+1)$ by convexity of $N_+(\phi)$. Let us examine what happens if $1+\langle\beta+1,w\rangle = \langle\alpha+\beta+1,w(\alpha+\beta+1)\rangle$, namely if $\langle\alpha,w(\alpha+\beta+1)\rangle = 1$. It is impossible for $w(\alpha+\beta+1)$ to contain $w$, since we assumed $\alpha\notin F$. In particular, $\alpha$ must lie in a face of strictly smaller codimension than $\beta+1$. Therefore corollary 1 guarantees that $k_\alpha$ must be strictly smaller than the power of log in the estimate of $\int e^{i\lambda\phi(x)}\psi(x)x^\beta\,dx$. So the estimate in this case is strictly better because of the power of log. If $\alpha\notin H_{w'}$, the power of $\lambda$ must be strictly smaller. Therefore, using the bound
$$\Big|\int e^{i\lambda\phi(x)}x^\beta\psi(x)\,dx\Big| \lesssim \lambda^{-\langle\beta+1,w\rangle}\log^{k_\beta}(\lambda)$$
guaranteed by corollary 1, we have shown
$$|D_\beta I_\beta(\lambda)| \lesssim \max\Big\{\max_{\langle e_j,w\rangle\neq0}\lambda^{-\langle\beta+1,w(\beta+1)\rangle}\log^{k_j}(\lambda),\ \max_{\alpha\notin F}\lambda^{1-\langle\alpha+\beta+1,w(\alpha+\beta+1)\rangle}\log^{k_\alpha}(\lambda)\Big\},$$
where the first term bounds the sum over $1\le j\le d$ in (23), in which we concluded $k_j < k_\beta$ by corollary 1 for those $j$ satisfying $\langle e_j,w\rangle\neq0$, and the second term occurs otherwise for $k_\alpha$ guaranteed by corollary 1, where $1-\langle\alpha+\beta+1,w(\alpha+\beta+1)\rangle \le -\langle\beta+1,w(\beta+1)\rangle$. It is important in the second case to show, for consistency with theorem 1, that we can find $\gamma\in\mathbb{N}^d$ such that $1-\langle\alpha+\beta+1,w(\alpha+\beta+1)\rangle = -\langle\gamma,w(\alpha+\beta+1)\rangle$. This is because there is some $c>0$ such that $\delta = c(\alpha+\beta+1)\in\partial N_+(\phi)$. Since $\delta$ lies in the convex hull of points in $\mathbb{N}^d$, there must be some $\alpha'\in\mathbb{N}^d$ lying at most 1 in each component away from $\delta$, and therefore $\alpha+\beta+1-\alpha' = \gamma\in\mathbb{N}^d$.
7.3 Estimating derivatives of I(λ)
We now set up the problem of estimating derivatives of $I(\lambda)$, which we use to figure out its asymptotic expansion. Let $p_0 = 1/t$. Define for $j\ge1$,
$$p_j = \min\{\langle v+1,w\rangle > p_{j-1} : v\in\mathbb{N}^d,\ w\},$$
where $w$ runs over normals of $N_+(\phi)$. We want to show that for all $n\in\mathbb{N}$, there are $0\le d_0,d_1,\dots,d_n\le d-1$ such that
$$\Big|\Big(\lambda\frac{d}{d\lambda}+p_n\Big)^k\Big(\lambda\frac{d}{d\lambda}+p_{n-1}\Big)^{d_{n-1}+1}\cdots\Big(\lambda\frac{d}{d\lambda}+p_0\Big)^{d_0+1}I(\lambda)\Big| \lesssim \lambda^{-p_n}\log^{d_n-k}(\lambda) \tag{25}$$
for all $1\le k\le d_n$.
The proof is by induction on $n\in\mathbb{N}$ and $1\le k\le d_n$. For all such $n$ and $k$, let
$$G_{n,k}(\lambda) = \Big(\lambda\frac{d}{d\lambda}+p_n\Big)^k\Big(\lambda\frac{d}{d\lambda}+p_{n-1}\Big)^{d_{n-1}+1}\cdots\Big(\lambda\frac{d}{d\lambda}+p_0\Big)^{d_0+1}I(\lambda).$$
The induction hypothesis states that $G_{n,k}$ is of the form
$$G_{n,k}(\lambda) = \sum_{j=0}^{d_0+\cdots+d_{n-1}+n+k}\lambda^jJ_{j,n,k}(\lambda),$$
where there are
• $\psi_{j,n,k}$ smooth,
• $\beta = \beta_{j,n,k}$ and $w$ (depending on $\beta$) such that $p_n = \langle\beta+1,w\rangle - j$,
• a set $\Gamma$ such that $\sum_{\gamma\in\Gamma}b_\gamma(j,n,k)x^\gamma$ converges uniformly and absolutely,
and all $\gamma\in\Gamma$ satisfy
• $\langle\gamma+1,w'\rangle \ge \langle\beta_{j,n,k}+1,w\rangle$ for all $w'$ normal to $N_+(\phi)$,
such that
$$J_{j,n,k}(\lambda) = \int_{\mathbb{R}^d}e^{i\lambda\phi(x)}\psi_{j,n,k}(x)\sum_{\gamma\in\Gamma}b_\gamma(j,n,k)x^\gamma\,dx.$$
From now on the dependence on $j,k,n$ is suppressed. The case $n = 0$, $k = 1$ was shown in the previous section, taking $\beta = 0$, $\Gamma = \{\alpha+\beta : \alpha\in\mathrm{supp}_+(\phi)\}$.
Assuming $k < d_n$, we apply $(\lambda\frac{d}{d\lambda}+p_n)$ to $G_{n,k-1}$ (otherwise, apply $(\lambda\frac{d}{d\lambda}+p_{n+1})$; the proof is identical). By the induction hypothesis, $G_{n,k-1}$ is a sum of terms $\lambda^jJ(\lambda)$ satisfying the conditions above, where $J$ depends on $j,k,n$. For all $j$ there is some $\beta$ and $w$ corresponding to a codimension 1 face such that $p_n = \langle\beta+1,w\rangle - j$. Therefore,
$$\Big(\lambda\frac{d}{d\lambda}+p_n\Big)\lambda^jJ(\lambda) = \lambda^j\Big(\int e^{i\lambda\phi(x)}\,i\lambda\phi(x)\,\psi(x)\sum_\gamma b_\gamma x^\gamma\,dx + \langle\beta+1,w\rangle\int e^{i\lambda\phi(x)}\psi(x)\sum_\gamma b_\gamma x^\gamma\,dx\Big) = \lambda^j\big(J'(\lambda) + \langle\beta+1,w\rangle J(\lambda)\big).$$
= λj (J 0 (λ) + hβ + 1, wiJ(λ)).
We separately estimate J 0 and J. For simplicity, write J 0 = J10 + J20 , where
J10
Z
=
eiλφ(x) iλψ(x)
X
bγ x γ
γ
d X
X
j=1 α
21
cα αj wj xα dx,
and
J20 =
Z
eiλφ(x) iλψ(x)
X
bγ x γ
γ
d X
X
cα αj (vj − wj )xα dx.
j=1 α
We reuse the notation $v$ and $w$ from the previous section. The argument is very similar to the one already given. We use integration by parts along with the equality
$$wx\cdot\nabla\phi(x) = \sum_{j=1}^d\sum_\alpha c_\alpha\alpha_jw_jx^\alpha$$
to write $J_1'$ as
$$\int e^{i\lambda\phi(x)}\,i\lambda\,(wx)\cdot\nabla\phi(x)\,\psi(x)\sum_\gamma b_\gamma x^\gamma\,dx \tag{26}$$
$$= -\sum_{j=1}^d\sum_\gamma w_jb_\gamma\int e^{i\lambda\phi(x)}\psi'_{x_j}(x)\,x^{\gamma+e_j}\,dx - \sum_\gamma\langle\gamma+1,w\rangle b_\gamma\int e^{i\lambda\phi(x)}\psi(x)\,x^\gamma\,dx. \tag{27}$$
The integrals under the sum over $j$ can be bounded above by $\lambda^{-\langle\gamma+e_j+1,\,w(\gamma+e_j+1)\rangle}$ times the appropriate power of log by corollary 1. By the induction hypothesis,
$$\langle\gamma+e_k+1,\,w(\gamma+e_k+1)\rangle \ge \langle\beta+1,w\rangle + \langle e_k,\,w(\gamma+e_k+1)\rangle.$$
If $\langle e_k,w(\gamma+e_k+1)\rangle\neq0$, the exponent of $\lambda$ in our approximation is strictly better than $\langle\beta+1,w\rangle$. Otherwise, the $k$-th coefficient in (27) is zero, so the estimate of these integrals is strictly better. Next, adding the second integral of (27) to $\langle\beta+1,w\rangle J$, we get
$$\sum_\gamma b_\gamma\big(\langle\beta+1,w\rangle-\langle\gamma+1,w\rangle\big)\int e^{i\lambda\phi(x)}\psi(x)x^\gamma\,dx = \sum_{\langle\gamma+1,w\rangle>\langle\beta+1,w\rangle}b'_\gamma\int e^{i\lambda\phi(x)}\psi(x)x^\gamma\,dx.$$
Here we can bound each summand above by $\lambda^{-\langle\gamma+1,w(\gamma+1)\rangle}\log^{k_\gamma}(\lambda)$, and it is the same story as in the base case.
The last integral we have to estimate is $J_2'$. By dominated convergence, we rewrite it as
$$\sum_{j=1}^d\sum_\gamma\sum_{\alpha\notin H_w}b_\gamma c_\alpha\alpha_j(v_j-w_j)\int e^{i\lambda\phi(x)}\,i\lambda\,\psi(x)\,x^{\gamma+\alpha}\,dx.$$
Each integral in this sum is bounded above by $\lambda\cdot\lambda^{-\langle\alpha+\gamma+1,\,w(\alpha+\gamma+1)\rangle}$ times some power of log that we must treat delicately, depending on the codimension of the face generated by the origin and $\alpha+\gamma+1$. The argument is exactly the same as in the base case. So we are done proving (25).
7.4 A differential inequality
The last thing we need to do is show that (25) implies the asymptotic expansion of $I(\lambda)$ we have been looking for. The expansion will be a corollary of the following result:

Lemma 3. Let $f:(2,\infty)\to\mathbb{C}$ be smooth. Assume there are positive rationals $p_0<p_1<\cdots<p_{n+1}$ and positive integers $d_0,\dots,d_{n+1}$ such that
$$\Big|\Big(\lambda\frac{d}{d\lambda}+p_n\Big)^{d_n}\cdots\Big(\lambda\frac{d}{d\lambda}+p_0\Big)^{d_0}f(\lambda)\Big| \lesssim \lambda^{-p_{n+1}}\log^{d_{n+1}-1}(\lambda). \tag{28}$$
Then, there are constants $a_{jk}\in\mathbb{C}$ such that
$$f(\lambda) = \sum_{j=0}^{n}\sum_{k=0}^{d_j-1}a_{jk}\lambda^{-p_j}\log^{d_j-1-k}(\lambda) + O(\lambda^{-p_{n+1}}\log^{d_{n+1}-1}(\lambda)).$$
First, we require some more basic results about the differential operator we are considering. We let $p_j$ and $d_j$ be as in lemma 3. We assume $f:\mathbb{R}\to\mathbb{C}$ is smooth for all statements below. Also, big-O statements are for $\lambda\to\infty$.

Proposition 4. Let $h:(2,\infty)\to\mathbb{C}$ be smooth. Assume there are positive rationals $p_0<p_1<\cdots<p_{n+1}$ and positive integers $d_0,\dots,d_{n+1}$ such that
$$\Big(\lambda\frac{d}{d\lambda}+p_n\Big)^{d_n}\cdots\Big(\lambda\frac{d}{d\lambda}+p_0\Big)^{d_0}h(\lambda) = 0.$$
Then, there are $a_{jk}\in\mathbb{C}$ such that
$$h(\lambda) = \sum_{j=0}^{n}\sum_{k=0}^{d_j-1}a_{jk}\lambda^{-p_j}\log^k(\lambda).$$
Proposition 4 can be shown by a simple induction argument on $0\le m\le d_0+\cdots+d_n$.
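The key computation behind proposition 4 (a one-line check we include for convenience) is
$$\Big(\lambda\frac{d}{d\lambda}+p\Big)\lambda^{-p}\log^k(\lambda) = k\,\lambda^{-p}\log^{k-1}(\lambda),$$
so $(\lambda\frac{d}{d\lambda}+p)^{d}$ annihilates $\lambda^{-p}\log^k(\lambda)$ for $0\le k\le d-1$, while for $q\neq p$ it maps $\lambda^{-q}\log^k(\lambda)$ to $(p-q)\lambda^{-q}\log^k(\lambda)$ plus lower powers of log. Counting these solutions gives exactly the span appearing in proposition 4.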
Proposition 5. Let $f:(2,\infty)\to\mathbb{C}$ be smooth. Let $0<p<q$ and let $d\in\mathbb{N}$. If $|(\lambda\frac{d}{d\lambda}+p)f(\lambda)| \lesssim \lambda^{-q}\log^d(\lambda)$, then $|f(\lambda)| \lesssim \lambda^{-q}\log^d(\lambda)$.
Proof. We multiply both sides of the inequality by $\lambda^{p-1}$, notice the left-hand side becomes exact, and integrate:
$$\Big|\int_\lambda^\infty\frac{d}{dt}\big(t^pf(t)\big)\,dt\Big| \le \int_\lambda^\infty\Big|\frac{d}{dt}\big(t^pf(t)\big)\Big|\,dt \lesssim \int_\lambda^\infty t^{p-q-1}\log^d(t)\,dt.$$
Since $p-q-1<-1$, the rightmost side is integrable, therefore so is the leftmost. Integrating by parts (differentiating the log term if $d\neq0$), we conclude
$$|\lambda^pf(\lambda)| \lesssim \lambda^{p-q}\log^d(\lambda).$$
This technique is the same one we would use if we were providing the proof of proposition 4. Proposition 5 provides the base case for the proof of lemma 3:
Proof. Let $D_n$ be the differential operator $(\lambda\frac{d}{d\lambda}+p_n)^{d_n}\cdots(\lambda\frac{d}{d\lambda}+p_0)^{d_0}$. Let $h$ be the general solution to the homogeneous equation $D_n(h) = 0$ guaranteed by proposition 4. Then to solve for $f$ in the differential inequality (28), we need to solve $|D_n(f+h)| \lesssim \lambda^{-p_{n+1}}\log^{d_{n+1}-1}(\lambda)$. We use induction the same way as in the proof of proposition 5, making use of $p_0<\cdots<p_n<p_{n+1}$. We conclude
$$|f(\lambda)+h(\lambda)| \lesssim \lambda^{-p_{n+1}}\log^{d_{n+1}-1}(\lambda).$$
Hence, there are constants $a_{jk}\in\mathbb{C}$ such that
$$f(\lambda) = \sum_{j=0}^{n}\sum_{k=0}^{d_j-1}a_{jk}\lambda^{-p_j}\log^k(\lambda) + O(\lambda^{-p_{n+1}}\log^{d_{n+1}-1}(\lambda)).$$
Now we can conclude that for all $n\in\mathbb{N}$, there are $a_{jk}\in\mathbb{C}$ such that
$$\Big|I(\lambda) - \sum_{j=0}^{n}\sum_{k=0}^{d_j-1}a_{jk}\lambda^{-p_j}\log^k(\lambda)\Big| \lesssim \lambda^{-p_{n+1}}\log^{d_{n+1}-1}(\lambda).$$
Finally, taking $p_j$ and $d_j$ as in theorem 1, the proof is complete. Note that in the propositions above, we should take $d_j+1$ for the codimension.
8 Acknowledgments
I would like to deeply thank my adviser, Professor Philip Gressman, for his truly
inspiring enthusiasm, inexhaustible patience and invaluable advice. I would also
like to thank Professors Michael Greenblatt and Robert Strain for some helpful
comments and suggestions.
References
[1] Anthony Carbery. A uniform sublevel set estimate. In Harmonic analysis
and partial differential equations, volume 505 of Contemp. Math., pages
97–103. Amer. Math. Soc., Providence, RI, 2009.
[2] Anthony Carbery, Michael Christ, and James Wright. Multidimensional
van der Corput and sublevel set estimates. J. Amer. Math. Soc., 12(4):981–
1015, 1999.
[3] Anthony Carbery and James Wright. What is van der Corput’s lemma in
higher dimensions? Publ. Mat., (Vol. Extra):13–26, 2002.
[4] Yen Do and Philip T. Gressman. An operator van der Corput estimate arising from oscillatory Riemann–Hilbert problems. 2013. Available online at arXiv:1308.1367.
[5] Michael Greenblatt. Oscillatory integral decay, sublevel set growth, and
the Newton polyhedron. Math. Ann., 346(4):857–895, 2010.
[6] Michael Greenblatt. Maximal averages over hypersurfaces and the Newton
polyhedron. J. Funct. Anal., 262(5):2314–2348, 2012.
[7] Philip T. Gressman. Uniform geometric estimates for sublevel sets. 2009.
Available online at arXiv:0707.3168.
[8] Branko Grünbaum. Convex polytopes, volume 221 of Graduate Texts in
Mathematics. Springer-Verlag, New York, second edition, 2003.
[9] Joe Kamimoto and Toshihiro Nose. Newton polyhedra and weighted oscillatory integrals with smooth phases, 2014. arXiv:1406.4325.
[10] Steven G. Krantz and Harold R. Parks. A primer of real analytic functions,
volume 4 of Basler Lehrbücher [Basel Textbooks]. Birkhäuser Verlag, Basel,
1992.
[11] S. Lojasiewicz. Sur le problème de la division. Studia Math., 18:87–136,
1959.
[12] Bernard Malgrange. Intégrales asymptotiques et monodromie. Annales
Scientifiques de l’École Normale Supérieure, 7(3):405–430, 1974.
[13] D. H. Phong, E. M. Stein, and Jacob Sturm. Multilinear level set operators, oscillatory integral operators, and Newton polyhedra. Math. Ann.,
319(3):573–596, 2001.
[14] Vyacheslav S. Rychkov. Sharp $L^2$ bounds for oscillatory integral operators with $C^\infty$ phases. Math. Z., 236(3):461–489, 2001.
[15] A. N. Varchenko. Newton polyhedra and estimation of oscillating integrals.
Funktsional’nyi Analiz i Ego Prilozheniya, 10(3):13–38, 1976.