Revista Matemática Iberoamericana
Vol. 15, N.º 1, 1999
Parabolic Harnack inequality
and estimates of Markov
chains on graphs
Thierry Delmotte
Abstract. On a graph, we give a characterization of the parabolic Harnack inequality and of Gaussian estimates for reversible Markov chains by geometric properties (volume regularity and Poincaré inequality).
1. Introduction.
Consider the standard random walk with kernel $p_n(x,y)$ on a graph $\Gamma$ with polynomial volume growth. Under which conditions does one have the following Gaussian estimates?
$$\frac{c_p}{V(x,\sqrt{n})}\,e^{-C\,d(x,y)^2/n} \;\le\; p_n(x,y) \;\le\; \frac{C_p}{V(x,\sqrt{n})}\,e^{-c\,d(x,y)^2/n},$$
where $V(x,\sqrt{n})$ is the cardinal of the ball of center $x$ and radius $\sqrt{n}$. Note first that $p_n(x,y)$ may be null for $d(x,y) > n$ or for $d(x,y) \not\equiv n \pmod 2$. Thus we will consider only $d(x,y) \le n$ and graphs where all vertices carry loops. With these precisions, the Gaussian estimates were proved when $\Gamma$ is a group in [15]. They were also proved for linear volume growth in [5], and it was there conjectured that they were true for polynomial growth under an isoperimetric assumption such as the Poincaré inequality.
Indeed in the continuous setting of Riemannian manifolds, they were
proved first for non-negative Ricci curvature in [19] and then under a Poincaré inequality assumption in [12], [30]. All these proofs are based on a parabolic Harnack inequality, and [30] shows that the Poincaré inequality is the right isoperimetric assumption.
The aim of this paper, which is announced in [10], is to prove the conjecture of [5] and, more precisely, to give a characterization of the parabolic Harnack inequality and of the Gaussian estimates by geometric properties (volume regularity and Poincaré inequality) which is the discrete counterpart of the main result in [30]. A precise statement is proposed in Section 1.4 after some definitions. The main part, the proof of the Harnack inequality, is an application of J. Moser's method [21], [22], [23]. His approach is presented on Euclidean spaces $\mathbb{R}^n$ but shows clearly the contribution of the Poincaré and Sobolev inequalities. That is why it has been adapted to many different settings.
As far as graphs are concerned, elliptic versions (without the time variable) of the Harnack inequality have been proved in [20] under a special isoperimetric assumption, and in [9], [16], [26] by J. Moser's iterative method with the Poincaré inequality. The discretization of space raised some technical problems, but the proof could go through. It is much more intricate to deal with both discretizations (space and time), especially to obtain Caccioppoli inequalities. Section 1.5 is an attempt to show these difficulties and their origin. Because of these criss-crossing discretization difficulties, we have tried to prove a continuous-time parabolic Harnack inequality on graphs, and this raised only solvable technical problems, as for the elliptic version.
One application of the Harnack inequality is another proof of Hölder regularity (see Section 4.1) for solutions of the elliptic/parabolic equation (a theorem of J. Nash [24]). Another application is that it yields Gaussian estimates. The study of these estimates in the mixed setting (discrete geometry, continuous time) in [8], [25] has been helpful, because at first we only prove the Harnack inequality in this setting.
At this Gaussian estimates step, it is possible to deduce discrete-time results from the continuous-time ones. This is the crucial point of this paper, because all other steps are more or less adaptations of known techniques which are fully reviewed in [30]. This strategy (to work in the continuous-time setting and to compare with the discrete-time setting) is also employed in [31, Section 1.4.1]. One side of the comparison is stated in Theorem 3.6 and may be used again. The other side depends closely on the problem considered.
To deduce Gaussian estimates from the Harnack inequality is the classical (chronological, after J. Moser's work) way to introduce this theory. The reverse order, based on J. Nash's ideas [24] and completed in [11], is also useful here, because our discrete-time Gaussian estimates yield the discrete-time Harnack inequality, which is finally proven after a return trip to Gaussian estimates.
Let us note that we aimed at not using any algebraic structure (and we get an equivalence, which proves that this structure in fact plays no role). Similarly, the authors of [3] tried to extend related results to a more general class than Cayley graphs (strongly convex subgraphs of homogeneous graphs), and in [33] some estimates are obtained, still on the particular graphs $\mathbb{Z}^n$ (with a continuous time), but for non-uniform transitions.
1.1. The geometric setting.
Let $\Gamma$ be an infinite set and $\mu_{xy} = \mu_{yx} \ge 0$ a symmetric weight on $\Gamma \times \Gamma$. It induces a graph structure if we call $x$ and $y$ neighbours ($x \sim y$) when $\mu_{xy} \neq 0$ (note that loops are allowed). We will assume that this graph is connected and locally uniformly finite; this means there exists $N$ such that for all $x \in \Gamma$, $\#\{y : y \sim x\} \le N$, and it is implied by the geometric conditions (see below) $DV(C_1)$ or $\Delta(\alpha)$. Vertices are weighted by $m(x) = \sum_{y \sim x} \mu_{xy}$. The graph is endowed with its natural metric (the smallest number of edges of a path between two points). We define balls (for $r$ real) $B(x,r) = \{y : d(x,y) \le r\}$ and the volume of a subset $A$ of $\Gamma$, $V(A) = \sum_{x \in A} m(x)$. We will write $V(x,r)$ for $V(B(x,r))$.
We shall consider the following geometric conditions:
Definition 1.1. The weighted graph $(\Gamma,\mu)$ satisfies the volume regularity (or volume doubling property) $DV(C_1)$ if
$$V(x,2r) \le C_1\, V(x,r), \qquad \text{for all } x \in \Gamma,\ \text{for all } r \in \mathbb{R}^+.$$
This implies for $r \ge s$ that (the square brackets denote the integer part)
$$V(x,r) \le V\big(x,\, 2^{[\log(r/s)/\log 2]+1}\, s\big) \le C_1\, C_1^{\log(r/s)/\log 2}\, V(x,s) = C_1 \Big(\frac{r}{s}\Big)^{\log C_1/\log 2}\, V(x,s).$$
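As a quick numerical illustration (not taken from the paper), the doubling property can be observed on the grid $\mathbb{Z}^2$. The sketch below uses the counting measure in place of the weighted volume $V$, an assumption made only for this example:

```python
def ball_volume(center, r):
    """Cardinality of the graph ball of radius r around `center` in Z^2
    (graph distance = l1 distance; counting measure stands in for V)."""
    cx, cy = center
    return sum(1 for dx in range(-r, r + 1)
                 for dy in range(-r, r + 1)
                 if abs(dx) + abs(dy) <= r)

# Volume doubling: V(x, 2r) <= C1 * V(x, r); on Z^2 the ratio approaches 4.
ratios = [ball_volume((0, 0), 2 * r) / ball_volume((0, 0), r)
          for r in range(1, 30)]
print(max(ratios))  # any value >= this maximum is a valid C1 for these radii
```

Here $|B(0,r)| = 2r^2 + 2r + 1$, so the doubling ratio stays below $4$, in line with the quadratic volume growth of $\mathbb{Z}^2$.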
Definition 1.2. The weighted graph $(\Gamma,\mu)$ satisfies the Poincaré inequality $P(C_2)$ if
$$\sum_{x \in B(x_0,r)} m(x)\,|f(x) - f_B|^2 \le C_2\, r^2 \sum_{x,y \in B(x_0,2r)} \mu_{xy}\,(f(y) - f(x))^2$$
for all $f \in \mathbb{R}^\Gamma$, for all $x_0 \in \Gamma$, for all $r \in \mathbb{R}^+$, where
$$f_B = \frac{1}{V(x_0,r)} \sum_{x \in B(x_0,r)} m(x)\, f(x).$$
Some methods to obtain this Poincaré inequality on a graph are proposed in [6].
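The two sides of $P(C_2)$ can be compared numerically on a simple graph. The sketch below (an illustration, not from the paper) takes balls around $0$ in $\mathbb{Z}$ with unit edge weights, so $m(x) = 2$, and checks that the ratio of the two sides stays bounded:

```python
import random

def poincare_ratio(r, f):
    """Ratio LHS / (r^2 * sum of edge terms) in the discrete Poincare
    inequality P(C2), for the ball B(0, r) in Z with unit edge weights."""
    ball = range(-r, r + 1)                  # B(x0, r) with x0 = 0
    edges = range(-2 * r, 2 * r)             # edges {x, x+1} inside B(x0, 2r)
    vol = sum(2 for x in ball)               # V(x0, r), m(x) = 2
    f_mean = sum(2 * f[x] for x in ball) / vol
    lhs = sum(2 * (f[x] - f_mean) ** 2 for x in ball)
    rhs = r * r * sum((f[x + 1] - f[x]) ** 2 for x in edges)
    return lhs / rhs if rhs else 0.0

random.seed(0)
vals = [poincare_ratio(r, {x: random.random() + 0.1 * x * x
                           for x in range(-2 * r, 2 * r + 1)})
        for r in range(2, 20)]
print(max(vals))  # stays bounded: an empirical C2 for these trials
```

A crude spectral bound for the path graph shows the ratio is at most $(2r+1)^2/(2r^2)$ here, so the observed maximum is indeed a uniform bound and not an accident of the random trials.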
Definition 1.3. Let $\alpha > 0$; $(\Gamma,\mu)$ satisfies $\Delta(\alpha)$ if
$$x \sim y \text{ implies } \mu_{xy} \ge \alpha\, m(x), \qquad \text{for all } x \in \Gamma,\quad x \sim x.$$
Two assertions are contained in this definition. The fact that $\mu_{xy} \neq 0$ implies $\mu_{xy} \ge \alpha\, m(x)$ is a local ellipticity property; it may be understood as a local volume regularity if we see the graph as a network. It implies that the graph is locally uniformly finite with $N = 1/\alpha$ (so does $DV(C_1)$, with $N = C_1^2$). Only this first assertion is needed for continuous-time results (such as Theorem 2.1). The second assertion is that $\mu_{xx} \ge \alpha\, m(x)$ (or $p(x,x) \ge \alpha$ with the notations of the next section). It will be used in Section 3.2 to compare the discrete-time and continuous-time Markov kernels. This condition appears in [4], where the authors prove that it implies the analyticity of the Markov operator. This fact is used for instance in [27] to obtain temporal regularity.
If one considers a weighted graph $(\Gamma,\mu)$ which satisfies only the first assertion, for instance the standard random walk on $\mathbb{Z}$ ($\mu_{mn} = 1$ if $|m - n| = 1$, $\mu_{mn} = 0$ otherwise), one can study the graph $(\Gamma,\mu^{(2)})$, where
$$\mu^{(2)}_{xy} = \sum_z \frac{\mu_{xz}\,\mu_{zy}}{m(z)}.$$
With the notations of the next section, this gives the iterated kernel
$$p^{(2)}(x,y) = p_2(x,y) \qquad \text{and} \qquad m^{(2)}(x) = m(x).$$
The point is that
$$\mu^{(2)}_{xx} = \sum_z \frac{(\mu_{xz})^2}{m(z)} \ge \alpha \sum_z \mu_{xz} = \alpha\, m(x).$$
Thus $(\Gamma,\mu^{(2)})$ satisfies the complete assumption $\Delta(\alpha^2)$. Indeed, if $\mu^{(2)}_{xy} \neq 0$, there exists $z_0$ such that $\mu_{xz_0} \neq 0$ and $\mu_{z_0 y} \neq 0$, and
$$\mu^{(2)}_{xy} \ge \frac{\mu_{xz_0}\,\mu_{z_0 y}}{m(z_0)} \ge \alpha\,\mu_{xz_0} \ge \alpha^2\, m(x).$$
To extract afterwards from the results for $(\Gamma,\mu^{(2)})$ some consequences for $(\Gamma,\mu)$, we must be careful that $(\Gamma,\mu^{(2)})$ may not be connected (see the standard random walk on $\mathbb{Z}$).
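The construction of $\mu^{(2)}$ and the identities above can be checked directly on a small graph. The sketch below (illustrative; the 4-cycle and its weights are my choice, not the paper's) verifies that $m^{(2)} = m$ and that loops appear:

```python
def iterate_weights(mu):
    """From symmetric weights mu[(x, y)], build
    mu2[(x, y)] = sum_z mu_xz * mu_zy / m(z); then m2 = m and the kernel of
    (Gamma, mu2) is the two-step kernel p_2."""
    nodes = sorted({x for x, _ in mu})
    m = {x: sum(mu.get((x, y), 0) for y in nodes) for x in nodes}
    mu2 = {(x, y): sum(mu.get((x, z), 0) * mu.get((z, y), 0) / m[z]
                       for z in nodes)
           for x in nodes for y in nodes}
    return mu2, m

# Standard random walk on a 4-cycle (no loops): mu_xy = 1 iff x, y adjacent.
mu = {(x, y): 1 for x in range(4) for y in range(4)
      if min((x - y) % 4, (y - x) % 4) == 1}
mu2, m = iterate_weights(mu)
m2 = {x: sum(mu2[(x, y)] for y in range(4)) for x in range(4)}
print(m2 == m, mu2[(0, 0)])  # m is preserved, and mu2 now has loops
```

Note that the 4-cycle is bipartite, so $(\Gamma,\mu^{(2)})$ splits into two components, exactly the connectivity caveat mentioned above.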
1.2. Markov chains and parabolic equations.
To the weighted graph we associate discrete-time and continuous-time reversible Markov kernels. Set $p(x,y) = \mu_{xy}/m(x)$; the discrete kernel $p_n(x,y)$ is defined by
(1.1)
$$\begin{cases} p_0(x,z) = \delta(x,z), \\ p_{n+1}(x,z) = \sum_y p(x,y)\, p_n(y,z). \end{cases}$$
This kernel is not symmetric, but
$$\frac{p_n(x,y)}{m(y)} = \frac{p_n(y,x)}{m(x)}.$$
We keep this notation, which represents the probability to go from $x$ to $y$ in $n$ steps, but it may also be interesting to think of the density
$$h_n(x,y) = \frac{p_n(x,y)}{m(y)},$$
which is symmetric and is the right analog of a kernel on a continuous space.
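The recursion (1.1) and the reversibility identity can be exercised on any small weighted graph; the weighted triangle below is an arbitrary example chosen for this sketch:

```python
def step_kernel(mu):
    """Discrete kernel p(x, y) = mu_xy / m(x) from symmetric weights mu."""
    nodes = sorted({x for x, _ in mu})
    m = {x: sum(mu.get((x, y), 0) for y in nodes) for x in nodes}
    p = {(x, y): mu.get((x, y), 0) / m[x] for x in nodes for y in nodes}
    return nodes, m, p

def iterate(nodes, p, n):
    """Iterate (1.1): p_0 = delta, p_{n+1}(x, z) = sum_y p(x, y) p_n(y, z)."""
    pn = {(x, z): float(x == z) for x in nodes for z in nodes}
    for _ in range(n):
        pn = {(x, z): sum(p[(x, y)] * pn[(y, z)] for y in nodes)
              for x in nodes for z in nodes}
    return pn

# Weighted triangle with a loop at vertex 0.
mu = {(0, 0): 1, (0, 1): 2, (1, 0): 2, (1, 2): 1,
      (2, 1): 1, (0, 2): 3, (2, 0): 3}
nodes, m, p = step_kernel(mu)
p5 = iterate(nodes, p, 5)
# Reversibility: p_n(x, y) / m(y) = p_n(y, x) / m(x), i.e. h_n is symmetric.
print(all(abs(p5[(x, y)] / m[y] - p5[(y, x)] / m[x]) < 1e-12
          for x in nodes for y in nodes))
```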
We will say that $u$ satisfies the (discrete-time) parabolic equation at $(n,x)$ if
(1.2)
$$m(x)\, u(n+1,x) = \sum_y \mu_{xy}\, u(n,y).$$
This is the case of $(n,x) \mapsto p_n(x,y)$.
Note that we have a weighted geometry and we consider only the canonical parabolic equation on it. Imagine we had a non-weighted geometry (this means volume regularity and Poincaré inequality without the weights $m(x)$ and $\mu_{xy}$) and any parabolic equation with a uniform ellipticity constant; then we would consider the geometry weighted by the coefficients of the equation. Because of the ellipticity, the geometric assumptions on the non-weighted geometry would yield those on the weighted geometry. Our background is a little more general, since the ellipticity constant (condition $\Delta(\alpha)$) is not uniform, but above all, as A. Grigor'yan pointed out to us, its presentation is cleaner because these geometric assumptions are always applied with the weights.
Note also that every reversible Markov chain can be obtained as above: starting from the Markov kernel $p$ and its invariant measure $m$, one constructs $\mu_{xy} = p(x,y)\, m(x)$.
Definition 1.4. The weighted graph $(\Gamma,\mu)$ satisfies the Gaussian estimates $G(c_l, C_l, C_r, c_r)$ (all constants are positive) if $d(x,y) \le n$ implies
$$\frac{c_l\, m(y)}{V(x,\sqrt{n})}\, e^{-C_l\, d(x,y)^2/n} \le p_n(x,y) \le \frac{C_r\, m(y)}{V(x,\sqrt{n})}\, e^{-c_r\, d(x,y)^2/n}.$$
Of course, if $d(x,y) > n$, then $p_n(x,y) = 0$. On Euclidean spaces, these estimates were first proved for fundamental solutions of parabolic equations in [1].
The continuous-time Markov kernel may be defined by
$$P_t(x,z) = e^{-t} \sum_{k=0}^{+\infty} \frac{t^k}{k!}\, p_k(x,z).$$
Like the discrete kernel, it satisfies
$$\frac{P_t(x,y)}{m(y)} = \frac{P_t(y,x)}{m(x)}.$$
It is also the solution, for $(t,x) \in \mathbb{R}^+ \times \Gamma$, of
$$\begin{cases} P_0(x,z) = \delta(x,z), \\ m(x)\, \dfrac{\partial}{\partial t} P_t(x,z) = \displaystyle\sum_y \mu_{xy}\, \big(P_t(y,z) - P_t(x,z)\big). \end{cases}$$
Indeed,
$$\frac{\partial}{\partial t} P_t(x,z) = e^{-t} \Big( \sum_{k=1}^{+\infty} \frac{k\, t^{k-1}}{k!}\, p_k(x,z) - \sum_{k=0}^{+\infty} \frac{t^k}{k!}\, p_k(x,z) \Big)$$
$$= e^{-t} \Big( \sum_{k=1}^{+\infty} \frac{t^{k-1}}{(k-1)!} \sum_y p(x,y)\, p_{k-1}(y,z) - \sum_{k=0}^{+\infty} \frac{t^k}{k!}\, p_k(x,z) \Big)$$
$$= \sum_y p(x,y)\, \big(P_t(y,z) - P_t(x,z)\big).$$
Therefore we will say that $u$ satisfies the (continuous-time) parabolic equation at $(t,x)$ if
(1.3)
$$m(x)\, \frac{\partial}{\partial t} u(t,x) = \sum_y \mu_{xy}\, \big(u(t,y) - u(t,x)\big).$$
1.3. Parabolic Harnack inequalities.
These inequalities apply to positive solutions of the parabolic equations on cylinders (products of a time interval and a ball). Let us make this precise at the boundary of the cylinders. We shall say that $u$ is a non-negative solution on $Q = I \times B(x_0,r)$ if it is the trace of a non-negative solution on $I \times B(x_0,r+1)$ which satisfies (1.2) or (1.3) everywhere on $Q$. For instance, in the continuous case, this implies: for all $(t,x) \in Q$,
$$u(t,x) \ge 0;$$
for all $t \in I$, for all $x \in B(x_0,r-1)$,
$$m(x)\Big(\frac{\partial}{\partial t} u(t,x) + u(t,x)\Big) = \sum_y \mu_{xy}\, u(t,y);$$
and, for all $t \in I$, $d(x_0,x) = [r]$ implies
(1.4)
$$m(x)\Big(\frac{\partial}{\partial t} u(t,x) + u(t,x)\Big) \ge \sum_{y \in B(x_0,r)} \mu_{xy}\, u(t,y).$$
Definition 1.5. Set $\eta \in\, ]0,1[$ and $0 < \theta_1 < \theta_2 < \theta_3 < \theta_4$; $(\Gamma,\mu)$ satisfies the continuous-time parabolic Harnack inequality
$$\mathcal{H}(\eta, \theta_1, \theta_2, \theta_3, \theta_4, C)$$
if for all $x_0$, $s$, $r$ and every non-negative solution $u$ on $Q = [s,\, s + \theta_4 r^2] \times B(x_0,r)$, we have
$$\sup_{Q^-} u \le C\, \inf_{Q^+} u,$$
where $Q^- = [s + \theta_1 r^2,\, s + \theta_2 r^2] \times B(x_0,\eta r)$ and $Q^+ = [s + \theta_3 r^2,\, s + \theta_4 r^2] \times B(x_0,\eta r)$.
Let us explain the choice of the boundary condition. For $r < 1$, $Q$ has no interior, so we just have (1.4), but this is sufficient to obtain the inequality, since it gives a lower bound for $(\partial/\partial t)\, u(t,x_0)$.
If we assume $\Delta(\alpha)$ and $\mathcal{H}(\eta, \theta_1, \theta_2, \theta_3, \theta_4, C)$, then for all $\eta' \in\, ]0,1[$ and $0 < \theta_1' < \theta_2' < \theta_3' < \theta_4'$, there exists $C'$ such that
$$\mathcal{H}(\eta', \theta_1', \theta_2', \theta_3', \theta_4', C')$$
is true. Take two points $(t^-, x^-)$ and $(t^+, x^+)$ in $Q'^-$ and $Q'^+$. For $r$ big enough, there is a decomposition $x^- = x_0, \ldots, x_n = x^+$ and $t^- = t_0 < \cdots < t_n = t^+$ (where $n$ depends only on $\eta$, $\eta'$ and the $\theta_i$, $\theta_i'$) such that we can obtain $u(t_i, x_i) \le C\, u(t_{i+1}, x_{i+1})$. So we can take $C' = C^n$. For $r$ bounded, the condition $\Delta(\alpha)$ gives the inequality. For simplicity, we will denote
$$\mathcal{H}(C_H) = \mathcal{H}(0.99,\, 0.01,\, 0.1,\, 0.11,\, 100,\, C_H).$$
These coefficients have been chosen for convenience when we apply this inequality (see typically Propositions 3.1 or 3.4). We will write $u(t^-, x^-) \le C_H\, u(t^+, x^+)$ as soon as $x^-$ and $x^+$ are in the contraction of a ball on which $u$ is a solution and "there is time" between $t^-$ and $t^+$ as well as before $t^-$. We will not have to bother with technical coefficients if they do not exceed 10.
Definition 1.6. Set $\eta \in\, ]0,1[$ and $0 < \theta_1 < \theta_2 < \theta_3 < \theta_4$; $(\Gamma,\mu)$ satisfies the discrete-time parabolic Harnack inequality $H(\eta, \theta_1, \theta_2, \theta_3, \theta_4, C)$ if for all $x_0 \in \Gamma$, $s \in \mathbb{R}$, $r \in \mathbb{R}^+$ and every non-negative solution $u$ on $Q = (\mathbb{Z} \cap [s,\, s + \theta_4 r^2]) \times B(x_0,r)$, we have that
$$(n^-, x^-) \in Q^-,\quad (n^+, x^+) \in Q^+ \quad \text{and} \quad d(x^-, x^+) \le n^+ - n^-$$
implies
$$u(n^-, x^-) \le C\, u(n^+, x^+),$$
where $Q^- = (\mathbb{Z} \cap [s + \theta_1 r^2,\, s + \theta_2 r^2]) \times B(x_0,\eta r)$ and $Q^+ = (\mathbb{Z} \cap [s + \theta_3 r^2,\, s + \theta_4 r^2]) \times B(x_0,\eta r)$.
If the condition $d(x^-, x^+) \le n^+ - n^-$ is not satisfied, $u(n^+, x^+)$ has no influence on $u(n^-, x^-)$. It is always satisfied if $r \ge 2\eta/(\theta_3 - \theta_2)$, and in this case we can write
$$\sup_{Q^-} u \le C\, \inf_{Q^+} u.$$
The same remark as above holds for this inequality, and we will denote
$$H(C_H) = H(0.99,\, 0.01,\, 0.1,\, 0.11,\, 100,\, C_H).$$
1.4. Statement of the results.
Here is our main result:
Theorem 1.7. The three following properties are equivalent:
i) There exist $C_1, C_2, \alpha > 0$ such that $DV(C_1)$, $P(C_2)$ and $\Delta(\alpha)$ are true.
ii) There exists $C_H > 0$ such that $H(C_H)$ is true.
iii) There exist $c_l, C_l, C_r, c_r > 0$ such that $G(c_l, C_l, C_r, c_r)$ is true.
Theorem 3.8 states that i) implies iii), Theorem 3.10 that iii) implies ii), and Theorem 3.11 that ii) implies i).
The first part (i) implies iii)) is the most difficult, and an intermediate result is:
ii') There exists $C_H > 0$ such that $\mathcal{H}(C_H)$ is true,
which is proven in Section 2 by a Moser-type iteration argument. In fact, ii') is also equivalent to the three properties, since we can prove that ii') implies i) the same way we prove that ii) implies i). In Section 3, we complete the proof. First, ii') implies estimates for $P_t$, which yield the estimates iii) for $p_n$ by comparison. Then iii) implies ii) and ii) implies i). In the last section, two properties about Hölder regularity and the Green function are deduced for graphs which satisfy these properties.
Let us note that it is straightforward that ii) and iii) imply the hypothesis $\Delta(\alpha)$. For iii), just apply the lower bound to $p_1(x,y)$ where $y \in B(x,1)$. And for ii), set $s = -0.5$ to obtain $p_0(x,x) \le C_H\, p_1(x,y)$.
This result connects with [15] and [5], since in groups or in graphs with linear volume growth, the Poincaré inequality is always satisfied. In Euclidean graphs $\mathbb{Z}^n$, the estimates which are well known for uniform transitions (this is for instance a consequence of the result in groups) are here proved for non-uniform transitions ($\mu_{xy}$ doesn't depend on $y - x$).
1.5. Continuous or discrete time.
To prove the Harnack inequality, we will use analytic methods which yield Caccioppoli inequalities. For these methods, continuous time is naturally more convenient. We may get an idea of the problems if we look at what happens on the two-point graph $\Gamma = \{a,b\}$. Choose $p(a,a) = p(b,b) = \alpha$ and $p(a,b) = p(b,a) = 1 - \alpha$; this may be done if we set $\mu_{aa} = \mu_{bb} = \alpha$ and $\mu_{ab} = 1 - \alpha$. This gives
$$p_n(a,a) = \frac{1 + (2\alpha - 1)^n}{2}, \qquad p_n(a,b) = \frac{1 - (2\alpha - 1)^n}{2},$$
$$P_t(a,a) = \frac{1 + e^{(2\alpha - 2)t}}{2}, \qquad P_t(a,b) = \frac{1 - e^{(2\alpha - 2)t}}{2}.$$
Of course, if $\alpha = 1$, there is no link between the two points. Now, if $\alpha \neq 1$, $P$ is always a simple relaxation ($P_t(a,a) > P_t(a,b)$), whereas, for $\alpha < 1/2$, $p$ is an oscillating relaxation or worse (for $\alpha = 0$) a pure oscillation.
The first conclusion is that we have to force a minimum value on the diagonal of the Markov kernel if we want a discrete-time parabolic Harnack inequality or estimates from below. Indeed, they are not satisfied by this example for $\alpha = 0$. This has nothing to do with the fact that the graph is finite: take the standard random walk on $\mathbb{Z}$ and observe its effect on $u(0,z) = z \bmod 2$. We obtain $u(n,z) = (n+z) \bmod 2$.
What plays a role is the condition $\Delta(\alpha)$, and particularly the fact that $p(x,x) \ge \alpha$. We have extended this condition to $p(x,y)$ for $x \sim y$ so that our results are true for low values of $n$; think for instance of the lower bound of $p_1(x,y) = p(x,y)$. Besides, lower bounds for $d(x,y) \simeq n$ are a consequence of lower bounds for $n = 1$; see the proof of Theorem 3.8.
The second conclusion is that the behaviour of the discrete-time Markov chain is more difficult to control. In addition to the usual heat relaxation, there may be another phenomenon of relaxation of the oscillating errors due to the discretization of time.
One more attempt to show the difficulty of adapting the analytic methods to discrete time: consider the proof of the Caccioppoli inequality. When time and space are continuous, take $u$ such that $\partial u/\partial t = \Delta u$ and a compactly supported cut-off function $\chi$, and integrate by parts.
$$\frac{1}{2}\iint \chi^2\,\frac{\partial}{\partial t}(u^2) = \iint \chi^2\, u\,\Delta u = -\iint \nabla(\chi^2 u)\cdot\nabla u = -\iint \chi^2\,|\nabla u|^2 - \iint 2\chi\, u\,\nabla\chi\cdot\nabla u.$$
Since
$$-\iint 2\chi\, u\,\nabla\chi\cdot\nabla u \le \frac{1}{2}\iint \chi^2\,|\nabla u|^2 + 2\iint |\nabla\chi|^2\, u^2,$$
one gets
$$\frac{1}{2}\iint \chi^2\,\frac{\partial}{\partial t}(u^2) + \frac{1}{2}\iint \chi^2\,|\nabla u|^2 \le 2\iint |\nabla\chi|^2\, u^2.$$
This inequality is essential to estimate $\|\nabla u\|_2$ with $\|u\|_2$, which with the Sobolev inequality gives estimates between mean values for exponents of the same sign. Let us try to adapt this argument to discrete time. Note that (1.2) may be written in the following way:
$$m(x)\,\big(u(n+1,x) - u(n,x)\big) = \sum_y \mu_{xy}\,\big(u(n,y) - u(n,x)\big).$$
For simplicity, we will forget about the cut-off function (take $u(n,\cdot)$ compactly supported). Write
$$2 \sum_x m(x)\, u(n,x)\,\big(u(n+1,x) - u(n,x)\big) = 2 \sum_{x,y} \mu_{xy}\, u(n,x)\,\big(u(n,y) - u(n,x)\big)$$
$$= \sum_{x,y} \mu_{xy}\, u(n,x)\,\big(u(n,y) - u(n,x)\big) + \sum_{x,y} \mu_{yx}\, u(n,y)\,\big(u(n,x) - u(n,y)\big)$$
$$= -\sum_{x,y} \mu_{xy}\,\big(u(n,y) - u(n,x)\big)^2.$$
This is nice, but we have not taken the exact time differentiation of $\sum_x m(x)\, u^2(n,x)$, that is,
$$\sum_x m(x)\,\big(u^2(n+1,x) - u^2(n,x)\big)$$
$$= 2 \sum_x m(x)\, u(n,x)\,\big(u(n+1,x) - u(n,x)\big) + \sum_x m(x)\,\big(u(n+1,x) - u(n,x)\big)^2$$
$$= -\sum_{x,y} \mu_{xy}\,\big(u(n,y) - u(n,x)\big)^2 + \sum_x m(x)\,\big(u(n+1,x) - u(n,x)\big)^2.$$
Fortunately, if we suppose that $\mu_{xx} \ge \alpha\, m(x)$, then
$$\sum_x m(x)\,\big(u(n+1,x) - u(n,x)\big)^2 = \sum_x \frac{1}{m(x)}\Big(\sum_{y \neq x} \mu_{xy}\,\big(u(n,y) - u(n,x)\big)\Big)^2$$
$$\le \sum_x \frac{1}{m(x)}\Big(\sum_{y \neq x} \mu_{xy}\Big)\sum_y \mu_{xy}\,\big(u(n,y) - u(n,x)\big)^2 \le (1-\alpha) \sum_{x,y} \mu_{xy}\,\big(u(n,y) - u(n,x)\big)^2.$$
This yields
$$\sum_x m(x)\,\big(u^2(n+1,x) - u^2(n,x)\big) \le -\alpha \sum_{x,y} \mu_{xy}\,\big(u(n,y) - u(n,x)\big)^2.$$
The constant $\alpha$ has been used to control the errors due to the discrete time. But these manipulations seem far more intricate when we deal with subsolutions or the logarithm of $u$ (and cut-off functions). Therefore, we will not try to apply Moser's iterative technique directly to solutions of the discrete-time parabolic equation.
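The discrete energy inequality just derived can be tested numerically. The sketch below (a lazy walk on a 5-cycle with loop weight 2, an arbitrary example satisfying $\mu_{xx} \ge \alpha\, m(x)$ with $\alpha = 1/2$) performs one step of (1.2) on random data:

```python
import random

def energy_decay_check(mu, alpha, u):
    """If mu_xx >= alpha * m(x) for all x, one step of
    m(x) u(n+1, x) = sum_y mu_xy u(n, y) must satisfy
    sum_x m(x)(u1^2 - u0^2) <= -alpha * sum_{x,y} mu_xy (u0(y) - u0(x))^2."""
    nodes = sorted({x for x, _ in mu})
    m = {x: sum(mu.get((x, y), 0) for y in nodes) for x in nodes}
    assert all(mu.get((x, x), 0) >= alpha * m[x] for x in nodes)
    u1 = {x: sum(mu.get((x, y), 0) * u[y] for y in nodes) / m[x]
          for x in nodes}
    lhs = sum(m[x] * (u1[x] ** 2 - u[x] ** 2) for x in nodes)
    rhs = -alpha * sum(mu.get((x, y), 0) * (u[y] - u[x]) ** 2
                       for x in nodes for y in nodes)
    return lhs <= rhs + 1e-12

random.seed(1)
# Lazy walk on a 5-cycle: loop weight 2, edge weight 1, so m = 4, alpha = 1/2.
mu = {(x, x): 2 for x in range(5)}
mu.update({(x, y): 1 for x in range(5) for y in range(5)
           if min((x - y) % 5, (y - x) % 5) == 1})
ok = all(energy_decay_check(mu, 0.5,
                            {x: random.uniform(-1, 1) for x in range(5)})
         for _ in range(50))
print(ok)
```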
2. Harnack inequality for solutions of the continuous-time
parabolic equation.
Theorem 2.1. Assume $(\Gamma,\mu)$ satisfies $DV(C_1)$, $P(C_2)$ and $\Delta(\alpha)$. Then there exists $C_H$ such that $\mathcal{H}(C_H)$ is true.
The proof is an adaptation of [30]. The strategy is Moser's iterative technique [22], that is, to prove inequalities involving the mean values
$$M(u, p, [s_1,s_2] \times B) = \Big( \frac{1}{(s_2 - s_1)\, V(B)} \sum_{x \in B} \int_{s_1}^{s_2} m(x)\, u^{2p}(t,x)\, dt \Big)^{1/p}.$$
The idea is that we get the infimum when $p \to -\infty$ and the supremum when $p \to +\infty$. Thus we want to prove a series of inequalities between $-\infty$ and $+\infty$. To improve the exponent of a mean value, the Sobolev inequality proved in Section 2.1 is helpful. One application of this inequality yields an elementary step of the iterative technique, proved in Section 2.2. The iteration gives inequalities between the extrema and mean values, as stated in Section 2.3. The most difficult step is between negative and positive values. Here we use an improvement of the initial version [22] proposed in [23], with an idea of E. Bombieri [2]. This is the object of Section 2.4 and needs a weighted Poincaré inequality stated in Section 2.1.
Throughout this section, devoted to the proof of Theorem 2.1, $DV(C_1)$, $P(C_2)$ and $\Delta(\alpha)$ are assumed, and $u > 0$. The theorem for $u \ge 0$ is then straightforward.
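The limiting behaviour of these mean values can be observed numerically. The sketch below discretizes the time integral with a Riemann sum and uses an arbitrary positive function; all names and data are illustrative, not from the paper:

```python
def mean_value(u, m, times, ball, p):
    """M(u, p, [s1,s2] x B): ((1/(|I| V(B))) sum_x int_I m(x) u^{2p} dt)^{1/p},
    time integral discretized on a uniform grid. As p -> +inf this tends to
    sup u^2, as p -> -inf to inf u^2."""
    dt = times[1] - times[0]
    vol = sum(m[x] for x in ball)
    total = sum(m[x] * u(t, x) ** (2 * p) for t in times for x in ball) * dt
    return (total / ((times[-1] - times[0] + dt) * vol)) ** (1 / p)

m = {x: 1.0 for x in range(5)}
times = [k * 0.1 for k in range(10)]
u = lambda t, x: 1.0 + 0.5 * x + t          # any positive function
sup2 = max(u(t, x) for t in times for x in range(5)) ** 2
inf2 = min(u(t, x) for t in times for x in range(5)) ** 2
print(mean_value(u, m, times, range(5), 60),   # close to sup u^2
      mean_value(u, m, times, range(5), -60))  # close to inf u^2
```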
2.1. Poincaré and Sobolev inequalities.
Proposition 2.2 (Weighted Poincaré inequality). There exists $C$ depending on $C_1$, $C_2$ and $\alpha$ such that for all $x_0 \in \Gamma$, $R \in \mathbb{N}^*$ and $f \in \mathbb{R}^{B(x_0,R)}$,
$$\sum_{x \in B(x_0,R)} m(x)\,\varphi^2(x)\,(f(x) - f_B)^2 \le C\, R^2 \sum_{x,y \in B(x_0,R)} \mu_{xy}\,\min\{\varphi^2(x), \varphi^2(y)\}\,(f(y) - f(x))^2,$$
where $\varphi(x) = 1 - d(x_0,x)/R$ and $f_B$ is such that the term on the left is minimal, that is,
$$f_B = \frac{\displaystyle\sum_{x \in B(x_0,R)} m(x)\,\varphi^2(x)\, f(x)}{\displaystyle\sum_{x \in B(x_0,R)} m(x)\,\varphi^2(x)}.$$
Fix a collection $\mathcal{F}$ of balls with the following tree structure: denote one ball $B_1$ (the root of the tree) and assume that there is a function $B \mapsto \overline{B}$ from $\mathcal{F} \setminus \{B_1\}$ to $\mathcal{F}$ (denote $\overline{B}^{(i)}$ its iterates) such that for all $B \in \mathcal{F}$, $\mathrm{rg}\,B = \inf\{k : \overline{B}^{(k-1)} = B_1\} < +\infty$. Denote $r(B)$ the radius of $B$, $\mathcal{A}[B] = \{\tilde B \in \mathcal{F} : \text{there exists } k \in \mathbb{N},\ \tilde B^{(k)} = B\}$ and $B^* = 1.001\,B$. For our discrete setting, we will need the following version of the Poincaré inequality, where $C_2'$ depends on $C_1$, $C_2$ and $\alpha$.
Proof. We refer to the proof in [32], based on [17]. Consider
$$\sum_{x \in B(x_0,r)} m(x)\,|f(x) - f_B|^2 \le C_2'\, r^2 \sum_{x,y \in B(x_0,1.001\,r)} \mu_{xy}\,(f(y) - f(x))^2$$
for all $f \in \mathbb{R}^\Gamma$, for all $x_0 \in \Gamma$, for all $r \in \mathbb{R}^+$. It is obtained by an easy covering argument. Again, there may be some problems for small $r$, but then $\Delta(\alpha)$ gives the inequality.
The following lemma will be applied to
$$\mu'_{xy} = \mu_{xy}\,\min\{\varphi^2(x), \varphi^2(y)\}.$$
The notations $m'$ or $f'_B$ should be understood with respect to $\mu'$.
Lemma 2.3. Assume there exists $C$ such that, for all $B \in \mathcal F$, there exists $c_B$ such that
(2.5)
$$\frac{c_B}{C}\, m(x) \le m'(x) \le C\, c_B\, m(x) \quad \text{for all } x \in B^*, \qquad \frac{c_B}{C}\, \mu_{xy} \le \mu'_{xy} \le C\, c_B\, \mu_{xy} \quad \text{for all } x,y \in B^*,$$
(2.6)
$$\#\{B \in \mathcal F : x \in B^*\} \le C \qquad \text{for all } x \in \Gamma,$$
(2.7)
$$V(B \cap \overline B) \ge \frac{\max\{V(B), V(\overline B)\}}{C} \qquad \text{for all } B \in \mathcal F \setminus \{B_1\}.$$
Then, for every function $f$,
$$\sum_{x \in \bigcup_{B\in\mathcal F} B} m'(x)\,(f(x)-f'_{B_1})^2 \le 4\,C_2'\,C^8\,\Big(\sup_{B\in\mathcal F} r^2(B)\sum_{\tilde B\in\mathcal A[B]}\mathrm{rg}\,\tilde B\,\frac{V(\tilde B)}{V(B)}\Big)\sum_{x,y\in\bigcup_{B\in\mathcal F} B^*}\mu'_{xy}\,(f(x)-f(y))^2.$$
Proof. In fact, condition (2.7) is needed for $\mu'$, but because of (2.5), (2.7) implies that
$$V'(B \cap \overline B) \ge \frac{\max\{V'(B), V'(\overline B)\}}{C^3} \qquad \text{for all } B \in \mathcal F \setminus \{B_1\},$$
where $V'$ denotes the volume with respect to $m'$. First note that
$$(f'_B - f'_{\overline B})^2 = \frac{1}{V'(B\cap\overline B)}\sum_{x\in B\cap\overline B} m'(x)\,\big((f'_B - f(x)) + (f(x) - f'_{\overline B})\big)^2 \le \frac{2}{V'(B\cap\overline B)}\Big(\sum_{x\in B} m'(x)\,(f(x)-f'_B)^2 + \sum_{x\in\overline B} m'(x)\,(f(x)-f'_{\overline B})^2\Big).$$
These terms $\sum_{x\in B} m'(x)\,(f(x)-f'_B)^2$ satisfy
$$\sum_{x\in B} m'(x)\,(f(x)-f'_B)^2 \le \sum_{x\in B} m'(x)\,(f(x)-f_B)^2 \le C\,c_B\sum_{x\in B} m(x)\,(f(x)-f_B)^2$$
$$\le C\,c_B\,C_2'\,r^2(B)\sum_{x,y\in B^*}\mu_{xy}\,(f(x)-f(y))^2 \le C^2\,C_2'\,r^2(B)\sum_{x,y\in B^*}\mu'_{xy}\,(f(x)-f(y))^2.$$
We can now prove the lemma. For $x \in B$, writing $f(x) - f'_{B_1}$ as a telescoping sum along the tree and using the Cauchy-Schwarz inequality,
$$\sum_{x\in\bigcup_{B\in\mathcal F} B} m'(x)\,(f(x)-f'_{B_1})^2 \le \sum_{B\in\mathcal F}\mathrm{rg}\,B\,\Big(\sum_{x\in B} m'(x)\,(f(x)-f'_B)^2 + V'(B)\sum_{i=1}^{\mathrm{rg}\,B-1}(f'_{\overline B^{(i-1)}} - f'_{\overline B^{(i)}})^2\Big)$$
$$\le \sum_{B\in\mathcal F}\Big(\sum_{\tilde B\in\mathcal A[B]} 2\,\mathrm{rg}\,\tilde B\,\frac{C^3\,V'(\tilde B)}{V'(B)}\Big)\sum_{x\in B} m'(x)\,(f(x)-f'_B)^2$$
$$\le 4\,C_2'\,C^5\sum_{B\in\mathcal F}\Big(\sum_{\tilde B\in\mathcal A[B]}\mathrm{rg}\,\tilde B\,\frac{V'(\tilde B)}{V'(B)}\Big)\,r^2(B)\sum_{x,y\in B^*}\mu'_{xy}\,(f(x)-f(y))^2$$
$$\le 4\,C_2'\,C^6\,\Big(\sup_{B\in\mathcal F} r^2(B)\sum_{\tilde B\in\mathcal A[B]}\mathrm{rg}\,\tilde B\,\frac{V'(\tilde B)}{V'(B)}\Big)\sum_{x,y\in\bigcup_{B\in\mathcal F} B^*}\mu'_{xy}\,(f(x)-f(y))^2,$$
using the bounded overlap (2.6) in the last step. To finish the proof, we replace $V'(\tilde B)$ and $V'(B)$ by $V(\tilde B)$ and $V(B)$, so that another factor $C^2$ appears.
End of proof of Proposition 2.2. We will construct $\mathcal F$ as a Whitney covering of $B(x_0, R-1)$ by selecting $W_n \subset \{x : d(x_0,x) = R - 2^n\}$ for $0 \le n \le N = [\log R/\log 2]$:
$$\mathcal F = \Big\{ B(x,r) : \text{there exists } n,\ x \in W_n \text{ and } r = \frac{2^n}{1.01} \Big\} \cup \{B_1\},$$
where $B_1 = B(x_0, R/1.01)$. For these balls, (2.5) is satisfied. The tree structure will be constructed in this way: if $B = B(x,r)$ with $x \in W_n$, we will choose $\overline B$ of center $\overline x \in W_{n+1}$ such that $d(x,\overline x) \le (3/2)\, 2^n$ (see the construction of the $W_n$'s below). Thus $B(x, (2/1.01 - 3/2)\, 2^n) \subset B \cap \overline B$, and condition (2.7) is satisfied.
We must check, in order to apply Lemma 2.3, that it was possible to select $W_n$ so that
$$\{x : R - 2^{n+1} \le d(x_0,x) \le R - 2^n\} \subset \bigcup_{\overline x \in W_{n+1}} B\Big(\overline x,\, \frac{3}{2}\, 2^n\Big),$$
while (2.6) is satisfied. It is a standard Besicovitch covering argument: we choose a minimal $W_{n+1}$ for this property. The key is that the radius of any ball $B^*$ such that $x \in B^*$ is comparable to $R - d(x_0,x)$.
Now let us consider the term
$$\sup_{B\in\mathcal F} r^2(B)\sum_{\tilde B\in\mathcal A[B]}\mathrm{rg}\,\tilde B\,\frac{V(\tilde B)}{V(B)}.$$
The first point is that $d(x,\overline x) \le (3/2)\, 2^n$ implies that $2\overline B \supset 2B$. For $B \in \mathcal F \setminus \{B_1\}$, set $n \le N$ such that $W_n$ contains $B$'s center. If we denote $\mathcal F_k = \{\tilde B : \tilde B^{(k)} = B\}$ and $A_k = \bigcup_{\tilde B \in \mathcal F_k} 2\tilde B$, then $A_{k+1} \subset A_k$. But there is more than this inclusion: a ball $\tilde B = B(\tilde x, \tilde r)$ in $\mathcal F_k$ is such that $d(x_0, \tilde x) = R - 2^{n-k}$ and $\tilde r = 2^{n-k}/1.01$, so that there is a ball of radius $\tilde r/100$ which is included in $2\tilde B$ and in the area $\{y : d(x_0,y) < R - 2^{n-1-k} - 2 \cdot 2^{n-1-k}/1.01\}$ never reached by $A_{k+1}$. This yields $V(A_k \setminus A_{k+1}) \ge \varepsilon\, V(A_k)$ and consequently $V(A_k) \le e^{-ck}\, V(A_0) \le C\, e^{-ck}\, V(B)$. Thus,
$$r^2(B)\sum_{\tilde B\in\mathcal A[B]}\mathrm{rg}\,\tilde B\,\frac{V(\tilde B)}{V(B)} \le 2^{2n}\sum_{k\ge 0} C\, e^{-ck}\,(N+1-n+k) \le C\, 2^{2N} \le C\, R^2.$$
For the case $B = B_1$, the proof is identical, but we refer to $A_1$ instead of $A_0$.
To finish the proof, let us compare $m(x)\,\varphi^2(x)$ and $m'(x)$. For $x \neq x_0$, the condition $\Delta(\alpha)$ gives $m(x)\,\varphi^2(x) \le m'(x)/\alpha$; we just have to consider $y \sim x$ such that $d(x_0,x) = d(x_0,y) + 1$.
Proposition 2.4 (Sobolev-Poincaré inequality). There exist $\nu > 1$ depending on $C_1$, and $S$ depending on $C_1$, $C_2$ and $\alpha$, such that for every function $f$ on a ball $B$ of radius $r$,
$$\Big(\frac{1}{V(B)}\sum_{x\in B} m(x)\, f^{2\nu}(x)\Big)^{1/\nu} \le \frac{S}{V(B)}\Big(r^2\sum_{x,y\in B}\mu_{xy}\,(f(y)-f(x))^2 + \sum_{x\in B} m(x)\, f^2(x)\Big).$$
In the setting of manifolds, this result, which was proven in [29], was the key of the proof of the Harnack inequality after the work [28]. It is adapted to graphs in [9]. A nice abstract version can also be found in [14]. In the notation of that paper, the chain condition will be satisfied for $\lambda < 2$ (as in the preceding section, where $\lambda = 1.001$). Indeed, consider $x$ next to the boundary of $B$: the smallest ball $B_i$ of the chain not centered at $x$ must contain $x$ and satisfy $\lambda B_i \subset B$.
2.2. Elementary step of Moser's iterative technique.
As in Section 1.3, we will say that $u$ is a positive sub/supersolution on $Q = I \times B(x_0,r)$ if it is the trace of a positive function on $I \times B(x_0,r+1)$ which is a sub/supersolution everywhere on $Q$. Precisely, we say that $u$ is a positive subsolution on $Q$ if it is positive and
$$m(x)\,\frac{\partial}{\partial t} u(t,x) \le \sum_y \mu_{xy}\,\big(u(t,y) - u(t,x)\big), \qquad \text{for all } t \in I,\ \text{for all } x \in B(x_0,r-1).$$
And $u$ is a positive supersolution on $Q$ if it is positive,
$$m(x)\,\frac{\partial}{\partial t} u(t,x) \ge \sum_y \mu_{xy}\,\big(u(t,y) - u(t,x)\big), \qquad \text{for all } t \in I,\ \text{for all } x \in B(x_0,r-1),$$
and
$$m(x)\Big(\frac{\partial}{\partial t} u(t,x) + u(t,x)\Big) \ge \sum_{y \in B(x_0,r)} \mu_{xy}\, u(t,y), \qquad \text{for all } t \in I,\ \text{for all } x \text{ such that } d(x_0,x) = [r].$$
Let us show the elementary step of Moser's iterative technique. If $Q = I \times B$, where $I = [s_1,s_2]$ and $B = B(x,r)$, write
$$B_\delta = (1-\delta)B = B(x,(1-\delta)r), \qquad I_\delta = [(1-2\delta)s_1 + 2\delta s_2,\ s_2],$$
$$I'_\delta = [s_1,\ 2\delta s_1 + (1-2\delta)s_2], \qquad I''_\delta = [(1-2\delta)s_1 + 2\delta s_2,\ 2\delta s_1 + (1-2\delta)s_2],$$
$$Q_\delta = I_\delta \times B_\delta, \qquad Q'_\delta = I'_\delta \times B_\delta, \qquad Q''_\delta = I''_\delta \times B_\delta.$$
Note that
(2.8)
$$Q_{\delta_1+\delta_2} \subset (Q_{\delta_1})_{\delta_2}, \qquad Q'_{\delta_1+\delta_2} \subset (Q'_{\delta_1})'_{\delta_2}, \qquad Q''_{\delta_1+\delta_2} \subset (Q''_{\delta_1})''_{\delta_2}.$$
Lemma 2.5. There is an exponent $\kappa = 2 - 1/\nu$ and a constant $A = A(C_1, S) = A(C_1, C_2, \alpha)$ such that if $B = B(x,r)$, $Q = [0, r^2] \times B$, $u$ is a positive subsolution on $Q$ and $1/r \le \delta \le 1/8$, then
$$M(u, \kappa, Q_\delta) \le \Big(\frac{A}{\delta^2}\Big)^{1/\kappa}\, M(u, 1, Q).$$
If $u$ is a supersolution with the same assumptions, then
$$M(u, \kappa, Q'_\delta) \le \Big(\frac{A}{\delta^2}\Big)^{1/\kappa}\, M(u, 1, Q).$$
Proof. Consider the first part: $u$ is a subsolution. Let $\chi$ be a non-negative function on $B$, with $d(x_0,x) = [r]$ implying $\chi(x) = 0$; then
$$\sum_{x\in B} m(x)\,\chi^2(x)\, u(t,x)\,\frac{\partial}{\partial t} u(t,x) \le \sum_{x,y\in B}\mu_{xy}\,\chi^2(x)\, u(t,x)\,\big(u(t,y)-u(t,x)\big)$$
$$= \frac{1}{2}\sum_{x,y\in B}\mu_{xy}\,\big(\chi^2(x)\,u(t,x) - \chi^2(y)\,u(t,y)\big)\,\big(u(t,y)-u(t,x)\big)$$
(2.9)
$$= -\frac{1}{2}\sum_{x,y\in B}\mu_{xy}\,\chi^2(x)\,\big(u(t,y)-u(t,x)\big)^2 + \frac{1}{2}\sum_{x,y\in B}\mu_{xy}\,\big(\chi^2(x)-\chi^2(y)\big)\,u(t,y)\,\big(u(t,y)-u(t,x)\big).$$
In the last term, we use the inequality $ab \le a^2/4 + b^2$:
$$\big(\chi^2(x)-\chi^2(y)\big)\,u(t,y)\,\big(u(t,y)-u(t,x)\big)$$
$$= \chi(x)\,\big(\chi(x)-\chi(y)\big)\,u(t,y)\,\big(u(t,y)-u(t,x)\big) + \chi(y)\,\big(\chi(x)-\chi(y)\big)\,u(t,y)\,\big(u(t,y)-u(t,x)\big)$$
$$\le \frac{1}{4}\,\big(\chi^2(x)+\chi^2(y)\big)\,\big(u(t,y)-u(t,x)\big)^2 + 2\,u^2(t,y)\,\big(\chi(x)-\chi(y)\big)^2.$$
Note that, because of the symmetry of the weights $\mu_{xy}$,
$$\sum_{x,y\in B}\mu_{xy}\,\chi^2(y)\,\big(u(t,y)-u(t,x)\big)^2 = \sum_{x,y\in B}\mu_{xy}\,\chi^2(x)\,\big(u(t,y)-u(t,x)\big)^2.$$
Thus, (2.9) yields
(2.10)
$$\sum_{x\in B} m(x)\,\chi^2(x)\,u(t,x)\,\frac{\partial}{\partial t} u(t,x) + \frac{1}{4}\sum_{x,y\in B}\mu_{xy}\,\chi^2(x)\,\big(u(t,y)-u(t,x)\big)^2 \le \sum_{x,y\in B}\mu_{xy}\,u^2(t,y)\,\big(\chi(x)-\chi(y)\big)^2.$$
For $u$ a supersolution, the result would be
$$-\sum_{x\in B} m(x)\,\chi^2(x)\,u(t,x)\,\frac{\partial}{\partial t} u(t,x) + \frac{1}{4}\sum_{x,y\in B}\mu_{xy}\,\chi^2(x)\,\big(u(t,y)-u(t,x)\big)^2 \le \sum_{x,y\in B}\mu_{xy}\,u^2(t,y)\,\big(\chi(x)-\chi(y)\big)^2,$$
and then the same arguments work, dealing with $I'_\delta$ instead of $I_\delta$.
Return now to (2.10). If $\psi$ is a smooth function of $t$, we obtain
$$\frac{\partial}{\partial t}\sum_{x\in B} m(x)\,\big(\psi(t)\,\chi(x)\,u(t,x)\big)^2 + \frac{\psi^2(t)}{2}\sum_{x,y\in B}\mu_{xy}\,\chi^2(x)\,\big(u(t,y)-u(t,x)\big)^2$$
$$\le 2\,\psi^2(t)\sum_{x,y\in B}\mu_{xy}\,u^2(t,y)\,\big(\chi(x)-\chi(y)\big)^2 + \sum_{x\in B} m(x)\,\frac{\partial \psi^2}{\partial t}(t)\,u^2(t,x).$$
Now we choose $\psi(t) = \frac{t}{(\delta r)^2} \wedge 1$, and $\chi$ so that $d(x_0,x) = [r]$ implies $\chi(x) = 0$ and $\chi \equiv 1$ on $B_\delta$. For this purpose, we took $\delta \ge 1/r$.
Integrating over $I_\delta$ yields
$$\sup_{t\in I_\delta}\sum_{x\in B_\delta} m(x)\,u^2(t,x) \le \frac{10}{(\delta r)^2}\int_I\sum_{x\in B} m(x)\,u^2(t,x)\,dt$$
and
$$\frac{1}{2}\int_{I_\delta}\sum_{x,y\in B_\delta}\mu_{xy}\,\big(u(t,y)-u(t,x)\big)^2\,dt \le \frac{10}{(\delta r)^2}\int_I\sum_{x\in B} m(x)\,u^2(t,x)\,dt.$$
We have used $|\psi'| \le 1/(\delta r)^2$ and $|\chi(x)-\chi(y)| \le 2/(\delta r)$ when $x \sim y$.
This result (of Caccioppoli type) allows us to use Proposition 2.4 (Sobolev). Let $\nu'$ be such that $1/\nu + 1/\nu' = 1$, so that $\kappa = 1 + 1/\nu'$. Then
$$M(u,\kappa,Q_\delta)^\kappa = \frac{1}{V(B_\delta)\,r^2(1-2\delta)}\int_{I_\delta}\sum_{x\in B_\delta} m(x)\,u^{2\kappa}(t,x)\,dt$$
$$\le \frac{1}{r^2(1-2\delta)}\Big(\sup_{t\in I_\delta}\frac{1}{V(B_\delta)}\sum_{x\in B_\delta} m(x)\,u^2(t,x)\Big)^{1/\nu'}\int_{I_\delta}\Big(\frac{1}{V(B_\delta)}\sum_{x\in B_\delta} m(x)\,u^{2\nu}(t,x)\Big)^{1/\nu}dt$$
$$\le \frac{1}{r^2(1-2\delta)}\Big(\frac{10}{(\delta r)^2\,V(B_\delta)}\int_I\sum_{x\in B} m(x)\,u^2\,dt\Big)^{1/\nu'}\,\frac{S}{V(B_\delta)}\int_{I_\delta}\Big(r^2\sum_{x,y\in B_\delta}\mu_{xy}\,\big(u(t,y)-u(t,x)\big)^2 + \sum_{x\in B_\delta} m(x)\,u^2(t,x)\Big)dt$$
$$\le \frac{A}{\delta^2}\,\Big(\frac{1}{r^2\,V(B)}\int_I\sum_{x\in B} m(x)\,u^2(t,x)\,dt\Big)^{1+1/\nu'} = \frac{A}{\delta^2}\,M(u,1,Q)^\kappa$$
for a constant $A$, because $1-2\delta \ge 3/4$ and $1-\delta \ge 1/2$, so that $V(B) \le C_1\,V(B_\delta)$.
2.3. Mean value inequalities.
Lemma 2.6. If $u$ is a positive solution on $I \times B$, then
$u^p$ is a subsolution on $I \times B$ for $p \le 0$ and $p \ge 1$;
$u^p$ is a supersolution on $I \times B$ for $0 \le p \le 1$.
Proof. Let $f(x) = x^p$. If $p \le 0$ or $p \ge 1$, $f$ is convex and
$$f'(a)\,(b-a) \le f(b) - f(a).$$
This yields
$$m(x)\,\frac{\partial}{\partial t} f(u(t,x)) = m(x)\, f'(u(t,x))\,\frac{\partial}{\partial t} u(t,x) = \sum_y \mu_{xy}\, f'(u(t,x))\,\big(u(t,y)-u(t,x)\big) \le \sum_y \mu_{xy}\,\big(f(u(t,y)) - f(u(t,x))\big).$$
Lemma 2.7. Let $B$ be a ball of radius $r$, $Q = [0, r^2] \times B$, $u$ a positive solution on $Q$ and $0 < 2\eta \le 1/2$. Then, for all $p > 0$,
(2.11)
$$M(u, -p, Q) \le (C\,\eta^{-\gamma})^{1/p}\,\inf_{Q_\eta} u^2,$$
(2.12)
$$\sup_{Q''_\eta} u^2 \le (C\,\eta^{-\gamma})^{1/p}\,M(u, p, Q),$$
where $C$ and $\gamma$ depend only on $C_1$, $C_2$ and $\alpha$.
Proof. We will prove (2.11). Consider first the case $\eta r < 2$ (the difference between $B$'s radius and $B_\eta$'s is less than 2; this includes the cases $B_\eta = B$, when one cannot apply the elementary step, Lemma 2.5). Take $u^2(t^*,z) = \inf_{Q_\eta} u^2$ and note that, for all $0 \le \tau \le (\eta r)^2$, $t^* - \tau \in I$ and $u^2(t^*-\tau, z) \le e^{2\tau}\, u^2(t^*,z) \le e^{8}\,\inf_{Q_\eta} u^2$. This is a consequence of (1.4). Counting only these values,
$$M(u,-p,Q)^{-p} \ge \frac{m(z)\,(\eta r)^2}{r^2\, V(B)}\,\Big(e^{8}\,\inf_{Q_\eta} u^2\Big)^{-p}$$
implies
$$M(u,-p,Q) \le e^{8}\,\Big(\frac{r^2\,V(B)}{V(z,1/2)\,(\eta r)^2}\Big)^{1/p}\,\inf_{Q_\eta} u^2.$$
Applying first $DV(C_1)$ between $B \subset B(z,2r)$ and $B(z,1/2)$, then $r < 2\eta^{-1}$, we obtain (2.11).
Consider now $\eta r \ge 2$. Set $\delta_i = \eta\, 2^{-i}$, $Q(0) = Q$ and $Q(i) = Q(i-1)_{\delta_i}$, so that for all $i$, $Q_\eta \subset Q(i)$. Fix $n$ the integer such that $2^{n+1} \le \eta r < 2^{n+2}$. We can apply Lemma 2.5 between $Q(i-1)$ and $Q(i)$ for $i \le n$, since $u^{-q}$ is a subsolution, the radius of the cylinder $Q(i-1)$ is bigger than $r/2$ and $\delta_i \ge 2/r$:
$$M(u,-q\kappa,Q(i)) \ge \Big(\frac{A}{\delta_i^2}\Big)^{-1/(\kappa q)}\,M(u,-q,Q(i-1))$$
implies
$$M(u,-q,Q(i-1)) \le \Big(\frac{A}{\delta_i^2}\Big)^{1/(\kappa q)}\,M(u,-q\kappa,Q(i)).$$
This yields, with $q = p\,\kappa^{i-1}$,
$$M(u,-p,Q) \le \prod_{i=1}^{n}\Big(\frac{A}{(\eta\,2^{-i})^2}\Big)^{1/(\kappa^i p)}\;M(u,-p\kappa^n,Q(n)).$$
To obtain (2.11), we may first check that
$$\prod_{i=1}^{+\infty}\Big(\frac{A}{(\eta\,2^{-i})^2}\Big)^{1/\kappa^i} \le C\,\eta^{-\gamma}.$$
Then we estimate $M(u,-p\kappa^n,Q(n))$ as in the case $\eta r < 2$. Take $u^2(t^*,z) = \inf_{Q(n+1)} u^2$ and note that, for all $0 \le \tau \le (\delta_{n+1}\,r/2)^2$, $t^* - \tau \in I(n)$ and $u^2(t^*-\tau,z) \le e^{2\tau}\,u^2(t^*,z) \le e^{2}\,\inf_{Q(n+1)} u^2$. We use $1 \le \delta_{n+1}\,r < 2$. This yields
$$M(u,-p\kappa^n,Q(n)) \le e^{2}\,\Big(\frac{r^2\,V(B)}{m(z)\,\frac14}\Big)^{1/(p\kappa^n)}\,\inf_{Q_\eta} u^2.$$
Since $n > \log(\eta r/4)/\log 2$, we may estimate
$$\Big(\frac{r^2\,V(B)}{m(z)\,\frac14}\Big)^{1/\kappa^n} \le e^{(4/(\eta r))^c\,\log(C\,r^C)}.$$
Again, $C$ is a constant which depends on $C_1$ and $C_2$. To obtain $C'\,\eta^{-\gamma}$ as in (2.11), we must check that
$$\Big(\frac{4}{\eta r}\Big)^c\,\log(C\,r^C) \le \log C' - \gamma\,\log\eta$$
for $\eta r \ge 2$. This may be done in this way: either $\eta \le r^{-1/2}$, and it suffices to note that $4/(\eta r) \le 4/2$ and use the term $-\gamma\,\log\eta$; or $\eta > r^{-1/2}$, and we use
$$\Big(\frac{4}{r^{1/2}}\Big)^c\,\log(C\,r^C) \le C'.$$
The proof of (2.12) is identical, except that $u^q$ may only be a supersolution; that is why we take $Q''_\eta$ instead of $Q_\eta$. We also use $u(t^*+\tau, z) \ge e^{-\tau}\, u(t^*,z)$; that is another reason to cut $I$ at the highest values. In fact, having in mind the all-continuous result ([30, Corollary 3, p. 447]), we could keep $Q_\eta$. First, we should use a covering argument ([30, p. 448]) to avoid the use of Lemma 2.5 on $u^q$ for $q < 1$. Then, instead of picking up the sup from the values $u(t^*+\tau, z)$, we could get it from the $u(t^*-\tau, z')$ where $z' \sim z$. But this is only possible when $r \ge 1$. Taking $Q''_\eta$ is somehow artificial, but it has the great technical advantage that, at this point of the proof, we have no more conditions like $r \ge 1$ in Lemma 2.5 which compel us to treat separately the cases when it is no longer possible to cut the space.
2.4. About log u, linking negative and positive exponents.

Let us define the measure $\nu$ on $\mathbb R\times\Gamma$ as the product of the Lebesgue measure (in time) and the measure $m$ (in space). The next lemma states that the values of $\log u$ are glued to their (space) mean value at a fixed time: somewhat like functions with bounded BMO norm, they cannot be much bigger on a large part before that time, nor much lower after it. J. Moser's improvement in [23] is that this property and the (time and space) mean value inequalities are sufficient to link the extrema to the (space) mean value of $\log u$ at a fixed time between $Q_\ominus$ and $Q_\oplus$, and thus to link the extrema together. This last idea is the meaning of the abstract Lemma 2.9.
Lemma 2.8. Let $\eta\in\,]0,1[$, $B=B(x,r)$ and $u$ any positive supersolution on $Q=[s,s+r^2]\times B$; there is a constant $m(u,\eta)$ such that for all $\lambda>0$,
$$ \nu\big(\{(t,z)\in K_\oplus:\ \log u(t,z)<m-\lambda\}\big)\ \le\ \frac C\lambda\,\nu(Q) $$
and
$$ \nu\big(\{(t,z)\in K_\ominus:\ \log u(t,z)>m+\lambda\}\big)\ \le\ \frac C\lambda\,\nu(Q)\,, $$
where $K_\oplus=[s+\eta\,r^2,\,s+r^2]\times B$, $K_\ominus=[s,\,s+\eta\,r^2]\times B$, and $C$ depends only on $\eta$, $\alpha$, $C_1$ and $C_2$.
Proof. Let
$$ \psi(z)=1-\frac{d(x,z)}{[r]+1} $$
($[r]$ denotes the integer part of $r$) and $\bar m(x)=\sum_{y\notin B}\mu_{xy}$, so that for all $x\in B$,
$$ m(x)\,\frac{\partial u}{\partial t}(t,x)\ \ge\ \sum_{y\in B}\mu_{xy}\,\big(u(t,y)-u(t,x)\big)-\bar m(x)\,u(t,x)\,. $$
This gives
(2.13)
$$ \frac{\partial}{\partial t}\sum_{x\in B}m(x)\,\psi^2(x)\,(-\log u(t,x))\ =\ -\sum_{x\in B}\psi^2(x)\,\frac{m(x)\,\frac{\partial}{\partial t}u(t,x)}{u(t,x)} $$
$$ \le\ -\sum_{x,y\in B}\mu_{xy}\,\frac{\psi^2(x)}{u(t,x)}\,\big(u(t,y)-u(t,x)\big)+\sum_{x\in B}\psi^2(x)\,\bar m(x) $$
$$ =\ \frac12\sum_{x,y\in B}\mu_{xy}\,\Big(\frac{\psi^2(y)}{u(t,y)}-\frac{\psi^2(x)}{u(t,x)}\Big)\big(u(t,y)-u(t,x)\big)+\sum_{x\in B}\psi^2(x)\,\bar m(x)\,. $$
Now we show that
(2.14)
$$ \Big(\frac{\psi^2(y)}{u(t,y)}-\frac{\psi^2(x)}{u(t,x)}\Big)\big(u(t,y)-u(t,x)\big)\ \le\ 36\,\big(\psi(y)-\psi(x)\big)^2-\frac12\,\min\{\psi^2(x),\psi^2(y)\}\,\frac{\big(u(t,y)-u(t,x)\big)^2}{u(t,x)\,u(t,y)}\,. $$
We may assume $u(t,x)\ge u(t,y)$ for that purpose. Either
$$ \psi^2(x)\ \le\ \frac{\psi^2(y)}2\,\Big(1+\frac{u(t,x)}{u(t,y)}\Big)\,, $$
then
$$ \Big(\frac{\psi^2(y)}{u(t,y)}-\frac{\psi^2(x)}{u(t,x)}\Big)\big(u(t,y)-u(t,x)\big)\ \le\ \Big(\frac{\psi^2(y)}{u(t,y)}-\frac{\psi^2(y)}{2\,u(t,x)}\Big(1+\frac{u(t,x)}{u(t,y)}\Big)\Big)\big(u(t,y)-u(t,x)\big)\ =\ -\frac{\psi^2(y)}2\,\frac{\big(u(t,y)-u(t,x)\big)^2}{u(t,x)\,u(t,y)} $$
and there is no need to use the other non-negative term $36\,(\psi(y)-\psi(x))^2$.

Or
$$ \psi^2(x)\ \ge\ \frac{\psi^2(y)}2\,\Big(1+\frac{u(t,x)}{u(t,y)}\Big)\,. $$
First we estimate $u(t,y)-u(t,x)$ with $\psi(y)-\psi(x)$:
$$ \frac{u(t,x)-u(t,y)}{u(t,y)}\ =\ \frac{u(t,x)}{u(t,y)}-1\ \le\ \frac{2\,\psi^2(x)-2\,\psi^2(y)}{\psi^2(y)}\ =\ 2\,\frac{\psi(x)+\psi(y)}{\psi^2(y)}\,\big(\psi(x)-\psi(y)\big)\,. $$
Thus,
$$ \psi^2(y)\,\frac{\big(u(t,x)-u(t,y)\big)^2}{u(t,x)\,u(t,y)}\ \le\ \psi^2(y)\,\frac{\big(u(t,x)-u(t,y)\big)^2}{u^2(t,y)}\ \le\ \psi^2(y)\,\Big(2\,\frac{\psi(x)+\psi(y)}{\psi^2(y)}\,\big(\psi(x)-\psi(y)\big)\Big)^2\ \le\ 36\,\big(\psi(x)-\psi(y)\big)^2\,. $$
The last inequality holds because the function $\psi$ is such that $\psi(y)\le\psi(x)\le2\,\psi(y)$ when $x\sim y$. We also obtain
$$ \big(\psi^2(y)-\psi^2(x)\big)\,\frac{u(t,y)-u(t,x)}{u(t,x)}\ \le\ \big(\psi^2(y)-\psi^2(x)\big)\,\frac{u(t,y)-u(t,x)}{u(t,y)}\ =\ \big(\psi(x)+\psi(y)\big)\,\frac{u(t,x)-u(t,y)}{u(t,y)}\,\big(\psi(x)-\psi(y)\big) $$
$$ \le\ 2\,\frac{\big(\psi(x)+\psi(y)\big)^2}{\psi^2(y)}\,\big(\psi(x)-\psi(y)\big)^2\ \le\ 18\,\big(\psi(x)-\psi(y)\big)^2\,. $$
Inequality (2.14) is proven because $18\,(\psi(x)-\psi(y))^2$ controls the two other terms.
We can now change (2.13) using the inequality (2.14):
$$ \frac{\partial}{\partial t}\sum_{x\in B}m(x)\,\psi^2(x)\,(-\log u(t,x))+\frac14\sum_{x,y\in B}\mu_{xy}\,\min\{\psi^2(x),\psi^2(y)\}\,\frac{\big(u(t,y)-u(t,x)\big)^2}{u(t,x)\,u(t,y)}\ \le\ C\sum_{x,y\in B}\mu_{xy}\,\big(\psi(y)-\psi(x)\big)^2+\sum_{x\in B}\psi^2(x)\,\bar m(x)\,. $$
Since
$$ \big(\log u(t,y)-\log u(t,x)\big)^2\ \le\ \frac{\big(u(t,y)-u(t,x)\big)^2}{u(t,x)\,u(t,y)} $$
(just check $(\log a)^2\le(a-1)^2/a$ by differentiating two times), $x\sim y$ implies $|\psi(y)-\psi(x)|\le1/r$, and $\bar m(x)\ne0$ implies $\psi(x)\le1/r$, this yields
$$ \frac{\partial}{\partial t}\sum_{x\in B}m(x)\,\psi^2(x)\,(-\log u(t,x))+\frac14\sum_{x,y\in B}\mu_{xy}\,\min\{\psi^2(x),\psi^2(y)\}\,\big(\log u(t,y)-\log u(t,x)\big)^2\ \le\ C\,\frac{V(B)}{r^2}\,. $$
Now use the weighted Poincaré inequality of Proposition 2.2 to estimate
$$ \sum_{x\in B}m(x)\,\psi^2(x)\,\big(-\log u(t,x)-W(t)\big)^2\ \le\ C\,r^2\sum_{x,y\in B}\mu_{xy}\,\min\{\psi^2(x),\psi^2(y)\}\,\big(\log u(t,y)-\log u(t,x)\big)^2\,, $$
where
$$ W(t)=\frac{\displaystyle\sum_{x\in B}m(x)\,\psi^2(x)\,(-\log u(t,x))}{\displaystyle\sum_{x\in B}m(x)\,\psi^2(x)}\,. $$
Use also
$$ \sum_{x\in B}m(x)\,\psi^2(x)\ \ge\ \frac14\sum_{x\in B(x,r/2)}m(x)\ \ge\ C'\,V(B)\,, $$
since $x\in B(x,r/2)$ implies $\psi(x)\ge1/2$. This way, we obtain two constants $c$ and $C$ depending only on $\alpha$, $C_1$ and $C_2$ such that
$$ \frac{\partial}{\partial t}W(t)+\frac{c}{\nu(Q)}\sum_{x\in B}m(x)\,\big(-\log u(t,x)-W(t)\big)^2\ \le\ C\,r^{-2}\,. $$
Setting $m=-W(s+\eta\,r^2)$, this yields the result (for precisions, follow literally the argument in [30, p. 452]).
Lemma 2.9. Let $U_\sigma$, for $0\le\sigma\le\bar\sigma\le1/2$, be subsets of a space with a measure $\nu$ such that $\sigma\le\sigma'$ implies $U_{\sigma'}\subset U_\sigma$ and $\nu(U_\sigma)\le C\,\nu(U_{\bar\sigma})$, and let $f$ be a positive measurable function on $U_0$ which satisfies
(2.15)
$$ \sup_{U_{\sigma'}}f\ \le\ \big(2\,C\,(\sigma'-\sigma)^{-\beta}\big)^{1/p}\,M(f,p,U_\sigma) $$
for all $0\le\sigma<\sigma'\le\bar\sigma$ and $p>0$, and
$$ \nu(\{\log f>\lambda\})\ \le\ \frac C\lambda\,\nu(U_0) $$
for all $\lambda>0$. Then
$$ \sup_{U_{\bar\sigma}}f\ \le\ A\,, $$
where $A$ depends only on $\bar\sigma$, $\beta$ and $C$.
Proof. Set $\varphi(\sigma)=\log\big(\sup_{U_\sigma}f^2\big)$. Dividing $U_\sigma$ into the two sets $\log(f^2)\le\varphi(\sigma)/2$ and $\log(f^2)>\varphi(\sigma)/2$ yields
$$ M(f,p,U_\sigma)^{2p}\ \le\ e^{p\,\varphi(\sigma)/2}+\frac{4\,C^2}{\varphi(\sigma)}\,e^{p\,\varphi(\sigma)}\ \le\ 2\,e^{p\,\varphi(\sigma)/2} $$
if we choose
$$ p=\frac2{\varphi(\sigma)}\,\log\frac{\varphi(\sigma)}{4\,C^2}\,, $$
so that the two terms are equal. Then we apply (2.15),
$$ \varphi(\sigma')\ \le\ \frac2p\,\log\big(2\,C\,(\sigma'-\sigma)^{-\beta}\big)+\frac{\log2}p+\frac{\varphi(\sigma)}2\ \le\ \frac{\varphi(\sigma)}2+\frac{\varphi(\sigma)}{\log\big(\varphi(\sigma)/(4\,C^2)\big)}\,\Big(\log\big(2\,C\,(\sigma'-\sigma)^{-\beta}\big)+1\Big)\,. $$
If
$$ \varphi(\sigma)\ \ge\ 4\,C^2\,\big(2\,C\,(\sigma'-\sigma)^{-\beta}\big)^{2} $$
and $\log C\le\varphi(\sigma)/8$, then $\varphi(\sigma')\le\frac78\,\varphi(\sigma)$. Thus, we always have
$$ \varphi(\sigma')\ \le\ \frac78\,\varphi(\sigma)+C'\,(\sigma'-\sigma)^{-2\beta}\,. $$
Take a positive decreasing sequence $\bar\sigma=\sigma_0>\sigma_1>\cdots>\sigma_i>\sigma_{i+1}>\cdots$; then
$$ \varphi(\bar\sigma)\ \le\ \sum_{i=0}^{+\infty}\Big(\frac78\Big)^i\,C'\,\big(\sigma_i-\sigma_{i+1}\big)^{-2\beta}\ \le\ \text{constant}\ \big(=\log(A^2)\big) $$
if we set $\sigma_i=\bar\sigma/(1+i)$.
2.5. Proof of Theorem 2.1.

Recall the notations of Definition 1.5 and Lemma 2.8; we set $\eta=1/2$, $\theta=3/4$, $\eta_1=1/6$, $\eta_2=1/3$, $\delta=1/2$, $\eta_3=3/4$ and $\eta_4=1$. Lemma 2.8 gives a reference value $m$ so that one can apply Lemma 2.9 to $f=e^{-m}\,u$ on $U_0=[s,\,s+r^2]\times B$, with $U_\sigma=(U_0)''_\sigma$, for $\bar\sigma=1/4$. This way, $Q_\ominus\subset U_{\bar\sigma}$ and (2.15) is satisfied because of Lemma 2.7 and (2.8). This yields $\sup_{Q_\ominus}(e^{-m}\,u)\le A$. Applying again Lemma 2.9 to $f=e^{m}\,u^{-1}$ on $U_0=[s+\eta_3\,r^2,\,s+\eta_4\,r^2]\times B$, with $U_\sigma=(U_0)_\sigma$, yields $\sup_{Q_\oplus}(e^{m}\,u^{-1})\le A$, hence $\inf_{Q_\oplus}u\ge e^{m}/A$, and the Harnack inequality follows with $C_H=A^2$.
3. Kernel estimates, discrete-time Harnack inequality and necessity of the Poincaré inequality.
3.1. Continuous-time estimates.

First, we give on-diagonal estimates. The regularity coming from the Harnack inequality shows that if one starts at $x$, one diffuses after a time $t$ on the ball $B(x,\sqrt t)$. This is well known since the papers of D. G. Aronson [1] or of P. Li and S. T. Yau [19].
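The on-diagonal behaviour $p_n(x,x)\simeq m(x)/V(x,\sqrt n)$ can be observed concretely. The following sketch (an illustration added here, not part of the paper) iterates the lazy simple random walk on $\mathbb Z^2$, where $V(x,r)\simeq r^2$, and checks that $n\,p_n(0,0)$ is roughly constant; the grid size and the laziness parameter $1/2$ are choices of this example.

```python
import numpy as np

# Lazy simple random walk on Z^2: stay with probability 1/2, otherwise
# jump to one of the 4 neighbours (probability 1/8 each).  On-diagonal
# decay p_n(0,0) ~ c/n matches m(0)/V(0, sqrt(n)) since V(0, r) ~ r^2.
def p_n_at_origin(n):
    size = 2 * n + 1                 # large enough: mass cannot leave the grid
    u = np.zeros((size, size))
    u[n, n] = 1.0                    # Dirac mass at the origin
    for _ in range(n):
        v = 0.5 * u                  # laziness
        v[1:, :] += 0.125 * u[:-1, :]
        v[:-1, :] += 0.125 * u[1:, :]
        v[:, 1:] += 0.125 * u[:, :-1]
        v[:, :-1] += 0.125 * u[:, 1:]
        u = v
    return u[n, n]

for n in [10, 20, 40, 80]:
    print(n, n * p_n_at_origin(n))   # roughly constant in n
```

The local central limit theorem predicts $n\,p_n(0,0)\to2/\pi$ here, consistent with the on-diagonal estimate of Proposition 3.1.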
Proposition 3.1 (On-diagonal estimates). Assume $(\Gamma,\mu)$ satisfies $H(C_H)$; then
$$ P_t(x,y)\ \le\ C_H\,\frac{m(y)}{V(x,\sqrt t)} $$
for all $x$, $y$, $t$, and
$$ d(x,y)^2\le t\ \text{ implies }\ P_t(x,y)\ \ge\ C_H^{-2}\,\frac{m(y)}{V(x,\sqrt t)}\,. $$
Proof. Applying the Harnack inequality to $P_\cdot(\cdot,y)$ yields $P_t(x,y)\le C_H\,P_{2t}(z,y)$ for $z\in B(x,\sqrt t)$. Thus,
$$ P_t(x,y)\ \le\ \frac{C_H}{V(x,\sqrt t)}\sum_{z\in B(x,\sqrt t)}m(z)\,P_{2t}(z,y)\ =\ C_H\,\frac{m(y)}{V(x,\sqrt t)}\sum_{z\in B(x,\sqrt t)}P_{2t}(y,z)\ \le\ C_H\,\frac{m(y)}{V(x,\sqrt t)}\,. $$
For the lower bound, we will use similarly $P_{t/2}(z,y)\le C_H\,P_t(x,y)$ for $z\in B(x,\sqrt t)$. But first, we define a function $u(\cdot,\cdot)$, solution of the parabolic equation in $[0,t]\times B(x,\sqrt t)$, this way:
$$ u(\tau,\cdot)=1\quad\text{for all }\tau\in\Big[0,\frac t2\Big]\,,\qquad u(\tau,\cdot)=\sum_{z\in B(x,\sqrt t)}P_{\tau-t/2}(\cdot,z)\quad\text{for all }\tau\in\Big[\frac t2,t\Big]\,. $$
Applied to $u$, the Harnack inequality yields
$$ C_H^{-1}=C_H^{-1}\,u\Big(\frac t2,x\Big)\ \le\ u(t,y)=\sum_{z\in B(x,\sqrt t)}P_{t/2}(y,z)=\sum_{z\in B(x,\sqrt t)}\frac{m(z)}{m(y)}\,P_{t/2}(z,y)\ \le\ \sum_{z\in B(x,\sqrt t)}C_H\,\frac{m(z)}{m(y)}\,P_t(x,y)\ =\ C_H\,\frac{V(x,\sqrt t)}{m(y)}\,P_t(x,y)\,. $$
These on-diagonal estimates yield the volume regularity.
Proposition 3.2. Assume $(\Gamma,\mu)$ satisfies $H(C_H)$. Then $DV(C_H^4)$ is true.

Proof.
$$ C_H^{-2}\,\frac{m(x)}{V(x,r)}\ \le\ P_{r^2}(x,x)\ \le\ C_H\,P_{4r^2}(x,x)\ \le\ C_H^{2}\,\frac{m(x)}{V(x,2\,r)}\,. $$
Now we prove an off-diagonal upper bound which is more precise for $x$ and $y$ far apart. We still use the parabolic Harnack inequality, as in Proposition 3.1, to estimate one term by a mean value; a second tool is the integrated maximum principle (see [13]).
Theorem 3.3 (Integrated maximum principle). If $u$ is a solution on $I\times\Gamma$ and $K(t,x)$ is a positive function, decreasing in $t$, such that for all $t\in I$ and $x\sim y$,
(3.16)
$$ \big(K(t,x)+K(t,y)\big)^2\ \le\ \Big(\frac{\partial K}{\partial t}(t,x)-2\,K(t,x)\Big)\Big(\frac{\partial K}{\partial t}(t,y)-2\,K(t,y)\Big)\,, $$
then the quantity
$$ I(t)=\sum_{x\in\Gamma}m(x)\,u^2(t,x)\,K(t,x) $$
is decreasing in $t\in I$.
Proof.
$$ I'(t)=\sum_{x\in\Gamma}m(x)\,u^2(t,x)\,\frac{\partial K}{\partial t}(t,x)+\sum_{x,y\in\Gamma}2\,\mu_{xy}\,\big(u(t,y)-u(t,x)\big)\,u(t,x)\,K(t,x)\,. $$
Since the weights $\mu_{xy}$ are symmetric,
$$ \sum_{x\in\Gamma}m(x)\,u^2(t,x)\,\frac{\partial K}{\partial t}(t,x)=\sum_{x,y\in\Gamma}\frac{\mu_{xy}}2\,\Big(u^2(t,x)\,\frac{\partial K}{\partial t}(t,x)+u^2(t,y)\,\frac{\partial K}{\partial t}(t,y)\Big) $$
and
$$ \sum_{x,y\in\Gamma}2\,\mu_{xy}\,\big(u(t,y)-u(t,x)\big)\,u(t,x)\,K(t,x)=\sum_{x,y\in\Gamma}\mu_{xy}\,\big(u(t,y)-u(t,x)\big)\,\big(u(t,x)\,K(t,x)-u(t,y)\,K(t,y)\big)\,. $$
This yields
$$ I'(t)=\sum_{x,y\in\Gamma}\mu_{xy}\,\Big[\frac{u^2(t,x)}2\,\Big(\frac{\partial K}{\partial t}(t,x)-2\,K(t,x)\Big)+u(t,x)\,u(t,y)\,\big(K(t,x)+K(t,y)\big)+\frac{u^2(t,y)}2\,\Big(\frac{\partial K}{\partial t}(t,y)-2\,K(t,y)\Big)\Big]\ \le\ 0 $$
because of (3.16).
Construction of a function $K$. Take $\zeta(t,x)=\log(K(t,x))$; (3.16) becomes
(3.17)
$$ \Phi\big(\zeta(t,x)-\zeta(t,y)\big)+\frac{\partial\zeta}{\partial t}(t,x)+\frac{\partial\zeta}{\partial t}(t,y)\ \le\ \frac12\,\frac{\partial\zeta}{\partial t}(t,x)\,\frac{\partial\zeta}{\partial t}(t,y)\,, $$
where $\Phi(s)=\cosh(s)-1$. Note that $\Phi(s)\simeq s^2/2$ for $s$ small, so that (3.17) may be connected to the following eikonal inequation for the heat equation on a continuous geometry,
$$ \frac{\partial\zeta}{\partial t}+\frac12\,|\nabla\zeta|^2\ \le\ 0\,, $$
with a solution $K=e^{\zeta}=e^{d^2/(2t)}$, where $d$ is a distance function of $x$. Our parabolic equation should have been normalized to obtain the same coefficients. This difference apart, the differential inequation (3.17) contains only first-order terms; that is why we get nice solutions considering its Legendre associate. For instance,
$$ \zeta(t,x)=\zeta(t,d(x))=\max_{\xi\ge0}\,\big\{\xi\,d(x)-\Phi(\xi)\,t\big\} $$
is a solution if $x\sim y$ implies $|d(x)-d(y)|\le1$. Indeed, note $\xi(t,x)$ a value for which the maximum is reached; then
$$ \frac{\partial\zeta}{\partial t}(t,x)=-\Phi\big(\xi(t,x)\big) $$
and
$$ |\zeta(t,x)-\zeta(t,y)|\ \le\ \max\{\xi(t,x),\,\xi(t,y)\}\,. $$
We obtain
$$ \xi(t,x)=\operatorname{arg\,sinh}\frac{d(x)}t\,,\qquad \zeta(t,d)=d\,\operatorname{arg\,sinh}\frac dt-t\,\Big(\sqrt{1+\frac{d^2}{t^2}}-1\Big)\,. $$
It will be useful to note that, since $\partial\zeta/\partial d=\xi$,
(3.18)
$$ \zeta(t,d)\ \le\ \frac12\,\frac{d^2}t\,,\qquad d\le C\,t\ \text{ implies }\ \zeta(t,d)\ \ge\ \frac{\operatorname{arg\,sinh}C}{2\,C}\,\frac{d^2}t\,. $$
Denote $E[t,d]=e^{\zeta(t,d)}$ in the sequel. This function has already been introduced by E. B. Davies in [8] with his semigroup perturbation argument (see [7]). With this argument and the Harnack inequality, L. Saloff-Coste proves Gaussian upper bounds in [28], using ideas of [38], [39]. We adapt this proof to the use of the integrated maximum principle in the next proposition.
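The two bounds in (3.18) follow from $\operatorname{arg\,sinh}u\le u$ and from the concavity of $\operatorname{arg\,sinh}$ (which gives $\operatorname{arg\,sinh}u\ge u\,\operatorname{arg\,sinh}(C)/C$ for $0\le u\le C$), since $\zeta(t,d)=\int_0^d\operatorname{arg\,sinh}(s/t)\,ds$. A quick numerical check (an illustration added here, not from the paper):

```python
import math

def zeta(t, d):
    # zeta(t, d) = d*arcsinh(d/t) - t*(sqrt(1 + d^2/t^2) - 1),
    # the Legendre transform of t*(cosh - 1) evaluated at d
    s = d / t
    return d * math.asinh(s) - t * (math.sqrt(1 + s * s) - 1)

# (3.18): zeta(t, d) <= d^2/(2t)  always, and for d <= C*t,
#         zeta(t, d) >= (arcsinh(C)/(2C)) * d^2/t
C = 5.0
for t in [1.0, 10.0, 100.0]:
    for d in [0.1, 1.0, 3.0, 4.9 * t]:
        z = zeta(t, d)
        assert z <= d * d / (2 * t) + 1e-12
        assert z >= (math.asinh(C) / (2 * C)) * d * d / t - 1e-12
```

Note that $\zeta(t,d)$ is decreasing in $t$, so $K(t,x)=E[t,d(x)]$ is an admissible weight for Theorem 3.3.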
Proposition 3.4 (Off-diagonal upper bound). Assume $(\Gamma,\mu)$ satisfies $H(C_H)$; then for all $x$, $y$, $t$,
$$ P_t(x,y)\ \le\ \frac{C\,m(y)}{\sqrt{V(x,\sqrt t)\,V(y,\sqrt t)\;E[6\,t,\,d(x,y)]}}\,, $$
where $C$ depends only on $C_H$.
Proof. Write $p_s(\xi,\eta)=P_s(\xi,\eta)/m(\eta)$ for the symmetric density, and consider the following solution of the parabolic equation,
$$ u(\tau,\cdot)=\sum_{\eta\in B(y,\sqrt t)}P_\tau(\cdot,\eta)\,p_t(\eta,x)\,. $$
This will be useful to estimate $A(t)=\sum_{\eta\in B(y,\sqrt t)}m(\eta)\,p_t^2(\eta,x)$. Indeed, we apply Theorem 3.3 between $0$ and $2t$ to
$$ I(\tau)=\sum_{\eta\in\Gamma}m(\eta)\,u^2(\tau,\eta)\,E[t+\tau,\,d(y,\eta)]\,. $$
Since $u(0,\eta)=0$ if $\eta\notin B(y,\sqrt t)$ and $u(0,\eta)=p_t(\eta,x)$ if $\eta\in B(y,\sqrt t)$, we have
$$ I(0)=\sum_{\eta\in B(y,\sqrt t)}m(\eta)\,p_t^2(\eta,x)\,E[t,\,d(y,\eta)]\ \le\ C\,A(t)\,. $$
Just note that $d\le\sqrt t$ implies $E[t,d]\le e^{1/2}$; see (3.18). In order to give a lower bound for $I(2t)$, we use
$$ p_{2t}(\xi,\eta)\ \ge\ C_H^{-1}\,p_t(x,\eta)\,, $$
so that, by the symmetry of $p_t$,
$$ u(2t,\xi)\ \ge\ C_H^{-1}\,A(t)\qquad\text{for }\xi\in B(x,\sqrt t)\,. $$
This yields
$$ I(2t)=\sum_{\eta\in\Gamma}m(\eta)\,u^2(2t,\eta)\,E[3t,\,d(y,\eta)]\ \ge\ C_H^{-2}\,A(t)^2\,V(x,\sqrt t)\,E[3t,\,d(y,B(x,\sqrt t))]\,. $$
Since $I(2t)\le I(0)$, we get
$$ A(t)\ \le\ \frac{C}{V(x,\sqrt t)\,E[3t,\,d(y,B(x,\sqrt t))]}\,. $$
Again the Harnack inequality gives $p_t(x,y)\le C_H\,p_{2t}(x,\eta)$ for $\eta\in B(y,\sqrt t)$, hence
$$ p_t^2(x,y)\ \le\ \frac{C_H^2}{V(y,\sqrt t)}\sum_{\eta\in B(y,\sqrt t)}m(\eta)\,p_{2t}^2(\eta,x)\ \le\ \frac{C_H^2}{V(y,\sqrt t)}\,A(2t)\ \le\ \frac{C}{V(y,\sqrt t)\,V(x,\sqrt{2t})\,E[6\,t,\,d(y,B(x,\sqrt{2t}))]}\,. $$
The proposition follows, with $P_t(x,y)=m(y)\,p_t(x,y)$, because of the volume regularity and
$$ E[6\,t,\,d(y,B(x,\sqrt{2t}))]\ \ge\ c\,E[6\,t,\,d(x,y)]\,. $$
Remark 1. Instead of $E[6\,t,d(x,y)]$, we could get $E[\gamma\,t,d(x,y)]$ for any $\gamma>1$, the constant $C$ depending also on $\gamma$: we would just have to apply the Harnack inequality between $t$ and $(1+\varepsilon)\,t$. Furthermore, we could have been more precise in the choice of a function $E$ (using both $\Phi(\xi(x))$ and $\Phi(\xi(y))$ instead of only the biggest). All of these manipulations tend to obtain the analog of $e^{d^2/((4+\varepsilon)t)}$ for $d/t$ small. Don't forget the normalization of the parabolic equation to compare.

Remark 2. One might find the function $E$ too complicated, but [25] explained that it is not purely technical. To understand it, one can take it as $e^{c\,d^2/t}$ for $d/t$ small and $(d/t)^d\,e^{-d}$ for $d/t$ huge, glued together. The second value is adapted to the fact that, when $t\to0$,
$$ P_t(x,y)\ \sim\ p_{d(x,y)}(x,y)\,\frac{t^{d(x,y)}}{d(x,y)!}\,, $$
so at least in this case the function $E$ gives an optimal upper bound.
3.2. Discrete-time estimates.

Assume $\Delta(\alpha)$ is true, so that we can consider the positive submarkovian kernel $\tilde p=p-\alpha\,\delta$ (this means $\tilde p(x,y)=p(x,y)-\alpha\,\delta(x,y)$; then $\tilde p_n(x,y)$ is defined as in (1.1)). Now compute $P_n$ and $p_n$ with $\tilde p$:
(3.19)
$$ P_n(x,y)=e^{(\alpha-1)n}\sum_{k=0}^{+\infty}\frac{n^k}{k!}\,\tilde p_k(x,y)=\sum_{k=0}^{+\infty}a_k\,\tilde p_k(x,y)\,, $$
$$ p_n(x,y)=\sum_{k=0}^{n}C_n^k\,\alpha^{n-k}\,\tilde p_k(x,y)=\sum_{k=0}^{n}b_k\,\tilde p_k(x,y)\,. $$
To compare the two sums we study $c_k=b_k/a_k$ for $0\le k\le n$,
$$ c_k=\frac{n!}{(n-k)!}\;\frac{\alpha^{n-k}\,e^{(1-\alpha)n}}{n^k}\,. $$
Lemma 3.5.
$$ 0\le k\le n\ \text{ implies }\ c_k\ \le\ C(\alpha)\,; $$
$$ n\ge\frac{2\,a^2}{\alpha^2}\ \text{ and }\ |k-(1-\alpha)\,n|\le a\,\sqrt n\ \text{ imply }\ c_k\ \ge\ C(a,\alpha)>0\,. $$
The condition $n\ge2\,a^2/\alpha^2$ ensures that $a\,\sqrt n\le\alpha\,n$. We shall consider only $\alpha\le1/4$, so that we always have $n/2\le k\le n$ in the second assertion.
Proof. The $c_k$'s follow the recurrence formula $c_{k+1}=c_k\,(n-k)/(\alpha\,n)$, so they reach a maximum for $k$ around the real $(1-\alpha)\,n$. Let us use the Gamma function, $\Gamma(n+1)=n!$ and
$$ \frac{t^t\,e^{-t}\,\sqrt t}{C}\ \le\ \Gamma(t+1)\ \le\ C\,t^t\,e^{-t}\,\sqrt t\,. $$
Set $c_t=\Gamma(n+1)\,\alpha^{n-t}\,e^{(1-\alpha)n}/(\Gamma(n-t+1)\,n^t)$. Similarly, it reaches its maximum for $t=(1-\alpha)\,n$. Thus,
$$ c_k\ \le\ \frac{\Gamma(n+1)\,\alpha^{\alpha n}\,e^{(1-\alpha)n}}{\Gamma(\alpha n+1)\,n^{(1-\alpha)n}}\ \le\ C^2\,\frac{n^n\,e^{-n}\,\sqrt n\;\alpha^{\alpha n}\,e^{(1-\alpha)n}}{(\alpha n)^{\alpha n}\,e^{-\alpha n}\,\sqrt{\alpha n}\;n^{(1-\alpha)n}}\ =\ \frac{C^2}{\sqrt\alpha}\,. $$
Next, we prove the second assertion of the lemma. Again, because of the $c_k$'s variations, we only check it for $k=(1-\alpha)\,n\pm a\,\sqrt n$. For instance, for $(1-\alpha)\,n-a\,\sqrt n\le k\le(1-\alpha)\,n$,
$$ c_k=\frac{n!}{(n-k)!}\,\frac{\alpha^{n-k}\,e^{(1-\alpha)n}}{n^k}\ \ge\ \frac{\Gamma(n+1)\,\alpha^{\alpha n+a\sqrt n}\,e^{(1-\alpha)n}}{\Gamma(\alpha n+a\sqrt n+1)\,n^{(1-\alpha)n-a\sqrt n}} $$
$$ \ge\ \frac1{C^2}\,\Big(\frac{\alpha\,n}{\alpha\,n+a\,\sqrt n}\Big)^{\alpha n+a\sqrt n}e^{a\sqrt n}\ =\ \frac1{C^2}\,e^{a\sqrt n-(\alpha n+a\sqrt n)\,\log\left(1+\frac a{\alpha\sqrt n}\right)} $$
$$ \ge\ \frac1{C^2}\,e^{a\sqrt n-(\alpha n+a\sqrt n)\,a/(\alpha\sqrt n)}\ =\ \frac1{C^2}\,e^{-a^2/\alpha}\,. $$
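Lemma 3.5 lends itself to a numerical sanity check; the sketch below (an illustration added here, with ad hoc bounds $10$ and $0.1$) evaluates $c_k=\frac{n!}{(n-k)!}\,\frac{\alpha^{n-k}e^{(1-\alpha)n}}{n^k}$ through $\log\Gamma$ to avoid overflow.

```python
import math

def log_ck(n, k, alpha):
    # log of c_k = (n!/(n-k)!) * alpha^(n-k) * e^((1-alpha)n) / n^k
    return (math.lgamma(n + 1) - math.lgamma(n - k + 1)
            + (n - k) * math.log(alpha) + (1 - alpha) * n
            - k * math.log(n))

alpha = 0.25
for n in [50, 200, 800]:
    cks = [math.exp(log_ck(n, k, alpha)) for k in range(n + 1)]
    center = round((1 - alpha) * n)
    # first assertion: c_k bounded uniformly in n and k
    # second assertion: c_k bounded below near k = (1 - alpha) n
    print(n, max(cks), cks[center])
```

The maximum stabilizes around $1/\sqrt{2\pi\alpha}$, in line with the $C^2/\sqrt\alpha$ bound of the proof.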
This technical result is the key to compare p and P . One side is easy
now.
Theorem 3.6. Assume $(\Gamma,\mu)$ satisfies $p(x,x)>0$ for all $x$ in $\Gamma$. Then, for all $x$, $y$, $n$,
$$ p_n(x,y)\ \le\ C(\alpha)\,P_n(x,y)\,. $$
This is the case when $\Delta(\alpha)$ is true. The theorem may be applied in many other situations (with any volume growth) where it is easier to work on $P$. When no hypothesis is assumed on $p(x,x)$, see the comment after Definition 1.3 about $(\Gamma,\mu^{(2)})$. For instance, on a locally uniformly finite (by $N$) non-weighted graph ($\mu_{xy}\in\{0,1\}$), $p_2(x,x)\ge1/N$.
The other side of the comparison is more intricate.
Proposition 3.7 (On-diagonal estimates). Assume $(\Gamma,\mu)$ satisfies $DV(C_1)$, $P(C_2)$ and $\Delta(\alpha)$. Then, there exist $c_d$, $C_d>0$, depending only on $C_1$, $C_2$ and $\alpha$, such that
$$ p_n(x,y)\ \le\ C_d\,\frac{m(y)}{V(x,\sqrt n)} $$
for all $x$, $y$, $n$, and
$$ d(x,y)^2\le n\ \text{ implies }\ p_n(x,y)\ \ge\ c_d\,\frac{m(y)}{V(x,\sqrt n)}\,. $$
Proof. The first assertion follows from Theorem 3.6 and the upper bound in Proposition 3.1. To deduce the second assertion from the lower bound in Proposition 3.1, we will have to prove that, in the sum (3.19), the terms for $|k-(1-\alpha)\,n|\le a\,\sqrt n$ contain half of the whole sum. First we will set $\bar\alpha=\alpha/2$ (this will be useful later, when we apply the upper bound to a Markov kernel $p'$). Now, to prove that the lower bound for $P$ implies one for $p$, it will be sufficient to prove that for all $\varepsilon>0$ there exists $a$ such that
$$ \sum_{|k-(1-\alpha)n|>a\sqrt n}a_k\,\tilde p_k(x,y)\ \le\ \varepsilon\,\frac{m(y)}{V(x,\sqrt n)}\,. $$
We will take $\varepsilon=C_H^{-2}/2$; the desired lower bound will then be proved for $n\ge N=2\,a^2/\alpha^2$. For $n\le N$, the condition $\Delta(\alpha)$ alone gives the required lower bound.
We can apply the upper bound to the Markov kernel $p'=\tilde p/(1-\bar\alpha)$, with $\tilde p=p-\bar\alpha\,\delta$. Indeed, it is generated by the weights
$$ \mu'_{xx}=\frac{\mu_{xx}-\bar\alpha\,m(x)}{1-\bar\alpha}\,,\qquad \mu'_{xy}=\frac{\mu_{xy}}{1-\bar\alpha}\ \text{ if }x\ne y\,,\qquad m'(x)=m(x)\,. $$
Thus, the volume is identical and $P(C_2)$ is still satisfied because the weights $\mu_{xy}$ for $x\ne y$ have increased. This yields $p'_k(x,y)\le C'_d\,m(y)/V(x,\sqrt k)$, hence $\tilde p_k(x,y)\le C'_d\,m(y)\,(1-\bar\alpha)^k/V(x,\sqrt k)$. Next, we have to get the estimate
$$ \sum_{|k-(1-\alpha)n|>a\sqrt n}e^{(\alpha-1)n}\,\frac{\big((1-\alpha)\,n\big)^k}{k!}\,\frac1{V(x,\sqrt k)}\ \le\ \frac{\varepsilon'}{V(x,\sqrt n)}\,. $$
The sum for $k>(1-\alpha)\,n+a\,\sqrt n$ is easier, because we simply use $V(x,\sqrt k)\ge V(x,\sqrt{n/2})\ge V(x,\sqrt n/2)\ge V(x,\sqrt n)/C_1$. Then, we obtain the $(k+1)$-th term of the sum if we multiply the $k$-th term by $(1-\alpha)\,n/(k+1)$. So we estimate this part by a geometric sum,
$$ \sum_{k>(1-\alpha)n+a\sqrt n}e^{(\alpha-1)n}\,\frac{\big((1-\alpha)\,n\big)^k}{k!}\,\frac1{V(x,\sqrt k)}\ \le\ \frac{C_1}{V(x,\sqrt n)}\;e^{(\alpha-1)n}\,\frac{\big((1-\alpha)\,n\big)^{(1-\alpha)n+a\sqrt n}}{\Gamma\big((1-\alpha)\,n+a\,\sqrt n+1\big)}\;\frac{(1-\alpha)\,n+a\,\sqrt n}{a\,\sqrt n} $$
$$ \le\ \frac{C\,C_1}{V(x,\sqrt n)}\;e^{a\sqrt n-((1-\alpha)n+a\sqrt n)\,\log\left(1+\frac a{(1-\alpha)\sqrt n}\right)}\;\underbrace{\frac{\sqrt{(1-\alpha)\,n+a\,\sqrt n}}{a\,\sqrt n}}_{\le\,\sqrt2/a\ \text{because }n\,\ge\,2a^2/\alpha^2}\ \le\ \frac{\varepsilon'}{2\,V(x,\sqrt n)} $$
with a good choice of $a$. Indeed, since $\log(1+u)\ge u/(u+1)$, the argument of the exponential function appears to be negative.
To deal with $k<(1-\alpha)\,n-a\,\sqrt n$, we must be careful with the factors $V(x,\sqrt k)/V(x,\sqrt{k-1})$ when we compute the $(k-1)$-th term from the $k$-th. A rough application of the volume regularity gives $V(x,\sqrt k)\le C_1\,V(x,\sqrt{k-1})$. So, for the terms $k\le(1-\alpha)\,n/(2\,C_1)$, the $(k-1)$-th term is less than one half of the $k$-th term and the estimation is straightforward. Now, for the other terms, we bound all $1/V(x,\sqrt k)$ by $1/V\big(x,\sqrt{(1-\alpha)\,n/(2\,C_1)}\big)$; then the same computation as for $k>(1-\alpha)\,n+a\,\sqrt n$ shows the estimate with $1/V\big(x,\sqrt{(1-\alpha)\,n/(2\,C_1)}\big)$, which is less than $C/V(x,\sqrt n)$ if we apply the volume regularity many times.
Now we prove off-diagonal upper and lower bounds.

Theorem 3.8 (Off-diagonal estimates). Assume $(\Gamma,\mu)$ satisfies $DV(C_1)$, $P(C_2)$ and $\Delta(\alpha)$. Then, there exist positive $c_l$, $C_l$, $C_r$ and $c_r$, depending only on $C_1$, $C_2$ and $\alpha$, such that $G(c_l,C_l,C_r,c_r)$ is true.
Proof of the upper bound. It is a consequence of Theorem 3.6 and Proposition 3.4:
$$ p_n(x,y)\ \le\ \frac{C\,m(y)}{\sqrt{V(x,\sqrt n)\,V(y,\sqrt n)\;E[6\,n,\,d(x,y)]}}\ \le\ \frac{C\,m(y)}{\sqrt{V(x,\sqrt n)\,V(y,\sqrt n)}}\;e^{-c\,d(x,y)^2/n} $$
for $d(x,y)\le n$, because of (3.18). Now use
$$ V(x,\sqrt n)\ \le\ V\big(y,\,d(x,y)+\sqrt n\big)\ \le\ C_1\,\Big(\frac{d(x,y)+\sqrt n}{\sqrt n}\Big)^{\log C_1/\log2}\,V(y,\sqrt n)\,. $$
This yields
$$ p_n(x,y)\ \le\ \frac{C\,C_1\,m(y)}{V(x,\sqrt n)}\,\Big(\frac{d(x,y)+\sqrt n}{\sqrt n}\Big)^{\log C_1/(2\log2)}\,e^{-(c/2)\,d(x,y)^2/n}\;e^{-(c/2)\,d(x,y)^2/n}\,. $$
It is clear that the factor
$$ \Big(\frac{d(x,y)+\sqrt n}{\sqrt n}\Big)^{\log C_1/(2\log2)}\,e^{-(c/2)\,d(x,y)^2/n} $$
is bounded.
Proof of the lower bound. It is well known that the Gaussian lower bound follows from the on-diagonal one. So let us apply many times the second assertion of Proposition 3.7. We set $n=n_1+\cdots+n_j$, $x=x_0,x_1,\dots,x_j=y$ and $B_0=\{x\}$, $B_i=B(x_i,r_i)$, $B_j=\{y\}$, such that
$$ \begin{cases} j-1\ \le\ C\,d(x,y)^2/n\,,\\[2pt] r_i\ \ge\ c\,\sqrt{n_{i+1}}\,,\ \text{ so that }z\in B_i\text{ implies }V(z,\sqrt{n_{i+1}})\le A\,V(B_i)\,,\\[2pt] \displaystyle\sup_{z\in B_{i-1},\,z'\in B_i}d(z,z')^2\ \le\ n_i\,,\ \text{ so that }p_{n_i}(z,z')\ \ge\ c_d\,\frac{m(z')}{V(z,\sqrt{n_i})}\,. \end{cases} $$
We will see below how to construct this decomposition; it is a purely technical problem (cutting in a discrete context). It will be sufficient to prove the Gaussian lower bound, since
$$ p_n(x,y)\ \ge\ \sum_{(z_1,\dots,z_{j-1})\in B_1\times\cdots\times B_{j-1}}p_{n_1}(x,z_1)\,p_{n_2}(z_1,z_2)\cdots p_{n_j}(z_{j-1},y) $$
$$ \ge\ \sum_{(z_1,\dots,z_{j-1})\in B_1\times\cdots\times B_{j-1}}c_d\,\frac{m(z_1)}{V(x,\sqrt{n_1})}\;c_d\,\frac{m(z_2)}{V(z_1,\sqrt{n_2})}\cdots c_d\,\frac{m(y)}{V(z_{j-1},\sqrt{n_j})} $$
$$ \ge\ c_d^{\,j}\,A^{1-j}\sum_{(z_1,\dots,z_{j-1})\in B_1\times\cdots\times B_{j-1}}\frac{m(z_1)}{V(x,\sqrt{n_1})}\,\frac{m(z_2)}{V(B_1)}\cdots\frac{m(y)}{V(B_{j-1})}\ =\ c_d\,\frac{m(y)}{V(x,\sqrt{n_1})}\,\Big(\frac{c_d}A\Big)^{j-1}\,. $$
We just have to choose $C_l\ge C\,\log(A/c_d)$.
Decomposition. Consider three cases.
If $d(x,y)\ge n/100$, then we can set $j=n$, $n_i\equiv1$, $B_i=\{x_i\}$ (for instance, $r_i\equiv1/2$) and choose $d(x_i,x_{i+1})\le1$.
If $d(x,y)^2\le n$, then we can set $j=1$ (in fact, Proposition 3.7 has not to be iterated).
Otherwise, set
$$ j=10\,\Big[\frac{d(x,y)^2}n\Big]\ \ge\ 10\,. $$
This way, $n/j$ and $d(x,y)/j$ are bigger than $10$ and
$$ \Big(\frac{d(x,y)}j\Big)^2\ \le\ \frac n{9\,j}\,, $$
so we can choose $n_i\simeq n/j$ (i.e. $[n/j]$ or $[n/j]+1$) and $d(x_i,x_{i+1})\simeq r_i\simeq d(x,y)/j$.
3.3. Discrete-time Harnack inequality.

We will prove the discrete-time Harnack inequality thanks to the Gaussian estimates. The method is based on [11, Section 3]. Denote $B=B(x_0,R)$, where $R\in\mathbb N$, with boundary $\partial B=\{x:\,d(x_0,x)=R\}$. The idea of the proof is that, for $(\nu,\zeta)\in\partial Q$ and $(n,x)\in Q_\ominus$ or $Q_\oplus$, the lower and upper bounds of $p_{n-\nu}(x,\zeta)$ differ only by a constant. The difficulty is that the solution $u$ on $Q$ is not a combination of the $p_{n-\nu}(x,\zeta)$ but of the $U_{n-\nu}(x,\zeta)$, where $U_n(x,y)$ is the solution for $(n,x)\in\mathbb N\times B$ satisfying $U_0(x,y)=\delta(x,y)$ and $U_n(x,y)=0$ for $n>0$ and $x\in\partial B$. Obviously $U_n(x,y)\le p_n(x,y)$, so only the lower bound needs some work.
Lemma 3.9. Assume $(\Gamma,\mu)$ satisfies $G(c_l,C_l,C_r,c_r)$. Then, there exist $\varepsilon,c>0$, depending only on $c_l$, $C_l$, $C_r$ and $c_r$, such that
$$ U_n(x,y)\ \ge\ \frac{c\,m(y)}{V(x_0,2\,\varepsilon R)} $$
whenever
$$ \begin{cases} (\varepsilon R)^2\ \le\ n\ \le\ (2\,\varepsilon R)^2\,,\\ x\in B(x_0,\varepsilon R)\,,\\ y\in B(x_0,2\,\varepsilon R)\,,\\ d(x,y)\ \le\ n\,. \end{cases} $$
Proof. The idea (see [11, Lemma 5.1]) is that if $d(x,\partial B)$ is big enough, $p-U$ is small and the lower bound for $p$ applies to $U$. First note that
$$ p_n(x,y)\ \ge\ \frac{2\,c\,m(y)}{V(x_0,2\,\varepsilon R)}\,, $$
where $2\,c=c_l\,e^{-9\,C_l}$ (up to the volume regularity). Now, write
$$ r(n,x)=p_n(x,y)-U_n(x,y)=\sum_{\zeta\in\partial B,\ \nu\le n}a(\nu,\zeta)\,p_{n-\nu}(x,\zeta)\,, $$
where $a(\nu,\zeta)\ge0$. These coefficients may be constructed by recurrence on $\nu$. Another point of view is that $(m(\zeta)/m(y))\,a(\nu,\zeta)$ is the probability to reach $\partial B$ for the first time at $\zeta$ after $\nu$ steps. That is why
$$ \sum_{\nu,\zeta}\frac{m(\zeta)}{m(y)}\,a(\nu,\zeta)\ \le\ 1\,. $$
We can check it this way:
$$ 1=\sum_x\frac{m(x)}{m(y)}\,p_n(x,y)\ \ge\ \sum_x\frac{m(x)}{m(y)}\,r(n,x)=\sum_x\frac{m(x)}{m(y)}\sum_{\nu,\zeta}a(\nu,\zeta)\,p_{n-\nu}(x,\zeta)=\sum_{\nu,\zeta}a(\nu,\zeta)\underbrace{\sum_x\frac{m(x)}{m(y)}\,p_{n-\nu}(x,\zeta)}_{=\,m(\zeta)/m(y)}\,. $$
To estimate $r(n,x)$ we use the Gaussian upper bound:
$$ \frac{m(y)}{m(\zeta)}\,p_{n-\nu}(x,\zeta)\ \le\ \frac{C_r\,m(y)}{V(x,\sqrt{n-\nu})}\,e^{-c_r\,d(x,\zeta)^2/(n-\nu)}\ \le\ \frac{C_r\,m(y)}{V(x_0,2\,\varepsilon R)}\;\frac{V(x_0,2\,\varepsilon R)}{V(x,\sqrt{n-\nu})}\,e^{-c_r\,((1-\varepsilon)R)^2/(n-\nu)}\ \le\ \frac{c\,m(y)}{V(x_0,2\,\varepsilon R)} $$
with a good choice of $\varepsilon$, hence $r(n,x)\le c\,m(y)/V(x_0,2\,\varepsilon R)$ and $U_n(x,y)\ge c\,m(y)/V(x_0,2\,\varepsilon R)$. The lemma follows.
Theorem 3.10. Assume $(\Gamma,\mu)$ satisfies $G(c_l,C_l,C_r,c_r)$; then there exists $C_H>0$ such that $H(C_H)$ is true.

Proof. Let us first point out that the Gaussian lower bound yields a volume regularity. The following argument,
$$ 1\ \ge\ \sum_{y\in B(x,2r)}p_{r^2}(x,y)\ \ge\ \sum_{y\in B(x,2r)}\frac{c_l\,m(y)}{V(x,r)}\,e^{-C_l\,(2r)^2/r^2}\ =\ c_l\,e^{-4\,C_l}\,\frac{V(x,2\,r)}{V(x,r)}\,, $$
is correct for $r$ integer and $r\ge2$ (because we need $d(x,y)\le r^2$). This extends to other values thanks to $\Delta(\alpha)$ (which is an immediate consequence of $G(c_l,C_l,C_r,c_r)$).
Now we prove the Harnack inequality for $\eta=\varepsilon$, $\eta_1=\varepsilon^2/2$, $\eta_2=\varepsilon^2$, $\eta_3=2\,\varepsilon^2$, $\eta_4=4\,\varepsilon^2$ and $r=R\in\mathbb N$, in the notations of Definition 1.6. Let $u$ be a solution on $Q$; there is a decomposition
$$ v(n,x)=\sum_{\substack{\nu\le n,\ \zeta\in\partial B(x_0,2\varepsilon R)\\ \text{or }\nu=0,\ \zeta\in B(x_0,2\varepsilon R)}}a(\nu,\zeta)\,U_{n-\nu}(x,\zeta) $$
with non-negative $a(\nu,\zeta)$, such that $u(n,x)=v(n,x)$ if $x\in B(x_0,2\,\varepsilon R)$. Again the coefficients may be constructed by recurrence on $\nu$; the key is to keep $v\le u$ everywhere.
Thus, it will be sufficient to prove the Harnack inequality for the terms $U_{\cdot-\nu}(\cdot,\zeta)$; this means $U_{n_\ominus-\nu}(x_\ominus,\zeta)\le C\,U_{n_\oplus-\nu}(x_\oplus,\zeta)$ for $(n_\ominus,x_\ominus)\in Q_\ominus$, $(n_\oplus,x_\oplus)\in Q_\oplus$, $\nu\le\eta_2\,R^2$ and $d(x_\ominus,x_\oplus)\le n_\oplus-n_\ominus$.
The lower bound is a consequence of Lemma 3.9: if $d(x_\oplus,\zeta)\le n_\oplus-\nu$, then
$$ U_{n_\oplus-\nu}(x_\oplus,\zeta)\ \ge\ \frac{c\,m(\zeta)}{V(x_0,2\,\varepsilon R)} $$
for $x_\oplus\in B(x_0,\eta R)=B(x_0,\varepsilon R)$ and $\eta_3\,R^2\le n_\oplus\le\eta_4\,R^2$. If $d(x_\oplus,\zeta)>n_\oplus-\nu$, then
$$ d(x_\ominus,\zeta)\ \ge\ d(x_\oplus,\zeta)-d(x_\ominus,x_\oplus)\ >\ (n_\oplus-\nu)-(n_\oplus-n_\ominus)\ =\ n_\ominus-\nu $$
and $U_{n_\ominus-\nu}(x_\ominus,\zeta)=0$.
The upper bound looks alike, either because of time regularization in the case $\nu=0$ and $\zeta\in B(x_0,2\,\varepsilon R)$, or because of space regularization in the case $\zeta\in\partial B(x_0,2\,\varepsilon R)$. In the first case, for $x_\ominus\in B(x_0,\varepsilon R)$ and $\eta_1\,R^2\le n_\ominus\le\eta_2\,R^2$,
$$ U_{n_\ominus}(x_\ominus,\zeta)\ \le\ p_{n_\ominus}(x_\ominus,\zeta)\ \le\ \frac{C_r\,m(\zeta)}{V(x_\ominus,\sqrt{n_\ominus})}\ \le\ \frac{C_r\,m(\zeta)}{V(x_\ominus,\sqrt{\eta_1}\,R)}\ \le\ \frac{C\,m(\zeta)}{V(x_0,2\,\varepsilon R)}\,, $$
where $C=C_r\,C_1^N$; we must apply the volume regularity $N$ times, $N$ depending on $\varepsilon$ and $\eta_1$. In the second case, we use the Gaussian coefficient and $d(x_\ominus,\zeta)\ge[2\,\varepsilon R]-[\varepsilon R]$:
$$ U_{n_\ominus-\nu}(x_\ominus,\zeta)\ \le\ p_{n_\ominus-\nu}(x_\ominus,\zeta)\ \le\ \frac{C_r\,m(\zeta)}{V(x_\ominus,\sqrt{n_\ominus-\nu})}\,e^{-c_r\,d(x_\ominus,\zeta)^2/(n_\ominus-\nu)}\ \le\ \frac{C\,m(\zeta)}{V(x_0,2\,\varepsilon R)}\,. $$
3.4. Poincaré inequality.

Theorem 3.11. Assume $H(C_H)$; then there exist $C_1$, $C_2$ and $\alpha>0$ such that $DV(C_1)$, $P(C_2)$ and $\Delta(\alpha)$ are true.
Proof. In the comments after Theorem 1.7, we already mentioned that $H(C_H)$ implies a property $\Delta(\alpha)$. Then $DV(C_1)$ is proven as in Section 3.1. The discrete version raises new difficulties only for small radii, but then $\Delta(\alpha)$ is sufficient. Thus we also obtain, as in Proposition 3.1,
$$ d(x,y)^2\le n\ \text{ implies }\ p_n(x,y)\ \ge\ c\,\frac{m(y)}{V(x,\sqrt n)}\,. $$
The fact that the parabolic Harnack inequality implies the Poincaré inequality is proven on manifolds in [30], with ideas of [18]. Take $f$ defined on $B(x_0,2\,r)$ and consider the Neumann problem on $B(x_0,2\,r)$. It may be defined this way: consider the graph $B(x_0,2\,r)$ with the restriction $\mu_{|B(x_0,2r)\times B(x_0,2r)}$; it gives a kernel $p'(x,y)$. The crucial point is that $p'(x,y)$ has increased (comparing to $p(x,y)$) for $x\in\partial B(x_0,2\,r)$. Set $P$ the Markov operator
$$ Pg(x)=\sum_y p'(x,y)\,g(y) $$
and denote the iteration $Q=P^{[4r^2]}$. For any positive $g$, $P^ng(x)$ is a positive solution on $B(x_0,2\,r)$ of the parabolic equation (of $\Gamma'$). Thus, for $x\in B(x_0,r)$,
$$ \big(Q\,(f-(Qf)(x))^2\big)(x)\ \ge\ \sum_{y\in B(x_0,r)}\frac{c\,m(y)}{V(x,2\,r)}\,\big(f(y)-(Qf)(x)\big)^2\ \ge\ \frac{c}{V(x_0,3\,r)}\sum_{y\in B(x_0,r)}m(y)\,\big(f(y)-f_{B(x_0,r)}\big)^2\,, $$
because $\sum_{y\in B(x_0,r)}m(y)\,(f(y)-\lambda)^2$ is minimal for $\lambda=f_{B(x_0,r)}$. This yields
(3.20)
$$ \sum_{y\in B(x_0,r)}m(y)\,\big|f(y)-f_{B(x_0,r)}\big|^2\ \le\ C\sum_{x\in B(x_0,2r)}\big(Q\,(f-(Qf)(x))^2\big)(x) $$
(3.21)
$$ =\ C\,\big(\|f\|_2^2-\|Qf\|_2^2\big)\ \le\ C\,\big(4\,r^2\,\|\nabla f\|_2^2\big)\,, $$
where
$$ \|f\|_2^2=\sum_{x\in B(x_0,2r)}m'(x)\,f(x)^2\,,\qquad \|\nabla f\|_2^2=\sum_{x,y\in B(x_0,2r)}\mu_{xy}\,|f(x)-f(y)|^2\,. $$
The line (3.20) is a variance formula, and the line (3.21) is justified by the two properties
$$ \|Pf\|_2^2\ \le\ \|f\|_2^2 \qquad\text{and}\qquad \|f\|_2^2-\|Pf\|_2^2\ \le\ \|\nabla f\|_2^2\,. $$
We give the proof of the second one, which is not so widely known as the first. Note that $a^2-b^2\le2\,a\,(a-b)$:
$$ \sum_x m'(x)\,\Big(f(x)^2-\Big(\sum_y p'(x,y)\,f(y)\Big)^2\Big)\ \le\ 2\sum_x m'(x)\,f(x)\,\Big(f(x)-\sum_y p'(x,y)\,f(y)\Big) $$
$$ =\ 2\sum_{x,y}\underbrace{m'(x)\,p'(x,y)}_{\mu_{xy}}\,f(x)\,\big(f(x)-f(y)\big)\ =\ \sum_{x,y}\mu_{xy}\,\big(f(x)-f(y)\big)^2\,. $$
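The two properties $\|Pf\|_2^2\le\|f\|_2^2$ and $\|f\|_2^2-\|Pf\|_2^2\le\|\nabla f\|_2^2$ hold for any reversible Markov kernel. The following sketch (added here as an illustration; the random weight matrix is an arbitrary choice) mirrors the computation above numerically.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 12
# symmetric weights mu_xy >= 0 with positive diagonal (loops)
mu = rng.random((N, N))
mu = mu + mu.T + 2 * np.eye(N)
m = mu.sum(axis=1)                 # m(x) = sum_y mu_xy
P = mu / m[:, None]                # Markov kernel p(x, y) = mu_xy / m(x)

f = rng.standard_normal(N)
def norm2(g):                      # ||g||_2^2 with weight m
    return float(np.sum(m * g * g))
Pf = P @ f
# ||grad f||_2^2 = sum_{x,y} mu_xy (f(x) - f(y))^2  (both orders counted)
grad2 = float(np.sum(mu * (f[:, None] - f[None, :]) ** 2))

assert norm2(Pf) <= norm2(f) + 1e-9
assert norm2(f) - norm2(Pf) <= grad2 + 1e-9
```

Reversibility ($m(x)\,p(x,y)=\mu_{xy}$ symmetric) is exactly what makes the symmetrization step in the proof work.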
This ends the proof of Theorem 1.7.
4. Some consequences of Harnack inequality and Gaussian estimates.

4.1. Hölder regularity.

Among the immediate consequences of the Harnack inequality are the Liouville theorem, stated in [9] because only the elliptic version is needed, and the Hölder regularity of solutions of the discrete parabolic equation.
Proposition 4.1. Assume $(\Gamma,\mu)$ satisfies the properties of Theorem 1.7. Then there exist $h>0$ and $C$ such that for all $x_0\in\Gamma$, $n_0\in\mathbb Z$ and $R\in\mathbb N$, if $u$ is a solution on $Q=(\mathbb Z\cap[n_0-2\,R^2,n_0])\times B(x_0,2\,R)$, $x_1,x_2\in B(x_0,R)$ and $n_1,n_2\in\mathbb Z\cap[n_0-R^2,n_0]$, then
$$ |u(n_2,x_2)-u(n_1,x_1)|\ \le\ C\,\Big(\frac{\sup\big\{\sqrt{|n_2-n_1|},\,d(x_1,x_2)\big\}}{R}\Big)^{h}\,\sup_Q|u|\,. $$
Proof. Fix $n_2\ge n_1$, set $Q(i)=(\mathbb Z\cap[n_2-2^{2i},n_2])\times B(x_2,2^i)$, $M(i)=\sup_{Q(i)}u$, $m(i)=\inf_{Q(i)}u$ and $\omega(i)=M(i)-m(i)$, and fix $i_1\le i_2$ with $2^{i_1-1}\le\sup\{\sqrt{|n_2-n_1|},d(x_1,x_2)\}<2^{i_1}$ and $2^{i_2}\le R<2^{i_2+1}$. This way, $\omega(i_1)\ge|u(n_2,x_2)-u(n_1,x_1)|$ and $\omega(i_2)\le2\,\sup_Q|u|$.
Set $m^*(i)=u(n_2-2^{2i+1},x_2)$ and apply the Harnack inequality in $Q(i+1)$ to $u-m(i+1)$ and $M(i+1)-u$:
$$ m^*(i)-m(i+1)\ \le\ C_H\,\big(m(i)-m(i+1)\big)\,, $$
$$ M(i+1)-m^*(i)\ \le\ C_H\,\big(M(i+1)-M(i)\big)\,. $$
Summing, this yields $\omega(i)\le(1-C_H^{-1})\,\omega(i+1)$. Thus,
$$ \omega(i_1)\ \le\ \big(1-C_H^{-1}\big)^{i_2-i_1}\,\omega(i_2) $$
and the proposition follows.
4.2. Green function.
With the Gaussian estimates for pn , one easily proves estimates for
the Green function.
Proposition 4.2. Assume $(\Gamma,\mu)$ satisfies the properties of Theorem 1.7. Then the Green function $G(x,y)=\sum_{n=0}^{+\infty}p_n(x,y)$ is finite if and only if
(4.22)
$$ \sum_{n=0}^{+\infty}\frac n{V(x,n)}\ <\ +\infty\,, $$
and it satisfies the estimates
(4.23)
$$ C^{-1}\,m(y)\sum_{n=d(x,y)}^{+\infty}\frac n{V(x,n)}\ \le\ G(x,y)\ \le\ C\,m(y)\sum_{n=d(x,y)}^{+\infty}\frac n{V(x,n)}\,. $$
Note that condition (4.22) is satisfied or not uniformly for $x\in\Gamma$. Indeed, for $n\ge d(x,x_0)$, $C_1^{-1}\,V(x,n)\le V(x_0,n)\le C_1\,V(x,n)$. On manifolds, the necessity of (4.22) was proved in [37]. The sufficiency and the estimates (4.23) were studied in [19], [34], [35], [36] with assumptions on the curvature. With the work [30], L. Saloff-Coste obtained them under the Poincaré inequality assumption.
Proof. We use the Gaussian estimates $G(c_l,C_l,C_r,c_r)$. They yield
(4.24)
$$ C^{-1}\,m(y)\sum_{n=d^2(x,y)}^{+\infty}\frac1{V(x,\sqrt n)}\ \le\ G(x,y)\ \le\ C\,m(y)\sum_{n=d^2(x,y)}^{+\infty}\frac1{V(x,\sqrt n)}\,. $$
The lower bound is a consequence of
$$ G(x,y)=\sum_{n=0}^{+\infty}p_n(x,y)\ \ge\ \sum_{n=d^2(x,y)}^{+\infty}p_n(x,y)\ \ge\ \sum_{n=d^2(x,y)}^{+\infty}\frac{c_l\,m(y)}{V(x,\sqrt n)}\,e^{-C_l}\,. $$
The upper bound is obtained by dividing the sum $G(x,y)$ into two parts:
$$ \sum_{n=d^2(x,y)}^{+\infty}p_n(x,y)\ \le\ C_r\,m(y)\sum_{n=d^2(x,y)}^{+\infty}\frac1{V(x,\sqrt n)} $$
and
$$ \sum_{n=0}^{d^2(x,y)}p_n(x,y)\ \le\ \sum_{n=d(x,y)}^{d^2(x,y)}\frac{C_r\,m(y)}{V(x,\sqrt n)}\,e^{-c_r\,d^2(x,y)/n}\ \le\ C_r\,m(y)\sum_{n=d(x,y)}^{d^2(x,y)}\frac1{V(x,2\,d(x,y))}\,\underbrace{C_1^{\,\log_2\frac{2\,d(x,y)}{\sqrt n}}\,e^{-c_r\,d^2(x,y)/n}}_{\text{bounded}}\ \le\ C\,m(y)\sum_{n=d^2(x,y)}^{2\,d^2(x,y)}\frac1{V(x,\sqrt n)}\,. $$
The proposition follows from (4.24) since
$$ \sum_{n=d^2(x,y)}^{+\infty}\frac1{V(x,\sqrt n)}=\sum_{k=d(x,y)}^{+\infty}\underbrace{\#\{n\in\mathbb N:\ k\le\sqrt n<k+1\}}_{=\,2k+1}\;\frac1{V(x,k)}\,. $$
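The regrouping of the series used above can be checked numerically; in the sketch below (an illustration added here; the quadratic volume $V$ is a hypothetical example), $V(x,\sqrt n)$ only depends on $\lfloor\sqrt n\rfloor$, so each $k$ is hit $2k+1$ times.

```python
import math

def V(k):
    # hypothetical polynomial volume, V(x, k) ~ (1 + k)^2
    return (1 + k) ** 2

d = 7
N = 10**6            # common truncation of the two series (N = 1000^2,
                     # so the last block k = 999 is complete)
lhs = sum(1.0 / V(math.isqrt(n)) for n in range(d * d, N))
rhs = sum((2 * k + 1) / V(k) for k in range(d, math.isqrt(N - 1) + 1))
print(abs(lhs - rhs))   # the two sums agree up to rounding
```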
Acknowledgements. I wish to thank L. Saloff-Coste and T. Coulhon for encouragement and useful remarks.
References.

[1] Aronson, D. G., Bounds for the fundamental solution of a parabolic equation. Bull. Amer. Math. Soc. 73 (1967), 890-896.
[2] Bombieri, E., Theory of minimal surfaces and a counter-example to the Bernstein conjecture in high dimensions. Mimeographed Notes of Lectures held at Courant Institute. New York University, 1970.
[3] Chung, F. R. K., Yau, S. T., A Harnack inequality for homogeneous graphs and subgraphs. Comm. Anal. Geom. 2 (1994), 628-639.
[4] Coulhon, T., Saloff-Coste, L., Puissances d'un opérateur régularisant. Ann. Inst. H. Poincaré, Probabilités et Statistiques 26 (1990), 419-436.
[5] Coulhon, T., Saloff-Coste, L., Minorations pour les chaînes de Markov unidimensionnelles. Probab. Theory Relat. Fields 97 (1993), 423-431.
[6] Coulhon, T., Saloff-Coste, L., Isopérimétrie sur les groupes et les variétés. Revista Mat. Iberoamericana 9 (1993), 293-314.
[7] Davies, E. B., Heat kernels and spectral theory. Cambridge University Press, 1989.
[8] Davies, E. B., Large deviations for heat kernels on graphs. J. London Math. Soc. 47 (1993), 65-72.
[9] Delmotte, T., Inégalité de Harnack elliptique sur les graphes. Colloquium Mathematicum 72 (1997), 19-37.
[10] Delmotte, T., Estimations pour les chaînes de Markov réversibles. C. R. Acad. Sci. Paris 324 (1997), 1053-1058.
[11] Fabes, E., Stroock, D., A new proof of Moser's parabolic Harnack inequality using the old ideas of Nash. Arch. Rational Mech. Anal. 96 (1986), 327-338.
[12] Grigor'yan, A., The heat equation on noncompact Riemannian manifolds. Math. USSR Sb. 72 (1992), 47-77.
[13] Grigor'yan, A., Gaussian upper bounds for the heat kernel on arbitrary manifolds. J. Differential Geom. 45 (1997), 33-52.
[14] Hajlasz, P., Koskela, P., Sobolev meets Poincaré. C. R. Acad. Sci. Paris 320 (1995), 1211-1215.
[15] Hebisch, W., Saloff-Coste, L., Gaussian estimates for Markov chains and random walks on groups. Ann. Probab. 21 (1993), 673-709.
[16] Holopainen, I., Soardi, P. M., A strong Liouville theorem for p-harmonic functions on graphs. Ann. Acad. Sci. Fennicae Math. 22 (1997), 205-226.
[17] Jerison, D., The Poincaré inequality for vector fields satisfying Hörmander's condition. Duke Math. J. 53 (1986), 503-523.
[18] Kusuoka, S., Stroock, D., Application of Malliavin calculus, part 3. J. Fac. Sci. Univ. Tokyo, Ser. IA, Math. 34 (1987), 391-442.
[19] Li, P., Yau, S. T., On the parabolic kernel of the Schrödinger operator. Acta Math. 156 (1986), 153-201.
[20] Merkov, A. B., Second-order elliptic equations on graphs. Math. USSR Sb. 55 (1986), 493-509.
[21] Moser, J., On Harnack's theorem for elliptic differential equations. Comm. Pure Appl. Math. 14 (1961), 577-591.
[22] Moser, J., A Harnack inequality for parabolic differential equations. Comm. Pure Appl. Math. 17 (1964), 101-134. Correction in 20 (1967), 231-236.
[23] Moser, J., On a pointwise estimate for parabolic differential equations. Comm. Pure Appl. Math. 24 (1971), 727-740.
[24] Nash, J., Continuity of solutions of parabolic and elliptic equations. Amer. J. Math. 80 (1958), 931-954.
[25] Pang, M. M. H., Heat kernels of graphs. J. London Math. Soc. 47 (1993), 50-64.
[26] Rigoli, M., Salvatori, M., Vignati, M., A global Harnack inequality on graphs and some related consequences. Preprint.
[27] Russ, E., Riesz transforms on graphs for 1 ≤ p ≤ 2. To appear in Math. Scand.
[28] Saloff-Coste, L., Uniformly elliptic operators on Riemannian manifolds. J. Differential Geom. 36 (1992), 417-450.
[29] Saloff-Coste, L., A note on Poincaré, Sobolev and Harnack inequalities. Internat. Math. Res. Notices 2 (1992), 27-38.
[30] Saloff-Coste, L., Parabolic Harnack inequality for divergence form second order differential operators. Potential Analysis 4 (1995), 429-467.
[31] Saloff-Coste, L., Lectures on finite Markov chains. Cours de l'École d'Été de Probabilités de Saint-Flour, 1996.
[32] Saloff-Coste, L., Stroock, D., Opérateurs uniformément sous-elliptiques sur les groupes de Lie. J. Funct. Anal. 98 (1991), 97-121.
[33] Stroock, D. W., Zheng, W., Markov chain approximations to symmetric diffusions. Ann. Inst. H. Poincaré 33 (1997), 619-649.
[34] Varopoulos, N., The Poisson kernel on positively curved manifolds. J. Funct. Anal. 44 (1981), 359-380.
[35] Varopoulos, N., Green's function on positively curved manifolds. J. Funct. Anal. 45 (1982), 109-118.
[36] Varopoulos, N., Green's function on positively curved manifolds, II. J. Funct. Anal. 49 (1982), 170-176.
[37] Varopoulos, N., Potential theory and diffusion on Riemannian manifolds. Conference on harmonic analysis in honor of Antoni Zygmund, Vol. I, II, Wadsworth Math. Ser., Wadsworth, Belmont, Calif. (1983), 821-837.
[38] Varopoulos, N., Small time Gaussian estimates of the heat diffusion kernel, Part 1: the semigroup technique. Bull. Sci. Math. 113 (1989), 253-277.
[39] Varopoulos, N., Coulhon, T., Saloff-Coste, L., Analysis and geometry on groups. Cambridge University Press, 1993.
Received: 26 November 1997

Thierry Delmotte
Eidgenössische Technische Hochschule Zürich
D-Math HG-649.1
ETH Zentrum
CH-8092 Zürich, SWITZERLAND
[email protected]