Existence and Regularity of Solutions to the Minimal
Surface Equation
T. Begley, H. Jackson, J. Mathews, P. Rockstroh
February 11, 2013
1
Introduction
Minimal surfaces date back to the introduction of calculus of variations and were studied by both
Euler and Lagrange; indeed we will shortly present a derivation of the minimal surface equation
as the Euler-Lagrange equations of the area functional on graphs. In this report we will be
concerned with establishing existence and regularity properies of solutions to the minimal surface
equation. We proceed by constructing first a Lipschitz continuous solution, before showing that
this solution necessarily satisfies additional regularity properties. There are many approaches to
the latter problem, the one we present very PDE theoretic and is based, in part, on the celebrated
theory of DeGiorgi, Nash and Moser. Finally we will drop the Lipschitz condition on our boundary
data and generalise to arbitrary continuous boundary conditions. As promised though, we first
derive the minimal surface equation by way of motivation.
1.1
Derivation of the Minimal Surface Equation
Suppose that Ω ⊂ Rn is a bounded domain (that is, Ω is open and connected). Fix φ : ∂Ω → R,
and introduce
L(Ω; φ) := {u ∈ C 0,1 (Ω); u|∂Ω = φ},
(1.1)
the set of Lipschitz functions on Ω whose restriction to ∂Ω is φ. We consider the problem of
minimising the area functional
Z p
A(u) :=
1 + |Du|2
(1.2)
Ω
over functions u ∈ L(Ω; φ). Note that since Lipschitz functions are almost everywhere differentiable, we can make sense of this functional for all u ∈ L(Ω; φ). Suppose u ∈ L(Ω; φ) is a
minimiser and that v ∈ Cc∞ (Ω). Clearly for all t ∈ R, u + tv ∈ L(Ω; φ). Therefore the function
f : t 7→ A(u + tv)
has a local minimum at t = 0, and so f 0 (0) = 0. Differentiating f and evaluating at t = 0 leads to
Z
Du · Dv
0
p
f (0) =
= 0,
(1.3)
1 + |Du|2
Ω
1
which is precisely what it means for u to be a weak solution of the equation
!
Du
= 0.
div p
1 + |Du|2
Using the summation convention we may rewrite this in the following equivalent form
!
Di u
Mu := Di p
= 0 in Ω.
1 + |Du|2
(1.4)
This is known as the minimal surface equation in divergence form and will be the object of our
study for the remainder of this report.
Remark. Returning to (1.3), we note that if we assume u ∈ C 2 (Ω), then integrating by parts
yields
!
Z
Z
Du
Du · Dv
p
= − div p
v,
0=
1 + |Du|2
1 + |Du|2
Ω
Ω
with the boundary terms vanishing since v is compactly supported in Ω. Since v ∈ Cc∞ (Ω) was
arbitrary, we conclude that u satisfies (1.4) in the classical sense as we would expect.
We now observe that equation (1.4) may be expanded as follows:
! "
#
Di u
Dii u
Di uDij uDj u
Mu = Di p
= p
−
.
1 + |Du|2
1 + |Du|2 (1 + |Du|2 )3/2
which may be equivalently expressed as
n X
Di uDj u
δij −
Dij u = 0.
1 + |Du|2
i,j=1
(1.5)
From equation (1.5) we immediately recognize that this is a second order quasilinear PDE, since
the coefficients of the highest order terms depend only on lower order terms. A quick calculation
now establishes that the minimal surface equation is elliptic. First we let
aij (x, Du) = δij −
Di uDj u
.
1 + |Du|2
Recall that the PDE (1.5) is elliptic if for each x and u there exists a λ > 0 such that aij (x, Du)ξi ξj ≥
λ|ξ|2 for all ξ ∈ Rn . We observe
Di uDj u
aij ξi ξj = δij −
ξi ξj
1 + |Du|2
Di uξi Dj uξj
= |ξ|2 −
1 + |Du|2
Now by the Cauchy-Schwarz inequality we have (Du · ξ)2 ≤ |Du|2 |ξ|2 which implies
|Du|2
2
aij ξi ξj ≥ |ξ| 1 −
.
1 + |Du|2
(1.6)
Since |Du|2 /(1 + |Du|2 ) < 1, (1.6) implies aij ξi ξj ≥ λξi ξj which establishes that the PDE in (1.5)
is elliptic.
2
2
Existence of Solutions
In this section we prove the existence of Lipschitz continuous solutions u to the minimal surface
equation (1.3), with given boundary data φ. We assume throughout that our domain Ω ⊂ Rn is
bounded, with C 2 boundary. For now we assume that φ ∈ C(∂Ω), later on we will need to make
additional assumptions on the regularity of φ to deduce existence. Most of the material in this
section is based on [3] and [5], with many details expanded upon.
The functional A and its properties
2.1
We first prove a few basic properties of the area functional A,as defined in (1.2). Note that we
can make sense of A(u) for any u ∈ W 1,1 (Ω). The following property will be useful in various
proofs throughout this section:
Theorem 2.1. For any function u ∈ W 1,1 (Ω),
Z
Z p
1
n+1
(gn+1 + u divg)dx; g ∈ Cc (Ω; R ), kgk∞ ≤ 1 .
1 + |Du|2 dx = sup
(2.1)
Ω
Ω
Proof. We first show that
Z 0
Z
1
0
n
(v divg)dx; g ∈ Cc (Ω ; R ), kgk∞ ≤ 1
|Dv|dx = sup
(2.2)
Ω
Ω
for all v ∈ W 1,1 (Ω0 ) where Ω0 ⊂ Rn+1 . To see this we use Cauchy Schwartz and integration by
parts to get
Z 0
Z 0
Z 0
Z 0
(v divg)dx =
(Dv) · (−g)dx ≤
|Dv||g|dx ≤
|Dv|dx,
Ω
Ω
Ω
Ω
and the other direction follows on choosing g to be approximations to −Dv/|Dv| in Cc1 . To
establish (2.1) let u ∈ C 1 (Ω) and introduce v := −xn+1 + u. Integrating by parts we have
Z
Z
n+1
n
X
X
∂u
∂u
(gn+1 + u divg)dx = (gn+1 −
gi )dx = (gn+1 −
gi )dx = (v divg)dx,
∂x
∂x
i
i
Ω
Ω
Ω
Ω
i=1
i=1
Z
Z
Noting that |Dv| =
p
1 + |Du|2 and the result follows from (2.2).
Corollary 2.2 (Weak Lower Semicontinuity). Let {uj }j∈N ⊂ W 1,1 (Ω) be a sequence of functions
which converge in L1 (Ω) to a function u ∈ W 1,1 (Ω). Then
A(u) ≤ lim inf A(uj ).
j→∞
Proof. Let g ∈ Cc1 (Ω; Rn+1 ), with kgk∞ ≤ 1. Then we have
Z
Z
(gn+1 + u divg)dx = lim (gn+1 + uj divg)dx ≤ lim inf A(uj )
Ω
j→∞
j→∞
Ω
The first equality holds since g is compactly supported and continuously differentiable and so
divg is bounded on Ω. Weak lower semi-continuity now follows on taking the supremum over all
g.
3
Corollary 2.3 (Strict Convexity). The area functional A is strictly convex in the following sense:
if u, v ∈ W 1,1 (Ω) are such that Du 6= Dv then for all t ∈ (0, 1)
A(tu + (1 − t)v) < tA(u) + (1 − t)A(v)
Proof. This follows immediately from the strict convexity of the function
p
x 7→ 1 + |x|2 ,
which follows from the fact that the Hessian is positive definite at every x.
We can make sense of the area functional for any u ∈ W 1,1 (Ω), and in fact, any u ∈ BV(Ω). For
our purposes however it will be enough to restrict our attention to Lipschitz continuous functions
on Ω, which are necessarily W 1,1 (Ω) and so the area functional remains well defined on this class
of functions.
Theorem 2.4 (Characterisation of W 1,∞ ). Let Ω be open and bounded, with ∂Ω of class C 2 Then
u : Ω → R is Lipschitz continuous if and only if u ∈ W 1,∞ (Ω).
2.2
Lipschitz Functions and Minimisers of A
The next step is to choose a suitable space over which to minimise A. We want to choose a
suitably large space in order that we may appeal to certain compactness properties in order to
show existence, while at the same time not discarding so much regularity that proving additional
regularity later becomes too hard. It turns out that the space of Lipschitz continuous functions
strikes the right balance. We first present some definitions and useful facts. For a function u
which is Lipschitz continuous on Ω, we denote its Lipschitz constant as
|u(x) − u(y)|
; x, y ∈ Ω, x 6= y .
[u]Ω = sup
|x − y|
Definition. Let k > 0, and set Lk (Ω) to be the space of Lipschitz continuous functions with
Lipschitz constant less than or equal to k, i.e.
Lk (Ω) = {u ∈ C 0,1 (Ω) : [u]Ω ≤ k}.
For boundary data φ ∈ C(∂Ω), let L(Ω; φ) be the set of Lipschitz continuous functions which
agree with φ on ∂Ω, as defined in (1.1). Finally, let Lk (Ω; φ) be the set of Lipschitz continuous
functions with Lipschitz constant less than or equal to k, and which agree with φ on ∂Ω, i.e.
Lk (Ω; φ) = Lk (Ω) ∩ L(Ω; φ).
Now we present two results about sequences of functions in these spaces. Combining them with
the Arzelà-Ascoli theorem will allow us to conclude compactness of the space Lk (Ω; φ).
Lemma 2.5. Any sequence of functions {uj } ⊂ Lk (Ω) is equicontinuous in Ω.
Proof. Pick ε > 0, and let δ < kε . Then for any x, y ∈ Ω with |x − y| < δ, we have
|uj (x) − uj (y)| ≤ k|x − y| < kδ < ε,
for any j ∈ N. Hence the sequence is equicontinuous.
4
Lemma 2.6. Any sequence of functions {uj } ⊂ Lk (Ω; φ) is uniformly bounded in Ω.
Proof. Let dmax := sup{|x − y| : x, y ∈ Ω} be the greatest distance between any two points in Ω.
Since we are assuming that Ω is bounded, we know that dmax is finite. By compactness of ∂Ω and
continuity of φ, we also know that kφkL∞ (∂Ω) is finite. Therefore, using the uniform bound of k
on the Lipschitz constants, we can see that
|uj (x)| ≤ kφkL∞ (∂Ω) + kdmax < ∞
for any x ∈ Ω and j ∈ N. Hence the sequence is uniformly bounded.
Using the above results, we can now very easily deduce the existence of a minimiser in the class
Lk (Ω, φ). From here it will take some extra work, as well as some additional assumptions on the
domain Ω, to show the existence of a minimiser in L(Ω, φ).
Theorem 2.7. Let φ ∈ C(∂Ω), and suppose that Lk (Ω, φ) is non-empty. Then there exists a
function u[k] ∈ Lk (Ω, φ) such that
A(u[k] ) = inf{A(u) : u ∈ Lk (Ω, φ)}.
Proof. Let {uj }j∈N be a minimising sequence in Lk (Ω, φ). By Lemmas 2.5 and 2.6 the sequence
is equicontinuous and uniformly bounded. We can therefore apply the Arzelà-Ascoli theorem to
find a subsequence which converges uniformly to a function u[k] ∈ L(Ω, φ). Since convergence
is uniform and Ω is bounded it follows that uj converge to u[k] in L1 . Hence by weak lower
semicontinuity of A we deduce that
A(u[k] ) ≤ lim inf A(uj ) = inf{A(u) : u ∈ Lk (Ω, φ)} ≤ A(u[k] ),
j→∞
which completes the proof.
Proposition 2.8. Suppose u[k] ∈ Lk (Ω; φ) is the minimum defined above for some k, that is
A(u[k] ) = inf{A(v); v ∈ Lk (Ω, φ)}.
If in addition we have the strict inequality [u[k] ]Ω < k, then u[k] is the minimum of A in L(Ω; φ),
so
A(u[k] ) = inf{A(v); v ∈ L(Ω, φ)}.
Proof. Let 0 ≤ t ≤ 1 and v ∈ L(Ω; φ), and set vt = u[k] + t(v − u[k] ). Then note that vt |∂Ω = φ
and if t is sufficiently small then [vt ]Ω < k. To see this, find ε > 0 such that [u[k] ]Ω = k − ε and
choose N sucht that [v − u[k] ]Ω ≤ N (which is possible, since both v and u[k] Lipschitz). Then
take t = ε/2N and use the triangle inequality.
Since u[k] is the minimiser of A in Lk (Ω, φ), we must then have A(uk ) ≤ A(vt ). Thus by the
convexity of the area functional we have
A(uk ) ≤ A(vt ) ≤ (1 − t)A(uk ) + tA(v),
which, upon rearrangement, completes the proof.
5
To finish the proof of existence it remains to show two things. First, we must show that for k
large enough, Lk (Ω; φ) is non-empty. This is trivial since we have assumed a C 2 domain with
continuous boundary data. Hence we can produce a Lipschitz function agreeing with φ on the
boundary simply by solving the Dirichlet problem for the Laplacian. According to Proposition 2.8,
to show the existence of a minimiser in L(Ω, φ) it is sufficient to obtain estimates on the Lipschitz
constant of the minimisers u[k] in Lk (Ω, φ). We do this by formulating a weak maximum principle
which will allow us to restrict our attention to boundary estimates on the Lipschitz constant, and
then obtain the desired estimates using the notion of a barrier function.
2.3
Maximum principle
Definition (Supersolutions and Subsolutions). function w ∈ Lk (Ω) is called a supersolution
(subsolution) for the area functional A if for all v ∈ Lk (Ω; w) with v ≥ w (v ≤ w) on Ω we have
A(v) ≥ A(w).
Proposition 2.9 (Weak maximum principle). Let w and z be a supersolution and a subsolution
of A in Lk (Ω) respectively. If w ≥ z on the boundary ∂Ω then w ≥ z in Ω.
Proof. Seeking a contradiction suppose that the maximum principle doesn’t hold, then the set
K := {x ∈ Ω : w(x) < z(x)} is non empty. Let v := max{z, w}. Then it is clear that v = w on
the boundary, v ∈ Lk (Ω) and v ≥ w in Ω. Thus, since w is a supersolution, we have A(v) ≥ A(w).
On the set K, v = z and therefore the area functional on K satisfies A(z; K) ≥ A(w; K). By
considering min{z, w} we similarly see that A(z; K) ≤ A(w; K), so we deduce
A(z; K) = A(w; K).
Since z = w on the boundary ∂K (as z and w are continuous), and z > w in K, there exists a set
of non zero measure where Dw 6= Dz. By the strict convexity of the area functional we therefore
get
1
1
w+z
; K < A(w; K) + A(z; K) = A(w; K).
A
2
2
2
But since w is a supersolution in Lk (Ω) and (w + z)/2 ≥ w on K we also have
w+z
A
; K ≥ A(w; K),
2
giving the desired contradiction.
Proposition 2.10. Let w and z be a supersolution and a subsolution of A in Lk (Ω) respectively.
Then we have
sup[z(x) − w(x)] ≤ sup [z(y) − w(y)].
(2.3)
x∈Ω
y∈∂Ω
Proof. First, we note that if w is a supersolution then so is w + α for any α ∈ R. To see this
suppose v ∈ Lk (Ω; w + α) and v ≥ w + α. Then v − α ∈ Lk (Ω; w) and v − α ≥ w so we
get A(v − α) ≥ A(w). w + α is then a supersolution since A(v − α) = A(v). Now choose
α = supy∈∂Ω [z(y) − w(y)], and note that for any x ∈ ∂Ω,
6
z(x) ≤ w(x) + sup [z(y) − w(y)].
y∈∂Ω
After applying the maximum principle we get that the above equation holds for all x ∈ Ω, and
taking the supremum over all x gives (2.3).
Corollary 2.11. If u and v both minimise the area functional A in Lk (Ω) then
sup |u(x) − v(x)| ≤ sup |u(x) − v(x)|.
x∈Ω
x∈∂Ω
Proof. This comes immediately from Proposition 2.10, since any area minimising function is both
a supersolution and a subsolution.
Proposition 2.12 (Reduction to boundary estimates). Suppose u minimises the area functional
in Lk (Ω), then we have
|u(x) − u(y)|
[u]Ω ≤ sup
.
|x − y|
x∈Ω,y∈∂Ω
Proof. Let x1 , x2 be two distinct points in Ω and set r = x2 −x1 . If u minimises area in Lk (Ω) then
ur (x) := u(x + r) minimises area in Lk (Ωr ), where Ωr := {x + r : x ∈ Ω}. Note that x2 ∈ Ω ∩ Ωr
so the intersection is non empty, and that both u and ur minimise area in Lk (Ω ∩ Ωr ). We can
apply Corollary 2.11, which gives us that there is some z ∈ ∂(Ω ∩ Ωr ) such that
|u(x1 ) − u(x2 )| = |ur (x2 ) − u(x2 )| ≤ |ur (z) − u(z)| = |u(z + r) − u(z)|.
Then dividing by |r| = |x1 − x2 | we get
|u(z) − u(z + r)|
|u(x1 ) − u(x2 )|
≤
.
|x1 − x2 |
|r|
Since z ∈ ∂(Ω ∩ Ωr ) then either z ∈ ∂Ω, or z ∈ ∂Ωr in which case z + r ∈ ∂Ω. Thus one of z or
z + r lie on the boundary ∂Ω, so
|u(z) − u(z + r)|
|u(x) − u(y)|
|u(x1 ) − u(x2 )|
≤
≤ sup
,
|x1 − x2 |
|r|
|x − y|
x∈Ω,y∈∂Ω
and the result follows by taking the supremum on the left hand side.
2.4
Barriers and completion of proof of existence
To complete the proof, we introduce the notion of a barrier, for which it is necessary to assume
that φ is Lipschitz. We show that the existence of barriers is sufficient for existence of a solution
and then construct barriers under certain assumptions on Ω to prove that a solution exists.
Definition (Upper and lower barriers). Let x ∈ Ω, and let d(x) = dist(x, ∂Ω). For c > 0 define
Σc := {x ∈ Ω : d(x) < c} and Γc := {x ∈ Ω : d(x) = c}.
Let φ ∈ C 0,1 (∂Ω). An upper barrier v + relative to φ is a Lipschitz function defined on Σco for
some co , which satisfies
7
(i) v + |∂Ω = φ (v + agrees with φ on the outer edge),
(ii) v + ≥ sup∂Ω φ on Γc0 (v + lies above φ on the inner edge),
(iii) v + is a supersolution on Σco .
A lower barrier v − is a Lipschitz function if it satisfies (1), v − ≤ inf ∂Ω φ on Γc0 and is a subsolution
on Σco .
Theorem 2.13. Let φ be a Lipschitz function on ∂Ω, and suppose upper and lower barriers v +
and v − (relative to φ) exist. Then the area functional A achieves its minimum on L(Ω; φ).
Proof. Let Q := max [v + ]Σc0 [v − ]Σc0 , and choose k > Q large enough that Lk (Ω, φ) is non-empty.
Following earlier arguments, there exists an area minimising function u ∈ Lk (Ω, φ). We claim that
in fact [u]Ω < k and thus via Proposition 2.8 that u is the desired minimiser of A on L(Ω, φ). We
begin by observing that u also minimises area in Lk (Σco ). Moreover it follows that
inf φ ≤ u(x) ≤ sup φ ∀x ∈ Ω
∂Ω
∂Ω
To see this suppose the set K := {x ∈ Ω : sup∂Ω φ < u(x)} is non-empty. Since u is continuous,
it follows that K is open and furthermore since u ≤ sup∂Ω φ on ∂Ω, we have u = sup∂Ω φ on ∂K.
Now u minimises A on K, which in turn implies that u ≡ sup∂Ω φ on K, which is a contradiction.
In particular we have v − (x) ≤ u(x) ≤ v + (x) for x ∈ Γc0 , and thus by the weak maximum principle,
we deduce
v − (x) ≤ u(x) ≤ v + (x)
for x ∈ Σc0 . Since v − = u = v + on ∂Ω, it now follows that if x ∈ Σc0 and y ∈ ∂Ω, then
u(x) − u(y) ≤ v + (x) − v + (y) if u(x) ≥ u(y) and u(y) − u(x) ≤ v − (y) − v − (x) if u(x) ≤ u(y).
Therefore by the definition of Q
|u(x) − u(y)| ≤ Q|x − y|
for x ∈ Σc0 and y ∈ ∂Ω.
We now extend this to arbitrary x ∈ Ω. Suppose d(x) > c0 , since inf ∂Ω φ ≤ u(x) ≤ sup∂Ω φ
we have the following inequality for y ∈ ∂Ω
|u(x) − u(y)| ≤ max sup φ − u(y), u(y) − inf φ ≤ max v + (z) − v + (y), v − (y) − v − (z)
∂Ω
∂Ω
for any z ∈ Γc0 . Choosing z with |z − y| = c0 we conclude that
|u(x) − u(y)| ≤ max [v + ]Σc0 c0 , [v − ]Σc0 c0 ≤ Q|x − y|.
In other words, by Proposition 2.12, [u]Ω ≤ Q < k and therefore Proposition 2.8 implies that u is
a minimiser in L(Ω, φ).
To complete the proof of existence, we will construct an upper barrier. This is sufficient since if
v is an upper barrier relative to −φ then −v is a lower barrier relative to φ. In order to do so we
need to make additional assumptions about the domain Ω, and in particular we must introduce
the notion of mean curvature.
8
Definition. Assume that ∂Ω is C 2 . For y ∈ ∂Ω we denote by ν(y) and T (y) respectively the inner
normal to ∂Ω at y and the tangent hyperplane to ∂Ω at y. The curvatures at y are determined
as follows. By a rotation of the coordinates we may assume that the xn coordinate axis lies in
the direction ν(y). Then in some neighbourhood N = N (y) of y, ∂Ω is then given by xn = ψ(x0 )
where x0 = (x1 , . . . , xn−1 ), ψ ∈ C 2 (T (y) ∩ N ) and Dψ(y 0 ) = 0. The eigenvalues κ1 , . . . , κn−1 of the
Hessian matrix (D2 ψ(y 0 )) are called the principal curvatures of ∂Ω at y, and the mean curvature
of ∂Ω at y is given by
n−1
1 X
H(y) =
κi
n − 1 i=1
We use the following facts about the distance function d = dist(x, ∂Ω), the proofs of which can
be found in [2].
(i) For sufficiently small c0 , d ∈ C 2 (Σc0 ).
(ii) If d(x) = c ≤ c0 then −∆d(x) = nH(x) where H(x) is the mean curvature of Γc at x.
(iii) If the mean curvature H of ∂Ω is non-negative at every point, then d is superharmonic in
Σc0 (so ∆d ≤ 0).
The assumption of non-negative mean curvature, also referred to as mean convexity, combined
with these facts will allow us to construct an upper barrier, under the additional assumption that
φ ∈ C 2 (∂Ω). Later, in Section 5 we will show that the regularity assumptions on φ can be relaxed.
Theorem 2.14 (Existence of a Lipschitz solution to the MSE). Let Ω be a bounded C 2 domain
whose boundary has non negative mean curvature, and φ ∈ C 2 (∂Ω). Then there exists u ∈
W 1,∞ (Ω) which minimises the area functional A, i.e. u solves
Z
Du · Dη
p
=0
1 + |Du|2
Ω
for all η ∈ Cc1 (Ω) and the boundary condition u|∂Ω = φ.
Proof. We will construct a barrier of the form
v(x) = φ(x) + ψ(d(x)),
for some ψ ∈ C 2 [0, R] to be determined, where R < c0 , and c0 is as above. We choose ψ such that
ψ(0) = 0 and
ψ(R) ≥ 2 sup |φ| =: L
Ω
so that v satisfies the first and second properties of an upper barrier. We also specify that ψ 0 (t) ≥ 1
and ψ 00 (t) < 0 which we will make use of shortly in our calculations. We just need to show that
v is a supersolution, and it is sufficient to show that v satisfies
!
n
X
Di v
Mv =
Di p
≤0
1 + |Dv|2
i=1
9
for all x ∈ Ω, which is equivalent to
2 3/2
ζ(v) := (1 + |Dv| )
n
X
2
Mv = (1 + |Dv| )∆v −
Di vDj vDij v ≤ 0.
i,j=1
A tedious calculation which the reader is encouraged to check yields (where we have used summation convention)
ζ(v) =(1 + |Dφ|2 )∆φ − Di φDj φDij φ
+ψ 0 [2Di φDj d∆φ + (1 + |Dφ|2 )∆d − Di dDj φDij φ − Di φDj φDij d]
+ψ 02 [∆φ + 2Di φDj d∆d − Di dDj dDij φ] + ψ 03 ∆d
+ψ 00 [1 + |Dφ|2 − (Di φDj d)2 ].
(2.4)
This follows on using the property that |Dd| = 1, so Di dDij d = 0, and also noting that Di ψ(d) =
ψ 0 Di d and Dij ψ(d) = ψ 00 Di dDj d + ψ 0 Dij d. We now use the fact that ψ 0 ≥ 1, so we have ψ 02 ≥ ψ 0 .
We also note that φ and d are C 2 in ΣR and hence the second order derivatives are bounded.
Thus we can bound the first three terms in (2.4) (up to the term involving ψ 03 ) by Cψ 02 for some
C, dependent on the region and φ. We now turn our attention to the term involving ψ 00 , noting
that |Dφ|2 − (Di φDj d)2 ≥ |Dφ|2 − |Dφ|2 |Dd|2 ≥ 0 by Cauchy-Schwartz, and since ψ 00 < 0 we then
get then final term is no greater than ψ 00 . As a result, we reduce (2.4) to the inequality
ζ(v) ≤ ψ 00 + Cψ 02 + ψ 03 ∆d,
which further reduces to
ζ(v) ≤ ψ 00 + Cψ 02
(2.5)
since ∆d ≤ 0 in ΣR . It remains to choose ψ such that the right hand side of (2.5) is less than zero,
and that ψ satisfies the technical conditions above. We can choose ψ by solving the differential
equation ψ 00 + Cψ 02 = 0 which gives
ψ(t) =
1
log(1 + βt) + γ
C
We set γ = 0 so that ψ(0) = 0 and it is clear that ψ 00 < 0, so it remains to choose β and R such
that v is an upper barrier, i.e such that ψ 0 ≥ 1 and ψ(R) ≥ L. We calculate that
ψ 0 (t) =
1 β
1
β
≥
C 1 + βt
C 1 + βR
and
ψ(R) =
1
log(1 + βR),
C
1
thus we need β/(1 + βR) ≥ C and βR ≥ eLC − 1. One way of doing this is setting R = β − 2 , which
1
1
gives us the inequalities β/(1 + β 2 ) ≥ C and β 2 ≥ eLC − 1. By taking β sufficiently large (and
hence R sufficiently small) we construct an upper barrier, which completes the proof of existence
of a Lipschitz solution by Theorem 2.13.
10
Note that the only part in the proof of existence that required ∂Ω to be C 2 was in constructing an
upper barrier. It’s a natural question to ask whether we can still be guaranteed a solution under
weaker assumptions on the boundary. As we will show in the next section this is not the case. If
only a single point on the boundary has negative mean curvature then it is impossible to construct
boundary data for which there is no minimiser. The situation for φ is better, and we will show
later how we can reduce the regularity assumptions on φ while still maintaining existence of a
minimiser.
3
Necessity of mean convexity
In this section we show that mean convexity is essential in the proof of the existence theorem in
Section 2. We will show that if there is a point where the mean curvature H is negative then we
can find some φ ∈ C 2 (∂Ω) such that there is no solution to the minimal surface equation with
these boundary data. To do this, we first present a version of the maximum principle.
Lemma 3.1. Let Ω be a bounded C 2 domain and Γ a non-empty, closed subset of ∂Ω. Let
u, v ∈ C 2 (Ω) ∩ C(Ω ∪ Γ) satisfy
Mv ≤ Mu in Ω
and
v ≥ u on Γ.
Let Ωt := {x ∈ Ω : dist(x, ∂Ω) ≥ t} and let νt be the outward normal of Ωt . If for every open set
V ⊃ Γ we have
!
Z
Dv(y) · νt (y)
dHn−1 = 0,
(3.1)
lim
1− p
2
t→0+ ∂Ωt \V
1 + |Dv(y)|
then we have
v ≥ u in Ω ∪ Γ.
Proof. Let ϕ = max{0, u − v} and ϕk = min{k, ϕ}, we want to show that ϕ ≡ 0 in Ω. Since
Mv ≤ Mu in Ω the same is true in Ωt for all t, since ϕk is positive we have
Z
(Mv − Mu)ϕk ≤ 0.
(3.2)
Ωt
Define
pi
p
and Gi (p) = p
.
G(p) = p
1 + |p|2
1 + |p|2
Note that Mw = Di Gi (Dw), so using integration by parts on (3.2) we get
Z
Z
(G(Dv) − G(Du)) · νt ϕk −
(G(Dv) − G(Du)) · Dϕk ≤ 0
∂Ωt
Ωt
Assume first v > u on Γ, and set V = {x ∈ Ω : u(x) < v(x)}. Notice we have ϕk = Dϕk ≡ 0 on
V and that V is open (as a subset of Ω) and contains Γ. Thus we have
Z
Z
(G(Du) − G(Dv)) · Dϕk ≤
(G(Du) − G(Dv)) · νt ϕk .
(3.3)
Ωt \V
∂Ωt \V
11
Now using the fundamental theorem of the calculus and the chain rule we see
Z 1
d i
i
i
G (Du) − G (Dv) =
G (tDu + (1 − t)Dv)dt
0 dt
Z 1
∂Gi
d
(tDu + (1 − t)Dv) (tDj u + (1 − t)Dj v)dt
=
dt
0 dpj
Z 1
∂Gi
= Dj (u − v)
(tDu + (1 − t)Dv)dt = Dj (u − v)bij
0 dpj
where we defined
(3.4)
1
∂Gi
(tDu + (1 − t)Dv)dt.
0 dpj
Observe bij is elliptic. This follows from Section 1 since aij is elliptic due to Du and Dv being
uniformly bounded. Thus we have bij ξi ξj ≥ λt |ξ|2 for some λt > 0 for x ∈ Ωt . Proceeding, we
have
Z
Z
2
bij Dj ϕk Di ϕk
by ellipticity
λt |Dϕk | ≤
Ωt
Ωt
Z
bij Dj (u − v)Di ϕk since Dϕk = D(u − v) if 0 < u − v < k and 0 otherwise
=
Ωt
Z
=
bij Dj (u − v)Di ϕk by definition of V
Ωt \V
Z
≤
(G(Du) − G(Dv)) · νt ϕk by (3.3) and (3.4)
∂Ωt \V
Z
Z
=
(1 − G(Dv) · νt ) ϕk −
(1 − G(Du) · νt ) ϕk .
(3.5)
Z
bij :=
∂Ωt \V
∂Ωt \V
R
Since |G(Du) · νt | ≤ 1 by the Cauchy-Schwartz inequality, ∂Ωt \V (1 − G(Du) · νt ) ϕk ≥ 0 and thus
(3.5) becomes
Z
Z
Z
2
λt |Dϕk | ≤
(1 − G(Dv) · νt ) ϕk ≤ k
(1 − G(Dv) · νt ) ,
(3.6)
Ωt
∂Ωt \V
∂Ωt \V
where the second inequality follows from the definition
of ϕk . We now let t & 0. By hypothesis
R
the limit of the right hand side is zero so limt&0 Ωt λt |Dϕk |2 = 0, from which we can conclude
|Dϕk | = 0. Since (3.6) holds for all values of k, we can take k → ∞ and conclude that Dϕ ≡ 0 in
Ω. This means that either u ≤ v in Ω, or u − v is constant in Ω If it is the second case then since
u ≤ v on Γ we must have by continuity that u ≤ v in Ω. If we only have v ≥ u on Γ, we replace
v by v + ε in the definition of V and then let ε → 0 to get the result.
We now restate what we are trying to show, the non-existence of a solution.
Theorem 3.2. Let Ω be a bounded C 2 domain and suppose that there is some x0 in ∂Ω with mean
curvature H(x0 ) < 0. Then there exists φ ∈ C 2 (∂Ω) such that the Dirichlet problem
Mu = 0 in Ω
u|∂Ω = φ
has no solution u ∈ C 2 (Ω).
12
Using the version of the maximum principle we have just proved, we now show the theorem
is true. We prove an inequality related to the values that u can take on the boundary and then
show we can easily construct boundary data for which this does not hold.
Proof. We assume that u ∈ C 2 (Ω) is a solution to the Minimal Surface Equation in Ω. Since
H(x0 ) < 0 then by properties of the distance function we stated in §2.4 we have ∆d(x0 ) ≥ 0
where d(x) = dist(x, ∂Ω) as before. Since d ∈ C 2 (Σc ) for sufficently small c then there exists some
R > 0 such that ∆d(x) > ε > 0 for x ∈ Ω ∩ BR (x0 ). We claim that
sup
∂Ω∩BR (x0 )
u<
sup
u + C,
∂Ω\BR (x0 )
where C depends only on the domain Ω. To prove this we apply Lemma 3.1 to two different
functions. Let v(x) := α − βd(x)1/2 for x ∈ Ω ∩ BR (x0 ) and let w(x) := λ − µ dist(x, ∂BR (x0 ))1/2
for x ∈ Ω\BR (x0 ), with constants to be chosen later. We firstly deal with v(x), we can calculate
that
β
1
Mv = − p
,
∆d(x) −
2(d(x) + β 2 /4)
2 d(x) + β 2 /4
by checking that Di v = − 12 βDi dd−1/2 and Dij v = − 12 βDij dd−1/2 + 14 βDi Dj dd−3/2 and recalling
that |Dd| = 1 and Di dDij d = 0.
Since Mu = 0 to apply the lemma we need that Mv ≤ 0 in Ω ∩ BR (x0 ), and having β 2 > 2ε
is sufficent since
1
2
∆d(x) −
>
ε
−
> 0.
2(d(x) + β 2 /4)
β2
Figure 1: Diagram of Ω and BR (x0 )
We want to use the Lemma in Ω1 := Ω ∩ BR (x0 ) and Γ1 = Ω ∩ ∂BR (x0 ). We check that (3.1)
holds, to see this note that we have νt (y) = −Dd(y) and so the integrand reduces to
Dv(y) · νt (y)
βDd(y) · νt (y)
β
1− p
=1+ p
=1− p
.
1 + |Dv(y)|2
2 d(y) + β 2 /4
2 d(y) + β 2 /4
13
which is zero on the boundary of Ω. To apply the
p Lemma we just need to choose α > 0 such that
v(x) ≥ u(x) on Γ1 , and since α ≥ v(x) ≥ α − β diam(Ω) we can choose
p
α = sup u + β diam(Ω).
Γ1
Applying the Lemma we then get for x ∈ Ω1 ∪ Γ1 that
u(x) ≤ v(x) ≤ sup u + β
p
diam(Ω).
Γ1
Thus we get upon taking a supremum in Ω1 that
sup u ≤
Ω1
u+β
sup
p
diam(Ω).
(3.7)
Ω∩∂BR (x0 )
Finally we can use the maximum principle (see [6]) which gives us supΩ1 u = sup∂Ω1 u and thus
since ∂Ω ∩ BR (x0 ) ⊂ ∂Ω1 from (3.7) we have
p
(3.8)
sup u ≤ sup u + β diam(Ω).
∂Ω∩BR (x0 )
Ω∩∂BR (x0 )
We now carry out a similiar procedure for w(x) with x ∈ Ω2 := Ω\BR (x0 ). We can see that
w(x) = λ − µ(|x − x0 | − R)1/2 . We again need that Mv ≤ 0 in Ω2 and a similiar calculation as
before (with d(x) = |x − x0 | − R) gives
1
µ
n−1
−
Mv = − p
.
2 |x − x0 | − R + µ2 /4 |x − x0 | 2(|x − x0 | − R + µ2 /4)
Since
1
n−1
2
n−1
−
≥
− 2,
2
|x − x0 | 2(|x − x0 | − R + µ /4)
|x − x0 | µ
we can ensure the above expression is positive and hence Mv ≤ 0 by taking
r
2 diam(Ω)
µ>
.
n−1
p
By similiar logic to before, we have w(x) ≥ λ − µ diam(Ω) so choosing
p
λ = sup u + µ diam(Ω)
Γ2
ensures that v(x) ≥ u(x) in Γ2 := ∂Ω\BR (x0 ). We can check that the integral in (3.1) is zero as
before by showing the integrand is zero on the boundary, so we can apply the Lemma to get that
p
u(x) ≤ w(x) ≤ sup u + µ diam(Ω)
Γ1
Finally taking the supremum as before and applying the maximum principle in Ω2 we conclude
that
p
sup u ≤ sup u + β diam(Ω)
(3.9)
Ω∩∂BR (x0 )
∂Ω\BR (x0 )
since Ω ∩ ∂BR (x0 ) ⊂ ∂Ω2 . Combining (3.8) and (3.9) gives us the result we wanted,
sup
∂Ω∩BR (x0 )
u<
sup
∂Ω\BR (x0 )
14
u + C.
(3.10)
In fact we have just proved that we can find
φ ∈ c∞ (Ω) for which there is no solution to the
2
Dirichlet problem. For example choose φ ∼ e−x
with peak at 2C in ∂Ω ∩ BR (x0 ) and less than C
in ∂Ω\BR (x0 ) .
4
Regularity
Having established the existence of a Lipschitz continuous solution, which we will henceforth
denote ũ, we now turn to the issue of regularity. Our goal is to show that the solution of the
minimal surface equation that we have constructed is in fact smooth. This takes a lot of work,
but the idea can be summarised roughly as follows. The first step is to prove that the solution ũ
2,2
is in fact in Wloc
. Having done this we can rewrite the minimal surface equation to obtain a linear
equation for the derivatives of ũ. We apply the theory of DeGiorgi-Nash-Moser to conclude that
the derivatives are Hölder continuous. Since the coefficients in the minimal surface equation are
given in terms of the derivatives of ũ, we may conclude that the coefficients are Hölder continuous.
This will then allow us to show that ũ is C 2,α and hence a classical solution of the minimal surface
equation. We finally conclude by appealing to Schauder estimates and a bootstrapping argument
to show that ũ is C ∞ .
4.1
Additional Regularity of the Solution
2,2
We will use following results to show that the solution is in Wloc
.
Definition. Assume that u ∈ L1loc (Ω), and Ω0 ⊂⊂ Ω. Then the i-th difference quotient of size h,
where 0 < |h| < dist(Ω0 , ∂Ω), is defined as
δih u(x) =
u(x + hei ) − u(x)
,
h
for i = 1, . . . , n and x ∈ Ω0 . We also define δ h := (δ1h , . . . , δnh ).
The first two theorems we will only state, proofs may be found in Evans [1].
Theorem 4.1.
(i) Suppose that 1 ≤ p < ∞ and u ∈ W 1,p (Ω). Then for each Ω0 ⊂⊂ Ω
kδ h ukLp (Ω0 ) ≤ CkDukLp (Ω)
for some constant C and for all 0 < |h| < 12 dist(Ω0 , ∂Ω).
(ii) Assume that 1 < p < ∞, u ∈ Lp (Ω0 ), and that there exists a constant C such that
kδ h ukLp (Ω0 ) ≤ C,
for all 0 < |h| < 21 dist(Ω0 , ∂Ω). Then
u ∈ W 1,p (Ω0 ),
with kDukLp (Ω0 ) ≤ C.
15
Theorem 4.2. Let u ∈ W 1,2 (Ω) ∩ W 1,∞ (Ω) satisfy
Z
Fi (Du)Di ϕ = 0,
(4.1)
Ω
for all φ ∈ W01,2 (Ω) with φ ≥ 0, where F is a smooth function satisfying the following two
conditions:
(i) Di Fj = Dj Fi for all i, j,
(ii) There exist c, C > 0 such that c|ξ|2 ≤
Pn
i,j=1
Dj Fi ξi ξj ≤ C|ξ|2 for all ξ ∈ Rn , for all x ∈ Ω.
These conditions can be thought of as local uniform ellipticity conditions on F , and under these
2,2
.
assumption we have u ∈ Wloc
Proof. We begin by constructing a suitable test function ϕ ∈ W01,2 (Ω). In order to do so, let
η ∈ Cc∞ (Ω) and |h| < 12 dist(∂Ω, supp η) and define ϕ := δk−h (η 2 δkh u), where k ∈ {1, . . . , n}.
Also, note that δkh commutes with Dj and furthermore δih commutes with D when δih is applied
componentwise. A quick calculation shows that by substituting our choice of test function into
(4.1) we get
Z
Z
Fi (Du)Di δk−h (η 2 δkh u)dx = −
Ω
δkh Fi (Du)Di (η 2 δkh u)dx,
Ω
1
2
since η is compactly supported in Ω and 0 < |h| < dist(∂Ω, supp η). By (4.1), the LHS of the
above expression is 0, so we have
Z
δkh Fi (Du)Di (η 2 δkh u)dx = 0.
(4.2)
Ω
Now note that
Fi (Du(x + hek )) − Fi (Du(x)) = Fi (Du(x) + hδkh Du(x)) − Fi (Du(x)).
Hence, by the fundamental theorem of calculus,
Z
1 1 d
h
Fi (Du(x) + thδkh Du(x))dt
δk Fi (Du(x)) =
h 0 dt
Z 1
h
= Dj δk u(x)
Dj Fi (Du(x) + thδkh Du(x))dt.
0
Now set
Z
θij :=
1
Dj Fi (Du + thδkh Du)dt.
0
Now if we set v :=
δkh u,
then we may rewrite (4.2) as
Z
θij Dj vDi (η 2 v)dx = 0.
Ω
16
(4.3)
Suppose we are given Ω1 ⊂⊂ Ω, then we can find sets Ω2 and Ω3 such that Ω1 ⊂⊂ Ω2 ⊂⊂ Ω3 ⊂⊂
Ω. In addition, choose |h| < mink dist(∂Ωk , Ωk+1 ). Now choose η ∈ Cc∞ (Ω2 ) such that η = 1 on
Ω1 . By the ellipticity assumption, we find that there exist constants c and C such that
c|ξ|2 ≤ θij ξi ξj ≤ C|ξ|2
in Ω2 . Now since η = 1 on Ω1 , we have
Z
Z
2
|Dv| ≤
Ω1
(4.4)
|D(ηv)|2 .
Ω2
Using the first inequality in (4.4), we also have
Z
Z
1
2
|D(ηv)| ≤
θij Dj (vη)Di (vη),
c Ω2
Ω2
and moreover by the first ellipticity condition, we have θij = θji which implies
Z
Z
1
1
θij Dj (vη)Di (vη) =
θij (v 2 Dj ηDi η + 2ηvDj vDi η + η 2 Dj vDi v).
c Ω2
c Ω2
R
Applying the product rule and the fact that Ω θij Dj vDi (η 2 v)dx = 0 and (4.3) we find
Z
Z
Z
1
1
1
2
2
2
θij Dj vDi (η 2 v)
θij (v Dj ηDi η + 2ηvDj vDi η + η Dj vDi v) =
θij v Dj ηDi η +
c Ω2
c Ω2
c Ω
Z
1
=
θij v 2 Dj ηDi η.
c Ω
Finally using the second inequality in (4.4)
Z
Z
Z
C
1
2
2
2
θij v Dj ηDi η ≤
v |Dη| ≤ C
v2,
c Ω
c Ω2
Ω2
and therefore
Z
Z
2
|Dv| ≤ C
Ω1
v2.
(4.5)
Ω2
1,2
Now since v = δkh u and u ∈ Wloc
(Ω) we have that v ∈ L2 (Ω2 ) and
kvkL2 (Ω2 ) ≤ C̄kDukL2 (Ω3 ) =: K.
By (4.5) and the above result, we have
Z
|Dv|2 ≤ CK.
(4.6)
Ω1
2,2
Hence, kδih DukL2 (Ω1 ) ≤ K̃ which implies kD2 ukL2 (Ω1 ) ≤ K̃ and therefore u ∈ Wloc
(Ω).
Applying the above results to our specific situation, we first note that ũ is Lipschitz and thus is
W 1,∞ (Ω). Hence since Ω is bounded,
we also have that ũ ∈ W 1,2 (Ω). We can therefore apply
p
theorem 4.2 with F (p) := p/ 1 + |p|2 which satisfies the ellipticity conditions to conclude that
2,2
ũ ∈ Wloc
.
17
4.2
4.2.1
DeGiorgi-Nash-Moser
Overview
The proof proceeds in multiple stages, many of which are very technical, so in the interest of
2,2
clarity we present a brief informal overview of the strategy. We have just shown that ũ ∈ Wloc
.
With this result in hand we perform the following trick. Fix some k = 1, . . . , n, let w := Dk ũ and
choose a test function of the form Dk φ for φ ∈ Cc∞ (Ω). Then substituting this into (1.3) we have
Z
Z
Di ũDi (Dk φ)
p
= − ãij (x)Dj wDi φ,
0=
1 + |Dũ|2
Ω
Ω
where
ãij (x) = p
δij
1+
|Dũ|2
−
Di ũDj ũ
3
(1 + |Dũ|2 ) 2
.
In other words, w is a weak solution of the equation
Di (ãij (x)Dj u) = 0.
(4.7)
Moreover, this equation is linear! The coefficients depend only on x since the solution ũ is fixed.
In addition to this, we also have ellipticity and boundedness of the coefficients ãij . Observe
Di ũDj ũ 1
|Dũ|2
δij
−
1+
≤ 2,
|ãij (x)| = p
≤ p
1 + |Dũ|2 (1 + |Dũ|2 ) 32 1 + |Dũ|2
1 + |Dũ|2
and
|ξ|2
Di ũξi Dj ũξj
|Dũ|2
2
ãij ξi ξj = p
≥p
≥ λ|ξ|2 ,
|ξ| −
1−
2
2
2
2
1
+
|Dũ|
1
+
|Dũ|
1 + |Dũ|
1 + |Dũ|
1
where
λ = inf
Ω
1
3
(1 + |Dũ|2 ) 2
> 0.
This last inequality follows since ũ ∈ W 1,∞ (Ω). We now study the function w as a weak solution
of the uniformly elliptic linear equation with bounded coefficients (4.7), and show that w is Hölder
continuous, which of course implies that ũ ∈ C 1,α . This is done by appealing to the DeGiorgiNash-Moser theory. We first prove a local boundedness result using a powerful technique called
Moser iteration. Specifically we show that for any ball BR ⊂⊂ Ω and a subsolution u ∈ W 1,2 (BR )
we have that u+ ∈ L∞
loc (BR ) together with an estimate of the form
sup u+ ≤ C
Br
1
(R − r)
n
p
ku+ kLp (BR ) .
This will then allow us to prove a weak Harnack inequality of the form
sup u ≤ C inf u.
BR/2
BR
Using this we can then show that the oscillation of u (which we will define in due course) decays
and this will allow us to deduce Hölder continuity. Finally it is an easy matter to apply this
theory to the function w and conclude the desired result.
18
4.2.2
Local Boundedness
From here on we consider a uniformly elliptic second order partial differential equation with
bounded coefficients written in divergence form which takes the following form
Lu = Di (aij (x)Dj u) = 0.
(4.8)
This is of course precisely the form of the equation (4.7), however, we do not need to appeal to the
specific structure of the minimal surface equation to prove the results which follow. We introduce
the notions of weak subsolutions and supersolutions of (4.8).
Definition. We say that u ∈ W 1,2 (Ω) is a subsolution (respectively supersolution) of (4.8) if for
any φ ∈ W01,2 (Ω) with φ ≥ 0 we have
Z
aij Di uDj φ ≤ 0 (resp. ≥ 0).
Ω
Theorem 4.3 (Local Boundedness). Suppose that aij ∈ L∞ (B1 ) with kaij kL∞ ≤ Λ and
aij (x)ξi ξj ≥ λ|ξ|2
for all x ∈ B1 ξ ∈ Rn ,
for some Λ and λ positive. Suppose that u ∈ W 1,2 (B1 ) is a subsolution. Then u+ ∈ L∞
loc (B1 ).
Moreover, for each θ ∈ (0, 1) and p > 0 we have
sup u+ ≤
Bθ
C
(1 − θ)
n
p
ku+ kLp (B1 ) ,
where C = C(n, λ, Λ, p) is a positive constant.
Remark. As mentioned in the overview, we will use a powerful technique known as Moser iteration to prove this. The idea is essentially as follows. By a clever choice of test function we
estimate the Lp1 norm of u+ on a ball Br1 by the Lp2 norm of u+ on a larger ball Br2 where
p1 > p2 , that is,
ku+ kLp1 (Br1 ) ≤ Cku+ kLp2 (Br2 ) .
We iterate this process for a careful choice of sequences ri and pi with ri & r > 0 and pi → ∞
to get the inequality we desire. The details now follow, but we will first state a technical lemma
that will be needed towards the end of the proof of Theorem 4.3.
Lemma 4.4. Suppose that f (t) is bounded in [τ0 , τ1 ], with τ0 ≥ 0. Suppose that for all τ0 ≤ t <
s ≤ τ1 we have
A
(4.9)
f (t) ≤ θf (s) +
(s − t)α
for some fixed θ ∈ [0, 1). Then for all τ0 ≤ t < s ≤ τ1 we have
f (t) ≤ c(α, θ)
19
A
.
(s − t)α
Proof. Fix τ0 ≤ t < s ≤ τ1 . Let τ ∈ (0, 1) and consider the sequence (tn )∞
n=0 defined as follows:
t0 = t
tn+1 = tn + (1 − τ )τ n (s − t).
Notice that limn→∞ tn = s and that (tn ) is an increasing sequence. Applying (4.9) repeatedly we
get that
n−1
A(s − t)−α X k −kα
n
f (t) = f (t0 ) ≤ θ f (tn ) +
θ τ
.
(1 − τ )α k=0
Now choose τ such that θτ −α < 1, then letting n → ∞ we obtain
f (t) ≤ c(α, θ)
A
(s − t)−α .
(1 − τ )α
Proof of Theorem 4.3. Step 1: We will first prove the special case θ =
and define the functions u := u+ + k and
u
if u < m
um :=
.
k+m
if u ≥ m
1
2
and p = 2. Let k, m > 0
Note D um = 0 if u < 0 or u ≥ m and furthermore um ≤ u. Let β ≥ 0, η ∈ Cc1 (B1 ) and introduce
the test function
φ := η 2 ( uβm u − k β+1 ) ∈ W01,2 (B1 ).
Since u ≥ um ≥ k it is clear that φ ≥ 0. Computing Dφ we see
2 β
β
β+1
)
Dφ = βη 2 uβ−1
m uD um + η um D u + 2ηDη( um u − k
= η 2 uβm (βD um + D u) + 2ηDη( uβm u − k β+1 ),
where we used the fact that u = um if u < m and D um = 0 if u ≥ m. Note that φ = 0 and
Dφ = 0 if u ≤ 0. Moreover u is a subsolution since u+ is and Di u = Di u+ . Therefore we have
Z
Z
Z
2 β
0 ≥ aij Di uDj φ = aij Di u(βDj um + Dj u)η um + 2 aij Di u( uβm u − k β+1 )ηDj η. (4.10)
We have omitted the domain of integration for now, since these estimates hold when integrating
over any ball Br (y) ⊂ B1 . Using uniform ellipticity we also have the estimates
Z
Z
2 β
β η um aij Di uDj um ≥ λβ η 2 uβm |D um |2 , and
(4.11)
Z
Z
η 2 uβm aij Di uDj u ≥ λ η 2 uβm |D u|2 .
(4.12)
since either Di u = Di um or Dj um = 0 and aij Di um Dj um ≥ λ|D um |2 . Furthermore, we may use
Hölder’s inequality to obtain the following estimate for the second integral in (4.10), after first
using the Cauchy-Schwarz inequality.
Z
Z
β
β+1
2 aij Di u( um u − k )ηDj η ≥ −2Λ η uβm u|D u||Dη|
Z
≥ −2Λ
2 2
|D u| η
20
uβm
12 Z
2
2
|Dη| u
uβm
12
.
(4.13)
Combining the last three inequalities with (4.10), we get
Z
0 ≥ λβ
η 2 uβm |D um |2 + λ
Z
≥ λβ
Z
η
2
uβm |D
2Λ
η 2 uβm |D u|2 −
λ
Z
λ
um | +
2
2
η
2
uβm |D
2Λ2
u| −
λ
2
Z
Z
|D u|2 η 2 uβm
12 Z
|Dη|2 u2 uβm
12 !
|Dη|2 u2 uβm ,
1
1
R
R
2
2
2 2 β 2
|D u|2 η 2 uβm 2 and b = 2Λ
|Dη|
.
where we have used the inequality ab ≤ a2 + b2 with a =
u
u
m
λ
We deduce
Z
Z
Z
Z
Z
1
2 β
2
2 β
2
2 β
2
2
2 β
β η um |Dum | + η um |Du| ≤ 2 β η um |D um | +
η um |D u| ≤ C |Dη|2 u2 uβm .
2
(4.14)
β
Set w := um2 u. Then note that
2
β
β β −1
2
2
2
|Dw| = um uD um + um D u
2
2
β
β
β
2
2
≤
| um | |D um | + | um | |D u|
2
β2
= | um |β |D u|2 + β| um |β |D um |2 + | um |β |D um |2
4
β
2
β
2
≤ (1 + β)(β um |D um | + um |D u| ).
The second line follows the fact that if u 6= um then D um = 0. Then from (4.14) we have
Z
Z
Z
2 2
2 2 β
|Dw| η ≤ C(1 + β) |Dη| u um = C(1 + β) |Dη|2 w2 ,
which implies by the product rule that
Z
Z
2
|D(wη)| ≤ C(1 + β) |Dη|2 w2 .
We apply the Gagliardo-Nirenberg-Sobolev inequality with χ :=
n = 2, and obtain
Z
2χ
|wη|
χ1
Z
≤
2
|D(wη)| ≤ C(1 + β)
Z
n
n−2
> 1 if n > 2 or χ > 2 if
|Dη|2 w2 .
We will now choose a particular cut-off function η. Let 0 < r < R ≤ 1 and choose η ∈ Cc1 (BR )
with
2
η ≡ 1 in Br and |Dη| ≤
.
R−r
Then
Z
χ1
Z
1+β
2χ
w
≤C
w2 .
2
(R
−
r)
Br
BR
21
Recalling how w was defined, this is the same as
Z
2χ
u
uβχ
m
χ1
Br
Z
1+β
≤C
(R − r)2
u2 uβm .
BR
Set γ := β + 2 ≥ 2 and recall that um ≤ u so that
Z
uγχ
m
χ1
Br
γ−1
≤C
(R − r)2
Z
uγ .
BR
Let m → ∞ and conclude
k ukLγχ (Br )
γ1
γ−1
k ukLγ (BR ) ,
≤ C
(R − r)2
where C = C(n, λ, Λ) is independent of γ.
Step 2: Define for i = 0, 1, . . . the sequences (γi ) and (ri ) by
γi := 2χi ,
ri :=
1
1
+ i+1 .
2 2
Then we have ri − ri−1 = 1/2i+1 and γi = χγi−1 , so
k ukLγi (Bri ) ≤
(2χi−1 − 1)
C
1 2
( 2i+1
)
1
2χi−1
k ukLγi−1 (Bri−1 )
i−1
1
≤ (32C) 2χi−1 (4χ) 2χi−1 k ukLγi−1 (Bri−1 ) .
Importantly, C depends only on n, λ and Λ! Therefore we can iterate this l times to obtain
Pl
k ukLγl (Brl ) ≤ (16C)
1
i=1 2χi−1
Pl
i−1
(4χ) i=1 2χi−1 k ukLγ0 (Br0 ) ≤ Ck ukL2 (B1 ) ,
P
P∞ i−1
1
where we have used the fact that the series ∞
i=1 2χi−1 and
i=1 2χi−1 are both convergent since
χ > 1. In particular
k ukLγl (B1/2 ) ≤ Ck ukL2 (B1 )
for all l. Now let l → ∞. We have γl → ∞ also and therefore
sup u ≤ Ck ukL2 (B1 ) .
B1/2
Letting k & 0 in the definition of u we arrive at the desired result
sup u+ ≤ Cku+ kL2 (B1 ) .
B1/2
Step 3: We now seek to generalise for p > 0 and θ ∈ (0, 1). We proceed by applying a dilation
argument. Let R ≤ 1 and define û(y) := u(Ry) for y ∈ B1 . Then it is trivial to verify that
Z
âij Di ûDj φ ≤ 0 ∀φ ∈ W01,2 (B1 ), with φ ≥ 0,
B1
22
where âij (y) = aij (Ry). Notice that
kâij kL∞ (B1 ) = kaij kL∞ (BR ) ≤ Λ,
and
âij (y)ξi ξj ≥ λ|ξ|2
∀ξ ∈ Rn , ∀y ∈ B1 .
Hence we may apply the first part of the proof to û, and we write the resulting estimate in terms
of u. This yields, for p ≥ 2 the following estimate
sup u+ ≤
BR/2
C
n
Rp
ku+ kLp (BR ) ,
where C = C(n, λ, Λ, p) > 0. For θ ∈ (0, 1), we apply the above result on B(1−θ)R (y) for each
y ∈ BθR . This leads to
u+ (y) ≤
u+ ≤
sup
B((1−θ)R)/2 (y)
C
((1 − θ)R)
n
p
ku+ kLp (B(1−θ)R (y)) ≤
C
((1 − θ)R)
n
p
ku+ kLp (BR ) ,
which implies
sup u+ ≤
BRθ
C
n
((1 − θ)R) p
ku+ kLp (BR ) .
For R = 1 this is precisely the stated result for p ≥ 2.
Step 4: Our final goal is to extend this result to p ∈ (0, 2) also. Notice first that
Z
Z
+ 2
+ 2−p
(u ) ≤ ku kL∞ (BR )
(u+ )p ,
BR
BR
and so
+
ku kL∞ (BθR )
p
C
+ 1− 2
≤
n ku kL∞ (B )
R
((1 − θ)R) 2
Z
+ p
(u )
12
.
BR
We apply Young’s inequality
ε
1
ab ≤ aq + r br
q
εq r
a, b ≥ 0
for all
1 1
+ = 1 ε > 0,
q r
where we choose
a=
1− p
ku+ kL∞2(BR ) ,
C
b=
n
((1 − θ)R) 2
Z
+ p
(u )
BR
12
,
q=
2
,
2−p
2
r= ,
p
q
ε= .
2
This yields
1
C
+
ku+ kL∞ (BθR ) ≤ ku+ kL∞ (BR ) +
n ku kLp (BR ) .
2
((1 − θ)R) p
Setting f (t) := ku+ kL∞ (Bt ) for each t ∈ (0, 1] we have shown that for any 0 < r < R ≤ 1
C
1
+
f (r) ≤ f (R) +
n ku kLp (B1 ) .
p
2
(R − r)
23
Therefore we may invoke Lemma 4.4 to deduce
f (r) ≤
C
ku+ kLp (B1 ) .
n
(R − r) p
Let R % 1, then for all θ < 1 we have
ku+ kL∞ (Bθ ) ≤
C
(1 − θ)
n
p
ku+ kLp (B1 )
as desired, thus we are done.
An immediate consequence of Theorem 4.3 is the following which we obtain by a scaling
argument.
Theorem 4.5. Let Ω be a bounded domain in Rn and
Lu = Di (aij (x)Dj u),
where kaij kL∞ (Ω) ≤ Λ and aij (x)ξi ξj ≥ λ|ξ|2 for all x ∈ Ω, ξ ∈ Rn . Suppose that u ∈ W 1,2 (Ω) is
a subsolution of L in Ω, that is
Z
aij Di uDj φ ≤ 0
Ω
for all φ ∈
W01,2 (Ω)
with φ ≥ 0 in Ω. Then for any ball BR (x) ⊂ Ω, r < R and p > 0 we have
sup u+ ≤
Br
C
(R − r)
n
p
ku+ kLp (BR ) ,
where C = C(n, λ, Λ, p) is positive.
4.2.3
Weak Harnack Inequality
Theorem 4.6 (John-Nirenberg Lemma). Suppose u ∈ L1 (Ω) satisfies
Z
|u − uBr (x) | ≤ M rn
∀Br (x) ⊂ Ω,
Br (x)
where uBr (x) =
1
|Br (x)|
R
Br (x)
u(y)dy. Then for any Br (x) ⊂ Ω
Z
ep0 |u−uBr (x) |/M ≤ Crn ,
Br (x)
for some positive p0 and C depending only on n.
For the proof see [4].
24
(4.15)
Theorem 4.7 (Weak Harnack Inequality). Suppose that aij ∈ L∞ (Ω) with λ|ξ|2 ≤ aij ξi ξj ≤ Λ|ξ|2
for some 0 < λ ≤ Λ < ∞ and suppose that u ∈ W 1,2 (Ω) is a non-negative supersolution of the
equation Dj (aij Di u) = 0 in Ω, that is
Z
aij Di uDj φ ≥ 0 ∀φ ∈ W01,2 (Ω) φ ≥ 0.
(4.16)
Ω
Then for all BR ⊂ Ω, 0 < p < n/(n − 2) and 0 < θ < τ < 1 we have
p1
Z
1
p
,
inf u ≥ C
u
BθR
Rn Bτ R
(4.17)
where C = C(n, p, λ, Λ, θ, τ ).
Proof. We will prove the case R = 1. As with Local Boundness (Theorem 4.3) the general result
with follow from a scaling argument. The proof proceeds in two steps.
Step 1: We show first that the statement holds for some p0 > 0. Set u := u + k for some k > 0
so that u > 0, and define v := u−1 . We start by deriving an equation for v. Let φ ∈ W01,2 (B1 )
with φ ≥ 0 and use u−2 φ as the test function in (4.16). We have
Z
Z
−2
aij φ u−3 Di uDj u ≥ 0.
aij u Di uDj φ − 2
B1
B1
We note that D u = Du and Dv = − u−2 D u and therefore since aij ≥ 0
Z
aij Di vDj φ ≤ 0.
B1
Applying Theorem 4.5 we get
sup v ≤ CkvkLp (Bτ )
Bθ
where C = C(θ, τ, p, n, λ, Λ) > 0. In other words
Z
− p1
Z
−p
u
=C
inf u ≥ C
Bθ
Bτ
u
Bτ
−p
Z
u
p
− p1 Z
Bτ
u
p1
.
Bτ
If we can show that there exists some p0 > 0 such that
Z
Z
−p0
u
up0 ≤ C(n, λ, Λ, τ )
Bτ
p
(4.18)
Bτ
then (4.17) holds in the case p = p0 . We will show that there is some p0 > 0 such that for all
τ <1
Z
ep0 |w| ≤ C(n, λ, Λ, τ ),
(4.19)
Bτ
R
where w := log u − |Bτ |−1 Bτ log u. From this (4.18) will follow since
Z
Z
Z
Z
Z
Z
R
R
−1
p0 (w+|Bτ |−1 Bτ log u)
p0
−p0
p0 log u
−p0 log u
u
u
=
e
e
=
e
e−p0 (w+|Bτ | Bτ log u)
Bτ
Bτ
Bτ
Bτ
Bτ
R
R
|Bτ |−1 Bτ log u−|Bτ |−1 Bτ log u
≤ Ce
≤ C(n, λ, Λ, τ ).
25
Bτ
We first derive an equation for w. Choose the test function u−1 φ in (4.16) where φ ∈ L∞ (B1 ) ∩
W01,2 (B1 ). Then since Dw = u−1 D u we have
Z
Z
aij φDi wDj w ≤
aij Di wDj φ
∀φ ∈ L∞ (B1 ) ∩ W01,2 (B1 ) φ ≥ 0.
B1
B1
On replacing φ with φ2 , and using ellipticity, Cauchy-Schwarz inequality and Hölder’s inequality
we get
Z
Z
Z
2
1
2 2
2
aij φ Di wDj w ≤
aij φDi wDj φ
|Dw| φ ≤
λ B1
λ B1
B1
12
21 Z
Z
Z
2
2 2
|Dφ|
.
≤C
|Dw||φ||Dφ| ≤ C
|Dw| ζ
B1
B1
B1
and hence, in particular,
Z
Z
2 2
|Dφ|2
|Dw| ζ ≤ C
B1
∀ζ ∈ Cc1 (B1 ).
B1
For B2r (y) ⊂ B1 let ζ be such that suppζ ⊂ B2r (y), ζ ≡ 1 in Br (y) and |Dζ| ≤ 2/r, then
Z
|Dw|2 ≤ Crn−2 .
Br (y)
Therefore by Hölder’s inequality and Poicaré’s inequality
1
rn
Z
|w − wBr (y) | ≤
Br (y)
Z
1
2
12
|w − wBr (y) |
rn/2
≤
Br (y)
1
rn/2
That is, w ∈ BMO and so by the John-Nirenberg lemma
Z
ep0 |w| ≤ C.
r
2
Z
2
|Dw|
12
≤ C.
Br (y)
(4.20)
Bτ
Step 2: We next show that the result holds for all p < n/(n − 2). Note that for 0 < p ≤ p0
we have ep|w| ≤ ep0 |w| and so we already have that the result holds for p < p0 . Extending this
result requires the use of the Moser iteration technique and the ideas are similar to those in the
proof of Theorem 4.3. We give an outline of the argument here. We wish to show that for all
0 < r1 < r2 < 1 and 0 < p2 < p1 < n/(n − 2) that
Z
! p1
1
up1
Z
≤C
Br1
! p1
2
up2
(4.21)
Br2
for C = C(n, λ, Λ, r1 , r2 , p1 , p2 ) > 0. Choose the test function φ = u−β η 2 where β ∈ (0, 1) and
η ∈ Cc1 (B1 ). Then plugging this into (4.16) we have
Z
Z
Z
−β 2
2
0≤
aij Di uDj ( u η ) = −β
aij Di uη + 2
aij Di u u−β ηDj η
B1
B1
26
B1
implying by ellipticity, Cauchy-Schwarz inequality and Hölder’s inequality that
Z
Z
Z
2 −β−1 2
−β−1 2
β
|D u| u
η ≤C
aij Di uDj u u
η ≤C
aij Di u u−β ηDj η
B1
B1
B1
Z
≤C
|D u| u
−β
Z
η|Dη| ≤
−β−1 2
|Du| u
12 Z
Z
2
C
η ≤ 2
β
−β−1 2
|D u| u
B1
Z
γ
2
γ/2
2
|Dη| u
η
1−β
12
,
B1
B1
B1
and hence
2
|Dη|2 u1−β .
B1
γ/2−1
and so
Let γ := 1 − β ∈ (0, 1) and w := u , then Dw = u
Z
Z
C
2 2
|Dw| η ≤
|Dη|2 w2
2
(1 − γ) B1
B1
and thus
Z
Z
C
|D(wη)| ≤
w2 |Dη|2 .
2
(1
−
γ)
B1
B1
As in the proof of Theorem 4.3, Sobolev embedding theorem and an appropriate choice of cut-off
function imply, with χ = n/(n − 2) gives us that for 0 < r < R < 1
1
γχ
γ1 Z
γ1
Z
C
1
γχ
γ
≤
,
u
u
(1 − γ)2 (R − r)2
BR
Br
2
for any γ ∈ (0, 1). Hence we can obtain (4.17) for R = 1 after finitely many iterations.
The following result is now an immediate corollary of Theorems 4.5 and 4.7.
Theorem 4.8 (Moser’s Harnack Inequality). Let u ∈ W 1,2 (Ω) be a non-negative solution of (4.8)
in Ω, that is
Z
∀φ ∈ W01,2 (Ω).
aij Di uDj φ = 0
Ω
Then for any ball BR ⊂ Ω
sup u ≤ C inf u,
(4.22)
BR/2
BR
where C = C(n, λ, Λ) > 0.
4.2.4
Hölder Continuity of the Derivative
In this section, we will use the Harnack inequalities proved in the last section to show Hölder
continuity of solutions. We first state a simple technical lemma.
Lemma 4.9. Let τ < 1, γ > 0 and suppose ω is non-decreasing on (0, R] and satisfies
ω(τ r) ≤ γω(r)
for all r ≤ R. Then for all r ≤ R
ω(r) ≤ C
r α
R
ω(R)
for constants C = C(γ, τ ) and α = α(γ, τ ). In fact we have α =
27
log γ
.
log τ
Proof. Let r ≤ R and choose k such that τ k R < r ≤ τ k−1 R. Then
ω(r) ≤ ω(τ k−1 R) ≤ γ k−1 ω(R).
Moreover we have
log γ
γ k = (τ k ) log τ ≤
Hence
γ
r log
log τ
.
R
log γ
1 r log τ
ω(r) ≤
ω(R).
γ R
Theorem 4.10. Let u ∈ W 1,2 (Ω) be a weak solution in Ω of (4.8), that is
Z
aij Di uDj φ = 0
Ω
for all φ ∈ W01,2 (Ω). Then u ∈ C 0,α (Ω) for α ∈ (0, 1) depending only on n, λ and Λ, and moreover,
for any BR ⊂ Ω,
α 21
Z
|x − y|
1
2
|u(x) − u(y)| ≤ C
u
,
R
Rn BR
for all x, y ∈ B R .
2
Proof. We prove the result for R = 1. Fix some ball B1 ⊂ Ω and let r ∈ (0, 1]. Define M (r) :=
supBr u and m(r) := inf Br u. Then by Theorem 4.5, we know that M (r) < ∞ and m(r) > −∞.
It suffices to show that
Z
12
α
2
ω(r) := M (r) − m(r) ≤ Cr
,
u
B1
1
.
2
for any r ≤
To do so we observe that M (r) − u is a non-negative solution of (4.8) in Br , and
so Moser’s Harnack inequality (Theorem 4.8) implies
sup(M (r) − u) ≤ sup(M (r) − u) ≤ C inf (M (r) − u).
Br
Br
Br
2
Thus,
2
r ≤ C M (r) − M
.
2
2
Similarly, we can apply Moser’s Harnack inequality to u − m(r) on Br to obtain
r
r
M
− m(r) ≤ C m
− m(r) .
2
2
M (r) − m
r
Adding these two inequalities we obtain
r
r ω(r) + ω
≤ C ω(r) − ω
,
2
2
which implies
ω
r
2
≤ γω(r),
28
where γ =
C−1
C+1
< 1. We apply Lemma 4.9 to obtain.
1
ω(ρ) ≤ Cρ ω
,
2
α
for all ρ ∈ (0, 1/2] where α =
log γ
.
log 12
Finally by Theorem 4.3 we have
Z
21
1
2
ω
≤C
,
u
2
B1
from which we deduce the result.
4.3
Hölder Continuity of the Second Derivatives
In this section we will show that the solution ũ is in C 2,α , and hence is a solution of the minimal
surface equation in the classical sense. The results of this section may be found in [7].
Lemma 4.11. Let σ(r) be a non-negative monotonically increasing function defined on [0, R0 ]
satisfying
r µ
σ(r) ≤ γ
+ δ σ(R) + κRν
R
for all 0 < r ≤ R ≤ R0 , where γ, ν < µ and δ > 0 are constants. Then there exists δ0 (γ, µ, ν) > 0
such that, provided δ ≤ δ0 , we have
r ν
σ(R) + κ1 rν
σ(r) ≤ γ1
R
for all 0 < r ≤ R ≤ R0 and where γ1 = γ1 (γ, µ, ν) and κ1 = κ1 (γ, µ, ν, κ) with κ1 = 0 if κ = 0.
Proof. Let 0 < τ < 1, and R < R0 . Then by assumption
σ(τ R) ≤ γτ µ (1 + δτ −µ )σ(R) + κRν .
We choose τ such that 2γτ µ = τ λ for some ν < λ < µ, (note that we may assume without loss of
generality that 2γ > 1). Assume that δ0 τ −µ ≤ 1. It follows that if δ ≤ δ0
σ(τ R) ≤ τ λ σ(R) + κRν ,
and thus, by iterating, that
σ(τ k R) ≤ τ λ σ(τ k−1 R) + κτ (k−1)ν Rν
kλ
≤ τ σ(R) + κτ
(k−1)ν
R
ν
k−1
X
τ j(λ−ν)
j=0
κτ (k−1)ν Rν
kν
σ(R) +
.
≤τ
1 − τ λ−ν
29
Now let r ≤ R and choose k such that τ k R < r ≤ τ k−1 R as in the proof of Lemma 4.9 , then
σ(r) ≤ σ(τ k−1 R) ≤ τ (k−1)λ σ(R) +
κRν τ (k−1)ν
1 − τ (λ−ν)
≤ Cτ ν(k−1) (σ(R) + κRν )
r ν
≤ γ1
σ(R) + κ1 rν ,
R
which completes the proof.
We now a state two theorems whose proofs are found in [7], and are needed to show the C 2,α
regularity. In these theorems n,is as usual, the dimension of the space.
Theorem 4.12 (Campanato’s Theorem). If for all 0 < r ≤ R0 and all x ∈ Ω we have
Z
|u − uBr (x) |p ≤ γrn+pα
Br (x)
0,α
with constants γ, p and 0 < α < 1, then u ∈ Cloc
(Ω).
Theorem 4.13 (Campanato Estimates). Let (Aij ) be a matrix with |Aij | ≤ Λ for all i, j and
Aij ξi ξj ≥ λ|ξ|2 for all ξ ∈ Rn where 0 < λ ≤ Λ < ∞ are constants. Suppose u ∈ W 1,2 (Ω) is a
weak solution of the equation
Dj (Aij Di u) = 0.
(4.23)
Then for all x0 ∈ Ω and 0 < r < R < dist(x0 , ∂Ω) we have
Z
r n Z
2
|u| ≤ c1
|u|2
R
Br (x0 )
BR (x0 )
Z
r n+2 Z
|u − uBr (x0 ) |2 ≤ c2
|u − uBR (x0 ) |2 .
R
Br (x0 )
BR (x0 )
(4.24)
Theorem 4.14. Suppose that aij (x) are C α (Ω) for i, j = 1, . . . , n and some α ∈ (0, 1) and satisfy
aij (x)ξi ξj ≥ λ|ξ|2 for all ξ ∈ Rn and x ∈ Ω, and that |aij | ≤ Λ on Ω, where 0 < λ ≤ Λ < ∞ are
constant. Then any weak solution u ∈ W 1,2 (Ω) of the equation
Dj (aij Di u) = 0
(4.25)
aij (x) = aij (x0 ) + (aij (x) − aij (x0 ))
(4.26)
0
is C 1,α (Ω) for all α0 ∈ (0, α).
Proof. Step 1: Let x0 ∈ Ω. We write
and define Aij := aij (x0 ). Then for v satisfying (4.25)
Dj (Aij Di u) = Dj ((aij (x0 ) − aij (x))Di u) = Dj f j (x)
where we have introduced the function
f j (x) := (aij (x0 ) − aij (x))Di u(x).
30
(4.27)
In other words, u satisfies
Z
Z
Ω
∀φ ∈ W01,2 (Ω).
f j Dj φ
Aij Di vDj φ =
(4.28)
Ω
Let BR (x0 ) ⊂ Ω be a ball, and let w ∈ W 1,2 (BR (x0 )) be a weak solution of the Dirichlet problem
Then w satisfies
Z
Dj (Aij Di w) = 0 in BR (x0 )
w = u on ∂BR (x0 ).
(4.29)
Aij Di wDj φ = 0 ∀φ ∈ W01,2 (BR (x0 )).
(4.30)
BR (x0 )
Such a w can be shown to exist by the Lax Milgram Theorem, since it guarantees the existence
of z ∈ W01,2 (BR (x0 )) with
Z
Z
B(φ, z) =
Aij Di zDj φ = −
Aij Di uDj φ ∀φ ∈ W01,2 (BR (x0 )),
(4.31)
BR (x0 )
BR (x0 )
and then w given by w = u + z is easily seen to be a solution of (4.29). According to Theorem 4.2,
2,2
we may deduce that w is in Wloc
(BR (x0 )), and so since the equation for w is linear with constant
coefficients, it follows that Dk w satisfies the same equation on BR (x0 ) for k = 1, . . . , n although
naturally the boundary conditions will be different). Hence, by Campanato estimates (4.24) we
get
Z
r n Z
2
|Dw| ≤ C
|Dw|2 .
(4.32)
R
Br (x0 )
BR (x0 )
Step 2: Next we observe that since u = w on ∂BR (x0 ), u − w is an admissible test function, and
hence
Z
Z
Aij Di wDj w =
Aij Di wDj u
BR (x0 )
BR (x0 )
Combining this, the Cauchy-Schwarz inequality and ellipticity we obtain the estimate
Z
Z
Z
2
λ
|Dw| ≤
Aij Di wDj w =
Aij Di wDj u
BR (x0 )
BR (x0 )
BR (x0 )
Z
Z
≤
2
|ADw||Du| ≤
2
kAk |Dw|
BR (x0 )
BR (x0 )
Z
2
≤ nΛ
12 Z
2
|Dw|
21
|Du|
BR (x0 )
12 Z
|Du|
BR (x0 )
2
21
,
BR (x0 )
which implies
Z
2
|Dw| ≤
BR (x0 )
nΛ
λ
2 Z
Moreover, (4.28) and (4.30) together imply
Z
Z
Aij Di (u − w)Dj φ =
BR (x0 )
|Du|2 .
BR (x0 )
f j Dj φ ∀φ ∈ W01,2 (BR (x0 )).
BR (x0 )
31
(4.33)
Letting φ = u − w we obtain
Z
Z
Z
1
1
2
|D(u − w)| ≤
Aij Di (u − w)Dj (u − w) =
f j Dj (u − w)
λ
λ
BR (x0 )
BR (x0 )
BR (x0 )
! 21
Z
21 Z
X
1
|D(u − w)|2
,
≤
|f j |2
λ
BR (x0 )
BR (x0 ) j
implying
Z
Z
1
|D(u − w)| ≤ 2
λ
BR (x0 )
2
X
BR (x0 )
|f j |2 .
(4.34)
j
Step 3: We now start to combine all of these estimates. First we use (4.32) and (4.33) to show
Z
Z
Z
2
2
|Du| ≤ 2
|Dw| + 2
|D(u − w)|2
Br (x0 )
Br (x0 )
Br (x0 )
Z
r n Z
2
|Dw| + 2
|D(u − w)|2
≤C
R
BR (x0 )
B (x )
Z r 0
r n Z
|Du|2 + 2
|D(u − w)|2 .
≤C
R
BR (x0 )
Br (x0 )
Next observe that
Z
Z
2
|D(u − w)| ≤
Br (x0 )
1
|D(u − w)| ≤ 2
λ
BR (x0 )
Z
1
≤ 2 sup |aij (x0 ) − aij (x)|2
λ i,j
Z
2
x∈Ω
X
BR (x0 )
|f j |2
j
2
|Du| ≤ CR
BR (x0 )
2α
Z
(4.35)
2
|Du| ,
BR (x0 )
the last inequality following since each aij is in C 0,α by assumption. Finally we arrive at the
estimate
Z
Z
r n
2α
2
+R
|Du|2 .
|Du| ≤ γ
R
Br (x0 )
BR (x0 )
R
2
We apply Lemma 4.11 with σ(r) := Br (x0 ) |Du| , µ = n, and ν = n − ε for some ε > 0. Then
there exists some δ0 (γ, n, ε) > 0 such that provided R02α ≤ δ0 , then for all 0 < r ≤ R ≤ R0
Z
r n−ε Z
2
|Du| ≤ C
|Du|2 ,
(4.36)
R
BR (x0 )
Br (x0 )
where the constant C depends on ε, n and γ.
Step 4: The second part of the proof is devoted to deriving an analogous inequality from the
second Campanato estimate (4.24). Using the same arguments as were used to establish (4.32),
we can show that
Z
r n+2 Z
2
|Dw − (Dw)Br (x0 ) | ≤ C
|Dw − (Dw)BR (x0 ) |2 .
(4.37)
R
Br (x0 )
BR (x0 )
32
Next we note that
Z
Z
2
|Dw − (Du)BR (x0 ) |2 ,
|Dw − (Dw)BR (x0 ) | ≤
BR (x0 )
BR (x0 )
since for any real-valued g ∈ L2 (BR (x0 ))
Z
Z
2
|g − (g)BR (x0 ) | = inf
κ∈R
BR (x0 )
(g − κ)2 .
BR (x0 )
R
To see this note that the function F (κ) := BR (x0 ) (g − κ)2 is convex and differentiable and has a
critical point at κ = (g)BR (x0 ) , and so F attains its minimum here. Furthermore, we also have the
Furthermore, we also have the following estimate by ellipticity:

    ∫_{B_R(x_0)} |Dw − (Du)_{B_R(x_0)}|² ≤ (1/λ) ∫_{B_R(x_0)} A_ij (D_i w − (D_i u)_{B_R(x_0)}) (D_j w − (D_j u)_{B_R(x_0)})
        = (1/λ) ∫_{B_R(x_0)} A_ij (D_i w − (D_i u)_{B_R(x_0)}) (D_j u − (D_j u)_{B_R(x_0)})
          + (1/λ) ∫_{B_R(x_0)} A_ij (D_i u)_{B_R(x_0)} (D_j u − D_j w).
The second integral vanishes since A_ij (D_i u)_{B_R(x_0)} is constant and u − w ∈ W_0^{1,2}(B_R(x_0)). Applying the Cauchy–Schwarz inequality as we did to deduce (4.33), we have

    ∫_{B_R(x_0)} |Dw − (Dw)_{B_R(x_0)}|² ≤ (Λ²n²/λ²) ∫_{B_R(x_0)} |Du − (Du)_{B_R(x_0)}|².        (4.38)
Step 5: Finally,

    ∫_{B_r(x_0)} |Du − (Du)_{B_r(x_0)}|² ≤ 3 ∫_{B_r(x_0)} |Du − Dw|² + 3 ∫_{B_r(x_0)} |Dw − (Dw)_{B_r(x_0)}|²
        + 3 ∫_{B_r(x_0)} |(Du)_{B_r(x_0)} − (Dw)_{B_r(x_0)}|²,

and

    ∫_{B_r(x_0)} |(Du)_{B_r(x_0)} − (Dw)_{B_r(x_0)}|² = |B_r(x_0)| ( (1/|B_r(x_0)|) ∫_{B_r(x_0)} |Du − Dw| )²
        ≤ |B_r(x_0)| ( (1/|B_r(x_0)|) |B_r(x_0)|^{1/2} ( ∫_{B_r(x_0)} |Du − Dw|² )^{1/2} )²
        = ∫_{B_r(x_0)} |Du − Dw|².
Combining the last two inequalities and using (4.35) gives

    ∫_{B_r(x_0)} |Du − (Du)_{B_r(x_0)}|² ≤ 3 ∫_{B_r(x_0)} |Dw − (Dw)_{B_r(x_0)}|² + 6 ∫_{B_r(x_0)} |Du − Dw|²
        ≤ 3 ∫_{B_r(x_0)} |Dw − (Dw)_{B_r(x_0)}|² + C R^{2α} ∫_{B_R(x_0)} |Du|².
Applying the second Campanato estimate (4.24) to the first integral and then applying (4.38), we have

    ∫_{B_r(x_0)} |Du − (Du)_{B_r(x_0)}|² ≤ C (r/R)^{n+2} ∫_{B_R(x_0)} |Du − (Du)_{B_R(x_0)}|² + C R^{2α} ∫_{B_R(x_0)} |Du|²
        ≤ C (r/R)^{n+2} ∫_{B_R(x_0)} |Du − (Du)_{B_R(x_0)}|² + C R^{n+2α−ε},
where we have applied (4.36) with r = R and R = R0 to deduce the second inequality. Finally
applying Lemma 4.11 we get
    ∫_{B_r(x_0)} |Du − (Du)_{B_r(x_0)}|² ≤ C_1 (r/R)^{n+2α−ε} ∫_{B_R(x_0)} |Du − (Du)_{B_R(x_0)}|² + C_2 r^{n+2α−ε}.
This holds for all 0 < r ≤ R and so the claim now follows from Campanato’s Theorem.
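(For the reader's convenience we recall, roughly, what is being invoked: Campanato's Theorem identifies L² functions g whose mean-square oscillation satisfies

    ∫_{B_r} |g − (g)_{B_r}|² ≤ C r^{n+2β}   on all small balls, for some 0 < β < 1,

with functions that are locally C^{0,β}; the decay just established is of this form for g = Du with β = α − ε/2, which yields the local Hölder continuity of Du.)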
We apply the above theorem to the partial derivatives of ũ and the equation (4.7) to deduce that ũ is C^{2,α}.
4.4  Schauder Theory
Having shown that the solution ũ is a classical solution, we now appeal to the interior Schauder estimates to show that ũ is smooth. Roughly speaking, Schauder estimates may be used to show that the second derivatives of a classical solution have the same regularity as the coefficients in the equation; in our case in particular we can show that a_ij ∈ C^{k,α}(Ω) implies u ∈ C^{k+2,α}(Ω). Since the coefficients a_ij are written in terms of the first derivatives of ũ, a bootstrapping argument will allow us to deduce the result we are after. The first two results we state without proof; they may be found in, for example, [2].
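Schematically (a sketch of the bootstrap; recall that the coefficients a_ij are smooth functions of Dũ, so a_ij(Dũ) is one degree of Hölder regularity behind ũ):

    ũ ∈ C^{2,α}  ⟹  a_ij(Dũ) ∈ C^{1,α}  ⟹  ũ ∈ C^{3,α}  ⟹  a_ij(Dũ) ∈ C^{2,α}  ⟹  ũ ∈ C^{4,α}  ⟹  · · ·  ⟹  ũ ∈ C^∞(Ω),

alternately applying the chain rule and Theorem 4.17 below.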
Theorem 4.15 (Dirichlet Problem). Let B ⊂ R^n be a ball, and let

    L = a_ij D_ij + b_i D_i + c

be a uniformly elliptic operator with coefficients a_ij, b_i, c ∈ C^{0,α}(B) and c ≤ 0 on B. Then the Dirichlet problem

    Lu = f   in B,
    u = g    on ∂B,

for f ∈ C^{0,α}(B) and g ∈ C(∂B) has a unique solution u ∈ C^{2,α}(B) ∩ C(B̄).
Theorem 4.16 (Interior Schauder Estimates). Let α ∈ (0, 1). Suppose Ω is a domain and u ∈ C^{2,α}(Ω) solves the uniformly elliptic equation

    Lu = a_ij D_ij u + b_i D_i u + cu = f   in Ω,

where a_ij, b_i, c and f are in C^{0,α}(Ω). Then for any Ω′ ⊂⊂ Ω

    ‖u‖_{C^{2,α}(Ω′)} ≤ C ( ‖u‖_{C(Ω)} + ‖f‖_{C^{0,α}(Ω)} ),

where C = C(n, α, L, Ω′, Ω) ∈ (0, ∞).
Theorem 4.17 (Interior Regularity). Let k ≥ 0 be an integer and α ∈ (0, 1). Let Ω ⊂ R^n be open and suppose that u ∈ C²(Ω) satisfies

    Lu = a_ij D_ij u + b_i D_i u + cu = f   in Ω

for some elliptic operator L with coefficients a_ij, b_i, c ∈ C^{k,α}(Ω) and some f ∈ C^{k,α}(Ω). Then u ∈ C^{k+2,α}(Ω).
Proof. We will prove the simpler case where the operator L takes the form L = a_ij D_ij. The more general result is proved in a similar fashion with slightly more complicated calculations.
Step 1: We first show that a_ij, f ∈ C^{0,α}(Ω) and u ∈ C²(Ω) imply that u ∈ C^{2,α}(Ω). This is a direct consequence of Theorem 4.15. Let B ⊂⊂ Ω be a ball and consider solutions v to the Dirichlet problem

    a_ij D_ij v = f   in B,
    v = u             on ∂B.

Theorem 4.15 tells us that there exists a solution in C(B̄) ∩ C^{2,α}(B); however, we also know that u is a C(B̄) ∩ C²(B) solution and that there exists at most one solution in C(B̄) ∩ C²(B). Thus we conclude that u ∈ C^{2,α}(B) and hence, since B was arbitrary, that u ∈ C^{2,α}(Ω).
Step 2: We next show that a_ij, f ∈ C^{1,α}(Ω) and u ∈ C^{2,α}(Ω) imply that u ∈ C^{3,α}(Ω). This is done via a difference quotient argument. Choose Ω′′′, Ω′′ and Ω′ such that Ω′′′ ⊂⊂ Ω′′ ⊂⊂ Ω′ ⊂⊂ Ω, and h_0 > 0 such that both dist(Ω′′′, ∂Ω′′) > h_0 and dist(Ω′′, ∂Ω′) > h_0. We then apply δ_l^h for some l ∈ {1, . . . , n} and 0 < h < h_0 to both sides of the equation Lu = f and rearrange to get

    L(δ_l^h u) = a_ij(x) D_ij(δ_l^h u) = δ_l^h f − δ_l^h a_ij(x) D_ij u(x + he_l)   on Ω′′.
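(Recall that δ_l^h denotes the difference quotient δ_l^h v(x) = (v(x + he_l) − v(x))/h; the identity displayed above is just the product rule for difference quotients applied to a_ij D_ij u.)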
We then apply Theorem 4.16 (applicable since δ_l^h u ∈ C^{2,α}(Ω′′)) to arrive at the estimate

    ‖δ_l^h u‖_{C^{2,α}(Ω′′′)} ≤ C ( ‖δ_l^h u‖_{C(Ω′′)} + ‖δ_l^h f‖_{C^{0,α}(Ω′′)} + ‖δ_l^h a_ij‖_{C^{0,α}(Ω′′)} ‖D_ij u(· + he_l)‖_{C^{0,α}(Ω′′)} ).        (4.39)
We next observe that

    sup_{x∈Ω′′} |δ_l^h a_ij(x)| = sup_{x∈Ω′′} | (1/h) ∫_0^h (d/dt) a_ij(x + te_l) dt | = sup_{x∈Ω′′} | (1/h) ∫_0^h D_l a_ij(x + te_l) dt | ≤ sup_{x∈Ω′} |D_l a_ij(x)|,

and thus δ_l^h a_ij is bounded on Ω′′, uniformly in h. Also notice that for x, y ∈ Ω′′

    |δ_l^h a_ij(x) − δ_l^h a_ij(y)| = | (1/h) ∫_0^h ( D_l a_ij(x + te_l) − D_l a_ij(y + te_l) ) dt | ≤ [D_l a_ij]_{C^{0,α}(Ω′)} |x − y|^α.

These together imply ‖δ_l^h a_ij‖_{C^{0,α}(Ω′′)} ≤ ‖D_l a_ij‖_{C^{0,α}(Ω′)}, and similar calculations yield an estimate of the same form for f. Hence from (4.39) we arrive at

    ‖δ_l^h u‖_{C^{2,α}(Ω′′′)} ≤ C ( ‖D_l u‖_{C(Ω′)} + ‖D_l f‖_{C^{0,α}(Ω′)} + ‖D_l a_ij‖_{C^{0,α}(Ω′)} ‖D_ij u‖_{C^{0,α}(Ω′)} ).
This bound is independent of h, and therefore it follows that the set {δ_l^h u}_{h<h_0} is bounded and equicontinuous. The same is true for the derivatives of δ_l^h u. Therefore by the Arzelà–Ascoli theorem we can find a sequence (h_j)_{j∈N} with h_j ↘ 0 such that δ_l^{h_j} u converges in C²(Ω′′′). Now u is continuously differentiable on Ω and so δ_l^{h_j} u converges uniformly to D_l u on Ω′′′. Thus we conclude that δ_l^{h_j} u converges to D_l u in C²(Ω′′′), which implies D_l u ∈ C^{2,α}(Ω′′′). Since l and Ω′′′ were arbitrary, we conclude that u ∈ C^{3,α}(Ω).
Step 3: Let k ≥ 2. We now wish to show that a_ij, f ∈ C^{k,α}(Ω) and u ∈ C^{k+1,α}(Ω) imply u ∈ C^{k+2,α}(Ω). Let β be a multiindex with |β| = k − 1. Then applying D^β to both sides of the equation Lu = f, we obtain

    L(D^β u) = D^β f − Σ_{γ<β} ( β! / (γ!(β − γ)!) ) D^{β−γ} a_ij D^γ D_ij u   in Ω.

Since the right hand side is in C^{1,α}(Ω) and D^β u ∈ C^{2,α}(Ω), we may apply Step 2 to deduce that D^β u ∈ C^{3,α}(Ω), and hence that u ∈ C^{k+2,α}(Ω). The proof of the theorem now follows by induction.
5  Gradient bound
In this section, we want to find a bound on the gradient of a solution u to the minimal surface equation. We will state the main theorem here, but we will have to do a bit of work before we can prove it. This section is based on [6], and any proofs omitted are proved there or in [2].
Theorem 5.1 (Gradient Bound). Let Ω be a bounded C² domain and suppose that u solves the minimal surface equation in Ω. Then for any y ∈ Ω, we have the following bound on the gradient of u:

    |Du(y)| ≤ exp ( C ( 1 + sup_{x ∈ B_d(y)} (u(x) − u(y))/d ) ),

where C = C(u) and d = dist(y, ∂Ω).
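(A small worked consequence, not part of the original statement: since u(x) − u(y) ≤ 2 sup_Ω |u| for all x, the theorem yields the interior estimate

    |Du(y)| ≤ exp ( C ( 1 + 2 sup_Ω |u| / d ) ),

so the gradient of a bounded solution is controlled at every point in terms of its distance to ∂Ω.)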
We will split the proof up into a series of propositions, but first of all we will define our notation and present a few useful results. We assume throughout this section, as we did in the statement of Theorem 5.1, that Ω ⊂ R^n is a bounded C² domain and that u solves the minimal surface equation in Ω. We will write G(u) for the graph of u, namely

    G(u) = {(x, u(x)) : x ∈ Ω},

and write a = √(1 + |Du|²) for the area element of u on its graph. The graph can equivalently be viewed as a level set of the function Φ : Ω × R → R defined by Φ(x, x_{n+1}) = x_{n+1} − u(x). The unit normal to G(u) is given by

    ν = DΦ/|DΦ|.        (5.1)
Lemma 5.2. If ν is the normal to G(u) as defined in (5.1), then
div(ν) := Di νi = −nH,
where H is the mean curvature of the graph G(u).
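Remark (a direct computation from (5.1), recorded here because the resulting identities are used repeatedly below). Since DΦ = (−Du, 1), we have

    ν = (−Du, 1)/√(1 + |Du|²) = (−Du, 1)/a,   so that   ν_{n+1} = 1/a   and   Σ_{i=1}^n ν_i D_i u = −|Du|²/a.

Moreover, as ν does not depend on x_{n+1},

    div(ν) = −Σ_{i=1}^n D_i ( D_i u / √(1 + |Du|²) ) = −Mu,

so by Lemma 5.2 the minimal surface equation Mu = 0 is precisely the statement that the graph G(u) has zero mean curvature.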
Lemma 5.3. Let u be a solution to the minimal surface equation, φ ∈ Cc1 (Ω) a compactly supported, differentiable test function, and ν the normal to G(u) as defined in equation (5.1). Then
    ∫_Ω ν_i D_i φ = 0.
Proof. Since u is a solution to the minimal surface equation, we know that it has mean curvature,
H, equal to zero. Therefore, by Lemma 5.2, we know that
divν = Di νi = 0 in Ω.
We then multiply by φ ∈ Cc1 (Ω), integrate over Ω, and integrate by parts to finish the proof.
Definition (Tangential gradient). For a differentiable function g defined in some open set containing the graph G(u), we define the tangential gradient of g on G(u) to be
∇g = Dg − (Dg · ν)ν.
Note that for y ∈ G(u), the tangential gradient ∇g(y) is the projection of Dg(y) onto the tangent
plane to G(u) at y.
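In particular |∇g|² = |Dg|² − (Dg · ν)². A small worked consequence (it is inequality (5.9) in the proof of Lemma 5.11 below): if g does not depend on x_{n+1}, then |Dg · ν| ≤ |Dg| √(1 − ν_{n+1}²), and hence

    |∇g|² ≥ |Dg|² ν_{n+1}²,   i.e.   ν_{n+1} |Dg| ≤ |∇g|.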
Lemma 5.4. Let u be a solution to the MSE, and suppose that we have a function φ̃ ∈ C_c^1(Ω × R). Define the function φ : Ω → R by setting

    φ(x′) = φ̃(x′, u(x′)),

where x′ = (x_1, . . . , x_n) ∈ Ω. Then on G(u) we have

    ∇_i ν_{n+1} D_i φ = ∇_i ν_{n+1} ∇_i φ = ∇_i ν_{n+1} ∇_i φ̃.
Proposition 5.5. Let w = log √(1 + |Du|²). Then for any y ∈ Ω and ρ > 0 such that B_ρ^n(y) ⊂ Ω, we have the following inequality:

    w(y) ≤ (1/(ω_n ρ^n)) ∫_{G(u) ∩ B_ρ^{n+1}(y)} w dH^n.
We split the proof of this proposition up into two lemmas.
Lemma 5.6. If u is a solution to the MSE, then the function w = log √(1 + |Du|²) is subharmonic, in the sense that

    ∇_i ∇_i w ≥ 0.
Proof. From Lemma 5.3, we know that

    ∫_Ω ν_i D_i φ = 0,

where ν is the normal to G(u), for any φ ∈ C_c^1(Ω). Therefore, for any function φ such that D_k φ ∈ C_c^1(Ω) for some k = 1, . . . , n + 1, we can integrate by parts to see that

    ∫_Ω D_k ν_i D_i φ = 0.
Now let us choose φ so that it is non-negative in Ω, and Di (νk φ) ∈ Cc1 (Ω) for each k, i = 1, . . . , n+1.
Then we can use νk φ as our test functions in the equation above and sum over k to get
    ∫_Ω D_k ν_i D_i(ν_k φ) = 0,

and therefore

    ∫_Ω (D_k ν_i)(D_i ν_k) φ + ∫_Ω ν_k (D_k ν_i)(D_i φ) = 0.        (5.2)
It can be shown that

    D_k ν_i D_i ν_k = Σ_k (κ_k)² ≥ 0,

where κ_k are the principal curvatures, and that

    ν_k D_k ν_i D_i φ = −a ∇_i ν_{n+1} D_i φ.
Substituting these into (5.2), we can see that

    ∫_Ω ( Σ_k (κ_k)² ) φ − ∫_Ω a ∇_i ν_{n+1} D_i φ = 0.

Therefore, since we have chosen φ so that it is non-negative in Ω, we now know that

    ∫_Ω a ∇_i ν_{n+1} D_i φ ≥ 0.        (5.3)
Now, if we find a function φ̃ ∈ C_c^1(Ω × R) such that φ(x) = φ̃(x, u(x)), then by Lemma 5.4 we have

    ∫_Ω a ∇_i ν_{n+1} D_i φ = ∫_{G(u)} ∇_i ν_{n+1} ∇_i φ̃ dH^n.

Combining this with (5.3) gives

    ∫_{G(u)} ∇_i ν_{n+1} ∇_i φ̃ dH^n ≥ 0.        (5.4)
To move on from here note that, by the definition of the normal vector ν and the area element a, we have that ν_{n+1} = 1/a. If we also use a test function φ̃ in (5.4) which has the form φ̃ = aϕ, we can re-write the equation as

    0 ≤ ∫_{G(u)} ∇_i ν_{n+1} ∇_i φ̃ dH^n
      = −∫_{G(u)} (1/a²) ∇_i a ∇_i(aϕ) dH^n
      = −∫_{G(u)} (1/a²) ∇_i a ∇_i a ϕ dH^n − ∫_{G(u)} (1/a) ∇_i a ∇_i ϕ dH^n
      = −∫_{G(u)} ( (|∇a|²/a²) ϕ + (∇a · ∇ϕ)/a ) dH^n.
We have seen before that w = log a, which means that ∇w = (1/a)∇a. Therefore, the above inequality can be re-written as

    ∫_{G(u)} ( |∇w|² ϕ + ∇w · ∇ϕ ) dH^n ≤ 0.        (5.5)
We now use the integration by parts formula on the graph G(u),

    ∫_{G(u)} ∇f · ∇g dH^n = −∫_{G(u)} g ∆f dH^n,

to see that

    ∫_{G(u)} ( |∇w|² − ∆w ) ϕ dH^n ≤ 0.
Therefore, since ϕ was an (essentially) arbitrary non-negative test function, we can conclude that

    ∆w ≥ |∇w|² ≥ 0,

i.e. the function w = log √(1 + |Du|²) is subharmonic on the graph of u.
Lemma 5.7 (Mean Value Inequality). Suppose that w ∈ C²(Ω × R) is a non-negative, subharmonic function on G(u). Then for any y ∈ G(u) and any ρ > 0 such that B_ρ^n(y) ⊂ Ω, we have that

    w(y) ≤ (1/(ω_n ρ^n)) ∫_{G(u) ∩ B_ρ^{n+1}(y)} w dH^n,
where ωn is the measure of the n-dimensional unit ball.
Proof of Proposition 5.5. All of the hard work has now been done: we use Lemma 5.6 to see that the function w = log √(1 + |Du|²) is subharmonic, which means that we can apply Lemma 5.7 to conclude that we have the inequality

    w(y) ≤ (1/(ω_n ρ^n)) ∫_{G(u) ∩ B_ρ^{n+1}(y)} w dH^n,

for any y ∈ G(u) and ρ > 0 such that B_ρ^n(y) ⊂ Ω.
Our aim now is to bound the integral on the right hand side of the inequality in Proposition 5.5.
Proposition 5.8. For any y ∈ Ω, and ρ > 0 such that B_ρ^{n+1}(y) ⊂ Ω × R, we have the following inequality:

    ∫_{G(u) ∩ B_ρ^{n+1}(y)} w dH^n ≤ ω_n ρ^n + ∫_{B_ρ^n(y) ∩ {|u−u(y)|<ρ}} (|Du|²/a) w,

where ω_n is the measure of the n-dimensional unit ball.
Proof. Since w ≥ 0 in G(u), and B_ρ^{n+1}(y) ⊂ B_ρ^n(y) × {|x_{n+1} − u(y)| < ρ}, we know that

    ∫_{G(u) ∩ B_ρ^{n+1}(y)} w dH^n ≤ ∫_{G(u) ∩ (B_ρ^n(y) × {|x_{n+1}−u(y)|<ρ})} w dH^n.

By definition of H^n, we can re-write the right hand integral above as

    ∫_{G(u) ∩ (B_ρ^n(y) × {|x_{n+1}−u(y)|<ρ})} w dH^n = ∫_{B_ρ^n(y) ∩ {|u−u(y)|<ρ}} a w.

Now, recall that a = √(1 + |Du|²), which means we can write a = (1 + |Du|²)/a, and also w = log a. Therefore,

    ∫_{B_ρ^n(y) ∩ {|u−u(y)|<ρ}} a w = ∫_{B_ρ^n(y) ∩ {|u−u(y)|<ρ}} ((1 + |Du|²)/a) w
        ≤ ∫_{B_ρ^n(y) ∩ {|u−u(y)|<ρ}} 1 + ∫_{B_ρ^n(y) ∩ {|u−u(y)|<ρ}} (|Du|²/a) w   (since (log a)/a ≤ 1)
        ≤ ω_n ρ^n + ∫_{B_ρ^n(y) ∩ {|u−u(y)|<ρ}} (|Du|²/a) w,
which completes the proof.
Again, we want to bound the integral on the right hand side of the inequality in Proposition 5.8.
Proposition 5.9. For any y ∈ Ω, and ρ > 0 such that B_{2ρ}^n(y) ⊂ Ω, we have the following inequality:

    ∫_{B_ρ^n(y) ∩ {|u−u(y)|<ρ}} (|Du|²/a) w ≤ C ∫_{B_{2ρ}^n(y) ∩ {u−u(y)>−2ρ}} a,
where C = C(u) is independent of the choice of y and ρ.
As before, we will split the proof up into some lemmas.
Lemma 5.10. Let y ∈ Ω, and find ρ > 0 such that B_{2ρ}^n(y) ⊂ Ω. Let η ∈ C_c^1(Ω) be a function satisfying
1) 0 ≤ η ≤ 1 in Ω,
2) |Dη| ≤ 2/ρ in Ω,
3) η = 1 in B_ρ^n(y),
4) η = 0 in Ω \ B_{2ρ}^n(y).
Then the following inequality holds:

    ∫_{B_ρ^n(y) ∩ {|u−u(y)|<ρ}} (|Du|²/a) w ≤ 4 ∫_{B_{2ρ}^n(y) ∩ {u−u(y)>−ρ}} a + 2ρ ∫_{B_{2ρ}^n(y) ∩ {u−u(y)>−ρ}} η |Dw|.
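(Such an η always exists; for instance — a standard construction, not spelled out in the original — take η(x) = ζ(|x − y|), where ζ : [0, ∞) → [0, 1] is C¹ with ζ = 1 on [0, ρ], ζ = 0 on [2ρ, ∞) and |ζ′| ≤ 2/ρ, obtained by smoothing t ↦ min(1, max(0, 2 − t/ρ)).)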
Proof. Without loss of generality, we assume that y = 0 and u(y) = 0. Now, recall the result in Lemma 5.3, that

    ∫_Ω ν_i D_i φ = 0,        (5.6)

for φ ∈ C_c^1(Ω). We will prove the inequality in this lemma by choosing the test function φ carefully. Let γ : R → R be a function defined by

    γ(t) = 0 for t < −ρ,   γ(t) = t + ρ for −ρ ≤ t ≤ ρ,   γ(t) = 2ρ for t > ρ.
Then, we let our test function φ in (5.6) be φ = ηwγ(u). Since

    D_i(ηwγ(u)) = wγ(u) D_i η + ηγ(u) D_i w + ηwγ′(u) D_i u,

we can rearrange (5.6) to

    ∫_Ω ν_i ηwγ′(u) D_i u = − ( ∫_Ω ν_i ηγ(u) D_i w + ∫_Ω ν_i wγ(u) D_i η ).
Note that the function γ′(u) is an indicator function for the set {|u| < ρ}. This, along with the fact that |Du|²/a = −ν · Du, lets us write

    ∫_{B_ρ^n(y) ∩ {|u−u(y)|<ρ}} (|Du|²/a) w ≤ −∫_Ω ν_i ηwγ′(u) D_i u = ∫_Ω ν_i ηγ(u) D_i w + ∫_Ω ν_i wγ(u) D_i η.
Then we use the Cauchy–Schwarz inequality to see

    ∫_Ω ν_i ηγ(u) D_i w + ∫_Ω ν_i wγ(u) D_i η ≤ ∫_Ω |ν| ηγ(u) |Dw| + ∫_Ω |ν| wγ(u) |Dη|,
after which we can use the facts that |ν| = 1, γ ≤ 2ρ and γ(u) = 0 when u < −ρ to see

    ∫_Ω |ν| ηγ(u) |Dw| + ∫_Ω |ν| wγ(u) |Dη| ≤ 2ρ ∫_{{u>−ρ}} η |Dw| + 2ρ ∫_{{u>−ρ}} w |Dη|.
We can now use the conditions on η set out in the statement of the lemma to see that

    2ρ ∫_{{u>−ρ}} η |Dw| + 2ρ ∫_{{u>−ρ}} w |Dη|
        ≤ 2ρ ∫_{B_{2ρ}^n ∩ {u>−ρ}} η |Dw| + 2ρ ∫_{B_{2ρ}^n ∩ {u>−ρ}} w |Dη|   by condition 4,
        ≤ 2ρ ∫_{B_{2ρ}^n ∩ {u>−ρ}} η |Dw| + 4 ∫_{B_{2ρ}^n ∩ {u>−ρ}} w   by condition 2,
        ≤ 2ρ ∫_{B_{2ρ}^n ∩ {u>−ρ}} η |Dw| + 4 ∫_{B_{2ρ}^n ∩ {u>−ρ}} a   since log a ≤ a,

and this completes the proof.
Lemma 5.11. Let y ∈ Ω, and find ρ > 0 such that B_{2ρ}^n(y) ⊂ Ω, and let η ∈ C_c^1(Ω) be a function satisfying the conditions in Lemma 5.10. Then the following inequality holds:

    ∫_{B_{2ρ}^n(y) ∩ {u−u(y)>−ρ}} η |Dw| ≤ (C/ρ) ∫_{B_{2ρ}^n(y) ∩ {u−u(y)>−2ρ}} a,
where C = C(u) is independent of the choice of y and ρ.
Proof. We will start with a relationship that we showed in (5.5) back in Lemma 5.6, namely

    ∫_{G(u)} |∇w|² ϕ + ∫_{G(u)} ∇w · ∇ϕ ≤ 0,

which holds for any ϕ ∈ C_c^1(Ω × R). If instead we take our test function as ϕ² ∈ C_c^1(Ω × R), we can rewrite this as

    0 ≥ ∫_{G(u)} |∇w|² ϕ² + ∫_{G(u)} ∇w · ∇ϕ² = ∫_{G(u)} |∇w|² ϕ² + 2 ∫_{G(u)} ϕ ∇w · ∇ϕ.
This gives

    ∫_{G(u)} |∇w|² ϕ² ≤ 2 ( ∫_{G(u)} ϕ² |∇w|² )^{1/2} ( ∫_{G(u)} |∇ϕ|² )^{1/2}

by Cauchy–Schwarz, and therefore

    ∫_{G(u)} |∇w|² ϕ² ≤ 4 ∫_{G(u)} |∇ϕ|².        (5.7)
We will also assume, without loss of generality, that y = 0 and u(y) = 0. Now, let us define the function that we will be using as our ϕ. We set

    ϕ(x′, x_{n+1}) = η(x′) λ(x_{n+1}),

where η is as in the statement of Lemma 5.10, and λ is a suitably differentiable cut-off function that satisfies
1) λ(x) = 1 for all x ∈ (−ρ, sup_{B_{2ρ}} u),
2) λ(x) = 0 for all x ∈ R \ (−2ρ, ρ + sup_{B_{2ρ}} u),
3) |λ′(x)| ≤ 2/ρ for all x ∈ R.
By choosing λ, and therefore ϕ, in this way, we are able to get a further bound for equation (5.7):

    ∫_{G(u)} |∇w|² ϕ² ≤ 4 ∫_{G(u)} |∇ϕ|² ≤ (C²/ρ²) H^n(B),        (5.8)

where

    B = {x ∈ G(u) : |x′| < 2ρ and −2ρ < x_{n+1} < ρ + sup_{B_{2ρ}} u}
is the set where ∇ϕ is non-zero. Now, because w is independent of x_{n+1}, we know that

    ν_{n+1} |Dw| ≤ |∇w|,        (5.9)

from the definition of the tangential gradient, and recall that ν_{n+1} = 1/a. We now have all of the facts we need for our estimate:
    ∫_{B_{2ρ}^n ∩ {u>−ρ}} η |Dw| ≤ ∫_{B_{2ρ}^n ∩ {u>−ρ}} a η |∇w|   by (5.9),
        ≤ ∫_B η |∇w| dH^n,
        ≤ ( ∫_B η² |∇w|² )^{1/2} H^n(B)^{1/2},   by Cauchy–Schwarz,
        ≤ ( ∫_B ϕ² |∇w|² )^{1/2} H^n(B)^{1/2},   since λ = 1 on B,
        ≤ ( ∫_{G(u)} ϕ² |∇w|² )^{1/2} H^n(B)^{1/2},   because B ⊂ G(u),
        ≤ (C/ρ) H^n(B),   by (5.8),
        = (C/ρ) ∫_{B_{2ρ}^n ∩ {u>−2ρ}} a,

which completes the proof.
Proof of Proposition 5.9. As before, we have done most of the work now. We know that

    ∫_{B_ρ^n(y) ∩ {|u−u(y)|<ρ}} (|Du|²/a) w
        ≤ 4 ∫_{B_{2ρ}^n(y) ∩ {u−u(y)>−ρ}} a + 2ρ ∫_{B_{2ρ}^n(y) ∩ {u−u(y)>−ρ}} η |Dw|,   from Lemma 5.10,
        ≤ 4 ∫_{B_{2ρ}^n(y) ∩ {u−u(y)>−ρ}} a + 2C_1 ∫_{B_{2ρ}^n(y) ∩ {u−u(y)>−2ρ}} a,   by Lemma 5.11,
        ≤ C_2 ∫_{B_{2ρ}^n(y) ∩ {u−u(y)>−2ρ}} a,
and we are done.
Proposition 5.12. For any y ∈ Ω, and ρ > 0 such that B_{3ρ}^n(y) ⊂ Ω, we have the following inequality:

    ∫_{B_{2ρ}^n(y) ∩ {u−u(y)>−2ρ}} a ≤ ω_n (2ρ)^n + C ρ^n ( 1 + (1/ρ) sup_{x ∈ B_{3ρ}(y)} [u(x) − u(y)] ).
Proof. As before, we assume without loss of generality that y = 0 and u(y) = 0. Now, by writing a = (1 + |Du|²)/a, we can see that

    ∫_{B_{2ρ} ∩ {u>−2ρ}} a ≤ ω_n (2ρ)^n + ∫_{B_{2ρ} ∩ {u>−2ρ}} |Du|²/a.

As usual, our task is to estimate the integral on the right hand side, and we do this by a careful choice of test function in the weak form of the minimal surface equation, from Lemma 5.3. Let µ be a function defined by

    µ(t) = t + 2ρ for t ≥ −2ρ,   µ(t) = 0 otherwise,

and let φ be a suitably differentiable function which satisfies
1. 0 ≤ φ ≤ 1 everywhere,
2. φ = 1 on B_{2ρ},
3. φ = 0 on R^n \ B_{3ρ},
4. |Dφ| ≤ 2/ρ everywhere.
We then take µ(u)φ as our test function, so

    ∫_Ω ν_i D_i(µ(u)φ) = 0,

which, when we expand the derivative, gives

    ∫_Ω ν_i µ′(u) D_i u φ = −∫_Ω ν_i µ(u) D_i φ.

So, using the fact that |Du|²/a = −ν · Du and our choice of test function, we know that

    ∫_{B_{2ρ} ∩ {u>−2ρ}} |Du|²/a ≤ −∫_Ω ν_i µ′(u) D_i u φ
        = ∫_Ω ν_i µ(u) D_i φ
        ≤ (2/ρ) ∫_{B_{3ρ} ∩ {u≥−2ρ}} (u + 2ρ),   by Cauchy–Schwarz and the definition of µ,
        ≤ C ρ^n [ 1 + (1/ρ) sup_{B_{3ρ}} u ],

which completes the proof.
We have now done most of the work we need in order to complete the proof of Theorem 5.1.
Proof of Theorem 5.1. We first collect the results we have proven in the propositions in this section to get a bound on w = log √(1 + |Du|²). Let y ∈ Ω, and choose ρ > 0 such that B_{3ρ}^n(y) ⊂ Ω. Then

    w(y) ≤ (1/(ω_n ρ^n)) ∫_{G(u) ∩ B_ρ^{n+1}(y)} w dH^n   by Proposition 5.5
         ≤ 1 + (1/(ω_n ρ^n)) ∫_{B_ρ^n(y) ∩ {|u−u(y)|<ρ}} (|Du|²/a) w   by Proposition 5.8
         ≤ 1 + (C_1/(ω_n ρ^n)) ∫_{B_{2ρ}^n(y) ∩ {u−u(y)>−2ρ}} a   by Proposition 5.9
         ≤ 1 + (C_1/(ω_n ρ^n)) ( ω_n (2ρ)^n + C_2 ρ^n ( 1 + (1/ρ) sup_{x ∈ B_{3ρ}(y)} [u(x) − u(y)] ) )   by Proposition 5.12
         ≤ C_3 ( 1 + sup_{x ∈ B_{3ρ}(y)} (u(x) − u(y))/ρ ).

Now notice that we can take ρ = d/3, where d = dist(y, ∂Ω), to see that

    w(y) ≤ C ( 1 + sup_{x ∈ B_d(y)} (u(x) − u(y))/d ).

By exponentiating both sides, we finally come to the conclusion that

    |Du(y)| ≤ exp(w(y)) ≤ exp ( C ( 1 + sup_{x ∈ B_d(y)} (u(x) − u(y))/d ) ),
which completes the proof.
References
[1] L.C. Evans. Partial Differential Equations. Graduate Studies in Mathematics 19. American Mathematical Society, 1998.
[2] D. Gilbarg and N.S. Trudinger. Elliptic Partial Differential Equations of Second Order, volume 224. Springer-Verlag, 2001.
[3] E. Giusti. Minimal Surfaces and Functions of Bounded Variation. Birkhäuser, 1984.
[4] Q. Han and F. Lin. Elliptic Partial Differential Equations. Courant Institute of Mathematical Sciences, 2000.
[5] S.T. Hughes. The Dirichlet problem for the minimal surface equation. 2011.
[6] S.T. Hughes. Topics in geometric analysis. 2011.
[7] J. Jost. Partial Differential Equations. Springer, 2002.