Partial Differential Equations I
Fall 2007
Prof. D. Slepčev*        Chris Almost†

Disclaimer: These notes are not the official course notes for this class. They were transcribed under classroom conditions, and as a result neither the accuracy of the transcription nor the correctness of the mathematics can be guaranteed.

Contents

0 Examples of PDE
1 Linear PDE
  1.1 Transport equation
  1.2 Laplace's equation
  1.3 Heat equation
  1.4 Wave equation
2 Nonlinear first order PDE
  2.1 Method of characteristics
  2.2 Hamilton-Jacobi equations
  2.3 Conservation laws
  2.4 Rankine-Hugoniot condition
  2.5 Existence of solutions
  2.6 Viscosity solutions
Index

* [email protected]   † [email protected]

0 Examples of PDE

An example of a simple one-dimensional PDE is u_t + u_x = 0, where u is a function of x (which we think of as space) and t (which we think of as time).

1. The transport equation is u_t + V · Du = 0, where V is a vector field. It is called the transport equation because the value of u is constant along the flow of the vector field V. This is a first order linear PDE.

2. The continuity equation is u_t + div(V u) = 0. Recall that div F = tr(DF) for a vector field F : Rn → Rn (here the divergence is taken in the space variables only).

3. A conservation law (in one dimension) is a PDE of the form u_t + F(u)_x = 0, where F is some function (not necessarily linear). We say that u is the conserved quantity in this case. The transport and continuity equations are conservation laws.

4. A Hamilton-Jacobi equation is a PDE of the form u_t + H(Du) = 0, where H is a given scalar function (the Hamiltonian). The characteristics of these equations are the solutions of Hamiltonian dynamics. These PDE are not necessarily linear.

5. Laplace's equation is −u_xx = 0 in one dimension, and −∆u = 0 in higher dimensions, where ∆u = tr(D²u) is the Laplacian. This equation describes equilibrium states in many systems. It is an elliptic PDE.

6. The heat equation is u_t − ∆u = 0. Laplace's equation gives the steady-state solutions of the heat equation. The heat equation is an example of a parabolic PDE: it smooths its initial and boundary conditions, and information propagates infinitely fast.

7. The wave equation is u_tt − ∆u = 0 and describes (small) waves moving in a system. Information does not propagate infinitely fast, and the wave equation does not smooth its initial conditions (but regularity is maintained).

8. Schrödinger's equation is iu_t + ∆u = 0. It is really a system of two real PDE.

9. The minimal surface equation (describing, for example, a soap film spanning a bent closed loop) is
   −div( Du / √(1 + |Du|²) ) = 0.

10. u_t − ∆u + f(u) = 0 is the reaction-diffusion equation. There is also nonlinear diffusion, given by u_t − ∆(u^m) = 0.

11. There are also important systems of equations. If u is a vector-valued function then the system of PDE
    u_t + Du · u = −Dp + ν∆u,   div u = 0
    is the Navier-Stokes system, which describes fluids.
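Several of the equations above can be checked directly. As a quick sanity check (an addition to these notes, not part of the lecture), the following sympy sketch verifies that u(x, t) = g(x − t) solves the one-dimensional transport equation for any smooth g, that x² − y² is harmonic, and that e^{−t} sin x solves the one-dimensional heat equation.

```python
import sympy as sp

x, y, t = sp.symbols('x y t', real=True)
g = sp.Function('g')

# Transport equation u_t + u_x = 0: u(x, t) = g(x - t) works for any smooth g.
u_transport = g(x - t)
print(sp.simplify(sp.diff(u_transport, t) + sp.diff(u_transport, x)))  # 0

# Laplace's equation -Δu = 0 in the plane: u(x, y) = x^2 - y^2 is harmonic.
u_harmonic = x**2 - y**2
print(sp.diff(u_harmonic, x, 2) + sp.diff(u_harmonic, y, 2))  # 0

# Heat equation u_t - u_xx = 0: u(x, t) = exp(-t) sin(x) is a solution.
u_heat = sp.exp(-t) * sp.sin(x)
print(sp.simplify(sp.diff(u_heat, t) - sp.diff(u_heat, x, 2)))  # 0
```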
Linear PDE 3 There are numerous questions that arise in the study of PDE. Intrinsic mathematical questions include the existence of solutions, uniqueness of solutions, and continuity with respect to data (i.e. stability). If a problem has a unique, regular solution then it is said to be well-posed. What are the properties of the solutions? 1 Linear PDE Any PDE can be written as F (x, u, Du, D2 u, . . . ) = 0. The PDE is said to be linear if F is a linear function of u, Du, D2 u, . . . , and homogeneous if no term is a function of x alone, otherwise it is inhomogeneous. For homogeneous linear PDE, linear combinations of solutions are also solutions. 1.1 Transport equation The transport equation is a first order linear equation. To solve it we will need a few facts about ODE. Let V be a smooth bounded vector field on Rn . Consider the IVP dx = V (x, t), x(0) = x 0 . dt It has a unique smooth solution defined everywhere, so we may define a mapping Φ(x 0 , t) := x(t). The mapping Φ(·, T ) : Rn → Rn is injective. (Indeed, consider a change of variable τ = T − t. Then dx dτ = dx dt · dt dτ = −V (x, T − τ). This change of variable “runs time backwards.” Structurally, the backwards equation is the same as the original, and if Φ(·, T ) were not injective then the backwards equation would not have a unique solution, a contradiction.) Similarly, Φ(·, T ) is surjective (by the same argument using existence instead of uniqueness). Since V is smooth, Φ is smooth, and we can solve the equation backwards in time, so Φ(·, t) is a smooth bijection for all t. The transport equation is u t + V · D x u = 0, u(x, 0) = g(x) (T) where g is smooth and V = V (x) is as above. Another way of writing (T) is D x,t u·(V, 1) = 0, so (T) is the assertion that the directional derivative of u vanishes in the direction (V, 1). Fix a point (x 0 , t 0 ) ∈ Rn+1 at which we would like to determine the value of u. Consider the system of ODE dx ds = V (x), dt x(0) = x 0 , ds = 1, t(0) = t 0 . Then t(s) = s + t 0 and ddsx = V (x) has a unique smooth solution by the theory of ODE. The curve (x(s), s + t 0 ) is a characteristic curve for the PDE. Notice that d ds u(x(s), s + t 0 ) = D x u · dx dt + u t = Du · V + u t , 4 PDE I which is identically zero if u is a solution to (T). Therefore u(x(s), s + t 0 ) is constant as a function of s. Whence the value when s = 0 is the same as the value when s = −t 0 , so u(x 0 , t 0 ) = g(x(−t 0 )). In the notation of the aside on ODE, u(t 0 , x 0 ) = g(Φ(·, t 0 )−1 (x 0 )). 1.1.1 Theorem. Let V be a smooth bounded vector field on Rn and g be a smooth function Rn → R. Then (T) has a unique smooth solution. Consider now the inhomogeneous form u t + V · D x u = f (x, t), u(x, 0) = g(x). Then the directional derivative of u in the direction (V, 1) is now f (x, t). On the = f (x(s), s + t 0 ). If solution curves to the same ODE, z(s) := u(x(s), s + t 0 ) has dz ds f is smooth then we can solve this equation (in principle) when x is known. We could go further and solve u t + V · D x u = h(x, t, u), u(x, 0) = g(x) by the similar methods. 1.1.2 Example (u t + 2u x = u, u(x , 0) = g (x )). Write ddsx = 2, x(0) = x 0 , which has solution x(s) = x 0 +2s, so Φ(x 0 , t 0 ) = x 0 +2t 0 . The characteristic curves are straight lines. The inverse mapping is Φ(·, t)−1 : x 7→ x − 2t. Taking z(s) = u(x(s), s), we have dz ds = Du · V + u t = u(x(s), s) = z. 
Whence z(t) = z(0)e t , where z(0) = u(x 0 , 0) = g(x 0 ), and u(x 0 + 2s, s) = z(s) = g(x 0 )es , so by renaming variables we see that u(x, t) = g(x − 2t)e t is the solution to the PDE. Warning: The symbol x is used in two different ways, as a variable independent of t and as a function of s. When x appears as a function of s then it will always be written x(s). It is hoped that this is not too confusing. 1.2 Laplace’s equation The Laplace equation is −∆u = 0 in Rn . In n = 1 the solutions are exactly the linear functions. In n = 2 the second derivatives “balance each other out,” so we get solutions like u(x, y) = x 2 − y 2 that are convex in one variable and concave in the other. The Laplace equation gives the equilibrium solutions to the heat equation, which will be discussed in the next section. A related equation is the Poisson equation, −∆u = f for a function f . It describes the equilibrium solution to the heat equation u t − ∆u = f , where f describes the source of heat. 1.2.1 Definition. A C 2 function u for which −∆u = 0 is called a harmonic function. Laplace’s equation 5 Solution to the Poisson equation Notice that the Laplace equation is translation invariant, i.e. if a ∈ Rn and u is a solution then u(· − a) is also a solution. Moreover, it is invariant under all orthogonal transformations, i.e. if R is an orthogonal matrix and u is a solution then u(R·) is also a solution. Indeed, let w(x) = u(Rx) and notice that ∂w = ∂ yi n X ∂u k=1 ∂ xk R ki , so ∂ 2w ∂ yi2 = n X n X ∂ 2u k=1 `=1 ∂ x k ∂ x` R ki R`i and it follows that −∆w = − n X n X ∂ 2u i=1 k,`=1 ∂ x k ∂ x` R ki R`i = − n X ∂ 2u k,`=1 ∂ x k ∂ x` δk,` = −∆u = 0. To solve the Laplace equation we p first seek a radial solution, a solution of the form u(x) = v(r), where r = |x| = x 12 + . . . .x n2 on x ∈ Rn \ {0} (we wish to avoid trivial constant solutions). Then ∂u ∂ xi = v0 ∂r ∂ xi = v0 xi r , Whence 00 −∆u = −v − v ∂ 2u so 0 ∂ xi n r +v 0 2 1 r = v 00 x 2 i r 00 =− v + + v0 n−1 r 1 r v 0 − v0 xi xi r2 r . , and for u to be a solution we need to solve v 00 + n−1 v 0 = 0. We can do this r (exercise), and the solutions are ¨ c1 log r + c2 n = 2 v(r) = c1 + c2 n≥3 r n−2 We are looking only for one particular special solution, so we may set c2 = 0. We require that Z −Du(x) · ν ds = 1, ∂ B(0,r) where ν is the outward pointing normal vector field, for various reasons having to do with the Gauss-Green Theorem that I didn’t really understand. In n = 2 this gives Z Z 2π 1 1 x x 1 = −c1 · ds = −c1 dθ = −c1 · 2π, |r| r r r ∂ B(0,r) 0 1 so we take c1 = − 2π . The fundamental solution to Laplace’s equation is ( Φ(x) = 1 log |x| − 2π 1 (n−2)S(n)|x|n−2 n=2 n≥3 6 PDE I where S(n) is the surface area of B(0, 1) in Rn . Recall that n S(n) = nVol(B(0, 1)) = n Notice that ( DΦ(x) = π2 Γ( 2n + 1) 1 1 x − 2π |x| |x| . n=2 1 1 x − S(n) |x|n−1 |x| n≥3 so for n = 2, Z DΦ(x) · ∂ B(0,") x |x| dS = Z − ∂ B(0,") = Z − ∂ B(0,") =− 1 1 2π " 1 1 x · x 2π |x| |x| |x| 1 1 2π " dS dS 2π" = −1 and for n ≥ 3 the computation is basically the same and the result is also −1. Therefore Z ∂ B(0,") ∂ν Φ dS = −1, a fact which we will need later. 1.2.2 Convolution. Let k be a smooth function on Rn such that R for all |α| ≥ 0. Let f ∈ L 1 (Rn ) (so Rn | f | < ∞). Then k ∗ f (x) := Z k(x − y) f ( y)d y = Rn and Z ∂ αk ∂ xα is bounded k(z) f (x − z)dz = f ∗ k(x) Rn ∂ αk ∗ f (x) = ∂ αk ∗ f (x). ∂ xα ∂ xα It follows that k ∗ f is also smooth, and ∆(k ∗ f ) = 0 if ∆k = 0. 1.2.3 Dominated Convergence Theorem. Suppose that f k ∈ L 1 (Rn ) and f k → f a.e. 
If thereR is a nonnegative g ∈ L 1 (Rn ) such that | f k | ≤ g for all k then f ∈ R 1 n L (R ) and Rn f k → Rn f . 1.2.4 Theorem. Let f ∈ Cc2 (Rn ). Then Z u(x) := Rn is a solution to −∆u = f . Φ(x − y) f ( y)d y = Φ ∗ f Laplace’s equation 7 This theorem gives a solution to Poisson’s equation when f is sufficiently nice. It may be extended to larger classes of f with hard work. Notation (®). We write A( f , g, . . . ) ® B( f , g, . . . ) if there a (positive?) constant C independent of the entries f , g, . . . such that A ≤ C B. PROOF: Consider that Z |Φ(x)| d x = 2D R Z ¨ (log r)r d r ® C 0 B(0,R) R2 (| log R| + 1) R2 n=2 n≥3 so Φ ∈ L 1 (B(0, R)) for all R > 0. Therefore u is well-defined for all x ∈ Rn , using the fact that f is bounded. Let {e1 , . . . , en } denote the standard orthonormal basis for Rn . Now u(x + hei ) − u(x) h = Z f (x + hei − y) − f (x − y) Φ( y) h Rn Z Φ( y) → Rn ∂f ∂ xi (x − y)d y = ∂u ∂ xi dy (x) by the Dominated Convergence Theorem, since Φ( y) f (x + hei − y) − f (x − y) ≤ Φ( y)kD f k∞ h and since we can reduce the integral over Rn to the integral over some compact set. Repeating this argument we get that ∂ 2u ∂ xi∂ x j = Z Φ( y) Rn ∂2f ∂ xi∂ x j (x − y)d y, so ∆u = Z Φ( y)∆ f (x − y)d y. Rn Let " > 0. Since Φ not defined at 0, we will break up the integral into an integral over B(0, ") and an integral over the rest. Let A" = Rn \ B(0, "). Z Φ( y)∆ f (x − y)d y ≤ B(0,") Z |Φ( y)| · |∆ f (x − y)|d y B(0,") 2 Z ® kD f k∞ |Φ( y)|d y B(0,") ¨ 2 ® kD f k∞ " 2 | log " + 1| "2 n=2 n≥3 8 PDE I which goes to 0 as " → 0. By integration by parts, Z Φ( y)∆ f (x − y)d y A" =− Z ∇Φ( y) · ∇ f (x − y)d y − Z Φ( y)∇ f (x − y) · ν dS y ∂ B(0,") A" But the magnitude of far righthand term is Z ® kD f k∞ ¨ |Φ( y)|dS y ® kD f k∞ ∂ B(0,") "| log "| 1 " n−1 = " " n−2 n=2 n≥3 which goes to zero as " → 0. Integrating the other term by parts give Z Z ∆Φ( y) f (x − y)d y + f (x − y)∇Φ( y) · ν dS y ∂ B(0,") A" =0+ Z ( f (x) − y · D f (x − θ y))∇Φ( y) · ν dS y ∂ B(0,") = f (x) Z ∇Φ( y) · ν dS y − ∂ B(0,") = − f (x) − Z y · D f (x − θ y))∇Φ( y) · ν dS y ∂ B(0,") Z y · D f (x − θ y))∇Φ( y) · ν dS y ∂ B(0,") using the Taylor expansion of f around x. Finally, the magitude of the last term is ( 1 " n=2 ® "kD f k∞ " 1 n−1 = "kD f k∞ " n ≥3 " n−1 which goes to zero as " → 0. Putting it all together, ∆u = − f , so u satisfies the Poisson equation. Mean-value property of harmonic functions 1.2.5 Theorem. Let u be a harmonic function on an open set Ω and assume that B(x, R) ⊆ Ω. Then Z Z u(x) = − u( y)dS y and u(x) = − ∂ B(x,R) y( y)d y. B(x,R) PROOF: For the first equality, consider that Z ψ(r) = − ∂ B(x,r) R u( y)dS y = u( y)dS y ∂ B(x,r) r n−1 S(n) , Laplace’s equation 9 for 0 < r ≤ R, is a continuous function of r. We have lim r→0+ ψ(r) = u(x), which can be proved along the lines of the previous theorem, noting that Z ψ(r) = − u(x) + ( y − x)Du(θ r x + (1 − θ r ) y)dS y ∂ B(x,r) for some θ r ∈ (0, 1). Therefore ψ is well-defined and continuous on [0, R]. We will show that ψ is constant. To take the derivative we must first introduce a y−x change of variables z ← R . Then R Z r n−1 ∂ B(0,1) u(x + rz)dSz ψ(r) = =− u(x + rz)dSz . r n−1 S(n) ∂ B(0,1) Thus dψ dr Z d =− ∂ B(0,1) Z =− dr u(x + rz)dSz ∇u(x + rz) · z dSz Chain rule ∂ B(0,1) Z =− ∇u( y) · y−x ∂ B(x,r) R B(x,r) = = r dS y ∆u( y)d y Divergence Theorem S(n)r n−1 r R B(x,r) ∆u( y)d y S(n) = nVol(B(0, 1)) n Vol(B(0, 1))r n Z r = − ∆u( y)d y = 0 n B(x,r) u is harmonic and it is seen that ψ is constant. In particular, ψ(R) = ψ(0) = u(x). 
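The mean-value property is easy to test numerically. The sketch below (an addition, not part of the notes) approximates the average of the harmonic function u(x, y) = x² − y² over a circle by quadrature and compares it with the value at the centre; a non-harmonic function is included for contrast.

```python
import numpy as np

def circle_average(f, center, r, n=200000):
    """Approximate the average of f over the circle of radius r about center."""
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    xs = center[0] + r * np.cos(theta)
    ys = center[1] + r * np.sin(theta)
    return np.mean(f(xs, ys))

u_harmonic = lambda x, y: x**2 - y**2     # harmonic: circle average equals centre value
u_generic = lambda x, y: x**2 + y**2      # not harmonic: the averages differ

center, r = (1.3, -0.7), 2.0
print(circle_average(u_harmonic, center, r), u_harmonic(*center))  # both approximately 1.20
print(circle_average(u_generic, center, r), u_generic(*center))    # approximately 6.18 vs 2.18
```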
The second equality is proved below. 1.2.6 Coarea Formula. Let η : Rn → R be Lipschitz and for all λ ∈ R {x ∈ Rn | η(x) = λ} is, for a.e. λ, either a smooth (n − 1)-dimensional hypersurface in Rn R n or empty. Let f : R → R be continuous and such that Rn f exists. Then Z Z∞Z f · |∇η| = f dSdλ. −∞ {η=λ} 1.2.7 Example. Take η = |x|. Then |∇η| = 1 (except at x = 0 where the gradient does not exist), and so Z Z ∞Z f = Rn f ( y)dS y d r. 0 {|x|=r} 10 PDE I To finish the proof the theorem, we take η( y) = | y − x| and note that Z Z u( y)d y = B(x,R) = = = u( y)1B(x,R) d y Rn Z ∞Z u( y)1B(x,r) dS y d r 0 | y−x|=r Z RZ u( y)dS y ∂ B(x,r) 0 ZR u(x)S(n)r n−1 d r 0 = u(x)nVol(B(0, 1)) rn n = u(x)Vol(B(x, r)) R 1.2.8 Theorem. If u ∈ C 2 (Ω) satisfies u(x) = −∂ B(x,R) u( y)d y for every B(x, R) ⊆ Ω (this is the mean-value property) then u is harmonic in Ω. PROOF: Assume that u has the mean-value property but is not harmonic. Then there is some point z ∈ Ω such that ∆u(z) 6= 0. Then since ∆u is continuous there R > 0 such that, without loss of generality, ∆u > 0 on B(z, R). Take ψ as in the proof of the last theorem. The averaging property says simply that ψ is constant (indeed, ψ ≡ u(z)). But for 0 < r < R, Z r 0 ψ (r) = − ∆u( y)d y > 0 n B(z,r) since ∆u is positive on B(z, R). Regularity property of harmonic functions 1.2.9 Mollifiers. Mollifiers are functions that are used for “smoothing.” For this class a mollifier is a function η : Rn → R such that 1. η ∈ C ∞ (Rn ) 2. η ≥ 0 R 3. Rn η = 1 4. η is radially symmetric and η(|x|) is non-increasing. 5. η has compact support. For example, we may take ( η(x) = cn exp( |x|21−1 ) |x| < 1 0 |x| ≥ 1 Laplace’s equation 11 were the constant cn is chosen so that R Rn η" (x) := and notice that R B(0,") η = 1. For " > 0 take 1 x " " η n η" ( y)d y = 1. 1.2.10 Regularization. Let f be locally integrable on Ω. A regularization of f is f " (x) = η" ∗ f (x) defined on Ω" = {x ∈ Ω | dist(x, ∂ Ω) > "} (notice that Z Z f " (x) = f (x − y)η" ( y)d y = B(0,") f ( y)η" (x − y)d y, B(x,") so we require that B(x, ") ⊆ Ω). Some facts are true. 1. f " ∈ C ∞ (Ω" ) 2. f " → f a.e. on Ω as " → 0 P P 3. If | f | p is locally integrable (i.e. f ∈ Lloc (Ω)) then f " → f in Lloc (Ω). 4. If f ∈ C(Ω) then f " → f uniformly on compact subsets of Ω. 5. If f ∈ C k (Ω) then f " → f in C k (K) for all K ⊂⊂ Ω. 1.2.11 Theorem. Let u ∈ C(Ω) satisfy the mean-value property. Then u is infinitely differentiable (i.e. u ∈ C ∞ (Ω)). PROOF: We have for x ∈ Ω" u" (x) = η" ∗ u(x) Z = = = = η" (x − y)u( y)d y B(x,") Z "Z 0 Z" 0 Z" η" (x − y)u( y)dS y d r ∂ B(x,r) η" (r) Z u( y)dS y d r η" (r)u(x) 0 = u(x) u is radially symmetric ∂ B(x,r) Z Z 1dS y d r by MVP ∂ B(x,r) η" (x − y)d y B(x,") = u(x) Therefore u = u" ∈ C ∞ (Ω" ). Letting " → 0 gives the result. 12 PDE I Maximum principle 1.2.12 Theorem (Maximum Principle). Let Ω be a bounded open set and let u ∈ C 2 (Ω) ∩ C(Ω) be harmonic. 1. (Weak maximum principle) maxΩ u = max∂ Ω u. 2. (Strong maximum principle) If Ω is connected and there is x 0 ∈ Ω such that u(x 0 ) = maxΩ u then u is constant on Ω. PROOF: Clearly the strong maximum principle implies the weak maximum principle. If x 0 ∈ Ω is a point such that u(x 0 ) = maxΩ u then u is constant on the connected component containing x 0 , so it attains this maximum value on the boundry of this component (and hence on the boundry of Ω). Suppose that Ω is connected and there is x 0 ∈ Ω such that u(x 0 ) = maxΩ u. Let r be such that B(x 0 , r) ⊆ Ω. 
Since u is harmonic, by the MVP we have u(x 0 ) = R − u(x)d x. But u(x 0 ) ≥ u(x) for all x ∈ B(x 0 , r), so u ≡ u(x 0 ) on B(x 0 , r). B(x 0 ,r) (Indeed, assume that there is y0 ∈ B(x 0 , r) such that u( y0 ) < u(x 0 ). Let " := u(x 0 )−u( y0 ) and notice that by continuity there is δ > 0 such that u( y) < u(x 0 )− " for all y ∈ B( y0 , δ). But then 2 Z Z Z u(x)d x = B(x 0 ,r) u(x)d x + B( y0 ,δ) u(x)d x B(x 0 ,r)\B( y0 ,δ) " ≤ u(x 0 ) − |B( y0 , δ)| + u(x 0 )(|B(x 0 , r)| − |B( y0 , δ)|) 2 " = u(x 0 )|B(x 0 , r)| − |B( y0 , δ)| 2 a contradiction of the MVP.) Let A := {x ∈ Ω | u(x) = u(x 0 )}. Then A is open by the argument above, and A is closed (relative to Ω) since it is u−1 ({u(x 0 )}) and u is continuous. Therefore A = Ω since Ω is connected, and u is constant on Ω, again by continuity. 1.2.13 Theorem (Comparison). Let u and v be solutions of the Poisson equation −∆u = f on a bounded open set Ω. Assume that u, v ∈ C 2 (Ω) ∩ C(Ω) and v ≥ u on ∂ Ω. Then v ≥ u on Ω. PROOF: Consider w := v − u. Then −∆w = 0, w ∈ C 2 (Ω) ∩ C(Ω), and w ≥ 0 on ∂ Ω. By the maximum principle (minimum principle), w ≥ 0 on Ω. 1.2.14 Theorem (Uniqueness). Let g ∈ C(∂ Ω) and f ∈ C(Ω). Then there is at most one solution u ∈ C 2 (Ω) ∩ C(Ω) of ¨ −∆u = f on Ω u= g on ∂ Ω PROOF: Assume that u1 and u2 are solutions. Then w := u1 − u2 is harmonic and w ≡ 0 on ∂ Ω. By the maximum and minimum principles it follows that w ≡ 0 on Ω. Laplace’s equation 13 Local estimates 1.2.15 Theorem. Let u be harmonic on B(x, r). Let α = (α1 , . . . , αn ) be a multiindex of order k = |α|. Then |Dα u(x)| ≤ (2n+1 nk)k 1 ωn r n+k kuk L 1 (B(x,r)) , where n is the dimension and ωn is the volume of the unit ball. PROOF: We prove the case where k = 1. Then Dα u = u x i for some i. Notice that all derivatives of harmonic functions are harmonic functions. Z |∂ x i u(x)| = − B(x, 2r ) = = 2n ∂ x i u( y)d y Z ωn r n 2n ωn r n = B(x, 2r ) u x i ( y)d y Z u · νi dS y ∂ B(x, 2r ) 2n nωn r n−1 kuk L ∞ (B(x, r )) 2 ωn r n 1 2n−1 n 2n 2 ≤ kuk L 1 (B(x,r)) r ωn r n The proof for all other k is by induction. 1.2.16 Theorem (Liouville). If u : Rn → R is harmonic and bounded then u is constant. PROOF: Let x ∈ Rn . |∂ x i u(x)| ≤ c r r+1 kuk L 1 (B(x,r)) ≤ c r r+1 M ωn r n = c̃ r for all r > 0, since u is bounded. Therefore the left hand side is zero, and u must be constant since all partials are zero. 1.2.17 Theorem (Harnack’s Inequality). Let u ≥ 0 be harmonic on an open set Ω. Let V be a connected open set such that V ⊆ Ω and V is compact. There exists C > 0 depending only on V and Ω such that supV u ≤ C infV u. PROOF: Let r = 14 dist(V , ∂ Ω), and x, y ∈ V . If |x − y| < r then Z Z 1 u(x) = − u(z)dz = u(z)dz ωn 2n r n B(x,2r) B(x,2r) Z Z 1 1 ωn r n 1 ≥ u(z)dz = u(z)dz = n u( y). − ωn 2n r n B( y,r) ωn 2n r n 1 2 B( y,r) 14 PDE I Cover V with the balls B(z, 2r ) and choose a finite sub-cover. Let K be the number of balls required. Now if x and y are farther apart than r we can show that u(x) ≥ 21nK u( y), from which the inequality follows. Green’s function R Recall that a solution to −∆u = f (for sufficiently nice f ) is u = Φ ∗ f = Rn Φ(x − y) f ( y)d y. Let Ω be a bounded open set with boundary C 1 , and consider the boundary value problem ¨ −∆u = f on Ω u=0 on ∂ Ω R Is there is function G such that u = Ω G(x − y) f ( y)d y is a solution to this problem? If u is a solution to Poisson’s equation on Rn then Z Z u(x) = Φ(x − y) f ( y)d y = − Φ( y − x)∆u( y)d y. Rn Rn There is no longer equality when Rn is replaced with Ω. 
We will look for a correction term. 1.2.18 Green’s Identity. Let u, v ∈ C 2 (Ω) ∩ C 1 (Ω), where ∂ Ω is C 1 . Then Z Z ∂v ∂u u∆v − v∆u = u −v dS. ∂ ν ∂ ν Ω ∂Ω Let Ω" := Ω x," := Ω \ B(x, "). Z Φ( y − x)∆u( y)d y Ω" =− Z ∇Φ( y − x) · ∇u( y)d y + Z ∂ Ω" Ω" = Z ∆Φ( y − x)u( y)d y − Z Ω" = u( y) · ∂ Ω" Z −u( y) · ∂Ω + ∂Φ ∂ν ( y − x) + Φ( y − x) · Z u( y) · ∂ B(0,") = Φ( y − x) · Z −u( y) · ∂Ω ∂Φ ∂ν ∂Φ ∂ν ∂Φ ∂ν ∂u ∂ν ( y − x)dS y + ∂u ∂ν ∂u ∂ν Z Φ( y − x) · ∂ Ω" ∂u ∂ν ( y)dS y ( y)dS y ( y − x) − Φ( y − x) · ( y − x) + Φ( y − x) · ( y)dS y ∂u ∂ν ( y)dS y ( y)dS y − u(x) + O(") so u(x) = − Z Ω Φ( y − x)∆u( y)d y − Z u( y) · ∂Ω ∂Φ ∂ν ( y − x) + Φ( y − x) · ∂u ∂ν ( y)dS y . Laplace’s equation 15 But we don’t know how to compute ∂∂ νu . For each point x we introduce a corrector function ϕ x ( y) defined by ¨ −∆ϕ x = 0 in Ω ϕ x ( y) = Φ( y − x) on ∂ Ω Then Z u∆ϕ − ϕ ∆u = x Z x Ω u ∂ ϕx ∂Ω − Φ( y − x) ∂ν ∂u ∂ν ( y)dS y , which contains the term for which we are looking. Plugging into above, defining Green’s function G(x, y) = Φ( y − x) − ϕ x ( y), Z Z Z ∂Φ ∂ ϕx x u(x) = −Φ( y − x)∆ud y + ϕ ∆ud y + −u( y) ( y − x) + u( y) ( y)dS y ∂ν ∂ν Ω Ω ∂Ω Z Z ∂G = − G(x, y)∆ud y − u( y) (x, y)dS y . ∂ νy Ω ∂Ω If a solution exists to the problem (a big “if” at this point) ¨ −∆u = f on Ω u= g on ∂ Ω then u(x) = Z G(x, y) f ( y)d y − Ω Z g( y) ∂Ω ∂G ∂ νy (x, y)dS y . If instead we are given data about ∂∂ νu (Von Neumann boundary conditions) then the problem term is u on the boundary. In this case we define a different type of corrector via ( −∆ϕ x = 0 in Ω ∂ ϕx ∂y ( y) = ∂Φ (y ∂y − x) on ∂ Ω 1.2.19 Proposition. Let Ω be a bounded, connected open set with C 1 boundary. Then G(x, y) = G( y, x) for all x, y ∈ Ω. PROOF: Let v(z) = G(x, z) and w(z) = G( y, z). We need to show that v( y) = w(x). Let Ω" = Ω \ (B(x, ") ∪ B( y, ")). Z 0= v∆w − w∆v dz Ω" = Z ∂ Ω" = v∂ν w − w∂ν v dSz Z ∂ B(x,")∪∂ B( y,") −v∂ν w + w∂ν v dSz 16 PDE I Now Z ∂ B(x,") −(Φ(z − x) − ϕ x (z))∂ν w + w∂ν (Φ(z − x) − ϕ x (z)) dSz R converges to ∂ B(x,") ∂ν Φ(z − x) · w dSz as " → 0, as before, and this last term goes to w(x), as before. An analogous argument for the other ball shows that the whole thing goes to w(x) − v( y). This proposition allows to extend the definition of G to the boundary of Ω in both variables (but not simultaneously). n Green’s function on R+ We need to solve ¨ n −∆ϕ x = 0 in R+ n ϕ x = Φ( y − x) on ∂ R+ Take ϕ x ( y) = Φ( ỹ − x) where ỹ = ( y1 , . . . , yn−1 , − yn ) is the reflection of y through {x n = 0}. The idea here is that Φ is a function with the desired boundary values, but fails to be harmonic at x. Instead we take Φ reflected through {x n = 0} then it becomes harmonic on the upper half space and has the correct boundary conditions. We take G(x, y) = Φ( y − x) − Φ( ỹ − x). Then for y on the boundary ∂G ∂ νy (x, y) = − 2x n 1 S(n) |x − y|n for n ≥ 3. 1.2.20 Theorem. Let g ∈ C b (Rn−1 × {0}). Then for Z u(x) := − Rn−1 ×{0} ∂G ∂ νy (x, y) · g( y)d y we have the following n n 1. u ∈ C ∞ (R+ ) ∩ L ∞ (R+ ) n 2. −∆u = 0 on R+ 3. lim x→x 0 ,x∈R+n u(x) = g(x 0 ) for x 0 ∈ Rn−1 × {0}. PROOF: Note that ∂G ∂ νy n (x, y) is smooth on R+ × (Rn−1 × {0}), and in fact it is har- monic in x and in y for y on the boundary. An explicit computation shows that R ∂G n (x, y)d y = 1 for all x ∈ R+ . Rn−1 ×{0} ∂ ν y Laplace’s equation 17 Energy Methods Consider the boundary value problem ¨ −∆u = 0 u= g E(u) = R Ω in Ω on ∂ Ω |∇u|2 d x is the associated energy functional for the problem. 1.2.21 Theorem (Uniqueness). 
The Poisson problem for a bounded domain Ω has at most one solution in C 2 (Ω) ∩ C(Ω). PROOF: Assume that u and v are both solutions and consider w := u − v. Then −∆w = 0 in Ω and w ≡ 0 on ∂ Ω. We have Z Z Z ∂w 2 E(w) = |∇w| d x = − ∆w · w d x + w dS = 0, ∂ν Ω Ω ∂Ω so ∇w ≡ 0 a.e. in Ω. Since w is C 2 , it is zero everywhere on Ω. Now consider the boundary value problem ¨ −∆u = f in Ω u= g on ∂ Ω R In this case E(u) = Ω 12 |∇u|2 − u f d x is the associated energy functional for the problem. The admissible set of functions for the problem is A = {u ∈ C 2 (Ω) ∩ C(Ω) | u = g on ∂ Ω}. 1.2.22 Theorem. If u is a solution of the above BVP then u minimizes the energy E over the admissible set A . Conversely, if there is a minimizer u of E over A then u solves the BVP. PROOF: Assume that u minimizes E over A . Consider i(τ) := E(u + τv) where v is such that u + τv ∈ A for all τ ∈ R, e.g. v ∈ C 2 (Ω) ∩ C(Ω) and v ≡ 0 on ∂ Ω. Then τ = 0 must be a critical point of i. We have i(τ) = E(u + τv) Z 1 1 = |∇u|2 + τ∇u · ∇v + τ2 |∇v|2 − u f − τv f d x 2 2 ZΩ 0 = i 0 (0) = = ∇u · ∇v − f v d x ZΩ (−∆u)v − f v d x + Ω = Z Ω Z ∂Ω v(−∆u − f )d x (∇u)v · ν dS 18 PDE I Since this is true for every v, we must have −∆u − f ≡ 0. Therefore u solves the BVP. Conversely, consider w ∈ A and E(w) − E(u). Let v = w − u, so w = v + u. Then Z Z 1 1 2 E(w) − E(u) = |∇(u + v)| − (u + v) f d x − |∇u|2 − u f d x 2 2 Ω ZΩ 1 = ∇u · ∇v + |∇v|2 − v f d x 2 ZΩ Z 1 2 = (−∆u)v + |∇v| − v f d x + v(∇u · ν)dS 2 ∂Ω ZΩ 1 = |∇v|2 d x ≥ 0 2 Ω so u minimizes E. Notice that the difference is strictly positive if w 6= u. 1.3 Heat equation The (inhomogeneous) heat equation IBVP is in Ω × (0, T ) u t − ∆u = f u= g on ∂ Ω × (0, T ) u(x, 0) = u0 (x) for all x ∈ Ω where g is assumed to be constant in time. Conditions on Ω × {0} are called initial conditions and conditions on ∂ Ω × (0, T ) are lateral boundary conditions. The energy functional for this problem is E(u) = Z Ω 1 2 |∇u|2 − u f d x. Then dE dt = Z ∇u · ∇u t − f u t d x Ω = Z (−∆u − f )u t d x + Z ∂ =− u t (∇u · ν)dS ΩΩ Z (∆u + f )2 d x ≤ 0 Ω Something about dissipating energy, therefore solutions will converge as t → ∞ to a solution to the Poisson problem. Heat equation 19 Physical motivation We consider a simple physical model where u is interpreted as the temperature of R a domain Ω. The energy of this system at a time t is Ω ρcu(x, t)d x, where ρ is the density and c is the specific heat capacity, and we assume these quantities do not depend on the time or temperature. The change in energy in Ω is the flow of energy through the boundary, Z Z dE =− q · ν dS = − div q d x, dt ∂Ω Ω R where q is the heat flux density. But we have ddEt = ρc Ω u t , so combining these, R ρcu t + div q d x = 0 and we conclude ρcu t + div q ≡ 0. Fourier’s law of heat Ω conduction says that q = −k∇u for some constant k depending on the material. k Therefore ρcu t − k∆u = 0, or u t − ρc ∆u = 0. By scaling (in x) it suffices to study the equation u t − ∆u = 0, the heat equation. Now suppose that energy is not completely conserved in this system. Then Z Z dE =− q · ν dS + ρc f (x)d x, dt ∂Ω Ω k and through a similar analysis we can show that u satisfies u t − ρc ∆u = f . Consider ¨ vt − ∆v = 0 on Ω × [0, ∞) ∂ν v = 0 on ∂ Ω × [0, ∞) R and assume that v is a smooth solution. Then Ω v is constant in time, i.e. d dt Z Ω v dx = Z ∆v d x = Ω Z ∂Ω ∂ν v dS = 0. Scaling properties If u is a solution to the heat equation then u(λx, λ2 t) is also a solution for any λ > 0. This is a property of any elliptic equation. 
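This scaling is easy to verify for a particular solution. The sketch below (an addition, not part of the notes) takes the solution u(x, t) = e^{−t} sin x of the one-dimensional heat equation and checks symbolically that u(λx, λ²t) is again a solution.

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
lam = sp.symbols('lambda', positive=True)

u = sp.exp(-t) * sp.sin(x)                              # solves u_t - u_xx = 0
print(sp.simplify(sp.diff(u, t) - sp.diff(u, x, 2)))    # 0

v = u.subs({x: lam * x, t: lam**2 * t})                 # the rescaled function u(λx, λ²t)
print(sp.simplify(sp.diff(v, t) - sp.diff(v, x, 2)))    # 0, so the rescaling is again a solution
```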
We look for a solution that preserves this scaling, i.e. u such that λk u(λx, λ2 t) is constant in λ (for some k). Total heat is conserved, so we need the following integral to be independent of λ. Z Z Z 1 k 2 k 2 k−n λ u(λx, λ t)d x = λ u( y, λ t) n d y = λ u( y, λ2 t)d y, λ where y = λx is a change of variable. Total heat is independent of the time variable, so we must take k = n. We want λn u(λx, λ2 t) to be constant in λ. Taking λ = p1t we get u(x, t) = 1 n t2 u x , 1 . p t 20 PDE I Let ϕ(z) = u(z, 1), the similarity profile. Plugging this into the heat equation, we get n − n −1 x 1 − n −1 x x x − 2n −1 2 2 0 = u t − ∆u = − t ϕ p − t ∇ϕ p ·p −t ∆ϕ p 2 2 t t t t so 0 = − 2n ϕ(z) − 21 ∇ϕ(z) · z − ∆ϕ(z). We now further require that ϕ be radially symmetric. This reduces the last equation to an ODE which we can solve. The ϕ r (where r = |z| Laplacian of a radial function is given by ∆z ϕ(z) = ϕ r r + n−1 r and we abuse notation ϕ(r) = ϕ(z)), so the ODE is ϕr r + n−1 r 1 n ϕ r + ϕ r · r + ϕ = 0, 2 2 where we require that ϕ and the derivatives go to zero sufficiently fast as r → ∞. To solve it we divide by r n−1 , giving (r n−1 ϕ r ) r + 21 (r n ϕ) r = 0. The decay we desire of ϕ and ϕ r as r → ∞ gives that r n−1 ϕ r + 12 r n ϕ = 0. Solving we get that R 1 2 n ϕ(r) = ce− 4 r . We choose c so that Rn ϕ(z)dz = 1, i.e. c −1 = (4π) 2 since Z e − x2 1 4 ···e − 2 xn 4 d x1 · · · d x n = Z e Rn n 2 − x4 dx n = (4π) 2 . R The last integral is computed by noting that Z e− x2+ y2 4 dx d y = R2 Z 2π Z ∞ 0 r2 e− 4 r d r dθ = 4π ∞ e−s ds = 4π. 0 0 Φ(x, t) = Z 1 |x| (4πt) n 2 e− 4t is the fundamental solution to the heat equation. Cauchy problem Let g ∈ L ∞ (Rn ) ∩ C(Rn ) and consider the Cauchy problem for the heat equation, the initial value problem ¨ u t − ∆u = 0 on Rn × (0, ∞) u(x, 0) = g for all x ∈ Rn 1.3.1 Theorem. Every function of the form Z u(x, t) = Φ(·, t) ∗ g = Rn is a solution to the IVP above. In particular, 1. u ∈ C ∞ (Rn × (0, ∞)) Φ(x − y, t)g( y)d y Heat equation 21 2. lim x→x 0 ,t→0+ u(x, t) = g(x 0 ) for all x 0 ∈ Rn . PROOF: By its definition u is smooth on Rn × (0, ∞) (use the same arguments as before for space, and time is even simpler). Z Z u t − ∆u = Φ t (x − y, t)g( y)d y − Rn ∆Φ(x − y, t)g( y)d y = 0, Rn since Φ is itself a solution. Z |u(x, t) − g(x 0 )|| = Rn Z Φ(x − y, t)(g( y) − g(x 0 ))d y Φ(x − y, t)|g( y) − g(x 0 )|d y ≤ Rn = Z Φ(x − y, t)|g( y) − g(x 0 )|d y B(x 0 ,δ) + Z Φ(x − y, t)|g( y) − g(x 0 )|d y Rn \B(x 0 ,δ) The left is at most kgk L ∞ (B(x 0 ,δ)) . The right term is at most Z Φ(x − y, t)d y. 2kgk L ∞ (Rn ) Rn \B(x 0 ,δ) Choose δ > 0 so that |g(x) − g(x 0 )| < 21 " when |x − x 0 | < δ (by continuity of g). Choose δ0 so that if t < δ0 then Z " . Φ(x − y, t)d y < 4kgk L ∞ (Rn ) Rn \B(x ,δ) 0 Then when |x − x 0 | < δ and t < δ0 we have |u(x, t) − g(x 0 )| < ". Non-homogeneous Cauchy problem The non-homogeneous problem Cauchy is ¨ u t − ∆u = f on Rn × (0, ∞) u(x, 0) = g for all x ∈ Rn By linearity it suffices that we solve the problem for when g ≡ 0, since we already know how to solve the problem when f ≡ 0. 1.3.2 Theorem. Let f ∈ C 2,1 (Rn × [0, ∞)). The function Z tZ Φ(x − y, t − s) f ( y − s)d y ds v(x, t) = 0 Rn solves the non-homogeneous problem (with g ≡ 0). This is a special case of Duhamel’s principle. 22 PDE I 1.3.3 Duhamel’s principle. Let k ≥ 1 and consider the following non-homogeneous problem, where L is a linear partial differentiable operator in the space variables only. ¨ ∂ tk u(x, t) + Lu(x, t) = f (x, t) ∂ ti u(·, 0) = 0 i = 0, . . . 
, k − 1 Consider the related homogeneous problem k n ∂ t U(x, t, s) + LU = 0 on R × [0, ∞) i ∂ u(·, s, s) = 0 i = 0, . . . , k − 2 t k−1 ∂ t u(·, s, s) = f (·, s). If U is a solution to the homogeneous problem then then Z t u(x, t) = U(x, t, s)ds 0 is a solution to the inhomogeneous problem. PROOF: By the FTC and the chain rule, Z t u t (x, t) = U(x, t, t) + U t (x, t, s)ds = 0 Z t U t (x, t, s)ds 0 .. . ∂ tk u(x, t) = ∂ tk−1 U(x, t, t) + Z t ∂ tk U(x, t, s)ds 0 = f (x, t) + Z t ∂ tk U(x, t, s)ds 0 and Lu = t Z LU(x, t, s)ds = 0 t Z −∂ tk U(x, t, s)ds. 0 so the given u solves the non-homogeneous problem. Maximum principle for Dirichlet problem Now we let Ω be a bounded open set and we consider Ω T := Ω × (0, T ], the parabolic cylinder over Ω. The parabolic boundary is PΩ t := Ω × {0} ∪ ∂ Ω × [0, T ]. The Dirichlet problem for this domain is ¨ u t − ∆u = 0 u= g in Ω T on PΩ T Heat equation 23 1.3.4 Definition. A function u ∈ C 2,1 (Ω T )∩C(Ω T ) is a sub-solution to the Dirichlet problem if ¨ u t − ∆u ≤ 0 in Ω T u≤ g on PΩ T and a super-solution is defined analogously. 1.3.5 Comparison principle. If u is a sub-solution to the problem and v is a super-solution to the problem then u ≤ v on Ω T . PROOF: Assume that there is (x̂, t̂) ∈ Ω T such that u(x̂, t̂) > v(x̂, t̂). Consider, for " > 0, v " (x, t) := v(x, t)+"t, a (strict) super-solution. For small enough " we still have u(x̂, t̂) > v " (x̂, t̂). Let (x 0 , t 0 ) be a point where u − v " reaches is maximum on Ω T . Then (x 0 , t 0 ) ∈ Ω T , so Du(x 0 , t 0 ) = Dv " (x 0 , t 0 ) and D2 u(x 0 , t 0 ) ≤ D2 v " (x 0 , t 0 ) by the necessary conditions for the maximization of a multi-variable function. We also know that u t (x 0 , t 0 ) ≥ vt" (x 0 , t 0 ), so 0 ≥ u t − ∆u ≥ vt" − ∆v " = " + vt − ∆v ≥ " > 0, a contradiction. 1.3.6 Comparison principle. If u is a sub-solution of the heat equation and v is a super-solution of the heat equation and u ≤ v on PΩ T then u ≤ v on all of Ω T . 1.3.7 Theorem (Weak Maximum Principle). Let u ∈ C 2,1 (Ω T ) ∩ C(Ω T ) be a solution to the heat equation u t − ∆u = 0 on Ω T . Then max PΩT u = maxΩT u and min PΩT u = minΩT u. PROOF: Define v ≡ max PΩT u. Then v is a constant function, so it is a solution to the heat equation, and v ≥ u on PΩ T . By the comparison principle, u ≤ v on all of ΩT . 1.3.8 Theorem (Uniqueness). There is at most one solution u ∈ C 2,1 (Ω T )∩C(Ω T ) to the initial/boundary value problem u t − ∆u = 0 u= g u = u0 in Ω × (0, T ] on ∂ Ω × [0, T ] on Ω × {0} PROOF: If u and v are both solutions then by the comparison principle u ≤ v and v ≤ u (so u ≡ v) on all of Ω T . 24 PDE I Maximum principle for Cauchy Problem For the theorems in the last subsection we required that Ω was a bounded open set. What about for Ω = Rn ? 1.3.9 Theorem. Let u ∈ C 2,1 (Rn × (0, T ]) ∩ C(Rn × [0, T ]) be a solution to ¨ u t − ∆u = 0 on Rn × (0, ∞) u(x, 0) = g(x) for all x ∈ Rn 2 where g is continuous and bounded from above. Assume that u(x, t) ≤ Ae a|x| for some A, a ∈ R for all x ∈ Rn and t ∈ [0, T ]. Then supRn ×[0,T ] u = supRn g. PROOF: Consider, for t < 0, ψ(z, t) := |z|2 1 n (−t) 2 exp . −4t Then ψ t − ∆ψ = 0. There are two cases. If 4aT < 1 then there exists " > 0 such that 4a(T +") < 1. Let y ∈ Rn and µ > 0 and consider v(x, t) := u(x, t) − µψ(x − y, t − (T + ")). on Rn × [0, T ]. Then vt − ∆V = 0 on Rn × (0, T ] and v ≤ u and v ≤ g on Rn × {0}. 
When |x − y| = r, µ r2 v(x, t) = u(x, t) − n exp 4(T + " − t) (T + " − t) 2 µ r2 a|x|2 ≤ Ae − n exp 4(T + " − t) (T + " − t) 2 µ r2 a(| y|2 +r 2 ) ≤ Ae − n exp 4(T + ") (T + ") 2 But a < 4(T1+") , so this last quantity is bounded by a constant, and we may choose r such that max v = max v ≤ sup g. P(B( y,r)×[0,T ]) Rn B( y,r)×{0} But max B( y,r)×[0,T ] µ ≥ v( y, t) = u(x, t) − so taking µ → 0 we obtain supRn n (T + " − t) 2 g ≥ u( y, t). Conclude by pasting. Let ( ϕ(z) = exp(− z12 ) z 6= 0 0 z=0 (P∞ and u(x, t) = Then u t − ∆u = 0 on Rn × (0, ∞) and u(·, 0) = 0. n=0 ϕ 0 (n) 2n x (t) (2n)! t >0 t = 0. Heat equation 25 Regularity of solutions The non-homogeneous heat equation u t − ∆u = f does not necessarily have smooth solutions. Indeed, take h ∈ C 2,1 \ C 3,1 . Then h t − ∆h =: f˜, where f˜ ∈ C and f˜ is not differentiable (h(x, t) = |x|3 should work). 1.3.10 Theorem. Assume u ∈ C 2,1 (Ω T ) solves the heat equation u t − ∆u = 0 in Ω T . Then u ∈ C ∞ (Ω T ). PROOF: Let C(x, t, r) be the parabolic cylinder C(x, t, r) := {( y, s) ∈ Rn+1 | kx − yk < r, t − r 2 < s ≤ t}. For (x̂, t̂) ∈ Ω T . Choose r > 0 so that C = C(x̂, t̂, r) ⊆ Ω T . Choose smaller cylinders C 0 and C 00 corresponding to 34 r and 21 r. Choose a smooth cut-off function ξ such that ξ ≡ 1 on C 0 and ξ ≡ 0 outside of C. Define ¨ v(x, t) := u(x, t)ξ(x, t) if (x, t) ∈ C 0 otherwise Then v is defined on Rn × (−∞, t̂] and vt − ∆v = ξu t + uξ t − ξ∆u − 2∇ξ∇u − u∆ξ = uξ t − 2∇ξ∇u − u∆ξ =: f˜ so v solves vt − ∆v = f˜ on its domain, and f˜ ∈ C 1,1 . (Without loss of generality we are assuming that t̂ > 0 and we choose r > 0 so that t̂ − r 2 > 0.) Notice that v(·, 0) = 0 on Rn , so v is a solution to the non-homogeneous Cauchy problem. Since the boundary conditions are compactly supported, v(x, t) = Z tZ 0 Φ(x − y, t − s) f˜( y, s)d y ds Rn by uniqueness of solutions to the heat equation. Note further that f˜ is supported on C \ C 0 . For (x, t) ∈ C 00 we have u(x, t) = v(x, t) ZZ = Φ(x − y, t − s)((ξ t − ∆ξ)u − 2∇ξ∇u)d y ds C\C 0 = ZZ =: Φ(x − y, t − s)(ξ t − ∆ξ)u − 2∇Φ(x − y, t − s)∇ξu d y ds ZZC K(x, y, t, s)u( y, s)d y ds C Now K is supported on C \ C 0 and is smooth, so u is smooth at (x, t). 26 PDE I 1.3.11 Theorem. For all k, ` ≥ 0 there is a constant Ck,` such that for each multiindex α with |α| = k, max |Dαx D`t u| ≤ C(x,t, 2r ) Ck,` kuk L 1 (C(x,t,r)) k+2`+n+2 r whenever u t − ∆u = 0 on C(x, t, r). Energy methods 1.3.12 Theorem (Uniqueness). There is at most one solution u ∈ C 2 (Ω T )∩C(Ω T ) of ¨ u t − ∆u = f in Ω T u= g on PΩ T where f and g are continuous, Ω is bounded, and ∂ Ω is C 1 . PROOF: Assume that w1 and w2 are both solutions. Then u := w1 − w2 solves ¨ u t − ∆u = 0 in Ω T u=0 on PΩ T . It suffices to show that u ≡ 0 is the only solution to this problem. The energy (or entropy) functional for the problem is Z u2 (x, t)d x. e(t) = E(u) := Ω Then de dt =2 Z uu t d x = 2 Ω Z u · ∆u d x = −2 Z Ω |∇u|2 d x ≤ 0, Ω and e(0) = 0, so e(t) = 0 for all t ≥ 0. Therefore for all times. R Ω u2 d x = 0, so u is zero on Ω 1.3.13 Poincare’s Inequality. Given Ω bounded with C 1 boundary there is a constant λ such that, for all u ∈ C 1 (Ω) ∩ C(Ω) for which u ≡ 0 on ∂ Ω, Z Z u2 (x)d x ≤ λ Ω |∇u|2 d x Ω 1.3.14 Theorem. Let u be a solution of u t − ∆u = 0 in Ω × (0, ∞) u=0 on ∂ Ω × (0, ∞) u= g on Ω × {0} Wave equation 27 where g is continuous and is compatible with the other boundary condition. There is a constant, C > 0, independent of g, such that such that Z u2 (x, t)d x ≤ e−C t Ω Z g 2 (x)ds. 
Ω PROOF: RAs in the proof of the last theorem we consider the energy. This time e(0) = Ω g 2 (x)d x and de dt = −2 Z 2 |∇u| d x ≤ − Ω 2 Z 2 u2 (x, t)d x = − e(t). λ Ω λ 2 By Gronwall’s inequality e(t) ≤ e− λ e(0). 1.4 Wave equation d’Alambert’s formula (n = 1) 1.4.1 Theorem (d’Alambert Formula). Let g ∈ C 2 (R) and h ∈ C 1 (R). The function Z x+t 1 u(x, t) = g(x + t) + g(x − t) + h( y)d y 2 x−t is C 2 (R × [0, ∞)) and solves on R × (0, ∞) u t t − u x x = 0 u(x, 0) = g(x) for all x ∈ R u t (x, 0) = h(x) for all x ∈ R PROOF: Exercise, recalling the solution of the transport equation and noting that (∂ t + ∂ x )(∂ t − ∂ x )u = 0. Notice that for h ≡ 0 the value of u(x, t) is just the average of the values of g(x + t) and g(x − t). Even for h non-trivial, the value of u depends only on the values of h between x + t and x − t. Information propagates at finite speed and there we do not have the smoothing properties of the heat equation. 1.4.2 Example (Wave equation on the half line). The problem is, where g(0) = 0 and h(0) = 0, ut t − ux x = 0 u(x, 0) = g u (x, 0) = h t u(0, t) = 0 on R+ × (0, ∞) x ≥0 x ≥0 t ≥0 28 PDE I Define g̃(x) to be g(x) for x ≥ 0 and −g(−x) for x < 0, and h̃ similarly. Then there is a solution ũ given by d’Alambert’s formula. We have ũ(0, t) = so ( u(x, t) = 1 2 1 (g(x 2 1 (g(x 2 g̃(t) + g̃(−t) + t Z h̃( y)d y = 0, −t + t) + g(x − t) + + t) − g(t − x) + R x+t R x−t x+t t−x h( y)d y) x ≥t >0 h( y)d y) t > x >0 is a solution to the above problem. (Notice that h̃ is odd.) Spherical means 1.4.3 Lemma. R R 1. If a is continuous on B(x, r) and Φ(r) = B(x,r) a( y)d y then Φ0 (r) = ∂ B(x,r) a( y)d y. 2. If a ∈ C 1 (U) and B(x, r) ⊂ U and ϕ(r) = R − ∇a( y) · ν dS y . ∂ B(x,r) R − ∂ B(x,r) a( y)dS y then ϕ 0 (r) = R PROOF: 1. Φ(r) = r n B(0,1) a(x + rz)dz by the change of variable y = x + rz. When a is continuously differentiable, Z Z Φ r (r) = nr r−1 a(x + rz)dz + r n ∇a(x, +rz) · z dz B(0,1) = = = n Z r B(0,1) a( y)d y + Z B(x,r) n r Z y−x ∇a( y) · r B(x,r) Z Z a( y)d y − B(x,r) n a( y) d y + r B(x,r) dy Z a( y) ∂ B(x,r) y−x r · ν dS y a( y)dS y ∂ B(x,r) 2. With the same change of variables, dSz = ϕ(r) = 1 Z nα(n) Z =− and ∇a(x + rz) · zdSz ∂ B(0,1) ∇a( y) · ∂ B(x,r) let 1 dS y r n−1 y−x r dS y Let n ≥ 2 and m ≥ 2 and u ∈ C m (Rn × [0, ∞)). For x ∈ Rn and t ≥ 0 and r > 0 Z U(x, r, t) = − ∂ B(x,r) u( y, t)dS y Wave equation and 29 Z G(x, r) = − Z g( y)dS y and H(x, r) = − ∂ B(x,r) h( y)dS y . ∂ B(x,r) 1.4.4 Lemma (Euler-Poisson-Darboux Equation). For u and U as above, if u solves on Rn × [0, ∞) u t t − ∆u = 0 u(x, 0) = g(x) for all x ∈ Rn u t (x, 0) = h(x) for all x ∈ Rn then for fixed x ∈ Rn , U(x, ·, ·) ∈ C m (R+ × [0, ∞)) and solves n−1 + U t t − U r r − r U r = 0 on R × [0, ∞) U(x, r, 0) = G(x, r) U t (x, r, 0) = H(x, r) for all r > 0 for all r > 0 PROOF: For r > 0, and computing as for the MVP for the Laplacian, one obtains Z Z r U r (x, r, t) = − ∇u · ν dS y = U r (x, r, t) = − ∆u( y, t)d y. n B(x,r) ∂ B(x,r) Whence, buy the product and quotient rules, Z Z r 1 U r r (x, r, t) = − ∆u( y, t)d y + −1 − ∆u( y, t)d y. n ∂ B(x,r) n B(x,r) Note that lim r→0+ U r (x, r, t) = 0 and lim r→0+ U r r (x, r, t) = 1n ∆u(x, t) so we may extend U to r = 0 as well. Now Z Z r 1 Ur = − u t t d y, ut t d y = n B(x,r) nα(n)r n−1 B(x,r) so multiplying both sides by r n−1 and taking the derivative, Z r n−1 n−1 (r U r ) r = u t t dS y − nα(n) ∂ B(x,r) r n−1 U r r + (n − 1)r n−2 U r = r n−1 U t t which is the EPD equation. Kirchhoff’s formula (n = 3) Let Ũ = r U, G̃ = r G, and H̃ = r H. 
Then Ũ t t = r U t t and 2 Ũ r r = r( U r + U r r ), r 30 PDE I so, if and only if n = 3, Ũ t t − Ũ r r = r(U t t − 2 r U r − U r r ) = 0. By the example solution to the wave equation on the half line, for r < t, Ũ(x, r, t) = 1 2 (G̃(r + t) − G̃(t − r)) + 1 2 r+t Z H̃( y)d y. t−r Then U(x, t, r) = 1r Ũ(x, r, t), so u(x, t) = lim+ r→0 G̃(t + r) − G̃(t − r) 2r + 1 2r Z t+r H̃( y)d y = G̃ 0 (t) + H̃(t). t−r We have therefore derived Kirchhoff’s formula d (t G(t)) + t H(t) dt Z Z d = t− g( y)dS y + t− h( y)dS y dt ∂ B(x,t) ∂ B(x,t) Z Z Z y−x =− g( y)dS y + t− ∇g( y) · dS y + t− h( y)dS y t ∂ B(x,t) ∂ B(x,t) ∂ B(x,t) Z u(x, t) = =− th( y) + g( y) + ∇g( y) · ( y − x)dS y ∂ B(x,t) It can be checked that this is truly a solution to the wave equation in dimension n = 3. Method of descent (n = 2) In the case n = 2, let ḡ(x̄) := g(x) and h̄(x̄) := h(x), where x̄ := (x 1 , x 2 , x 3 ) and x̄ := (x 1 , x 2 ). Let ū be a solution to the wave equation in dimension 3 with initial data ḡ and h̄ given by Kirchhoff’s formula. Is u defined by u(x, t) = ū((x 1 , x 2 , 0), t) a solution the wave equation in dimension 2? Indeed it is, since ū x 3 x 3 ((x 1 , x 2 , 0), t) = 0 and the values of ū, as given by the formula, are invariant under translation in the third coordinate. Remark. If the data are constant in the x 3 direction but we don’t have a formula for the solution to the problem, it is enough that we know that the solution is unique for us to conclude that the solution is constant in the x 3 direction. Indeed, otherwise translation in the x 3 direction would give different solutions. Wave equation 31 See the text for the derivation of the following formula. Z Z g( y) 1 d h( y) 2 2 u(x, t) = t − dy +t − dy p p 2 dt t 2 − | y − x|2 t 2 − | y − x|2 B(x,t) B(x,t) Z t g( y) + t 2 h( y) + t∇g( y) · ( y − x) 1 = − dy p 2 B(x,t) t 2 − | y − x|2 Non-homogeneous problem We now consider the non-homogeneous problem n u t t − ∆u = f in R × (0, ∞) u(·, 0) = 0 on Rn u t (·, 0) = 0 on Rn . We apply Duhamel’s principle and consider the problem in Rn × (s, ∞) U t t − ∆U = 0 U(·, s, s) = 0 on Rn U t (·, s, s) = f (·, s) on Rn . We need certain smoothness properties for this to hold (this is the reason that it is Duhamel’s principle and not theorem), but we are only concerned with finding some solution and so we are not too concerned. 1.4.5 Theorem. Let either 1. n = 1 and f ∈ C 2 (R × [0, ∞)); or n 2. n ≥ 2 and f ∈ C b 2 c+1 (Rn × [0, ∞)); (smoothness at 0 is required), then u(x, t) := and solves the non-homogeneous problem. Rt 0 U(x, t, s)ds is in C 2 (Rn × [0, ∞)) Energy methods As usual, let Ω be a bounded domain and Ω T = Ω×(0, T ] be the parabolic cylinder. Energy methods can be used to show uniqueness of the solution to the following problem (but not existence). u t t − ∆u = f in Ω T u(·, 0) = g on Ω u (·, 0) = h on Ω t u = g̃ on ∂ Ω × [0, T ]. 32 PDE I 1.4.6 Theorem. There is at most one solution u ∈ C 2 (Ω T ) ∩ C 1 (Ω T ) solving the above problem. PROOF: As usual, by linearity it suffices to show that the zero function is the only solution to the homogeneous problem when the initial and boundary data is zero. Let Z 1 |∇u|2 + u2t d x E(u) = 2 Ω be the energy for the problem. Then Z Z Z dE = ∇u · ∇u t + u t u t t d x = u t (−∆u + u t t )d x + u t ∇u · ν dS = 0, dt Ω Ω ∂Ω since u = 0 on ∂ Ω × [0, T ] somehow implies that u t = 0 on that surface. Whence ∇u = 0 and u t = 0 for all t > 0, so u ≡ 0 since u(·, 0) = 0. Next we prove that “information propagates at finite speed” through the wave equation. 1.4.7 Theorem. 
Let C x 0 ,t 0 = {(x, t) | 0 ≤ t ≤ t 0 , kx − x 0 k ≤ t 0 − t}. If u t t − ∆u = 0 on Ω T , (x 0 , t 0 ) ∈ Ω T , C x 0 ,t 0 ⊆ Ω T , and u(·, 0) = u t (·, 0) = 0 on B x 0 ,t 0 then u(x 0 , t 0 ) = 0. PROOF: Define the energy for this problem to be Z 1 e(t) = u2 + |∇u|2 d x. 2 B(x ,t −t) t 0 0 Then de dt = Z 1 u t u t t + ∇u · ∇u t d x − 2 B(x 0 ,t 0 −t) = Z u t (u t t − ∆u)d x + B(x 0 ,t 0 −t) 1 2 Z ∂ B(x 0 ,t 0 −t) Z ∂ B(x 0 ,t 0 −t) u2t + |∇u|2 dS 2u t ∂ν u − u2t − |∇u|2 dS ≤ 0 since |2u t ∂ν u| ≤ 2|u t ||∇u| ≤ u2t + |∇u|2 by trivial inequalities. Therefore energy is dissipating. Since e(0) = 0 there was no energy to begin with, so e ≡ 0 on [0, t 0 ] and u(x 0 , t 0 ) = 0. 2 2.1 Nonlinear first order PDE Method of characteristics The general first order PDE is F (Du, u, x) = 0, for x ∈ Rn . The PDE is said to be quasi-linear if F has the form F (p, z, x) = b(z, x) · p − c(z, x). Method of characteristics 33 Suppose we are given boundary data g on some n − 1 dimensional sub-manifold Γ. If we have a solution u to the quasi-linear PDE then consider z(s) = u(x(s)), the value of the solution along curves passing through the boundary manifold. The method of characteristics considers the associated system of ODE dx ds = b(z(s), x(s)), dz ds = c(z(s), x(s)) since by the chain rule dz ds = d ds u(x(s)) = b(z(s), x(s)) · Du(x(s)) = c(z(s), x(s)). By solving this system of ODE we obtain a solution to the problem. 2.1.1 Example (u x + 2u y = u 2 , u(x , 0) = g (x )). dy = z 2 . Then x(s) = x 0 + s, y(s) = 2s, and (solving the Take ddsx = 1, ds = 2, and dz ds ODE for z), g(x 0 ) z0 = z(s) = . 1 − sz0 1 − sg(x 0 ) Therefore, for a general point (x, y), y g(x − 2 ) y u(x, y) = z( 2 ) = y y 1 − 2 g(x − 2 ) , y where we take the z associated with x 0 = x − 2 . Assuming g is positive and bounded, we require that 0 ≤ y < 2kgk∞ , and we can solve the problem in this strip. In general the solution blows up at certain points. In the two-dimensional case, for a graph of the solution (x, y, u(x, y)), tangent vectors are (1, 0, u x ) and (0, 1, u y ), and a normal vector is (−u x , −u y , 1). Notice that the coefficients to the problem (as a vector in R3 , e.g. (1, 2, u2 ) in the case of the example above)) are tangent to the graph of the solution. In general, Γ is said to be non-characteristic if the vector field (b, c) : R1+n → Rn+1 is not tangent to Γ at any point. In the fully non-linear case we must also keep track of how the derivatives of u change along characteristics. To this end let p(s) := Du(x(s)), so that dz = ds p(s) · ddsx . There is no structure of F to exploit to eliminate p. Differentiating the original equation, for all i = 1, . . . , n, 0= ∂F ∂ xi = n X ∂F j=1 ∂ pj (Du, u, x) Therefore d pi ds = d ds ∂u ∂ x j∂ xi + ∂F ∂z (Du, u, x) ∂u ∂ xi (x) + ∂F ∂ xi X n dxj ∂ u (x(s)) = (x(s)). ∂ xi ds ∂ x j ∂ x i j=1 ∂u (Du, u, x). 34 PDE I Note that taking the derivative of the original equation always gives a quasilinear equation. We use this observation to close the system of ODE, setting d xi = ∂∂ pF (p, z, x). The system becomes a system of 2n + 1 ODE ds i ṗi = − ∂∂ Fz (p, z, x)pi − ∂∂ xF (p, z, x) so ṗ = −(Dz F )p − D x F i Pn ż = j=1 p j ∂∂ pF (p, z, x) so ż = (D p F ) · p j ∂F so ẋ = D p F ẋ i = ∂ p (p, z, x) i 2.1.2 Example (u x 1 u x 2 = u). We consider the equation u x 1 u x 2 = u with initial date u(0, x 2 ) = x 22 on Γ = {x 1 = 0}. We wish to solve on Ω = {x 1 > 0}. 
Then F (p, z, x) = p1 p2 − z, so our system becomes ṗ1 = p1 , ṗ2 = p2 ż = p1 p1 + p2 p1 = 2p1 p2 ẋ 1 = p2 , ẋ 2 = p1 Then pi (s) = pi (0)es , so x i (s) = x i (0) + p3−i (0)(es − 1) and z(s) = z(0) + p1 (0)p2 (0)(e2s − 1). On Γ x 1 (0) = 0 and z(0) = (x 2 (0))2 . Since Γ is a line, we may compute the partial in that direction, i.e. p2 (0) = 2x 2 (0). Using the original equation, p1 (0) = 21 x 2 (0). Therefore x 1 (s) = 2x 2 (0)(es − 1) p1 (s) = 1 2 x 2 (0)es x 2 (s) = 1 2 x 2 (0)(es + 1) z(s) = (x 2 (0))2 e2s p2 (s) = 2x 2 (0)es From a general point (x 1 , x 2 ), so (solving) we need es = u(x 1 , x 2 ) = z(s) = 4x 2 + x 1 4 4x 2 +x 1 . 4x 2 −x 1 Finally, 2 . Existence of a local solution At this point we are not even sure that a solution exists. First we consider “flat” boundary data u = g on Γ = {x n = 0}—we will consider the general case later. This data for the PDE translates to initial data (at a point x 0 ) x(0) = x 0 , z(0) = g(x 0 ), pi (0) = g x i (x 0 ) for i = 1, . . . , n − 1, and F (p(0), z(0), x(0)) = 0 (which determines pn (0)). 2.1.3 Definition. For boundary conditions u = g on Γ, a triple (p, z, x) 1. satisfies the compatibility conditions (at x) a) x ∈ Γ; Method of characteristics 35 b) z = g(x); and c) pi = g x i (x) for i = 1, . . . , n − 1. 2. is admissible if the compatibility conditions hold and F (p, z, x) = 0. 3. is non-characteristic if it is admissible and F pn (p, z, x) 6= 0. Boundary data are non-characteristic if they are non-characteristic at every point of Γ. When data are non-characteristic there is no problem of information trying to propagate along the boundary conditions and possibly conflicting. 2.1.4 Lemma. If (p0 , z0 , x 0 ) is non-characteristic, then there exists a neighbourhood W of x 0 in Γ, and a function q( y) on W such that F (q( y), g( y), y) = 0 for all y ∈ W and such that (q( y), g( y), y) is admissible for all y ∈ W . PROOF: Let G : Rn × Rn → Rn be defined by ¨ G (p, y) = i pi − g x i ( ŷ) for i = 1, . . . , n − 1 F (p, g( y), y) i=n where ŷ is the projection of y onto Γ. Then G(p0 , x 0 ) = 0, and to apply the Implicit Function Theorem we need ∂∂ Gp 6= 0. But 1 0 ∂G = . ∂ p .. F p1 0 1 .. . F p2 ... ... .. . ... 0 0 .. . F pn Therefore det ∂∂ Gp (p0 , z0 , x 0 ) = F pn (p0 , z0 , x 0 ) 6= 0 since (p0 , z0 , x 0 ) is non-characteristic, so by the Implicit Function Theorem there is a neighbourhood W of x 0 in Γ and a function q such that G(q( y), y) = 0. 2.1.5 Lemma. With data as in the previous lemma, there exists 1. an open neighbourhood V ⊆ W of x 0 on Γ; 2. an open interval I containing zero; and 3. an open neighbourhood U of x 0 in Rn such that for all x ∈ U there is a unique s ∈ I and y ∈ V such that x = x( y, s). Furthermore, the mapping x 7→ ( y, s) is C 2 . 36 PDE I PROOF: For x = x 0 we must have y = x 0 and s = 0. We have a mapping x : ( y, s) 7→ x : W × I0 → Rn given by the solution to the characteristic ODE. As before, 1 0 . . . 0 F p1 0 1 . . . 0 F p2 . . ∂x .. .. .. . . . ... (x 0 , 0) = . ∂ ( y, s) 0 0 . . . 1 F pn−1 0 0 . . . 0 F pn x = F pn (p0 , z0 , x 0 ) 6= 0. and det ∂ (∂y,s) Suppose F (Du, u, x) = 0 near x 0 ∈ Rn with u = g on Γ, an (n − 1)-dimensional manifold. Assume that en is not tangent to Γ at x 0 . Locally (near x 0 ), there exists a function h : Rn−1 → R such that Γ = {x n − h(x̂) = 0} (where x̂ = (x 1 , . . . , x n−1 )). Then ν (normal to Γ at x 0 ) is parallel to (−D x̂ h, 1). Let ŷ := x̂ and yn = x n −h(x̂), so Γ maps to the set Γ̃ = { yn = 0}. Call y = Φ(x), and notice that 1 0 ... 0 1 . . . 0 0 DΦ = . .. 
.. .. .. . . . −h x 1 −h x 2 . . . 1 Let Ψ = Φ−1 , so that x = Ψ( y), and define v( y) := u(Ψ( y)), so that u(x) = v(Φ(x)). In particular, these problems are equivalent and locally we can solve the curved problem from a solution to the straight problem. More specifically, D x u(x) = DΦ(x) · D y (v(Φ(x)). (Note the order of multiplication. This is done because for these lectures we are taking D to be a column vector.) Whence 0 = F (Du, u, x) = F (DΦ(x) · D y v(Φ(x)), v(Φ(x)), Ψ(Φ(x))) = F (DΦ(Ψ( y)) · D y v( y), v( y), Ψ( y)) Take G(p, z, y) = F (DΦ(Ψ( y))·p, z, Ψ( y)), so that new problem is to solve G(Dv, v, y) = 0 with v( y) = g(Φ( y)) on { yn = 0}. The compatibility conditions for the u problem for a triple (p, z, x 0 ) are that p · b = ∇ b g(x 0 ) and z = g(x 0 ) for every tangent vector b to Γ at x 0 . Admissibility is the additional requirement that F (p, z, x 0 ) = 0, and non-characteristic is the further additional requirement that D p F (p, z, x 0 ) · ν 6= 0 (i.e. that the information does not try to propagate tangent to the boundary data). To check that a point remains non-characteristic in the transformed problem, we must check that n X ∂ Φn G pn = Fp j = D p F · (−D x̂ h, 1) 6= 0 ∂ xj j=1 (see sheet of corrections) Hamilton-Jacobi equations 37 2.1.6 Example (Conservation Laws). Let F : R → Rn be a vector field. The associated conservation law is u t + div(F (u)) = 0 for t > 0, with initial data u(·, 0) = g(·) at t = 0. Let y = (x, t) ∈ Rn+1 and q = (p, qn+1 ), so that p = D x u = Du and qn+1 = u t . Write G(q, z, y) = qn+1 + F 0 (z) · p so that the conservation is of the form G((Du, u t ), u, (x, t)) = 0. The system of characteristic equations is 0 ẏ = (F (z), 1) q̇ = (−(F 00 (z) · p)p, 0) ż = qn+1 + F 0 (z) · p = 0 The equations for x and z form a closed system, so we do not need to worry about tracking the derivative along the characteristics. (This is also evident because the conservation law is quasi-linear.) Now z is constant along characteristics, t = s, and ẋ = F 0 (z0 ), so x(t) = F 0 (g(x 0 ))t + x 0 . Notice that if F 0 (g(x 0 )) 6= F 0 (g(x̃ 0 )) then the characteristics through those points (which are straight lines) must intersect at some time. 2.2 Hamilton-Jacobi equations The Hamilton-Jacobi equation is u t + H(Du, x) = 0 for t > 0, with initial data u(·, 0) = g(·) at t = 0. H : R2n → R is the Hamiltonian. Let y = (x, t), q = (p, qn+1 ), and G(q, z, y) = qn+1 + H(p, x). The characteristic equations are ẏ = (H p (p, x), 1) q̇ = (−H x (p, x), 0) ż = H p · p + qn+1 As in the example above we may take t = s, and notice that ¨ ẋ = H p (p, x) ṗ = −H x (p, x) is a closed system of equations. This is the Hamiltonian system of ODE—it describes the characteristics. Notice that from the chain rule, dH dt = H p · ṗ + H x · ẋ = −H p · H x + H x · H p = 0, so the Hamiltonian is constant along characteristics. 2.2.1 Example. Consider H = 1 |p|2 2m ( + V (x). The Hamiltonian system is ẋ = 1 p m ṗ = −∇V We may interpret x at the position of a particle of mass m and V as a potential acting on the particle. Then p is the momentum and we derive Newton’s Law 38 PDE I mẍ = ṗ = −∇V , which is the force described by the potential. For gravity we c would take V (x) = − |x| . 1 1 2.2.2 Example (u t + 2 (u x )2 = 0, g (x ) = − 2 x 2 ). In this example we have H(p, x) = 21 p2 , so the system is ẋ = p ṗ = 0 ż = p2 + q Whence p(t) = p0 = g 0 (x 0 ) is constant, and x(t) = x 0 + g 0 (x 0 )t. For (x, t) ∈ R2 x we take x 0 defined by x = x 0 + g 0 (x 0 )t. 
For this example x = x 0 − x 0 t so x 0 = 1−t . 2 We have z(t) = ((p0 ) + q0 )t + z0 , where 1 1 q0 = u t (x 0 , 0) = − (u x (x 0 , 0))2 = − (g 0 (x 0 ))2 2 2 from the PDE, and z0 = u(x 0 , 0) = g(x 0 ). Therefore u(x, t) = z(t) = 0 1 2 0 2 (g (x 0 )) − (g (x 0 )) 2 t + g(x 0 ) = 1 x2 2 t −1 . Lagrangian description of classical mechanics Let L : Rn × Rn → R, L = L(q, x), the Lagrangian. Let the set of all paths x : [0, t] → Rn , such that x ∈ C 2 [0, t] and x(0) = x 0 and x(t) = x t , be denoted A x 0 ,x t . Let Z t I[x] := L(ẋ(s), x(s))ds, 0 the action functional. We would like to find the minimum of I over A . Of course, we need certain conditions on L in order for a minimum to exist. 2.2.3 Theorem (Euler-Lagrange Equations). Assume x ∈ A is a minimizer of I. Then for all x ∈ [0, t], − d ds Dq L(ẋ(s), x(s)) + D x L(ẋ(s), x(s)) = 0. PROOF: Since x is a minimizer, any small perturbation of the curve will increase I, i.e. I[x + τv] ≥ I[x] for all v : [0, t] → Rn such that v ∈ C 2 [0, t] with v(0) = v(t) = 0. Define i(τ) = I[x + τv] = Z 0 t L(ẋ + τv̇), x + τv)ds, Hamilton-Jacobi equations 39 which has a minimum at τ = 0. Therefore Z t 0 0 = i (0) = Dq L(ẋ, x) · v̇ + D x L(ẋ, x) · v ds = 0 Z t − 0 d ds Dq L + D x L by integration by parts. Since C 2 is dense in L 2 , the E-L equations hold. v ds, 1 2.2.4 Example (L(q , x ) = 2 m|q |2 − V(x )). In this case Dq L = mq and D x L = −∇V , so the E-L equations are − d ds mẋ(s) = ∇V (x(s)), mẍ = −∇V. or There is a clear connexion between the Lagrangian and Hamiltonian descriptions (in this case it is given by p = mq). Given a Lagrangian L, assume that for every x, p ∈ Rn there exists a unique vector q = q(p, x) such that p = Dq L(q, x) and that q depends smoothly on p (the idea is that q(p(s), x(s)) = ẋ(s)). Given a solution x(s) to the associated E-L equations, define p(s) = Dq L(ẋ(s), x(s)), the generalized momentum. Define H(p, x) = p · q(p, x) − L(q(p, x), x). Then ∂H ∂ pi = qi (p, x) + n X k=1 pk ∂ qk − Dqk L(q, x) ∂ pi ∂ qk ∂ pi = qi (p, x) = ẋ i (s), which is the first Hamilton equation, and ∂H ∂ xi = n X k=1 pk ∂ qk ∂ xi − Dqk L(q, x) ∂ qk ∂ xi − D x i L(q, x) = −D x i L(q, x). By the E-L equations, this is equal to ṗi (s). Legendre transform For this section we consider only Hamiltonians and Lagrangians that do not depend on the space variables. The Hamilton-Jacobi equations become ¨ u t + H(Du) u(·, 0) = g(·) We further assume that L(q) is convex on Rn , and lim |q|→∞ L(q) |q| = ∞. 2.2.5 Definition. The Legendre transform of L is defined to be L ∗ (p) = sup {p · q − L(q)}. q∈Rn 40 PDE I Formally, the supremum is attained at q = q(p) = (Dq L)−1 (p) (noting that Dq L : Rn → Rn and taking the inverse to be the matrix inverse). Therefore L ∗ (p) = pq(p) − L(q(p)) = H(p) by the definition of the Hamiltonian associated with the Lagrangian L. 2.2.6 Theorem (Convex duality). Assume that L satisfies the assumptions above. Define H = L ∗ . Then 1. H(p) is convex on Rn and lim H(p) |p| |p|→∞ = ∞. 2. L = H ∗ . PROOF: 1. H(p) = supq∈Rn {p · q − L(q)} is a supremum of linear functions, so it is p convex. For the particular value q = λ |p| we have H(p) = λ|p| − L(λ p |p| ) ≥ λ|p| − kLk L ∞ (B(0,λ)) , so lim inf |p|→∞ H(p) |p| ≥λ and the limit is ∞. 2. For all p, q ∈ Rn , H(p) + L(q) ≥ p · q, so subtracting H(p) from both sides and taking the supremum gives L(q) ≥ H ∗ (q). 
Hopf-Lax formula

We will give a formula for the solution to the H-J equation when H is smooth, depends only on p, is convex, and satisfies lim_{|p|→∞} H(p)/|p| = ∞. We further assume that g is Lipschitz. We consider the modified action functional

I[w] = ∫_0^t L(ẇ(s)) ds + g(w(0)),

where the set of admissible functions is A_{x,t} = {w ∈ C¹([0, t], R^n) : w(t) = x}. Our candidate for the solution is u(x, t) = inf_{w∈A_{x,t}} I[w].

2.2.7 Theorem (Hopf-Lax formula). u(x, t) = min_{y∈R^n} {t L((x − y)/t) + g(y)}.

PROOF: Consider the line joining y to x that takes an amount of time t. It has equation w(s) = y + (s/t)(x − y) and is clearly admissible. Since u is defined as an infimum,

u(x, t) ≤ ∫_0^t L((x − y)/t) ds + g(y) = t L((x − y)/t) + g(y),

and so u(x, t) is at most the right hand side of the formula. Conversely, by Jensen's inequality,

(1/t) ∫_0^t L(ẇ(s)) ds ≥ L((1/t) ∫_0^t ẇ(s) ds)   (L is convex)
                        = L((x − w(0))/t)         (by the FTC).

Therefore, multiplying by t, adding g(w(0)) to both sides, and noting that taking the infimum over all w ∈ A_{x,t} amounts, on the right hand side, to taking the infimum over all starting points y = w(0) ∈ R^n,

u(x, t) = inf_{w∈A_{x,t}} { ∫_0^t L(ẇ(s)) ds + g(w(0)) } ≥ inf_{y∈R^n} {t L((x − y)/t) + g(y)}.

Finally, the infimum is actually a minimum since g is Lipschitz, and hence grows at most linearly, while L grows super-linearly and therefore dominates g.

2.3 Conservation Laws

Push-forward of a measure

Let Φ : R^n → R^n. Given a measure µ on R^n, the push-forward of µ under Φ is the measure Φ_#µ determined by

∫ ξ dΦ_#µ = ∫ ξ ∘ Φ dµ   for ξ ∈ C_c(R^n).

Consider the system of ODE

ẏ(t) = V(y(t), t),   y(0) = y_0,

where V is Lipschitz in y, and let Φ_t : y_0 ↦ y(t) be the unique solution map. Let g(y) denote the density of a material at a location y ∈ R^n at time 0, and let u(x, t) denote the density at x ∈ R^n at time t, where the idea is that the points in R^n have been transported according to Φ_t. For a conservation law we require that for every domain Ω,

∫_{Φ_t(Ω)} u(x, t) dx = ∫_Ω g(y) dy.

For every ψ ∈ C_c^∞(R^n) we have (noting the push-forward of measures)

∫ ψ(x) u(x, t) dx = ∫ ψ(Φ_t(y)) g(y) dy.

Differentiating with respect to t,

∫ ψ(x) u_t(x, t) dx = ∫ ∇ψ(Φ_t(y)) · V(Φ_t(y), t) g(y) dy = ∫ ∇ψ(x) · (V(x, t) u(x, t)) dx.

Therefore, integrating by parts,

∫ ψ(x)(u_t(x, t) + div(V(x, t) u(x, t))) dx = 0

for all smooth functions of compact support, so u_t + div(Vu) = 0.

We restrict our attention to one space dimension, u_t + F(u)_x = 0, where F : R → R. We may also write u_t + f(u)u_x = 0, where f = F'. The characteristics are ẋ = f(z), ż = 0, so the characteristics are straight lines x(t) = x_0 + t f(g(x_0)).

Burgers' equation is the conservation law

u_t + (u²/2)_x = u_t + u u_x = 0.

For initial data g(y) = 1_{(−∞,0]}, there are many "almost everywhere solutions" given by placing the shock line at different angles. Such a notion of solution is not useful because such solutions are not unique (and in this case there is not even a preferred one).
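Before turning to weak solutions, it is worth seeing the crossing of characteristics concretely. The Python sketch below assumes the smooth decreasing initial data g(x) = −tanh(x), chosen only for illustration, and locates by bisection the first time at which the map x_0 ↦ x_0 + t g(x_0) fails to be increasing on a grid; since min g' = −1, the crossing time should come out close to 1.

    # Characteristics of Burgers' equation are x(t) = x0 + t*g(x0); for the
    # assumed decreasing data g(x) = -tanh(x) they first cross at t = -1/min g' = 1.
    import math
    def g(x):
        return -math.tanh(x)
    x0s = [-5.0 + 0.01 * i for i in range(1001)]     # grid of starting points
    def crossed(t):
        # crossed once x0 -> x0 + t*g(x0) fails to be increasing on the grid
        ends = [x0 + t * g(x0) for x0 in x0s]
        return any(b <= a for a, b in zip(ends, ends[1:]))
    lo, hi = 0.0, 5.0                                # bisect for the first crossing
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if crossed(mid):
            hi = mid
        else:
            lo = mid
    print("approximate first crossing time:", hi)    # expect roughly 1.0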
Weak solutions

If u_t + F(u)_x = 0 with initial data g on R × {0}, then for any φ ∈ C_c^∞(R × [0, ∞)),

∫_0^∞ ∫_{−∞}^∞ φ(x, t)(u_t + F(u)_x) dx dt = 0.

If u were a classical solution then we could integrate by parts to get

∫_0^∞ ∫_{−∞}^∞ (φ_t u + φ_x F(u)) dx dt + ∫_{−∞}^∞ φ(x, 0) g(x) dx = 0.

2.3.1 Definition. u ∈ L^∞(R × [0, ∞)) is an integral solution of the conservation law if the equation above holds for every φ ∈ C_c^∞(R × [0, ∞)).

This notion gives a unique (preferred) solution to Burgers' equation with initial data g = 1_{(−∞,0]}. The "shock line" is x = ½t. (This may not be correct.)

2.4 Rankine-Hugoniot condition

We now give a characterization of piecewise continuously differentiable integral solutions u to the scalar conservation law u_t + F(u)_x = 0 with initial data g (at time 0). Let γ = γ(t) be the curve of non-differentiability (and possibly discontinuity) of u, Ω_L be the domain to the left of γ (smaller x) and Ω_R be the domain to the right (larger x). Then R × [0, ∞) = Ω_L ∪ γ ∪ Ω_R. For (x, t) ∈ γ let

u_L(x, t) = lim_{(y,s)∈Ω_L, (y,s)→(x,t)} u(y, s)   and   u_R(x, t) = lim_{(y,s)∈Ω_R, (y,s)→(x,t)} u(y, s).

Let ϕ be smooth with compact support, not identically zero, and such that supp(ϕ) ∩ (R × {0}) = ∅. Then since u is an integral solution and ϕ is zero at time zero,

∫_0^∞ ∫_R (uϕ_t + F(u)ϕ_x) dx dt = 0,

so

∫_{Ω_L ∩ supp(ϕ)} (uϕ_t + F(u)ϕ_x) dx dt + ∫_{Ω_R ∩ supp(ϕ)} (uϕ_t + F(u)ϕ_x) dx dt = 0.

Let V = (F(u)ϕ, uϕ), a vector field. Then div V = F(u)_x ϕ + F(u)ϕ_x + u_t ϕ + uϕ_t, and by the divergence theorem,

∫_{Ω_L ∩ supp(ϕ)} (F(u)_x ϕ + F(u)ϕ_x + u_t ϕ + uϕ_t) dx dt = ∫_γ (F(u_L)ϕ, u_L ϕ) · ν dS.

Now a continuously differentiable integral solution is a classical solution, so on Ω_L, (u_t + F(u)_x)ϕ = 0. Applying the same reasoning to Ω_R we get

∫_γ (F(u_L)ϕ, u_L ϕ) · ν dS − ∫_γ (F(u_R)ϕ, u_R ϕ) · ν dS = 0.

Therefore, for every such ϕ, since ν is parallel to (1, −γ̇),

∫_γ ((F(u_L) − F(u_R)) − (u_L − u_R)γ̇)ϕ dS = 0,

and so F(u_L) − F(u_R) = (u_L − u_R)γ̇. This is the Rankine-Hugoniot condition.

Conversely, if we have a piecewise continuously differentiable function that is a solution inside its domains of differentiability and such that the R-H condition holds, then it is an integral solution. Note in particular that a continuous, piecewise continuously differentiable solution is always an integral solution.
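For Burgers' flux F(u) = u²/2 the Rankine-Hugoniot condition reduces to the shock speed σ = (u_L + u_R)/2. The short Python sketch below just evaluates the general formula for a few illustrative pairs of states and compares it with that reduction; the particular states are arbitrary choices.

    # Rankine-Hugoniot speed sigma = (F(u_L) - F(u_R)) / (u_L - u_R) for the
    # Burgers flux F(u) = u**2/2; for this flux it should equal (u_L + u_R)/2.
    def F(u):
        return 0.5 * u * u
    for uL, uR in [(1.0, 0.0), (2.0, -1.0), (0.5, 0.25)]:
        sigma = (F(uL) - F(uR)) / (uL - uR)
        print(uL, uR, sigma, (uL + uR) / 2.0)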
2.4.1 Example (Burgers' equation, revisited). Do this.

2.4.2 Example (Riemann problem). Let F be a convex function with F'' > 0. We consider the PDE u_t + F(u)_x = 0 with initial data u_L 1_{(−∞,0]} + u_R 1_{(0,∞)}, where u_L, u_R ∈ R. The characteristics are straight lines that collide, and if u_L > u_R then the slope of the shock is given by the R-H condition, and it is

γ̇ = (F(u_L) − F(u_R))/(u_L − u_R) =: σ.

The corresponding solution is

u(x, t) = u_L if x ≤ σt,   u_R if x > σt.

If u_L < u_R then the above is still a solution. Notice that F'(u_L) < σ < F'(u_R) since F is convex, so the shock line is in the "gap" where there are no characteristics emanating from the initial data. This type of shock (with characteristics emanating from it) is a non-physical shock. We look for another solution with characteristics passing through (0, 0), i.e. a solution for which u(x, t) = ϕ(x/t). Plugging in, we obtain

−ϕ'(x/t)(x/t²) + F'(ϕ(x/t))ϕ'(x/t)(1/t) = 0.

It suffices that F'(ϕ(x/t)) = x/t. Take ϕ = (F')^{-1} on the range [u_L, u_R], and the corresponding solution is

u(x, t) = u_L              if x/t < F'(u_L)
          (F')^{-1}(x/t)   if F'(u_L) ≤ x/t ≤ F'(u_R)
          u_R              if x/t > F'(u_R).

This solution is the rarefaction wave. There are a number of heuristic reasons for preferring the rarefaction wave solution to the non-physical shock solution. One is stability (consider smooth initial data approximating the discontinuous initial data). Another is that of vanishing viscosity (consider u_t + F(u)_x = εu_xx and hope that the solutions u_ε of this equation converge to the solution of the conservation law as ε → 0). Such conditions are called entropy conditions.

2.4.3 Definition.
1. The Lax entropy condition at a shock is

F'(u_L) > (F(u_L) − F(u_R))/(u_L − u_R) > F'(u_R)

when u_L > u_R.
2. For general (non-convex) F, the Oleinik entropy condition is
a) if u_L > u_R then the secant is above the graph of F; and
b) if u_R > u_L then the secant is below the graph of F.
3. u satisfies the entropy condition if there is C > 0 such that for all x ∈ R, h > 0 and t > 0,

u(x + h, t) − u(x, t) ≤ (C/t)h.

The third condition allows only jumps downwards. It says (for small t) that the gradient decays like 1/t.
4. A pair (η, ψ) is an entropy-entropy-flux pair if η is convex and ψ' = η'F'. The idea is that a smooth u solves u_t + F(u)_x = 0 only if η(u)_t + ψ(u)_x = 0.

2.5 Existence of solutions

Let h(x) = ∫_0^x g(y) dy. If there is a solution u to the conservation law, consider w with w_x = u. The function w satisfies w_t + F(w_x) = 0 with initial data h. This is a Hamilton-Jacobi PDE in w. We have a formula

w(x, t) = min_y {t L((x − y)/t) + h(y)},

where L = F*. The candidate for u is

u(x, t) = ∂/∂x [ min_y {t L((x − y)/t) + h(y)} ].

Suppose the minimum is attained at y(x, t). Then y(x, t) is increasing in x, hence differentiable in x a.e., so

u(x, t) = L'((x − y(x, t))/t)(1 − y_x) + h'(y(x, t)) y_x.

Since the function z ↦ t L((x − y(z, t))/t) + h(y(z, t)) has a minimum at z = x, we obtain that

h'(y(x, t)) y_x = L'((x − y(x, t))/t) y_x,

so

u(x, t) = L'((x − y(x, t))/t) = G((x − y(x, t))/t),

where G = (F')^{-1}. This is the Lax-Oleinik formula.

Aside: We show that w(x, t) (defined above) is Lipschitz in x. Consider

w(x_2, t) − w(x_1, t) = min_y {t L((x_2 − y)/t) + h(y)} − t L((x_1 − y(x_1, t))/t) − h(y(x_1, t))
                      ≤ h(x_2 − x_1 + y(x_1, t)) − h(y(x_1, t))
                      ≤ Lip(h)|x_2 − x_1|,

so (doing the reverse as well) |w(x_2, t) − w(x_1, t)| ≤ Lip(h)|x_2 − x_1|.

2.5.1 Theorem. Under the assumptions of Theorem 1 (F smooth, F(0) = 0, F'' > 0, F super-linear at infinity, g ∈ L^∞), u defined above is an integral solution to the conservation law.

PROOF: We have w_t + F(w_x) = 0 a.e. Let ϕ ∈ C_c^∞(R × [0, ∞)) be a test function. Then

∫_0^∞ ∫_R (w_t ϕ_x + F(w_x)ϕ_x) dx dt = 0,

and integrating by parts (first in t, then in x),

∫_0^∞ ∫_R w_t ϕ_x dx dt = −∫_0^∞ ∫_R w ϕ_{xt} dx dt − ∫_R w ϕ_x dx |_{t=0}
                        = ∫_0^∞ ∫_R w_x ϕ_t dx dt + ∫_R w_x ϕ dx |_{t=0}.

Therefore

∫_0^∞ ∫_R (uϕ_t + F(u)ϕ_x) dx dt + ∫_R gϕ dx |_{t=0} = 0

and u is an integral solution.

2.5.2 Lemma. Under the assumptions of the theorem,
1. u given by the formula above satisfies the entropy condition (E3); and
2. there is C such that u defined above satisfies u(x + h, t) − u(x, t) ≤ (C/t)h for all x ∈ R, t > 0, h > 0.

PROOF: We claim that there is a constant C̃ such that for every x,

|x − y(x, t)|/t ≤ C̃.

Assuming this claim, modify G outside the interval [−C̃, C̃] into G̃ so that G̃ is Lipschitz. Then

u(x, t) = G̃((x − y(x, t))/t)
        ≥ G̃((x − y(x + h, t))/t)
        ≥ G̃((x + h − y(x + h, t))/t) − Lip(G̃)(h/t)
        ≥ u(x + h, t) − Lip(G̃)(h/t),

so u(x + h, t) − u(x, t) ≤ Lip(G̃)(h/t), which is the desired bound with C = Lip(G̃).
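The Lax-Oleinik formula can be evaluated directly by minimizing over a grid of starting points y. The Python sketch below does this for Burgers' equation (F(u) = u²/2, so L(q) = q²/2 and G is the identity) with the Riemann data g = 1_{(−∞,0]}, for which h(y) = min(y, 0) and the entropy solution is a shock travelling at speed ½; the grid bounds and resolution are illustrative choices.

    # Evaluate the Lax-Oleinik formula u(x,t) = G((x - y(x,t))/t) for Burgers'
    # equation: F(u) = u**2/2, hence L(q) = q**2/2 and G = (F')^{-1} = identity.
    # Riemann data g = 1 on (-inf, 0], 0 on (0, inf) gives h(y) = min(y, 0).
    def h(y):
        return min(y, 0.0)
    def u(x, t, ymin=-5.0, ymax=5.0, n=100001):
        dy = (ymax - ymin) / (n - 1)
        ys = (ymin + i * dy for i in range(n))
        ybest = min(ys, key=lambda y: (x - y) ** 2 / (2.0 * t) + h(y))
        return (x - ybest) / t               # G(s) = s for this flux
    t = 1.0
    for x in (-0.5, 0.2, 0.4, 0.6, 1.0):
        print(x, round(u(x, t), 3))          # expect 1 for x < t/2 and 0 for x > t/2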
2.6 Viscosity solutions

The most general second order equation is F(D²u, Du, u, x) = 0, where x ∈ R^n and u : Ω ⊆ R^n → R is the unknown. This is too general a class to investigate. We are interested in those PDE whose solutions satisfy the comparison principle. We have seen that Laplace's equation is of this type, and it can be shown that the Hamilton-Jacobi equation is of this type. The comparison principle will imply uniqueness of solutions.

2.6.1 Definition. We say that u is a sub-solution if F(D²u, Du, u, x) ≤ 0 in Ω, and u is a super-solution if F(D²u, Du, u, x) ≥ 0 in Ω. We may also speak of strict sub- and super-solutions, for which the inequality is strict.

Let u be a sub-solution and v be a super-solution with u ≤ v on ∂Ω. When is u ≤ v in Ω? For which equations does the comparison principle hold? Assume that u − v has a positive maximum at some x_0 ∈ Ω. Then Du = Dv and D²u ≤ D²v at x_0. (Recall that for a matrix M, we say M ≥ 0 if x^T M x ≥ 0 for all x ∈ R^n.) Assume for the moment that v is a strict super-solution. Then at x_0,

F(D²u, Du, u, x_0) ≤ 0 < F(D²v, Dv, v, x_0),

and we have comparisons among the entries.

2.6.2 Definition. A function F : Sym(n) × R^n × R × Ω → R is
1. degenerate elliptic if for all X, Y ∈ Sym(n), p ∈ R^n, r ∈ R, and x ∈ Ω, X ≤ Y implies F(X, p, r, x) ≥ F(Y, p, r, x); and
2. proper elliptic if F is degenerate elliptic and non-decreasing in r.

2.6.3 Examples.
1. The Poisson equation −∆u = f is given by F(X, x) = −tr X − f(x), and is proper. Note that ∆u = f is not degenerate elliptic.
2. −∆u + u = 0 is proper, while −∆u − u = 0 is degenerate elliptic but not proper.
3. H(Du) = 0 is trivially proper.
4. The following PDE are proper.
a) u_t + H(Du) = 0
b) u_t − ∆u = 0
c) −div(Du/√(1 + |Du|²)) = 0
d) u_t − div(Du/√(1 + |Du|²)) = 0

2.6.4 Definition. Let u : Ω → R and let

u*(x) := lim_{r→0+} sup{u(y) | y ∈ Ω, |x − y| < r},

and let u_*(x) be defined similarly with an infimum instead. Then u* : Ω → R ∪ {∞} is the upper semi-continuous envelope of u and u_* : Ω → R ∪ {−∞} is the lower semi-continuous envelope of u. We have u_* ≤ u ≤ u*, u_* is the largest lower semi-continuous function with this property, and u* is the smallest upper semi-continuous function with this property. It is not hard to show that an upper semi-continuous function attains its maximum on a compact set.

2.6.5 Definition. Let F(D²u, Du, u, x) be degenerate elliptic and proper. u : Ω → R is a viscosity sub-solution of F(D²u, Du, u, x) = 0 if u is upper semi-continuous and for every ϕ ∈ C²(Ω) such that u − ϕ has a local maximum at some x̄ ∈ Ω, we have

F(D²ϕ(x̄), Dϕ(x̄), u(x̄), x̄) ≤ 0.

The idea is that we must test against all C² functions that touch u from above (since we take derivatives of ϕ, we may add a constant in such a way as to guarantee that u − ϕ is non-positive with maximum value 0). Super-solutions are defined analogously; in that case we test against functions touching u from below.

2.6.6 Definition. u is a viscosity sub-solution to the boundary value problem

F(D²u, Du, u, x) = 0 on Ω,   u = g on ∂Ω,   Ω = Ω̃ × [0, T],

if it is a sub-solution and u ≤ g on ∂Ω.

2.6.7 Proposition. If u is a classical sub-solution to the PDE then it is a viscosity sub-solution.

PROOF: If F(D²u, Du, u, x) ≤ 0 then for any ϕ ∈ C² touching u from above at x, Du(x) = Dϕ(x) and D²u(x) ≤ D²ϕ(x), so

F(D²ϕ(x), Dϕ(x), u(x), x) ≤ F(D²u(x), Du(x), u(x), x) ≤ 0.

2.6.8 Proposition. If u is a viscosity solution and u is C² then it is a classical solution.

PROOF: If u is C² then ϕ = u is itself an admissible test function at every point (u − ϕ ≡ 0, so every point is both a local maximum and a local minimum), and the sub- and super-solution conditions give F(D²u(x), Du(x), u(x), x) ≤ 0 and ≥ 0 for every x ∈ Ω.
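The semi-continuous envelopes of Definition 2.6.4 can be approximated by sampling. The Python sketch below uses the illustrative step function u = 1_{[0,∞)} on R, for which one expects u*(0) = 1 and u_*(0) = 0; the sampling radius and grid are arbitrary choices standing in for the limit r → 0+.

    # Approximate the envelopes of Definition 2.6.4 for the assumed step function
    # u = 1 on [0, inf), 0 on (-inf, 0): sample u on a small ball around x and
    # take the sup (upper envelope) and inf (lower envelope).
    def u(x):
        return 1.0 if x >= 0.0 else 0.0
    def envelopes(x, r=1.0e-3, n=2001):
        samples = [u(x + r * (2.0 * i / (n - 1) - 1.0)) for i in range(n)]
        return max(samples), min(samples)
    for x in (-0.5, 0.0, 0.5):
        ustar, ulower = envelopes(x)
        print(x, "u* =", ustar, " u_* =", ulower)   # at x = 0: u* = 1, u_* = 0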
2.6.9 Example. Take F(X, p, z, x) = |p|² − 1, so the corresponding PDE is |Du|² − 1 = 0. In one dimension, consider the domain Ω = (0, 2). There is no smooth function u such that u(0) = u(2) = 0 and |u'| = 1. But consider the function

u(x) = x for x ∈ (0, 1],   2 − x for x ∈ (1, 2).

Then u is a viscosity solution to the PDE. (At the kink there are no smooth functions touching u from below, so there is nothing nontrivial to check for the super-solution property.) The function u fails to be a sub-solution to the PDE associated with F̃ = −F (i.e. −|Du|² + 1 = 0), but −u is a solution. Crazy.

Viscosity solutions to Hamilton-Jacobi equations are appropriate for problems arising in optimal (stochastic) control (and hence finance), but they may not be the "right" solution for problems with other motivation. The main reference on viscosity solutions is a paper by Crandall, Ishii, and Lions, "User's guide to viscosity solutions."

Comparison

We consider the Hamiltonian H(Du, x). Suppose there is a continuous w : [0, ∞) → [0, ∞) with w(0) = 0 and w > 0 on (0, ∞) such that for all p ∈ R^n and x, y ∈ Ω,

|H(p, x) − H(p, y)| ≤ w((1 + |p|)|x − y|).

2.6.10 Theorem (Comparison). Let u be a viscosity sub-solution and v be a viscosity super-solution of u_t + H(Du, x) = 0 on Ω × (0, T), where Ω is bounded and open. Assume further that u is upper semi-continuous on Ω × [0, T] and v is lower semi-continuous on Ω × [0, T]. If u ≤ v on the parabolic boundary and H is as above, then u ≤ v in Ω × [0, T).

2.6.11 Theorem. Let F be proper, F = F(X, p, x), such that the statement above holds uniformly in X (so w does not depend on X). Then comparison holds for u_t + F(D²u, Du, x) = 0.

Stability and Existence

Suppose −∆u_n = 0 is a sequence of classical solutions to the Laplace equation. If u_n → u uniformly (i.e. in C) then can we conclude −∆u = 0? In this case, yes, because of the mean-value property.

2.6.12 Definition. For a sequence of functions {u_n},

(lim sup* u_n)(x) = lim_{m→∞} sup{u_n(y) | n ≥ m, y ∈ Ω, |x − y| ≤ 1/m}
                  = sup{lim sup_{k→∞} u_{n_k}(x_{n_k}) : n_k → ∞, x_{n_k} → x}.

The last equality is an exercise. Also show (via diagonalization) that the supremum is attained by some sequence.

2.6.13 Lemma. Let u_n be a sequence of upper semi-continuous functions on Ω, and let x̂ ∈ Ω. Assume that u ≥ lim sup* u_n, u is USC, and that u(x̂) = (lim sup* u_n)(x̂). Assume ϕ ∈ C²(Ω) and u − ϕ has a strict local maximum at x̂. Then there exist a sequence n_k → ∞ and points x_k → x̂ such that u_{n_k} − ϕ has a local maximum at x_k and u_{n_k}(x_k) → u(x̂).

This lemma also holds if instead u_n → u uniformly.

PROOF: There is r > 0 such that u(x̂) − ϕ(x̂) > u(x) − ϕ(x) for all other x ∈ B(x̂, r). Let x̂_n be the location of the maximum of u_n − ϕ on B(x̂, r). (This maximum may be attained on the boundary of the ball.) Since u(x̂) = (lim sup* u_n)(x̂), there is a sequence (x̃_k, n_k) → (x̂, ∞) such that u(x̂) = lim_{k→∞} u_{n_k}(x̃_k). Without loss of generality, assume n_k = k (by throwing away some of the u_n's). Let x̄ be an accumulation point of the x̂_n. There exists a subsequence ñ_k such that x̂_{ñ_k} → x̄. Now

u_{ñ_k}(x̂_{ñ_k}) − ϕ(x̂_{ñ_k}) ≥ u_{ñ_k}(x̃_{ñ_k}) − ϕ(x̃_{ñ_k}),

so

u(x̄) − ϕ(x̄) ≥ lim sup_{k→∞} (u_{ñ_k}(x̂_{ñ_k}) − ϕ(x̂_{ñ_k}))
             ≥ lim sup_{k→∞} (u_{ñ_k}(x̃_{ñ_k}) − ϕ(x̃_{ñ_k}))
             = u(x̂) − ϕ(x̂).

This contradicts the strictness of the maximum at x̂ unless x̄ = x̂. Therefore the desired sequence is x̂_{ñ_k}.
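The operation lim sup* of Definition 2.6.12 can differ from the pointwise limit, and a crude numerical approximation already shows this. The Python sketch below uses the illustrative bumps u_n(y) = max(0, 1 − n|y − 1/n|), which converge to 0 pointwise while (lim sup* u_n)(0) = 1; it freezes one (large) value of m and replaces the suprema by maxima over a finite range of n and a grid of y, so it is only a rough stand-in for the limit in the definition.

    # Approximate (lim sup* u_n)(0) from Definition 2.6.12 for the assumed bumps
    # u_n(y) = max(0, 1 - n*|y - 1/n|): height-one bumps centred at 1/n.  The
    # pointwise limit at 0 is 0, but the lim sup* there is 1.
    def u(n, y):
        return max(0.0, 1.0 - n * abs(y - 1.0 / n))
    def limsup_star_at_zero(m=50, nmax=500, ngrid=2001):
        # inner sup over n >= m and |y| <= 1/m, truncated to finite n and a y-grid
        best = float("-inf")
        for n in range(m, nmax + 1):
            for i in range(ngrid):
                y = -1.0 / m + (2.0 / m) * i / (ngrid - 1)
                best = max(best, u(n, y))
        return best
    print("pointwise values u_n(0):", [u(n, 0.0) for n in (50, 100, 500)])
    print("approximate (lim sup* u_n)(0):", limsup_star_at_zero())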
2.6.14 Theorem. Let u_n be a viscosity sub-solution of F_n(D²u, Du, u, x) = 0, where F_n is proper for each n. Assume F is proper and such that F ≤ lim inf_* F_n. Let u = lim sup* u_n. If u is finite then u is a viscosity sub-solution of F(D²u, Du, u, x) = 0.

PROOF: Let ϕ be a test function such that u − ϕ has a local maximum at some x̂. Without loss of generality we may assume the maximum is strict. (Indeed, let ϕ̃(x) = ϕ(x) + ¼|x − x̂|⁴, so that u − ϕ̃ has a strict local maximum at x̂. We will show that F(D²ϕ(x̂), Dϕ(x̂), u(x̂), x̂) ≤ 0; since Dϕ̃(x̂) = Dϕ(x̂) and D²ϕ̃(x̂) = D²ϕ(x̂), it is enough to show that F(D²ϕ̃(x̂), Dϕ̃(x̂), u(x̂), x̂) ≤ 0.) By the previous lemma there exist a sequence x̂_k → x̂ and n_k → ∞ such that u_{n_k} − ϕ has a local maximum at x̂_k. Then for each k,

F_{n_k}(D²ϕ(x̂_k), Dϕ(x̂_k), u_{n_k}(x̂_k), x̂_k) ≤ 0.

But the entries converge (respectively) to D²ϕ(x̂), Dϕ(x̂), u(x̂), and x̂. By the assumption on F, we get the desired inequality.

Read about Perron's method for existence.

Index

action functional, 38
admissible, 35
admissible set, 17
Burgers' equation, 42
Cauchy problem, 20
characteristic curve, 3
compatibility conditions, 34
conservation law, 2, 37, 42
conserved quantity, 2
continuity equation, 2
degenerate elliptic, 47
Dirichlet problem, 22
Duhamel's principle, 21
energy, 17, 18, 26, 32
entropy, 26
entropy condition, 45
entropy-entropy-flux pair, 45
first order, 2
fundamental solution, 5, 20
generalized momentum, 39
Green's function, 15
Hamilton-Jacobi equation, 2, 37
Hamiltonian, 37
harmonic function, 4
heat equation, 2
homogeneous, 3
inhomogeneous, 3
initial conditions, 18
integral solution, 43
Kirchhoff's formula, 30
Lagrangian, 38
Laplace equation, 4
Laplace's equation, 2
Laplacian, 2
lateral boundary conditions, 18
Lax entropy condition, 45
Lax-Oleinik formula, 46
Legendre transform, 39
linear, 2, 3
lower semi-continuous envelope, 48
mean-value property, 10
method of characteristics, 33
minimal surface equation, 2
mollifier, 10
Navier-Stokes equations, 2
non-characteristic, 33, 35
non-physical shock, 44
nonlinear diffusion, 2
Oleinik entropy condition, 45
parabolic boundary, 22
parabolic cylinder, 22, 25
Poisson equation, 4
proper elliptic, 47
push-forward, 42
quasi-linear, 32
radial solution, 5
Rankine-Hugoniot condition, 44
rarefaction wave, 45
reaction-diffusion equation, 2
regularization, 11
Schrödinger's equation, 2
similarity profile, 20
sub-solution, 23, 47
super-solution, 23, 47
translation invariant, 5
transport equation, 2, 3
upper semi-continuous envelope, 48
viscosity sub-solution, 48
wave equation, 2
well-posed, 3