Stability of the fourth order Runge-Kutta method for time-dependent partial differential equations¹

Zheng Sun² and Chi-Wang Shu³
Abstract
In this paper, we analyze the stability of the fourth order Runge-Kutta method for
integrating semi-discrete approximations of time-dependent partial differential equations.
Our study focuses on linear problems and covers general semi-bounded spatial discretizations.
A counterexample is given to show that the classical four-stage fourth order Runge-Kutta
method cannot preserve the one-step strong stability, even though the ordinary differential
equation system is energy-decaying. However, with an energy argument, we show that the strong
stability property holds in two steps under an appropriate time step constraint. Based on
this fact, the stability extends to general well-posed linear systems. As an application, we
utilize the results to examine the stability of the fourth order Runge-Kutta approximations
of several specific method of lines schemes for hyperbolic problems, including the spectral
Galerkin method and the discontinuous Galerkin method.
Keywords: stability analysis, Runge-Kutta method, method of lines, spectral Galerkin
method, discontinuous Galerkin method.
¹Research is supported by ARO grant W911NF-15-1-0226 and NSF grant DMS-1418750.
²Division of Applied Mathematics, Brown University, Providence, RI 02912. E-mail: zheng [email protected]
³Division of Applied Mathematics, Brown University, Providence, RI 02912. E-mail: [email protected]

1 Introduction
In common practice, the method of lines for solving time-dependent partial differential equations (PDEs) starts with a spatial discretization that reduces the problem to an ordinary differential equation (ODE) system depending solely on time; a suitable ODE solver is then used for the time integration. Such a procedure raises the issue of stability: will a stable semi-discrete scheme remain stable after coupling with the time discretization? In this paper, we focus on the stability of the fourth order Runge-Kutta (RK) method in this context.
The model problem for stability analysis is usually chosen as a well-posed linear system ∂t u = L(x, t, ∂x)u. But first, let us consider a simpler problem ∂t u = L(x, ∂x)u with L + L⊤ ≤ 0. (Here the superscript ⊤ stands for the adjoint, to be distinguished from T, which stands for the transpose; for matrices the two coincide under the usual dot product in Rᵐ.) Its semi-discrete scheme corresponds to an autonomous ODE system

    d/dt uN(t) = LN uN(t).

Here LN is a constant matrix and the parameter N relates to the degrees of freedom in the spatial discretization. Suppose LN inherits the semi-negativity LN + LN⊤ ≤ 0. Then

    d/dt ‖uN(t)‖² = (LN uN, uN) + (uN, LN uN) ≤ 0,

where we temporarily take (·, ·) to be the usual dot product and ‖·‖ the induced norm. Therefore the solution to the semi-discrete scheme is strongly stable. We are interested in whether this strong stability is preserved under the four-stage fourth order RK time discretization. In other words, with τ being the time step and

    uN^{n+1} = P4(τLN)uN^n = (I + τLN + (1/2)(τLN)² + (1/6)(τLN)³ + (1/24)(τLN)⁴)uN^n,

we wonder whether ‖uN^{n+1}‖ ≤ ‖uN^n‖, or equivalently, whether the operator norm of P4(τLN) is bounded by 1. If this is true, the stability extends to general semi-bounded systems ∂t u = L(x, t, ∂x)u by a standard argument.
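As a quick numerical sketch (our own illustration, not part of the paper's argument), the operator norm of P4(τLN) can be computed directly with NumPy. For a skew-symmetric example the norm stays at 1 exactly up to τ‖LN‖ = 2√2, the classical RK4 stability limit on the imaginary axis, and exceeds 1 beyond it:

```python
import numpy as np

def p4(A):
    """Stability polynomial of classical RK4: I + A + A^2/2 + A^3/6 + A^4/24."""
    P, term = np.eye(len(A)), np.eye(len(A))
    for k in range(1, 5):
        term = term @ A / k       # accumulates A^k / k!
        P = P + term
    return P

def p4_norm(L, tau):
    """Operator (spectral) norm of P4(tau * L)."""
    return np.linalg.norm(p4(tau * L), 2)

# skew-symmetric example: rotation generator, ||L|| = 1
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
print(p4_norm(J, 2 * np.sqrt(2)))   # = 1 at the endpoint of the RK4 imaginary-axis interval
print(p4_norm(J, 3.0))              # > 1 beyond the stability limit
```

Since P4(τJ) is a normal matrix, its spectral norm equals |P4(iτ)|, so this reproduces the familiar scalar absolute-stability analysis.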
For LN a scalar and τ sufficiently small, the proposition above can be justified by analyzing the region of absolute stability; see Chapter IV.2 in [15], for example. A natural attempt is to apply eigenvalue analysis to extend the result to systems. However, this technique can only be used for normal matrices LN, namely LN LN⊤ = LN⊤ LN. If re-norming is allowed, the method also covers diagonalizable LN. (But it would still be useless if the diagonalizing matrix is ill-conditioned; see [10].) For general systems, naive eigenvalue analysis may end up with misleading results. We refer to Chapter 17.1 in [8] for a specific example.

The analysis for general systems originates from coercive problems, namely LN + LN⊤ ≤ −ηLN⊤LN for some positive constant η. In [10], Levy and Tadmor used the energy method to prove the strong stability of the third order and the fourth order RK schemes under the coercivity condition and the time step constraint τ ≤ cη, where c = 1/62 for the third order scheme and c = 3/50 for the fourth order scheme. Later in [6] and [14], a simpler proof was provided, with the time step constraint significantly relaxed to c = 1, which extends to general s-stage s-th order linear Runge-Kutta schemes. The authors in [6] pointed out that the coercivity condition implies the strong stability of the forward Euler scheme when τ ≤ η. The strong stability of the high order RK schemes follows from the fact that they can be rewritten as convex combinations of forward Euler steps. These results coincide with the earlier work on contractivity analysis of numerical solutions to ODE systems; see [12] and [9]. In their theory of contractivity, or strong stability in our context, a circle condition is assumed, which is essentially equivalent to the strong stability assumption for the forward Euler scheme in [6].
In general, the coercivity condition does not hold for method of lines schemes arising from purely hyperbolic problems. Hence it is still imperative to analyze schemes which only satisfy a semi-negativity condition LN + LN⊤ ≤ 0. In [14], Tadmor successfully removed the coercivity assumption and proved that the third order RK scheme is strongly stable if LN + LN⊤ ≤ 0 and τ‖LN‖ ≤ 1. But the strong stability of the fourth order RK method remains open, which is the main issue we are concerned with in this paper.

We find that, although the strong stability of the fourth order RK method passes the examination of the scalar equation, and can be extended to normal systems, it does NOT hold in general. More specifically, we have the following counterexample in Proposition 1.1, whose proof is given in Appendix A.
Proposition 1.1. Let

    LN = − [ 1  2  2 ]
           [ 0  1  2 ]
           [ 0  0  1 ].

Then LN + LN^T ≤ 0 and ‖P4(τLN)‖ > 1 for τ sufficiently small.
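Proposition 1.1 is easy to check numerically. The sketch below (our own illustration, not from the paper) verifies that LN + LN^T is negative semi-definite, with eigenvalues −6, 0, 0, and measures the spectral norm of P4(τLN) for a few small time steps:

```python
import numpy as np

LN = -np.array([[1.0, 2.0, 2.0],
                [0.0, 1.0, 2.0],
                [0.0, 0.0, 1.0]])

# semi-negativity: eigenvalues of LN + LN^T
print(np.linalg.eigvalsh(LN + LN.T))

def p4(A):
    """RK4 stability polynomial I + A + A^2/2 + A^3/6 + A^4/24."""
    P, term = np.eye(len(A)), np.eye(len(A))
    for k in range(1, 5):
        term = term @ A / k
        P = P + term
    return P

for tau in [0.3, 0.1, 0.01]:
    # the one-step operator norm should creep above 1 for small tau
    print(tau, np.linalg.norm(p4(tau * LN), 2))
```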
Interestingly, despite the negative result on the one-step performance, the strong stability property holds over two time steps. In other words, under the constraint τ‖LN‖ ≤ c0 for some positive constant c0, ‖P4(τLN)²‖ ≤ 1 and hence ‖uN^{n+2}‖ ≤ ‖uN^n‖. This is still sufficient to justify the stability after long time integration. It can also be used to obtain the stability ‖uN^n‖ ≤ K(t^n)‖uN^0‖ for semi-bounded and time-dependent LN. We will apply the results to study the stability of different spatial discretizations coupled with the fourth order RK approximation.
The paper is organized as follows. In Section 2 we exhibit our main results, and prove the stability of the fourth order RK method with skew-symmetric, semi-negative and semi-bounded LN. The case of LN depending on time is also discussed. In Section 3, we apply our results to several different spatial discretizations. We specifically focus on Galerkin methods, as a complement to [14], including the spectral Galerkin method and the discontinuous Galerkin method. Finally, we give our concluding remarks in Section 4.
2 Stability of the fourth order RK method
In this section, we analyze the stability of the fourth order RK schemes. For simplicity, we assume everything to be real, but the approach extends to complex spaces. To facilitate our later discussion on applications to spatial discretizations based on Galerkin methods, we work over a general Hilbert space. We avoid explicitly characterizing the method of lines scheme in this general setting. This will not lead to ambiguities, since the scheme only serves as a motivation to derive the fourth order RK iteration, and will not be used in the stability analysis. To reduce to ODE systems, one simply sets V = Rᵐ and (·, ·) to be the usual dot product. We also remark that, by using the inner product (u, v)_H = u^T H v with H a symmetric positive definite matrix, LN + LN⊤ ≤ 0 is equivalent to LN^T H + H LN ≤ 0, which is consistent with the assumption in equation (6) of [14].
2.1 Notations and the main results
We denote by V a real Hilbert space equipped with the inner product (·, ·). The induced norm is defined as ‖·‖ = √(·, ·). Consider a method of lines scheme defined on V,

    ∂t uN = LN uN,                                                        (2.1)

where uN(·, t) ∈ V and LN is a bounded linear operator on V. Suppose LN is independent of t; then the four-stage fourth order RK approximation can be written as

    uN^{n+1} = P4(τLN)uN^n,
    P4(τLN) = I + τLN + (1/2)(τLN)² + (1/6)(τLN)³ + (1/24)(τLN)⁴.          (2.2)
When LN = LN(t) depends on time, we denote the scheme by

    uN^{n+1} = Rτ(t)uN^n,                                                 (2.3)

and the specific form of Rτ(t) will be introduced later.
For simplicity, we drop all the subscripts N in the remaining parts of the section.

Given an operator L, the operator norm is defined as ‖L‖ = sup_{‖v‖=1} ‖Lv‖. We denote by B(V) = {L : ‖L‖ < +∞} the collection of bounded linear operators on V. For any L ∈ B(V), there exists a unique L⊤ ∈ B(V) such that (Lw, v) = (w, L⊤v); L⊤ is referred to as the adjoint of L.

If (v, (L + L⊤)v) ≤ 0, ∀v ∈ V, denoted by L + L⊤ ≤ 0, then we say L is semi-negative. A special case is L + L⊤ = 0, in which L is said to be skew-symmetric. If L + L⊤ ≤ 2µI for some positive number µ, namely L − µI is semi-negative, then we say L is semi-bounded.

We denote by [w, v] = −(w, (L + L⊤)v) and ⟦w⟧ = √[w, w]. [w, v] is a bilinear form on V. For L + L⊤ ≤ 0, [w, v] is a semi-inner-product, and one has the Cauchy-Schwarz inequality [w, v] ≤ ⟦w⟧⟦v⟧.

To simplify our notation, we define ℒ = τL. This notation will be used in the intermediate lemmas and proofs. For clarity, we restore the notation τL in our main theorems.
Here we summarize our main results on the stability of the fourth order RK approximation; c0 and K are some positive real numbers in the following statements. The details will be given in Theorem 2.1, Corollary 2.1, Theorem 2.2, Theorem 2.3 and Theorem 2.4.

(1) Suppose L + L⊤ = 0. Then (2.2) is strongly stable, ‖u^{n+1}‖ ≤ ‖u^n‖, if τ‖L‖ ≤ 2√2.

(2a) Suppose L + L⊤ ≤ 0 and LL⊤ = L⊤L. Then (2.2) is strongly stable, ‖u^{n+1}‖ ≤ ‖u^n‖, if τ‖L‖ ≤ c0.

(2b) Suppose L + L⊤ ≤ 0. Then (2.2) is strongly stable in two steps, ‖u^{n+2}‖ ≤ ‖u^n‖, if τ‖L‖ ≤ c0.

(3) Suppose L + L⊤ ≤ 2µI. Then (2.2) is stable, ‖u^n‖ ≤ K(t^n)‖u^0‖, if τ ≤ c0/(‖L‖ + µ).

(4) Suppose L = L(t) satisfies certain regularity assumptions, and L + L⊤ ≤ 2µI. Then (2.3) is stable, ‖u^n‖ ≤ K(t^n)‖u^0‖, under an appropriate time step constraint.

Here the results with strong assumptions are listed first. But one should note that (2b) is the most crucial one among them: (1) is obtained from an intermediate step in deriving (2b), while (2a), (3) and (4) are all corollaries of (2b).
2.2 Strong stability for skew-symmetric L
We first derive an energy equality, which will be useful in understanding the subtle change of the norm of the solution after one time step. To this end, we introduce several identities in Lemma 2.1, and then use them to derive the equality in Lemma 2.2.
Lemma 2.1.

    (ℒv, v) = −(τ/2)⟦v⟧²,                               (2.4)
    (ℒ²v, v) = −‖ℒv‖² − τ[ℒv, v],                       (2.5)
    (ℒ³v, v) = (τ/2)⟦ℒv⟧² − τ[ℒ²v, v],                  (2.6)
    (ℒ⁴v, v) = ‖ℒ²v‖² + τ[ℒ²v, ℒv] − τ[ℒ³v, v].         (2.7)
Proof. (1) (ℒv, v) = (1/2)(ℒv, v) + (1/2)(v, ℒv) = (1/2)(v, (ℒ + ℒ⊤)v) = −(τ/2)⟦v⟧².
(2) (ℒ²v, v) = (ℒv, ℒ⊤v) = −(ℒv, ℒv) + (ℒv, (ℒ + ℒ⊤)v) = −‖ℒv‖² − τ[ℒv, v].
(3) (ℒ³v, v) = (ℒ²v, ℒ⊤v) = −(ℒ(ℒv), ℒv) + (ℒ²v, (ℒ + ℒ⊤)v) = (τ/2)⟦ℒv⟧² − τ[ℒ²v, v], where we have used (2.4) in the last equality.
(4) (ℒ⁴v, v) = (ℒ³v, ℒ⊤v) = −(ℒ²(ℒv), ℒv) + (ℒ³v, (ℒ + ℒ⊤)v) = ‖ℒ²v‖² + τ[ℒ²v, ℒv] − τ[ℒ³v, v], where (2.5) is used in the last equality.
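These identities hold for any matrix and any τ, which makes them easy to sanity-check numerically. The snippet below (our own check, with a random matrix standing in for L) verifies (2.4)-(2.7):

```python
import numpy as np

rng = np.random.default_rng(0)
n, tau = 5, 0.17
L = rng.standard_normal((n, n))   # arbitrary stand-in for the operator L
Ls = tau * L                      # script-L = tau * L
S = L + L.T

def br(w, v):                     # [w, v] = -(w, (L + L^T) v)
    return -w @ (S @ v)

def sq(w):                        # [[w]]^2 = [w, w]
    return br(w, w)

v = rng.standard_normal(n)
L1, L2 = Ls @ v, Ls @ Ls @ v
L3, L4 = Ls @ L2, Ls @ Ls @ L2

assert np.isclose(L1 @ v, -tau / 2 * sq(v))                              # (2.4)
assert np.isclose(L2 @ v, -(L1 @ L1) - tau * br(L1, v))                  # (2.5)
assert np.isclose(L3 @ v, tau / 2 * sq(L1) - tau * br(L2, v))            # (2.6)
assert np.isclose(L4 @ v, L2 @ L2 + tau * br(L2, L1) - tau * br(L3, v))  # (2.7)
print("identities (2.4)-(2.7) verified")
```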
Lemma 2.2 (Energy equality).

    ‖u^{n+1}‖² − ‖u^n‖² = Q1(u^n),

where

    Q1(u^n) = (1/576)‖ℒ⁴u^n‖² − (1/72)‖ℒ³u^n‖² + τ Σ_{i,j=0}^{3} αij [ℒⁱu^n, ℒʲu^n],      (2.8)

and

    A = (αij)_{4×4} = − [ 1     1/2   1/6   1/24  ]
                        [ 1/2   1/3   1/8   1/24  ]
                        [ 1/6   1/8   1/24  1/48  ]
                        [ 1/24  1/24  1/48  1/144 ].
Proof. Taking an inner product of (2.2) with u^n, one has

    (1/2)‖u^{n+1}‖² − (1/2)‖u^n‖² − (1/2)‖u^{n+1} − u^n‖² = ((ℒ + (1/2)ℒ² + (1/6)ℒ³ + (1/24)ℒ⁴)u^n, u^n).

Applying (2.4)-(2.7) with v = u^n, we obtain

    ‖u^{n+1}‖² − ‖u^n‖² = ‖u^{n+1} − u^n‖² − ‖ℒu^n‖² + (1/12)‖ℒ²u^n‖² − τ⟦u^n⟧² − τ[ℒu^n, u^n]
        + (τ/6)⟦ℒu^n⟧² − (τ/3)[ℒ²u^n, u^n] + (τ/12)[ℒ²u^n, ℒu^n] − (τ/12)[ℒ³u^n, u^n].      (2.9)

Using (2.4)-(2.6) with v = ℒu^n, the first two terms on the right can be expanded:

    ‖u^{n+1} − u^n‖² − ‖ℒu^n‖² = (u^{n+1} − u^n − ℒu^n, u^{n+1} − u^n + ℒu^n)
        = ‖u^{n+1} − u^n − ℒu^n‖² + (((1/2)ℒ² + (1/6)ℒ³ + (1/24)ℒ⁴)u^n, 2ℒu^n)
        = ‖u^{n+1} − u^n − ℒu^n‖² − (1/3)‖ℒ²u^n‖² − (τ/2)⟦ℒu^n⟧²
          − (τ/3)[ℒ²u^n, ℒu^n] + (τ/24)⟦ℒ²u^n⟧² − (τ/12)[ℒ³u^n, ℒu^n].      (2.10)

Substituting (2.10) into (2.9), one has

    ‖u^{n+1}‖² − ‖u^n‖² = ‖u^{n+1} − u^n − ℒu^n‖² − (1/4)‖ℒ²u^n‖²
        − τ⟦u^n⟧² − τ[ℒu^n, u^n] − (τ/3)⟦ℒu^n⟧² − (τ/3)[ℒ²u^n, u^n] − (τ/4)[ℒ²u^n, ℒu^n]
        − (τ/12)[ℒ³u^n, u^n] + (τ/24)⟦ℒ²u^n⟧² − (τ/12)[ℒ³u^n, ℒu^n].      (2.11)

Similarly as before, one can use (2.4) and (2.5) with v = ℒ²u^n to calculate the difference of square terms on the right:

    ‖u^{n+1} − u^n − ℒu^n‖² − (1/4)‖ℒ²u^n‖²
        = (u^{n+1} − u^n − ℒu^n − (1/2)ℒ²u^n, u^{n+1} − u^n − ℒu^n + (1/2)ℒ²u^n)
        = ‖u^{n+1} − u^n − ℒu^n − (1/2)ℒ²u^n‖² + (((1/6)ℒ³ + (1/24)ℒ⁴)u^n, ℒ²u^n)
        = ‖u^{n+1} − u^n − ℒu^n − (1/2)ℒ²u^n‖² − (1/24)‖ℒ³u^n‖² − (τ/12)⟦ℒ²u^n⟧² − (τ/24)[ℒ³u^n, ℒ²u^n].      (2.12)

Once again, we plug (2.12) into (2.11) to get

    ‖u^{n+1}‖² − ‖u^n‖² = ‖u^{n+1} − u^n − ℒu^n − (1/2)ℒ²u^n‖² − (1/24)‖ℒ³u^n‖²
        − τ⟦u^n⟧² − τ[ℒu^n, u^n] − (τ/3)⟦ℒu^n⟧² − (τ/3)[ℒ²u^n, u^n] − (τ/4)[ℒ²u^n, ℒu^n]
        − (τ/12)[ℒ³u^n, u^n] − (τ/24)⟦ℒ²u^n⟧² − (τ/12)[ℒ³u^n, ℒu^n] − (τ/24)[ℒ³u^n, ℒ²u^n].

Finally, note that

    ‖u^{n+1} − u^n − ℒu^n − (1/2)ℒ²u^n‖² − (1/36)‖ℒ³u^n‖²
        = ‖u^{n+1} − u^n − ℒu^n − (1/2)ℒ²u^n − (1/6)ℒ³u^n‖² + ((1/24)ℒ⁴u^n, (1/3)ℒ³u^n)
        = (1/576)‖ℒ⁴u^n‖² − (τ/144)⟦ℒ³u^n⟧².

Therefore

    Q1(u^n) = (1/576)‖ℒ⁴u^n‖² − (1/72)‖ℒ³u^n‖²
        − τ⟦u^n⟧² − τ[ℒu^n, u^n] − (τ/3)⟦ℒu^n⟧² − (τ/3)[ℒ²u^n, u^n] − (τ/4)[ℒ²u^n, ℒu^n]
        − (τ/12)[ℒ³u^n, u^n] − (τ/24)⟦ℒ²u^n⟧² − (τ/12)[ℒ³u^n, ℒu^n] − (τ/24)[ℒ³u^n, ℒ²u^n] − (τ/144)⟦ℒ³u^n⟧²,

which can be rewritten as (2.8).
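Since its derivation uses only bilinearity, the energy equality (2.8), including the matrix A, holds for an arbitrary matrix and can be verified numerically. A sketch (our own check):

```python
import numpy as np

rng = np.random.default_rng(1)
n, tau = 6, 0.2
L = rng.standard_normal((n, n))
S = L + L.T
br = lambda w, v: -w @ (S @ v)          # the bilinear form [w, v]

def p4(A):
    """RK4 stability polynomial I + A + A^2/2 + A^3/6 + A^4/24."""
    P, term = np.eye(len(A)), np.eye(len(A))
    for k in range(1, 5):
        term = term @ A / k
        P = P + term
    return P

A = -np.array([[1,    1/2,  1/6,  1/24 ],
               [1/2,  1/3,  1/8,  1/24 ],
               [1/6,  1/8,  1/24, 1/48 ],
               [1/24, 1/24, 1/48, 1/144]])

u = rng.standard_normal(n)
Ls = tau * L
u1 = p4(Ls) @ u
lhs = u1 @ u1 - u @ u                   # ||u^{n+1}||^2 - ||u^n||^2

pw = [u]
for _ in range(4):
    pw.append(Ls @ pw[-1])              # powers script-L^i u, i = 0..4
rhs = pw[4] @ pw[4] / 576 - pw[3] @ pw[3] / 72
rhs += tau * sum(A[i, j] * br(pw[i], pw[j]) for i in range(4) for j in range(4))

print(abs(lhs - rhs))                   # agreement to machine precision
```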
According to Lemma 2.2, the energy change Q1(u^n) consists of two parts: the numerical dissipation (1/576)‖ℒ⁴u^n‖² − (1/72)‖ℒ³u^n‖² and the generalized quadratic form τ Σ_{i,j=0}^{3} αij[ℒⁱu^n, ℒʲu^n]. When L is skew-symmetric, the quadratic form is simply 0. Hence one can obtain the one-step strong stability.
Theorem 2.1. Suppose L + L⊤ = 0. Then the fourth order RK approximation (2.2) to the method of lines scheme (2.1) is strongly stable, ‖u^{n+1}‖ ≤ ‖u^n‖, if τ‖L‖ ≤ 2√2.

Proof. L + L⊤ = 0 implies [w, v] = 0, ∀v, w. Hence when τ‖L‖ ≤ 2√2,

    ‖u^{n+1}‖² − ‖u^n‖² = Q1(u^n) = (1/576)‖ℒ⁴u^n‖² − (1/72)‖ℒ³u^n‖² ≤ ((1/576)‖ℒ‖² − 1/72)‖ℒ³u^n‖² ≤ 0.
2.3 Stability for semi-negative L
According to Lemma 2.2, for non-skew-symmetric L, we need to absorb the quadratic form with the help of the numerical dissipation. One can see that the high order terms ℒⁱu^n with i ≥ 3 are easy to control, but there are no other terms to bound u^n, ℒu^n and ℒ²u^n. Our only hope is that τ Σ_{i,j=0}^{2} αij[ℒⁱu^n, ℒʲu^n] itself is negative-definite. Lemma 2.3 indicates that, to check the negativity of the generalized quadratic form, one only needs to examine the coefficient matrix, as for polynomials. In Lemma 2.4, we prove a sufficient condition for strong stability: once Σ_{i,j=0}^{2} αij[ℒⁱu, ℒʲu] is negative-definite, the energy change will be non-positive when ‖ℒ‖ is sufficiently small.
Lemma 2.3. Suppose L is semi-negative. Let M = (mij) be a real symmetric semi-negative definite matrix. Then Σ_{i,j} mij [uᵢ, uⱼ] ≤ 0.

Proof. Suppose M = S^T Λ S, where S = (sij) is an orthogonal matrix and Λ is a diagonal matrix with diagonal elements λk ≤ 0. Note that mij = Σ_k λk ski skj. Therefore

    Σ_{i,j} mij [uᵢ, uⱼ] = Σ_{i,j} (Σ_k λk ski skj)[uᵢ, uⱼ] = Σ_k λk (Σ_{i,j} ski skj [uᵢ, uⱼ])
        = Σ_k λk [Σ_i ski uᵢ, Σ_j skj uⱼ] = Σ_k λk ⟦Σ_i ski uᵢ⟧² ≤ 0.
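A quick numerical check of Lemma 2.3 (our own illustration): draw a random semi-negative L, a random symmetric negative semi-definite M, and random vectors uᵢ, and confirm the quadratic form is non-positive.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 6, 4
W = rng.standard_normal((n, n))
L = (W - W.T) - 0.1 * W.T @ W       # skew part plus a dissipative shift: L + L^T <= 0
S = L + L.T
br = lambda w, v: -w @ (S @ v)      # the bilinear form [w, v]

B = rng.standard_normal((m, m))
M = -(B @ B.T)                      # symmetric negative semi-definite

U = rng.standard_normal((m, n))     # rows are the vectors u_i
q = sum(M[i, j] * br(U[i], U[j]) for i in range(m) for j in range(m))
print(q)                            # <= 0 up to round-off
```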
Lemma 2.4. Suppose L is semi-negative. Let

    Q(u) = α‖ℒ³u‖² + τ Σ_{i,j=0}^{a} αij [ℒⁱu, ℒʲu],

where αij = αji and a ≥ 2. If α < 0 and

    A2 = [ α00  α01  α02 ]
         [ α10  α11  α12 ]
         [ α20  α21  α22 ]

is negative-definite, then there exists c0 > 0, such that Q(u) ≤ 0 as long as ‖ℒ‖ ≤ c0.

Proof. Let −ε be the largest eigenvalue of the matrix A2. Then A2 + εI is semi-negative definite. According to Lemma 2.3,

    Σ_{i,j=0}^{2} αij[ℒⁱu, ℒʲu] = Σ_{i,j=0}^{2} (αij + εδij)[ℒⁱu, ℒʲu] − ε(⟦u⟧² + ⟦ℒu⟧² + ⟦ℒ²u⟧²)
        ≤ −ε(⟦u⟧² + ⟦ℒu⟧² + ⟦ℒ²u⟧²).      (2.13)

By Cauchy-Schwarz and the arithmetic-geometric mean inequality,

    Σ_{max(i,j)>2} αij[ℒⁱu, ℒʲu] ≤ ε(⟦u⟧² + ⟦ℒu⟧² + ⟦ℒ²u⟧²) + Σ_{i=3}^{a} α̃i ⟦ℒⁱu⟧²
        ≤ ε(⟦u⟧² + ⟦ℒu⟧² + ⟦ℒ²u⟧²) + Σ_{i=3}^{a} 2α̃i ‖L‖ ‖ℒ‖^{2(i−3)} ‖ℒ³u‖²,      (2.14)

where we have used the fact ⟦v⟧² ≤ 2‖L‖‖v‖² in the last inequality, and α̃i are some non-negative constants depending on ε and αij. Using (2.13) and (2.14), if ‖ℒ‖ = τ‖L‖ ≤ c0, one has

    Q(u) ≤ (α + Σ_{i=3}^{a} 2α̃i c0^{2(i−3)+1}) ‖ℒ³u‖².

Since α is negative, Q(u) is non-positive as long as c0 is sufficiently small.
Unfortunately, Q1(u^n) does not satisfy the assumptions in Lemma 2.4. Actually, the three eigenvalues of

    A2 = − [ 1    1/2  1/6  ]
           [ 1/2  1/3  1/8  ]
           [ 1/6  1/8  1/24 ]

are 0.00560618, −0.0793266 and −1.30128, so A2 is not negative-definite. This motivated us to disprove the strong stability of the fourth order RK method, and we ended up with the counterexample in Proposition 1.1. We therefore turn to seek the power-boundedness of P4(ℒ). Surprisingly, though Q1(u^n) itself fails to pass Lemma 2.4, Q1(u^{n+1}) + Q1(u^n) succeeds. Hence we obtain the two-step strong stability in Theorem 2.2.
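The indefiniteness of this coefficient matrix is a one-line check (our own illustration):

```python
import numpy as np

A2 = -np.array([[1,   1/2, 1/6 ],
                [1/2, 1/3, 1/8 ],
                [1/6, 1/8, 1/24]])
# one positive eigenvalue: A2 is indefinite, so Lemma 2.4 does not apply
print(np.linalg.eigvalsh(A2))
```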
Theorem 2.2 (Two-step strong stability for semi-negative L). Suppose L + L⊤ ≤ 0. Then the fourth order RK approximation (2.2) to the method of lines scheme (2.1) is strongly stable in two steps, ‖u^{n+2}‖ ≤ ‖u^n‖, if τ‖L‖ ≤ c0.

Proof. Let u^{n+2} = P4(ℒ)u^{n+1}. Then

    ‖u^{n+2}‖² − ‖u^{n+1}‖² = Q1(u^{n+1}).

Substitute (2.2) into Q1(u^{n+1}) and rewrite the quadratic form in terms of u^n. By direct calculation, one can obtain

    Q1(u^{n+1}) = (1/576)‖ℒ⁴u^{n+1}‖² − (1/72)‖ℒ³u^{n+1}‖² + τ Σ_{i,j=0}^{7} α̃ij [ℒⁱu^n, ℒʲu^n],

where

    Ã2 = (α̃ij)_{3×3} = − [ 1    3/2   7/6   ]
                          [ 3/2  7/3   15/8  ]
                          [ 7/6  15/8  37/24 ].

Meanwhile,

    Q1(u^n) = (1/576)‖ℒ⁴u^n‖² − (1/72)‖ℒ³u^n‖² + τ Σ_{i,j=0}^{3} α̂ij [ℒⁱu^n, ℒʲu^n],

where

    Â2 = (α̂ij)_{3×3} = − [ 1    1/2  1/6  ]
                          [ 1/2  1/3  1/8  ]
                          [ 1/6  1/8  1/24 ].

When ‖ℒ‖ ≤ 2,

    (1/576)‖ℒ⁴u^{n+1}‖² − (1/72)‖ℒ³u^{n+1}‖² ≤ 0,
    (1/576)‖ℒ⁴u^n‖² − (1/72)‖ℒ³u^n‖² ≤ −(1/144)‖ℒ³u^n‖².

Hence

    ‖u^{n+2}‖² − ‖u^n‖² = Q1(u^{n+1}) + Q1(u^n) ≤ −(1/144)‖ℒ³u^n‖² + τ Σ_{i,j=0}^{7} αij [ℒⁱu^n, ℒʲu^n],

where

    A2 = (αij)_{3×3} = Ã2 + Â2 = − [ 2    2    4/3   ]
                                    [ 2    8/3  2     ]
                                    [ 4/3  2    19/12 ].

The three eigenvalues of A2 are −5.73797, −0.499093 and −0.0129329. Hence A2 is negative definite, and one can complete the proof by applying Lemma 2.4.
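Both the negative definiteness of A2 = Ã2 + Â2 and the resulting two-step contractivity are easy to confirm numerically. The sketch below (our own check, not part of the proof) also applies the theorem to the counterexample matrix of Proposition 1.1 at a small time step:

```python
import numpy as np

A2 = -np.array([[2,   2,   4/3  ],
                [2,   8/3, 2    ],
                [4/3, 2,   19/12]])
print(np.linalg.eigvalsh(A2))       # all negative: A2 is negative definite

def p4(A):
    """RK4 stability polynomial I + A + A^2/2 + A^3/6 + A^4/24."""
    P, term = np.eye(len(A)), np.eye(len(A))
    for k in range(1, 5):
        term = term @ A / k
        P = P + term
    return P

# counterexample of Proposition 1.1: one step may amplify, two steps contract
LN = -np.array([[1.0, 2, 2], [0, 1, 2], [0, 0, 1]])
tau = 0.01
print(np.linalg.norm(p4(tau * LN), 2))                 # can exceed 1
print(np.linalg.norm(p4(tau * LN) @ p4(tau * LN), 2))  # bounded by 1 for small tau
```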
Remark 2.1. There is another interpretation of the two-step strong stability. We treat u^{n+1} as an intermediate stage and consider u^{n+2} as the solution of an eight-stage fourth order RK scheme composed of two four-stage fourth order RK time steps. Then Theorem 2.2 states that this new RK scheme is strongly stable. This coincides with Remark 3 of Section 4 in [14], where the author conjectured that the fourth order RK scheme can preserve strong stability if additional stages are allowed.
Remark 2.2. Here we only focus on the existence of the constant c0 instead of giving a specific estimate. The argument above is far from sharp. To obtain a reasonable bound for c0, one should expand (1/576)‖ℒ⁴u^{n+1}‖² − (1/72)‖ℒ³u^{n+1}‖² to separate the quadratic form involving [·, ·], and carefully go through all the constants. To this end, a generalized version of Lemma 2.1 is also needed. We leave this to interested readers.
Realizing that, for a normal operator G, ‖G²‖ ≤ 1 implies ‖G‖ ≤ 1, we prove the one-step strong stability for normal and semi-negative L.

Corollary 2.1 (Strong stability for normal and semi-negative L). Suppose LL⊤ = L⊤L and L + L⊤ ≤ 0. Then the fourth order RK approximation (2.2) to the method of lines scheme (2.1) is strongly stable, ‖u^{n+1}‖ ≤ ‖u^n‖, if τ‖L‖ ≤ c0 for some constant c0.

Proof. According to Theorem 2.2, we have ‖P4(ℒ)²‖ ≤ 1 if ‖ℒ‖ ≤ c0. When L is normal, P4(ℒ) is also normal, P4(ℒ)P4(ℒ)⊤ = P4(ℒ)⊤P4(ℒ). Hence for any u ∈ V,

    ‖P4(ℒ)u‖² = (P4(ℒ)u, P4(ℒ)u) = (u, P4(ℒ)⊤P4(ℒ)u) = (u, P4(ℒ)P4(ℒ)⊤u) = ‖P4(ℒ)⊤u‖².

Therefore, under the same constraint ‖ℒ‖ ≤ c0, one has

    ‖P4(ℒ)‖² = sup_{‖u‖=1} ‖P4(ℒ)u‖² = sup_{‖u‖=1} (u, P4(ℒ)⊤P4(ℒ)u)
        ≤ sup_{‖u‖=1} ‖P4(ℒ)⊤P4(ℒ)u‖ = sup_{‖u‖=1} ‖P4(ℒ)²u‖ = ‖P4(ℒ)²‖ ≤ 1.

In other words, ‖u^{n+1}‖ ≤ ‖u^n‖ if ‖ℒ‖ ≤ c0.

Remark 2.3. By using scalar eigenvalue analysis, the sharp bound of c0 in Corollary 2.1 is 2√2, as in Theorem 2.1.
2.4 Stability for semi-bounded L

We now apply a perturbation analysis to obtain stability for semi-bounded L. In the following proof, one actually needs to assume the time steps satisfy τ_{2k} = τ_{2k+1} in order to apply Theorem 2.2. For simplicity, we use uniform time stepping, namely τk = τ. The same assumption will be used in Section 2.5.
Theorem 2.3. Suppose L + L⊤ ≤ 2µI. Then the fourth order RK approximation (2.2) to the method of lines scheme (2.1) is stable, ‖u^n‖ ≤ K(t^n)‖u^0‖, if τ ≤ c0/(‖L‖ + µ) for some constant c0. Here K(t^n) is a constant depending on µ and t^n.

Proof. Note that

    P4(τL)² = P4(τ(L − µI) + τµI)² = P4(τ(L − µI))² + τµ P(τµ, τ(L − µI)),

where P(s, G) is of the form P(s, G) = Σ_{i,j≥0, i+j≤7} αij sⁱGʲ. Since L + L⊤ ≤ 2µI, (L − µI) + (L − µI)⊤ ≤ 0. According to Theorem 2.2, ‖P4(τ(L − µI))²‖ ≤ 1 if τ‖L − µI‖ ≤ c0. Under the stricter restriction τ ≤ c0/(‖L‖ + µ), one also has

    ‖P(τµ, τ(L − µI))‖ ≤ Σ_{i,j≥0, i+j≤7} |αij| |τµ|ⁱ ‖τ(L − µI)‖ʲ ≤ Σ_{i,j≥0, i+j≤7} |αij| c0^{i+j}.

Therefore for τ ≤ c0/(‖L‖ + µ), we have ‖P4(τL)²‖ ≤ 1 + cµτ, where c = Σ_{i,j≥0, i+j≤7} |αij| c0^{i+j}. With this one-step estimate, one has

    ‖u^{2k}‖ ≤ ‖P4(τL)²‖^k ‖u^0‖ ≤ (1 + cµτ)^{t_{2k}/(2τ)} ‖u^0‖ ≤ K(t_{2k})‖u^0‖,

and

    ‖u^{2k+1}‖ ≤ ‖P4(τL)‖ ‖u^{2k}‖ ≤ P4(c0) K(t_{2k}) ‖u^0‖,

which proves the stability.
2.5 Stability for semi-bounded and time-dependent L

We conclude this section by extending the results to L dependent on time. One should note that in this case, the fourth order RK scheme can no longer be written as the truncated exponential in (2.2). Also, different fourth order RK time integrators are no longer equivalent. We use the classical four-stage fourth order RK scheme (2.15) as an example for our stability analysis. However, our proof does not rely on this specific form and can be used for general cases. The classical four-stage fourth order RK scheme is
    k1 = L(t^n)u^n,
    k2 = L(t^{n+1/2})(u^n + (τ/2)k1),
    k3 = L(t^{n+1/2})(u^n + (τ/2)k2),                    (2.15)
    k4 = L(t^{n+1})(u^n + τk3),
    u^{n+1} = u^n + (τ/6)(k1 + 2k2 + 2k3 + k4),

where t^{n+1/2} = t^n + τ/2. As a shorthand notation, let us denote

    u^{n+1} = Rτ(t^n)u^n,

where

    Rτ(t^n) = I + (τ/6)(L(t^n) + 4L(t^{n+1/2}) + L(t^{n+1}))
        + (τ²/6)(L(t^{n+1/2})L(t^n) + L(t^{n+1/2})² + L(t^{n+1})L(t^{n+1/2}))
        + (τ³/12)(L(t^{n+1/2})²L(t^n) + L(t^{n+1})L(t^{n+1/2})²)
        + (τ⁴/24)L(t^{n+1})L(t^{n+1/2})²L(t^n).          (2.16)

The two-step approximation can be written as u^{n+2} = Rτ(t^{n+1})Rτ(t^n)u^n.
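Formula (2.16) can be cross-checked against the stage form (2.15) by applying both to arbitrary matrices playing the roles of L(t^n), L(t^{n+1/2}) and L(t^{n+1}); the sketch below (our own check) confirms the two produce the same one-step operator:

```python
import numpy as np

rng = np.random.default_rng(3)
n, tau = 4, 0.3
# stand-ins for L(t^n), L(t^{n+1/2}), L(t^{n+1})
L0, Lh, L1 = (rng.standard_normal((n, n)) for _ in range(3))

# stage form (2.15), applied to the identity to extract the full operator
U = np.eye(n)
k1 = L0 @ U
k2 = Lh @ (U + tau / 2 * k1)
k3 = Lh @ (U + tau / 2 * k2)
k4 = L1 @ (U + tau * k3)
R_stages = U + tau / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# closed form (2.16)
R_formula = (np.eye(n)
    + tau / 6 * (L0 + 4 * Lh + L1)
    + tau**2 / 6 * (Lh @ L0 + Lh @ Lh + L1 @ Lh)
    + tau**3 / 12 * (Lh @ Lh @ L0 + L1 @ Lh @ Lh)
    + tau**4 / 24 * (L1 @ Lh @ Lh @ L0))

print(np.max(np.abs(R_stages - R_formula)))   # zero up to round-off
```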
We also assume that L satisfies the Lipschitz continuity condition

    ‖L(t2) − L(t1)‖ ≤ η|t2 − t1|,                       (2.17)

with sup_t ‖L(t)‖ ≤ η < +∞ for some constant η.

Lemma 2.5. Suppose L satisfies (2.17). Then

    ‖∏_{i=1}^{m} L(t + αᵢτ) − (L(t))^m‖ ≤ (Σ_{i=1}^{m} |αᵢ|) η^m τ.
Proof. We prove by induction. According to assumption (2.17), the lemma holds for m = 1. Suppose it holds for m; then

    ‖∏_{i=1}^{m+1} L(t + αᵢτ) − (L(t))^{m+1}‖
        ≤ ‖∏_{i=1}^{m+1} L(t + αᵢτ) − L(t)∏_{i=1}^{m} L(t + αᵢτ)‖ + ‖L(t)∏_{i=1}^{m} L(t + αᵢτ) − (L(t))^{m+1}‖
        ≤ ‖L(t + α_{m+1}τ) − L(t)‖ ∏_{i=1}^{m} ‖L(t + αᵢτ)‖ + ‖L(t)‖ ‖∏_{i=1}^{m} L(t + αᵢτ) − (L(t))^m‖
        ≤ |α_{m+1}|ητ · η^m + η · (Σ_{i=1}^{m} |αᵢ|) η^m τ
        = (Σ_{i=1}^{m+1} |αᵢ|) η^{m+1} τ,

where we have used the inductive assumption and η ≥ sup_t ‖L(t)‖ in the second to last line. Hence the lemma holds for any positive integer m.
Theorem 2.4. Suppose L = L(t), L + L⊤ ≤ 2µI for some positive number µ, and L satisfies the Lipschitz continuity condition (2.17). Then the four-stage fourth order RK approximation (2.15) to the method of lines scheme (2.1) is stable, ‖u^n‖ ≤ K(t^n)‖u^0‖, if τ ≤ c0/(η + µ). Here K(t^n) is some constant depending on µ and t^n.

Proof. Expanding the square of the truncated exponential,

    P4(τL)² = I + 2τL + 2(τL)² + (4/3)(τL)³ + (2/3)(τL)⁴
        + (1/4)(τL)⁵ + (5/72)(τL)⁶ + (1/72)(τL)⁷ + (1/576)(τL)⁸,      (2.18)

while

    Rτ(t^{n+1})Rτ(t^n) = Σ_{i=0}^{8} Σ_j αij ∏_{k=1}^{i} (τL(t^n + α̃ijk τ)),      (2.19)

with {Σ_j αij}_{i=0}^{8} taking the values 1, 2, 2, 4/3, 2/3, 1/4, 5/72, 1/72 and 1/576, as in P4(τL)². (This must be true since when L is independent of time, (2.19) reduces to (2.18).) Applying Lemma 2.5, one has

    ‖Rτ(t^{n+1})Rτ(t^n) − P4(τL)²‖ = ‖Σ_{i=0}^{8} Σ_j αij τⁱ (∏_{k=1}^{i} L(t^n + α̃ijk τ) − (L(t^n))ⁱ)‖
        ≤ Σ_{i=0}^{8} Σ_j |αij| τⁱ ‖∏_{k=1}^{i} L(t^n + α̃ijk τ) − (L(t^n))ⁱ‖
        ≤ τ Σ_{i=0}^{8} Σ_j |αij| (Σ_{k=1}^{i} |α̃ijk|)(τη)ⁱ = c̃τ,

where c̃ is uniformly bounded since τη ≤ c0. On the other hand, ‖P4(τL)²‖ ≤ 1 + cµτ if τ(η + µ) ≤ c0, hence

    ‖Rτ(t^{n+1})Rτ(t^n)‖ ≤ ‖P4(τL)²‖ + ‖Rτ(t^{n+1})Rτ(t^n) − P4(τL)²‖ ≤ 1 + (cµ + c̃)τ.

As we have done in Theorem 2.3, one can then prove ‖u^n‖ ≤ K(t^n)‖u^0‖ if τ ≤ c0/(η + µ).
Remark 2.4. For different four-stage fourth order RK schemes, the values of αij and α̃ijk are different, but the sums Σ_j αij are always the same. Hence the proof above still works.
3 Applications to hyperbolic problems

In this section, we apply the previous results to several semi-discrete approximations of hyperbolic problems, and justify their stability after coupling with the fourth order RK time integrator. The time step restriction is also referred to as the CFL condition in this context. In [14], Tadmor has provided detailed examples for spatial discretizations based on the nodal-value formulation, including the finite difference method and the spectral collocation method. Complementary to [14], we will mainly focus on Galerkin methods, including a global approach, the spectral Galerkin method, and a local approach, the finite element discontinuous Galerkin method.
The Galerkin methods are spatial discretization techniques based on the weak formulation. Consider the initial value problem

    ∂t u(x, t) = L(x, t, ∂x)u(x, t),    (x, t) ∈ Ω × (0, T),
    u(x, 0) = u0(x),                    x ∈ Ω.

The Galerkin methods seek a solution of the form uN(x, t) = Σ_{k=0}^{N} ck(t)φk(x). Here {φk(x)}_{k=0}^{N} forms a basis of a finite dimensional space V on Ω, equipped with the inner product (·, ·). In addition, one requires that

    (∂t uN, vN) = B(uN, vN),    ∀vN ∈ V,
    uN(x, 0) = P u0.

Here B is a bilinear form on V with B(uN, vN) approximating (LuN, vN), and P is the projection onto V.
For the spectral Galerkin method, V is chosen as the span of trigonometric functions or polynomials over the whole domain. Since the functions in V are sufficiently smooth, one can directly set B(uN, vN) = (LuN, vN). Hence the method can be written as

    ∂t uN = LN uN = P L uN = P L P uN.

It suffices to check whether LN = P L P satisfies the conditions in Section 2 and to examine ‖LN‖ to obtain the time step constraint.

For the discontinuous Galerkin method, V is a piecewise polynomial space based on an appropriate partition of the domain Ω. The associated bilinear form involves the so-called numerical flux. We will explain it in the later part of the section.
3.1 Spectral Galerkin method

The following examples are based on Example 3.8 and Examples 8.3 and 8.4 in [7]. The estimate of ‖LN‖ relies on inverse inequalities on the finite dimensional trigonometric or polynomial spaces, for which we refer to [11] for details. For a systematic setup of spectral methods, we also refer interested readers to the books [7] and [11].
3.1.1 Fourier method

Consider the linear advection equation

    ut = a(x, t)ux,    t ∈ (0, T), x ∈ (0, 2π),        (3.1)
with periodic boundary conditions. We assume that a(x, t) is a smooth function and is periodic with respect to x.

Under this circumstance, L = a(x, t)∂x. One has

    (v, (L + L⊤)v) = (a∂x v, v) + (v, a∂x v) = −(v, ∂x(av)) + (v, a∂x v) = −(v, ax v) ≤ sup_{x,t} |ax| ‖v‖²,

where (·, ·) is specified as the L² inner product on [0, 2π]. Hence L + L⊤ ≤ sup_{x,t} |ax| I.

In the Fourier Galerkin method, we set V = span{sin(kx), cos(kx)}_{k=0}^{N}, and choose (·, ·) to be the L² inner product on [0, 2π] as well. Note that P is self-adjoint; hence

    LN + LN⊤ = P L P + (P L P)⊤ = P(L + L⊤)P ≤ sup_{x,t} |ax| I.

The semi-boundedness holds.

On the other hand, by using the inverse inequality for trigonometric functions, ‖φ′‖ ≤ N‖φ‖ for all φ ∈ V, and the fact ‖P‖ ≤ 1, one has

    ‖LN‖ ≤ ‖a(x, t)∂x‖ ≤ sup_{x,t} |a| N,

and

    ‖LN(t2) − LN(t1)‖ ≤ ‖(a(·, t2) − a(·, t1))∂x‖ ≤ sup_{x,t} |at| N |t2 − t1|.

Hence the Lipschitz condition holds with η = N max{sup_{x,t} |a|, sup_{x,t} |at|}. We obtain the following corollary of Theorem 2.4.
Corollary 3.1. Consider the Fourier Galerkin approximation of the advection equation (3.1) with the fourth order RK time discretization,

    uN^{n+1} = Rτ(t^n)uN^n,    Rτ(t^n) defined in (2.16), with LN = P(a(x, t)∂x)P.

This fully-discrete scheme is stable,

    ‖uN^n‖ ≤ K(t^n)‖uN^0‖,

under the CFL condition τ(N max{sup_{x,t} |a|, sup_{x,t} |at|} + (1/2) sup_{x,t} |ax|) ≤ c0, for some constant c0.
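The two bounds entering this corollary can be checked numerically. The sketch below is our own construction: a(x) = 2 + sin x is an arbitrary sample coefficient (frozen in time), and the matrix of LN = P(a∂x)P is assembled in an orthonormal real Fourier basis by periodic trapezoidal quadrature, which is exact here. It confirms LN + LN⊤ ≤ sup|ax| I = I and ‖LN‖ ≤ N sup|a| = 3N:

```python
import numpy as np

N, Mq = 8, 512                      # modes and quadrature points
x = 2 * np.pi * np.arange(Mq) / Mq
wq = 2 * np.pi / Mq                 # periodic trapezoid weight
a = 2 + np.sin(x)                   # sample coefficient: sup|a| = 3, sup|a_x| = 1

# orthonormal real Fourier basis on [0, 2*pi] and its derivatives
B, dB = [np.ones(Mq) / np.sqrt(2 * np.pi)], [np.zeros(Mq)]
for k in range(1, N + 1):
    B  += [np.cos(k * x) / np.sqrt(np.pi), np.sin(k * x) / np.sqrt(np.pi)]
    dB += [-k * np.sin(k * x) / np.sqrt(np.pi), k * np.cos(k * x) / np.sqrt(np.pi)]
B, dB = np.array(B), np.array(dB)

LN = wq * B @ (a * dB).T            # (LN)_{jk} = (phi_j, a phi_k')

print(np.linalg.eigvalsh(LN + LN.T).max())   # <= sup|a_x| = 1
print(np.linalg.norm(LN, 2))                 # <= N * sup|a| = 24
```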
3.1.2 Polynomial method

Consider the linear advection equation

    ∂t u = ∂x u,    t ∈ (0, T), x ∈ (−1, 1),           (3.2)

with the inflow boundary condition u(1, t) = 0.

In the polynomial Galerkin method, we set V = {φ(x) ∈ span{x^k}_{k=0}^{N} : φ(1) = 0} and take (·, ·)w to be the weighted L² inner product, namely (f, g)w = ∫_{−1}^{1} w(x)f(x)g(x)dx. The corresponding norm is denoted by ‖·‖w. We denote by Pw the projection onto V under (·, ·)w.

Different weight functions w(x) correspond to different polynomial methods in the literature. Let us consider the Jacobi method as an example. Here w(x) = (1 + x)^α (1 − x)^β with α ≥ 0 and −1 < β ≤ 0. When α = β = 0, w(x) = 1, and the method is also referred to as the Legendre method.

It can be shown that LN is semi-negative in this setting. For any uN ∈ V, one has

    (uN, (LN + LN⊤)uN)w = (Pw uN, L Pw uN)w + (L Pw uN, Pw uN)w = 2(uN, LuN)w
        = 2∫_{−1}^{1} w uN ∂x uN dx = ∫_{−1}^{1} w ∂x(uN²) dx
        = −∫_{−1}^{1} w′ uN² dx − w(−1)uN(−1, t)² ≤ −∫_{−1}^{1} w′ uN² dx.

For α > 0 and β < 0,

    w′(x) = α(1 + x)^{α−1}(1 − x)^β − β(1 + x)^α (1 − x)^{β−1}
          = (1 + x)^{α−1}(1 − x)^{β−1}(α(1 − x) − β(1 + x)) ≥ 0.

Similarly, one can prove w′(x) ≥ 0 in the other cases. Hence LN + LN⊤ ≤ 0.

Furthermore, we apply the inverse inequality for Jacobi polynomials to obtain ‖LN‖w ≤ CN² for some constant C. (Refer to Theorem 3.34 in [11].) Therefore, by applying Theorem 2.2, one has the following corollary.
Corollary 3.2. Consider the Jacobi Galerkin approximation of the advection equation (3.2) with the fourth order RK time discretization,

    uN^{n+1} = P4(τLN)uN^n,    LN = Pw(∂x)Pw.

This fully-discrete scheme is strongly stable in two steps,

    ‖uN^{n+2}‖w ≤ ‖uN^n‖w,

under the CFL condition τ ≤ CN^{−2}, where C depends on the constant in the inverse inequality.
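For the Legendre case (w = 1), the semi-negativity can be seen very concretely. In the basis ψk = Pk − Pk+1 (which vanishes at x = 1; this basis choice is our own, one of several possibilities), the stiffness matrix S with Sij = (ψi, ψj′) satisfies S + S⊤ = −ψ(−1)ψ(−1)⊤, a rank-one negative semi-definite matrix. The sketch below verifies this identity and reports ‖LN‖w, which grows like N²:

```python
import numpy as np
from numpy.polynomial import legendre as leg

N = 12                                        # polynomial degree
xq, wq = leg.leggauss(2 * N + 2)              # Gauss quadrature, exact for the products below

def psi(k, x, deriv=0):
    c = np.zeros(k + 2)
    c[k], c[k + 1] = 1.0, -1.0                # psi_k = P_k - P_{k+1}, so psi_k(1) = 0
    if deriv:
        c = leg.legder(c, deriv)
    return leg.legval(x, c)

Psi  = np.array([psi(k, xq) for k in range(N)])
dPsi = np.array([psi(k, xq, 1) for k in range(N)])

M = (Psi * wq) @ Psi.T                        # mass matrix (psi_i, psi_j)
S = (Psi * wq) @ dPsi.T                       # stiffness  (psi_i, psi_j')
Lmat = np.linalg.solve(M, S)                  # matrix of L_N = P_w (d/dx) P_w in this basis

# S + S^T = [psi_i psi_j]_{-1}^{1} = -psi(-1) psi(-1)^T  (rank one, semi-negative)
boundary = np.array([psi(k, -1.0) for k in range(N)])
print(np.max(np.abs(S + S.T + np.outer(boundary, boundary))))

# operator norm of L_N in the L2 inner product, via the mass matrix
E, V = np.linalg.eigh(M)
Mh = V @ np.diag(np.sqrt(E)) @ V.T
print(np.linalg.norm(Mh @ Lmat @ np.linalg.inv(Mh), 2))   # grows like N^2
```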
3.2 Discontinuous Galerkin method

A detailed discussion of the example in Section 3.2.1 can be found in [13]. For a systematic setup of the discontinuous Galerkin method for solving conservation laws, one can refer to the series of papers [4], [3], [2], [1] and [5] by Cockburn et al. To be consistent with the past literature, we switch the subscript N to h in this section, where h corresponds to the mesh size. Also, we will use bold fonts for vectors and matrices in this section.
3.2.1
Multi-dimensional scalar equation
Consider the linear scalar conservation law
\[
\partial_t u(\boldsymbol{x}, t) = \nabla \cdot (\boldsymbol{\beta}(\boldsymbol{x}) u(\boldsymbol{x}, t)), \qquad \boldsymbol{x} \in \Omega \subset \mathbb{R}^d, \quad t \in (0, T), \tag{3.3}
\]
with the periodic boundary conditions. Here $\boldsymbol{\beta}$ is a smooth function satisfying the divergence-free condition $\nabla \cdot \boldsymbol{\beta}(\boldsymbol{x}) = 0$.
Suppose K = {K} is a quasi-uniform partition of the domain Ω. We denote by h the
largest diameter of the elements. The collection of cell interfaces is denoted by E. Let us
define V = {v ∈ L2 (Ω) : v|K ∈ P p (K), ∀K ∈ K}, where P p (K) is the space of polynomials
of degree no more than p on K. As for (·, ·), we use the L2 inner product on Ω.
In the discontinuous Galerkin method, one seeks a solution satisfying
\[
\int_\Omega \partial_t u_h v_h \,dx = \sum_K \left( -\int_K u_h \nabla \cdot (\boldsymbol{\beta} v_h) \,dx + \int_{\partial K} \widehat{u}_h v_h \boldsymbol{\beta} \cdot \boldsymbol{n} \,dl \right), \qquad \forall v_h \in V,
\]
where $\widehat{u}_h$ is the numerical flux, approximating the trace of $u$ along the edges. A popular choice is to obey the upwinding principle. In our case, that is $\widehat{u}_h = u_h^+$, where $u_h^+ = \lim_{\varepsilon \to 0^+} u_h(\boldsymbol{x} + \varepsilon \boldsymbol{\beta})$. We also use $u_h^- = \lim_{\varepsilon \to 0^+} u_h(\boldsymbol{x} - \varepsilon \boldsymbol{\beta})$ to represent the trace of $u_h$ from the downwind side. When $\boldsymbol{\beta}(\boldsymbol{x})$ is parallel to the cell interfaces, $u_h^\pm$ are not well-defined. But in this case, $\boldsymbol{\beta}(\boldsymbol{x}) \cdot \boldsymbol{n} = 0$, hence the value of $u_h^\pm$ will not make any difference.
Let us introduce the shorthand notation
\[
H_\beta^+(w, v) = \sum_{K \in \mathcal{K}} \left( \int_K w \nabla \cdot (\boldsymbol{\beta} v) \,dx - \int_{\partial K} w^+ v \boldsymbol{\beta} \cdot \boldsymbol{n} \,dl \right).
\]
Then the scheme becomes: find $u_h \in V$, such that
\[
(\partial_t u_h, v_h) = -H_\beta^+(u_h, v_h), \qquad \forall v_h \in V. \tag{3.4}
\]
$H_\beta^+$ has the following properties. We refer to [13] for details.

Lemma 3.1. For any $w_h, v_h \in V$, we have

(1) $H_\beta^+(v_h, v_h) = \frac{1}{2} [\![v_h]\!]_\beta^2$, where $[\![v_h]\!]_\beta = \sqrt{\sum_{e \in E} \int_e (v_h^+ - v_h^-)^2 |\boldsymbol{\beta} \cdot \boldsymbol{n}| \,dl}$.

(2) $|H_\beta^+(w_h, v_h)| \le C h^{-1} \|w_h\| \|v_h\|$, for some constant $C$ depending on $\boldsymbol{\beta}$ and the constant in the inverse estimate.
By defining
\[
(L_h u_h, v_h) = -H_\beta^+(u_h, v_h), \tag{3.5}
\]
we can also rewrite the scheme as $\partial_t u_h = L_h u_h$. Using Lemma 3.1, one has
\[
(v_h, (L_h + L_h^\top)v_h) = 2(L_h v_h, v_h) = -2H_\beta^+(v_h, v_h) = -[\![v_h]\!]_\beta^2 \le 0, \tag{3.6}
\]
\[
\|L_h v_h\|^2 = -H_\beta^+(v_h, L_h v_h) \le C h^{-1} \|v_h\| \|L_h v_h\|. \tag{3.7}
\]
Hence $L_h$ is semi-negative and $\|L_h\| \le C h^{-1}$. The stability of the fully discretized scheme now follows as a corollary of Theorem 2.2.
Corollary 3.3. Consider the discontinuous Galerkin approximation of (3.3) with the fourth-order RK time discretization,
\[
u_h^{n+1} = P_4(\tau L_h) u_h^n, \qquad L_h \text{ defined in (3.4) and (3.5)}.
\]
This fully-discrete scheme is strongly stable in two steps,
\[
\|u_h^{n+2}\| \le \|u_h^n\|,
\]
under the CFL condition $\tau \le C h$, where $C$ depends on $\boldsymbol{\beta}$ and the constant in the inverse estimate.
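For intuition, the conclusion of the corollary can be checked numerically in the simplest setting: with $p = 0$ on a uniform periodic 1D mesh for $\partial_t u = \partial_x u$, the DG scheme reduces to the first-order upwind scheme. The mesh size and CFL number below are illustrative assumptions; note that this particular $L_h$ is circulant, hence normal, so the sketch only verifies the conclusion and does not probe the delicate non-normal case:

```python
import numpy as np

# Simplest instance: p = 0 DG for u_t = u_x on a uniform periodic 1D mesh
# reduces to first-order upwinding, (L_h u)_j = (u_{j+1} - u_j) / h.
m = 64
h = 1.0 / m
L = (np.eye(m, k=1) + np.eye(m, k=1 - m) - np.eye(m)) / h  # periodic upwind operator

# Semi-negativity, as in (3.6): the symmetric part of L_h is negative semi-definite.
assert np.linalg.eigvalsh(L + L.T).max() <= 1e-10
# Norm bound, as in (3.7): here ||L_h|| <= 2/h.
assert np.linalg.norm(L, 2) <= 2.0 / h + 1e-6

def P4(A):
    """Stability polynomial of the classical four-stage fourth order RK method."""
    I = np.eye(A.shape[0])
    return I + A + A @ A / 2 + A @ A @ A / 6 + A @ A @ A @ A / 24

tau = 0.5 * h                    # CFL-type restriction tau <= C h (C = 1/2 here)
G = P4(tau * L)
print(np.linalg.norm(G @ G, 2))  # two-step amplification factor, bounded by 1
```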
3.2.2 Multi-dimensional system
We now consider the symmetric hyperbolic system
\[
\partial_t \boldsymbol{u}(\boldsymbol{x}, t) = \sum_{i=1}^d A_i \partial_{x_i} \boldsymbol{u}(\boldsymbol{x}, t), \qquad \boldsymbol{x} = (x_1, \ldots, x_d) \in \Omega \subset \mathbb{R}^d, \quad \boldsymbol{u} = (u_1, \ldots, u_m)^T \in \mathbb{R}^m, \tag{3.8}
\]
where $A_i$ are $m \times m$ constant real symmetric matrices. For simplicity, we assume $\Omega$ to be a hypercube in $\mathbb{R}^d$ and apply the periodic boundary conditions.
Again, we denote by $\mathcal{K} = \{K\}$ a quasi-uniform partition of the domain $\Omega$ with the mesh size $h$, and $E$ is the collection of the cell interfaces. The space $V$ is chosen as
\[
V = \{\boldsymbol{v} \in [L^2(\Omega)]^m : \boldsymbol{v} = (v^1, \ldots, v^m)^T, \ v^k|_K \in P^p(K), \ \forall K \in \mathcal{K}\},
\]
with the inner product $(\boldsymbol{w}, \boldsymbol{v}) = \sum_{K \in \mathcal{K}} (\boldsymbol{w}, \boldsymbol{v})_K = \sum_{K \in \mathcal{K}} \int_K \boldsymbol{w} \cdot \boldsymbol{v} \,dx$ and the induced norm $\|\cdot\| = \sqrt{(\cdot,\cdot)}$. We also use $\langle \boldsymbol{w}, \boldsymbol{v} \rangle_e = \int_e \boldsymbol{w} \cdot \boldsymbol{v} \,dl$ for the integration along the cell interfaces.
To set up the discontinuous Galerkin approximation, we first write down the weak formulation of (3.8),
\[
(\partial_t \boldsymbol{u}, \boldsymbol{v})_K = -\sum_{i=1}^d (A_i \boldsymbol{u}, \partial_{x_i} \boldsymbol{v})_K + \sum_{e \in \partial K} \Bigl\langle \Bigl(\sum_{i=1}^d n_{e,K}^i A_i\Bigr) \boldsymbol{u}, \boldsymbol{v} \Bigr\rangle_e, \tag{3.9}
\]
where $\boldsymbol{n}_{e,K} = (n_{e,K}^1, \ldots, n_{e,K}^d)$ is the outward unit normal vector to the edge $e$ in $K$. Since $A_i$ are symmetric, for each $e$ there is an orthogonal matrix $S_e$ such that $\Lambda_e = \mathrm{diag}(\lambda_e^1, \ldots, \lambda_e^m) = S_e \bigl(\sum_{i=1}^d n_{e,K}^i A_i\bigr) S_e^T$ is diagonal. The numerical scheme can be obtained by applying the upwind flux for each eigen-component, namely
\[
(\partial_t \boldsymbol{u}_h, \boldsymbol{v}_h)_K = -\sum_{i=1}^d (A_i \boldsymbol{u}_h, \partial_{x_i} \boldsymbol{v}_h)_K + \sum_{e \in \partial K} \langle \Lambda_e \widehat{S_e \boldsymbol{u}_h}, S_e \boldsymbol{v}_h \rangle_e, \tag{3.10}
\]
where $\widehat{S_e \boldsymbol{u}_h} = ((\widehat{S_e \boldsymbol{u}_h})^1, \ldots, (\widehat{S_e \boldsymbol{u}_h})^m)^T$ and
\[
(\widehat{S_e \boldsymbol{u}_h})^k = \begin{cases} (S_e \boldsymbol{u}_h^{\mathrm{int}})^k, & \lambda_e^k < 0, \\ (S_e \boldsymbol{u}_h^{\mathrm{ext}})^k, & \lambda_e^k > 0. \end{cases} \tag{3.11}
\]
Here, $\boldsymbol{u}_h^{\mathrm{int}}(\boldsymbol{x}, t) = \lim_{\boldsymbol{y} \to \boldsymbol{x}, \boldsymbol{y} \in K} \boldsymbol{u}_h(\boldsymbol{y}, t)$ and $\boldsymbol{u}_h^{\mathrm{ext}}(\boldsymbol{x}, t) = \lim_{\boldsymbol{y} \to \boldsymbol{x}, \boldsymbol{y} \in K^c} \boldsymbol{u}_h(\boldsymbol{y}, t)$. When $\lambda_e^k = 0$, $(\widehat{S_e \boldsymbol{u}_h})^k$ will not contribute to the integration, hence one can avoid defining it in this case. Also note that $\sum_{i=1}^d n_{e,K}^i A_i = -\sum_{i=1}^d n_{e,K'}^i A_i$ for $K \cap K' = e$, and the same $S_e$ should be used on both sides. Then $\widehat{S_e \boldsymbol{u}_h}$ defined in $K$ and $K'$ are the same, and the numerical flux is single-valued on the cell interfaces. Furthermore, for the scalar case, (3.11) coincides with the previous definition in Section 3.2.1.
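To make (3.11) concrete, the following sketch diagonalizes a sample one-dimensional symmetric system (the matrix $A$ and the trace values are hypothetical, chosen only for illustration) at a face with outward normal $n = +1$, and selects the upwind trace per eigen-component:

```python
import numpy as np

# One face with outward normal n = +1 for the 1D system u_t = A u_x,
# with A symmetric.  Diagonalize n*A as Lambda_e = S_e (n*A) S_e^T.
A = np.array([[0.0, 1.0], [1.0, 0.0]])   # hypothetical symmetric flux matrix
lam, V = np.linalg.eigh(A)               # A = V diag(lam) V^T, lam ascending
S = V.T                                  # so S A S^T = diag(lam)
assert np.allclose(S @ A @ S.T, np.diag(lam))

# Hypothetical interior/exterior traces of u_h at the face.
u_int = np.array([1.0, 2.0])
u_ext = np.array([3.0, -1.0])

# Upwind selection (3.11): take (S u_int)^k where lam_k < 0,
# and (S u_ext)^k where lam_k > 0.
su_hat = np.where(lam < 0, S @ u_int, S @ u_ext)
print(su_hat)
```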
As before, we define
\[
H(\boldsymbol{w}_h, \boldsymbol{v}_h) = \sum_{K \in \mathcal{K}} H_K(\boldsymbol{w}_h, \boldsymbol{v}_h) = \sum_{K \in \mathcal{K}} \left( \sum_{i=1}^d (A_i \boldsymbol{w}_h, \partial_{x_i} \boldsymbol{v}_h)_K - \sum_{e \in \partial K} \langle \Lambda_e \widehat{S_e \boldsymbol{w}_h}, S_e \boldsymbol{v}_h \rangle_e \right). \tag{3.12}
\]
$H$ has the following properties, and its proof can be found in Appendix B.

Lemma 3.2. $\forall \boldsymbol{v}_h, \boldsymbol{w}_h \in V$, $H(\boldsymbol{v}_h, \boldsymbol{v}_h) \ge 0$ and $|H(\boldsymbol{w}_h, \boldsymbol{v}_h)| \le C h^{-1} \|\boldsymbol{w}_h\| \|\boldsymbol{v}_h\|$, where $C$ depends on $A_i$ and the constant in the inverse estimate.
With the definition of $H$, the scheme (3.10) can be written as
\[
\partial_t \boldsymbol{u}_h = L_h \boldsymbol{u}_h, \qquad (L_h \boldsymbol{u}_h, \boldsymbol{v}_h) = -H(\boldsymbol{u}_h, \boldsymbol{v}_h), \qquad \forall \boldsymbol{v}_h \in V. \tag{3.13}
\]
Similar to those in (3.6) and (3.7), Lemma 3.2 implies that $L_h$ is semi-negative and $\|L_h\| \le C h^{-1}$. Hence, one can use Theorem 2.2 to obtain the following corollary.
Corollary 3.4. Consider the discontinuous Galerkin approximation of (3.8) with the fourth-order RK time discretization,
\[
\boldsymbol{u}_h^{n+1} = P_4(\tau L_h) \boldsymbol{u}_h^n, \qquad L_h \text{ defined in (3.11), (3.12) and (3.13)}.
\]
The fully-discrete scheme is strongly stable in two steps,
\[
\|\boldsymbol{u}_h^{n+2}\| \le \|\boldsymbol{u}_h^n\|,
\]
under the CFL condition $\tau \le C h$. Here $C$ depends on $A_i$ and the constant in the inverse estimate.
Remark 3.1. The stability extends to symmetrizable hyperbolic systems with constant coefficients. More specifically, if there is a symmetric positive-definite matrix $H$ such that $H A_i$ are symmetric, then under an appropriate CFL condition $\tau \le C h$, one has $\|\boldsymbol{u}_h^{n+2}\|_H \le \|\boldsymbol{u}_h^n\|_H$, where $\|\boldsymbol{v}\|_H = \bigl( \sum_{K \in \mathcal{K}} \int_K \boldsymbol{v}^T H \boldsymbol{v} \,dx \bigr)^{1/2}$.
Remark 3.2. For symmetric hyperbolic systems with variable coefficients, one can follow
similar lines to show Lh is semi-bounded, and prove the stability of the fully discretized
scheme. But the extension to symmetrizable hyperbolic systems with variable coefficients is
non-trivial.
4 Concluding remarks
We analyze the stability of the fourth order RK method for integrating method of lines schemes solving the well-posed linear PDE system $\partial_t u = L(x, t, \partial_x) u$. The issue of strong stability is of special interest. We consider the ODE system $\frac{d}{dt} u_N = L_N u_N$, with $L_N + L_N^\top \le 0$ and $L_N$ independent of time. When $L_N$ is normal, the strong stability of the fourth order RK approximation has already been justified by the scalar eigenvalue analysis. But for non-normal $L_N$, we provide a counterexample to show that the strong stability can not be preserved, no matter how small a time step we choose. However, the strong stability can actually be obtained in two steps. We prove $\|u_N^{n+2}\| \le \|u_N^n\|$ under the time step constraint $\tau \|L_N\| \le c_0$ for some constant $c_0$. This can also be interpreted as the strong stability of the eight-stage fourth order RK method composed of two steps of the four-stage method. Then, based on this fact, we extend the stability results to general semi-bounded linear systems using a perturbation analysis and a frozen-coefficient argument. Finally, we apply the results to justify the stability of the fully discretized schemes combining the fourth order RK method with different spatial discretizations, including the spectral Galerkin method and the discontinuous Galerkin method. The corresponding CFL time step restrictions are obtained.
A Proof of Proposition 1.1
Proof. (1) To prove $L_N + L_N^T \le 0$.
\[
L_N + L_N^T = -\begin{pmatrix} 2 & 2 & 2 \\ 2 & 2 & 2 \\ 2 & 2 & 2 \end{pmatrix} = -2 \begin{pmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{pmatrix}.
\]
Hence for any $u = (u_1, u_2, u_3)^T \in \mathbb{R}^3$,
\[
u^T (L_N + L_N^T) u = -2(u_1 + u_2 + u_3)^2 \le 0.
\]
(2) To prove $\|P_4(\tau L_N)\| > 1$.

By definition,
\[
P_4(\tau L_N) = \begin{pmatrix}
1 - \tau + \frac{\tau^2}{2} - \frac{\tau^3}{6} + \frac{\tau^4}{24} & -2\tau + 2\tau^2 - \tau^3 + \frac{\tau^4}{3} & -2\tau + 4\tau^2 - 3\tau^3 + \frac{4\tau^4}{3} \\[4pt]
0 & 1 - \tau + \frac{\tau^2}{2} - \frac{\tau^3}{6} + \frac{\tau^4}{24} & -2\tau + 2\tau^2 - \tau^3 + \frac{\tau^4}{3} \\[4pt]
0 & 0 & 1 - \tau + \frac{\tau^2}{2} - \frac{\tau^3}{6} + \frac{\tau^4}{24}
\end{pmatrix}.
\]
Since $\|P_4(\tau L_N)\| = \sqrt{\lambda_{\max}(P_4(\tau L_N)^T P_4(\tau L_N))}$ is the square root of the largest eigenvalue of $P_4(\tau L_N)^T P_4(\tau L_N)$, we define $f(\lambda, \tau) = \det(\lambda I - P_4(\tau L_N)^T P_4(\tau L_N))$. It suffices to show, for any sufficiently small $\tau$, $f(\cdot, \tau)$ has a root larger than $1$.
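The matrix above can be cross-checked numerically. In the sketch below, $L_N$ is taken to be the upper-triangular matrix consistent with $L_N + L_N^T$ in part (1); this explicit form is inferred from the proof rather than restated from the text:

```python
import numpy as np

# L_N inferred from part (1): upper triangular with L + L^T = -2 * ones(3, 3).
L = np.array([[-1.0, -2.0, -2.0],
              [ 0.0, -1.0, -2.0],
              [ 0.0,  0.0, -1.0]])
assert np.allclose(L + L.T, -2 * np.ones((3, 3)))

def P4(A):
    """RK4 stability polynomial applied to a matrix argument."""
    I = np.eye(3)
    return I + A + A @ A / 2 + A @ A @ A / 6 + A @ A @ A @ A / 24

tau = 0.3
p = 1 - tau + tau**2/2 - tau**3/6 + tau**4/24     # diagonal entry
q = -2*tau + 2*tau**2 - tau**3 + tau**4/3         # first superdiagonal entry
r = -2*tau + 4*tau**2 - 3*tau**3 + 4*tau**4/3     # corner entry
expected = np.array([[p, q, r], [0, p, q], [0, 0, p]])
assert np.allclose(P4(tau * L), expected)
print("closed-form entries of P4(tau * L_N) confirmed at tau = 0.3")
```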
After direct calculation, one can obtain
\[
f(\lambda, \tau) = \lambda^3 + c_2 \lambda^2 + c_1 \lambda + c_0,
\]
where
\[
c_0 = -\left(1 - \tau + \frac{\tau^2}{2} - \frac{\tau^3}{6} + \frac{\tau^4}{24}\right)^6,
\]
\[
c_1 = 3 - 12\tau + 36\tau^2 - 72\tau^3 + 100\tau^4 - \frac{211\tau^5}{2} + \frac{1069\tau^6}{12} - \frac{257\tau^7}{4} + \frac{4081\tau^8}{96} - \frac{3827\tau^9}{144} + \frac{2225\tau^{10}}{144} - \frac{1133\tau^{11}}{144} + \frac{1895\tau^{12}}{576} - \frac{3709\tau^{13}}{3456} + \frac{197\tau^{14}}{768} - \frac{851\tau^{15}}{20736} + \frac{385\tau^{16}}{110592},
\]
\[
c_2 = -3 + 6\tau - 18\tau^2 + 36\tau^3 - 46\tau^4 + \frac{163\tau^5}{4} - \frac{589\tau^6}{24} + \frac{75\tau^7}{8} - \frac{385\tau^8}{192}.
\]
One can see
\[
f(1, \tau) = -\frac{8\tau^9}{27} + \frac{\tau^{10}}{3} - \frac{\tau^{11}}{9} - \frac{59\tau^{12}}{216} + \frac{341\tau^{13}}{864} - \frac{247\tau^{14}}{864} + \frac{1435\tau^{15}}{10368} - \frac{19\tau^{16}}{384} + \frac{2299\tau^{17}}{165888} - \frac{4757\tau^{18}}{1492992} + \frac{79\tau^{19}}{124416} - \frac{107\tau^{20}}{995328} + \frac{179\tau^{21}}{11943936} - \frac{13\tau^{22}}{7962624} + \frac{\tau^{23}}{7962624} - \frac{\tau^{24}}{191102976},
\]
\[
f(2, \tau) = 1 + 6\tau - 18\tau^2 + 36\tau^3 - 38\tau^4 + \frac{67\tau^5}{4} + \frac{371\tau^6}{24} - \frac{289\tau^7}{8} + \frac{7007\tau^8}{192} - \frac{11609\tau^9}{432} + \frac{2273\tau^{10}}{144} - \frac{383\tau^{11}}{48} + \frac{5213\tau^{12}}{1728} - \frac{2345\tau^{13}}{3456} - \frac{203\tau^{14}}{6912} + \frac{673\tau^{15}}{6912} - \frac{5087\tau^{16}}{110592} + \frac{2299\tau^{17}}{165888} - \frac{4757\tau^{18}}{1492992} + \frac{79\tau^{19}}{124416} - \frac{107\tau^{20}}{995328} + \frac{179\tau^{21}}{11943936} - \frac{13\tau^{22}}{7962624} + \frac{\tau^{23}}{7962624} - \frac{\tau^{24}}{191102976}.
\]
Hence, for any sufficiently small $\tau$,
\[
f(1, \tau) = -\frac{8\tau^9}{27} + O(\tau^{10}) < 0, \qquad f(2, \tau) = 1 + 6\tau + O(\tau^2) > 1.
\]
Note $f(\lambda, \tau)$ is a polynomial with respect to $\lambda$ and $\tau$, hence it is continuous. By the intermediate value theorem, there exists $\lambda(\tau) \in (1, 2)$ such that $f(\lambda(\tau), \tau) = 0$. Therefore $\|P_4(\tau L_N)\| \ge \sqrt{\lambda(\tau)} > 1$.
Remark A.1. We also provide the numerical examination of Proposition 1.1. The values of $\|P_4(\tau L_N)\| - 1$ and $\|P_4(\tau L_N)^2\| - 1$ with a decreasing sequence of $\tau$ are listed in Table A.1. As we can see, for this specific case, the one-step strong stability does fail, but the two-step strong stability holds.

Table A.1: The numerical examination of Proposition 1.1 and Theorem 2.2

  tau   | \|P_4(tau L_N)\| - 1 | \|P_4(tau L_N)^2\| - 1
  0.5   |  1.2794e-3           |  -1.0428e-3
  0.2   |  8.3857e-6           |  -1.8788e-5
  0.1   |  2.2173e-7           |  -6.6719e-7
  0.05  |  6.3437e-9           |  -2.2029e-8
  0.02  |  6.1504e-11          |  -2.3254e-10
  0.01  |  1.8868e-12          |  -7.3375e-12
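Table A.1 can be reproduced with a few lines of NumPy; as in the earlier sketch, $L_N$ is the upper-triangular counterexample matrix inferred from part (1) of the proof:

```python
import numpy as np

# Counterexample operator inferred from part (1): L + L^T = -2 * ones(3, 3).
L = np.array([[-1.0, -2.0, -2.0],
              [ 0.0, -1.0, -2.0],
              [ 0.0,  0.0, -1.0]])

def P4(A):
    """RK4 stability polynomial applied to a matrix argument."""
    I = np.eye(3)
    return I + A + A @ A / 2 + A @ A @ A / 6 + A @ A @ A @ A / 24

for tau in [0.5, 0.2, 0.1, 0.05, 0.02, 0.01]:
    G = P4(tau * L)
    one_step = np.linalg.norm(G, 2) - 1        # > 0: one-step stability fails
    two_step = np.linalg.norm(G @ G, 2) - 1    # < 0: two-step stability holds
    print(f"{tau:5.2f}  {one_step:+.4e}  {two_step:+.4e}")
```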
B Proof of Lemma 3.2
Proof. (1) To prove $H(\boldsymbol{v}_h, \boldsymbol{v}_h) \ge 0$.

We denote by $\boldsymbol{v}_{e,h} = S_e \boldsymbol{v}_h$ and $\widehat{\boldsymbol{v}}_{e,h} = \widehat{S_e \boldsymbol{v}_h}$. Since $A_i$ are symmetric, integrating by parts, one has
\[
\sum_{i=1}^d (A_i \boldsymbol{v}_h, \partial_{x_i} \boldsymbol{v}_h)_K = \sum_{e \in \partial K} \frac{1}{2} \Bigl\langle \sum_{i=1}^d n_{e,K}^i A_i \boldsymbol{v}_h, \boldsymbol{v}_h \Bigr\rangle_e = \sum_{e \in \partial K} \frac{1}{2} \langle \Lambda_e \boldsymbol{v}_{e,h}, \boldsymbol{v}_{e,h} \rangle_e. \tag{B.14}
\]
Substitute (B.14) into (3.12), then we obtain
\[
H(\boldsymbol{v}_h, \boldsymbol{v}_h) = \sum_{K \in \mathcal{K}} \sum_{e \in \partial K} \Bigl\langle \Lambda_e \Bigl(\frac{1}{2} \boldsymbol{v}_{e,h} - \widehat{\boldsymbol{v}}_{e,h}\Bigr), \boldsymbol{v}_{e,h} \Bigr\rangle_e.
\]
Each cell interface $e$ is shared by two elements. We sit in either side and call $\boldsymbol{v}_{e,h}$ defined in this element $\boldsymbol{v}_{e,h}^{\mathrm{int}}$; $\boldsymbol{v}_{e,h}$ defined from the other side is referred to as $\boldsymbol{v}_{e,h}^{\mathrm{ext}}$. Then the summation can be rewritten as
\[
H(\boldsymbol{v}_h, \boldsymbol{v}_h) = \sum_{e \in E} \Bigl( \Bigl\langle \Lambda_e \Bigl(\frac{1}{2} \boldsymbol{v}_{e,h}^{\mathrm{int}} - \widehat{\boldsymbol{v}}_{e,h}\Bigr), \boldsymbol{v}_{e,h}^{\mathrm{int}} \Bigr\rangle_e + \Bigl\langle -\Lambda_e \Bigl(\frac{1}{2} \boldsymbol{v}_{e,h}^{\mathrm{ext}} - \widehat{\boldsymbol{v}}_{e,h}\Bigr), \boldsymbol{v}_{e,h}^{\mathrm{ext}} \Bigr\rangle_e \Bigr)
\]
\[
= \sum_{e \in E} \Bigl( \frac{1}{2} \langle \Lambda_e \boldsymbol{v}_{e,h}^{\mathrm{int}}, \boldsymbol{v}_{e,h}^{\mathrm{int}} \rangle_e - \frac{1}{2} \langle \Lambda_e \boldsymbol{v}_{e,h}^{\mathrm{ext}}, \boldsymbol{v}_{e,h}^{\mathrm{ext}} \rangle_e + \langle \Lambda_e \widehat{\boldsymbol{v}}_{e,h}, \boldsymbol{v}_{e,h}^{\mathrm{ext}} - \boldsymbol{v}_{e,h}^{\mathrm{int}} \rangle_e \Bigr).
\]
By checking the definition of $\widehat{\boldsymbol{v}}_{e,h}$, one can prove that
\[
H(\boldsymbol{v}_h, \boldsymbol{v}_h) = \frac{1}{2} \sum_{e \in E} \langle \mathrm{abs}(\Lambda_e)(\boldsymbol{v}_{e,h}^{\mathrm{ext}} - \boldsymbol{v}_{e,h}^{\mathrm{int}}), (\boldsymbol{v}_{e,h}^{\mathrm{ext}} - \boldsymbol{v}_{e,h}^{\mathrm{int}}) \rangle_e,
\]
where $\mathrm{abs}(\Lambda_e) = \mathrm{diag}(|\lambda_e^1|, \ldots, |\lambda_e^m|)$. Hence $H(\boldsymbol{v}_h, \boldsymbol{v}_h)$ is non-negative.
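The last step ("by checking the definition of $\widehat{\boldsymbol{v}}_{e,h}$") amounts, per eigen-component, to the algebraic identity $\frac{1}{2}\lambda(a^2 - b^2) + \lambda \hat{v}(b - a) = \frac{1}{2}|\lambda|(b - a)^2$, with $a, b$ standing for the interior/exterior traces and $\hat{v}$ for the upwind value. A quick numerical check of this identity (a verification sketch, not part of the proof):

```python
# Per-component identity behind the last step: with a = v_int, b = v_ext and
# vhat = a if lam < 0 else b (upwind choice), one has
#   lam*(a**2 - b**2)/2 + lam*vhat*(b - a) == abs(lam)*(b - a)**2 / 2.
def lhs(lam, a, b):
    vhat = a if lam < 0 else b
    return lam * (a**2 - b**2) / 2 + lam * vhat * (b - a)

for lam in (-2.0, -0.5, 0.5, 3.0):
    for a, b in ((1.0, 4.0), (-2.0, 0.5), (3.0, 3.0)):
        assert abs(lhs(lam, a, b) - abs(lam) * (b - a)**2 / 2) < 1e-12
print("upwind dissipation identity verified")
```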
(2) To prove $|H(\boldsymbol{w}_h, \boldsymbol{v}_h)| \le C h^{-1} \|\boldsymbol{w}_h\| \|\boldsymbol{v}_h\|$.

We will use the notation $|\cdot|$ for different meanings. For scalars, $|a|$ is the absolute value of $a$; for vectors, $|\boldsymbol{a}|$ stands for the Euclidean norm of $\boldsymbol{a}$; and for matrices, $|A|$ is the operator norm of $A$. We also use the notation $\|\cdot\|_K = \sqrt{(\cdot,\cdot)_K}$.

By using the Cauchy-Schwarz inequality and the inverse estimate, one has
\[
\Bigl| \sum_{K \in \mathcal{K}} \sum_{i=1}^d (A_i \boldsymbol{w}_h, \partial_{x_i} \boldsymbol{v}_h)_K \Bigr| \le \sum_{K \in \mathcal{K}} \sum_{i=1}^d |A_i| \|\boldsymbol{w}_h\|_K \|\partial_{x_i} \boldsymbol{v}_h\|_K \le C h^{-1} \Bigl( \sum_{i=1}^d |A_i| \Bigr) \sum_{K \in \mathcal{K}} \|\boldsymbol{w}_h\|_K \|\boldsymbol{v}_h\|_K \le C h^{-1} \Bigl( \sum_{i=1}^d |A_i| \Bigr) \|\boldsymbol{w}_h\| \|\boldsymbol{v}_h\|. \tag{B.15}
\]
And
\[
\Bigl| \sum_{K \in \mathcal{K}} \sum_{e \in \partial K} \langle \Lambda_e \widehat{S_e \boldsymbol{w}_h}, S_e \boldsymbol{v}_h \rangle_e \Bigr| \le \sup_{k,e} |\lambda_e^k| \sum_{K \in \mathcal{K}} \sum_{e \in \partial K} \sqrt{\int_e |\widehat{S_e \boldsymbol{w}_h}|^2 \,dl} \sqrt{\int_e |S_e \boldsymbol{v}_h|^2 \,dl}.
\]
Note that $S_e$ are orthogonal, so $|S_e \boldsymbol{v}_h| = |\boldsymbol{v}_h|$. By using the inverse inequality, we obtain
\[
\sqrt{\int_e |S_e \boldsymbol{v}_h|^2 \,dl} = \sqrt{\int_e |\boldsymbol{v}_h|^2 \,dl} \le C h^{-\frac{1}{2}} \|\boldsymbol{v}_h\|_K,
\]
and
\[
\sqrt{\int_e |\widehat{S_e \boldsymbol{w}_h}|^2 \,dl} \le \sqrt{\int_e \bigl( |\boldsymbol{w}_h^{\mathrm{int}}|^2 + |\boldsymbol{w}_h^{\mathrm{ext}}|^2 \bigr) \,dl} \le C h^{-\frac{1}{2}} \|\boldsymbol{w}_h\|_{N(K)},
\]
where $N(K)$ is the union of $K$ and its neighboring elements. Hence
\[
\Bigl| \sum_{K \in \mathcal{K}} \sum_{e \in \partial K} \langle \Lambda_e \widehat{S_e \boldsymbol{w}_h}, S_e \boldsymbol{v}_h \rangle_e \Bigr| \le C h^{-1} \sup_{k,e} |\lambda_e^k| \sum_{K \in \mathcal{K}} \|\boldsymbol{w}_h\|_{N(K)} \|\boldsymbol{v}_h\|_K \le C h^{-1} \sup_{k,e} |\lambda_e^k| \sqrt{\sum_{K \in \mathcal{K}} \|\boldsymbol{w}_h\|_{N(K)}^2} \sqrt{\sum_{K \in \mathcal{K}} \|\boldsymbol{v}_h\|_K^2} \le C h^{-1} \sup_{k,e} |\lambda_e^k| \|\boldsymbol{w}_h\| \|\boldsymbol{v}_h\|. \tag{B.16}
\]
Combining (B.15) and (B.16), we get $|H(\boldsymbol{w}_h, \boldsymbol{v}_h)| \le C h^{-1} \|\boldsymbol{w}_h\| \|\boldsymbol{v}_h\|$.
References
[1] B. Cockburn, S. Hou, and C.-W. Shu. The Runge-Kutta local projection discontinuous
Galerkin finite element method for conservation laws IV: the multidimensional case.
Mathematics of Computation, 54:545–581, 1990.
[2] B. Cockburn, S.-Y. Lin, and C.-W. Shu. TVB Runge-Kutta local projection discontinuous Galerkin finite element method for conservation laws III: one-dimensional systems.
Journal of Computational Physics, 84:90–113, 1989.
[3] B. Cockburn and C.-W. Shu. TVB Runge-Kutta local projection discontinuous Galerkin finite element method for conservation laws II: general framework. Mathematics of Computation, 52:411–435, 1989.
[4] B. Cockburn and C.-W. Shu. The Runge-Kutta local projection P1-discontinuous-Galerkin finite element method for scalar conservation laws. RAIRO-Modélisation Mathématique et Analyse Numérique, 25:337–361, 1991.
[5] B. Cockburn and C.-W. Shu. The Runge-Kutta discontinuous Galerkin method for
conservation laws V: multidimensional systems. Journal of Computational Physics,
141:199–224, 1998.
[6] S. Gottlieb, C.-W. Shu, and E. Tadmor. Strong stability-preserving high-order time
discretization methods. SIAM Review, 43:89–112, 2001.
[7] J.S. Hesthaven, S. Gottlieb, and D. Gottlieb. Spectral methods for time-dependent
problems. Cambridge University Press, Cambridge, 2007.
[8] A. Iserles. A first course in the numerical analysis of differential equations. Cambridge
University Press, Cambridge, 2009.
[9] J.F.B.M. Kraaijevanger. Contractivity of Runge-Kutta methods. BIT Numerical Mathematics, 31:482–528, 1991.
[10] D. Levy and E. Tadmor. From semidiscrete to fully discrete: stability of Runge-Kutta schemes by the energy method. SIAM Review, 40:40–73, 1998.
[11] J. Shen, T. Tang, and L.-L. Wang. Spectral methods: algorithms, analysis and applications. Springer, Heidelberg, 2011.
[12] M.N. Spijker. Contractivity in the numerical solution of initial value problems. Numerische Mathematik, 42:271–290, 1983.
[13] Z. Sun and C.-W. Shu. Stability analysis and error estimates of Lax-Wendroff discontinuous Galerkin methods for linear conservation laws. ESAIM: Mathematical Modelling
and Numerical Analysis, to appear.
[14] E. Tadmor. From semidiscrete to fully discrete: stability of Runge-Kutta schemes
by the energy method II. Collected Lectures on the Preservation of Stability under
Discretization, Lecture Notes from Colorado State University Conference, Fort Collins,
CO, 2001 (D. Estep and S. Tavener, eds.), Proceedings in Applied Mathematics, SIAM,
109:25–49, 2002.
[15] G. Wanner and E. Hairer. Solving ordinary differential equations II. Springer-Verlag,
Berlin, 1991.