Order conditions for general linear methods

A. Cardone,* Z. Jackiewicz,† James H. Verner,‡ and Bruno Welfert§

October 6, 2013

Abstract

We describe the derivation of order conditions, without restrictions on stage order, for general linear methods for ordinary differential equations. This derivation is based on the extension of the Albrecht approach proposed in the context of Runge-Kutta and composite and linear cyclic methods. This approach was generalized by Jackiewicz and Tracogna to two-step Runge-Kutta methods, by Jackiewicz and Vermiglio to general linear methods with external stages of different orders, and by Garrappa to some classes of Runge-Kutta methods for Volterra integral equations with weakly singular kernels. This leads to general order conditions for many special cases of general linear methods such as diagonally implicit multistage integration methods, Nordsieck methods, and general linear methods with inherent Runge-Kutta stability. Examples of high order methods with some desirable stability properties are also presented.

Keywords: General linear methods; order conditions; Nordsieck methods; two-step Runge-Kutta formulas

2010 MSC: 65L05, 65L20

* Dipartimento di Matematica, Università di Salerno, Fisciano (Sa), 84084 Italy, e-mail: [email protected]. The work of this author was supported by a travel fellowship from the Department of Mathematics, University of Salerno.
† Department of Mathematics, Arizona State University, Tempe, Arizona 85287, e-mail: [email protected], and AGH University of Science and Technology, Kraków, Poland.
‡ Department of Mathematics, Simon Fraser University, 8888 University Drive, Burnaby, B.C. V5A 1S6, Canada, e-mail: [email protected].
§ Department of Mathematics, Arizona State University, Tempe, Arizona 85287, e-mail: [email protected]

1 Introduction

Consider the initial value problem for a system of ordinary differential equations (ODEs)

    y'(x) = f(y(x)),  x ∈ [x_0, X],
    y(x_0) = y_0,                                                    (1.1)

where y_0 ∈ R^m, and the function f : R^m → R^m is assumed to be sufficiently smooth. Let N be a positive integer and define the stepsize h = (X − x_0)/N and the uniform grid x_n = x_0 + nh, n = 0, 1, …, N. To approximate the solution y = y(x) of (1.1) we consider the class of general linear methods (GLMs) defined by

    Y_i^{[n+1]} = h Σ_{j=1}^{s} a_{ij} f(Y_j^{[n+1]}) + Σ_{j=1}^{r} u_{ij} y_j^{[n]},  i = 1, 2, …, s,
    y_i^{[n+1]} = h Σ_{j=1}^{s} b_{ij} f(Y_j^{[n+1]}) + Σ_{j=1}^{r} v_{ij} y_j^{[n]},  i = 1, 2, …, r,       (1.2)

n = 0, 1, …, N − 1. Here, Y_i^{[n+1]} are approximations, possibly of low stage order q, to y(x_n + c_i h), i.e.,

    Y_i^{[n+1]} = y(x_n + c_i h) + O(h^{q+1}),  i = 1, 2, …, s,      (1.3)

where q is the minimum of all stage orders, and y_i^{[n]} are approximations of order p to linear combinations of the derivatives of the solution y at the point x_n, i.e.,

    y_i^{[n]} = Σ_{k=0}^{p} q_{ik} h^k y^{(k)}(x_n) + O(h^{p+1}),  i = 1, 2, …, r.    (1.4)

These methods can be characterized by the abscissa vector c = [c_1, …, c_s]^T, the coefficient matrices A = [a_{ij}] ∈ R^{s×s}, U = [u_{ij}] ∈ R^{s×r}, B = [b_{ij}] ∈ R^{r×s}, V = [v_{ij}] ∈ R^{r×r}, the vectors q_0, q_1, …, q_p defined by

    q_0 = [q_{1,0}, …, q_{r,0}]^T,  q_1 = [q_{1,1}, …, q_{r,1}]^T,  …,  q_p = [q_{1,p}, …, q_{r,p}]^T,

and the four integers: the order p, the stage order q, the number of external approximations r, and the number of stages or internal approximations s. Introducing the notation

    Y^{[n+1]} = [Y_1^{[n+1]}; …; Y_s^{[n+1]}],  f(Y^{[n+1]}) = [f(Y_1^{[n+1]}); …; f(Y_s^{[n+1]})],  y^{[n]} = [y_1^{[n]}; …; y_r^{[n]}],

the GLM (1.2) can be written in the more compact vector form

    Y^{[n+1]} = h(A ⊗ I) f(Y^{[n+1]}) + (U ⊗ I) y^{[n]},
    y^{[n+1]} = h(B ⊗ I) f(Y^{[n+1]}) + (V ⊗ I) y^{[n]},             (1.5)

n = 0, 1, …, N − 1, where I is the identity matrix of dimension m and '⊗' stands for the Kronecker product of matrices. Moreover, the relations (1.3) and (1.4) take the form

    Y^{[n+1]} = y(x_n + ch) + O(h^{q+1}),                            (1.6)
    y^{[n]} = z(x_n) + O(h^{p+1}),                                   (1.7)

where

    y(x + ch) = [y(x + c_1 h); …; y(x + c_s h)] ∈ R^{sm},

and the so-called exact value function z(x) ∈ R^{rm} is defined by

    z(x) = Σ_{k=0}^{p} q_k h^k y^{(k)}(x) = q_0 y(x) + q_1 h y'(x) + ⋯ + q_p h^p y^{(p)}(x).

To guarantee zero-stability we assume that the coefficient matrix V is power bounded. This is equivalent to the condition that the minimal polynomial of V has no zeros with magnitude greater than 1 and that all zeros with magnitude equal to 1 are simple. If V has only one eigenvalue on the unit circle, equal to 1, and all other eigenvalues inside the unit circle, the GLM (1.5) is called strictly zero-stable.

Applying (1.5) to the linear test equation

    y' = ξ y,  x ≥ 0,                                                (1.8)

where ξ is a complex parameter, leads to the matrix recurrence relation

    y^{[n+1]} = M(z) y^{[n]},  n = 0, 1, …,                          (1.9)

where z = hξ and the so-called stability matrix M(z) is defined by

    M(z) = V + zB(I − zA)^{−1}U.                                     (1.10)

We also define the stability function p(w, z) by the formula

    p(w, z) = det(wI − M(z)).                                        (1.11)

This function determines the stability properties (absolute stability) of GLMs (1.5) with respect to the test equation (1.8). The GLM (1.5) is said to have Runge-Kutta stability if the stability function p(w, z) has only one nonzero root, i.e., if p(w, z) assumes the form

    p(w, z) = w^{r−1}(w − R(z)),                                     (1.12)

where R(z) is an approximation of order p to the exponential function exp(z). Here, p is the order of the method (1.5).

The GLMs were first introduced by Butcher [13] using somewhat different notation, and the modern theory of these methods was developed in [14, 16, 17, 42, 43, 45, 65].
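The stability matrix (1.10) is easy to evaluate numerically. The following sketch is an illustration, not part of the paper: it uses the classical fourth-order Runge-Kutta method written as a GLM with r = 1 (compare Section 2.2, where U = e and V = [1]), for which p(w, z) = w − R(z) and R(z) is the degree-4 Taylor polynomial of exp(z).

```python
import math
import numpy as np

# Stability matrix M(z) = V + z B (I - z A)^{-1} U, equation (1.10).
def stability_matrix(z, A, U, B, V):
    s = A.shape[0]
    return V + z * (B @ np.linalg.solve(np.eye(s) - z * A, U))

# Classical RK4 written as a GLM with r = 1: U = e, V = [1], B = b^T.
A = np.array([[0.0, 0.0, 0.0, 0.0],
              [0.5, 0.0, 0.0, 0.0],
              [0.0, 0.5, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
B = np.array([[1/6, 1/3, 1/3, 1/6]])
U = np.ones((4, 1))
V = np.array([[1.0]])

# With r = 1 the stability function is p(w, z) = w - R(z); for RK4 the
# root R(z) equals the degree-4 Taylor polynomial of exp(z).
z = 0.3
R = stability_matrix(z, A, U, B, V)[0, 0]
print(R - sum(z**k / math.factorial(k) for k in range(5)))  # ~0
```

The same routine applies verbatim to any GLM (A, U, B, V); for r > 1 the roots of p(w, z) = det(wI − M(z)) are the eigenvalues of M(z).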
These methods include as special cases many numerical methods for ODEs, for example linear multistep methods [42–44, 55, 56, 58, 59], predictor-corrector methods [55, 56, 58, 59], Runge-Kutta (RK) methods [14, 16, 40, 42, 55, 56], diagonally implicit multistage integration methods (DIMSIMs) [15, 24–28, 30–35], Nordsieck methods [7, 20, 29], two-step Runge-Kutta (TSRK) methods [6, 7, 46–50, 52, 53, 60–62], GLMs with inherent Runge-Kutta stability (IRKS) [31, 35, 65, 66], and peer methods [9, 54, 57, 63, 64]. For additional examples we refer to [45] and the references therein. Some special cases of GLMs are reviewed in Section 2.

The paper is organized as follows. Section 2 contains a short review of some classes of GLMs. Section 3 deals with the derivation of local discretization errors of GLMs by Taylor series expansion. The main results on order conditions are presented in Section 4. In Section 5 these theoretical results are used to derive order conditions up to order six. Examples of application of the order conditions to various methods are given in Section 6. The last section contains some concluding remarks.

2 Some special cases of GLMs

In this section we briefly review some special cases of GLMs (1.5) which will be used later to illustrate the general theory of order conditions.

2.1 Linear multistep methods

The class of linear multistep methods is defined by

    y_{n+1} = Σ_{j=1}^{k} α_j y_{n+1−j} + h Σ_{j=0}^{k} β_j f(y_{n+1−j}),      (2.1)

n = k − 1, k, …, N − 1, where y_0, y_1, …, y_{k−1} are given starting values which are approximations to y(x_0), y(x_1), …, y(x_{k−1}). Putting Y^{[n+1]} = y_{n+1} and

    y^{[n]} = [y_n, y_{n−1}, …, y_{n−k+1}, hf(y_n), hf(y_{n−1}), …, hf(y_{n−k+1})]^T,

the method (2.1) can be represented as a GLM with r = 2k, s = 1, c = c_1 = 1, and the coefficient matrices given by

    [ A  U ]   [ β_0 | α_1  ⋯  α_{k−1}  α_k | β_1  ⋯  β_{k−1}  β_k ]
    [ B  V ] = [ β_0 | α_1  ⋯  α_{k−1}  α_k | β_1  ⋯  β_{k−1}  β_k ]
               [  0  |  1   ⋯     0      0  |  0   ⋯     0      0  ]
               [  ⋮  |  ⋱                ⋮  |  ⋮               ⋮  ]
               [  0  |  0   ⋯     1      0  |  0   ⋯     0      0  ]
               [  1  |  0   ⋯     0      0  |  0   ⋯     0      0  ]
               [  0  |  0   ⋯     0      0  |  1   ⋯     0      0  ]
               [  ⋮  |  ⋮               ⋮  |  ⋱               ⋮  ]
               [  0  |  0   ⋯     0      0  |  0   ⋯     1      0  ].

This method has order p with respect to the starting procedure

    y^{[0]} = q_0 y(x_0) + q_1 h y'(x_0) + ⋯ + q_p h^p y^{(p)}(x_0) + O(h^{p+1}),

with q_0, q_1, …, q_p given by

    q_0 = [1  1  ⋯  1 | 0  0  ⋯  0]^T,
    q_1 = [0  −1  ⋯  −(k−1) | 1  1  ⋯  1]^T,
    q_l = ((−1)^{l−1}/(l−1)!) [0  −1/l  −2^l/l  ⋯  −(k−1)^l/l | 0  1  2^{l−1}  ⋯  (k−1)^{l−1}]^T,

l = 2, 3, …, p, compare [45]. This representation with r = 2k and s = 1 was first proposed by Burrage and Butcher [12] (compare also [18]). A more compact representation of these methods with r = k and s = 1 was discovered recently by Butcher and Hill [23], see also [45]; it may lead to a simpler analysis of the methods. We refer to the monographs [42–44, 55, 56, 58, 59] for a thorough discussion of linear multistep methods.

2.2 RK methods

The RK methods with s stages are defined by

    Y_i^{[n+1]} = y_n + h Σ_{j=1}^{s} a_{ij} f(Y_j^{[n+1]}),  i = 1, 2, …, s,
    y_{n+1} = y_n + h Σ_{j=1}^{s} b_j f(Y_j^{[n+1]}),                          (2.2)

n = 0, 1, …, N − 1, where Y_i^{[n+1]} is an approximation to y(x_n + c_i h) and y_n is an approximation to y(x_n). These methods are usually represented by the Butcher tableau

    c | A          c_1 | a_11  ⋯  a_1s
    --+----   =     ⋮  |  ⋮    ⋱   ⋮
      | b^T        c_s | a_s1  ⋯  a_ss
                   ----+---------------
                       | b_1   ⋯  b_s

They can be represented as GLMs (1.5) with r = 1, the abscissa vector c = [c_1, …, c_s]^T, and the coefficient matrices

    [ A  U ]   [ a_11  ⋯  a_1s | 1 ]
    [ B  V ] = [  ⋮    ⋱   ⋮   | ⋮ ]
               [ a_s1  ⋯  a_ss | 1 ]
               [ b_1   ⋯  b_s  | 1 ],

and q_0 = 1, q_1 = 0. We refer to the monographs [14, 16, 40, 42, 55, 56] for a thorough discussion of these methods.

2.3 DIMSIMs

DIMSIMs (with r = s) are the GLMs defined by

    Y_i^{[n+1]} = h Σ_{j=1}^{i−1} a_{ij} f(Y_j^{[n+1]}) + hλ f(Y_i^{[n+1]}) + y_i^{[n]},  i = 1, 2, …, s,
    y_i^{[n+1]} = h Σ_{j=1}^{s} b_{ij} f(Y_j^{[n+1]}) + Σ_{j=1}^{s} v_j y_j^{[n]},  i = 1, 2, …, s,       (2.3)

n = 0, 1, …, N − 1.
For these methods the coefficient matrix A is lower triangular with the same element λ ≥ 0 on the diagonal. If λ = 0 these methods are explicit and will be referred to as type 1 methods. If λ > 0 these methods are implicit and will be referred to as type 2 methods. The coefficient matrix V is a rank one matrix of the form V = e v^T, with v^T e = 1. The methods (2.3) can be represented by the abscissa vector c = [c_1, …, c_s]^T and the coefficient matrices

    [ A  U ]   [  λ     0   ⋯   0  | 1  0  ⋯  0 ]
    [ B  V ] = [ a_21   λ   ⋯   0  | 0  1  ⋯  0 ]
               [  ⋮     ⋮   ⋱   ⋮  | ⋮  ⋮  ⋱  ⋮ ]
               [ a_s1  a_s2 ⋯   λ  | 0  0  ⋯  1 ]
               [ b_11  b_12 ⋯  b_1s | v_1  v_2  ⋯  v_s ]
               [ b_21  b_22 ⋯  b_2s | v_1  v_2  ⋯  v_s ]
               [  ⋮     ⋮   ⋱   ⋮  |  ⋮    ⋮   ⋱   ⋮  ]
               [ b_s1  b_s2 ⋯  b_ss | v_1  v_2  ⋯  v_s ],

together with the vectors q_0, q_1, …, q_p defined in Section 1. These methods were introduced by Butcher [15] and further investigated in [24–28, 30–35].

2.4 DIMSIMs in Nordsieck representation

The Nordsieck representation of DIMSIMs (2.3) was introduced in [20] to simplify the implementation of these methods in variable stepsize, variable order environments. In this representation r = s + 1 and the vector of external approximations, denoted by z^{[n]}, approximates directly the Nordsieck vector z(x, h) defined by

    z(x, h) = [y(x); hy'(x); …; h^s y^{(s)}(x)].

As demonstrated in [20, 21, 45] this leads to methods given by

    Y_i^{[n+1]} = h Σ_{j=1}^{i−1} a_{ij} f(Y_j^{[n+1]}) + hλ f(Y_i^{[n+1]}) + Σ_{j=1}^{s+1} u_{ij} z_j^{[n]},  i = 1, 2, …, s,
    z_1^{[n+1]} = h Σ_{j=1}^{s} b_{1j} f(Y_j^{[n+1]}) + Σ_{j=1}^{s+1} v_j z_j^{[n]},
    z_i^{[n+1]} = h Σ_{j=1}^{s} b_{ij} f(Y_j^{[n+1]}),  i = 2, 3, …, s + 1,     (2.4)

n = 0, 1, …, N − 1, with v_1 = 1. These methods can be represented by the abscissa vector c = [c_1, …, c_s]^T and the coefficient matrices

    [ A  U ]   [    λ       0    ⋯    0   | u_11  u_12  ⋯  u_{1,s+1} ]
    [ B  V ] = [   a_21     λ    ⋯    0   | u_21  u_22  ⋯  u_{2,s+1} ]
               [    ⋮       ⋮    ⋱    ⋮   |  ⋮     ⋮    ⋱     ⋮      ]
               [   a_s1    a_s2  ⋯    λ   | u_s1  u_s2  ⋯  u_{s,s+1} ]
               [   b_11    b_12  ⋯   b_1s |  1    v_2   ⋯  v_{s+1}   ]
               [   b_21    b_22  ⋯   b_2s |  0     0    ⋯     0      ]
               [    ⋮       ⋮    ⋱    ⋮   |  ⋮     ⋮    ⋱     ⋮      ]
               [ b_{s+1,1} b_{s+1,2} ⋯ b_{s+1,s} | 0  0  ⋯  0        ].

Assuming that the method (2.4) has order p = s, the vectors q_0, q_1, …, q_p are given by q_0 = e_1, q_1 = e_2, …, q_p = e_{p+1}, where e_1, e_2, …, e_{p+1} is the canonical basis in R^{p+1}.

2.5 TSRK methods

TSRK methods depend on the stage values at two consecutive steps and assume the form

    Y_i^{[n+1]} = (1 − u_i) y_n + u_i y_{n−1} + h Σ_{j=1}^{s} ( a_{ij} f(Y_j^{[n+1]}) + b_{ij} f(Y_j^{[n]}) ),
    y_{n+1} = (1 − ϑ) y_n + ϑ y_{n−1} + h Σ_{j=1}^{s} ( v_j f(Y_j^{[n+1]}) + w_j f(Y_j^{[n]}) ),            (2.5)

i = 1, 2, …, s, n = 0, 1, …, N − 1. Here, the stage values Y_i^{[n+1]} are approximations to y(x_n + c_i h), i = 1, 2, …, s, and y_n is an approximation to y(x_n). These methods can be represented by the abscissa vector c = [c_1, …, c_s]^T and the tableau of coefficients

    u | A   | B          u_1 | a_11  ⋯  a_1s | b_11  ⋯  b_1s
    --+-----+-----   =    ⋮  |  ⋮    ⋱   ⋮   |  ⋮    ⋱   ⋮
    ϑ | v^T | w^T        u_s | a_s1  ⋯  a_ss | b_s1  ⋯  b_ss
                         ----+---------------+---------------
                          ϑ  | v_1   ⋯  v_s  | w_1   ⋯  w_s

The TSRK methods (2.5) can be represented as GLMs (1.5) with 2s internal stages and r = s + 2 external stages. This representation takes the form

    [ Y^{[n]}    ]   [  0    0  |  0    0   I ] [ hf(Y^{[n]})   ]
    [ Y^{[n+1]}  ]   [  B    A  | e−u   u   0 ] [ hf(Y^{[n+1]}) ]
    [ y_{n+1}   ] = [ w^T  v^T | 1−ϑ   ϑ   0 ] [ y_n           ]      (2.6)
    [ y_n       ]   [  0    0  |  1    0   0 ] [ y_{n−1}       ]
    [ Y^{[n+1]}  ]   [  B    A  | e−u   u   0 ] [ Y^{[n]}       ]

n = 0, 1, …, N − 1, and the vectors q_0 and q_1 are given by

    q_0 = [1; 1; e] ∈ R^{s+2},  q_1 = [0; −1; c − e] ∈ R^{s+2},

compare [6, 49]. A more compact representation of TSRK methods (2.5) as GLMs (1.5) with s internal stages and r = s + 2 external stages is given by

    [ Y^{[n+1]}     ]   [  A  | e−u   u   B  ] [ hf(Y^{[n+1]}) ]
    [ y_{n+1}      ] = [ v^T | 1−ϑ   ϑ  w^T ] [ y_n           ]       (2.7)
    [ y_n          ]   [  0  |  1    0   0  ] [ y_{n−1}       ]
    [ hf(Y^{[n+1]}) ]   [  I  |  0    0   0  ] [ hf(Y^{[n]})   ]

n = 0, 1, …, N − 1, and the vectors q_0 and q_1 are

    q_0 = [1; 1; 0] ∈ R^{s+2},  q_1 = [0; −1; e] ∈ R^{s+2},           (2.8)

compare [45, 65]. TSRK methods (2.5) were introduced by Jackiewicz and Tracogna [49] and further investigated in [6, 7, 46–48, 50, 52, 53, 60–62] and the monograph [45].

2.6 GLMs with IRKS

Consider the GLMs with r = s = p + 1,

    Y_i^{[n+1]} = h Σ_{j=1}^{s} a_{ij} f(Y_j^{[n+1]}) + Σ_{j=1}^{s} u_{ij} y_j^{[n]},  i = 1, 2, …, s,
    y_i^{[n+1]} = h Σ_{j=1}^{s} b_{ij} f(Y_j^{[n+1]}) + Σ_{j=1}^{s} v_{ij} y_j^{[n]},  i = 1, 2, …, s,      (2.9)

n = 0, 1, …, N − 1, where the vector of external approximations y^{[n]} approximates the Nordsieck vector defined, similarly as in Section 2.4, by

    z(x, h) = [y(x); hy'(x); …; h^p y^{(p)}(x)].

This implies that the vectors q_0, …, q_{s−1} are given by q_0 = e_1, q_1 = e_2, …, q_{s−1} = e_s, where e_1, …, e_s is the canonical basis in R^s.

The GLM (2.9) is said to possess the IRKS property if its coefficients satisfy the algebraic constraints

    BA ≡ XB,  BU ≡ XV − VX,

for some matrix X ∈ R^{s×s}, together with the condition on the characteristic polynomial of the matrix V

    det(wI − V) = w^{r−1}(w − 1).

Here, the notation M_1 ≡ M_2 means that these matrices are identical except possibly for their first rows. The significance of the IRKS conditions follows from the result, proved in [35, 65], that for the methods (2.9) which satisfy these conditions (and the preconsistency condition Ve_1 = e_1 discussed in Section 3) the stability function p(w, z) has the form (1.12), i.e., the GLMs have RK stability. GLMs with IRKS were investigated in [31, 35, 65, 66] and the monograph [45] under the additional assumptions that the stage order q is equal to the order p and that the coefficient matrix V has the form

    V = [ 1  v^T ]
        [ 0  V̄  ],

where the matrix V̄ ∈ R^{p×p} has spectral radius equal to zero. In [31] this matrix V̄ was assumed to be upper triangular with zeros on the diagonal.

3 Local discretization errors of GLMs

To simplify the notation we assume that m = 1, i.e., that (1.1) is a scalar differential equation. The general case corresponding to m > 1 follows easily by replacing A, U, B, V, and q_i by A ⊗ I, U ⊗ I, B ⊗ I, V ⊗ I, and q_i ⊗ I, where I is the identity matrix of dimension m, the dimension of the differential system (1.1).
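The Kronecker-product lifting just described is mechanical. A minimal sketch (the 2 × 2 block A and the dimension m = 3 are hypothetical values chosen only for illustration) shows that A ⊗ I acts blockwise on the stacked stage derivatives:

```python
import numpy as np

# Lifting a GLM coefficient block to a system of dimension m > 1:
# (A (x) I) applied to F = [f(Y_1); f(Y_2)] acts blockwise.
m = 3                                  # dimension of the ODE system (assumption)
A = np.array([[0.25, 0.00],
              [0.50, 0.25]])           # hypothetical 2 x 2 coefficient block
A_lift = np.kron(A, np.eye(m))         # (s*m) x (s*m) matrix A (x) I

F = np.arange(2 * m, dtype=float)      # stand-in for the stacked values f(Y_j)
blockwise = np.concatenate([A[0, 0]*F[:m] + A[0, 1]*F[m:],
                            A[1, 0]*F[:m] + A[1, 1]*F[m:]])
print(np.allclose(A_lift @ F, blockwise))  # True
```

This is why the scalar assumption m = 1 loses no generality: every statement about A, U, B, V transfers verbatim to the lifted matrices.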
For GLMs (1.5), the local discretization errors of the stage values Y^{[n+1]} and of the external approximations y^{[n+1]}, denoted by hd(x_n + ch) and hd̂(x_{n+1}), respectively, are defined as the residuals obtained by replacing in (1.5) Y^{[n+1]} by y(x_n + ch), and y^{[n+1]}, y^{[n]} by z(x_{n+1}), z(x_n). Taking into account that f(y(x_n + ch)) = y'(x_n + ch) this leads to

    y(x_n + ch) = hA y'(x_n + ch) + U z(x_n) + hd(x_n + ch),
    z(x_{n+1}) = hB y'(x_n + ch) + V z(x_n) + hd̂(x_{n+1}),          (3.1)

n = 0, 1, …, N − 1. Put e = [1, …, 1]^T ∈ R^s. Expanding y(x_n + ch) and y'(x_n + ch) into Taylor series around the point x_n we obtain

    Σ_{k=0}^{p} (c^k/k!) h^k y^{(k)}(x_n) = A Σ_{k=1}^{p} (c^{k−1}/(k−1)!) h^k y^{(k)}(x_n)
        + U Σ_{k=0}^{p} q_k h^k y^{(k)}(x_n) + hd(x_n + ch) + O(h^{p+1}),

and

    Σ_{k=0}^{p} q_k h^k y^{(k)}(x_{n+1}) = B Σ_{k=1}^{p} (c^{k−1}/(k−1)!) h^k y^{(k)}(x_n)
        + V Σ_{k=0}^{p} q_k h^k y^{(k)}(x_n) + hd̂(x_{n+1}) + O(h^{p+1}).

This leads to

    hd(x_n + ch) = (e − Uq_0) y(x_n)
        + Σ_{k=1}^{p} ( c^k/k! − A c^{k−1}/(k−1)! − U q_k ) h^k y^{(k)}(x_n) + O(h^{p+1}),     (3.2)

and

    hd̂(x_{n+1}) = (q_0 − Vq_0) y(x_n)
        + Σ_{k=1}^{p} ( Σ_{l=0}^{k} q_l/(k−l)! − B c^{k−1}/(k−1)! − V q_k ) h^k y^{(k)}(x_n) + O(h^{p+1}).   (3.3)

Introducing the notation

    γ_0 = e − Uq_0,  γ_k = c^k/k! − A c^{k−1}/(k−1)! − U q_k,  k = 1, 2, …, p,                 (3.4)

and

    γ̂_0 = q_0 − Vq_0,  γ̂_k = Σ_{l=0}^{k} q_l/(k−l)! − B c^{k−1}/(k−1)! − V q_k,  k = 1, 2, …, p,   (3.5)

the relations (3.2) and (3.3) for the local discretization errors take the simple form

    hd(x_n + ch) = Σ_{l=0}^{p} γ_l h^l y^{(l)}(x_n) + O(h^{p+1}),    (3.6)

and

    hd̂(x_{n+1}) = Σ_{l=0}^{p} γ̂_l h^l y^{(l)}(x_n) + O(h^{p+1}).    (3.7)

If the GLM (1.5) is to have order p then, by considering the solution of y'(x) = f(x) by (1.2), it follows that a necessary (but far from sufficient) condition is that the local discretization error of the vector of external approximations satisfies hd̂(x_{n+1}) = O(h^{p+1}). This leads to

    γ̂_i = 0,  i = 0, 1, …, p.                                        (3.8)

The condition

    γ̂_0 = 0,  or  Vq_0 = q_0,                                        (3.9)

is called the preconsistency condition, and the condition

    γ̂_1 = 0,  or  Be + Vq_1 = q_0 + q_1,                             (3.10)

is called the consistency condition, compare [45]. We will also always assume the stage preconsistency condition

    γ_0 = 0,  or  Uq_0 = e,                                          (3.11)

and the stage consistency condition

    γ_1 = 0,  or  Ae + Uq_1 = c,                                     (3.12)

compare again [45]. Assuming (3.9), (3.10), (3.11), and (3.12), the local discretization errors now take the form

    hd(x_n + ch) = Σ_{l=2}^{p} γ_l h^l y^{(l)}(x_n) + O(h^{p+1}),    (3.13)

and

    hd̂(x_{n+1}) = Σ_{l=2}^{p} γ̂_l h^l y^{(l)}(x_n) + O(h^{p+1}).    (3.14)

Assuming, in addition to (3.8), (3.11), and (3.12), that γ_l = 0 for l = 2, 3, …, p − 1 or for l = 2, 3, …, p leads to GLMs of order p with stage order q = p − 1 or q = p, respectively. Such high order and high stage order methods with some desirable stability properties (large regions of absolute stability for explicit methods; A-, L-, and algebraic stability for implicit methods) were investigated in a series of papers [8, 10, 11, 15, 24–28, 30–39] and the monograph [45].

In the next section we will derive general order conditions for GLMs (1.5) without restrictions on stage order (except the stage preconsistency and stage consistency conditions (3.11) and (3.12)), using the approach proposed by Albrecht [1–5] in the context of RK, composite, and linear cyclic methods, and generalized by Jackiewicz and Tracogna [49, 50, 60] to TSRK methods for ODEs, by Jackiewicz and Vermiglio [51] to general linear methods with external stages of different orders, and by Garrappa [41] to some classes of Runge-Kutta methods for Volterra integral equations with weakly singular kernels. A very readable summary of this approach for RK methods for ODEs was presented by Lambert [56].

4 General order conditions for GLMs

In [51] an extension of the approach of Albrecht [2–5] was used to derive general order conditions for GLMs with external stages of different orders.
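The vectors γ_k and γ̂_k of (3.4)-(3.5) are straightforward to evaluate for a concrete method. The sketch below is an illustration not taken from the paper: it uses the classical RK4 method written as a GLM (Section 2.2), for which q_0 = [1] and q_k = [0] for k ≥ 1, and confirms that the necessary conditions (3.8) hold up to p = 4.

```python
import numpy as np
from math import factorial

# gamma_k and gammahat_k from (3.4)-(3.5); q = [q0, q1, ..., qp].
def gammas(c, A, U, B, V, q, p):
    g = [np.ones(len(c)) - U @ q[0]]                     # gamma_0
    gh = [q[0] - V @ q[0]]                               # gammahat_0
    for k in range(1, p + 1):
        g.append(c**k/factorial(k) - (A @ c**(k-1))/factorial(k-1) - U @ q[k])
        gh.append(sum(q[l]/factorial(k - l) for l in range(k + 1))
                  - (B @ c**(k-1))/factorial(k-1) - V @ q[k])
    return g, gh

# Classical RK4 as a GLM (compare Section 2.2): U = e, V = [1].
c = np.array([0.0, 0.5, 0.5, 1.0])
A = np.array([[0, 0, 0, 0], [0.5, 0, 0, 0], [0, 0.5, 0, 0], [0, 0, 1, 0]])
U = np.ones((4, 1)); B = np.array([[1/6, 1/3, 1/3, 1/6]]); V = np.eye(1)
q = [np.array([1.0])] + [np.array([0.0]) for _ in range(4)]

g, gh = gammas(c, A, U, B, V, q, 4)
print([float(np.max(np.abs(v))) for v in gh])  # gammahat_0..4 all vanish
```

Note that the stage vectors γ_2, γ_3 do not vanish for RK4 (its stage order is 1), which is exactly the situation the general order conditions of this section are designed to handle.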
This derivation is technically very complicated, and in this paper we restrict our attention to GLMs where all components of the vector of external approximations have the same order p. Although order conditions for such methods can be obtained using the theory presented in [51], for the sake of completeness we present the derivation of order conditions for this simpler class of GLMs.

In this section we will derive general order conditions for GLMs (1.5) for which the coefficient matrix V is such that there exists the limit

    Ṽ = lim_{n→∞} V^n.

This is the case, for example, for RK methods (2.2), for which V = [1], for DIMSIMs (2.3), for which V = e v^T, for DIMSIMs in Nordsieck representation (2.4), for TSRK methods (2.5), for GLMs with IRKS (2.9), and for GLMs (1.5) which are strictly zero-stable. As observed in Section 3, for the GLM (1.5) to have order p it is necessary that

    hd̂(x_{n+1}) = O(h^{p+1}),

which leads to the algebraic conditions (3.8) expressed in terms of the coefficients of the method. In order to obtain necessary and sufficient conditions for the method (1.5) to have order p we consider the global truncation errors q^{[n+1]} and q̂^{[n+1]} of the vectors of internal and external approximations Y^{[n+1]} and y^{[n+1]}, which are defined by

    q^{[n+1]} = y(x_n + ch) − Y^{[n+1]},  q̂^{[n+1]} = z(x_{n+1}) − y^{[n+1]},

n = 0, 1, …, N − 1. We also define the vector t^{[n+1]} by

    t^{[n+1]} = f(y(x_n + ch)) − f(Y^{[n+1]}),  n = 0, 1, …, N − 1.

Subtracting (1.5) (with m = 1) from (3.1) we obtain

    q^{[n+1]} = hA t^{[n+1]} + U q̂^{[n]} + hd(x_n + ch),             (4.1)

and

    q̂^{[n+1]} = hB t^{[n+1]} + V q̂^{[n]} + hd̂(x_{n+1}),             (4.2)

n = 0, 1, …, N − 1. By an easy induction argument (4.2) leads to

    q̂^{[n]} = V^n q̂^{[0]} + h Σ_{k=1}^{n−1} V^{n−k} B t^{[k]} + hB t^{[n]} + h Σ_{k=1}^{n} V^{n−k} d̂(x_k),   (4.3)

n = 1, 2, …, N, where q̂^{[0]} = z(x_0) − y^{[0]} is the error of the starting procedure. The above expression leads to the following general order criterion.

Theorem 4.1. Assume that there exists the limit Ṽ = lim_{n→∞} V^n.
Moreover, assume that q̂^{[0]} = O(h^p), and that

    hd̂(x_k) = O(h^{p+1}),  k = 1, 2, …, N.

Then the GLM (1.5) is convergent with order p, i.e.,

    q̂^{[n]} = O(h^p),  n = 1, 2, …, N,

if and only if the following conditions hold:

    B t^{[k]} = O(h^{p−1}),  k = 1, 2, …, N,                          (4.4)
    ṼB t^{[k]} = O(h^p),  k = 1, 2, …, N.                            (4.5)

Moreover, the global errors q^{[n+1]} of the internal stages are given by

    q^{[n+1]} = hA t^{[n+1]} + hd(x_n + ch) + O(h^p),  n = 0, 1, …, N − 1.    (4.6)

Proof. It follows from the assumptions of the theorem that the relation (4.3) corresponding to n = N takes the form

    q̂^{[N]} = hB t(x_N) + h Σ_{k=1}^{N−1} V^{N−k} B t(x_k) + O(h^p),

where we have written t(x_k) instead of t^{[k]}. Since k = (x_k − x_0)/h we have

    q̂^{[N]} = hB t(x_N) + h Σ_{k=1}^{N−1} V^{(X−x_k)/h} B t(x_k) + O(h^p),

and approximating the resulting sum by the corresponding Riemann integral we obtain

    q̂^{[N]} = hB t(x_N) + ∫_{x_0}^{X} V^{(X−s)/h} B t(s) ds + O(h^p),

where t(s) is the continuous analogue of t(x_k). Hence, for h → 0, we obtain

    q̂^{[N]} = hB t(x_N) + ∫_{x_0}^{X} ṼB t(s) ds + O(h^p).

This relation implies (4.4) and (4.5). The relation (4.6) follows easily from (4.1) and the relation q̂^{[n]} = O(h^p). □

The relations (4.4) and (4.5) are the general order conditions for GLMs (1.5). In what follows we will reformulate them in terms of the coefficients of the methods. Similarly as in the theory of RK methods [2, 4], we will need the following lemma for the recursive computation of order conditions.

Lemma 4.1. Assume that the function f is sufficiently smooth. Then

    t^{[n+1]} = Σ_{l=1}^{∞} G_l(x_n) (q^{[n+1]})^l                    (4.7)

with

    G_l(x_n) := diag( g_l(x_n + c_1 h), …, g_l(x_n + c_s h) ) = Σ_{k=0}^{∞} (1/k!) h^k Γ_c^k g_l^{(k)}(x_n),

where

    g_l(x) = ((−1)^{l+1}/l!) ∂^l f/∂y^l (y(x)),  Γ_c = diag(c_1, …, c_s).

Proof. We have

    t_i^{[n+1]} = f(y(x_n + c_i h)) − f(Y_i^{[n+1]})
                = f(y(x_n + c_i h)) − f( y(x_n + c_i h) + (Y_i^{[n+1]} − y(x_n + c_i h)) )
                = −Σ_{l=1}^{∞} (1/l!) (∂^l f/∂y^l)(y(x_n + c_i h)) (Y_i^{[n+1]} − y(x_n + c_i h))^l,

i = 1, 2, …, s, and using the definitions of q^{[n+1]} and g_l(x) we obtain

    t_i^{[n+1]} = Σ_{l=1}^{∞} ((−1)^{l+1}/l!) (∂^l f/∂y^l)(y(x_n + c_i h)) (q_i^{[n+1]})^l
                = Σ_{l=1}^{∞} g_l(x_n + c_i h) (q_i^{[n+1]})^l,

i = 1, 2, …, s. This is equivalent to (4.7). □

Introduce the notation

    u := q^{[n+1]} − hA t^{[n+1]} − hd(x_n + ch) + O(h^p),
    v := t^{[n+1]} − G_1(x_n) q^{[n+1]} − G_2(x_n) (q^{[n+1]})^2 − ⋯,  (4.8)

where (q^{[n+1]})^2 stands for componentwise exponentiation. Then

    det [ ∂u/∂t^{[n+1]}  ∂u/∂q^{[n+1]} ]       [ −hA        I                              ]
        [ ∂v/∂t^{[n+1]}  ∂v/∂q^{[n+1]} ] = det [  I   −G_1(x_n) − 2G_2(x_n) q^{[n+1]} − ⋯ ],

and it follows that this determinant, evaluated at t^{[n+1]} = 0, q^{[n+1]} = 0, h = 0, equals

    det [ 0  I        ]
        [ I  −G_1(x_n) ] ≠ 0.

Moreover, u and v are analytic in a neighborhood of (0, 0, 0). Therefore, it follows from the implicit function theorem that the system (u, v) = (0, 0), where u and v are defined by (4.8), has a unique solution (t^{[n+1]}, q^{[n+1]}) = (t^{[n+1]}(h), q^{[n+1]}(h)) in a neighborhood of h = 0, and the following Taylor series expansions exist:

    q^{[n+1]} = r_2(x_n) h^2 + r_3(x_n) h^3 + ⋯ + r_{p−1}(x_n) h^{p−1} + O(h^p),   (4.9)
    t^{[n+1]} = w_2(x_n) h^2 + w_3(x_n) h^3 + ⋯ + w_{p−1}(x_n) h^{p−1} + O(h^p),   (4.10)

with the h^0 and h^1 terms equal to zero. To see why r_1(x_n) = w_1(x_n) = 0, observe first that q^{[n+1]} = O(h) (compare (4.6)), which implies that t^{[n+1]} = O(h) (compare (4.7)). The formula (4.1) together with (3.12) now implies that q^{[n+1]} = O(h^2), and then, using (4.7) again, we obtain t^{[n+1]} = O(h^2). These observations lead to expansions of the form (4.9) and (4.10).

The order conditions (4.4) and (4.5) now reduce to

    B w_i(x_k) = 0,  i = 2, 3, …, p − 2,  k = 0, 1, …, N,             (4.11)

and

    ṼB w_{p−1}(x_k) = 0,  k = 0, 1, …, N.                             (4.12)

The vectors r_i(x_k) and w_i(x_k), i = 2, 3, …, p − 1, can be computed in the same way as in the theory of order conditions for RK methods [2, 4, 56]. We have the following result.

Theorem 4.2. The vectors r_i(x_k) and w_i(x_k), i = 2, 3, …, p − 1, satisfy the following recursions:

    r_i(x_k) = γ_i y^{(i)}(x_k) + A w_{i−1}(x_k),  w_1(x_k) := 0,     (4.13)

    w_i(x_k) = Σ_{l=0}^{i−2} (1/l!) Γ_c^l g_1^{(l)}(x_k) r_{i−l}(x_k)
             + Σ_{l=0}^{i−4} Σ_{m,n≥2, m+n+l=i} (1/l!) Γ_c^l g_2^{(l)}(x_k) ( r_m(x_k) · r_n(x_k) )
             + Σ_{l=0}^{i−6} Σ_{m,n,s≥2, m+n+s+l=i} (1/l!) Γ_c^l g_3^{(l)}(x_k) ( r_m(x_k) · r_n(x_k) · r_s(x_k) )
             + ⋯                                                      (4.14)

Proof. Substituting (4.9) and (4.10) into (4.6) and (4.7) and taking into account (3.13) we obtain

    Σ_{l=2}^{p−1} r_l(x_k) h^l = Σ_{l=2}^{p−1} γ_l y^{(l)}(x_k) h^l + A Σ_{l=2}^{p−2} w_l(x_k) h^{l+1} + O(h^p)
                               = Σ_{l=2}^{p−1} γ_l y^{(l)}(x_k) h^l + A Σ_{l=2}^{p−1} w_{l−1}(x_k) h^l + O(h^p),

where we have used the convention w_1(x_k) := 0. Comparing the h^i terms leads to the recurrence (4.13). We also have

    Σ_{l=2}^{p−1} w_l(x_k) h^l = ( g_1(x_k) I + hΓ_c g_1'(x_k) + (h^2/2!)Γ_c^2 g_1''(x_k) + ⋯ ) ( r_2(x_k)h^2 + r_3(x_k)h^3 + ⋯ )
        + ( g_2(x_k) I + hΓ_c g_2'(x_k) + (h^2/2!)Γ_c^2 g_2''(x_k) + ⋯ ) ( r_2(x_k)h^2 + r_3(x_k)h^3 + ⋯ )^2
        + ( g_3(x_k) I + hΓ_c g_3'(x_k) + (h^2/2!)Γ_c^2 g_3''(x_k) + ⋯ ) ( r_2(x_k)h^2 + r_3(x_k)h^3 + ⋯ )^3
        + ⋯ .

Comparing the h^i terms we obtain the recurrence relation (4.14). This completes the proof. □

5 Derivation of order conditions up to order six

The vectors r_i(x_k) and w_i(x_k) can be computed from the recursions (4.13) and (4.14) in exactly the same way as was done for RK methods [2, 4] and for TSRK methods [45, 49]. Once the vectors w_i(x_k), i = 2, 3, …, p − 1, are computed, the order conditions for GLMs (1.5) can be obtained by substituting these vectors into (4.11). In what follows we will illustrate this process by computing order conditions for GLMs (1.5) up to order p = 6.

For p = 6 the expanded form of the recursions (4.13) and (4.14) reads

    r_2 h^2 + r_3 h^3 + r_4 h^4 + r_5 h^5 = A w_2 h^3 + A w_3 h^4 + A w_4 h^5
        + γ_2 y'' h^2 + γ_3 y''' h^3 + γ_4 y^{(4)} h^4 + γ_5 y^{(5)} h^5 + O(h^6),

and

    w_2 h^2 + w_3 h^3 + w_4 h^4 + w_5 h^5
        = ( g_1 I + hΓ_c g_1' + (1/2!) h^2 Γ_c^2 g_1'' + (1/3!) h^3 Γ_c^3 g_1''' ) ( r_2 h^2 + r_3 h^3 + r_4 h^4 + r_5 h^5 )
        + ( g_2 I + hΓ_c g_2' ) ( r_2 h^2 + r_3 h^3 )^2 + O(h^6),

where r_i = r_i(x_k) and w_i = w_i(x_k). Comparing the successive powers of h leads to the following relations:

    r_2 = γ_2 y'',
    w_2 = g_1 r_2 = γ_2 g_1 y'',
    r_3 = γ_3 y''' + A w_2 = γ_3 y''' + Aγ_2 g_1 y'',
    w_3 = g_1 r_3 + Γ_c g_1' r_2 = γ_3 g_1 y''' + Aγ_2 g_1^2 y'' + Γ_c γ_2 g_1' y'',
    r_4 = γ_4 y^{(4)} + A w_3 = γ_4 y^{(4)} + Aγ_3 g_1 y''' + A^2 γ_2 g_1^2 y'' + AΓ_c γ_2 g_1' y'',
    w_4 = g_1 r_4 + Γ_c g_1' r_3 + (1/2) Γ_c^2 g_1'' r_2 + g_2 r_2^2
        = γ_4 g_1 y^{(4)} + Aγ_3 g_1^2 y''' + A^2 γ_2 g_1^3 y'' + AΓ_c γ_2 g_1 g_1' y''
          + Γ_c γ_3 g_1' y''' + Γ_c Aγ_2 g_1' g_1 y'' + (1/2) Γ_c^2 γ_2 g_1'' y'' + γ_2^2 g_2 (y'')^2,
    r_5 = γ_5 y^{(5)} + A w_4
        = γ_5 y^{(5)} + Aγ_4 g_1 y^{(4)} + A^2 γ_3 g_1^2 y''' + A^3 γ_2 g_1^3 y'' + A^2 Γ_c γ_2 g_1 g_1' y''
          + AΓ_c γ_3 g_1' y''' + AΓ_c Aγ_2 g_1' g_1 y'' + (1/2) AΓ_c^2 γ_2 g_1'' y'' + Aγ_2^2 g_2 (y'')^2,
    w_5 = g_1 r_5 + Γ_c g_1' r_4 + (1/2) Γ_c^2 g_1'' r_3 + (1/6) Γ_c^3 g_1''' r_2 + Γ_c g_2' r_2^2 + g_2 r_2 r_3 + g_2 r_3 r_2.

We have

    r_2 r_3 = γ_2 γ_3 y'' y''' + γ_2 Aγ_2 y'' g_1 y'',
    r_3 r_2 = γ_3 γ_2 y''' y'' + Aγ_2^2 g_1 (y'')^2,

and substituting the relations for r_5, r_4, r_3, r_2 and the above relations for r_2 r_3 and r_3 r_2 into the formula for w_5 we obtain

    w_5 = γ_5 g_1 y^{(5)} + Aγ_4 g_1^2 y^{(4)} + A^2 γ_3 g_1^3 y''' + A^3 γ_2 g_1^4 y''
        + A^2 Γ_c γ_2 g_1^2 g_1' y'' + AΓ_c γ_3 g_1 g_1' y''' + AΓ_c Aγ_2 g_1 g_1' g_1 y'' + (1/2) AΓ_c^2 γ_2 g_1 g_1'' y''
        + Aγ_2^2 g_1 g_2 (y'')^2 + Γ_c γ_4 g_1' y^{(4)} + Γ_c Aγ_3 g_1' g_1 y''' + Γ_c A^2 γ_2 g_1' g_1^2 y''
        + Γ_c AΓ_c γ_2 (g_1')^2 y'' + (1/2) Γ_c^2 γ_3 g_1'' y''' + (1/2) Γ_c^2 Aγ_2 g_1'' g_1 y'' + (1/6) Γ_c^3 γ_2 g_1''' y''
        + Γ_c γ_2^2 g_2' (y'')^2 + 2γ_2 γ_3 g_2 y'' y''' + γ_2 Aγ_2 g_2 y'' g_1 y'' + Aγ_2^2 g_2 g_1 (y'')^2.

Observe that the coefficients of g_1 g_2 (y'')^2 and g_2 g_1 (y'')^2 are the same and equal to Aγ_2^2.

The above expressions for r_i and w_i contain various combinations of g_i^{(j)} and y^{(j)}. These combinations are called recursive differentials [2, 4]. They play a role similar to that of elementary differentials in the Butcher theory of order conditions for RK methods, which is based on rooted trees and elementary weights.
Table 1: Recursive differentials and order conditions for p ≤ 5

    Order   Recursive differentials   Corresponding order conditions
    p = 1   y'                        γ̂_1 = 0
    p = 2   y''                       γ̂_2 = 0
    p = 3   y'''                      γ̂_3 = 0
            g_1 y''                   ṼBγ_2 = 0 or Bγ_2 = 0
    p = 4   y^{(4)}                   γ̂_4 = 0
            g_1 y'''                  ṼBγ_3 = 0 or Bγ_3 = 0
            g_1^2 y''                 ṼBAγ_2 = 0 or BAγ_2 = 0
            g_1' y''                  ṼBΓ_c γ_2 = 0 or BΓ_c γ_2 = 0
    p = 5   y^{(5)}                   γ̂_5 = 0
            g_1 y^{(4)}               ṼBγ_4 = 0 or Bγ_4 = 0
            g_1^2 y'''                ṼBAγ_3 = 0 or BAγ_3 = 0
            g_1^3 y''                 ṼBA^2 γ_2 = 0 or BA^2 γ_2 = 0
            g_1 g_1' y''              ṼBAΓ_c γ_2 = 0 or BAΓ_c γ_2 = 0
            g_1' y'''                 ṼBΓ_c γ_3 = 0 or BΓ_c γ_3 = 0
            g_1' g_1 y''              ṼBΓ_c Aγ_2 = 0 or BΓ_c Aγ_2 = 0
            g_1'' y''                 ṼBΓ_c^2 γ_2 = 0 or BΓ_c^2 γ_2 = 0
            g_2 (y'')^2               ṼBγ_2^2 = 0 or Bγ_2^2 = 0

Table 2: Recursive differentials and order conditions for p = 6

    Order   Recursive differentials              Corresponding order conditions
    p = 6   y^{(6)}                              γ̂_6 = 0
            g_1 y^{(5)}                          ṼBγ_5 = 0
            g_1^2 y^{(4)}                        ṼBAγ_4 = 0
            g_1^3 y'''                           ṼBA^2 γ_3 = 0
            g_1^4 y''                            ṼBA^3 γ_2 = 0
            g_1^2 g_1' y''                       ṼBA^2 Γ_c γ_2 = 0
            g_1 g_1' y'''                        ṼBAΓ_c γ_3 = 0
            g_1 g_1' g_1 y''                     ṼBAΓ_c Aγ_2 = 0
            g_1 g_1'' y''                        ṼBAΓ_c^2 γ_2 = 0
            g_1 g_2 (y'')^2, g_2 g_1 (y'')^2     ṼBAγ_2^2 = 0
            g_1' y^{(4)}                         ṼBΓ_c γ_4 = 0
            g_1' g_1 y'''                        ṼBΓ_c Aγ_3 = 0
            g_1' g_1^2 y''                       ṼBΓ_c A^2 γ_2 = 0
            (g_1')^2 y''                         ṼBΓ_c AΓ_c γ_2 = 0
            g_1'' y'''                           ṼBΓ_c^2 γ_3 = 0
            g_1'' g_1 y''                        ṼBΓ_c^2 Aγ_2 = 0
            g_1''' y''                           ṼBΓ_c^3 γ_2 = 0
            g_2' (y'')^2                         ṼBΓ_c γ_2^2 = 0
            g_2 y'' y'''                         ṼBγ_2 γ_3 = 0
            g_2 y'' g_1 y''                      ṼBγ_2 Aγ_2 = 0

Order conditions for GLMs (1.5) for which there exists a limit Ṽ = lim_{n→∞} V^n can now be obtained by imposing the condition

    hd̂(x_k) = O(h^{p+1}),  k = 1, 2, …, N,

compare Theorem 4.1, and equating to zero the coefficients of the recursive differentials in the expressions

    B w_i = 0,  i = 2, 3, …, p − 2,  ṼB w_{p−1} = 0,

compare (4.11) and (4.12). The recursive differentials and the resulting order conditions up to order p = 6 are listed in Tables 1 and 2. We always assume the preconsistency condition γ̂_0 = 0, or Vq_0 = q_0, compare (3.9), and the stage preconsistency condition γ_0 = 0, or Uq_0 = e, compare (3.11). Moreover, we will always assume the stage consistency condition γ_1 = 0, or Ae + Uq_1 = c, compare (3.12).
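The conditions in Table 1 can be checked mechanically for a given method. As an illustration (not from the paper), the following sketch verifies the p = 4 conditions for the classical RK4 method written as a GLM, for which Ṽ = V = [1] and U q_k = 0 for k ≥ 2, so that γ_k = c^k/k! − A c^{k−1}/(k−1)!:

```python
import numpy as np
from math import factorial

# Table 1, p = 4: besides gammahat_i = 0 (i <= 4) one needs
# B gamma_2 = 0, B gamma_3 = 0, B A gamma_2 = 0, B Gamma_c gamma_2 = 0.
# Illustration: classical RK4 as a GLM (Vtilde = V = [1], U q_k = 0 for k >= 2).
c = np.array([0.0, 0.5, 0.5, 1.0])
A = np.array([[0, 0, 0, 0], [0.5, 0, 0, 0], [0, 0.5, 0, 0], [0, 0, 1, 0]])
B = np.array([[1/6, 1/3, 1/3, 1/6]])

gamma = lambda k: c**k/factorial(k) - (A @ c**(k-1))/factorial(k-1)
g2, g3 = gamma(2), gamma(3)

# Gamma_c gamma_2 is just the componentwise product c * gamma_2.
conds = [B @ g2, B @ g3, B @ (A @ g2), B @ (c * g2)]
print([float(abs(x[0])) for x in conds])  # all four vanish: order p = 4
```

Note that γ_2 ≠ 0 and γ_3 ≠ 0 here (RK4 has stage order 1), so the order-4 property really does rest on the combinations Bγ_2, Bγ_3, BAγ_2, BΓ_cγ_2 vanishing, not on high stage order.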
In the last column of Tables 1 and 2, when two conditions are separated by 'or', the first condition refers to methods of order p, while the second condition refers to methods of order greater than p.

6 Application of the order conditions to special cases of GLMs

In this section we illustrate the application of the theory of order conditions derived in Sections 4 and 5 to verify the order p and stage order q of some classes of GLMs discussed in Section 2, and of some other GLMs from the literature on the subject.

6.1 Linear multistep methods

In this subsection we illustrate the derivation of order conditions for linear multistep methods with k = 3 and order p = 4. These methods take the form

    y_{n+1} = Σ_{j=1}^{3} α_j y_{n+1−j} + h Σ_{j=0}^{3} β_j f(y_{n+1−j}),  n = 2, …, N − 1,     (6.1)

with given starting values y_0, y_1, and y_2. For this method c = c_1 = 1, and it follows from Section 2.1 that

    q_0 = [1, 1, 1, 0, 0, 0]^T,
    q_1 = [0, −1, −2, 1, 1, 1]^T,
    q_2 = [0, 1/2, 2, 0, −1, −2]^T,
    q_3 = [0, −1/6, −4/3, 0, 1/2, 2]^T,
    q_4 = [0, 1/24, 2/3, 0, −1/6, −4/3]^T.

By imposing the order conditions γ̂_i = 0, i = 0, 1, 2, 3, 4, and the stage preconsistency and consistency conditions γ_0 = 0, γ_1 = 0, we obtain a two-parameter family of methods with coefficients

    α_3 = 1 − α_1 − α_2,
    β_0 = (9 − α_2)/24,             β_1 = (27 − 8α_1 + 5α_2)/24,
    β_2 = (27 − 32α_1 − 19α_2)/24,  β_3 = (9 − 8α_1 − 9α_2)/24.

For these methods one finds from formula (3.4) that γ_2 = γ_3 = γ_4 = 0. Hence, the method also has stage order q = 4, and the remaining conditions of order 4, i.e.,

    ṼBγ_3 = 0,  ṼBAγ_2 = 0,  ṼBΓ_c γ_2 = 0,

are automatically satisfied. Putting α_1 = 1 and α_2 = 0 leads to

    y_{n+1} = y_n + h Σ_{j=0}^{3} β_j f(y_{n+1−j}),  n = 2, …, N − 1,

with

    β_0 = 3/8,  β_1 = 19/24,  β_2 = −5/24,  β_3 = 1/24,

and we obtain the Adams-Moulton method with k = 3 and p = 4 [56].
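The two-parameter family of Section 6.1 can be checked numerically. The sketch below verifies, for a few choices of (α_1, α_2), that the coefficients above satisfy the classical linear multistep order conditions through order 4 (obtained by expanding (6.1) about x_{n+1}); the choice (1, 0) reproduces Adams-Moulton.

```python
import numpy as np
from math import factorial

# Coefficients of the k = 3, p = 4 family from Section 6.1.
def betas(a1, a2):
    return np.array([9 - a2,
                     27 - 8*a1 + 5*a2,
                     27 - 32*a1 - 19*a2,
                     9 - 8*a1 - 9*a2]) / 24

j_a = np.array([1.0, 2.0, 3.0])        # indices of alpha_1..alpha_3
j_b = np.array([0.0, 1.0, 2.0, 3.0])   # indices of beta_0..beta_3

for a1, a2 in [(1.0, 0.0), (0.0, 1.0), (0.3, -0.7)]:
    alpha = np.array([a1, a2, 1 - a1 - a2])   # alpha_3 = 1 - alpha_1 - alpha_2
    beta = betas(a1, a2)
    for l in range(1, 5):                     # order conditions C_l = 0, l = 1..4
        C = (-np.sum(alpha * (-j_a)**l) / factorial(l)
             - np.sum(beta * (-j_b)**(l - 1)) / factorial(l - 1))
        assert abs(C) < 1e-12
print("C_1..C_4 hold for all tested (alpha_1, alpha_2); (1, 0) gives Adams-Moulton")
```

The case (α_1, α_2) = (0, 1) in this test is Simpson's rule, another classical order-4 member of the family.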
6.2 Almost Runge-Kutta method

We consider the explicit Almost Runge-Kutta method with s = 4 internal and r = 3 external stages given as method (505a) of [16], whose coefficient tableau

[ A U ]
[ B V ]

is reported there. A straightforward computation shows that the matrix Ṽ = lim_{n→∞} V^n is given by

Ṽ = [ 1  1/6  0 ]
    [ 0   0   0 ]
    [ 0   0   0 ].

This method satisfies the order conditions

γ̂i = 0, i = 0, 1, 2, 3, 4,    γi = 0, i = 0, 1, 2,    ṼBγ3 = 0,

for c1 = c3 = c4 and c2 = c4 − 1/2. Hence, this method has order p = 4 and stage order q = 2, which is in agreement with [16]. It can also be verified that the vectors q0, q1, q2, q3, and q4 are given by

q0 = [1, 0, 0]^T,    q1 = [c4 − 1, 1, 0]^T,    q2 = [(c4 − 1)^2/2, c4 − 1, 1/2]^T,

q3 = [(c4 − 1)^3/6, (c4 − 1)^2/2, (2c4 − 3)/4]^T,

q4 = [q41, (c4 − 1)^3/6, (c4^2 − 9c4 + 7)/12]^T,

where c4 and q41 are free parameters. Observe that ṼBγ3 = 0 while Bγ3 ≠ 0.

6.3 G-symplectic method

We analyze the order and stage order of the G-symplectic method with r = s = 2 (see [17, 19]) defined by the abscissa vector

c = [1/2 + √3/6, 1/2 − √3/6]^T

and the coefficient matrices

[ A U ]   [ (3+√3)/6       0       1   −(3+2√3)/3 ]
[ B V ] = [  −√3/3     (3+√3)/6    1    (3+2√3)/3 ]
          [   1/2         1/2      1        0     ]
          [   1/2        −1/2      0       −1     ].

The coefficient matrix V of this method is nonsingular, and we have

V^{2k} = I, V^{2k+1} = V, k = 0, 1, . . . .

Hence, the limit of V^n as n → ∞ does not exist. However, by an argument similar to that given in the proof of Theorem 4.1, we can show that in the order conditions the matrix Ṽ can be replaced by the identity matrix I. In particular, it is possible to verify that this method satisfies the conditions

γ̂i = 0, i = 0, 1, 2, 3,    γi = 0, i = 0, 1, 2,

and that

q0 = [1, 0]^T,    q1 = [0, 0]^T,    q2 = [0, 1/(4√3)]^T,    q3 = [0, 0]^T.

Moreover, γ̂4 = 0 while

Bγ3 = [0, −(9 + 5√3)/108]^T ≠ 0.
This means that this method has exactly order p = 3 and stage order q = 2 with respect to the starting procedure

y^[0] = Sh y0 = q0 y(x0) + q1 h y′(x0) + q2 h^2 y′′(x0) + q3 h^3 y′′′(x0) + O(h^4).

However, it was proved by Butcher [19] (see also [22]) that this method has order p = 4 and stage order q = 2 with respect to a different starting procedure, which satisfies

y^[0] = Sh y0 = [ y(x0), (√3/12) h^2 y′′(x0) − ((9 + 5√3)/108) h^4 y^(4)(x0) + (√3/216) h^4 (∂f/∂y) y′′′(x0) ]^T + O(h^5).

It was also demonstrated in [19, 22] that this starting procedure can be generated by

y^[0] = Sh y0 = [ y0, (1/2)(Rh y0 + R−h y0 − y0) ]^T,

where Rh is the Runge-Kutta method which, when written as a GLM, takes the form reported in [17, 19].

6.4 DIMSIMs

Consider the class of DIMSIMs with s = r = 2, abscissa vector c = [0, 1]^T, and coefficient matrices

[ A U ]   [  λ    0  |  1   0 ]
[ B V ] = [ a21   λ  |  0   1 ]    (6.2)
          [ b11  b12 | v1  v2 ]
          [ b21  b22 | v1  v2 ],

where v1 + v2 = 1. Since Ṽ = V, this method has order p = 4 and stage order q = 2 if and only if

γ̂i = 0, i = 0, 1, 2, 3, 4,    γi = 0, i = 0, 1, 2,    VBγ3 = 0.

Solving these order and stage order conditions leads to a one-parameter family of methods with the following coefficients:

A = [ λ                                               0 ]
    [ (−144λ^3 + 204λ^2 − 102λ + 17)/(5 − 12λ)        λ ],

V = [ (5 − 12λ)/(6(1 − 2λ))    1/(6(1 − 2λ)) ]
    [ (5 − 12λ)/(6(1 − 2λ))    1/(6(1 − 2λ)) ],

B = [ (59 − 156λ)/(12(5 − 12λ))             (5 − 12λ)/12                                   ]
    [ (144λ^2 + 24λ − 29)/(12(12λ − 5))     (−1728λ^3 + 2160λ^2 − 828λ + 89)/(12(12λ − 5)) ],

with λ ≠ 1/2 and λ ≠ 5/12. Moreover, the vectors qi, i = 0, 1, 2, 3, 4, which define the starting procedure

y^[0] = Sh y0 = q0 y(x0) + q1 h y′(x0) + q2 h^2 y′′(x0) + q3 h^3 y′′′(x0) + q4 h^4 y^(4)(x0) + O(h^5),

are given by

q0 = [1, 1]^T,    q1 = [−λ, (−144λ^3 + 192λ^2 − 85λ + 12)/(12λ − 5)]^T,

q2 = [0, (1 − 2λ)/2]^T,    q3 = [1/24, (12λ − 5)/24]^T,    q4 = [q41, (1 + 12q41 − 2λ)/12]^T,

where q41 is a free parameter.
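The coefficients of this family can be cross-checked numerically. The sketch below is our illustration: the sample value λ = 1/4 is an arbitrary admissible choice, and the residual γ3 = c^3/3! − A c^2/2! − U q3 is formed by analogy with the stage consistency conditions quoted in Section 5. It confirms stage order 2 and the extra condition VBγ3 = 0 in floating point:

```python
import numpy as np

lam = 0.25   # sample value; any lam != 1/2, 5/12 works
c = np.array([0.0, 1.0])

A = np.array([[lam, 0.0],
              [(-144*lam**3 + 204*lam**2 - 102*lam + 17)/(5 - 12*lam), lam]])
v1 = (5 - 12*lam)/(6*(1 - 2*lam)); v2 = 1/(6*(1 - 2*lam))
V = np.array([[v1, v2], [v1, v2]])
B = np.array([[(59 - 156*lam)/(12*(5 - 12*lam)), (5 - 12*lam)/12],
              [(144*lam**2 + 24*lam - 29)/(12*(12*lam - 5)),
               (-1728*lam**3 + 2160*lam**2 - 828*lam + 89)/(12*(12*lam - 5))]])
U = np.eye(2)

q1 = np.array([-lam, (-144*lam**3 + 192*lam**2 - 85*lam + 12)/(12*lam - 5)])
q2 = np.array([0.0, (1 - 2*lam)/2])
q3 = np.array([1/24, (12*lam - 5)/24])

# stage consistency and stage order 2: gamma_1 = gamma_2 = 0
assert np.allclose(A @ np.ones(2) + U @ q1, c)
assert np.allclose(A @ c + U @ q2, c**2/2)
# the stage-order-3 residual gamma_3 is nonzero, but V B gamma_3 vanishes
g3 = c**3/6 - A @ (c**2/2) - U @ q3
assert not np.allclose(g3, 0)
assert np.allclose(V @ (B @ g3), 0)
```

Note that Bγ3 itself is nonzero here; only its product with the rank-one matrix V vanishes, which is exactly the separation between internal and external stage conditions that Tables 1 and 2 express.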
It can be verified using the Schur criterion that these methods are A-stable if and only if λ* < λ < (3 + √3)/6, where λ* ≈ 0.714633 is a root of

φ(λ) = 17 − 204λ + 840λ^2 − 1440λ^3 + 864λ^4.

Moreover, these methods are not L-stable for any value of λ.

6.5 DIMSIMs in Nordsieck form

We now consider the Nordsieck form of the DIMSIM illustrated in the previous subsection. This method can be derived from the representation (6.2), using the approach described in [21] (see also [45]). In this way we obtain a GLM with s = 2, r = 3, abscissa vector c = [0, 1]^T, and coefficient matrices

[ A U ]   [  λ    0  | u11 u12 u13 ]
[ B V ] = [ a21   λ  | u21 u22 u23 ]
          [ b11  b12 |  1   v2  v3 ]
          [ b21  b22 |  0   0   0  ]
          [ b31  b32 |  0   0   0  ],

with the matrix A the same as in method (6.2), and

U = [ 1   −λ                                             0          ]
    [ 1   (−144λ^3 + 192λ^2 − 85λ + 12)/(12λ − 5)        (1 − 2λ)/2 ],

B = [ (156λ − 59)/(12(12λ − 5))    5/12 ]
    [  0                            1   ]
    [ −1                            1   ],

v2 = −2(3λ − 1)/(12λ − 5),    v3 = 1/12,

with λ ≠ 5/12. Since Ṽ = V, this method has order p = 4 and stage order q = 2 if and only if

γ̂i = 0, i = 0, 1, 2, 3, 4,    γi = 0, i = 0, 1, 2,    VBγ3 = 0.

It can be easily verified that these conditions hold if the vectors qi, i = 0, 1, 2, 3, 4, which define the starting procedure

y^[0] = Sh y0 = q0 y(x0) + q1 h y′(x0) + q2 h^2 y′′(x0) + q3 h^3 y′′′(x0) + q4 h^4 y^(4)(x0) + O(h^5),

are given by

q0 = [1, 0, 0]^T,    q1 = [0, 1, 0]^T,    q2 = [0, 0, 1]^T,

q3 = [1/24, 0, −1/2]^T,    q4 = [q41, 0, 1/6]^T,

where q41 is a free parameter.

6.6 TSRK methods

The class of TSRK methods with s = 1 is defined by the abscissa c and by the tableau of coefficients

u    λ    b
ϑ    v    w

It can be represented as a GLM with one internal stage and three external stages (see (2.7)) with the following coefficients:

[ A U ]   [ λ | 1−u   u   b ]
[ B V ] = [ v | 1−ϑ   ϑ   w ]
          [ 1 |  0    0   0 ]
          [ 0 |  1    0   0 ].

This method has order p = q = 2 if and only if

γi = 0, i = 0, 1, 2,    γ̂i = 0, i = 0, 1, 2.    (6.3)

Solving these order and stage order conditions, we derive a family of methods depending on (λ, c, ϑ) with tableau

(−c^2 + 2c − 2λ)/(2c − 1)    λ                        (c^2 − 2λc + c − λ)/(2c − 1)
ϑ                            (−2ϑc − 2c + ϑ + 3)/2    (2ϑc + 2c + ϑ − 1)/2

for c ≠ 1/2.
This is in agreement with the conclusions of [45], Sec. 5.4.2. If we then look for a TSRK method with p = 3 and q = 2, i.e., if we impose the conditions (6.3) and γ̂3 = 0, we find a two-parameter family of methods with tableau

(−c^2 + 2c − 2λ)/(2c − 1)      λ                              (c^2 − 2λc + c − λ)/(2c − 1)
(6c^2 − 12c + 5)/(1 − 6c^2)    2(3c^2 − 6c + 2)/(1 − 6c^2)    2(3c^2 − 1)/(6c^2 − 1)

for c ≠ 1/2 and c ≠ ±√6/6. This is the same result as that reported in [45], Sec. 5.4.2. The vectors qi, i = 0, 1, 2, 3, which define the starting procedure, are given by (2.8) and

q2 = [0, 1/2, c − 1]^T,    q3 = [0, −1/6, (c − 1)^2/2]^T.

Some remarks are in order on the order conditions for TSRK methods derived following Theorem 4.1. In the representation (2.7) of a TSRK method as a GLM, the vector of external stages is given by

y^[n] = [ yn, yn−1, h f(Y^[n]) ]^T.

Therefore, if we impose that the method has order p with respect to the external stages y^[n], we are requiring that the method has order p with respect to the approximate solution yn and order p − 1 with respect to Y^[n]. Thus, coming back to the standard representation (2.5), we are imposing that the method has order p and stage order q ≥ p − 1. In other words, by applying the order conditions of Theorems 4.1 and 4.2 to TSRK methods we are able to find only methods of high stage order. Similar conclusions can be drawn for the representation (2.6).
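The behaviour of these TSRK families can be illustrated numerically. The sketch below is our illustration, not from the paper: it takes the p = q = 2 family above with the sample choice λ = 1/4, c = 0, ϑ = 0, which gives u = 1/2, b = 1/4, v = 3/2, w = −1/2, applies it to y′ = −y on [0, 1] (solving the linear implicit stage in closed form), and estimates the observed order, which should be close to 2:

```python
import math

# TSRK family member (p = q = 2): lambda = 1/4, c = 0, theta = 0, so that
#   u = 1/2, b = 1/4, v = 3/2, w = -1/2   (sample values, our choice)
lam, u, b, v, w = 0.25, 0.5, 0.25, 1.5, -0.5

def tsrk_error(h):
    """Integrate y' = -y on [0,1]; return the error at x = 1.

    Stage:  Y_new = lam*h*f(Y_new) + (1-u)*y_n + u*y_{n-1} + b*h*f(Y_old);
    for f(y) = -y the implicit stage is solved in closed form.
    Update: y_{n+1} = v*h*f(Y_new) + y_n + w*h*f(Y_old)   (theta = 0).
    """
    n = round(1/h)
    y = [1.0, math.exp(-h)]   # exact starting values y_0, y_1
    Y_old = 1.0               # Y^[1] ~ y(x_0 + c*h) = y(x_0) since c = 0
    for k in range(1, n):
        Y_new = ((1-u)*y[k] + u*y[k-1] - b*h*Y_old) / (1 + lam*h)
        y.append(y[k] - h*(v*Y_new + w*Y_old))
        Y_old = Y_new
    return abs(y[-1] - math.exp(-1.0))

e1, e2 = tsrk_error(0.1), tsrk_error(0.05)
observed_order = math.log2(e1/e2)   # close to p = 2
```

For this choice the update reduces, up to the O(h^3) stage error, to a two-step Adams-Bashforth-type formula, which is why second-order convergence is observed.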
6.7 GLMs with IRKS

The GLM with inherent Runge-Kutta stability (IRKS) with s = r = 3 illustrated in [45] has abscissa vector c = [0, 1/2, 1]^T; its coefficient matrices

[ A U ]
[ B V ]

are reported there. It can be easily verified that the method has order p = q = 2, as stated in [45], since

γ̂i = 0, i = 0, 1, 2,    γi = 0, i = 0, 1, 2.

Moreover, it also has p = q = 3 with respect to the starting procedure defined by the vectors qi, i = 0, 1, 2, 3, with q0, q1, q2 equal to the vectors of the canonical basis of R^3 and

q3 = [−194/11907, −676/12287, 47/2351]^T,

since γ3 = γ̂3 = 0. The stability polynomial of this method takes the form

p(z, w) = w^2 (w − R(z)),

with R(z) = P(z)/(4 − z)^4, where P(z) is a polynomial of degree 4. For GLMs with IRKS we are not able to give examples of methods with stage order different from the order of the method, since these methods have p = q by design.

7 Concluding remarks

We derived general order conditions for GLMs with the aim of obtaining separate conditions for the external and the internal stages. Such conditions leave more freedom in the choice of method parameters, since one can use a method of low stage order and exploit the remaining free parameters to obtain desirable properties, such as strong stability, G-symplecticity, or the conservation of some properties of the analytical solution. Starting from the main theoretical results, we derived order conditions up to order six and applied these conditions to various classes of GLMs.

Acknowledgements. The results reported in this paper were obtained during the visit of the first author to Arizona State University from January to May of 2013. This author wishes to express her gratitude to the School of Mathematical and Statistical Sciences for its hospitality during this visit.

References

[1] P.
Albrecht, Numerical treatment of O.D.E.s: the theory of A-methods, Numer. Math. 47(1985), 59–87.
[2] P. Albrecht, A new theoretical approach to Runge-Kutta methods, SIAM J. Numer. Anal. 24(1987), 391–406.
[3] P. Albrecht, Elements of a general theory of composite integration methods, Appl. Math. Comput. 31(1989), 1–17.
[4] P. Albrecht, The Runge-Kutta theory in a nutshell, SIAM J. Numer. Anal. 33(1996), 1712–1735.
[5] P. Albrecht, The common basis of the theories of linear cyclic methods and Runge-Kutta methods, Appl. Numer. Math. 22(1996), 3–21.
[6] Z. Bartoszewski and Z. Jackiewicz, Construction of two-step Runge-Kutta methods of high order for ordinary differential equations, Numer. Algorithms 18(1998), 51–70.
[7] Z. Bartoszewski and Z. Jackiewicz, Nordsieck representation of two-step Runge-Kutta methods for ordinary differential equations, Appl. Numer. Math. 53(2005), 149–163.
[8] Z. Bartoszewski and Z. Jackiewicz, Explicit Nordsieck methods with extended stability regions, Appl. Math. Comput. 218(2012), 6056–6066.
[9] S. Beck, R. Weiner, H. Podhaisky, and B.A. Schmitt, Implicit peer methods for large stiff ODE systems, J. Appl. Math. Comput. 38(2012), 389–406.
[10] M. Braś and A. Cardone, Construction of efficient general linear methods for non-stiff differential systems, Math. Model. Anal. 17(2012), 171–189.
[11] M. Braś, A. Cardone, and R. D'Ambrosio, Implementation of explicit Nordsieck methods with inherent quadratic stability, Math. Model. Anal. 18(2013), 289–307.
[12] K. Burrage and J.C. Butcher, Non-linear stability of a general class of differential equation methods, BIT 20(1980), 185–203.
[13] J.C. Butcher, On the convergence of numerical solutions to ordinary differential equations, Math. Comput. 20(1966), 1–10.
[14] J.C. Butcher, The Numerical Analysis of Ordinary Differential Equations. Runge-Kutta and General Linear Methods, John Wiley & Sons, Chichester, New York 1987.
[15] J.C. Butcher, Diagonally-implicit multi-stage integration methods, Appl.
Numer. Math. 11(1993), 347–363.
[16] J.C. Butcher, Numerical Methods for Ordinary Differential Equations, John Wiley & Sons, Chichester 2003.
[17] J.C. Butcher, General linear methods, Acta Numerica 15(2006), 157–256.
[18] J.C. Butcher, Thirty years of G-stability, BIT 46(2006), 479–489.
[19] J.C. Butcher, Numerical Methods for Ordinary Differential Equations, John Wiley & Sons, Chichester 2008.
[20] J.C. Butcher, P. Chartier, and Z. Jackiewicz, Nordsieck representation of DIMSIMs, Numer. Algorithms 16(1997), 209–230.
[21] J.C. Butcher, P. Chartier, and Z. Jackiewicz, Experiments with a variable-order type 1 DIMSIM code, Numer. Algorithms 22(1999), 237–261.
[22] J.C. Butcher, Y. Habib, A.T. Hill, and T.J.T. Norton, The control of parasitism in G-symplectic methods, manuscript.
[23] J.C. Butcher and A.T. Hill, Linear multistep methods as irreducible general linear methods, BIT 46(2006), 5–19.
[24] J.C. Butcher and Z. Jackiewicz, Diagonally implicit general linear methods for ordinary differential equations, BIT 33(1993), 452–472.
[25] J.C. Butcher and Z. Jackiewicz, Construction of diagonally implicit general linear methods of type 1 and 2 for ordinary differential equations, Appl. Numer. Math. 21(1996), 385–415.
[26] J.C. Butcher and Z. Jackiewicz, Implementation of diagonally implicit multistage integration methods for ordinary differential equations, SIAM J. Numer. Anal. 34(1997), 2119–2141.
[27] J.C. Butcher and Z. Jackiewicz, Construction of high order diagonally implicit multistage integration methods for ordinary differential equations, Appl. Numer. Math. 27(1998), 1–12.
[28] J.C. Butcher and Z. Jackiewicz, A reliable error estimation for diagonally implicit multistage integration methods, BIT 41(2001), 656–665.
[29] J.C. Butcher and Z. Jackiewicz, Error estimation for Nordsieck methods, Numer. Algorithms 31(2002), 75–85.
[30] J.C. Butcher and Z. Jackiewicz, A new approach to error estimation for general linear methods, Numer. Math. 95(2003), 487–502.
[31] J.C. Butcher and Z. Jackiewicz, Construction of general linear methods with Runge-Kutta stability properties, Numer. Algorithms 36(2004), 53–72.
[32] J.C. Butcher and Z. Jackiewicz, Unconditionally stable general linear methods for ordinary differential equations, BIT 44(2004), 557–570.
[33] J.C. Butcher, Z. Jackiewicz, and H.D. Mittelmann, A nonlinear optimization approach to the construction of general linear methods of high order, J. Comput. Appl. Math. 81(1997), 181–196.
[34] J.C. Butcher, Z. Jackiewicz, and W.M. Wright, Error propagation for general linear methods for ordinary differential equations, J. Complexity 23(2007), 560–580.
[35] J.C. Butcher and W.M. Wright, The construction of practical general linear methods, BIT 43(2003), 695–721.
[36] A. Cardone and Z. Jackiewicz, Explicit Nordsieck methods with quadratic stability, Numer. Algorithms 60(2012), 1–25.
[37] A. Cardone, Z. Jackiewicz, and H. Mittelmann, Optimization-based search for Nordsieck methods of high order with quadratic stability polynomials, Math. Model. Anal. 17(2012), 293–308.
[38] A. Cardone, Z. Jackiewicz, A. Sandu, and H. Zhang, Extrapolation-based implicit-explicit general linear methods for ordinary differential equations, Numer. Algorithms, 1–23 (2013), to appear.
[39] A. Cardone, Z. Jackiewicz, A. Sandu, and H. Zhang, Extrapolated implicit-explicit Runge-Kutta methods, manuscript.
[40] K. Dekker and J.G. Verwer, Stability of Runge-Kutta Methods for Stiff Nonlinear Differential Equations, North-Holland, Amsterdam, New York, Oxford 1984.
[41] R. Garrappa, Order conditions for Volterra Runge-Kutta methods, Appl. Numer. Math. 60(2010), 561–573.
[42] E. Hairer, S.P. Nørsett, and G. Wanner, Solving Ordinary Differential Equations I: Nonstiff Problems, Springer-Verlag, New York 1993.
[43] E. Hairer and G. Wanner, Solving Ordinary Differential Equations II: Stiff and Differential-Algebraic Problems, Springer-Verlag, Berlin, Heidelberg, New York 1996.
[44] P.
Henrici, Discrete Variable Methods in Ordinary Differential Equations, Wiley, New York, 1962.
[45] Z. Jackiewicz, General Linear Methods for Ordinary Differential Equations, John Wiley, Hoboken, New Jersey 2009.
[46] Z. Jackiewicz, H. Podhaisky, and R. Weiner, Construction of highly stable two-step W-methods for ordinary differential equations, J. Comput. Appl. Math. 167(2004), 389–403.
[47] Z. Jackiewicz, R. Renaut, and A. Feldstein, Two-step Runge-Kutta methods, SIAM J. Numer. Anal. 28(1991), 1165–1182.
[48] Z. Jackiewicz, R. Renaut, and M. Zennaro, Explicit two-step Runge-Kutta methods, Appl. Math. 40(1995), 433–456.
[49] Z. Jackiewicz and S. Tracogna, A general class of two-step Runge-Kutta methods for ordinary differential equations, SIAM J. Numer. Anal. 32(1995), 1390–1427.
[50] Z. Jackiewicz and S. Tracogna, Variable stepsize continuous two-step Runge-Kutta methods for ordinary differential equations, Numer. Algorithms 12(1996), 347–368.
[51] Z. Jackiewicz and R. Vermiglio, General linear methods with external stages of different orders, BIT 36(1996), 688–712.
[52] Z. Jackiewicz and J.H. Verner, Derivation and implementation of two-step Runge-Kutta pairs, Japan J. Indust. Appl. Math. 19(2002), 227–248.
[53] Z. Jackiewicz and M. Zennaro, Variable step size explicit two-step Runge-Kutta methods, Math. Comput. 59(1992), 421–438.
[54] S. Jebens, O. Knoth, and R. Weiner, Partially implicit peer methods for the compressible Euler equations, J. Comput. Phys. 230(2011), 4955–4974.
[55] J.D. Lambert, Computational Methods in Ordinary Differential Equations, Wiley, New York, 1973.
[56] J.D. Lambert, Numerical Methods for Ordinary Differential Systems, Wiley, New York, 1991.
[57] B.A. Schmitt, R. Weiner, and S. Jebens, Parameter optimization for explicit parallel peer two-step methods, Appl. Numer. Math. 59(2009), 769–782.
[58] L.F. Shampine, Numerical Solution of Ordinary Differential Equations, Chapman & Hall, New York, 1994.
[59] L.F. Shampine and M.K.
Gordon, Computer Solution of Ordinary Differential Equations: The Initial Value Problem, W.H. Freeman, San Francisco, 1975.
[60] S. Tracogna, A general class of two-step Runge-Kutta methods for ordinary differential equations, Ph.D. thesis, Arizona State University, Tempe, 1996.
[61] S. Tracogna, Implementation of two-step Runge-Kutta methods for ordinary differential equations, J. Comput. Appl. Math. 76(1997), 113–136.
[62] S. Tracogna and B. Welfert, Two-step Runge-Kutta: Theory and practice, BIT 40(2000), 775–799.
[63] R. Weiner, K. Biermann, B.A. Schmitt, and H. Podhaisky, Explicit two-step peer methods, Comput. Math. Appl. 55(2008), 609–619.
[64] R. Weiner, B.A. Schmitt, H. Podhaisky, and S. Jebens, Superconvergent explicit two-step peer methods, J. Comput. Appl. Math. 223(2009), 753–764.
[65] W. Wright, General linear methods with inherent Runge-Kutta stability, Ph.D. thesis, The University of Auckland, New Zealand, 2002.
[66] W. Wright, Explicit general linear methods with inherent Runge-Kutta stability, Numer. Algorithms 31(2002), 381–399.