IDENTIFICATION CONDITIONS IN SIMULTANEOUS SYSTEMS OF COINTEGRATING EQUATIONS OF HIGHER ORDER 1 Rocco Mosconi2 and Paolo Paruolo2,3 This paper discusses identification of systems of simultaneous cointegrating equations with integrated variables of order two or higher. Rank and order conditions for identification are provided for general linear restrictions, as well as for equation-by-equation constraints. The conditions are illustrated on models of aggregate consumption with liquid assets and on systems of equations for inventories. Keywords: Identification, (Multi-)Cointegration, I(d), Stocks and flows, Inventory models. 1. INTRODUCTION The identification problem of systems of simultaneous equations (SSE) lies at the heart of classical econometrics, see e.g. Koopmans (1949). Rank (and order) conditions for the identification of these systems provide necessary and sufficient (or simply necessary) conditions under which it is possible to estimate the structural form of economic interest. These conditions are well summarized in Fisher (1966) and Sargan (1988). Identification is a key condition for connecting an SSE with a reference theoretical economic model; as such, its importance is not limited to parameter estimability, but extends to the very possibility of considering an SSE as a system of economic interest. Moreover, the comparison of the structural and reduced forms, and the resulting test of over-identifying restrictions, forms the basis of checks of the congruency of economic models, as advocated inter alia by Hendry (1993, 2002). Simultaneous systems of cointegrating (CI) equations have revived interest in SSE over the last three decades, especially for variables integrated of order 1, I(1), see Engle and Granger (1987). Systems of CI equations provide statistical counterparts to the economic concept of equilibrium; identification conditions are hence important to distinguish when different equations correspond to different stable economic-equilibrium relations.
For I(1) simultaneous systems of CI equations, here indicated as I(1) SSE, the rank and order conditions for identification coincide with the classical ones, see e.g. Saikkonen (1993), Davidson (1994) and Johansen (1995a). In this paper we discuss identification for SSE with integrated variables of order higher than 1. Simultaneous CI systems with variables integrated of order 2, or I(2) SSE, have also been used to accommodate models with stock and flow variables, see Hendry and von Ungern-Sternberg (1981) and Granger and Lee (1989). The relevance of a coherent economic framework for both stock and flow variables is well documented in econometrics, see e.g. Klein (1950). Several flow variables, such as Gross Domestic Product, have been found to be well described as I(1) variables; if this is the case, then the corresponding stocks are I(2) by construction. Inventory models, for example, involve stock and flow variables, see e.g. Arrow et al. (1951). Additional examples are given by models of aggregate consumption, income and wealth, see Stone (1966, 1973), of public debt, government expenditure and revenues, and of the (stock of) mortgage loans and periodic repayments. These examples highlight the potential interest in I(2) SSE for all models with stock and flow variables.
Rocco Mosconi: Dipartimento di Ingegneria Gestionale, Politecnico di Milano, Piazza L. da Vinci 32, 20133 Milano, Italy; Phone: +39 0223992747; Fax: +39 02700423151. Email: [email protected]
Corresponding author: Paolo Paruolo: Econometrics and Applied Statistics Unit, European Commission Joint Research Centre, Via E. Fermi 2749, I-21027 Ispra (VA), Italy; Phone: +39 0332785642; Fax: +39 0332785733. Email: [email protected]
1 October 10, 2014. JEL classification: C32, E32.
2 Partial financial support from MUR Grant Cofin2006-13-1140 and Cofin2010J3LZEN is gratefully acknowledged.
3 Partial financial support from University of Insubria FAR funds is gratefully acknowledged.
A different rationale for I(2) models is provided by the literature on integral control mechanisms in economics, initiated by Phillips (1954, 1956, 1957). Here, variables are cumulated in order to build an ‘integral stabilization policy’, or integral control. Haldrup and Salmon (1998) discuss the relations of integral control mechanisms with I(2) systems. In the rest of the paper we indicate ‘stock variables’, or stocks – i.e. cumulated levels – as Xt, and ‘flow variables’, or flows – i.e. levels – as ∆Xt, where ∆ is the difference operator. The representation of I(2) systems has been discussed in Stock and Watson (1993) (henceforth SW) and Johansen (1992). In I(2) systems, CI equations may involve both stock and flow variables, and they represent dynamic equilibrium relations. These equations are called ‘multi-cointegrating’ relations, see Granger and Lee (1989) and Engsted and Johansen (1999). They are a special case of ‘polynomial-cointegration’ relations, as introduced by Engle and Yoo (1991). A different type of CI equation consists of linear combinations of flow variables only; such equations represent balancing equations for flows, as in balanced growth models. Below, we label these CI equations as ‘proportional control’ relations. The presence of multi-CI and proportional control CI relations gives a richer structure in I(2) SSE. In fact, one key feature of I(2) systems is that the first differences of the multi-CI equations also contain proportional control terms; moreover, linearly combining multi-CI equations and proportional control equations results in alternative and equivalent multi-CI equations. As shown below, this gives rise to an identification problem that differs from the one encountered in I(1) systems. Inference on systems of CI equations with I(2) variables has typically been based on reduced forms; see inter alia Boswijk (2000, 2010), Johansen (1995a, 1997, 2006), Kitamura (1995), Kurita et al.
(2011), Paruolo (2000), Paruolo and Rahbek (1999), Rahbek et al. (1999) and Stock and Watson (1993). A notable exception is given by Johansen et al. (2010), who noted that the identification of the coefficients of the stock variables in multi-CI equations, β say, is identical to the identification of I(1) SSE, see their Section 4.2. The identification of the structural form of the remaining coefficients in the multi-CI equations and of the other CI equations has instead been left undiscussed. The identification problem for I(2) SSE thus appears not to have been (fully) addressed in the literature; this is the purpose of the present paper. We provide rank and order conditions for identification under general linear restrictions. The leading special case of linear equation-by-equation constraints is also discussed. These rank and order conditions hold in general, irrespective of the subsequent estimation and inference method of choice and of the type of model. In particular, they apply to Vector AutoRegressive (VAR) processes, employed e.g. in Johansen (1996), as well as to linear processes, employed in SW.1 Moreover, when the rank conditions are applied to the triangular form of SW, or to similar formulations in line with the ones discussed in Johansen (1996), it is found that both are just-identifying. These identification results are also extended to the case of SSE involving I(d) variables, d ≥ 2, or I(d) SSE. Here we show how the usual formulation of the rank condition needs to be amended to account for the specific identification problem encountered in higher order systems.
1 SW considered a linear process setup and discussed the triangular form for systems with I(2) variables. Boswijk (2000) discussed the relation between the triangular form and other formulations of the error correction terms in I(2) VAR systems.
The rank and order conditions provided here are also useful in sequential identification schemes, such as the one suggested in Johansen et al. (2010). There, the CI coefficients β multiplying the stock variables in the multi-CI relations are identified first, leaving all other CI coefficients unrestricted. Even when the identification of β has been checked, one needs to verify the identification of the other CI coefficients; the correct rank conditions for identification of these remaining parameters are the ones given here. As illustrated in Saikkonen (1993) for I(1) SSE, the triangular form may be used as a starting point to develop minimum distance estimators for over-identified structural SSE, “assuming that the matrix [of structural parameters] satisfies the rank condition, entirely similar to that in standard simultaneous models”, see Saikkonen (1993), page 21. Remark that the main results in this paper show that the rank condition for the I(2) model has relevant specificities with respect to the I(1) case; the present results are therefore instrumental, inter alia, for extending Saikkonen’s approach to the case of over-identified I(2) SSE. The rest of the paper is organized as follows: Section 2 reports a motivating example; Section 3 introduces the I(2) SSE; Section 4 discusses rank and order conditions, first for general affine restrictions and subsequently for the special case of equation-by-equation constraints. Section 5 relates the present results to triangular forms, while Section 6 illustrates the use of the rank and order conditions in examples of an aggregate consumption function with liquid assets and in a model of inventories. Section 7 discusses identification for higher order systems. Section 8 concludes. Proofs for I(2) SSE are placed in Appendix A, while proofs for I(d) SSE are placed in Appendix B. In the following, a := b and b =: a indicate that a is defined by b; (a : b) indicates the matrix obtained by horizontally concatenating a and b.
For any full column rank matrix H, H̄ indicates H(H′H)⁻¹ and H⊥ indicates a basis of the orthogonal complement of the space spanned by the columns of H. vec is the column stacking operator, ⊗ is the Kronecker product, and blkdiag(A1, . . . , An) is a block-diagonal matrix with A1, . . . , An as diagonal blocks. We use the notation Xt = I(d) to mean that Xt is a vector process integrated of order d, i.e. that ∆^d Xt is a stationary linear process with a non-zero moving average (MA) impact matrix, where d is an integer, and initial values are chosen appropriately.2

2. A MOTIVATING EXAMPLE

This section reports an example on sales and inventories, taken from Granger and Lee (1989); it is used here to show how the identification problem arises in models with stocks and flows. Let st and qt represent sales and production of a (possibly composite) good. Sales st are market-driven and trending; in particular, Granger and Lee assume that they are I(1). Production qt is chosen to meet demand st, i.e. st and qt have the same trend. Hence zt := qt − st, the change in inventory, is stationary. This corresponds to xt := (qt : st)′ being CI with cointegrating vector (1 : −1)′, i.e.

(2.1)   (1 : −1)′ xt = u1t,

where u·t indicates a generic stationary process. The stock of inventory Zt = ∑_{i=1}^t zi + Z0 can be expressed in terms of the cumulated sales St = ∑_{i=1}^t si + S0 and the cumulated production Qt = ∑_{i=1}^t qi + Q0. Because qt and st are assumed to be I(1), Qt and St are I(2), and the previous CI relation of xt := (qt : st)′ in (2.1) corresponds to Xt := (Qt : St)′ being CI with cointegrating vector (1 : −1)′, i.e. (1 : −1)′ Xt = I(1). The principle of inventory proportionality anchors the inventory stock Zt to a fraction of sales st, i.e. it implies Zt = a st + u0t.
2 This is in line with the definition of I(0) in Johansen (1996).
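The stock–flow construction in this example can be reproduced in a short simulation (a minimal sketch assuming numpy; the value a = 0.4, the sample size and the Gaussian shocks are illustrative assumptions, not part of Granger and Lee's example):

```python
import numpy as np

rng = np.random.default_rng(0)
T, a = 20_000, 0.4                          # illustrative sample size and inventory ratio

s = 100.0 + np.cumsum(rng.normal(size=T))   # sales: I(1) random walk
Z = a * s + rng.normal(scale=0.5, size=T)   # inventory stock anchored to sales
z = np.diff(Z, prepend=Z[0])                # change in inventory (stationary)
q = s + z                                   # production = sales + inventory change

Q, S = np.cumsum(q), np.cumsum(s)           # cumulated production and sales, both I(2)

# multi-CI combination (1 : -1 : -a : 0)'(X_t' : dX_t')' with X_t = (Q_t : S_t)'
u = Q - S - a * q
print(np.std(u) / np.std(Q - S))            # small: u is stationary while Q - S is I(1)
```

The dispersion of the multi-CI combination u stays of the same order as that of the underlying shocks, while the dispersion of the I(1) combination Q − S grows with the sample, which is the numerical counterpart of multi-cointegration.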
Because Zt and st are I(1), the relation Zt = a st + u0t is a CI relation; it involves the stock variables Xt and the flow variables xt = ∆Xt. In this case, Granger and Lee define Xt := (Qt : St)′ to be multi-cointegrated, with ‘multi-CI vector’ (1 : −1 : −a : 0)′, i.e.

(2.2)   (1 : −1 : −a : 0) (Xt′ : ∆Xt′)′ = u0t.

The present paper investigates the following question: is the multi-CI vector in (2.2) unique (with or without the 0 restriction in the last entry)? This is an instance of the identification problem addressed in the paper. In fact, the set of CI relations (2.1) and (2.2) forms a system of two equations,

(2.3)   ( 1  −1  −a   0 ) ( Xt  )
        ( 0   0   1  −1 ) ( ∆Xt )  =  I(0).

Here we have collected the stationary processes in the I(0) term.3 Pre-multiplication by the following 2 × 2 matrix with generic b

        ( 1  −b )
        ( 0   1 )

gives a system of equations with (1 : −1 : −(a + b) : b)′ in place of (1 : −1 : −a : 0)′ as multi-CI relation. Hence, without a 0 restriction on the last entry of this vector, the first equation, and hence the system, is not identified.

3. THE I(2) SIMULTANEOUS SYSTEM OF EQUATIONS

In this section we introduce the I(2) SSE. Let Yt be a p × 1 vector of I(2) variables. Let also Xt := (Yt′ : Dt′)′ be an n × 1 vector, where Dt is an (n − p) × 1 vector of deterministic components; SW take Dt = t² with n = p + 1.4 Next consider the set of possible CI relationships involving Xt and ∆Xt, both of which contain nonstationary variables; hence one needs to consider the variable vector (Xt′ : ∆Xt′)′.5
3 If the error term here were I(−1), one could cumulate the variables once more, obtaining formally a system of equations of higher order, see Section 7 below.
4 When there are no deterministic components we set n = p and Xt = Yt.
5 Note that deterministic components are included in both Xt and ∆Xt.
Consider first a set of r < p equations of the form

(3.1)   β′Xt + υ′∆Xt = (β′ : υ′) (Xt′ : ∆Xt′)′
= µ0t + u0t, where β and υ are n × r and β is of full column rank r. Hereafter µit, i = 0, 1, denote deterministic vectors, containing linear combinations of terms of the form ∆^u Dt, u ≥ 2. When Dt = t² as in SW, one has µt = µ, a constant vector. When some rows in υ′ are equal to 0, the corresponding rows in β′ describe CI relations that reduce the order of integration of Xt from 2 to 0, see (3.1). When, instead, some row in υ′ is non-zero, the corresponding equation involves both stocks and flows, and it is hence a multi-CI equation. Remark that eq. (3.1) gives the structural form corresponding to eq. (5.1c) in SW’s representation. Similarly, it gives the structural form corresponding to eq. (2.4) in Johansen’s representation, see Johansen (1992). The assumption that β′ has full row rank r is similar to the requirement of classical SSE that there is a set of r variables for which the system can be solved. In other words, if β′ were not of full row rank, one could reduce the number of equations correspondingly. It is simple to observe that one needs r < p, because r = p would contradict the assumption that Yt = I(2). The first differences of eq. (3.1), β′∆Xt + υ′∆²Xt, are also stationary; this implies that β′∆Xt is stationary, i.e. that β′∆Xt contains r CI relations. Moreover, other CI relations involving only ∆Xt can be present, in the form

(3.2)   γ′∆Xt = (0 : γ′) (Xt′ : ∆Xt′)′ = µ1t + u1t,

where γ is n × s and of full column rank. This gives the structural form corresponding to eq. (5.1b) in SW’s representation and to eq. (2.2) in Johansen’s representation. Similarly to β′ above, we assume that γ′ has been reduced to be of full row rank s. Moreover, γ′ needs to be linearly independent of β′, otherwise the SSE composed of β′∆Xt and γ′∆Xt would contain some redundant equations; this implies that s < p − r, see SW and Johansen (1992). Hence ϕ := (γ : β) contains the CI relations of ∆Xt.
Collecting terms, we find that the following system of k := 2r + s stationary SSE is relevant for the discussion of identification in I(2) cointegrated systems:

(3.3)   ζ′ ( Xt  )   ( β′        υ′     ) ( Xt  )
           ( ∆Xt ) = ( 0     (γ : β)′   ) ( ∆Xt )  =  µt + ut,

where ζ′ indicates the matrix of CI SSE coefficients, µt := (µ0t′ : µ1t′ : ∆µ0t′)′ and ut = I(0); we call (3.3) an I(2) SSE. Remark that ζ′ is block upper triangular and contains cross-equation restrictions, given by the presence of β′ in the first and third blocks of rows. The identification problem can now be presented as follows. An equivalent I(2) SSE is obtained by premultiplying eq. (3.3) by Q′, with

(3.4)   Q′ := ( Q00′   Q10′ )          Q11′ := ( Q1γ′   Q1ϕ′ )
              (  0     Q11′ ),                 (  0     Q00′ ),

where Q00 and Q1γ are non-singular square matrices of order r and s respectively, and we let ℓ = r² + (r + s)² indicate the number of generically non-zero elements of Q. In fact, the coefficient matrices in

        Q′ ( β′       υ′     ) ( Xt  )   ( β°′        υ°′      ) ( Xt  )
           ( 0    (γ : β)′   ) ( ∆Xt ) = ( 0     (γ° : β°)′    ) ( ∆Xt )

have the same 0- and cross-equation restrictions, where β° = βQ00, γ° = γQ1γ + βQ1ϕ, υ° = υQ00 + (γ : β)Q10. This gives rise to the identification problem for SSE with I(2) variables. We observe that this identification problem is different from the one encountered in I(1) systems; in that case, the observationally equivalent structures are induced by any non-singular matrix Q, see Saikkonen (1993), Davidson (1994) and Johansen (1995a). Here, instead, the 0- and cross-equation restrictions imply that Q satisfies the structure in (3.4). Hence, unlike in I(1) systems, one needs to discuss appropriate rank and order conditions for identification that differ from the classical ones; this is the contribution of the present paper.
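The equivalence induced by the class (3.4) can be verified numerically (a minimal sketch assuming numpy; the dimensions n = 5, r = 2, s = 1 and all parameter values are arbitrary draws, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n, r, s = 5, 2, 1                      # arbitrary illustrative dimensions

beta, ups = rng.normal(size=(n, r)), rng.normal(size=(n, r))
gam = rng.normal(size=(n, s))

def zeta_T(beta, ups, gam):
    """The (2r+s) x 2n coefficient matrix zeta' of the I(2) SSE (3.3)."""
    top = np.hstack([beta.T, ups.T])
    bot = np.hstack([np.zeros((r + s, n)), np.hstack([gam, beta]).T])
    return np.vstack([top, bot])

# an arbitrary member of the class (3.4): Q00 (r x r) and Q1g (s x s) nonsingular
Q00, Q1g = rng.normal(size=(r, r)), rng.normal(size=(s, s))
Q10, Q1f = rng.normal(size=(r + s, r)), rng.normal(size=(r, s))
Q11 = np.block([[Q1g, np.zeros((s, r))], [Q1f, Q00]])
Q = np.block([[Q00, np.zeros((r, r + s))], [Q10, Q11]])

# premultiplying (3.3) by Q' yields a structure with the same zero pattern and
# the same cross-equation restriction (beta appears in the first and third blocks)
beta_o = beta @ Q00
gam_o = gam @ Q1g + beta @ Q1f
ups_o = ups @ Q00 + np.hstack([gam, beta]) @ Q10
assert np.allclose(Q.T @ zeta_T(beta, ups, gam), zeta_T(beta_o, ups_o, gam_o))
```

The assertion confirms that Q′ζ′ is again of the form (3.3), with (β°, υ°, γ°) in place of (β, υ, γ), so the two structures are observationally equivalent.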
Following current practice, one could consider a sequential identification procedure that first aims at identifying β through affine restrictions of the type R°′ vec β = c°, and subsequently considers identification of υ, γ, with or without β fixed. The first-stage identification of β is standard, and the associated rank condition is rank R°′(Ir ⊗ β) = r², see Sargan (1988) and Johansen (1995b). When β is identified, the identification problem for υ, γ is of the type (3.4) with Q00 = Ir; hence the present discussion of identification is relevant also for this sequential procedure.

4. RANK AND ORDER CONDITIONS FOR I(2) SSE

In this section we consider the I(2) SSE (3.3) under general linear restrictions on ζ. Generic affine restrictions are defined first, and next rank and order conditions for identification are given. These results are then specialized to equation-by-equation restrictions. We partition ζ =: (ζ¹ : ζ² : ζ³), where ζ¹ and ζ³ have r columns and ζ² has s columns. Affine restrictions can be stated directly on ζ, or on ζ¹ and γ, because they contain the same nonzero coefficient matrices υ, γ, β. We consider

(4.1)   R⋆′ θ = c⋆,    θ := (vec(ζ¹)′ : vec(γ)′)′,

where R⋆′ is m⋆ × f, θ is f × 1 and f := n(2r + s). The next theorem gives rank and order conditions for (4.1) to identify ζ.

Theorem 1 (Rank and order conditions for I(2) SSE)  A necessary and sufficient condition (rank condition) for the restrictions (4.1) to identify ζ in the I(2) SSE (3.3) is that the matrix

(4.2)   R⋆′ blkdiag(Ir ⊗ ζ, Is ⊗ (γ : β))

is of full column rank ℓ, equal to r² + (r + s)². A necessary but not sufficient condition (order condition) for (4.2) to have full column rank is that its number of rows is greater than or equal to the number of columns, that is

(4.3)   m⋆ ≥ ℓ.

A few remarks are in order.

Remark 2 (No integral control relations)  If r = 0, then Ir ⊗ ζ and β are dropped from the rank condition, which reduces to rank R⋆′(Is ⊗ γ) = s².
This is the usual rank condition for identification in a standard I(1) SSE, see Johansen (1995a), due to the fact that the I(2) SSE simplifies into an I(1) SSE in first differences.

Remark 3 (No proportional control relations)  If s = 0, then γ is dropped from ζ, which simplifies into

        ζ = ( β   0 )
            ( υ   β ),

and Is ⊗ (γ : β) is dropped from (4.2). The rank condition becomes rank R⋆′(Ir ⊗ ζ) = 2r². Note that this is not the usual rank condition for identification in a standard I(1) SSE.

We next consider the rank and order conditions for equation-by-equation constraints, where we indicate the i-th column of ζ as ζi and the i-th column of γ as γi. They can be formulated as follows:

(4.4)   Ri′ ζi = ci,  i = 1, . . . , r;    Ri′ γi−r = ci,  i = r + 1, . . . , r + s,

where Ri has dimension 2n × mi for i = 1, . . . , r and n × mi for i = r + 1, . . . , r + s. These restrictions are a special case of (4.1), with R⋆ = blkdiag(R(1), R(2)), where R(1) = blkdiag(R1, . . . , Rr) collects the restrictions on the first r equations (concerning ζ¹) and R(2) = blkdiag(Rr+1, . . . , Rr+s) those on the remaining s equations (concerning γ). The following corollary specializes the rank and order conditions to the case of equation-by-equation restrictions.

Corollary 4 (Identification, equation-by-equation restrictions)  Let the restrictions be given as in (4.4); then the i-th column of ζ, i = 1, . . . , r (i.e. column i in ζ¹), is identified if and only if

(4.5)   rank(Ri′ ζ) = 2r + s,   i = 1, . . . , r;

column number i − r in γ, for i = r + 1, . . . , r + s, is identified if and only if

(4.6)   rank(Ri′ (γ : β)) = r + s,   i = r + 1, . . . , r + s.

The joint validity of the rank conditions (4.5) for i = 1, . . . , r and (4.6) for i = r + 1, . . . , r + s is equivalent to the full column rank of (4.2), which can also be expressed equivalently as follows:

(4.7)   rank(R(1)′(Ir ⊗ ζ)) = r(2r + s)   and   rank(R(2)′(Is ⊗ (γ : β))) = s(r + s).

A necessary but not sufficient condition (order condition) for (4.5) is

(4.8)   mi ≥ 2r + s,   i = 1, . . . , r.
Similarly, a necessary but not sufficient condition (order condition) for (4.6) is

(4.9)   mi ≥ r + s,   i = r + 1, . . . , r + s.

In practice, conditions (4.5) and (4.6) can be checked prior to estimation by generating random numbers for the parameters and substituting them into (4.5) and (4.6), as suggested by Boswijk and Doornik (2004). Alternatively, one could modify the generic identification approach of Johansen (1995a), Theorem 3, to obtain conditions that do not depend on specific parameter values.

5. TRIANGULAR FORMS AND OTHER SCHEMES

A popular identification scheme for both I(1) and I(2) SSE is the triangular form proposed in SW. As an illustration of the identification conditions, this section shows that the triangular form is just-identifying. To prove this, we first consider a normalization similar in spirit to Johansen (1991) for I(1) SSE, given by βc := β(c′β)⁻¹, where c is a matrix of the same dimensions as β with the property that c′β is square and nonsingular. In the special case of c chosen as c := (Ir : 0)′, the normalized CI vectors βc′ take the form βc′ = (Ir : −A′), which gives the triangular form, see Phillips (1991). The choice βc′ = (Ir : −A′) accommodates any reduced form of the I(1) SSE.6

Consider now an I(2) SSE with associated matrix ζ. Assume one can specify a matrix c0 of the same dimension as β and a matrix c1 of the same dimension as γ, with the property that c0′β and c1′γ are square and nonsingular and that (c0 : c1) has full column rank. Given the matrices c0 and c1, we consider the following restrictions on ζ, i.e. on β, υ and γ:

(5.1)   c0′β = Ir,    (c0 : c1)′(υ : γ) = blkdiag(0r,r, Is),

where the first equality includes r² constraints and the second one (r + s)², with a total of m⋆ = r² + (r + s)² restrictions. The following proposition applies.

Proposition 5 (Normalizations)  The m⋆ = r² + (r + s)² restrictions (5.1) are just-identifying.
We next show that constraints (5.1) contain the triangular form of SW as a special case. Let c0 and c1 be chosen to satisfy (5.1) and also orthogonal to one another, c0′c1 = 0. Define c2 to be any choice of (c0 : c1)⊥; this implies that the matrix C := (c0 : c1 : c2) is square and nonsingular, because it is composed of orthogonal blocks. Next consider X̃t := (X0t′ : X1t′ : X2t′)′ := C̄′Xt, where Xit is defined as c̄i′Xt and where C̄ := (c̄0 : c̄1 : c̄2) by block-orthogonality. A leading case is c0 = (Ir : 0)′, c1 = (0 : Is : 0)′, c2 = (0 : Ip−r−s)′; with this choice of c0, c1 and c2 one has X̃t = Xt, so that there is no rotation in this case. The following corollary applies.

Corollary 6 (Triangular form)  Let β, υ and γ be the parameters in ζ normalized by the just-identifying restrictions (5.1), where c0, c1 and c2 are chosen orthogonal to one another, ci′cj = 0, i ≠ j. Then the I(2) SSE associated with ζ can be expressed equivalently in terms of the rotated variables X̃t := (X0t′ : X1t′ : X2t′)′ := C̄′Xt, C := (c0 : c1 : c2), as follows:

(5.2)   ( β′       υ′     ) ( Xt  )   ( Ir  −A1  −A2 |  0    0   −A3 ) ( X̃t  )
        ( 0    (γ : β)′   ) ( ∆Xt ) = (  0    0    0 |  0   Is   −A4 ) ( ∆X̃t )
                                      (  0    0    0 | Ir  −A1  −A2 )

The r.h.s. of (5.2) corresponds to the triangular form in SW, see eq. (4.2) in Theorem 4.1 in Boswijk (2000). Because this is a special case of Proposition 5, ζ in (5.2) is just-identified.
6 See Johansen (1995a) for a discussion of the corresponding structural forms and their identification.

6. ILLUSTRATION

In this section we illustrate the use of the rank and order conditions. The discussion is based on models of aggregate consumption with liquid assets discussed in Hendry and von Ungern-Sternberg (1981), henceforth HUS, and on the model of inventories introduced in Section 2. HUS discuss the role of liquid assets in integral control mechanisms for the aggregate consumption function.
They point out that when flow variables (such as income and consumption) are I(1), the corresponding stock variables (such as components of wealth) are in general I(2). Define ct as real consumption of nondurables and services, yt as real income, and Wt as real liquid assets, where

(6.1)   ∆Wt = yt − ct − rt

and rt includes expenditure for durable goods, investment in other assets, and an ‘inflation tax’ on liquid assets. Define, for t = 1, . . . , T, the stock variables Yt = ∑_{j=1}^t yj, Ct = ∑_{j=1}^t cj, Rt = ∑_{j=1}^t rj, with Wt = W0 + Yt − Ct − Rt, see (6.1). Next consider the following 5-dimensional vector Xt = (Yt : Ct : Rt : πt : it)′, where πt is inflation and it is the real interest rate. Assuming, as in HUS, that yt and ct are I(1), one finds that Xt is I(2), because it contains the cumulation of yt and ct. In line with HUS, we also posit that liquidity Wt, inflation πt and the interest rate it are I(1). This implies that, defining ej as the j-th column of I5, the vectors (e1 − e2 − e3), e4 and e5 belong to the span of (β : γ). HUS postulate the existence of an ‘integral control’ CI relation Wt = τ yt + u01,t and a ‘proportional control’ CI relation ct = ϑ yt + u11,t. Given that rt contains expenditure for durable goods, investment in other assets, and an inflation tax on liquid assets, rt should depend on the real interest rate it for the non-liquid assets component, and on inflation πt and past liquidity Wt for the inflation tax component. If this dependence can be represented as a linear relation, one could postulate the existence of a CI relation of the type

(6.2)   rt = ψ πt + φ it + ξ Wt + u02,t,

where we have used the fact that Wt−1 = Wt − ∆Wt (with ∆Wt = I(0)) can be replaced by Wt in the CI relation (6.2).
Placing together (i) the ‘integral control’ CI relation Wt = τ yt + u01,t, (ii) the ‘proportional control’ CI relation ct = ϑ yt + u11,t, (iii) the CI relation (6.2), and (iv) the requirement that (e1 − e2 − e3), e4 and e5 belong to the span of (β : γ), one deduces that r = 2, s = 2.7 Moreover, the above implies the following I(2) SSE:

        ζ′ = ( β′       υ′     ) = (  1  −1  −1   0   0 | −τ   0   0   0   0 )
             ( 0    (γ : β)′   )   ( −ξ   ξ   ξ  −ψ  −φ |  0   0   1   0   0 )
                                   (  0   0   0   0   0 | −ϑ   1   0   0   0 )
                                   (  0   0   0   0   0 |  0   0   0   0   1 )
                                   (  0   0   0   0   0 |  1  −1  −1   0   0 )
                                   (  0   0   0   0   0 | −ξ   ξ   ξ  −ψ  −φ )

Notice that the second vector in γ is here represented as e5; this follows from the assumption that the vectors (e1 − e2 − e3), e4 and e5 belong to the span of (β : γ), and from the form of β and of the first column of γ. This implies that, provided ψ ≠ 0, e4 can be obtained as a linear combination of the columns in β and γ. We can now check whether this specification is identified or not. Let fi denote the i-th column of I10, ej the j-th column of I5, and 0u the u × 1 vector of zeros. The equation-by-equation restriction matrices, see (4.4), are given by

        R1 = (f1 : · · · : f5 : f7 : · · · : f10),   c1 = (1 : −1 : −1 : 06′)′,   m1 = 9,
        R2 = (f1 + f2 : f1 + f3 : f6 : · · · : f10),   c2 = (04′ : 1 : 02′)′,   m2 = 7,
        R3 = (e2 : · · · : e5),   c3 = (1 : 03′)′,   m3 = 4,
        R4 = I5,   c4 = (04′ : 1)′,   m4 = 5.

The first two equations therefore meet the order condition (4.8), since mi ≥ 2r + s = 6 for i = 1, 2. Also, the third and fourth equations meet the order condition (4.9), since mi ≥ r + s = 4 for i = 3, 4. Consider now the rank condition (4.5) for the first two equations. One finds

(6.3)   ζ′R1 = (  1  −1  −1   0   0 |  0   0   0   0 )
               ( −ξ   ξ   ξ  −ψ  −φ |  0   1   0   0 )
               (  0   0   0   0   0 |  1   0   0   0 )
               (  0   0   0   0   0 |  0   0   0   1 )
               (  0   0   0   0   0 | −1  −1   0   0 )
               (  0   0   0   0   0 |  ξ   ξ  −ψ  −φ )

which has rank 2r + s = 6 if and only if ψ is different from 0.
7 If instead one assumes that πt and it are I(0), and that rt is I(1) and does not cointegrate, one would expect e4′Xt and e5′Xt to be stationary. This would imply r = 3 and s = 1, with e4, e5 included in the column span of β.
Hence, for all parameter values except for a set of Lebesgue measure 0, the rank condition (4.5) is satisfied by the first equation, which is hence (generically) identified. Because m1 − 2r − s = 3, there are 3 over-identifying restrictions. For the second equation we find

        ζ′R2 = ( 0   0 | −τ   0   0   0   0 )
               ( 0   0 |  0   0   1   0   0 )
               ( 0   0 | −ϑ   1   0   0   0 )
               ( 0   0 |  0   0   0   0   1 )
               ( 0   0 |  1  −1  −1   0   0 )
               ( 0   0 | −ξ   ξ   ξ  −ψ  −φ )

which has rank ≤ 5 < 2r + s = 6. Hence, the rank condition (4.5) fails for the second equation, which therefore is not identified. If one added the further restriction ξ = 0, corresponding to the addition of f1 as an additional column in R2 (the first one, say), one would find instead

        ζ′R2 = ( 1   0   0 | −τ   0   0   0   0 )
               ( 0   0   0 |  0   0   1   0   0 )
               ( 0   0   0 | −ϑ   1   0   0   0 )
               ( 0   0   0 |  0   0   0   0   1 )
               ( 0   0   0 |  1  −1  −1   0   0 )
               ( 0   0   0 |  0   0   0  −ψ  −φ )

which has rank 2r + s = 6 if and only if ψ is different from 0 and ϑ is different from 1. Hence the rank condition (4.5) is satisfied by the second equation when ξ = 0, for all parameter values except for a set of Lebesgue measure 0. The equation is hence (generically) identified. Because in this case m2 − 2r − s = 1, there is 1 over-identifying restriction. Consider finally the third and fourth equations. One has r + s = 4 in (4.6) and

        R3′(γ : β) = ( 1   0  −1   ξ )          R4′(γ : β) = ( −ϑ   0   1  −ξ )
                     ( 0   0  −1   ξ )                       (  1   0  −1   ξ )
                     ( 0   0   0  −ψ )                       (  0   0  −1   ξ )
                     ( 0   1   0  −φ ),                      (  0   0   0  −ψ )
                                                             (  0   1   0  −φ ).

The matrix for the third equation has rank 4 if and only if ψ ≠ 0. The matrix for the fourth equation has rank 4 if and only if ψ ≠ 0 or ϑ ≠ 1 or both. Hence, for all parameter values except for a set of Lebesgue measure 0, the rank condition (4.6) is satisfied by both equations, which are hence (generically) identified. Because 4 = m3 = r + s, the third equation is just-identified, while, since 5 = m4 = r + s + 1, there is 1 over-identifying restriction on the fourth equation. Note that the identification of equations 1, 3 and 4 does not depend on the additional restriction ξ = 0 required for identification of the second equation.
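These equation-by-equation checks can be automated in the spirit of Boswijk and Doornik (2004), substituting random parameter draws into conditions (4.5) and (4.6). A minimal sketch assuming numpy follows; the construction of ζ mirrors the ζ′ displayed above, and the parameter values are arbitrary draws, so the rank values obtained are the generic ones (i.e. those holding outside the measure-zero exceptional sets discussed in the text):

```python
import numpy as np

rng = np.random.default_rng(7)
n, r, s = 5, 2, 2                      # HUS example: X_t = (Y_t : C_t : R_t : pi_t : i_t)'
tau, th, psi, phi, xi = rng.normal(size=5)   # random draws for the free parameters

e = np.eye(n)
w = e[:, 0] - e[:, 1] - e[:, 2]        # stock combination defining W_t
beta = np.column_stack([w, -xi * w - psi * e[:, 3] - phi * e[:, 4]])
ups = np.column_stack([-tau * e[:, 0], e[:, 2]])
gam = np.column_stack([-th * e[:, 0] + e[:, 1], e[:, 4]])
gb = np.hstack([gam, beta])            # (gamma : beta)

# zeta' as in (3.3); zeta is 2n x (2r+s)
zeta = np.vstack([np.hstack([beta.T, ups.T]),
                  np.hstack([np.zeros((r + s, n)), gb.T])]).T

f = np.eye(2 * n)
R1 = f[:, [0, 1, 2, 3, 4, 6, 7, 8, 9]]
R2 = np.column_stack([f[:, 0] + f[:, 1], f[:, 0] + f[:, 2], f[:, 5:]])
R3, R4 = e[:, 1:], e

rk = np.linalg.matrix_rank
print(rk(R1.T @ zeta), rk(R2.T @ zeta))   # 6 and 5: eq. 1 identified, eq. 2 not
print(rk(R3.T @ gb), rk(R4.T @ gb))       # 4 and 4: eqs. 3 and 4 identified
```

With generic draws the computed ranks reproduce the conclusions above: 2r + s = 6 for the first equation, at most 5 for the second, and r + s = 4 for the third and fourth.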
Consider now the model of inventories in Section 2, and the specification in eq. (2.3). One has r = 1, s = 0; for the first equation

        ζ′R1 = ( 1  −1   0 )
               ( 0   0  −1 )

which has rank 2, so that the rank condition (4.5) is satisfied. The number of restrictions is m1 = 3, and the number of over-identifying restrictions is m1 − 2r − s = 1. When one makes the element in position (1, 4) of ζ′ unrestricted, however, one finds

        ζ′R1 = ( 1  −1 )
               ( 0   0 )

which has rank 1, so that the rank condition (4.5) fails. Note that the order condition is met in this case, see (4.8), because m1 = 2r = 2.

7. IDENTIFICATION OF HIGHER ORDER SSE

In this section we discuss the rank and order conditions for higher order systems, indicating r and s from the previous sections as r0 and r1 respectively. As shown in Appendix B, the relevant I(d) SSE is given by

(7.1)   ζ′ ( Xt        )   ( β0′   υ01′   υ02′   · · ·   υ0,d−1′ ) ( Xt        )
           ( ∆Xt       )   (  0    ϕ1′    υ12′   · · ·   υ1,d−1′ ) ( ∆Xt       )
           (  ⋮        ) = (  0     0     ϕ2′    · · ·   υ2,d−1′ ) (  ⋮        )  =  µt + I(0),
           ( ∆^{d−1}Xt )   (  ⋮                   ⋱        ⋮     ) ( ∆^{d−1}Xt )
                           (  0     0      0     · · ·   ϕd−1′   )

where ζ′ is k × nd, ϕi := (γi : γi−1 : · · · : γ1 : β0) is of dimension n × ki with ki := ∑_{h=0}^i rh, k := ∑_{i=0}^{d−1} ki, and rh ≥ 0 is the number of columns in γh. System (7.1) is henceforth referred to as an I(d) SSE. Note that an I(2) SSE includes only the 2 × 2 leading block of matrices in ζ′; remark also that ϕ1 := (γ1 : β0) here denotes what was indicated as ϕ = (γ : β) in the previous sections; we refer to Appendix B for details on the definition of the other matrices. We express restrictions on the SSE (7.1) as follows:

(7.2)   R′ vec ζ = c,

where R′ is m × w and w := knd. As in the I(2) case, let θ be the f × 1 vector containing the generically nonzero elements of ζ; the linear restrictions (7.2) can then be equivalently expressed as

(7.3)   R⋆′ θ = c⋆,

where R⋆′ is m⋆ × f. Without loss of generality, we assume that w − f of the restrictions in (7.2) ensure that ζ′ has a block-triangular structure with cross-equation restrictions8, while m⋆ := m − w + f are possibly (over-)identifying restrictions on θ.
8 These concern the fact that ϕi = (γi : ϕi−1) and ϕi−1 contain the same parameter ϕi−1 for i = 1, . . . , d − 1.
Because ζ and θ contain the same parameter matrices, we denote by A the (unique, 0-1) transformation matrix that satisfies vec ζ = Aθ. The class of matrices Q that induces lack of identification in (7.1) is of the form

(7.4)   $$Q=\begin{pmatrix}Q_{00}&0&\cdots&0\\Q_{10}&Q_{11}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\Q_{d-1,0}&Q_{d-1,1}&\cdots&Q_{d-1,d-1}\end{pmatrix},\qquad Q_{jj}=\begin{pmatrix}Q_{j,\gamma}&0\\Q_{j,\varphi}&Q_{j-1,j-1}\end{pmatrix}\;(u_j\times u_j),\quad j=1,\dots,d-1,$$

where Q_{00} and Q_{j,γ}, j = 1, …, d − 1, are square and nonsingular. In the following we indicate by q the ℓ × 1 vector containing the generically non-zero elements of the matrices Q in (7.4) that give rise to lack of identification, and by N the (unique, 0-1) matrix that maps q into vec Q, vec Q = Nq. We now state the rank and order conditions for identification of ζ in an I(d) SSE.

⁸These concern the fact that ϕ_i = (γ_i : ϕ_{i−1}) and ϕ_{i−1} contain the same parameter ϕ_{i−1} for i = 1, …, d − 1.

Theorem 7 (Rank and order conditions for I(d) SSE)   A necessary and sufficient condition (rank condition) for the restrictions (7.2) to identify ζ in an I(d) SSE (7.1) is that the matrix

(7.5)   R′(I_k ⊗ ζ)N

is of full column rank ℓ. A necessary but not sufficient condition (order condition) for the rank condition to hold is

(7.6)   m_⋆ ≥ ℓ.

For the case d = 2, the condition of full rank of (7.5) is equivalent to (4.2), and the order condition (7.6) is equivalent to (4.3).

Remark 8   The rank condition in (7.5) can be compared with the one obtained for standard SSE, see e.g. Sargan (1988), Chapter 3, Theorem 1. The matrix R′(I_k ⊗ ζ)N in the rank condition here is very similar to the matrix R′(I_k ⊗ ζ) in the standard case, the only difference being the additional multiplicative factor N; this is due to the different class of matrices Q in (7.4).

8. CONCLUSIONS

This paper provides order and rank conditions for general linear hypotheses on the cointegrating vectors in I(2) systems, as well as in systems of higher order.
Results are illustrated for the case of equation-by-equation restrictions on models of aggregate consumption with liquid assets and on models of inventories. If one has in mind an equation-by-equation identification scheme, the main implications of our results, stemming from the block triangularity of the matrix Q in (3.4), are the following. (i) The coefficients in β, i.e. the coefficients involving the stock variables in the multi-CI equations, may be identified separately, following the standard approach adopted for I(1) systems, see Johansen et al. (2010). In this case, however, identification of the coefficients in γ, i.e. the coefficients of the proportional control relations, has to be analysed jointly with β in a non-standard way, since each column of γ can be replaced with a linear combination of columns of γ and β, but not the other way round. (ii) Even more relevantly, the identification analysis of the coefficients in υ, i.e. the coefficients involving the flow variables in the multi-CI equations, requires a joint non-standard analysis of υ, γ and β, since each column of υ could be replaced with a linear combination of columns in υ, γ and β, but not the other way round. Therefore, even if the purpose is only to check whether the multi-CI equations are identified, one has to resort to the identification results discussed in this paper; applying standard identification results to the multi-CI equations, i.e. placing at least r restrictions on each equation and then checking the usual rank condition, is incorrect. Another advantage of the approach discussed in this paper is that alternative identification schemes are also covered, involving for example (linear) cross-equation restrictions. The present approach is based on algebraic conditions similar to the ones employed in Sargan (1988) for classical SSE.
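Points (i)–(ii) above can be made concrete with a small simulation. The sketch below uses hypothetical dimensions; the block-triangular Q mirrors the structure described in the text, and the check shows that post-multiplying (γ : β) by such a Q mixes β into the new γ columns, while the new β columns involve β only.

```python
import numpy as np

rng = np.random.default_rng(1)
n, s, r = 5, 2, 2                                # hypothetical: gamma is n x s, beta is n x r
gamma, beta = rng.normal(size=(n, s)), rng.normal(size=(n, r))

# block-triangular Q: gamma-columns may load on beta, but not vice versa
Q_gg = rng.normal(size=(s, s)) + 3 * np.eye(s)   # nonsingular gamma-mixing block
Q_bg = rng.normal(size=(r, s))                   # beta -> gamma loading (allowed)
Q_bb = rng.normal(size=(r, r)) + 3 * np.eye(r)   # nonsingular beta-mixing block
Q = np.block([[Q_gg, np.zeros((s, r))],
              [Q_bg, Q_bb]])

rotated = np.hstack([gamma, beta]) @ Q
# the last r columns (the new beta) involve beta only ...
assert np.allclose(rotated[:, s:], beta @ Q_bb)
# ... while the first s columns (the new gamma) mix gamma and beta
assert np.allclose(rotated[:, :s], gamma @ Q_gg + beta @ Q_bg)
```

This one-way mixing is exactly why γ must be analysed jointly with β, while β can be treated on its own.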
These rank and order conditions can be linked to quantities related to the likelihood, as in Rothenberg (1971) for the case of classical SSE. This link is analysed in a separate paper, see Mosconi and Paruolo (2014).

ACKNOWLEDGEMENTS

We thank Peter Boswijk, Niels Haldrup, David Hendry, Søren Johansen, Anders Rahbek and two anonymous referees for useful comments on previous versions of the paper. Special thanks go to one of the anonymous referees, who suggested formulating restrictions on vec ζ_1 and vec γ instead of on ζ_12 or J_2′ζ as used in a previous version of the paper; this allowed us to simplify notation considerably.

REFERENCES

Arrow, K. J., T. Harris, and J. Marschak (1951). Optimal inventory policy. Econometrica 19, 250–272.
Boswijk, P. (2000). Mixed normality and ancillarity in I(2) systems. Econometric Theory 16, 878–904.
Boswijk, P. and J. A. Doornik (2004). Identifying, estimating and testing restricted cointegrated systems: An overview. Statistica Neerlandica 58, 440–465.
Boswijk, P. H. (2010). Mixed normal inference on multicointegration. Econometric Theory 26, 1565–1576.
Davidson, J. (1994). Identifying cointegrating regressions by the rank condition. Oxford Bulletin of Economics and Statistics 56, 105–110.
Engle, R. and C. Granger (1987). Co-integration and error correction: Representation, estimation, and testing. Econometrica 55, 251–276.
Engle, R. and S. Yoo (1991). Cointegrated economic time series: An overview with new results. In R. F. Engle and C. W. J. Granger (Eds.), Long-run Economic Relations: Readings in Cointegration, 237–266. Oxford University Press.
Engsted, T. and S. Johansen (1999). Granger's representation theorem and multi-cointegration. In R. F. Engle and H. White (Eds.), Causality and Forecasting. Festschrift for Clive W. Granger, 200–212. Oxford University Press.
Fisher, F. (1966). The Identification Problem in Econometrics. McGraw-Hill.
Franchi, M. (2010).
A representation theory for polynomial cofractionality in vector autoregressive models. Econometric Theory 26, 1201–1217.
Granger, C. W. J. and T. H. Lee (1989). Investigation of production, sales and inventory relationships using multicointegration and non-symmetric error correction models. Journal of Applied Econometrics 4, S145–S159.
Haldrup, N. and M. Salmon (1998). Representation of I(2) cointegrated systems using the Smith-McMillan form. Journal of Econometrics 44, 303–325.
Hendry, D. F. (1993). Econometrics: Alchemy or Science? Blackwell.
Hendry, D. F. (2002). Applied econometrics without sinning. Journal of Economic Surveys 16, 591–603.
Hendry, D. F. and T. von Ungern-Sternberg (1981). Liquidity and inflation effects on consumers' expenditure. In A. Deaton (Ed.), Essays in the Theory and Measurement of Consumers' Behaviour, 237–261. Cambridge University Press.
Johansen, S. (1991). Estimation and hypothesis testing of cointegration vectors in Gaussian vector autoregressive models. Econometrica 59, 1551–1580.
Johansen, S. (1992). A representation of vector autoregressive processes integrated of order 2. Econometric Theory 8, 188–202.
Johansen, S. (1995a). Identifying restrictions of linear equations with applications to simultaneous equations and cointegration. Journal of Econometrics 69, 111–132.
Johansen, S. (1995b). A statistical analysis of cointegration for I(2) variables. Econometric Theory 11, 25–59.
Johansen, S. (1996). Likelihood-based Inference in Cointegrated Vector Auto-Regressive Models. Oxford University Press.
Johansen, S. (1997). A likelihood analysis of the I(2) model. Scandinavian Journal of Statistics 24, 433–462.
Johansen, S. (2006). Statistical analysis of hypotheses on the cointegrating relations in the I(2) model. Journal of Econometrics 132, 81–115.
Johansen, S., K. Juselius, R. Frydman, and M. Goldberg (2010). Testing hypotheses in an I(2) model with piecewise linear trends.
An analysis of the persistent long swings in the DMK/$ rate. Journal of Econometrics 158, 117–129.
Kitamura, Y. (1995). Estimation of cointegrated systems with I(2) processes. Econometric Theory 11, 1–24.
Klein, L. R. (1950). Stock and flow analysis in economics. Econometrica 18, 236–241.
Koopmans, T. (1949). Identification problems in economic model construction. Econometrica 17, 125–144.
Kurita, T., H. Nielsen, and A. C. Rahbek (2011). An I(2) cointegration model with piecewise linear trends: Likelihood analysis and application. The Econometrics Journal 14, 131–155.
Mosconi, R. and P. Paruolo (2014). Identification and estimation of cointegrating relations in I(2) vector autoregressive models. Manuscript.
Paruolo, P. (2000). Asymptotic efficiency of the two stage estimator in I(2) systems. Econometric Theory 16, 524–550.
Paruolo, P. and A. Rahbek (1999). Weak exogeneity in I(2) VAR systems. Journal of Econometrics 93, 281–308.
Phillips, A. W. (1954). Stabilisation policy in a closed economy. The Economic Journal 64, 290–323.
Phillips, A. W. (1956). Some notes on the estimation of time-forms of reactions in interdependent dynamic systems. Economica 23, 99–113.
Phillips, A. W. (1957). Stabilisation policy and the time-forms of lagged responses. The Economic Journal 67, 265–277.
Phillips, P. C. B. (1991). Optimal inference in cointegrated systems. Econometrica 59, 283–306.
Rahbek, A. C., H. Kongsted, and C. Jørgensen (1999). Trend-stationarity in the I(2) cointegration model. Journal of Econometrics 90, 265–289.
Rothenberg, T. J. (1971). Identification in parametric models. Econometrica 39, 577–591.
Saikkonen, P. (1993). Estimation of cointegration vectors with linear restrictions. Econometric Theory 9, 19–35.
Sargan, J. D. (1988). Lectures on Advanced Econometrics. Basil Blackwell.
Stock, J. and M. Watson (1993). A simple estimator of cointegrating vectors in higher order integrated systems.
Econometrica 61, 783–820.
Stone, J. R. N. (1966). Spending and saving in relation to income and wealth. L'Industria 4, 471–499.
Stone, J. R. N. (1973). Personal spending and saving in postwar Britain. In H. C. Bos, H. Linneman, and P. de Wolff (Eds.), Economic Structure and Development (Essays in honour of Jan Tinbergen), 237–261. North-Holland.

A. APPENDIX A: I(2) SSE

In this appendix we report proofs of propositions in Sections 4 and 5. We introduce the following additional notation. Define (L_1 : L_2 : L_3) := I_{2r+s} and L_{ij} := (L_i : L_j), i, j = 1, 2, 3, where L_1 and L_3 have r columns and L_2 has s columns. Similarly, define (U_1 : U_2) := I_{r+s}, where U_1 and U_2 have r and s columns, respectively.

Proof of Theorem 1: We wish to show that, if ζ and ζ_◦ := ζQ both satisfy (4.1), then Q = I_{2r+s} iff (4.2) holds. Observe that

$$\theta=\begin{pmatrix}\operatorname{vec}(\zeta L_1)\\\operatorname{vec}((\gamma:\beta)U_1)\end{pmatrix},\qquad \theta_\circ=\begin{pmatrix}\operatorname{vec}(\zeta Q_0)\\\operatorname{vec}((\gamma:\beta)Q_1)\end{pmatrix},\qquad \theta-\theta_\circ=\begin{pmatrix}\operatorname{vec}(\zeta G_1)\\\operatorname{vec}((\gamma:\beta)G_2)\end{pmatrix},$$

where G_1 := L_1 − Q_0 and G_2 := U_1 − Q_1. Subtracting (4.1) for θ from (4.1) for θ_◦, one finds R_⋆′(θ − θ_◦) = 0, i.e., setting g := ((vec G_1)′ : (vec G_2)′)′,

(A.1)   R_⋆′ diag(I_r ⊗ ζ, I_s ⊗ (γ : β)) g = 0.

System (A.1) admits the unique solution g = 0 if and only if R_⋆′ diag(I_r ⊗ ζ, I_s ⊗ (γ : β)) is of full column rank; this gives the rank condition. The order condition stems from the number of rows and columns of the matrix in (4.2). Q.E.D.

Proof of Corollary 4: Condition (4.7) follows from (4.2) by substituting R = blkdiag(R_1, R_2). Q.E.D.

Proof of Proposition 5: Observe that (5.1) can be written as R_⋆′θ = c_⋆ with R_⋆ = blkdiag(R_1, R_2), R_1 = I_r ⊗ blkdiag(c_0, (c_0 : c_1)), R_2 = I_s ⊗ (c_0 : c_1). We check the rank condition (4.7); we find

$$R_1'(I_r\otimes\zeta)=I_r\otimes\begin{pmatrix}c_0'\beta&0\\(c_0:c_1)'\upsilon&(c_0:c_1)'(\gamma:\beta)\end{pmatrix}=I_r\otimes\operatorname{blkdiag}\left(I_r,\begin{pmatrix}0&I_r\\I_s&0\end{pmatrix}\right),$$

which has rank r(2r + s). Moreover

$$R_2'(I_s\otimes(\gamma:\beta))=I_s\otimes\left((c_0:c_1)'\gamma:(c_0:c_1)'\beta\right)=I_s\otimes\begin{pmatrix}0&I_r\\I_s&0\end{pmatrix},$$

which has rank s(r + s).
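The two rank computations in the proof of Proposition 5 can be double-checked numerically. A minimal sketch follows, with illustrative values r = 2, s = 1 (any positive r and s behave the same way); the blocks are the permutation-type matrices obtained in the proof.

```python
import numpy as np

r, s = 2, 1  # illustrative dimensions

# the (r+s) x (r+s) block ((c0:c1)'gamma : (c0:c1)'beta) = [[0, I_r], [I_s, 0]]
M = np.block([[np.zeros((r, s)), np.eye(r)],
              [np.eye(s), np.zeros((s, r))]])

# second block: I_s kron M has rank s(r+s)
assert np.linalg.matrix_rank(np.kron(np.eye(s), M)) == s * (r + s)

# first block: I_r kron blkdiag(I_r, M) has rank r(2r+s)
blk = np.block([[np.eye(r), np.zeros((r, r + s))],
                [np.zeros((r + s, r)), M]])
assert np.linalg.matrix_rank(np.kron(np.eye(r), blk)) == r * (2 * r + s)
```

Both blocks are permutations of identity matrices, so they are of full rank for any r and s, which is what makes the normalisations just-identifying.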
Thus the normalisations (5.1) satisfy the rank condition for identification (4.7); moreover, the order condition (4.8) is satisfied with an equal sign, i.e. the restrictions are just-identifying. Q.E.D.

Proof of Corollary 6: Define D := blkdiag(C, C), and observe that D̄ = blkdiag(C̄, C̄) with C̄ = (C′)⁻¹ by block-orthogonality. Hence one has ζ′(X_t′ : ΔX_t′)′ = ζ′DD̄′(X_t′ : ΔX_t′)′ = ζ′D(X̃_t′ : ΔX̃_t′)′, where we have used orthogonal projections in the form I = CC̄′. It remains to show that ζ′D has the form on the r.h.s. of (5.2). First observe that

$$\zeta'D=\begin{pmatrix}\beta'C&\upsilon'C\\0&(\gamma:\beta)'C\end{pmatrix},$$

where β′C = (I_r : −A_1 : −A_2) and γ′C = (0 : I_s : −A_4) by (5.1), and the A_i indicate generic matrices of appropriate dimensions. Finally, because (c_0 : c_1)′υ = 0, applying orthogonal projections of the form I = C̄C′, one finds υ = c̄_2 c_2′υ = −c̄_2 A_3, with A_3 := −c_2′υ. This completes the proof. Q.E.D.

B. APPENDIX B: HIGHER ORDER SSE

In this appendix we motivate the definition of the I(d) SSE in (7.1) and give the proofs of Section 7. In order to discuss I(d) SSE, one can consider SW eq. (3.2) or Franchi (2010) eq. (5.3) for c = d and b = 1; here for simplicity we refer to the latter. These results show that there exist non-negative integers r_j, j = 0, …, d, with p = Σ_{i=0}^{d} r_i, and orthogonal matrices β_j with full column rank r_j with the following properties: there exist matrix polynomials γ_j(L) := β_{d−j−1} + Σ_{i=1}^{j} ψ_{ji} Δ^i such that γ_j(L)′X_t is I(d − 1 − j) for j = 0, …, d − 1. We next introduce, for j = 1, …, d, the matrices γ_j by requiring that col(β_0 : β_1 : ⋯ : β_j) = col(β_0 : β_1 : ⋯ : β_{j−1} : γ_j). For ease of later reference, let also γ_0 := β_0. This implies that col(β_0 : β_1 : ⋯ : β_j) = col(γ_0 : γ_1 : ⋯ : γ_{j−1} : γ_j).
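The column-space requirement col(β_0 : β_1) = col(β_0 : γ_1) is easy to verify numerically for a candidate γ_1. The sketch below uses hypothetical dimensions and takes γ_1 := β_1 + β_0 C for an arbitrary C, which is one admissible choice; the check compares ranks of the stacked matrices.

```python
import numpy as np

rng = np.random.default_rng(2)
n, r0, r1 = 6, 2, 2                      # hypothetical dimensions
b0, b1 = rng.normal(size=(n, r0)), rng.normal(size=(n, r1))

# one admissible gamma_1: mix beta_0 into beta_1 (column re-mixing would also work)
C = rng.normal(size=(r0, r1))
g1 = b1 + b0 @ C

# col(b0 : b1) = col(b0 : g1): the stacked matrices span the same column space
rank = np.linalg.matrix_rank
assert rank(np.hstack([b0, b1])) == r0 + r1
assert rank(np.hstack([b0, g1])) == r0 + r1
assert rank(np.hstack([b0, b1, g1])) == r0 + r1   # adding g1 adds nothing new
```

Equal ranks together with the inclusion col(β_0 : γ_1) ⊆ col(β_0 : β_1) imply the two column spaces coincide.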
Consider now γ_j(L)′X_t = I(d − j − 1); by successive application of the difference operator, one finds that the following linear combinations are stationary:

(B.1)
β_{d−j−1}′ Δ^{d−j−1} X_t + Σ_{i=1}^{j} ψ_{ji}′ Δ^{d−j−1+i} X_t = u_{j1t}
β_{d−j−1}′ Δ^{d−j} X_t + Σ_{i=1}^{j−1} ψ_{ji}′ Δ^{d−j+i} X_t = u_{j2t}
⋮
β_{d−j−1}′ Δ^{d−1} X_t = u_{j,j+1,t}.

The I(d) SSE (7.1) is obtained by collecting (B.1) for j = 0, …, d − 1 into a single system, replacing β_j with γ_j, collecting the γ_j into ϕ_i := (γ_i : γ_{i−1} : ⋯ : γ_1 : β_0), and defining the υ_{ij} appropriately.

In order to see why one can choose γ_j in place of β_j, consider the case d = 2. One has that γ_0(L) := β_1 and γ_1(L) := β_0 + ψ_{11}Δ give rise to the I(0) (and linearly independent) processes β_1′ΔX_t and γ_1(L)′X_t. Differencing γ_1(L)′X_t once, one finds that any linear combination of β_1′ΔX_t and β_0′ΔX_t still gives a stationary process; hence any γ_1 that satisfies col(β_0 : β_1) = col(β_0 : γ_1) can be taken to represent the r_1 extra linear combinations in β_1. Note that by changing β_j into γ_j also the ψ_{ij} coefficients are similarly linearly combined; we hence denote the resulting coefficients as υ_{ij} in (7.1).

Proof of Theorem 7: Let ζ and ζ_◦ := ζQ both satisfy (7.2), and subtract R′ vec ζ = c from R′ vec ζ_◦ = c to find 0 = R′ vec(ζG), where G := I_k − Q. Write vec G = Ng, where g is ordered in the same way as q; one finds

(B.2)   0 = R′ vec(ζG) = R′(I_k ⊗ ζ) vec G = R′(I_k ⊗ ζ)N g.

Observe that G = 0 iff g = 0; this proves the rank condition (7.5). We next wish to prove the order condition. Remark that, for consistency between (7.2) and (7.3), one can choose without loss of generality R = (ĀR_⋆ : A_⊥) and c = (c_⋆′ : 0′)′, where R_⋆ has m_⋆ := m − w + f columns and represents the alternative equivalent formulation R_⋆′θ = c_⋆ of the linear restrictions. Next we wish to prove that A_⊥′(I_k ⊗ ζ)N = 0.
Remark that for any vector q, (I_k ⊗ ζ)Nq = vec(ζQ) has the same block-triangular structure with the cross-equation restrictions, and hence satisfies vec(ζQ) = Aθ_◦ for an appropriate choice of θ_◦. This proves A_⊥′(I_k ⊗ ζ)N = 0. (This can be verified for the I(2) case by substituting from (B.6) and (B.4).) This implies that the rank of R′(I_k ⊗ ζ)N equals the rank of R_⋆′Ā′(I_k ⊗ ζ)N, which has m_⋆ rows and ℓ columns; this gives the order condition.

We finally prove equivalence of the rank and order conditions for the case d = 2. Substituting from (B.4) and (B.5) in Lemma 9 below, one finds Ā′(I_k ⊗ ζ)N = blkdiag(I_r ⊗ ζ, I_s ⊗ (γ : β)), so that R_⋆′Ā′(I_k ⊗ ζ)N coincides with the matrix in (4.2); this shows that the condition of full rank of (7.5) is equivalent to (4.2) and that the order condition (7.6) is equivalent to (4.3). Q.E.D.

We conclude by reporting the specific form of the A and N matrices in the I(2) case that are needed in the proof of Theorem 7, where A and N satisfy vec ζ = Aθ and vec Q = Nq. For I(2) SSE the form of the matrix Q is given in (3.4). Among the many possible choices of q we choose q := ((vec Q_0)′ : (vec Q_1)′)′, where Q_0 := (Q_{00}′ : Q_{10}′)′ is the first block of r columns of Q and Q_1 := (Q_{1γ}′ : Q_{1ϕ}′)′ is the first block of s columns of Q_{11}. Let (J_1 : J_2) := I_{2n}, where J_1 and J_2 have n columns each, and recall that the L_j and U_j matrices are defined at the beginning of Appendix A.

Lemma 9 (A and N)   In the I(2) case the A (w × f) and N ((2r + s)² × ℓ) matrices are given by

(B.3)   A = (L_1 ⊗ I_{2n} + L_3 ⊗ J_2J_1′ : L_2 ⊗ J_2),
(B.4)   N = (L_1 ⊗ I_{2r+s} + L_3 ⊗ L_3L_1′ : L_2 ⊗ L_{23}),

where w = 2n(2r + s), f = n(2r + s), ℓ = r² + (r + s)². Moreover,

(B.5)   Ā = (½(L_1 ⊗ (I_{2n} + J_2J_2′) + L_3 ⊗ J_2J_1′) : L_2 ⊗ J_2),
(B.6)   A_⊥ = (L_{23} ⊗ J_1 : L_1 ⊗ J_1 − L_3 ⊗ J_2).

Proof of Lemma 9: Write ζ as ζ = ζ_1L_1′ + J_2γL_2′ + J_2J_1′ζ_1L_3′; vectorizing, one finds vec ζ = (L_1 ⊗ I_{2n}) vec ζ_1 + (L_2 ⊗ J_2) vec γ + (L_3 ⊗ J_2J_1′) vec ζ_1, from which (B.3) follows. Eq.
(B.4) is proved by observing that Q = Q_0L_1′ + L_{23}Q_1L_2′ + L_3L_1′Q_0L_3′ and applying the vec operator. Moreover

Ā = (½(L_1 ⊗ I_{2n} + L_3 ⊗ J_2J_1′)(I_{2nr} + I_r ⊗ J_2J_2′) : L_2 ⊗ J_2),

from which (B.5) follows. Finally, it is simple to verify that A_⊥ has the right number of columns and that it satisfies A_⊥′A = 0. Q.E.D.
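The vec/Kronecker manipulations used in (B.2) and in the proof of Lemma 9 rest on the identity vec(AB) = (I ⊗ A) vec B for the column-major vec operator. A quick numeric check, with arbitrary small dimensions standing in for ζ and G:

```python
import numpy as np

rng = np.random.default_rng(3)
zeta = rng.normal(size=(4, 3))           # stand-in for zeta (arbitrary p x k)
G = rng.normal(size=(3, 3))              # stand-in for G = I_k - Q

vec = lambda M: M.flatten(order="F")     # column-major vec, as in the text

# vec(zeta G) = (I_k kron zeta) vec(G), the step used in (B.2)
assert np.allclose(vec(zeta @ G), np.kron(np.eye(3), zeta) @ vec(G))
```

Note the column-major (`order="F"`) convention: numpy flattens row-major by default, which would silently break the identity.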