Estimation of Subparameters by IPM Method

ABSTRACT
In this study, a general partitioned linear model $M = \{y, X\beta, V\} = \{y, X_1\beta_1 + X_2\beta_2, V\}$ is considered in order to determine the best linear unbiased estimators (BLUEs) of the subparameters $X_1\beta_1$ and $X_2\beta_2$. Some results related to the BLUEs of the subparameters are given by using the inverse partitioned matrix (IPM) method, based on a generalized inverse of a symmetric block partitioned matrix obtained from the fundamental BLUE equation.
Keywords: BLUE, generalized inverse, general partitioned linear model.
1. INTRODUCTION
Consider the general partitioned linear model
$$y = X\beta + \varepsilon = X_1\beta_1 + X_2\beta_2 + \varepsilon, \tag{1}$$
where $y \in \mathbb{R}^{n \times 1}$ is an observable random vector, $X = (X_1 : X_2) \in \mathbb{R}^{n \times p}$ is a known matrix with $X_1 \in \mathbb{R}^{n \times p_1}$ and $X_2 \in \mathbb{R}^{n \times p_2}$, $\beta = (\beta_1' : \beta_2')' \in \mathbb{R}^{p \times 1}$ is a vector of unknown parameters with $\beta_1 \in \mathbb{R}^{p_1 \times 1}$ and $\beta_2 \in \mathbb{R}^{p_2 \times 1}$, and $\varepsilon \in \mathbb{R}^{n \times 1}$ is a random error vector. Further, the expectation is $E(y) = X\beta$ and the covariance matrix $\mathrm{Cov}(y) = V \in \mathbb{R}^{n \times n}$ is a known nonnegative definite matrix. We may denote the model (1) as a triplet
$$M = \{y, X\beta, V\} = \{y, X_1\beta_1 + X_2\beta_2, V\}. \tag{2}$$
Partitioned linear models are used in the estimation of partial parameters in regression models, as well as in the investigation of submodels and reduced models associated with the original model. In this study, we consider the general partitioned linear model $M$ and deal with the best linear unbiased estimators (BLUEs) of subparameters under this model. Our main purpose is to obtain the BLUEs of the subparameters $X_1\beta_1$ and $X_2\beta_2$ under $M$ by using the inverse partitioned matrix (IPM) method, which was introduced by Rao [1] for statistical inference in general linear models. We also investigate some consequences on the BLUEs of subparameters obtained by the IPM approach. Under linear models, the BLUE has been investigated by many statisticians, and some valuable properties of the BLUE have been obtained, e.g., [2-6]. By applying the matrix rank method, some characterizations of the BLUE have been given in [7, 8]. The IPM method for the general linear model with linear restrictions has been considered by Baksalary and Pordzik [9].
2. PRELIMINARIES
The BLUE of $X\beta$ under $M$, denoted as $\mathrm{BLUE}(X\beta \mid M)$, is defined to be an unbiased linear estimator $Gy$ such that its covariance matrix $\mathrm{Cov}(Gy)$ is minimal, in the Löwner sense, among all covariance matrices $\mathrm{Cov}(Fy)$ such that $Fy$ is unbiased for $X\beta$. It is well known, see, e.g., [10, 11], that $Gy = \mathrm{BLUE}(X\beta \mid M)$ if and only if $G$ satisfies the fundamental BLUE equation
$$G(X : VQ) = (X : 0), \tag{3}$$
where $Q = I - P_X$ with $P_X$ the orthogonal projector onto the column space $C(X)$. Note that equation (3) has a unique solution if and only if $\mathrm{rank}(X : V) = n$, and the observed value of $Gy$ is unique with probability 1 if and only if $M$ is consistent, i.e., $y \in C(X : V) = C(X : VQ)$ holds with probability 1; see [12]. In this study, it is assumed that the model $M$ is consistent.
The corresponding condition for $Ay$ to be the BLUE of an estimable parametric function $K\beta$ is $A(X : VQ) = (K : 0)$. Recall that a parametric function $K\beta$ is estimable under $M$ if and only if $C(K') \subseteq C(X')$; in particular, $X_1\beta_1$ and $X_2\beta_2$ are estimable under $M$ if and only if $C(X_1) \cap C(X_2) = \{0\}$; see [13, 14].
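The intersection condition can be checked numerically through the standard rank identity: $C(X_1) \cap C(X_2) = \{0\}$ holds exactly when $\mathrm{rank}(X_1 : X_2) = \mathrm{rank}(X_1) + \mathrm{rank}(X_2)$. The following is a minimal sketch of such a check, assuming hypothetical example matrices (not data from the paper):

```python
# Check C(X1) ∩ C(X2) = {0} via the rank identity
# rank(X1 : X2) = rank(X1) + rank(X2); example matrices are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n, p1, p2 = 8, 2, 3
X1 = rng.standard_normal((n, p1))
X2 = rng.standard_normal((n, p2))

r_joint = np.linalg.matrix_rank(np.hstack([X1, X2]))
r_sum = np.linalg.matrix_rank(X1) + np.linalg.matrix_rank(X2)
print("X1*beta1 and X2*beta2 estimable:", r_joint == r_sum)  # True here
```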
The fundamental BLUE equation given in (3) can equivalently be expressed as follows: $Gy = \mathrm{BLUE}(X\beta \mid M)$ if and only if there exists a matrix $L \in \mathbb{R}^{p \times n}$ such that $G'$ is a solution to
$$\begin{pmatrix} V & X \\ X' & 0 \end{pmatrix} \begin{pmatrix} G' \\ L \end{pmatrix} = \begin{pmatrix} 0 \\ X' \end{pmatrix}, \quad \text{i.e.,} \quad Z \begin{pmatrix} G' \\ L \end{pmatrix} = \begin{pmatrix} 0 \\ X' \end{pmatrix}. \tag{4}$$
Partitioned matrices and their generalized inverses play an important role in the theory of linear models. According to Rao [1], the problem of inference from a linear model can be completely solved once an arbitrary generalized inverse of the partitioned matrix $Z$ has been obtained. This approach, based on the numerical evaluation of an inverse of the partitioned matrix $Z$, is known as the IPM method; see [1, 15].
Let the matrix $C = \begin{pmatrix} C_1 & C_2 \\ C_3 & C_4 \end{pmatrix}$ be an arbitrary generalized inverse of $Z$, i.e., $C$ is any matrix satisfying the equation $ZCZ = Z$, where $C_1 \in \mathbb{R}^{n \times n}$ and $C_2 \in \mathbb{R}^{n \times p}$. Then one solution to the (consistent) equation (4) is
$$\begin{pmatrix} G' \\ L \end{pmatrix} = \begin{pmatrix} C_1 & C_2 \\ C_3 & C_4 \end{pmatrix} \begin{pmatrix} 0 \\ X' \end{pmatrix} = \begin{pmatrix} C_2 X' \\ C_4 X' \end{pmatrix}. \tag{5}$$
Therefore, we see that $\mathrm{BLUE}(X\beta \mid M) = XC_2'y = XC_3y$, which is one representation for the BLUE of $X\beta$ under $M$. If we let $C$ vary through all generalized inverses of $Z$, we obtain all solutions to (4) and thereby all representations $Gy$ for the BLUE of $X\beta$ under $M$. For further reference on the submatrices $C_i$, $i = 1, 2, 3, 4$, and their statistical applications, see [16-23].
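As an illustration of the IPM representation above, the following sketch (with simulated, hypothetical data) uses the Moore-Penrose inverse as one particular generalized inverse $C$ of $Z$ and verifies that $G = XC_2'$ satisfies the fundamental BLUE equation (3):

```python
# IPM sketch for the full model: build Z = [[V, X], [X', 0]], take one
# generalized inverse (here the Moore-Penrose inverse), read off C2 and
# form BLUE(X*beta | M) = X C2' y; the data below are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n, p = 8, 4
X = rng.standard_normal((n, p))
A = rng.standard_normal((n, 3))
V = A @ A.T                                   # singular nonnegative definite V
y = X @ rng.standard_normal(p) + A @ rng.standard_normal(3)  # y in C(X : V)

Z = np.block([[V, X], [X.T, np.zeros((p, p))]])
C = np.linalg.pinv(Z)                         # one choice of generalized inverse
C2 = C[:n, n:]                                # n x p upper-right block
G = X @ C2.T                                  # so that BLUE(X*beta | M) = G y

# Fundamental BLUE equation (3): G (X : VQ) = (X : 0) with Q = I - P_X
Q = np.eye(n) - X @ np.linalg.pinv(X)         # P_X = X X^+ projects onto C(X)
print(np.allclose(G @ X, X), np.allclose(G @ V @ Q, np.zeros((n, n))))
```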
3. SOME RESULTS ON A GENERALIZED INVERSE OF $Z$
Some explicit algebraic expressions for the submatrices of $C$ were obtained in [15, Theorem 2.3]. The purpose of this section is to extend this theorem to a $3 \times 3$ symmetric block partitioned matrix in order to obtain the BLUEs of the subparameters and their properties.
Let $D \in \{Z^-\}$ be expressed as
$$D = \begin{pmatrix} D_0 & D_1 & D_2 \\ E_1 & -F_1 & -F_2 \\ E_2 & -F_3 & -F_4 \end{pmatrix} \in \left\{ \begin{pmatrix} V & X_1 & X_2 \\ X_1' & 0 & 0 \\ X_2' & 0 & 0 \end{pmatrix}^- \right\}, \tag{6}$$
where $D_0 \in \mathbb{R}^{n \times n}$, $D_1 \in \mathbb{R}^{n \times p_1}$, $D_2 \in \mathbb{R}^{n \times p_2}$, $E_1$, $E_2$, $F_1$, $F_2$, $F_3$, $F_4$ are conformable matrices, and $\{Z^-\}$ stands for the set of all generalized inverses of $Z$. In the following theorem, we collect some properties related to the submatrices of $D$ given in (6).
Theorem 1. Let $V$, $X_1$, $X_2$, $D_0$, $D_1$, $D_2$, $E_1$, $E_2$, $F_1$, $F_2$, $F_3$, $F_4$ be defined as before and let $C(X_1) \cap C(X_2) = \{0\}$. Then the following hold:
(i) $D' = \begin{pmatrix} D_0' & E_1' & E_2' \\ D_1' & -F_1' & -F_3' \\ D_2' & -F_2' & -F_4' \end{pmatrix} \in \left\{ \begin{pmatrix} V & X_1 & X_2 \\ X_1' & 0 & 0 \\ X_2' & 0 & 0 \end{pmatrix}^- \right\}$, i.e., $D'$ is another choice of a generalized inverse.
(ii) $VD_0X_1 + X_1E_1X_1 + X_2E_2X_1 = X_1$, $X_1'D_0X_1 = 0$, $X_2'D_0X_1 = 0$.
(iii) $VD_0X_2 + X_1E_1X_2 + X_2E_2X_2 = X_2$, $X_1'D_0X_2 = 0$, $X_2'D_0X_2 = 0$.
(iv) $VD_0V + X_1E_1V + X_2E_2V = V$, $X_1'D_0V = 0$, $X_2'D_0V = 0$.
(v) $VD_1X_1' = X_1F_1X_1' + X_2F_3X_1'$, $X_1'D_1X_1' = X_1'$, $X_2'D_1X_1' = 0$.
(vi) $VD_2X_2' = X_1F_2X_2' + X_2F_4X_2'$, $X_2'D_2X_2' = X_2'$, $X_1'D_2X_2' = 0$.
Proof: The result (i) is proved by taking transposes of both sides of (6) and using the symmetry of $Z$. We observe that the equations
$$Va + X_1b + X_2c = X_1d, \quad X_1'a = 0, \quad X_2'a = 0 \tag{7}$$
are solvable for any $d$, in which case $a = D_0X_1d$, $b = E_1X_1d$, $c = E_2X_1d$ is a solution. Substituting this solution in (7) and omitting $d$, we have (ii). To prove (iii), we can write the equations
$$Va + X_1b + X_2c = X_2d, \quad X_1'a = 0, \quad X_2'a = 0, \tag{8}$$
which are solvable for any $d$. Then $a = D_0X_2d$, $b = E_1X_2d$, $c = E_2X_2d$ is a solution. Substituting this solution in (8) and omitting $d$, we have (iii). To prove (iv), the equations
$$Va + X_1b + X_2c = Vd, \quad X_1'a = 0, \quad X_2'a = 0, \tag{9}$$
which are solvable for any $d$, are considered. In this case, one solution is $a = D_0Vd$, $b = E_1Vd$, $c = E_2Vd$. If we substitute this solution in (9) and omit $d$, we have (iv). In view of the assumption $C(X_1) \cap C(X_2) = \{0\}$, we can consider the equations
$$Va + X_1b + X_2c = 0, \quad X_1'a = X_1'd_1, \quad X_2'a = 0 \tag{10}$$
and
$$Va + X_1b + X_2c = 0, \quad X_1'a = 0, \quad X_2'a = X_2'd_2 \tag{11}$$
for the proofs of (v) and (vi), respectively; see [18, Theorem 7.4.8]. In this case $a = D_1X_1'd_1$, $b = -F_1X_1'd_1$, $c = -F_3X_1'd_1$ is a solution for (10), and $a = D_2X_2'd_2$, $b = -F_2X_2'd_2$, $c = -F_4X_2'd_2$ is a solution for (11). Substituting these solutions into the corresponding equations and omitting $d_1$ and $d_2$, we obtain the required results.
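The identities of Theorem 1 lend themselves to a direct numerical check. Below is a sketch under assumed random data, with the Moore-Penrose inverse of the block matrix in (6) serving as one particular choice of $D$:

```python
# Numerical spot-check of Theorem 1 (ii) and (v); the generalized inverse D of
# the 3x3 block matrix in (6) is taken as the Moore-Penrose inverse, and the
# example matrices are hypothetical (random X1, X2 give C(X1) ∩ C(X2) = {0}).
import numpy as np

rng = np.random.default_rng(2)
n, p1, p2 = 8, 2, 3
X1 = rng.standard_normal((n, p1))
X2 = rng.standard_normal((n, p2))
A = rng.standard_normal((n, 5))
V = A @ A.T

Z = np.block([
    [V,    X1,                 X2                ],
    [X1.T, np.zeros((p1, p1)), np.zeros((p1, p2))],
    [X2.T, np.zeros((p2, p1)), np.zeros((p2, p2))],
])
D = np.linalg.pinv(Z)
D0 = D[:n, :n]
D1, D2 = D[:n, n:n + p1], D[:n, n + p1:]
E1, E2 = D[n:n + p1, :n], D[n + p1:, :n]
F1 = -D[n:n + p1, n:n + p1]
F3 = -D[n + p1:, n:n + p1]

# (ii): V D0 X1 + X1 E1 X1 + X2 E2 X1 = X1 and X1' D0 X1 = 0, X2' D0 X1 = 0
print(np.allclose(V @ D0 @ X1 + X1 @ E1 @ X1 + X2 @ E2 @ X1, X1))
print(np.allclose(X1.T @ D0 @ X1, 0), np.allclose(X2.T @ D0 @ X1, 0))
# (v): V D1 X1' = X1 F1 X1' + X2 F3 X1' and X1' D1 X1' = X1'
print(np.allclose(V @ D1 @ X1.T, X1 @ F1 @ X1.T + X2 @ F3 @ X1.T))
print(np.allclose(X1.T @ D1 @ X1.T, X1.T))
```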
4. IPM METHOD FOR SUBPARAMETERS
The fundamental BLUE equation given in (4) can accordingly be written for $Ay$ being the BLUE of an estimable $K\beta$; that is, $Ay = \mathrm{BLUE}(K\beta \mid M)$ if and only if there exists a matrix $L \in \mathbb{R}^{p \times n}$ such that $A'$ is a solution to
$$Z \begin{pmatrix} A' \\ L \end{pmatrix} = \begin{pmatrix} 0 \\ K' \end{pmatrix}. \tag{12}$$
Now, assume that $X_1\beta_1$ and $X_2\beta_2$ are estimable under $M$. If we take $K = (X_1 : 0)$ and $K = (0 : X_2)$, respectively, in equation (12), we get the BLUE equations of the subparameters $X_1\beta_1$ and $X_2\beta_2$: $G_1y = \mathrm{BLUE}(X_1\beta_1 \mid M)$ and $G_2y = \mathrm{BLUE}(X_2\beta_2 \mid M)$ if and only if there exist $L_1 \in \mathbb{R}^{p_1 \times n}$, $L_2 \in \mathbb{R}^{p_2 \times n}$, $L_3 \in \mathbb{R}^{p_1 \times n}$, $L_4 \in \mathbb{R}^{p_2 \times n}$ such that $G_1'$ and $G_2'$ are solutions to the following equations, respectively:
$$\begin{pmatrix} V & X_1 & X_2 \\ X_1' & 0 & 0 \\ X_2' & 0 & 0 \end{pmatrix} \begin{pmatrix} G_1' \\ L_1 \\ L_2 \end{pmatrix} = \begin{pmatrix} 0 \\ X_1' \\ 0 \end{pmatrix} \tag{13}$$
and
$$\begin{pmatrix} V & X_1 & X_2 \\ X_1' & 0 & 0 \\ X_2' & 0 & 0 \end{pmatrix} \begin{pmatrix} G_2' \\ L_3 \\ L_4 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ X_2' \end{pmatrix}. \tag{14}$$
Therefore, the following theorem can be given to determine the BLUEs of the subparameters by the IPM method.
Theorem 2. Consider the general partitioned linear model $M$ and the matrix $D$ given in (6). Suppose that $C(X_1) \cap C(X_2) = \{0\}$. Then
$$\mathrm{BLUE}(X_1\beta_1 \mid M) = X_1D_1'y = X_1E_1y \quad \text{and} \quad \mathrm{BLUE}(X_2\beta_2 \mid M) = X_2D_2'y = X_2E_2y. \tag{15}$$
Proof: The general solution of the matrix equation given in (13) is
$$\begin{pmatrix} G_1' \\ L_1 \\ L_2 \end{pmatrix} = \begin{pmatrix} D_0 & D_1 & D_2 \\ E_1 & -F_1 & -F_2 \\ E_2 & -F_3 & -F_4 \end{pmatrix} \begin{pmatrix} 0 \\ X_1' \\ 0 \end{pmatrix} + \left[ I - \begin{pmatrix} D_0 & D_1 & D_2 \\ E_1 & -F_1 & -F_2 \\ E_2 & -F_3 & -F_4 \end{pmatrix} \begin{pmatrix} V & X_1 & X_2 \\ X_1' & 0 & 0 \\ X_2' & 0 & 0 \end{pmatrix} \right] U,$$
where $U = (U_1' : U_2' : U_3')'$ is an arbitrary conformable matrix, and thereby we get
$$G_1y = X_1D_1'y + U_1'(I - VD_0' - X_1D_1' - X_2D_2')y - U_2'X_1'D_0'y - U_3'X_2'D_0'y.$$
Here $y$ can be written as $y = X_1l_1 + X_2l_2 + VQl_3$ for some $l_1$, $l_2$ and $l_3$, since the model $M$ is assumed to be consistent. From Theorem 1, we see that
$$U_1'(I - VD_0' - X_1D_1' - X_2D_2')(X_1l_1 + X_2l_2 + VQl_3) = 0,$$
$$U_2'X_1'D_0'(X_1l_1 + X_2l_2 + VQl_3) = 0,$$
$$U_3'X_2'D_0'y = U_3'X_2'D_0'(X_1l_1 + X_2l_2 + VQl_3) = 0.$$
Moreover, according to Theorem 1 (i), we can replace $D_1'$ by $E_1$. Therefore, $\mathrm{BLUE}(X_1\beta_1 \mid M) = X_1D_1'y = X_1E_1y$ is obtained. $\mathrm{BLUE}(X_2\beta_2 \mid M) = X_2D_2'y = X_2E_2y$ is obtained in a similar way.
The following results are easily obtained from Theorem 1 (v) and (vi) under $M$:
$$E[\mathrm{BLUE}(X_1\beta_1 \mid M)] = X_1\beta_1 \quad \text{and} \quad E[\mathrm{BLUE}(X_2\beta_2 \mid M)] = X_2\beta_2, \tag{16}$$
$$\mathrm{Cov}[\mathrm{BLUE}(X_1\beta_1 \mid M)] = X_1F_1X_1' \quad \text{and} \quad \mathrm{Cov}[\mathrm{BLUE}(X_2\beta_2 \mid M)] = X_2F_4X_2', \tag{17}$$
$$\mathrm{Cov}[\mathrm{BLUE}(X_1\beta_1 \mid M), \mathrm{BLUE}(X_2\beta_2 \mid M)] = X_1F_2X_2' = X_1F_3'X_2'. \tag{18}$$
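To illustrate Theorem 2 and the formulas (17)-(18), the sketch below (again with hypothetical simulated data, and with the Moore-Penrose inverse as the generalized inverse $D$, so that $D_1' = E_1$ and $D_2' = E_2$ exactly) computes the subparameter BLUEs and checks the covariance identities:

```python
# Sketch of Theorem 2 and (17)-(18) under assumed data: subparameter BLUEs are
# X1 D1' y and X2 D2' y, with covariance matrices built from the F-blocks of D.
import numpy as np

rng = np.random.default_rng(3)
n, p1, p2 = 8, 2, 3
X1 = rng.standard_normal((n, p1))
X2 = rng.standard_normal((n, p2))
A = rng.standard_normal((n, 5))
V = A @ A.T
y = X1 @ rng.standard_normal(p1) + X2 @ rng.standard_normal(p2) \
    + A @ rng.standard_normal(5)              # consistent observation

Z = np.block([
    [V,    X1,                 X2                ],
    [X1.T, np.zeros((p1, p1)), np.zeros((p1, p2))],
    [X2.T, np.zeros((p2, p1)), np.zeros((p2, p2))],
])
D = np.linalg.pinv(Z)
D1, D2 = D[:n, n:n + p1], D[:n, n + p1:]
F1 = -D[n:n + p1, n:n + p1]
F2 = -D[n:n + p1, n + p1:]
F4 = -D[n + p1:, n + p1:]

blue1 = X1 @ D1.T @ y                         # BLUE(X1 beta1 | M)
blue2 = X2 @ D2.T @ y                         # BLUE(X2 beta2 | M)

# (17): Cov(BLUE(Xi betai)) = (Xi Di') V (Xi Di')' equals Xi Fi Xi'
print(np.allclose(X1 @ D1.T @ V @ D1 @ X1.T, X1 @ F1 @ X1.T))
print(np.allclose(X2 @ D2.T @ V @ D2 @ X2.T, X2 @ F4 @ X2.T))
# (18): cross-covariance equals X1 F2 X2'
print(np.allclose(X1 @ D1.T @ V @ D2 @ X2.T, X1 @ F2 @ X2.T))
```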
5. ADDITIVE DECOMPOSITION OF THE BLUEs OF SUBPARAMETERS
The purpose of this section is to give some additive properties of the BLUEs of the subparameters under $M$.
Theorem 3. Consider the model $M$ and assume that $X_1\beta_1$ and $X_2\beta_2$ are estimable under $M$. Then $\mathrm{BLUE}(X_1\beta_1 \mid M) + \mathrm{BLUE}(X_2\beta_2 \mid M)$ is always the BLUE of $X\beta$ under $M$.
Proof: Let $\mathrm{BLUE}(X_1\beta_1 \mid M)$ and $\mathrm{BLUE}(X_2\beta_2 \mid M)$ be given as in (15). Then we can write
$$\mathrm{BLUE}(X_1\beta_1 \mid M) + \mathrm{BLUE}(X_2\beta_2 \mid M) = (X_1D_1' + X_2D_2')y.$$
From Theorem 1 (v) and (vi), we see that
$$(X_1D_1' + X_2D_2')(X_1 : X_2 : VQ) = (X_1 : X_2 : 0),$$
i.e., $X_1D_1' + X_2D_2'$ satisfies the fundamental BLUE equation (3) for $X\beta$, and the observed value $(X_1D_1' + X_2D_2')y$ is unique for all $y \in C(X : VQ)$. Therefore the required result is obtained.
The following results are easily obtained from Theorem 1 (iv) and (16)-(18):
$$E[\mathrm{BLUE}(X_1\beta_1 \mid M) + \mathrm{BLUE}(X_2\beta_2 \mid M)] = X\beta,$$
$$\mathrm{Cov}[\mathrm{BLUE}(X_1\beta_1 \mid M) + \mathrm{BLUE}(X_2\beta_2 \mid M)] = X_1F_1X_1' + X_2F_4X_2' + X_1F_2X_2' + X_2F_3X_1'.$$
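Finally, a numerical sketch of the additive decomposition (with hypothetical data as before): the two subparameter BLUEs obtained from $D$ are compared against the full BLUE of $X\beta$ computed independently through the representation $X(X'W^-X)^-X'W^-y$ with $W = V + XX'$, a standard BLUE formula from the unified theory of linear estimation [1]:

```python
# Sketch of Theorem 3 under assumed data: BLUE(X1 beta1 | M) + BLUE(X2 beta2 | M)
# agrees with BLUE(X beta | M) computed via X (X' W^- X)^- X' W^- y, W = V + X X'.
import numpy as np

rng = np.random.default_rng(4)
n, p1, p2 = 8, 2, 3
X1 = rng.standard_normal((n, p1))
X2 = rng.standard_normal((n, p2))
X = np.hstack([X1, X2])
A = rng.standard_normal((n, 5))
V = A @ A.T
y = X @ rng.standard_normal(p1 + p2) + A @ rng.standard_normal(5)  # consistent y

p = p1 + p2
Z = np.block([[V, X], [X.T, np.zeros((p, p))]])
D = np.linalg.pinv(Z)                          # X = (X1 : X2) gives the 3x3 form
D1, D2 = D[:n, n:n + p1], D[:n, n + p1:]

blue1 = X1 @ D1.T @ y                          # BLUE(X1 beta1 | M), Theorem 2
blue2 = X2 @ D2.T @ y                          # BLUE(X2 beta2 | M), Theorem 2

W = V + X @ X.T                                # C(W) = C(X : V)
Winv = np.linalg.pinv(W)
blue_full = X @ np.linalg.pinv(X.T @ Winv @ X) @ X.T @ Winv @ y

print(np.allclose(blue1 + blue2, blue_full))   # True: additive decomposition
```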
REFERENCES
[1] Rao, C. R., Unified theory of linear estimation. Sankhyā, Ser. A, 33, 371-394, 1971. [Corrigendum (1972), 34, p. 194 and p. 477.]
[2] Nurhonen, M., Puntanen, S., Effect of deleting an observation on the equality of the OLSE and BLUE. Linear Algebra and its Applications, 176, 131-136, 1992.
[3] Werner, H. J., Yapar, C., Some equalities for estimations of partial coefficients under a general linear regression model. Linear Algebra and its Applications, 237/238, 395-404, 1996.
[4] Bhimasankaram, P., Saharay, R., On a partitioned linear model and some associated reduced models. Linear Algebra and its Applications, 264, 329-339, 1997.
[5] Gross, J., Puntanen, S., Estimation under a general partitioned linear model. Linear Algebra and its Applications, 321, 131-144, 2000.
[6] Zhang, B., Liu, B., Lu, C., A study of the equivalence of the BLUEs between a partitioned singular linear model and its reduced singular linear models. Acta Mathematica Sinica, English Series, 20(3), 557-568, 2004.
[7] Tian, Y., Puntanen, S., On the equivalence of estimations under a general linear model and its transformed models. Linear Algebra and its Applications, 430, 2622-2641, 2009.
[8] Tian, Y., On properties of BLUEs under general linear regression models. Journal of Statistical Planning and Inference, 143, 771-782, 2013.
[9] Baksalary, J. K., Pordzik, P. R., Inverse-partitioned-matrix method for the general Gauss-Markov model with linear restrictions. Journal of Statistical Planning and Inference, 23, 133-143, 1989.
[10] Rao, C. R., Least squares theory using an estimated dispersion matrix and its application to measurement of signals. In: Le Cam, L. M., Neyman, J., eds., Proc. Fifth Berkeley Symposium on Mathematical Statistics and Probability, Berkeley, California, 1965/1966, Vol. 1, Univ. of California Press, Berkeley, 355-372, 1967.
[11] Zyskind, G., On canonical forms, non-negative covariance matrices and best and simple least squares linear estimators in linear models. The Annals of Mathematical Statistics, 38, 1092-1109, 1967.
[12] Rao, C. R., Representations of best linear unbiased estimators in the Gauss-Markoff model with a singular dispersion matrix. Journal of Multivariate Analysis, 3, 276-292, 1973.
[13] Alalouf, I. S., Styan, G. P. H., Characterizations of estimability in the general linear model. The Annals of Statistics, 7, 194-200, 1979.
[14] Puntanen, S., Styan, G. P. H., Isotalo, J., Matrix Tricks for Linear Statistical Models: Our Personal Top Twenty. Springer, Heidelberg, 2011.
[15] Rao, C. R., A note on the IPM method in the unified theory of linear estimation. Sankhyā, Ser. A, 34, 285-288, 1972.
[16] Drygas, H., A note on the inverse-partitioned-matrix method in linear regression analysis. Linear Algebra and its Applications, 67, 275-277, 1985.
[17] Hall, F. J., Meyer, C. D., Generalized inverses of the fundamental bordered matrix used in linear estimation. Sankhyā, Ser. A, 37, 428-438, 1975. [Corrigendum (1978), 40, p. 399.]
[18] Harville, D. A., Matrix Algebra from a Statistician's Perspective. Springer, New York, 1997.
[19] Isotalo, J., Puntanen, S., Styan, G. P. H., A useful matrix decomposition and its statistical applications in linear regression. Communications in Statistics - Theory and Methods, 37, 1436-1457, 2008.
[20] Pringle, R. M., Rayner, A. A., Generalized Inverse Matrices with Applications to Statistics. Hafner Publishing Company, New York, 1971.
[21] Rao, C. R., Some recent results in linear estimation. Sankhyā, Ser. B, 34, 369-378, 1972.
[22] Rao, C. R., Linear Statistical Inference and its Applications, 2nd Ed. Wiley, New York, 1973.
[23] Werner, H. J., C. R. Rao's IPM method: a geometric approach. In: New Perspectives in Theoretical and Applied Statistics (M. L. Puri, J. P. Vilaplana, W. Wertz, eds.), Wiley, New York, 367-382, 1987.