Mixed-effects models in theory and practice
Part 2: Linear mixed-effects models

Lauri Mehtätalo
Associate Professor in Applied Statistics, University of Eastern Finland, School of Computing
Docent in Forest Biometrics, University of Helsinki, Department of Forest Sciences

7–9 October 2015 / IBS-DR Biometry Workshop, Würzburg, Germany

Lauri Mehtätalo (UEF) · Linear mixed-effects models · October 2015, Würzburg
Outline
1. Model for single observation
2. Matrix formulation
3. Estimation
4. Prediction of random effects
5. Model diagnostics
6. Inference
7. Multiple nested levels
8. Multiple crossed levels
When mixed effects?
Grouped data:
- The groups are a sample from a population of groups,
- especially if interest lies in the population of groups, not only in the ones present in the data,
- and especially if the sample size per group is small.
Different grouping structures:
- justified either from the predictive or the inferential point of view.
Possible grouping structures:
- a single level of grouping
- multiple nested levels
- multiple crossed levels
- combinations of these
Variance component model
We assume the following model for the response y_ki of observation i in group k:

    y_ki = µ + b_k + ε_ki    (1)

- µ is a fixed population mean (the mean of the group means),
- b_k are i.i.d. random group effects with b_k ~ N(0, σ_b²),
- ε_ki are i.i.d. observation-level residuals with ε_ki ~ N(0, σ²),
- and b_k is independent of ε_ki.

The part b_k + ε_ki is called the random part and µ the fixed part.
The model parameters are µ, σ_b², and σ².
Once the parameters have been estimated, the group effects b_k can be predicted using the BLUP (Best Linear Unbiased Predictor).
Example 6.1
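The variance component model (1) can be fitted with standard mixed-model software. A minimal sketch, assuming the lme4 package is available (the data here are simulated, not from the course examples):

```r
# Simulate data from model (1): y_ki = mu + b_k + eps_ki,
# with mu = 10, sigma_b^2 = 4, sigma^2 = 1.
library(lme4)

set.seed(1)
K <- 200                                     # number of groups
n <- 10                                      # observations per group
group <- factor(rep(1:K, each = n))
b <- rnorm(K, mean = 0, sd = 2)              # random group effects b_k
y <- 10 + b[group] + rnorm(K * n, sd = 1)    # responses
d <- data.frame(y = y, group = group)

# Random constant per group; REML estimation is the lme4 default.
fit <- lmer(y ~ 1 + (1 | group), data = d)

mu_hat       <- fixef(fit)[1]                # estimate of mu
sigma_b2_hat <- VarCorr(fit)$group[1, 1]     # estimate of sigma_b^2
sigma2_hat   <- sigma(fit)^2                 # estimate of sigma^2
```

The predicted group effects (the BLUPs mentioned on the slide) are available from the fitted object via `ranef(fit)`.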
Mixed-effects model with random constant
The fixed part µ of the variance component model is replaced by a linear function of fixed predictors:

    y_ki = β_1 + β_2 x_ki^(2) + … + β_p x_ki^(p) + b_k + ε_ki    (2)

The fixed part β_1 + β_2 x_ki^(2) + … + β_p x_ki^(p) now expresses the mean dependence of y on x over the groups, i.e., in a typical group.
b_k and ε_ki are as in the variance component model.
Reorganizing the terms as

    y_ki = (β_1 + b_k) + β_2 x_ki^(2) + … + β_p x_ki^(p) + ε_ki

shows that we are assuming a model where the y–x relationship is similar in all groups up to a level shift.
Example 6.2.
Variance and covariance between observations from the same group
The variance of a single observation is:

    var(y_ki) = var(β_1 + … + β_p x_ki^(p) + b_k + ε_ki)
              = var(b_k + ε_ki)
              = var(b_k) + var(ε_ki)
              = σ_b² + σ²

Consider observations i and i′ from group k:

    cov(y_ki, y_ki′) = cov(β_1 + … + β_p x_ki^(p) + b_k + ε_ki, β_1 + … + β_p x_ki′^(p) + b_k + ε_ki′)
                     = cov(b_k + ε_ki, b_k + ε_ki′)
                     = cov(b_k, b_k) + cov(b_k, ε_ki′) + cov(b_k, ε_ki) + cov(ε_ki, ε_ki′)
                     = var(b_k)
                     = σ_b²

The parameter σ_b² therefore has a double interpretation: the variance of the group-specific constants, or the covariance of observations from the same group.
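The two identities above can be checked by simulation in base R; an illustrative sketch with arbitrary values σ_b² = 4 and σ² = 1:

```r
# Draw many groups, two observations per group, from y_ki = mu + b_k + eps_ki.
set.seed(2)
K  <- 1e5
b  <- rnorm(K, sd = 2)            # b_k with sigma_b^2 = 4
y1 <- 5 + b + rnorm(K, sd = 1)    # observation i  of each group
y2 <- 5 + b + rnorm(K, sd = 1)    # observation i' of the same group

var(y1)      # close to sigma_b^2 + sigma^2 = 5
cov(y1, y2)  # close to sigma_b^2 = 4: the within-group covariance
```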
Multiple random effects
Allowing random effects on the other coefficients as well leads to

    y_ki = β_1 + β_2 x_ki^(2) + … + β_p x_ki^(p) + b_k^(1) + b_k^(2) x_ki^(2) + … + b_k^(p) x_ki^(p) + ε_ki    (3)

The fixed part β_1 + β_2 x_ki^(2) + … + β_p x_ki^(p) is as before.
We define b_k′ = (b_k^(1), …, b_k^(p)) and assume that the b_k are i.i.d. and independent of ε_ki, with

    b_k ~ N(0, D•)

where D• is a positive-definite p × p variance–covariance matrix of the random effects.
The model parameters are β_1, …, β_p, σ², and the p(p+1)/2 variances and covariances of D•.
Alternative formulations
Reorganizing the terms of model (3) gives

    y_ki = (β_1 + b_k^(1)) + (β_2 + b_k^(2)) x_ki^(2) + … + (β_p + b_k^(p)) x_ki^(p) + ε_ki,

which emphasizes that a random effect is associated with each fixed coefficient.
A third equivalent formulation is

    y_ki = c_k^(1) + c_k^(2) x_ki^(2) + … + c_k^(p) x_ki^(p) + ε_ki,

where

    c_k = (c_k^(1), …, c_k^(p))′ ~ N(β, D•),

i.e., all parameters are random, and the fixed parameters specify their expected values.
Random constant and slope
Including random effects for all predictors easily leads to an overparameterized model, so random effects are commonly assigned only to a few primary (observation-level) predictors.
A commonly applicable model is the one with a random constant and a random slope:

    y_ki = (β_1 + b_k^(1)) + (β_2 + b_k^(2)) x_ki + ε_ki,    (4)

where b_k′ = (b_k^(1), b_k^(2)) and b_k ~ N(0, D•) with

    D• = [ var(b_k^(1))             cov(b_k^(1), b_k^(2)) ]
         [ cov(b_k^(1), b_k^(2))    var(b_k^(2))          ]    (5)

Homework: Find the implicitly assumed formulas for var(y_ki) and cov(y_ki, y_ki′) for this model.
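Model (4) corresponds to the `(x | group)` random-effects term in lme4. A minimal sketch on simulated data, assuming lme4 is available (here the random constant and slope are simulated independently, although the model also allows them to covary):

```r
library(lme4)

set.seed(3)
K <- 100; n <- 20
group <- factor(rep(1:K, each = n))
x  <- runif(K * n, 0, 10)
b1 <- rnorm(K, sd = 1.5)   # random constants b_k^(1)
b2 <- rnorm(K, sd = 0.5)   # random slopes    b_k^(2)
y  <- (2 + b1[group]) + (1 + b2[group]) * x + rnorm(K * n, sd = 1)
d  <- data.frame(y, x, group)

# (x | group) requests a random constant and slope per group together with
# their covariance, i.e., an unstructured 2 x 2 matrix D as in (5).
fit <- lmer(y ~ x + (x | group), data = d)
VarCorr(fit)   # estimates of D and the residual standard deviation
```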
Population and group-level parameters
The random effects can be thought of either (i) as part of the systematic part or (ii) as part of the residual errors.
Using the group-level coefficients β̂_j + b̃_k^(j) in prediction leads to group-level predictions ỹ_ki and group-level residuals y_ki − ỹ_ki.
Using the population-level coefficients β̂_j (with the predicted random effects set to their mean, zero) in prediction leads to population-level predictions ẏ_ki and population-level residuals y_ki − ẏ_ki.
Example
The data spati of package lmfor includes 4747 tree-specific measurements from 66 Scots pine stands in North Karelia. Each tree was measured for the diameter growth within 1–5 years prior to the measurement (id1) and for the diameter growth within 6–10 years prior to the measurement (id2). We pretend to be in the situation 5 years earlier, and therefore call id1 the future diameter growth and id2 the past diameter growth. We model the dependence of the future growth on the past growth.
For a first look at the data, we plot id1 against id2. We also add plot-specific fits of a simple linear model id1_i = β_0 + β_2 id2_i + e_i.
[Figure: scatter plot of future diameter growth (id1, vertical axis, 0–70) against past diameter growth (id2, horizontal axis, 0–80), with one fitted regression line per sample plot.]
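The figure described above can be reproduced roughly as follows; a sketch assuming package lmfor is installed and that the stand identifier column of spati is named `plot` (the column names id1 and id2 come from the text; the grouping-column name is an assumption):

```r
library(lmfor)
data(spati)

# Scatter of future growth (id1) against past growth (id2)
plot(spati$id2, spati$id1, col = "gray",
     xlab = "Past diameter growth", ylab = "Future diameter growth")

# One plot-specific OLS fit id1 = beta_0 + beta_2 * id2 per stand
for (k in unique(spati$plot)) {
  abline(lm(id1 ~ id2, data = spati[spati$plot == k, ]))
}
```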
Example
Data spati of package lmfor includes 4747 tree-specific measurements from 66 Scots
pine stands in North Carelia. Each tree was measured for the diameter growth within 1-5
years prior to the measurement (id1) and for the diameter growth within 6-10 years prior to
the measurement. We pretend the situation 5 years before, and therefore call id1 the future
diameter growth and id2 as past diameter growth. We model the dependence of future
growth on past growth.
For a first look at the data, we plot id1 on id2. We also add plot-specific fits of a simple
linear model id1i = β0 + β2 id2i + ei .
70
●
●
50
40
30
20
0
●
●
●
●
●
●
●
●
●
● ● ●
●
● ●●
●
●
● ●●
●
●
●
●
●
●
●
●
● ●● ● ●
●●●
●
●
●
●
●
●●
●
●
●
● ●● ●●●● ●
●●
●●●
●
● ●●●●
●
●
● ●●
●
●●
●
●
●
●
●
[Figure: scatter plot of future diameter growth (y-axis, 0–70) against past diameter growth (x-axis, 0–80).]
Lauri Mehtätalo (UEF)    Linear mixed-effects models    October 2015, Würzburg    12 / 49
Fits and predictions from three models
[Figure: three scatter plots of future diameter growth against past diameter growth, one panel per model — variance component model, random constant, and random constant and slope — each showing the fitted lines and plot-level predictions.]
id1_ki = µ + b_k + ε_ki                                      (1)
id1_ki = β0 + β1 id2_ki + b_k + ε_ki                         (2)
id1_ki = β0 + β1 id2_ki + b_0k + b_1k id2_ki + ε_ki          (3)

lmm1 <- lme(id1 ~ 1, random = ~1|plot, data = spati)
lmm2 <- update(lmm1, model = id1 ~ id2)
lmm3 <- update(lmm2, random = ~id2|plot)
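The variance component model (1) can be illustrated with a short simulation: data are generated as y_ki = µ + b_k + ε_ki and the two variance components are then recovered with the classical one-way ANOVA method-of-moments estimators (E[MSW] = σ²_ε, E[MSB] = σ²_ε + n·σ²_b). This is a standalone sketch in Python using only the standard library (the slides themselves use R's nlme and the spati data; the numbers below are simulated, not from spati).

```python
import random
import statistics

random.seed(1)

mu, sd_b, sd_e = 20.0, 4.0, 2.0   # true mean, between-plot SD, residual SD
K, n = 200, 10                     # K groups ("plots"), n observations each

# Simulate y_ki = mu + b_k + eps_ki
data = []
for k in range(K):
    b_k = random.gauss(0.0, sd_b)
    data.append([mu + b_k + random.gauss(0.0, sd_e) for _ in range(n)])

group_means = [statistics.mean(g) for g in data]
grand_mean = statistics.mean(group_means)

# Within-group mean square: unbiased for sigma_e^2
msw = sum((y - statistics.mean(g)) ** 2 for g in data for y in g) / (K * (n - 1))
# Between-group mean square: expectation sigma_e^2 + n * sigma_b^2
msb = n * sum((m - grand_mean) ** 2 for m in group_means) / (K - 1)

sigma_e2_hat = msw                 # estimate of sigma_e^2 (true value 4.0)
sigma_b2_hat = (msb - msw) / n     # estimate of sigma_b^2 (true value 16.0)

print(round(sigma_e2_hat, 2), round(sigma_b2_hat, 2))
```

With K = 200 plots the estimates land close to the true values 4.0 and 16.0; `lme(id1 ~ 1, random = ~1|plot)` fits the same model by (RE)ML rather than by these moment estimators.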
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●●
●
●
●
●
●●
●
●
●
●●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●●● ●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●●
●
●
●
●
●
● ●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●●
●
●
●
●
●
●
●
●
●
● ●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●●
●
●
●●
●●
●
●
●●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
● ●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●●●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●● ●● ●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
0
Random constant
●
20
Variance component model
●
●
●
●
●
●
●
●
● ● ●
●
● ●●
●
●
● ●●
●
●
●
●
●
●
●
●
●
● ●●● ● ●
●
●
● ● ●●●
●● ●
●
●
●
● ●● ●●●● ●
●●
●
●
●
●
●
●
●● ●
●
●
●
● ●●
●
●● ●●●
● ●
●●
●● ● ●
●
●●
●
●
●
●
●
●
●
●
●
●
● ●
●
●●
●●
●●
●
●●
●●● ●
●●
●●
●●
●
●
●
●●●●
●
●
●
●
●
● ● ●●
●
● ● ●
●●●
● ●
● ●
●
●
●●●
●
●
●
●
●● ●●●
●
●
●● ●
● ●● ●
●●●
● ●●
●
●●
●●●
●
● ●●
● ●●
●●●● ●
●
●
●
●● ●
● ● ●
● ●
●●●
●●
●
●●
●●
●
●
●
●●
●
●
●●
●
●
●●
●
●●
●
●
●
● ● ●●
●●
●●
●
●
● ●●
●●
●
●
●●●
●
●●●●
●
●
●●●●●
●
●
●●●●
●
●
● ●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●●
●●●●●
●
●
●● ●●
●
●
●● ●
●
●●
●
●
●●
●●
●
●
●●
●●
●
●
●
●●
●
●
●●
●●●
●
● ●●●
●●
●
●
●
●●●
●
●
●
●
●
● ● ●
● ●●
●●
●
●
●●
●
●
●
●
●
●
●●
●
●
●●●
●●
●
●●●●●
●
●●
●
●
●●
●
●●
●
●
●
●
●
●
●
●●●
●●● ●●● ●
●
●
●
●●
●
●
●
●●●
●
●●
●
●
●
●●●
●
●
●
●
●●
●
●
●
● ●
●
●●
●
●
●
●
●
●
●●
●
●●
● ●●
●
●●
●
●
●
●
●
●
●
●●
●
●
●
●
●●
●●
●●
●
●
●●
●
●
●
●●
●
●
●
●●
●
●●
●
●
●
●
●●
●
●
●
●●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●●
●
●
●
●
●
●
● ●●●●
●●
●
●
●
●
●
●
●
●●
●
●
●
●
● ●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
● ●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
● ● ●● ●
●
●
●
●
●● ●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●●●
●
●
●
●
●
●
●●
●
●
●
●
●●
●●
●
●
●
●●●
●
● ● ●●●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●●
●
●
●
●
●●
●
●
●
●
●
●
●
●●●
●
●
●●
● ●●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●●
●
●
●
●
●
●
●●●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●●
●
●
●
●
●●●●
●
●
●
●
●
●
●
●●●
●
●
●
●
●
●●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●●●●
●
●
●
●
●
●
●
●
●
●
●
●
●
● ●●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●● ●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●●●●
●
●
●
●
●
●
●●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●● ●● ●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●● ●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●●
●
●
●
●●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
● ●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●●
●
●
●
●
●
●
●
●
● ●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●●
●
●●
●●
●
●
●●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
● ●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●●●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●● ●● ●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●●
●
●
●
●
●
●
●
●
0
20
20
40
60
40
●
60
●
●
80
Past diameter growth
80
0
●
●
80
Past diameter growth
Past diameter growth
id1ki = µ + bk + εki                                   (1)
id1ki = β0 + β1 id2ki + bk + εki                       (2)
id1ki = β0 + β1 id2ki + bk(1) + bk(2) id2ki + εki      (3)

lmm1<-lme(id1 ∼ 1, random=∼ 1|plot, data=spati)
lmm2<-update(lmm1, model=id1 ∼ id2)
lmm3<-update(lmm2, random= ∼ id2|plot)

Lauri Mehtätalo (UEF)    Linear mixed-effects models    October 2015, Würzburg    13 / 49
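The variance component model (1) implies a constant within-plot correlation σb²/(σb² + σ²) between any two trees of the same plot. A small simulation sketch of this in Python/NumPy (rather than the slides' R; parameter values are made up):

```python
import numpy as np

# Illustrative check (not from the slides): in the variance component model
# y_ki = mu + b_k + eps_ki, the within-group (intraclass) correlation is
# sigma_b^2 / (sigma_b^2 + sigma^2). Parameter values below are made up.
rng = np.random.default_rng(0)
mu, sigma_b, sigma = 10.0, 2.0, 1.0
K = 2000                                        # many plots, two trees each

b = rng.normal(0.0, sigma_b, size=K)            # plot effects b_k
y1 = mu + b + rng.normal(0.0, sigma, size=K)    # first tree per plot
y2 = mu + b + rng.normal(0.0, sigma, size=K)    # second tree per plot

icc_empirical = np.corrcoef(y1, y2)[0, 1]
icc_theory = sigma_b**2 / (sigma_b**2 + sigma**2)
print(icc_empirical, icc_theory)
```

With these made-up values the theoretical intraclass correlation is 4/5, and the empirical correlation across plots should land close to it.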
Outline
1
Model for single observation
2
Matrix formulation
3
Estimation
4
Prediction of random effects
5
Model diagnostics
6
Inference
7
Multiple nested levels
8
Multiple crossed levels
Lauri Mehtätalo (UEF)
Linear mixed-effects models
October 2015, Würzburg
14 / 49
Matrix formulation for a single group
Model 3 for group k can be written as

yk = Xk β + Zk bk + εk    (6)

where

yk = (yk1, yk2, …, yknk)′,   β = (β1, β2, …, βp)′,
bk = (bk(1), bk(2), …, bk(p))′,   εk = (εk1, εk2, …, εknk)′,

          ⎡ 1  xk1(2)   ⋯  xk1(p)  ⎤
Xk = Zk = ⎢ 1  xk2(2)   ⋯  xk2(p)  ⎥
          ⎢ ⋮    ⋮      ⋱    ⋮     ⎥
          ⎣ 1  xknk(2)  ⋯  xknk(p) ⎦

var(bk) = D•, and var(εk) = Rk = σ² Ink×nk.

The other models are obtained as special cases, by dropping some elements from β and/or bk and columns from Xk and Zk.
Lauri Mehtätalo (UEF)    Linear mixed-effects models    October 2015, Würzburg    15 / 49
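For a concrete (made-up) group, Xk is simply an intercept column bound to the covariate columns; a minimal NumPy sketch, not from the slides:

```python
import numpy as np

# Sketch of building X_k (which equals Z_k when every coefficient also gets
# a random effect) for one group with n_k = 3 and p = 3. Values are made up.
x2 = np.array([3.0, 5.0, 7.0])   # covariate x^(2) for group k
x3 = np.array([1.0, 4.0, 9.0])   # covariate x^(3) for group k
Xk = np.column_stack([np.ones_like(x2), x2, x3])  # intercept + covariates
```

Dropping columns of this matrix (and the matching entries of β or bk) yields the simpler models, as noted above.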
Means, variances and covariances
The expected value for a single group k is

E(yk) = E(Xk β + Zk bk + εk) = E(Xk β) + Zk E(bk) + E(εk) = Xk β

The variance is

var(yk) = var(Xk β + Zk bk + εk)
        = var(Xk β) + var(Zk bk) + var(εk)
        = Zk var(bk) Zk′ + var(εk)
        = Zk D• Zk′ + Rk

The normality of random effects and residuals, and the linearity of the model, further yield

yk ∼ N(Xk β, Zk D• Zk′ + Rk).

The covariance between the random effects and the observations of group k is

cov(bk, yk′) = cov(bk, (Xk β + Zk bk + εk)′)
             = cov(bk, (Xk β)′) + cov(bk, (Zk bk)′) + cov(bk, εk′)
             = 0 + cov(bk, bk′) Zk′ + 0
             = D• Zk′.

Lauri Mehtätalo (UEF)    Linear mixed-effects models    October 2015, Würzburg    16 / 49
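The variance and covariance identities above can be sanity-checked by Monte Carlo. An illustrative NumPy sketch (not from the slides; the design matrix and variances are made up) for a single group with a scalar random constant, so D• = σb² and cov(bk, yk′) = D• Zk′:

```python
import numpy as np

# Monte Carlo sketch of var(y_k) = Z_k D Z_k' + R_k and cov(b_k, y_k') = D Z_k'
# for one group with a scalar random constant. All values are illustrative.
rng = np.random.default_rng(1)
n_k = 4
Xk = np.column_stack([np.ones(n_k), np.arange(n_k, dtype=float)])
beta = np.array([1.0, 0.5])
Zk = np.ones((n_k, 1))                    # random constant only
sigma_b2, sigma2 = 2.0, 1.0
D = np.array([[sigma_b2]])
Rk = sigma2 * np.eye(n_k)

reps = 200_000
b = rng.normal(0.0, np.sqrt(sigma_b2), size=reps)
eps = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n_k))
y = Xk @ beta + b[:, None] * Zk.T + eps   # reps draws of the n_k-vector y_k

V_theory = Zk @ D @ Zk.T + Rk             # Z_k D Z_k' + R_k
V_emp = np.cov(y, rowvar=False)           # empirical var(y_k)
cov_theory = (D @ Zk.T).ravel()           # D Z_k'
cov_emp = np.array([np.cov(b, y[:, i])[0, 1] for i in range(n_k)])
```

The empirical mean should match Xk β, the empirical covariance matrix should match Zk D Zk′ + Rk, and the empirical cov(bk, yki) should match D Zk′.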
Example: the model with random constant
The model with random constant is specified by defining

Zk = (1, 1, …, 1)′ = 1nk,   bk = [bk(1)],   Dk = σb²

This gives

var(yk) = Zk Dk Zk′ + Rk = 1nk σb² 1nk′ + σ² Ink×nk

          ⎡ σb²  σb²  ⋯  σb² ⎤   ⎡ σ²  0   ⋯  0  ⎤
        = ⎢ σb²  σb²  ⋯  σb² ⎥ + ⎢ 0   σ²  ⋯  0  ⎥
          ⎢  ⋮    ⋮   ⋱   ⋮  ⎥   ⎢ ⋮   ⋮   ⋱  ⋮  ⎥
          ⎣ σb²  σb²  ⋯  σb² ⎦   ⎣ 0   0   ⋯  σ² ⎦

          ⎡ σb²+σ²   σb²     ⋯   σb²    ⎤
        = ⎢  σb²    σb²+σ²   ⋯   σb²    ⎥
          ⎢   ⋮       ⋮      ⋱    ⋮     ⎥
          ⎣  σb²     σb²     ⋯  σb²+σ²  ⎦

Lauri Mehtätalo (UEF)    Linear mixed-effects models    October 2015, Würzburg    17 / 49
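This compound-symmetric structure can be built directly; a minimal NumPy sketch with made-up values (σb² = 2, σ² = 1, nk = 3):

```python
import numpy as np

# Direct construction of var(y_k) for the random-constant model above:
# V = 1 sigma_b^2 1' + sigma^2 I, i.e. sigma_b^2 + sigma^2 on the diagonal
# and sigma_b^2 everywhere off the diagonal. Values are illustrative.
n_k = 3
sigma_b2, sigma2 = 2.0, 1.0
ones = np.ones((n_k, 1))
V = ones @ np.array([[sigma_b2]]) @ ones.T + sigma2 * np.eye(n_k)
print(V)
```

Every pair of observations in the group shares the covariance σb², which is exactly what a common random constant induces.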
Relaxing the assumptions on var(ε)
Parametric modeling of non-constant variance through the diagonal elements of var(εk) is possible. See examples 6.9 and 6.10.
Parametric modeling of covariance structures (e.g. temporal or spatial) through the non-diagonal elements of var(εk) is possible.
Within-group covariance structures caused by an additional level of grouping are treated through multilevel mixed-effects models.
Combinations of variance and covariance structures are naturally possible.
Lauri Mehtätalo (UEF)    Linear mixed-effects models    October 2015, Würzburg    18 / 49
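As an illustration of such relaxations (my own construction, not the slides' examples 6.9–6.12), a non-constant-variance diagonal can be combined with an AR(1) correlation structure, giving Rk = S C S with S the diagonal matrix of residual standard deviations and C the correlation matrix:

```python
import numpy as np

# Sketch of a relaxed R_k (illustrative, not examples 6.9-6.12):
# non-constant variances on the diagonal combined with AR(1) correlation.
n_k = 4
sd = np.array([1.0, 1.5, 2.0, 2.5])         # non-constant residual SDs
rho = 0.6                                   # AR(1) correlation parameter
i = np.arange(n_k)
C = rho ** np.abs(i[:, None] - i[None, :])  # AR(1) correlation matrix
Rk = np.diag(sd) @ C @ np.diag(sd)          # var(eps_k) = S C S
```

The result is symmetric and positive definite, with sd² on the diagonal and covariances sd_i · sd_j · ρ^|i−j| off it.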
Matrix formulation for all data
The model for all data of K groups can be written as

y = Xβ + Zb + ε    (7)

by defining

    ⎡ y1 ⎤       ⎡ X1 ⎤       ⎡ Z1  0   ⋯  0  ⎤       ⎡ b1 ⎤       ⎡ ε1 ⎤
y = ⎢ y2 ⎥,  X = ⎢ X2 ⎥,  Z = ⎢ 0   Z2  ⋯  0  ⎥,  b = ⎢ b2 ⎥,  ε = ⎢ ε2 ⎥
    ⎢ ⋮  ⎥       ⎢ ⋮  ⎥       ⎢ ⋮   ⋮   ⋱  ⋮  ⎥       ⎢ ⋮  ⎥       ⎢ ⋮  ⎥
    ⎣ yK ⎦       ⎣ XK ⎦       ⎣ 0   0   ⋯  ZK ⎦       ⎣ bK ⎦       ⎣ εK ⎦

Lauri Mehtätalo (UEF)    Linear mixed-effects models    October 2015, Würzburg    19 / 49
Matrix formulation for all data
We have

var(b) = var((b1′, b2′, …, bK′)′) = blockdiag(D1, D2, …, DK) = IK×K ⊗ D•

and

var(ε) = var((ε1′, ε2′, …, εK′)′) = blockdiag(R1, R2, …, RK).

The blocks of R are not identical (each Rk is nk × nk), but the parameters specifying them (σ² etc.) are (usually) common to all groups.
See examples 6.11 and 6.12.
Lauri Mehtätalo (UEF)    Linear mixed-effects models    October 2015, Würzburg    20 / 49
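The identity var(b) = IK×K ⊗ D• — a block diagonal of K identical copies of D• — can be checked numerically; a small NumPy sketch with a made-up 2 × 2 D•:

```python
import numpy as np

# Illustrative check that stacking K identical copies of D on the block
# diagonal equals the Kronecker product I_{KxK} (x) D. Values are made up.
K = 3
D = np.array([[2.0, 0.5],
              [0.5, 1.0]])            # a made-up D* (random constant + slope)
kron_form = np.kron(np.eye(K), D)     # I_{KxK} (x) D*

block_form = np.zeros((2 * K, 2 * K)) # build the block diagonal by hand
for k in range(K):
    block_form[2 * k:2 * k + 2, 2 * k:2 * k + 2] = D
```

Both constructions give the same 2K × 2K matrix, which is why the slide can write the block-diagonal var(b) compactly as a Kronecker product.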
Variance of all data

The n × n variance-covariance matrix of y is

$$
\operatorname{var}(\mathbf{y}) = \operatorname{var}(\mathbf{X}\boldsymbol\beta + \mathbf{Z}\mathbf{b} + \boldsymbol\varepsilon)
= \operatorname{var}(\mathbf{Z}\mathbf{b} + \boldsymbol\varepsilon)
= \operatorname{var}(\mathbf{Z}\mathbf{b}) + \operatorname{var}(\boldsymbol\varepsilon)
= \mathbf{Z}\mathbf{D}\mathbf{Z}' + \mathbf{R}.
$$

We can think of the random effects as part of the residual and define e = Zb + ε. We get

$$ \mathbf{y} = \mathbf{X}\boldsymbol\beta + \mathbf{Z}\mathbf{b} + \boldsymbol\varepsilon = \mathbf{X}\boldsymbol\beta + \mathbf{e}, $$

where var(e) = ZDZ' + R.

Therefore, the random-effects model is a special case of the extended linear model, where the correlation between observations of the same group is modelled through the variance-covariance matrix var(y) = ZDZ' + R.

Linearity of the model and normality of b and ε further yield y ∼ N(Xβ, ZDZ' + R), which provides the starting point for the (RE)ML estimation of the model parameters.
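The marginal covariance ZDZ' + R is a one-liner once the stacked matrices exist. A sketch for a random-intercept model with made-up variance values (two groups of sizes 3 and 2):

```python
import numpy as np

sigma2, sigma2_b = 1.0, 0.5          # assumed residual and intercept variances
Z = np.zeros((5, 2))
Z[:3, 0] = 1.0                        # group 1 block
Z[3:, 1] = 1.0                        # group 2 block
D = sigma2_b * np.eye(2)              # var(b): one random intercept per group
R = sigma2 * np.eye(5)                # independent residuals

V = Z @ D @ Z.T + R                   # var(y) = ZDZ' + R

# covariance sigma2_b within a group, 0 between groups, sigma2 + sigma2_b on the diagonal
assert np.isclose(V[0, 1], sigma2_b) and V[0, 4] == 0.0
```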
Estimation

We use the model formulation

$$ \mathbf{y} = \mathbf{X}\boldsymbol\beta + \mathbf{e}, $$

where b ∼ N(0, σ²D) and ε ∼ N(0, σ²R), so that

$$ \operatorname{var}(\mathbf{e}) = \operatorname{var}(\mathbf{y}) = \sigma^2\left(\mathbf{Z}\mathbf{D}(\theta_D)\mathbf{Z}' + \mathbf{R}(\theta_R)\right) = \sigma^2\mathbf{V}(\theta). $$

If the residual errors ε_ki are independent with constant variance, R = I_N (with no parameters), where N = ∑_k n_k.

The likelihood based on y ∼ N(Xβ, σ²V(θ)) is

$$
\begin{aligned}
l(\boldsymbol\beta,\sigma^2,\theta)
&= -\frac{N}{2}\ln(2\pi) - \frac{1}{2}\ln\left|\sigma^2\mathbf{V}\right| - \frac{1}{2\sigma^2}(\mathbf{y}-\mathbf{X}\boldsymbol\beta)'\mathbf{V}^{-1}(\mathbf{y}-\mathbf{X}\boldsymbol\beta) \\
&= -\frac{N}{2}\ln(2\pi) - \frac{N}{2}\ln\sigma^2 - \frac{1}{2}\sum_{k=1}^{K}\ln|\mathbf{V}_k|
- \frac{1}{2\sigma^2}\sum_{k=1}^{K}(\mathbf{y}_k-\mathbf{X}_k\boldsymbol\beta)'\mathbf{V}_k^{-1}(\mathbf{y}_k-\mathbf{X}_k\boldsymbol\beta),
\end{aligned}
$$

where the sums in the second formulation result from the independence of the groups.
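The equality of the two formulations (the full-data log-likelihood and the sum over independent groups) can be checked numerically. A sketch with made-up parameter values, absorbing σ² into the V_k for brevity:

```python
import numpy as np

rng = np.random.default_rng(2)
sizes = (3, 2); N = sum(sizes)
sigma2, sigma2_b = 1.0, 0.5          # assumed variance components
beta = np.array([2.0, 0.5])          # assumed fixed effects

Xs = [np.column_stack([np.ones(n), rng.normal(size=n)]) for n in sizes]
Vs = [sigma2 * np.eye(n) + sigma2_b * np.ones((n, n)) for n in sizes]  # V_k
ys = [Xk @ beta + rng.multivariate_normal(np.zeros(n), Vk)
      for Xk, Vk, n in zip(Xs, Vs, sizes)]

def ll_full():
    # full-data form: one big block-diagonal V
    X = np.vstack(Xs); y = np.concatenate(ys)
    V = np.zeros((N, N)); r = 0
    for Vk in Vs:
        n = Vk.shape[0]; V[r:r + n, r:r + n] = Vk; r += n
    res = y - X @ beta
    return (-N / 2 * np.log(2 * np.pi) - 0.5 * np.linalg.slogdet(V)[1]
            - 0.5 * res @ np.linalg.solve(V, res))

def ll_groups():
    # per-group form: sum of K independent contributions
    out = -N / 2 * np.log(2 * np.pi)
    for Xk, Vk, yk in zip(Xs, Vs, ys):
        rk = yk - Xk @ beta
        out += -0.5 * np.linalg.slogdet(Vk)[1] - 0.5 * rk @ np.linalg.solve(Vk, rk)
    return out

assert np.isclose(ll_full(), ll_groups())
```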
Profiling the likelihood

The task of maximizing the likelihood can be simplified by profiling, where some parameters are written as functions of the others. Substituting these functions into the likelihood eliminates the corresponding parameters from it.

Substituting $\widehat{\sigma}^2(\boldsymbol\beta,\theta) = \frac{1}{N}(\mathbf{y}-\mathbf{X}\boldsymbol\beta)'\mathbf{V}^{-1}(\mathbf{y}-\mathbf{X}\boldsymbol\beta)$ into the likelihood eliminates σ², yielding $l(\boldsymbol\beta, \widehat{\sigma}^2(\boldsymbol\beta,\theta), \theta)$.

Thereafter, β can be eliminated (from both places) by using the known ML/GLS solution

$$ \widehat{\boldsymbol\beta}(\theta) = \left(\mathbf{X}'\mathbf{V}^{-1}\mathbf{X}\right)^{-1}\mathbf{X}'\mathbf{V}^{-1}\mathbf{y}, $$

to get the profiled likelihood, which is a function of θ only:

$$ l_p(\theta) = l\left(\widehat{\boldsymbol\beta}(\theta),\, \widehat{\sigma}^2(\widehat{\boldsymbol\beta}(\theta),\theta),\, \theta\right). $$

The solution proceeds by maximizing the profiled likelihood w.r.t. θ to get $\widehat\theta_{ML}$, substituting this numerical solution into $\widehat{\boldsymbol\beta}(\theta)$ to get the numerical estimate of β, and finally substituting both solutions into the expression for σ².

For the model with a random constant, θ = σ_b²/σ²; see example 6.14.
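For a fixed θ, the two profiling steps, β̂(θ) by GLS and σ̂² from the weighted residuals, can be sketched directly. The grouping, θ value, and true parameters below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
n, theta = 20, 0.5
groups = np.repeat([0, 1], 10)
X = np.column_stack([np.ones(n), rng.normal(size=n)])
# V(theta): identity plus theta on every within-group pair
V = np.eye(n) + theta * (groups[:, None] == groups[None, :])
y = X @ np.array([1.0, 2.0]) + rng.multivariate_normal(np.zeros(n), V)

Vinv = np.linalg.inv(V)
beta_hat = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)  # GLS beta-hat(theta)
res = y - X @ beta_hat
sigma2_hat = res @ Vinv @ res / n                           # profiled sigma^2-hat
```

Maximizing l_p(θ) would then wrap these two lines inside a one-dimensional optimizer over θ.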
Restricted Maximum Likelihood

The model is rewritten as

$$ \mathbf{K}'\mathbf{y} \sim N\left(\mathbf{0},\, \sigma^2\mathbf{K}'\mathbf{V}\mathbf{K}\right), $$

where $\mathbf{K}_{n\times(n-p)}$ has full column rank and fulfills K'X = 0. The previous discussion on K is valid here; see example 5.14.

The REML log-likelihood is

$$
\begin{aligned}
l_R(\sigma^2,\theta)
&= -\frac{N-p}{2}\ln(2\pi) - \frac{1}{2}\ln\left|\sigma^2\mathbf{K}'\mathbf{V}\mathbf{K}\right|
- \frac{1}{2\sigma^2}(\mathbf{K}'\mathbf{y})'\left(\mathbf{K}'\mathbf{V}\mathbf{K}\right)^{-1}\mathbf{K}'\mathbf{y} \\
&= -\frac{N-p}{2}\ln(2\pi) - \frac{N-p}{2}\ln\sigma^2 - \frac{1}{2}\ln\left|\mathbf{K}'\mathbf{V}\mathbf{K}\right|
- \frac{1}{2\sigma^2}\mathbf{y}'\mathbf{K}\left(\mathbf{K}'\mathbf{V}\mathbf{K}\right)^{-1}\mathbf{K}'\mathbf{y}.
\end{aligned}
$$

The likelihood is a function of θ and σ² only, so only σ² needs to be profiled out. We use the REML estimator

$$ \widehat{\sigma}^2_R = \frac{1}{N-p}(\mathbf{K}'\mathbf{y})'\left(\mathbf{K}'\mathbf{V}\mathbf{K}\right)^{-1}\mathbf{K}'\mathbf{y} $$

to get

$$ l_{p,R}(\theta) = l_R\left(\widehat{\sigma}^2_R(\theta),\, \theta\right). $$

This function is first maximized w.r.t. θ. Thereafter, the estimate of σ² is computed. Finally, the GLS estimator is used to get the estimate of β.

Implementation with the data of example 6.14 is left as an exercise.
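A valid contrast matrix K (full column rank, K'X = 0) can be obtained from the SVD of X: the left singular vectors beyond the first p span the orthogonal complement of the column space of X. A numpy sketch with a made-up design:

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 10, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # assumed full-rank design

# X = U diag(s) Vt; the trailing n - p columns of U are orthogonal to col(X)
U, s, Vt = np.linalg.svd(X)
K = U[:, p:]                     # n x (n - p), orthonormal columns

assert K.shape == (n, n - p)
assert np.allclose(K.T @ X, 0.0)           # error contrasts: K'X = 0
assert np.linalg.matrix_rank(K) == n - p   # full column rank
```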
Prediction of random effects

Model fitting does not provide estimates of b_k; however, these may be of much interest.

The standard method is to predict the random effects using the Best Linear Unbiased Predictor (BLUP).

BLUP is a weighted average of the population-level prediction and a plot-specific fixed-effects prediction. The weights are based on the estimated variance components in such an optimal way that the variance of the predictor is minimized.
BLUP - the general case

Consider a random vector h which is partitioned as

$$ \mathbf{h} = \begin{bmatrix}\mathbf{h}_1\\ \mathbf{h}_2\end{bmatrix} $$

and has the following mean and variance:

$$ \begin{bmatrix}\mathbf{h}_1\\ \mathbf{h}_2\end{bmatrix} \sim
\left(\begin{bmatrix}\boldsymbol\mu_1\\ \boldsymbol\mu_2\end{bmatrix},\;
\begin{bmatrix}\mathbf{V}_1 & \mathbf{V}_{12}\\ \mathbf{V}_{12}' & \mathbf{V}_2\end{bmatrix}\right). $$

Consider a situation where the value of h₂ has been observed and one wants to predict the value of the unobserved vector h₁.

The Best Linear Unbiased Predictor (BLUP) of h₁ is

$$ \widetilde{\mathbf{h}}_1 = \operatorname{BLUP}(\mathbf{h}_1) = \boldsymbol\mu_1 + \mathbf{V}_{12}\mathbf{V}_2^{-1}(\mathbf{h}_2 - \boldsymbol\mu_2). \qquad (8) $$

The prediction variance is

$$ \operatorname{var}(\widetilde{\mathbf{h}}_1 - \mathbf{h}_1) = \mathbf{V}_1 - \mathbf{V}_{12}\mathbf{V}_2^{-1}\mathbf{V}_{12}'. \qquad (9) $$

If h has a multivariate normal distribution, the BLUP is also the best predictor (BP).

If the means and variances are replaced by estimates, the resulting predictor is called the estimated or empirical BLUP (EBLUP).
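Equations (8) and (9) are easy to sanity-check in the scalar case, where they reduce to the familiar regression of h₁ on h₂. A small sketch with made-up moments:

```python
# bivariate illustration of (8)-(9); all moments are made-up values
mu1, mu2 = 1.0, 3.0
V1, V2, V12 = 2.0, 4.0, 1.5
h2 = 5.0                               # the observed component

h1_blup = mu1 + V12 / V2 * (h2 - mu2)  # eq. (8): 1 + 0.375 * 2 = 1.75
pred_var = V1 - V12**2 / V2            # eq. (9): 2 - 0.5625 = 1.4375
```

Note that the prediction variance never exceeds the unconditional variance V₁: observing h₂ can only reduce uncertainty about h₁.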
BLUP for prediction of random effects

Consider a single group k. The observed part of h is y_k and the unobserved part is b_k.

We start from the following known properties:

$$ \begin{bmatrix}\mathbf{b}_k\\ \mathbf{y}_k\end{bmatrix} \sim
\left(\begin{bmatrix}\mathbf{0}\\ \mathbf{X}_k\boldsymbol\beta\end{bmatrix},\;
\begin{bmatrix}\mathbf{D}_\bullet & \mathbf{D}_\bullet\mathbf{Z}_k'\\ \mathbf{Z}_k\mathbf{D}_\bullet & \mathbf{Z}_k\mathbf{D}_\bullet\mathbf{Z}_k' + \mathbf{R}_k\end{bmatrix}\right) $$

to predict b_k using y_k.

The BLUP of the random effects and its prediction error variance become

$$ \widetilde{\mathbf{b}}_k = \mathbf{D}_\bullet\mathbf{Z}_k'\left(\mathbf{Z}_k\mathbf{D}_\bullet\mathbf{Z}_k' + \mathbf{R}_k\right)^{-1}(\mathbf{y}_k - \mathbf{X}_k\boldsymbol\beta) $$

$$ \operatorname{var}(\widetilde{\mathbf{b}}_k - \mathbf{b}_k) = \mathbf{D}_\bullet - \mathbf{D}_\bullet\mathbf{Z}_k'\left(\mathbf{Z}_k\mathbf{D}_\bullet\mathbf{Z}_k' + \mathbf{R}_k\right)^{-1}\mathbf{Z}_k\mathbf{D}_\bullet. $$

Henderson's mixed model equations give an alternative, equivalent solution, which is computationally more efficient:

$$ \widetilde{\mathbf{b}}_k = \left(\mathbf{Z}_k'\mathbf{R}_k^{-1}\mathbf{Z}_k + \mathbf{D}_\bullet^{-1}\right)^{-1}\mathbf{Z}_k'\mathbf{R}_k^{-1}(\mathbf{y}_k - \mathbf{X}_k\boldsymbol\beta) $$

$$ \operatorname{var}(\widetilde{\mathbf{b}}_k - \mathbf{b}_k) = \left(\mathbf{Z}_k'\mathbf{R}_k^{-1}\mathbf{Z}_k + \mathbf{D}_\bullet^{-1}\right)^{-1}. $$

See examples 6.17-6.19.
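That the direct and Henderson forms agree can be verified numerically; the Woodbury identity is what makes them equivalent. A sketch for one group with made-up D•, R_k, and design:

```python
import numpy as np

rng = np.random.default_rng(5)
n_k = 4
X_k = np.column_stack([np.ones(n_k), rng.normal(size=n_k)])
Z_k = X_k.copy()                               # random intercept and slope
beta = np.array([1.0, 0.5])                    # assumed fixed effects
D = np.array([[0.4, 0.1], [0.1, 0.2]])         # assumed D_bullet
R_k = 0.8 * np.eye(n_k)                        # assumed residual covariance
y_k = (X_k @ beta + Z_k @ rng.multivariate_normal([0, 0], D)
       + rng.normal(scale=np.sqrt(0.8), size=n_k))

r = y_k - X_k @ beta
V_k = Z_k @ D @ Z_k.T + R_k
b_direct = D @ Z_k.T @ np.linalg.solve(V_k, r)          # marginal form

M = Z_k.T @ np.linalg.inv(R_k) @ Z_k + np.linalg.inv(D)
b_henderson = np.linalg.solve(M, Z_k.T @ np.linalg.inv(R_k) @ r)  # Henderson form

assert np.allclose(b_direct, b_henderson)
# the two prediction-variance expressions also coincide (Woodbury identity)
assert np.allclose(D - D @ Z_k.T @ np.linalg.solve(V_k, Z_k @ D), np.linalg.inv(M))
```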
Example: the variance component model

For group k with n observations,

$$ \mathbf{V}_k = \operatorname{var}(\mathbf{y}_k) = \sigma^2\mathbf{I}_n + \sigma_b^2\mathbf{J}_n, $$

where J_n is a square n × n matrix of ones.

The inverse is

$$ \mathbf{V}_k^{-1} = \frac{1}{\sigma^2}\left(\mathbf{I}_n - \frac{\sigma_b^2}{\sigma^2 + n\sigma_b^2}\mathbf{J}_n\right). $$

The covariance is

$$ \operatorname{cov}(b_k, \mathbf{y}_k') = \sigma_b^2\mathbf{1}_n', $$

where 1_n is a column vector of ones.

The BLUP of the random effect for group k becomes

$$
\begin{aligned}
\widetilde{b}_k
&= \sigma_b^2\mathbf{1}_n'\,\frac{1}{\sigma^2}\left(\mathbf{I}_n - \frac{\sigma_b^2}{\sigma^2 + n\sigma_b^2}\mathbf{J}_n\right)(\mathbf{y}_k - \mu\mathbf{1}_n) \\
&= \frac{n\sigma_b^2}{\sigma^2 + n\sigma_b^2}(\bar{y}_k - \mu)
= \frac{\sigma_b^2}{\frac{1}{n}\sigma^2 + \sigma_b^2}(\bar{y}_k - \mu).
\end{aligned}
$$

The predicted value for group k becomes

$$ \widetilde{y}_k = \mu + \widetilde{b}_k
= \frac{\frac{1}{n}\sigma^2}{\frac{1}{n}\sigma^2 + \sigma_b^2}\,\mu
+ \frac{\sigma_b^2}{\frac{1}{n}\sigma^2 + \sigma_b^2}\,\bar{y}_k. $$
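For the variance component model the matrix BLUP collapses to the shrinkage form above; both routes can be compared numerically (σ², σ_b², n, and μ are made-up values):

```python
import numpy as np

rng = np.random.default_rng(6)
sigma2, sigma2_b, n, mu = 1.0, 0.5, 5, 10.0   # assumed parameter values
# simulate one group: common random effect plus independent residuals
y = mu + rng.normal(scale=np.sqrt(sigma2_b)) + rng.normal(scale=np.sqrt(sigma2), size=n)

# matrix BLUP: cov(b_k, y_k') V_k^{-1} (y_k - mu 1_n)
V = sigma2 * np.eye(n) + sigma2_b * np.ones((n, n))
b_matrix = sigma2_b * np.ones(n) @ np.linalg.solve(V, y - mu)

# closed form: shrink the group mean toward the population mean
b_closed = sigma2_b / (sigma2 / n + sigma2_b) * (y.mean() - mu)

assert np.isclose(b_matrix, b_closed)
```

The shrinkage factor σ_b²/(σ²/n + σ_b²) tends to 1 as n grows, so with many observations the BLUP approaches the raw group mean deviation.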
Understanding the BLUP

The BLUP of b_k (of any length!) can be computed even from a single observation per group.

BLUP is marginally unbiased over groups.

However, over repeated samples from a particular group, BLUP will not average to the true b_k of that group. As a shrinkage estimator, it is conditionally biased.

A conditionally unbiased alternative would be the group mean or a group-specific regression. However, the variance of such a prediction would be high, especially for groups with a small number of observations, and it could not be computed at all if n_k ≤ p.

BLUP can be calculated afterwards for a new group by using observation(s) of y from that group. This is a widely useful property, e.g., in predicting the height-diameter relationship of a forest stand from a small number of measured sample tree heights.

Bayesian taste: the model provides the prior information, which is updated using local information from the group.

The BLUP for a group with no observations is E(b_k) = 0, i.e., the population-level prediction.
Understanding the BLUP
BLUP of bk (of any length!) can be computed by using even one observation per group.
BLUP is marginally unbiased over groups
But taking repeatedly samples for a particular group, BLUP will not average to the true bk
of that group. As a shrinkage estimator, it is conditionally biased.
A conditionally unbiased alternative would be the group mean or group-specific
regression. However, the variance of such prediction would be high especially for groups
with small number of observations, and it could not be computed if nk ≤ p.
BLUP can be calculated afterwards for a new group by using observation(s) of y from that
group. This is a widely useful property e.g. in prediction of height-diameter relationship in
a forest stand by using small number of measured sample tree heights.
Bayesian taste: the model provides the prior information, which is updated using local
information from the group.
BLUP for a group without predictors is E(bk ) = 0, i.e., the population-level prediction.
Lauri Mehtätalo (UEF)
Linear mixed-effects models
October 2015, Würzburg
31 / 49
Understanding the BLUP
BLUP of bk (of any length!) can be computed by using even one observation per group.
BLUP is marginally unbiased over groups
But taking repeatedly samples for a particular group, BLUP will not average to the true bk
of that group. As a shrinkage estimator, it is conditionally biased.
A conditionally unbiased alternative would be the group mean or group-specific
regression. However, the variance of such prediction would be high especially for groups
with small number of observations, and it could not be computed if nk ≤ p.
BLUP can be calculated afterwards for a new group by using observation(s) of y from that
group. This is a widely useful property e.g. in prediction of height-diameter relationship in
a forest stand by using small number of measured sample tree heights.
Bayesian taste: the model provides the prior information, which is updated using local
information from the group.
BLUP for a group without predictors is E(bk ) = 0, i.e., the population-level prediction.
Fixed or random: the prediction variance
Consider a model with a random constant.
The prediction error variance of the random constant is
var(b̃k − bk) = [σb² / (σb² + σ²/n)] · (σ²/n).
The prediction error variance of yki is contributed by the prediction error of the random effect and the residual variance:
var(ỹki − yki) = [σb² / (σb² + σ²/n)] · (σ²/n) + σ².
If the group effect is estimated as fixed, the estimation error variance of the group effect is
var(ḃk − bk) = σ²/n.
The corresponding prediction error variance of yki is
var(ẏki − yki) = σ²/n + σ²,
which is higher than var(ỹki − yki); the difference vanishes as n gets large.
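These two variances can be compared numerically; a small sketch with made-up variance components, showing that the BLUP's prediction error variance is always the smaller one and that the difference vanishes as n grows:

```python
def pred_var_random(n, sigma2_b, sigma2_e):
    # var(b~_k - b_k) for the BLUP of a random constant
    return sigma2_b / (sigma2_b + sigma2_e / n) * sigma2_e / n

def pred_var_fixed(n, sigma2_e):
    # var(b._k - b_k) when the group effect is estimated as fixed
    return sigma2_e / n

for n in (1, 5, 100):
    print(n, pred_var_random(n, 1.0, 4.0), pred_var_fixed(n, 4.0))
```

Adding the residual variance σ² to both lines gives the corresponding prediction error variances of yki.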
Fixed or random: the prediction variance
[Figure omitted in extraction.] From de Souza-Vismara et al. (forthcoming).
Were all assumptions met?
Assumptions on the fixed part: Is the fixed part correctly specified? Were some covariates omitted?
Assumptions on εi: variance and covariance structure.
Are the bk identically distributed? Do they correlate with any group-specific aggregates of predictors? Do they correlate with group size?
Normality of the residual errors.
Multivariate normality of bk.
Inference and tests
The null and alternative hypotheses are formulated as before:
H0: The constrained model is sufficient.
H1: The full model is significantly better than the constrained model.
The REML or ML likelihood ratio
LR = 2 ln(L2/L1) = −2 (l1 − l2)
has under the null hypothesis (at least asymptotically)
LR ∼ χ²(p − q).
Assuming that V is known, we can compute
RSS1 = (y − X1 β̂1)' V⁻¹ (y − X1 β̂1)
RSS2 = (y − X2 β̂2)' V⁻¹ (y − X2 β̂2),
where β̂1 and β̂2 are the GLS estimates of the regression coefficients of the two nested models.
Under the null hypothesis,
Fobs = [(RSS1 − RSS2)/(p − q)] / [RSS2/(n − p)] ∼ F(p − q, n − p).
However, because V is not known in reality, it is replaced with an estimate of V from a REML fit, which results in tests that are conditional on the estimate of V.
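The likelihood ratio computation itself is straightforward; a minimal Python sketch with hypothetical log-likelihood values (the chi-square survival function is written out in closed form for even degrees of freedom to keep the example self-contained):

```python
import math

def chi2_sf_even_df(x, df):
    """Survival function of the chi-square distribution for even df
    (closed form; sufficient for this illustration)."""
    assert df % 2 == 0
    k = df // 2
    return math.exp(-x / 2) * sum((x / 2) ** i / math.factorial(i) for i in range(k))

def lr_test(l1, l2, df):
    """LR = -2 (l1 - l2), referred to a chi-square with df = p - q."""
    lr = -2.0 * (l1 - l2)
    return lr, chi2_sf_even_df(lr, df)

lr, p = lr_test(-105.3, -101.9, df=2)  # hypothetical log-likelihoods
print(lr, p)
```

Remember that REML likelihoods are comparable only between models with the same fixed part.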
Inference and tests
Conditional F-tests are suggested for the fixed effects. ML-based LR tests are an alternative but should be used with caution (see Pinheiro and Bates for a discussion).
The degrees of freedom in F-tests of group-specific predictors in unbalanced datasets are unclear. SAS and SPSS use different defaults (Satterthwaite approximation) than nlme. It is not even clear whether the test statistic has an F distribution in this case.
REML likelihood ratio tests are commonly used for tests on the random parameters. However, testing whether a variance is zero is problematic because zero is at the boundary of the parameter space.
See Example 6.24.
Simulation from the null model (i.e., parametric bootstrapping) is also an alternative. See Pinheiro and Bates (2000) and Example 6.25.
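The parametric bootstrap idea can be sketched generically: simulate data from the fitted null model, recompute the LR statistic, and use the simulated distribution in place of the doubtful reference distribution. The sketch below uses a deliberately simple stand-in model (testing µ = 0 in N(µ, 1), where the LR is n·ȳ²) so that it runs without a mixed-model fitter; in practice simulate_null and lr_statistic would wrap refits of the two mixed models.

```python
import random

def bootstrap_pvalue(lr_obs, simulate_null, lr_statistic, n_boot=1000, seed=1):
    """Parametric bootstrap p-value: simulate from the fitted null model,
    recompute the LR statistic, and count how often it exceeds lr_obs."""
    rng = random.Random(seed)
    exceed = sum(lr_statistic(simulate_null(rng)) >= lr_obs for _ in range(n_boot))
    return (exceed + 1) / (n_boot + 1)

def simulate_null(rng, n=20):
    # toy null model: y ~ N(0, 1)
    return [rng.gauss(0.0, 1.0) for _ in range(n)]

def lr_statistic(y):
    # LR for H0: mu = 0 in N(mu, 1) is n * ybar^2
    n = len(y)
    ybar = sum(y) / n
    return n * ybar * ybar

p = bootstrap_pvalue(3.84, simulate_null, lr_statistic)
print(p)
```

For a variance-component test the same loop applies; only the two callables change.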
Nested two-level model
A nested two-level model with a random constant and slope at both levels of grouping is defined as
yijk = β1 + β2 xijk^(2) + ... + βp xijk^(p) + ai^(1) + ai^(2) xijk^(2) + cij^(1) + cij^(2) xijk^(2) + εijk,
where
ai = (ai^(1), ai^(2))' ∼ N(0, Da),
cij = (cij^(1), cij^(2))' ∼ N(0, Dc),
εijk ∼ N(0, σ²).
The random effects are i.i.d. and uncorrelated among levels:
cov(ai, c'ij) = 0
cov(cij, εijk) = 0.
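The nested structure can be illustrated by simulating from the model; a minimal Python sketch under the simplifying assumption that the random constant and slope are uncorrelated within each level (diagonal Da and Dc), with made-up parameter values:

```python
import random

def simulate_nested(beta=(2.0, 0.5), sd_a=(1.0, 0.2), sd_c=(0.5, 0.1),
                    sd_e=0.3, n_groups=3, n_sub=2, n_obs=4, seed=42):
    """Simulate y_ijk = b1 + b2 x + a_i1 + a_i2 x + c_ij1 + c_ij2 x + eps,
    with independent (here: uncorrelated) random effects at both levels."""
    rng = random.Random(seed)
    data = []
    for i in range(n_groups):
        a1, a2 = rng.gauss(0, sd_a[0]), rng.gauss(0, sd_a[1])  # group level
        for j in range(n_sub):
            c1, c2 = rng.gauss(0, sd_c[0]), rng.gauss(0, sd_c[1])  # subgroup level
            for k in range(n_obs):
                x = rng.uniform(0, 10)
                y = (beta[0] + a1 + c1) + (beta[1] + a2 + c2) * x + rng.gauss(0, sd_e)
                data.append((i, j, k, x, y))
    return data

rows = simulate_nested()
print(len(rows))  # 3 * 2 * 4 = 24
```

Note that a general Da or Dc with nonzero covariances would require drawing correlated pairs instead of two independent normals.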
Nested two-level model
A straightforward result of the nested grouping structure is the set of predictions at the different levels:
ỹijk^(0) = β̂1 + β̂2 xijk^(2) + ... + β̂p xijk^(p)
ỹijk^(1) = β̂1 + ãi^(1) + (β̂2 + ãi^(2)) xijk^(2) + ... + β̂p xijk^(p)
ỹijk^(2) = β̂1 + ãi^(1) + c̃ij^(1) + (β̂2 + ãi^(2) + c̃ij^(2)) xijk^(2) + ... + β̂p xijk^(p),
and the corresponding residuals
ẽijk^(0) = yijk − ỹijk^(0)
ẽijk^(1) = yijk − ỹijk^(1)
ẽijk^(2) = yijk − ỹijk^(2).
The term residual is commonly used for the highest-level residual ε̃ijk = ẽijk^(2), which is also the default in nlme.
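For a single observation the three levels of prediction can be written out directly; a small Python sketch with hypothetical coefficient and random-effect values (p = 2, i.e., one covariate):

```python
def predictions(beta, a_i, c_ij, x):
    """Level-0/1/2 predictions for one observation with covariate x
    (random constant and slope at both levels, p = 2)."""
    p0 = beta[0] + beta[1] * x                                        # population
    p1 = (beta[0] + a_i[0]) + (beta[1] + a_i[1]) * x                  # group i
    p2 = (beta[0] + a_i[0] + c_ij[0]) + (beta[1] + a_i[1] + c_ij[1]) * x  # subgroup ij
    return p0, p1, p2

print(predictions((2.0, 0.5), (0.3, -0.1), (0.1, 0.02), x=4.0))
```

Each level simply adds the predicted random effects of that level to the coefficients of the level above.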
Matrix formulation for subgroup ij
The LME for group j within group i is
yij = Xij β + Zi,j ai + Zij cij + εij,
where
yij = (yij1, yij2, ..., yijnij)',
Zi,j = Zij = [1 xij1; 1 xij2; ...; 1 xijnij] (an nij × 2 matrix with a column of ones and the covariate),
ai = (ai^(1), ai^(2))', cij = (cij^(1), cij^(2))', εij = (εij1, εij2, ..., εijnij)',
Da = [var(ai^(1)) cov(ai^(1), ai^(2)); cov(ai^(1), ai^(2)) var(ai^(2))],
Dc = [var(cij^(1)) cov(cij^(1), cij^(2)); cov(cij^(1), cij^(2)) var(cij^(2))], and
Rij = σ² Inij×nij.
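As a sketch, the subgroup design matrices above (a column of ones and the covariate) can be built with NumPy; the function name and the covariate values are illustrative:

```python
import numpy as np

def subgroup_matrices(x_ij):
    """Design matrices for subgroup ij in the nested two-level model:
    with a random constant and slope at both levels, Z_{i,j} and Z_{ij}
    are identical [1, x] matrices of order n_ij x 2."""
    x = np.asarray(x_ij, dtype=float)
    Z = np.column_stack([np.ones_like(x), x])
    return Z, Z.copy()  # Z_{i,j} (group level) and Z_{ij} (subgroup level)

Zi_j, Zij = subgroup_matrices([1.2, 3.4, 5.6])
print(Zi_j.shape)  # (3, 2)
```

The two matrices coincide here only because the same predictor has random coefficients at both levels; in general they contain the columns of the random part of each level.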
Matrix formulation for group i
The model for the whole group i is
yi = Xi β + Zi^(a) ai + Zi^(c) ci + εi,
where
yi = (yi1', yi2', ..., yini')',
ci = (ci1', ci2', ..., cini')',
ai = (ai^(1), ai^(2))',
εi = (εi1', εi2', ..., εini')',
Zi^(a) = [Zi,1; Zi,2; ...; Zi,ni] (the subgroup matrices Zi,j stacked on top of each other), and
Zi^(c) = blockdiag(Zi1, Zi2, ..., Zini).
Matrix formulation for group i
The two separate parts of the random effects can be pooled by defining
bi = (ai', ci')', Zi = [Zi^(a) Zi^(c)].
Now the model for group i can be written as
yi = Xi β + Zi bi + εi,
where
Di = var(bi) = blockdiag(Da, Dc, ..., Dc)
(Dc occurs on the diagonal once for each subgroup of group i).
Furthermore,
Ri = var(εi) = blockdiag(Ri1, Ri2, ..., Rini).
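The block-diagonal structure of Di can be built explicitly; a minimal NumPy sketch (function name and variance values are illustrative), placing Da once and then Dc once per subgroup:

```python
import numpy as np

def var_bi(Da, Dc, n_sub):
    """D_i = var(b_i): D_a once, then D_c once for each subgroup of group i."""
    blocks = [np.asarray(Da)] + [np.asarray(Dc)] * n_sub
    dim = sum(b.shape[0] for b in blocks)
    D = np.zeros((dim, dim))
    at = 0
    for b in blocks:
        k = b.shape[0]
        D[at:at + k, at:at + k] = b  # place each block on the diagonal
        at += k
    return D

Da = np.array([[1.0, 0.1], [0.1, 0.2]])
Dc = np.array([[0.5, 0.0], [0.0, 0.05]])
print(var_bi(Da, Dc, n_sub=2).shape)  # (6, 6)
```

Ri is built the same way from the blocks Ri1, ..., Rini.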
Matrix formulation for group i
We have (again!)
var(yi) = Zi Di Zi' + Ri
and
cov(bi, yi) = Di Zi'.
The rest of the story would repeat the previous steps of model formulation for the whole data, estimation, prediction of random effects (but now we need to jointly predict all random effects related to group i), diagnostics (but now we need to consider the assumptions at all levels), and inference.
Examples 7.1-7.4.
A model with more than two levels is built in a similar manner, starting from the innermost level of grouping.
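These two covariance formulas are one-liners in matrix code; a minimal sketch with small hypothetical matrices:

```python
import numpy as np

def marginal_cov(Z, D, R):
    """var(y_i) = Z_i D_i Z_i' + R_i and cov(b_i, y_i) = D_i Z_i'."""
    V = Z @ D @ Z.T + R
    C = D @ Z.T
    return V, C

Z = np.array([[1.0, 2.0], [1.0, 5.0]])  # [1, x] design for two observations
D = np.diag([1.0, 0.1])                 # var(b_i), here diagonal for simplicity
R = 0.25 * np.eye(2)                    # residual covariance
V, C = marginal_cov(Z, D, R)
```

V and C are exactly the ingredients of the BLUP formula b̃i = Di Zi' V⁻¹ (yi − Xi β̂) seen earlier.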
Two crossed levels: A motivating example
[Figure: ring basal area (mm²) against tree age for individual trees (left); predicted year effects (middle) and tree effects (right).]
The annual tree growth (growth-ring basal area at breast height) of individual trees by calendar year (left). Predicted effects of calendar year (middle) and tree (right) from an LME with crossed random effects.
Random constant at two crossed levels
Consider the model
yij = β^(1) + β^(2) xij^(2) + ... + β^(p) xij^(p) + ai + cj + εij,
where ai ∼ N(0, σa²), cj ∼ N(0, σc²), and εij ∼ N(0, σ²) are all i.i.d. and independent of each other.
Coefficients, predictions and residuals are now defined either at the level of group i, j, or ij.
The model needs to be defined directly for the whole data.
Denote by n the number of groups at the first level and by m the number of groups at the second level, with associated group-specific random effects ai, i = 1, ..., n and cj, j = 1, ..., m.
The random effects are pooled into
a = (a1, ..., an)' and c = (c1, ..., cm)'.
Random constant at two crossed levels
The model for the whole data is
y = Xβ + Z^(a) a + Z^(c) c + ε,
where Z^(a) and Z^(c) are the model matrices of the random part.
The model matrices are constructed in the same way as for datasets with a single level of grouping: Z^(a) is the model matrix of a single-level model using the first level of grouping only, and Z^(c) is the corresponding matrix using the second level only. However, the two levels order the rows differently.
Therefore, the two matrices cannot both have a block-diagonal structure.
This implies, for example, that var(y) is not block-diagonal, that the likelihood does not simplify to a product of group-specific likelihoods, and that random effects need to be predicted for all groups simultaneously.
These issues are technical: everything that has been said about estimation, prediction, and inference remains valid.
See Examples 7.5-7.10.
Lauri Mehtätalo (UEF)
Linear mixed-effects models
October 2015, Würzburg
49 / 49
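The matrix form above can be made concrete by building Z^(a) and Z^(c) as indicator matrices and computing var(y). This is a sketch under assumed variance values, for a fully crossed design with one observation per (i, j) pair and rows sorted by the first-level group.

```python
import numpy as np

# Matrix form y = X beta + Z(a) a + Z(c) c + eps for a crossed design.
# Variance values sa2, sc2, s2 are illustrative (assumed).
n, m = 3, 2
sa2, sc2, s2 = 1.0, 0.5, 0.25    # sigma_a^2, sigma_c^2, sigma^2

i_idx = np.repeat(np.arange(n), m)   # rows sorted by level-1 group
j_idx = np.tile(np.arange(m), n)

Za = np.eye(n)[i_idx]  # N x n indicator matrix; block-diagonal in this row order
Zc = np.eye(m)[j_idx]  # N x m indicator matrix; NOT block-diagonal in this order

# var(y) = sa2 * Za Za' + sc2 * Zc Zc' + s2 * I
V = sa2 * Za @ Za.T + sc2 * Zc @ Zc.T + s2 * np.eye(n * m)

# Observations from different level-1 groups that share a level-2 group
# have covariance sc2 > 0, so V is not block-diagonal in the level-1 groups.
print(V[0, m])  # cov of obs (i=0, j=0) and (i=1, j=0): 0.5
```

Because no single ordering of the rows makes both Za and Zc block-diagonal, V has nonzero entries outside the level-1 blocks, which is why the likelihood does not factor over groups and the random effects must be predicted jointly.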