Minitab Notes for STAT 6305
Dept. of Statistics — CSU East Bay
Unit 3: Random-Effects One-Factor ANOVA
Note: This unit covers a topic for one-factor designs that O/L 6e discusses only briefly in Sect. 17.2. In industrial applications,
issues of power and parameter estimation, not included there, are of considerable importance. Consequently, we give more of the
theoretical development here than is usual in these lab notes.
3.1. Data and Worksheet Preparation
The data below are taken from a much larger study to determine the precision with which the
calcium content of turnip leaves can be determined. Here we show results for a = 4 randomly
chosen leaves with n = 4 calcium determinations on each leaf (a "balanced" design).
Leaf 1: 3.28, 3.09, 3.03, 3.03
Leaf 2: 3.52, 3.48, 3.38, 3.38
Leaf 3: 2.88, 2.80, 2.81, 2.76
Leaf 4: 3.34, 3.38, 3.23, 3.26
These data are taken from Snedecor and Cochran: Statistical Methods, 7th ed., (1980),
Iowa State University Press, page 239. Calcium concentrations are % dry weight.
We repeat the data below in a simple text format suitable for cutting and pasting. We also show one
possible method for putting these measurements into a Minitab worksheet, with all 16 observations
in a column c1 'Calcium' with leaf numbers in c2 'Leaf' ("stacked format").
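For readers following along in another language, the stacked layout is easy to mimic. Here is a Python sketch (variable names are ours, not Minitab's) showing the 16 stacked values and the leaf labels that the patterned-data expression (1:4)4 generates:

```python
# All 16 calcium measurements in one list (leaf-major, "stacked" order)
calcium = [3.28, 3.09, 3.03, 3.03,   # Leaf 1
           3.52, 3.48, 3.38, 3.38,   # Leaf 2
           2.88, 2.80, 2.81, 2.76,   # Leaf 3
           3.34, 3.38, 3.23, 3.26]   # Leaf 4

# Minitab's patterned-data expression (1:4)4 repeats each of 1..4 four times:
leaf = [i for i in range(1, 5) for _ in range(4)]

print(leaf)   # [1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4]
```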
Leaf 1  Leaf 2  Leaf 3  Leaf 4
 3.28    3.52    2.88    3.34
 3.09    3.48    2.80    3.38
 3.03    3.38    2.81    3.23
 3.03    3.38    2.76    3.26

MTB > name c1 'Calcium'
MTB > set c1
DATA> 3.28, 3.09, 3.03, 3.03
DATA> 3.52, 3.48, 3.38, 3.38
DATA> 2.88, 2.80, 2.81, 2.76
DATA> 3.34, 3.38, 3.23, 3.26
DATA> end
MTB > name c2 'Leaf'
MTB > set c2
DATA> (1:4)4
DATA> end
From a strictly computational point of view, a one-way ANOVA table for these data is made
according to the same formulas we used in Unit 2. The F test for a "group effect" is also done
exactly as in Unit 2. But the fact that the leaves are randomly selected, presumably to represent a
larger population of such leaves, makes a big difference in how we interpret the results of the
ANOVA, especially if we find a significant effect. We explore such issues in the next section.
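As a check on the arithmetic, the one-way ANOVA table can be computed from first principles. The following Python sketch (not part of the original Minitab workflow) reproduces the sums of squares, mean squares, and F ratio for the turnip-leaf data:

```python
from statistics import mean

groups = [[3.28, 3.09, 3.03, 3.03],   # Leaf 1
          [3.52, 3.48, 3.38, 3.38],   # Leaf 2
          [2.88, 2.80, 2.81, 2.76],   # Leaf 3
          [3.34, 3.38, 3.23, 3.26]]   # Leaf 4
a, n = len(groups), len(groups[0])
grand = mean(y for g in groups for y in g)

# Between-leaf and within-leaf (error) sums of squares
ss_leaf = n * sum((mean(g) - grand) ** 2 for g in groups)
ss_error = sum((y - mean(g)) ** 2 for g in groups for y in g)

ms_leaf = ss_leaf / (a - 1)
ms_error = ss_error / (a * (n - 1))
F = ms_leaf / ms_error

print(round(ss_leaf, 5), round(ss_error, 6))   # 0.88837 0.079225
print(round(F, 2))                             # 44.85
```

These values agree with the Minitab ANOVA table shown in Section 3.3.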
Problems
3.1.1. Make a worksheet as shown above. Proofread.
(a) Make dotplots of these data—four plots on the same scale. Are there any outliers?
(b) Do you see evidence that different leaves tend to have different amounts of calcium
("among leaf variation")?
(c) Does it seem that variances are the same for all four leaves ("homoscedasticity")? By hand,
perform Hartley's Fmax test of the null hypothesis that the four population variances are
the same (use the tables in O/L). Also perform Bartlett's test using Minitab menu path:
STAT ⇒ ANOVA ⇒ Test for equal variances, Response = 'Calcium', Factor = 'Leaf'.
[See O/L 6e, p462 for an explanation of Bonferroni confidence intervals. Short explanation:
These are relatively long CIs based on confidence level (100 – 5/a)%, where a = 4,
intended to give an overall error rate not exceeding 5% when the four CIs are compared.]
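For reference, Hartley's Fmax is simply the ratio of the largest to the smallest sample variance, and Bartlett's statistic can be computed by hand from the group variances. The sketch below (pure Python, offered as a cross-check rather than a substitute for doing the problem in Minitab) compares the statistic with the chi-squared critical value for a − 1 = 3 degrees of freedom:

```python
import math
from statistics import variance

leaves = [[3.28, 3.09, 3.03, 3.03],
          [3.52, 3.48, 3.38, 3.38],
          [2.88, 2.80, 2.81, 2.76],
          [3.34, 3.38, 3.23, 3.26]]

k = len(leaves)                      # number of groups (leaves)
ns = [len(g) for g in leaves]
N = sum(ns)
s2 = [variance(g) for g in leaves]   # sample variances (n - 1 denominator)

# Hartley's Fmax: ratio of the largest to the smallest sample variance
fmax = max(s2) / min(s2)

# Bartlett's statistic, compared with the chi-square(k - 1) critical value
sp2 = sum((n - 1) * v for n, v in zip(ns, s2)) / (N - k)   # pooled variance
C = 1 + (sum(1 / (n - 1) for n in ns) - 1 / (N - k)) / (3 * (k - 1))
T = ((N - k) * math.log(sp2)
     - sum((n - 1) * math.log(v) for n, v in zip(ns, s2))) / C

print(round(fmax, 2))   # 5.63
print(round(T, 2))      # about 2.1, well below the 5% critical value 7.81
```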
(d) These 16 observations show considerable variability. From what you see in the dotplot, do
you think this variability is mainly because the leaves have varying amounts of calcium,
mainly because the analytic process for measuring calcium is imprecise, or are both kinds
of variation about equally important?
3.1.2. Suppose that a formal statistical analysis shows that there are significant differences among
groups (Leaves). From the description of how and why the data were collected, do you believe it is
important to make multiple comparisons among the groups? To be specific, suppose there is strong
evidence that Leaf 3 has a lot less calcium than the other three leaves. How would you interpret this
result to someone interested in the calcium content of turnip leaves?
3.2. Distinction Between Fixed and Random-Effects Models
Fixed effects. In a fixed-effects model, we have several levels of a factor that are determined by the
investigator before the data are collected. For example, in a previous unit we considered three types
of hot dogs. The factor is the type of hot dog and there are three levels.
The general model for such a situation can be written as follows:
Yij = µ + αi + eij, where i = 1, ..., a, and j = 1, ..., ni.
Here a is the number of levels of the factor and ni is the number of observations at the ith level. If the ni are all equal to n, then we say that the design is balanced.
In the hot dog study, a = 3, n1 = 20, n2 = 17, and n3 = 17, so the design is not balanced. The
distributional assumptions are that Y1j are a random sample from N(µ + α1, σ2), the Y2j are a random
sample from N(µ + α2, σ2), and the Y3j are a random sample from N(µ + α3, σ2). All three treatment
groups are assumed to have the same standard deviation σ ("homoscedasticity"). An equivalent (and
briefer) formulation of the randomness of this model is that the eij are independently and identically
distributed (iid) with eij ~ N(0, σ2). We turn now to the nonrandom terms (parameters) of the model.
The mean of the ith group (level) is µ + αi. In practice, the parameters µ, αi, and σ are unknown. In
some applications, we try to estimate these parameters from the data Yij. But the primary question
in an ANOVA is often whether the three groups have the same mean.
•	By definition, the parameters αi are chosen so that Σi αi = 0.
•	The null hypothesis is that α1 = α2 = α3. In this case, because of this choice, it follows that α1 = α2 = α3 = 0, so all three groups have the same mean, which is µ. Also, Σi αi2 = 0.
•	The alternative hypothesis is that the αi differ. In this case, some of them must be positive and some negative in order to meet the condition Σi αi = 0. Then Σi αi2 > 0.
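A tiny numerical illustration of this parameterization, with made-up group means rather than the hot-dog data: µ is the average of the group means and each αi is the deviation of a group mean from µ, so the αi automatically sum to zero.

```python
group_means = [10.0, 12.0, 14.0]   # hypothetical means of a = 3 groups
mu = sum(group_means) / len(group_means)
alpha = [m - mu for m in group_means]   # deviations from the overall mean

print(mu)          # 12.0
print(alpha)       # [-2.0, 0.0, 2.0]
print(sum(alpha))  # 0.0
```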
Random effects. If the individual groups are randomly chosen during the study to represent a larger
population, then the model—and the kinds of conclusions we want to draw from the data—are
different. For example, the groups may be randomly chosen employees, batches of a product in
production, or (in the case of our current data) leaves of a plant. In an ANOVA model with a
random effect, the quantities that may make the groups different are always random.
In our data, the researcher collected four leaves, each randomly chosen from a turnip plant that was
also randomly chosen. The researcher then took four samples at random from each leaf and
analyzed all 16 samples for calcium content. Although each leaf is considered to be uniform as to
Ca level, the analyses for calcium are difficult and we assume subject to normally distributed
random error. We wonder whether the leaves may also differ randomly (from leaf to leaf) as to
calcium content. In general, the model for such a random-effects design is specified as follows:
Yij = µ + Ai + eij, where i = 1, ..., a, and j = 1, ..., ni.
In this case we assume that the eij are iid N(0, σ2) as before, but we also assume that the mean
values of the randomly chosen groups are also normally distributed. Specifically, we assume that Ai
are iid N(0, σA2). Then the variance of an observation is V(Yij) = σA2 + σ2, and so this variance is said
to have two components, σA2 and σ2.
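The claim that V(Yij) = σA2 + σ2 is easy to check by simulation. In this Python sketch the parameter values are arbitrary illustrative choices, not estimates from the data:

```python
import random
from statistics import variance

random.seed(1)
mu, sigma_A, sigma = 3.2, 0.25, 0.08   # arbitrary illustrative values

# Draw one measurement Yij = mu + Ai + eij from each of 20,000 random "leaves"
ys = []
for _ in range(20000):
    A = random.gauss(0, sigma_A)   # leaf effect, N(0, sigma_A^2)
    e = random.gauss(0, sigma)     # measurement error, N(0, sigma^2)
    ys.append(mu + A + e)

# The sample variance should be close to sigma_A^2 + sigma^2 = 0.0689
print(round(variance(ys), 4))
```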
In the turnip leaf example, a = 4 and ni = n = 4 (a balanced design). If we could precisely measure
the calcium content of the randomly chosen leaves, we would find that the true calcium content of
the ith randomly chosen leaf is µ + Ai. The null hypothesis in this situation is H0: σA = 0 (or at least
so close to 0 that this source of variation is unimportant) and the alternative hypothesis is Ha: σA > 0
(to an extent that is important and perhaps detectable using the data at hand).
As we mentioned earlier, it turns out that the ANOVA tables (DFs, SSs, MSs, F) for fixed and
random-effects single-factor models are computed in the same way. It is the interpretation of the
results that differs between fixed and random-effects models.
Note on terminology: Some texts refer to fixed-effects models as Model I, and to random-effects models as Model II.
Problem
3.2.1. The essential ingredients in computing an F ratio in a one-way ANOVA are the sizes, means,
and standard deviations of each of the a groups. This is true whether you have a fixed or a random
effects model. Here is a summary table from Minitab—containing the information that is necessary
for now (some unnecessary summary statistics have been omitted):
Descriptive Statistics: Calcium

Variable  Leaf  N  N*    Mean  SE Mean   StDev
Calcium      1  4   0  3.1075   0.0592  0.1184
             2  4   0  3.4400   0.0356  0.0712
             3  4   0  2.8125   0.0250  0.0499
             4  4   0  3.3025   0.0347  0.0695
(a) What command/subcommand or menu path can be used to make output similar to the
above? (In menus for Minitab 15, there is a way to select just the descriptive statistics you
want. See if you can produce output that contains exactly the information shown above.)
(b) MS(Error) in the ANOVA table can be found as
MS(Error) = [0.1184² + 0.0712² + 0.0499² + 0.0695²] / 4.
Do this computation and compare the result with the ANOVA table of the next section.
Is the divisor best explained as a = 4 or n = 4? Precisely which formulas in your textbook
simplify to this result when you take into account that this is a balanced design?
(c) MS(Group) = MS(Factor) = MS(Leaf) can be found from the information in this Minitab
display as a multiple of the variance of the four group means: 3.1075, 3.4400, 2.8125, and
3.3025. Find the variance of these means. What is the appropriate multiplier? (For our data
it happens that n = a = 4. Express the multiplier in terms of either n or a so that you have a
general statement.) What formulas in your textbook simplify to this result?
(d) Use the results of parts (b) and (c) to find the F ratio. What are the appropriate degrees of
freedom (numerator and denominator)? For the degrees of freedom give both numbers and
formulas.
(e) We will see in the next section that MS(Error) estimates σ2 and that MS(Group) estimates
σ2 + 4σA2. Use this information together with your numerical results in parts (b) and (c) to
estimate σA2. Which is larger, σ2 or σA2? Compare this finding with your speculation in
part (d) of problem 3.1.1.
3.3. Analysis of One-Factor Random-Effects Data
Here is the ANOVA table for the turnip-leaf data, treating leaves as a random effect. Notice
especially the subcommand to declare 'Leaf' as a random effect. (The last two subcommands
produce the table of expected mean squares, discussed in the next section.)
MTB > anova Calcium = Leaf;
SUBC> random Leaf;
SUBC> ems.

ANOVA: Calcium versus Leaf

Factor  Type    Levels  Values
Leaf    random       4  1, 2, 3, 4

Analysis of Variance for Calcium

Source  DF       SS       MS      F      P
Leaf     3  0.88837  0.29612  44.85  0.000
Error   12  0.07923  0.00660
Total   15  0.96759

S = 0.0812532   R-Sq = 91.81%   R-Sq(adj) = 89.77%

                              Expected Mean Square
           Variance  Error    for Each Term (using
Source    component   term    restricted model)
1 Leaf      0.07238      2    (2) + 4 (1)
2 Error     0.00660           (2)
The small P-value indicates that the random leaf effect, represented by the variance σA2, is
significantly different from 0.
Problem
3.3.1. Make a normal probability plot of the residuals from this model in order to assess the normality
assumption. In menus (STAT ⇒ ANOVA ⇒ Balanced) you can select such a plot under Graphs.
Alternatively, use an additional subcommand to store residuals: SUBC> resids c3; then make a (slightly
different style of) probability plot using MTB > pplot c3.
3.4. Estimating the Parameters of the Model
Variance estimates. Estimates of the two variances in this model are given in the Variance
Component column of the EMS table.
•	MS(Leaf) for our data is computed to be 0.29612. MS(Leaf) is a random variable. In terms of the parameters of the model, its expected value is E[MS(Leaf)] = EMS(Leaf) = σ2 + 4σA2. We regard MS(Leaf) as an estimate of EMS(Leaf).
•	Because Minitab does not print Greek letters or subscripted or superscripted symbols, EMS(Leaf) is represented in the output as (2) + 4(1), where σ2 is represented by (2) and σA2 by (1). The numbers 1 and 2 in parentheses correspond to the rows labeled Leaf and Error.
•	Similarly, MS(Error) = 0.00660 is the estimate of EMS(Error) = σ2.
•	By subtraction, the estimate of σA2 is (0.29612 – 0.00660)/4 = 0.07238. Because of the subtraction, if σA2 is near 0, its estimate might turn out to be negative. In practice, a negative estimate of σA2 is taken as an indication that σA2 must be nearly 0. (Maximum likelihood, Bayesian, and other more advanced methods of estimation avoid such inelegant negative estimates of quantities known to be nonnegative.)
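In code, the subtraction step looks like this (a Python sketch using the mean squares from the table above):

```python
n = 4                                  # observations per leaf (balanced design)
ms_leaf, ms_error = 0.29612, 0.00660   # mean squares from the ANOVA table

sigma2_hat = ms_error                   # estimate of sigma^2
sigmaA2_hat = (ms_leaf - ms_error) / n  # estimate of sigma_A^2

print(sigma2_hat, round(sigmaA2_hat, 5))   # 0.0066 0.07238
```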
Confidence intervals for σ2 and EMS(Leaf) can be obtained using the chi-squared distribution with
12 and 3 degrees of freedom, respectively. Finding a confidence interval for σA2 is more difficult:
because both components of variance, σ2 and σA2, contribute to the variability of the Yij, it is hard
to disentangle the two components cleanly.
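For instance, a 95% chi-squared interval for σ2 can be computed as follows. The sketch assumes the standard fact that SS(Error)/σ2 has a chi-squared distribution with 12 degrees of freedom; the quantiles 4.404 and 23.337 are table values for 12 df:

```python
ss_error = 0.07923                 # SS(Error) from the ANOVA table, 12 df
chi2_lo, chi2_hi = 4.404, 23.337   # 0.025 and 0.975 quantiles, chi-square(12)

# 95% CI for sigma^2: (SS/chi2_hi, SS/chi2_lo)
ci = (ss_error / chi2_hi, ss_error / chi2_lo)
print(round(ci[0], 5), round(ci[1], 5))   # 0.00339 0.01799
```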
Estimate of the grand population mean µ. The average 3.1656 of all 16 observations is an estimate
of the parameter µ of the ANOVA model.
Incorrect Confidence Interval

MTB > onet c1

One-Sample T: Calcium

Variable   N     Mean    StDev  SE Mean              95% CI
Calcium   16  3.16563  0.25398  0.06350  (3.03029, 3.30096)
But it would be a mistake to use the confidence interval based on Minitab's one-sample t procedure
because these 16 observations are not iid; they come from four groups that we have shown to differ
significantly. The variance of this "grand average" Ȳ•• for a balanced design is

V(Ȳ••) = V[µ + (1/a) Σi Ai + (1/na) Σi Σj eij] = (1/a²) Σi V(Ai) + [1/(na)²] Σi Σj V(eij)
       = (1/a²) Σi σA² + [1/(na)²] Σi Σj σ² = (1/a) σA² + (1/na) σ² = [σ² + nσA²] / na.

For our study this is V(Ȳ••) = (σ² + 4σA²)/16, which is estimated by
MS(Leaf)/16 = 0.29612/16 = 0.0185.
Thus the standard error of the estimate of µ is √0.0185 = 0.1360. Using the t distribution with
DF(Leaf) = 3 degrees of freedom, a 95% confidence interval for µ is 3.1656 ± t*(0.136) or
3.1656 ± 0.433. Here t* = 3.182 can be found in tables or using Minitab as shown below.
MTB > invcdf .975;
SUBC> t 3.

Inverse Cumulative Distribution Function

Student's t distribution with 3 DF

P( X <= x )        x
      0.975  3.18245
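The whole interval calculation can be collected in a few lines (a Python sketch of the arithmetic just described, with t* = 3.182):

```python
import math

grand_mean = 3.1656           # average of all 16 observations
ms_leaf, na = 0.29612, 16     # MS(Leaf) and total sample size n*a

se = math.sqrt(ms_leaf / na)  # standard error of the grand mean
t_star = 3.182                # t quantile, 0.975, DF(Leaf) = 3
half = t_star * se            # half-width, about 0.433

print(round(se, 4))                                              # 0.136
print(round(grand_mean - half, 3), round(grand_mean + half, 3))  # 2.733 3.598
```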
Problems
3.4.1. In estimating σA2 from a balanced design as [MS(Group) – MS(Error)]/n, it is possible to get
a negative result. This can be awkward because, of course, we know that σA2 ≥ 0.
(a) In terms of the value of the F statistic (or F ratio), when will this method give a negative
estimate of σA2?
(b) If a = n = 4 and σA2 = 0, then the F statistic has an F-distribution with numerator degrees of
freedom ν1 = a – 1 = 3 and denominator degrees of freedom ν2 = a(n – 1) = 12. Use the
command MTB > cdf 1; with the subcommand SUBC> f 3 12. to find the
probability of getting a negative estimate of σA2 in these circumstances.
3.4.2. In a fresh worksheet, generate fake data using the command MTB > random 10 c1-c5;
and the subcommand SUBC> norm 100 10. Consider the columns as a = 5 groups of n = 10
observations each. Stack the data and analyze according to a one-way random-effects model. Here
it is known that σ = 10, σ2 = 100 and σA = 0. What estimates does your analysis give? Repeat this
simulation several times as necessary until you see a negative estimate of σA2.
3.4.3. The manufacture of a plastic material involves a hardening process. The variability in
strength of the finished product is unacceptably large and engineers want to know what may be
responsible for the excessive variability. First, five Batches (B1 – B5) of raw plastic are sampled
at random. Ten specimens are then taken from each batch and "hardened." Finally, the hardness of
each of the 50 specimens is measured. There is some variability in how individual specimens react
to the hardening process, but the process of measuring hardness is known to have negligible error.
B1: 426, 539, 506, 473, 466, 506, 545, 571, 518, 420
B2: 619, 460, 420, 489, 545, 530, 553, 575, 434, 499
B3: 492, 481, 442, 515, 481, 527, 409, 441, 419, 385
B4: 505, 479, 538, 480, 550, 453, 455, 456, 466, 450
B5: 389, 502, 499, 566, 557, 493, 525, 481, 431, 470
(a) Is the variability among Batches significant?
(b) Estimate the two variance components in this study (batch and hardening). Upon which
component do you recommend that efforts to reduce product variability be concentrated?
(c) Give a 95% confidence interval for µ.
(d) These are randomly generated data (patterned roughly after a real-life situation), so the true
parameter values are known. They are: µ = 500, σ = 50, σA = 15. Comment on how well or
poorly you were able to estimate these values.
3.5. Using R to Analyze a One-Factor Design with Two Variance Components
We begin by defining the two required variables, one numerical and the other a factor variable,
making stripcharts of measurements on the four leaves, and making an ANOVA table as if Leaf
were a fixed factor. (The ANOVA table agrees with the one we made above in Minitab.)
Ca = c(3.28, 3.09, 3.03, 3.03,
3.52, 3.48, 3.38, 3.38,
2.88, 2.80, 2.81, 2.76,
3.34, 3.38, 3.23, 3.26)
Leaf = as.factor(rep(1:4, each=4))
stripchart(Ca ~ Leaf, method="stack", xlab="Calcium", ylab="Leaf",
main="Calcium Measurements on 4 Randomly Chosen Turnip Leaves")
anova(lm(Ca ~ Leaf))
Analysis of Variance Table

Response: Ca
          Df  Sum Sq Mean Sq F value    Pr(>F)
Leaf       3 0.88837 0.29612  44.853 8.52e-07 ***
Residuals 12 0.07923 0.00660
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
In R the aov function permits designating Leaf as a random effect (or error term). But, except for
the estimate of the grand mean µ, the output does not tell us anything more than we have in the
ANOVA table just above. The comments show the correspondences.
aov(Ca ~ Error(Leaf))

Call:
   aov(formula = Ca ~ Error(Leaf))

Grand Mean: 3.165625                 # estimate of µ

Stratum 1: Leaf

Terms:
                Residuals
Sum of Squares  0.8883688            # SS(Leaf)
Deg. of Freedom         3            # DF(Leaf)

Residual standard error: 0.5441718   # √MS(Leaf)

Stratum 2: Within

Terms:
                Residuals
Sum of Squares   0.079225            # SS(Resid) = SS(Error)
Deg. of Freedom        12            # DF(Resid) = DF(Error)

Residual standard error: 0.0812532   # √MS(Resid), estimate of σ
So far we have been using method of moments estimates (MMEs) of the parameters of our random-effects
model. That is, we have obtained the estimates by equating realizations of random variables
to their expected values. Statistical theory shows that estimates obtained by the method of
maximum likelihood (MLEs) often have superior properties. In this situation the two methods differ
only in the estimation of σA2. In particular, the MLE of σA2 cannot be negative. The R library nlme
(for linear and nonlinear mixed-effects models) provides MLEs and (in many cases) also associated
confidence intervals.
It is not our purpose here to explain the theory of MLEs or the general syntax of the library nlme.
For the one-factor random-effects model, computationally intensive numerical methods are required
to find MLEs. In particular, if the MLE of σA2 is very small, these methods do not produce
stable results for confidence intervals, and so confidence intervals are not printed.
Below we show the results obtained using this method. Comments show corresponding results
obtained in Section 3.4. In this problem we have very little data, so it is not surprising that
fundamentally different methods of estimation give somewhat different results. We would have to
sample more than 4 leaves to expect to get a better estimate of σA2.
require(nlme)
ml.fit = lme(Ca ~ 1, random = ~1|Leaf, method="ML")
ml.fit
Linear mixed-effects model fit by maximum likelihood
Data: NULL
Log-likelihood: 10.42853
Fixed: Ca ~ 1
(Intercept)
3.165625
Random effects:
Formula: ~1 | Leaf
(Intercept) Residual
StdDev:
0.2321046 0.0812532
Number of Observations: 16
Number of Groups: 4
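For the balanced one-factor design the MLEs actually have a closed form when the estimate of σA2 comes out positive: the MLE of µ is the grand mean, the MLE of σ2 equals MS(Error), and the MLE of σA2 equals [((a − 1)/a)·MS(Leaf) − MS(Error)]/n. This is a standard result for the balanced case (stated here as a cross-check, not taken from these notes); the Python sketch below reproduces the two standard deviations in the lme output above:

```python
import math

a, n = 4, 4
ms_leaf = 0.88837 / 3     # MS(Leaf) from the ANOVA table
ms_error = 0.07923 / 12   # MS(Error)

sigma_hat = math.sqrt(ms_error)                         # MLE of sigma
sigmaA2_hat = (((a - 1) / a) * ms_leaf - ms_error) / n  # MLE of sigma_A^2
sigmaA_hat = math.sqrt(sigmaA2_hat)

# Compare with lme: StdDev (Intercept) 0.2321046, Residual 0.0812532
print(round(sigmaA_hat, 4), round(sigma_hat, 4))   # 0.2321 0.0813
```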
intervals(ml.fit)
Approximate 95% confidence intervals

 Fixed effects:                                    # For µ:
               lower     est.    upper             #   2.73    3.17    3.60
(Intercept) 2.908925 3.165625 3.422325
attr(,"label")
[1] "Fixed effects:"

 Random Effects:
  Level: Leaf                                      # For σA:
                    lower      est.     upper      #   0.269  (CI not available)
sd((Intercept)) 0.1136245 0.2321046 0.4741278

 Within-group standard error:                      # For σ:
     lower       est.      upper                   #   .0545   .0813   .1212
0.05446151 0.08125320 0.12122475
Problems
3.5.1. In R, execute pf(1, 3, 12). Report the result and explain what it means in the setting of
a one-factor random-effect design with a = n = 4. Repeat for a design with a = 5 and n = 10.
3.5.2. Repeat the simulation of problem 3.4.2 in R using the following code. Repeat as necessary to
see F < 1 (that is, simulated data that give a negative estimate of σA2). In view of your answer to
problem 3.5.1, about how many repetitions would the average student have to make?
a = 5; n = 10
Y = rnorm(a*n, 100, 10)
Batch = as.factor(rep(1:a, each=n))
anova(lm(Y ~ Batch))
3.5.3. Use the data of problem 3.4.3 and the R code below that includes maximum likelihood
estimators of the grand mean and variance components. Comment, provide confidence intervals.
B1 = c(426, 539, 506, 473, 466, 506, 545, 571, 518, 420)
B2 = c(619, 460, 420, 489, 545, 530, 553, 575, 434, 499)
B3 = c(492, 481, 442, 515, 481, 527, 409, 441, 419, 385)
B4 = c(505, 479, 538, 480, 550, 453, 455, 456, 466, 450)
B5 = c(389, 502, 499, 566, 557, 493, 525, 481, 431, 470)
Y = c(B1, B2, B3, B4, B5); Batch = as.factor(rep(1:5, each=10))
stripchart(Y ~ Batch, method="stack", xlab="Hardness", ylab="Batch",
main="Hardness Measurements on 5 Batches of Plastic")
anova(lm(Y ~ Batch))
require(nlme)
ml.fit = lme(Y ~ 1, random = ~1|Batch, method="ML")
ml.fit
intervals(ml.fit)
3.5.4. About MLE. In the ANOVA designs we have been using, there are a × n normally
distributed observations and there are three parameters to estimate. So finding MLEs is rather
complicated and requires numerical approximation methods.
The purpose of this problem is to illustrate maximum likelihood estimation in two more elementary
settings where data are simpler and there is only one parameter to estimate.
(a) Binomial. Suppose a coin has P(Heads) = θ and that we toss it n = 10 times. If θ is known, then
the function f(x, θ) = C(n, x) θ^x (1 – θ)^(n–x) can be used to find P{X = x}, the probability of seeing x
Heads. For example, using R with theta = 0.3, the function dbinom(4, 10, theta)
returns P{X = 4} = 0.20012.
Now, suppose θ is unknown and we have observed X = 4. The MLE of θ is the value of θ for
which f(4, θ) is maximized. Viewed as a function of θ in this way, f is called the likelihood
function. The maximization problem can be solved by differential calculus (perhaps more
easily so, if we take the derivative of log f rather than of f itself). The solution is that the MLE
is X/n. Because E(X/n) = θ, X/n is also the MME. Here we use a grid search to find the MLE in
our particular case, looking at many narrowly spaced values of θ, and picking the one that
maximizes the likelihood function. Use the code to find the MLE.
theta = seq(0, 1, by=.0001);
like = dbinom(4, 10, theta);
theta[like==max(like)]
(b) Uniform. Suppose X ~ UNIF(0, θ) and we have three observations. Because the expected value
of the sample mean is θ/2, it follows that the MME of θ is double the sample mean. So if we
observe the values 3.821, 4.117, 0.283, the MME is 6.164667. Although methods of
calculus do not apply here, one can show that the MLE is the maximum of the three
observations; so here the MLE is 4.117. Here is R code for a grid search for the MLE in our
particular case. Show that it returns 4.117, and explain what the graph shows.
theta = seq(.0001, 10, by=.0001) #avoid 0 and use three place accuracy to get a 'hit'
like = dunif(3.821, 0, theta) * dunif(4.117, 0, theta) * dunif(0.283, 0, theta)
mle = theta[like==max(like)]; mle
plot(theta, like, type="l", col="blue", main="Likelihood Function with MLE");
abline(v=mle, lty="dotted", col="red", lwd=3)
Technical notes: For estimating θ, the MLE has properties that make it superior to the MME. In particular, as a
random variable, its average squared distance from θ is smaller. (This is called the MSE, for mean squared
error.) Here, E(MLE) = 3θ/4, so MLE is too small on average. Multiplying the MLE by 4/3, we get an unbiased
estimator that turns out to have smaller variance than does the MME. In our example, the three values were
simulated from UNIF(0, 5), so we know θ = 5. And "as expected," in this example MLE = 4.117 happens to be
closer to θ = 5 than does MME = 6.165.
Minitab Notes for Statistics 6305 by Bruce E. Trumbo,
Department of Statistics, California State University, East Bay, Hayward CA, 94542,
Copyright © 2004, 2010 by Bruce E. Trumbo. All rights reserved. These notes are intended primarily for use at CSU
East Bay. Please request permission for other uses.
Comments and corrections welcome. Email: [email protected].
First version 1/2004, last revised 1/2010.