
Chapter 15
The Analysis of Variance
A Problem
A study was done on the survival time of
patients with advanced cancer of the
stomach, bronchus, colon, ovary or breast
when treated with ascorbate¹. In this study,
the authors wanted to determine if the
survival times differ based on the affected
organ.
¹ Cameron, E. and Pauling, L. (1978). Supplemental ascorbate in the supportive treatment of cancer: re-evaluation of prolongation of survival times in terminal human cancer. Proceedings of the National Academy of Sciences, USA, 75, 4538–4542.
A Problem
A comparative dotplot of the survival times is
shown below.
[Comparative dotplot of Survival Time (in days, 0–3000) by Cancer Type: Breast, Bronchus, Colon, Ovary, Stomach.]
A Problem
The hypotheses used to answer the question
of interest are
H0: µstomach = µbronchus = µcolon = µovary = µbreast
Ha: At least two of the µ’s are different
The question is similar to ones encountered in Chapter 11, where we looked at tests for the
difference between the means of two populations or treatments. In this case we are
interested in comparing more than two means.
Single-factor Analysis of Variance
(ANOVA)
A single-factor analysis of variance
(ANOVA) problem involves a comparison of
k population or treatment means µ1, µ2, … , µk.
The objective is to test the hypotheses:
H0: µ1 = µ2 = µ3 = … = µk
Ha: At least two of the µ’s are different
Single-factor Analysis of Variance
(ANOVA)
The analysis is based on k independently
selected samples, one from each population
or for each treatment.
In the case of populations, a random
sample from each population is selected
independently of that from any other
population.
When comparing treatments, the
experimental units (subjects or objects)
that receive any particular treatment are
chosen at random from those available
for the experiment.
Single-factor Analysis of Variance
(ANOVA)
A comparison of treatments based on
independently selected experimental units is
often referred to as a completely randomized
design.
Single-factor Analysis of Variance
(ANOVA)
[Comparative dotplot of Yield (40–70) by Fertilizer (Type 1, Type 2, Type 3); group means are indicated by lines.]
Notice that in the above comparative dotplot, the
differences in the treatment means are large relative to
the variability within the samples.
Single-factor Analysis of Variance
(ANOVA)
[Comparative dotplot of Price (65–85) by Subject (Business, Economics, Psychology, Statistics); group means are indicated by lines.]
Notice that in the above comparative dotplot, the
differences in the treatment means are not easily
distinguished from the sample variability.
ANOVA techniques will allow us to determine whether those
differences are significant.
ANOVA Notation
k = number of populations or treatments being compared
Population or treatment            1      2      …    k
Population or treatment mean       µ1     µ2     …    µk
Population or treatment variance   σ1²    σ2²    …    σk²
Sample size                        n1     n2     …    nk
Sample mean                        x̄1     x̄2     …    x̄k
Sample variance                    s1²    s2²    …    sk²
ANOVA Notation
N = n1 + n2 + … + nk  (total number of observations in the data set)

T = grand total = sum of all N observations = n1x̄1 + n2x̄2 + ⋯ + nkx̄k

x̄ = grand mean = T / N
Assumptions for ANOVA
1. For each of the k populations or treatments, the response distribution is normal.
2. σ1 = σ2 = … = σk (the k normal distributions have identical standard deviations).
3. The observations in the sample from any particular one of the k populations or treatments are independent of one another.
4. When comparing population means, the k random samples are selected independently of one another. When comparing treatment means, treatments are assigned at random to subjects or objects.
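The examples later in this chapter check assumptions 1 and 2 informally with plots. As a supplement (not part of the original slides), here is a minimal sketch of formal checks with SciPy, using the filling-machine samples from the example later in this chapter:

```python
# A sketch of checking ANOVA assumptions 1 and 2 with SciPy:
# Shapiro-Wilk for normality of each sample, Levene's test for equal spread.
from scipy import stats

samples = {
    "Machine 1": [12.033, 12.025, 11.985, 12.054, 12.009, 12.050, 12.009, 12.033],
    "Machine 2": [12.031, 12.027, 11.985, 11.987, 11.998, 11.992, 11.985],
    "Machine 3": [12.034, 12.020, 12.021, 12.029, 12.038, 12.011, 12.058, 12.021, 12.001],
}

for name, s in samples.items():
    w, p = stats.shapiro(s)            # H0: sample drawn from a normal distribution
    print(f"{name}: Shapiro-Wilk W = {w:.3f}, P = {p:.3f}")

w, p = stats.levene(*samples.values())  # H0: equal variances across the groups
print(f"Levene's test: W = {w:.3f}, P = {p:.3f}")
```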
Definitions
A measure of disparity among the sample means is the treatment sum of squares, denoted by SSTr, given by

SSTr = n1(x̄1 − x̄)² + n2(x̄2 − x̄)² + ⋯ + nk(x̄k − x̄)²

A measure of variation within the k samples, called the error sum of squares and denoted by SSE, is given by

SSE = (n1 − 1)s1² + (n2 − 1)s2² + ⋯ + (nk − 1)sk²
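As a supplement, a minimal sketch in Python of these two formulas, assuming the data arrive as one list of observations per population or treatment:

```python
# Sum-of-squares sketch: SSTr measures disparity among the sample means,
# SSE measures variation within the k samples.
from statistics import mean, variance

def treatment_and_error_ss(samples):
    """Return (SSTr, SSE) for k independently selected samples."""
    n = [len(s) for s in samples]
    xbar = [mean(s) for s in samples]
    N = sum(n)
    grand = sum(ni * xi for ni, xi in zip(n, xbar)) / N          # grand mean = T / N
    sstr = sum(ni * (xi - grand) ** 2 for ni, xi in zip(n, xbar))
    sse = sum((ni - 1) * variance(s) for ni, s in zip(n, samples))  # variance(s) = s_i^2
    return sstr, sse
```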
Definitions
A mean square is a sum of squares divided by its df. In particular,

mean square for treatments = MSTr = SSTr / (k − 1)

mean square for error = MSE = SSE / (N − k)

The error df comes from adding the df's associated with each of the sample variances:

(n1 − 1) + (n2 − 1) + ⋯ + (nk − 1) = n1 + n2 + ⋯ + nk − 1 − 1 − ⋯ − 1 = N − k
Example
Three filling machines are used by a bottler to
fill 12 oz cans of soda. In an attempt to
determine if the three machines are filling the
cans to the same (mean) level, independent
samples of cans filled by each were selected
and the amounts of soda in the cans measured.
The samples are given below.
Machine 1:  12.033  12.025  11.985  12.054  12.009  12.050  12.009  12.033
Machine 2:  12.031  12.027  11.985  11.987  11.998  11.992  11.985
Machine 3:  12.034  12.020  12.021  12.029  12.038  12.011  12.058  12.021  12.001
Example
n1 = 8, x̄1 = 12.0248, s1 = 0.02301
n2 = 7, x̄2 = 12.0007, s2 = 0.01989
n3 = 9, x̄3 = 12.0259, s3 = 0.01650
x̄ = 12.018167

SSTr = n1(x̄1 − x̄)² + n2(x̄2 − x̄)² + n3(x̄3 − x̄)²
     = 8(0.0065833)² + 7(−0.0174524)² + 9(0.0077222)²
     = 0.00034672 + 0.00213210 + 0.00053669
     = 0.00301552
Example
n1 = 8, x̄1 = 12.0248, s1 = 0.02301
n2 = 7, x̄2 = 12.0007, s2 = 0.01989
n3 = 9, x̄3 = 12.0259, s3 = 0.01650
x̄ = 12.018167

SSE = (n1 − 1)s1² + (n2 − 1)s2² + (n3 − 1)s3²
    = 7(0.0230078)² + 6(0.0198890)² + 8(0.0164958)²
    = 0.0037055 + 0.0023734 + 0.0021769
    = 0.00825582
Example
n1 = 8, x̄1 = 12.0248, s1 = 0.02301
n2 = 7, x̄2 = 12.0007, s2 = 0.01989
n3 = 9, x̄3 = 12.0259, s3 = 0.01650
x̄ = 12.018167

mean square for treatments: MSTr = SSTr/(k − 1) = 0.00301552/(3 − 1) = 0.0015078

mean square for error: MSE = SSE/(N − k) = 0.00825582/(24 − 3) = 0.00039313
Comments
Both MSTr and MSE are quantities that are calculated from sample data.
As such, both MSTr and MSE are statistics and have sampling distributions.
More specifically, when H0 is true, µMSTr = µMSE.
However, when H0 is false, µMSTr > µMSE, and the greater the differences
among the µ's, the larger µMSTr will be relative to µMSE.
The Single-Factor ANOVA F Test
Null hypothesis: H0: µ1 = µ2 = µ3 = … = µk
Alternate hypothesis: At least two of the µ’s
are different
Test Statistic:
F = MSTr / MSE
The Single-Factor ANOVA F Test
When H0 is true and the ANOVA assumptions
are reasonable, F has an F distribution with
df1 = k - 1 and df2 = N - k.
Values of F more contradictory to H0 than what was
calculated are values even farther out in the upper tail,
so the P-value is the area captured in the upper tail of
the corresponding F curve.
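As a sketch (not from the slides), the F statistic and its upper-tail P-value can be computed from the mean squares with SciPy's F distribution; the numbers below are from the filling-machine example:

```python
# F test sketch: df1 = k - 1, df2 = N - k; P-value is the upper-tail area.
from scipy import stats

def anova_f_test(sstr, sse, k, N):
    mstr = sstr / (k - 1)
    mse = sse / (N - k)
    f = mstr / mse
    p = stats.f.sf(f, k - 1, N - k)   # sf = survival function = upper-tail area
    return f, p

f, p = anova_f_test(0.00301552, 0.00825582, k=3, N=24)
print(f"F = {f:.3f}, P-value = {p:.3f}")   # F = 3.835, P-value ≈ 0.038
```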
Example
Consider the earlier example involving the
three filling machines.
Machine 1:  12.033  12.025  11.985  12.054  12.009  12.050  12.009  12.033
Machine 2:  12.031  12.027  11.985  11.987  11.998  11.992  11.985
Machine 3:  12.034  12.020  12.021  12.029  12.038  12.011  12.058  12.021  12.001
Example
n1 = 8, x̄1 = 12.0248, s1 = 0.02301
n2 = 7, x̄2 = 12.0007, s2 = 0.01989
n3 = 9, x̄3 = 12.0259, s3 = 0.01650
x̄ = 12.018167
SSTr = 0.00301552    SSE = 0.00825582
MSTr = 0.0015078     MSE = 0.00039313
Example
1. Let µ1, µ2 and µ3 denote the true mean amount of soda in the cans filled by machines 1, 2 and 3, respectively.
2. H0: µ1 = µ2 = µ3
3. Ha: At least two among µ1, µ2 and µ3 are different
4. Significance level: α = 0.01
5. Test statistic: F = MSTr / MSE
Example
6. Looking at the comparative dotplot, it
seems reasonable to assume that the
distributions have the same σ’s. We shall
look at the normality assumption on the
next slide.*
[Comparative dotplot of Fill (11.99 to 12.06) by Machine: Machine 1, Machine 2, Machine 3.]
*When the sample sizes are large, we can make judgments about
both the equality of the standard deviations and the normality of the
underlying populations with a comparative boxplot.
Example
6. Looking at normal plots for the samples, it
certainly appears reasonable to assume that the
samples from Machines 1 and 2 are samples from
normal distributions. Unfortunately, the normal
plot for the sample from Machine 3 does not appear
to be a sample from a normal population. So as to
have a computational example, we shall continue and
finish the test, treating the result with a
"grain of salt."
[Normal probability plots for the three samples. Machine 1: mean 12.0248, StDev 0.0230078, N = 8, Anderson–Darling A² = 0.235, P = 0.692. Machine 2: mean 12.0007, StDev 0.0198890, N = 7, A² = 0.237, P = 0.702. Machine 3: mean 12.0259, StDev 0.0164958, N = 9, A² = 0.729, P = 0.031.]
Example
7. Computation:
n1 = 8, x̄1 = 12.0248, s1 = 0.02301
n2 = 7, x̄2 = 12.0007, s2 = 0.01989
n3 = 9, x̄3 = 12.0259, s3 = 0.01650
x̄ = 12.018167
SSTr = 0.00301552    SSE = 0.00825582
MSTr = 0.0015078     MSE = 0.00039313
N = n1 + n2 + n3 = 8 + 7 + 9 = 24, k = 3

F = MSTr/MSE = 0.0015078/0.00039313 = 3.835

df1 = treatment df = k − 1 = 3 − 1 = 2
df2 = error df = N − k = 24 − 3 = 21
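As a check (a sketch, using the machine samples as reconstructed in the data table above), scipy.stats.f_oneway reproduces the hand computation in one call:

```python
# One-way ANOVA check for the filling-machine example.
from scipy.stats import f_oneway

machine1 = [12.033, 12.025, 11.985, 12.054, 12.009, 12.050, 12.009, 12.033]
machine2 = [12.031, 12.027, 11.985, 11.987, 11.998, 11.992, 11.985]
machine3 = [12.034, 12.020, 12.021, 12.029, 12.038, 12.011, 12.058, 12.021, 12.001]

f, p = f_oneway(machine1, machine2, machine3)
print(f"F = {f:.3f}, P = {p:.3f}")   # F ≈ 3.835, P ≈ 0.038
```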
Example
8. P-value:
Recall F = MSTr/MSE = 0.0015078/0.00039313 = 3.835, with
df1 = treatment df = k − 1 = 3 − 1 = 2
df2 = error df = N − k = 24 − 3 = 21

From the F table with numerator df1 = 2 and denominator df2 = 21:

α                 0.100   0.050   0.025   0.010   0.001
F critical value   2.57    3.47    4.42    5.78    9.77

Since 3.47 < 3.835 < 4.42, we can see that 0.025 < P-value < 0.05.
(Minitab reports this value to be 0.038.)
Example
9. Conclusion:
Since P-value > α = 0.01, we fail to reject H0.
The data do not provide convincing evidence that the
mean fills of the three machines differ; the observed
differences in mean fill are not statistically significant.
Total Sum of Squares
The total sum of squares, denoted by SSTo, is given by

SSTo = Σ (x − x̄)²  (summed over all N observations)

with associated df = N − 1.

The relationship between the three sums of squares is

SSTo = SSTr + SSE

which is often called the fundamental identity for single-factor ANOVA.
Informally this relation is expressed as

Total variation = Explained variation + Unexplained variation
Single-factor ANOVA Table
The following is a fairly standard way of presenting the important calculations from a single-factor ANOVA. The output from most statistical packages will contain an additional column giving the P-value.

Source of Variation   df      Sum of Squares   Mean Square            F
Treatments            k − 1   SSTr             MSTr = SSTr/(k − 1)    F = MSTr/MSE
Error                 N − k   SSE              MSE = SSE/(N − k)
Total                 N − 1   SSTo
Single-factor ANOVA Table
The ANOVA table supplied by Minitab
One-way ANOVA: Fills versus Machine

Analysis of Variance for Fills
Source     DF        SS        MS     F      P
Machine     2  0.003016  0.001508  3.84  0.038
Error      21  0.008256  0.000393
Total      23  0.011271
Another Example
A food company produces four different brands of salsa. In order to determine if the four brands have the same mean sodium level, 10 bottles of each brand were randomly (and independently) obtained and the sodium content in milligrams (mg) per tablespoon serving was measured.
The sample data are given on the next slide.
Use the data to perform an appropriate hypothesis test at the 0.05 level of significance.
Another Example
Brand A
43.85 44.30 45.69 47.13 43.35
45.59 45.92 44.89 43.69 44.59
Brand B
42.50 45.63 44.98 43.74 44.95
42.99 44.95 45.93 45.54 44.70
Brand C
45.84 48.74 49.25 47.30 46.41
46.35 46.31 46.93 48.30 45.13
Brand D
43.81 44.77 43.52 44.63 44.84
46.30 46.68 47.55 44.24 45.46
Another Example
1. Let µ1, µ2, µ3 and µ4 denote the true mean sodium content per tablespoon for each of the brands, respectively.
2. H0: µ1 = µ2 = µ3 = µ4
3. Ha: At least two among µ1, µ2, µ3 and µ4 are different
4. Significance level: α = 0.05
5. Test statistic: F = MSTr / MSE
Another Example
6. Looking at the following comparative boxplot, it seems reasonable to assume that the distributions have equal σ's and that each sample comes from a normal distribution.

[Comparative boxplots of Brand A – Brand D (sodium content 42–49 mg per serving); means are indicated by solid circles.]
Example
7. Computation:
Brand      ni     x̄i       si
Brand A    10    44.900   1.180
Brand B    10    44.591   1.148
Brand C    10    47.056   1.331
Brand D    10    45.180   1.304

x̄ = 45.432

SSTr = n1(x̄1 − x̄)² + n2(x̄2 − x̄)² + n3(x̄3 − x̄)² + n4(x̄4 − x̄)²
     = 10(44.900 − 45.432)² + 10(44.591 − 45.432)²
       + 10(47.056 − 45.432)² + 10(45.180 − 45.432)²
     = 36.912

Treatment df = k − 1 = 4 − 1 = 3
Example
7. Computation (continued):
SSE = (n1 − 1)s1² + (n2 − 1)s2² + (n3 − 1)s3² + (n4 − 1)s4²
    = 9(1.180)² + 9(1.148)² + 9(1.331)² + 9(1.304)²
    = 55.627

Error df = N − k = 40 − 4 = 36

F = MSTr/MSE = (SSTr/3)/(SSE/36) = (36.912/3)/(55.627/36) = 12.304/1.5452 = 7.963
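As a check (a sketch, not part of the slides), scipy.stats.f_oneway applied to the brand data listed earlier reproduces this F value:

```python
# One-way ANOVA check for the salsa example.
from scipy.stats import f_oneway

brand_a = [43.85, 44.30, 45.69, 47.13, 43.35, 45.59, 45.92, 44.89, 43.69, 44.59]
brand_b = [42.50, 45.63, 44.98, 43.74, 44.95, 42.99, 44.95, 45.93, 45.54, 44.70]
brand_c = [45.84, 48.74, 49.25, 47.30, 46.41, 46.35, 46.31, 46.93, 48.30, 45.13]
brand_d = [43.81, 44.77, 43.52, 44.63, 44.84, 46.30, 46.68, 47.55, 44.24, 45.46]

f, p = f_oneway(brand_a, brand_b, brand_c, brand_d)
print(f"F = {f:.2f}, P = {p:.5f}")   # F ≈ 7.96, P < 0.001
```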
Example
8. P-value:
F = 7.96 with df numerator = 3 and df denominator = 36.
Using df = 30 (the closest table entry), we find P-value < 0.001.
Example
9. Conclusion:
Since P-value < 0.001 < α = 0.05, we reject H0. We can conclude that the mean sodium content is different for at least two of the brands.
We now need to learn how to interpret the results, and will spend some time developing techniques to describe the differences among the µ's.
Multiple Comparisons
A multiple comparison procedure is a
method for identifying differences among the
µ’s once the hypothesis of overall equality
(H0) has been rejected.
The technique we will present is based on computing confidence intervals for the difference of means for each pair.
Specifically, if k populations or treatments are studied, we would create k(k−1)/2 differences (i.e., with 3 treatments one would generate confidence intervals for µ1 − µ2, µ1 − µ3 and µ2 − µ3). Notice that it is only necessary to look at a confidence interval for µ1 − µ2 to see if µ1 and µ2 differ.
The Tukey-Kramer Multiple
Comparison Procedure
When there are k populations or treatments being compared, k(k−1)/2 confidence intervals must be computed. If we denote the relevant Studentized range critical value by q, the interval for µi − µj is

(x̄i − x̄j) ± q √[ (MSE/2)(1/ni + 1/nj) ]

Two means are judged to differ significantly if the corresponding interval does not include zero.
The Tukey-Kramer Multiple
Comparison Procedure
When all of the sample sizes are the same, we write n = n1 = n2 = … = nk, and the confidence interval for µi − µj simplifies to

(x̄i − x̄j) ± q √(MSE/n)
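Rather than interpolating q from a printed table, SciPy's Studentized range distribution (scipy >= 1.7) can supply it directly. A minimal sketch, applied to the salsa numbers used on the next slides; the q it produces (≈ 3.81) matches the table interpolation:

```python
# Equal-n Tukey-Kramer margin: q * sqrt(MSE / n).
from math import sqrt
from scipy.stats import studentized_range

def tukey_margin(mse, n, k, error_df, conf=0.95):
    q = studentized_range.ppf(conf, k, error_df)   # Studentized range critical value
    return q * sqrt(mse / n)

# Salsa example: k = 4 brands, error df = 36, MSE = 1.5452, n = 10 per brand.
print(round(tukey_margin(1.5452, n=10, k=4, error_df=36), 3))   # ≈ 1.497
```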
Example (continued)
Continuing with the example dealing with the sodium content of the four brands of salsa, we shall compute the 95% Tukey-Kramer confidence intervals for µA − µB, µA − µC, µA − µD, µB − µC, µB − µD and µC − µD.

MSE = 55.627/36 = 1.5452,  n = nA = nB = nC = nD = 10

q = 3.81  (interpolating from the table, i.e., 60% of the way from 3.85 to 3.79)

q √(MSE/n) = 3.81 √(1.5452/10) = 1.498
Example (continued)
Difference   95% Confidence Limits   95% Confidence Interval
µA − µB       0.309 ± 1.498          (−1.189, 1.807)
µA − µC      −2.156 ± 1.498          (−3.654, −0.658)
µA − µD      −0.280 ± 1.498          (−1.778, 1.218)
µB − µC      −2.465 ± 1.498          (−3.963, −0.967)
µB − µD      −0.589 ± 1.498          (−2.087, 0.909)
µC − µD       1.876 ± 1.498          (0.378, 3.374)

Notice that the confidence intervals for µA − µC, µB − µC and µC − µD do not contain 0, so we can infer that the mean sodium content for Brand C is different from those of Brands A, B and D.
Example (continued)
We also illustrate the differences with the following listing of the sample means in increasing order, with a line underneath any block of means that are indistinguishable.

Brand B    Brand A    Brand D    Brand C
44.591     44.900     45.180     47.056
─────────────────────────────

The line under Brands B, A and D indicates that they are indistinguishable from one another. The confidence intervals for µA − µC, µB − µC and µC − µD do not contain 0, so we can infer that the mean sodium content of Brand C differs from all the others.
Minitab Output for Example
One-way ANOVA: Sodium versus Brand

Analysis of Variance for Sodium
Source   DF     SS     MS     F      P
Brand     3  36.91  12.30  7.96  0.000
Error    36  55.63   1.55
Total    39  92.54

Level      N    Mean   StDev
Brand A   10  44.900   1.180
Brand B   10  44.591   1.148
Brand C   10  47.056   1.331
Brand D   10  45.180   1.304

Pooled StDev = 1.243

[Individual 95% CIs for the means, based on the pooled StDev, displayed graphically over the range 44.4 to 48.0.]
Minitab Output for Example
Tukey's pairwise comparisons

Family error rate = 0.0500
Individual error rate = 0.0107
Critical value = 3.81

Intervals for (column level mean) − (row level mean)

           Brand A   Brand B   Brand C
Brand B     -1.189
             1.807
Brand C     -3.654    -3.963
            -0.658    -0.967
Brand D     -1.778    -2.087     0.378
             1.218     0.909     3.374
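The same comparisons can be reproduced with statsmodels' pairwise_tukeyhsd; a sketch using the brand data from the earlier slides:

```python
# Tukey pairwise comparisons for the salsa data with statsmodels.
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

sodium = np.array(
    [43.85, 44.30, 45.69, 47.13, 43.35, 45.59, 45.92, 44.89, 43.69, 44.59]    # Brand A
    + [42.50, 45.63, 44.98, 43.74, 44.95, 42.99, 44.95, 45.93, 45.54, 44.70]  # Brand B
    + [45.84, 48.74, 49.25, 47.30, 46.41, 46.35, 46.31, 46.93, 48.30, 45.13]  # Brand C
    + [43.81, 44.77, 43.52, 44.63, 44.84, 46.30, 46.68, 47.55, 44.24, 45.46]  # Brand D
)
brand = np.repeat(["A", "B", "C", "D"], 10)

print(pairwise_tukeyhsd(sodium, brand, alpha=0.05).summary())
```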
Simultaneous Confidence Level
The Tukey-Kramer intervals are created in a manner that controls the simultaneous confidence level.
For example, at the 95% level, if the procedure is used repeatedly on many different data sets, in the long run only about 5% of the time would at least one of the intervals fail to include the value it is estimating.
We then say the family error rate is 5%: the maximum probability that one or more of the confidence intervals for the differences of means does not contain the true difference.
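A small simulation sketch (not from the slides) illustrates the family error rate: with all true means equal and equal sample sizes, at least one Tukey interval should miss its target only about 5% of the time.

```python
# Simulating the family error rate of Tukey intervals under H0.
from itertools import combinations
import numpy as np
from scipy.stats import studentized_range

rng = np.random.default_rng(0)
k, n, reps = 4, 10, 2000
q = studentized_range.ppf(0.95, k, k * (n - 1))   # error df = N - k

misses = 0
for _ in range(reps):
    data = rng.normal(0.0, 1.0, size=(k, n))      # H0 true: all k means equal
    means = data.mean(axis=1)
    mse = data.var(axis=1, ddof=1).mean()         # pooled variance (equal n)
    margin = q * np.sqrt(mse / n)
    # An interval misses 0 exactly when |difference of means| exceeds the margin.
    if any(abs(means[i] - means[j]) > margin for i, j in combinations(range(k), 2)):
        misses += 1

print(misses / reps)   # close to 0.05 in the long run
```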
Randomized Block Experiment
Suppose that experimental units (individuals
or objects to which the treatments are
applied) are first separated into groups
consisting of k units in such a way that the
units within each group are as similar as
possible. Within any particular group, the
treatments are then randomly allocated so
that each unit in a group receives a different
treatment. The groups are often called
blocks and the experimental design is
referred to as a randomized block design.
Example
When choosing a variety of melon to plant, one thing that a farmer might be interested in is the length of time (in days) for the variety to bear harvestable fruit. Since the growing conditions (soil, temperature, humidity) also affect this, a farmer might experiment with three hybrid melons (denoted hybrid A, hybrid B and hybrid C) by taking each of the four fields that he wants to use for growing melons, subdividing each field into 3 subplots (1, 2 and 3), and then planting each hybrid in one subplot of each field. The blocks are the fields and the treatments are the hybrids that are planted. The question of interest would be "Are the mean times to bear harvestable fruit the same for all three hybrids?"
Assumptions and Hypotheses
The single observation made on any particular treatment in a given block is assumed to be selected from a normal distribution. The variance of this distribution is σ², the same for each block-treatment combination. However, the mean value may depend separately both on the treatment applied and on the block. The hypotheses of interest are as follows:

H0: The mean value does not depend on which treatment is applied
Ha: The mean value does depend on which treatment is applied
Summary of the Randomized
Block F Test
Notation:
Let
k = number of treatments
l = number of blocks
x̄i = average of all observations for treatment i
b̄i = average of all observations in block i
x̄ = average of all kl observations in the experiment (the grand mean)
Summary of the Randomized
Block F Test
Sums of squares and associated df's are as follows.

Sum of Squares   Symbol   df               Formula
Treatments       SSTr     k − 1            SSTr = l[(x̄1 − x̄)² + (x̄2 − x̄)² + … + (x̄k − x̄)²]
Blocks           SSBl     l − 1            SSBl = k[(b̄1 − x̄)² + (b̄2 − x̄)² + … + (b̄l − x̄)²]
Error            SSE      (k − 1)(l − 1)   SSE = SSTo − SSTr − SSBl
Total            SSTo     kl − 1           SSTo = Σ over all x of (x − x̄)²
Summary of the Randomized
Block F Test
SSE is obtained by subtraction through the use of the fundamental identity

SSTo = SSTr + SSBl + SSE

Test statistic: F = MSTr / MSE, where

MSTr = SSTr/(k − 1)  and  MSE = SSE/[(k − 1)(l − 1)]

The test is based on df1 = k − 1 and df2 = (k − 1)(l − 1).
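A minimal sketch of these randomized block computations in Python, assuming the data are arranged so that data[i][j] is the observation for treatment i in block j:

```python
# Randomized block F statistic from the sums of squares above.
import numpy as np

def randomized_block_f(data):
    data = np.asarray(data, dtype=float)                   # shape (k, l)
    k, l = data.shape
    grand = data.mean()
    sstr = l * ((data.mean(axis=1) - grand) ** 2).sum()    # from treatment means
    ssbl = k * ((data.mean(axis=0) - grand) ** 2).sum()    # from block means
    ssto = ((data - grand) ** 2).sum()
    sse = ssto - sstr - ssbl                               # fundamental identity
    mstr = sstr / (k - 1)
    mse = sse / ((k - 1) * (l - 1))
    return mstr / mse     # compare with F(df1 = k - 1, df2 = (k - 1)(l - 1))
```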
The ANOVA Table for a
Randomized Block Experiment
Source of Variation   df               Sum of Squares   Mean Square                     F
Treatments            k − 1            SSTr             MSTr = SSTr/(k − 1)             F = MSTr/MSE
Blocks                l − 1            SSBl             MSBl = SSBl/(l − 1)
Error                 (k − 1)(l − 1)   SSE              MSE = SSE/[(k − 1)(l − 1)]
Total                 kl − 1           SSTo
Multiple Comparisons
As before in single-factor ANOVA, once H0 has been rejected, declare that treatments i and j differ significantly if the interval

(x̄i − x̄j) ± q √(MSE/l)

does not include zero, where q is based on a comparison of k treatments and error df = (k − 1)(l − 1).
Example (Food Prices)
In an attempt to measure which of 3 grocery chains has the best overall prices, it was felt that there would be a great deal of variability in prices if items were randomly selected from each of the chains, so a randomized block experiment was devised to answer the question.

A list of standard items was developed (typically a fairly large, representative list would be used, but due to a problem with insufficient planning, only 7 items were left "in the shopping cart"), and the price was recorded for each of these items in each of the stores.
Example (Food Prices)
Because the blocking variable (the item) wasn't set up as a well-designed, representative sample of the items in a typical shopping basket, the results should be taken with a "grain of salt." For the purposes of showing the calculations, we shall treat this as the contents of a "representative" shopping basket.
The data appear on the next slide along with the hypotheses.
Example (Food Prices)
H0: µA = µB = µC
Ha: At least two among µA, µB and µC are different

Product                                     Store A   Store B   Store C
Tide (100 oz liquid detergent)                6.39      5.59      5.24
1 lb Land O'Lakes Butter                      3.99      3.49      2.98
1 dozen Large Grade AA eggs                   1.49      1.49      0.72
Tropicana (no pulp, non-conc) OJ (64 oz)      3.99      2.99      2.50
2 Liter Diet Coke                             1.39      1.50      1.04
1 loaf Wonderbread                            2.09      2.09      1.43
18 oz jar Skippy Peanut Butter                2.49      2.49      1.77
Calculations
Treatments: k = 3    Blocks: l = 7

x̄ = 57.15/21 = 2.7214

SSTr = l[(x̄1 − x̄)² + (x̄2 − x̄)² + (x̄3 − x̄)²]
     = 7[(3.1186 − 2.7214)² + (2.8057 − 2.7214)² + (2.2400 − 2.7214)²]
     = 7[0.15772 + 0.00710 + 0.23177] = 7[0.39660] = 2.7762

MSTr = SSTr/(k − 1) = 2.7762/(3 − 1) = 1.3881
Calculations
SSBl = k[(b̄1 − x̄)² + (b̄2 − x̄)² + … + (b̄7 − x̄)²]
     = 3[(5.7400 − 2.7214)² + (3.4867 − 2.7214)² + (1.2333 − 2.7214)²
        + (3.1600 − 2.7214)² + (1.3100 − 2.7214)² + (1.8700 − 2.7214)²
        + (2.2500 − 2.7214)²]
     = 3[9.1118 + 0.58559 + 2.21443 + 0.19234 + 1.9921 + 0.72493 + 0.22224]
     = 3[15.04344] = 45.1303
Calculations
SSE = SSTo − SSTr − SSBl
    = 48.6356 − 2.7762 − 45.1303
    = 0.72893

MSE = SSE/[(k − 1)(l − 1)] = 0.72893/[(3 − 1)(7 − 1)] = 0.06074

F = MSTr/MSE = 1.3881/0.060744 = 22.85

df num = k − 1 = 3 − 1 = 2
df den = (k − 1)(l − 1) = (3 − 1)(7 − 1) = 12
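Applying the randomized_block_f sketch from the summary slides above to the grocery-price table (treatments = stores, blocks = items) reproduces this F value:

```python
# Check of the grocery-price calculation; rows are stores, columns are items.
# Assumes randomized_block_f from the earlier sketch is in scope.
prices = [
    [6.39, 3.99, 1.49, 3.99, 1.39, 2.09, 2.49],   # Store A
    [5.59, 3.49, 1.49, 2.99, 1.50, 2.09, 2.49],   # Store B
    [5.24, 2.98, 0.72, 2.50, 1.04, 1.43, 1.77],   # Store C
]
print(round(randomized_block_f(prices), 2))        # ≈ 22.85
```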
Conclusions
F = MSTr/MSE = 1.3881/0.060744 = 22.85, with df num = k − 1 = 2 and df den = (k − 1)(l − 1) = 12.

We can reject the hypothesis that the mean prices are the same in all three stores.
The actual differences can be estimated with confidence intervals.
Conclusions
We find q = 4.34 for the 95% Tukey confidence intervals. The confidence intervals are

Difference   95% Confidence Limits   95% Confidence Interval
µA − µB       0.313 ± 0.404          (−0.091, 0.717)
µA − µC       0.879 ± 0.404          (0.474, 1.283)
µB − µC       0.566 ± 0.404          (0.161, 0.970)

Sample means in increasing order: Store C ($2.24), Store B ($2.81), Store A ($3.20).

Since the intervals for µA − µC and µB − µC lie entirely above 0, we conclude that Store C is cheaper on average than Store A and Store B.
Two-Factor ANOVA
Notation:
k = number of levels of factor A
l = number of levels of factor B
kl = number of treatments (each one a
combination of a factor A level and
a factor B level)
m = number of observations on each
treatment
Two-Factor ANOVA Example
A grocery store has two stocking supervisors, Fred and Wilma. The store is open 24 hours a day and would like to schedule these two individuals in a manner that is most effective. To help determine how to schedule them, a sample of their work was obtained by scheduling each of them five times on each of the three shifts and then tracking the number of cases of groceries that were emptied and stacked during the shift. The data follow on the next slide.
Two-Factor ANOVA Example
[Data table: number of cases emptied and stacked, five observations for each supervisor (Fred, Wilma) on each shift (Day, Swing, Night); the cell means are summarized on a later slide.]
Interactions
There is said to be an interaction between the factors if the change in true average response when the level of one factor changes depends on the level of the other factor.
One can look at the possible interaction between two factors by drawing an interaction plot, which is a graph of the means of the response for one factor plotted against the values of the other factor.
Two-Factor ANOVA Example
A table of the sample means for the 30
observations.
                              Shift
Supervisor        Day      Swing     Night    Mean Output for Each Supervisor
Fred             529.40   495.60    500.00    508.33
Wilma            507.80   527.00    585.60    540.13
Mean Output
for Each Shift   518.60   511.30    542.80    524.23
Two-Factor ANOVA Example
Typically, only one of these interaction plots will be constructed. As you can see from these diagrams, there is a suggestion that Fred does better during the day and Wilma is better at night or during the swing shift. The question to ask is "Are these differences significant?" Specifically, is there an interaction between the supervisor and the shift?

[Interaction plots of the mean number of cases: one plotting the supervisor means for each shift, the other plotting the shift means for each supervisor (means range roughly 500–590).]
Interactions
If the graphs of true average responses are connected line segments that are parallel, there is no interaction between the factors. In this case, the change in true average response when the level of one factor is changed is the same for each level of the other factor.
Special cases of no interaction are as follows:
1. The true average response is the same for each level of factor A (no factor A main effects).
2. The true average response is the same for each level of factor B (no factor B main effects).
Basic Assumptions for Two-Factor ANOVA
The observations on any particular treatment
are independently selected from a normal
distribution with variance σ2 (the same
variance for each treatment), and samples
from different treatments are independent of
one another.
Two-Factor ANOVA Table
The following is a fairly standard way of presenting the important calculations for a two-factor ANOVA.

Source of Variation   df               Sum of Squares   Mean Square                       F
Factor A              k − 1            SSA              MSA = SSA/(k − 1)                 F = MSA/MSE
Factor B              l − 1            SSB              MSB = SSB/(l − 1)                 F = MSB/MSE
AB interaction        (k − 1)(l − 1)   SSAB             MSAB = SSAB/[(k − 1)(l − 1)]      F = MSAB/MSE
Error                 kl(m − 1)        SSE              MSE = SSE/[kl(m − 1)]
Total                 klm − 1          SSTo

The fundamental identity is SSTo = SSA + SSB + SSAB + SSE.
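A generic sketch (column names y, A and B are hypothetical, as are the illustrative values) of fitting this table with statsmodels; the formula C(A) * C(B) expands to the factor A, factor B, and interaction terms:

```python
# Two-factor ANOVA with interaction via statsmodels' formula interface.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

def two_factor_table(df):
    """df must have a response column y and factor columns A and B."""
    model = smf.ols("y ~ C(A) * C(B)", data=df).fit()
    return anova_lm(model)   # rows: C(A), C(B), C(A):C(B), Residual

# Tiny illustrative dataset: k = 2, l = 3, m = 2 observations per cell.
df = pd.DataFrame({
    "y": [520, 531, 498, 507, 560, 548, 507, 515, 572, 580, 601, 590],
    "A": ["Fred"] * 6 + ["Wilma"] * 6,
    "B": ["Day", "Day", "Swing", "Swing", "Night", "Night"] * 2,
})
print(two_factor_table(df))
```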
Two-Factor ANOVA Example
Source         df   Sum of Squares   Mean Square      F
Shift           2     5437             2719          1.82
Supervisor      1     7584             7584          5.07
Interaction     2    14365             7183          4.80
Error          24    35878             1495
Total          29    63265
Two-Factor ANOVA Example
Minitab output for the two-factor ANOVA

1. Test of H0: no interaction between supervisor and shift.
   Since P = 0.018, there is evidence of an interaction.

Two-way ANOVA: Cases versus Shift, Supervisor

Analysis of Variance for Cases
Source         DF      SS     MS     F      P
Shift           2    5437   2719   1.82  0.184
Supervis        1    7584   7584   5.07  0.034
Interaction     2   14365   7183   4.80  0.018
Error          24   35878   1495
Total          29   63265