
CHAPTER 3: INTRODUCTORY LINEAR REGRESSION
Chapter Outline
3.1 Simple Linear Regression
    • Scatter Plot/Diagram
    • Simple Linear Regression Model
3.2 Curve Fitting
3.3 Inferences About Estimated Parameters
3.4 Adequacy of the Model: Coefficient of Determination
3.5 Pearson Product Moment Correlation Coefficient
3.6 Test for Linearity of Regression
3.7 ANOVA Approach for Testing Linearity of Regression
INTRODUCTION TO LINEAR REGRESSION
• Linear regression is a statistical procedure for establishing the relationship between two or more variables.
• This is done by fitting a linear equation to the observed data.
• The researcher uses the regression line to see the trend and to predict values for the data.
• There are two types of regression:
    • Simple (two variables)
    • Multiple (more than two variables)
INTRODUCTION TO LINEAR REGRESSION
• Many problems in science and engineering involve exploring the relationship between two or more variables.
• Two statistical techniques are used:
    (1) Regression analysis
    (2) Computing the correlation coefficient (r)
• Linear regression is the study of the linear relationship between two or more variables. This is done by fitting a linear equation to the observed data.
• The linear equation is then used to predict values for the data.

• In simple linear regression only two variables are involved:
    i.  X is the independent variable.
    ii. Y is the dependent variable.
• The correlation coefficient (r) tells us how strongly two variables are related.
Example 3.1:
1) A nutritionist studying weight-loss programs might want to find out whether reducing carbohydrate intake can help a person lose weight.
    a) X is the carbohydrate intake (independent variable).
    b) Y is the weight (dependent variable).
2) An entrepreneur might want to know whether increasing the cost of packaging his new product will have an effect on the sales volume.
    a) X is the cost (independent variable).
    b) Y is the sales volume (dependent variable).
3.1 SIMPLE LINEAR REGRESSION MODEL
• A linear regression model is a model that expresses the linear relationship between two variables.
• The simple linear regression model is written as:

    Y = β0 + β1X + ε

  where:
    β0 = intercept of the line with the Y-axis
    β1 = slope of the line
    ε  = random error
• The random error is the difference between a data point and the deterministic value.
3.2 CURVE FITTING (SCATTER PLOT)
• Scatter plots show the relationship between two variables by displaying data points on a two-dimensional graph.
• The variable that might be considered an explanatory variable is plotted on the x-axis, and the response variable is plotted on the y-axis.
• Scatter plots are especially useful when there are a large number of data points.
• They provide the following information about the relationship between two variables:
    (1) Strength
    (2) Shape - linear, curved, etc.
    (3) Direction - positive or negative
    (4) Presence of outliers
EXAMPLES: PLOTTING A LINEAR REGRESSION MODEL
A linear regression line can be developed by a freehand plot of the data.
Example 3.2:
The table below contains values for two variables, X and Y. Plot the given data and make a freehand estimated regression line.

X:  -3  -2  -1   0   1   2   3
Y:   1   2   3   5   8  11  12
3.3 INFERENCES ABOUT ESTIMATED PARAMETERS
LEAST SQUARES METHOD
• The least squares method is the method most commonly used for estimating the regression coefficients β0 and β1.
• The straight line fitted to the data set is the line:

    Ŷ = β̂0 + β̂1X

  where Ŷ is the estimated value of Y for a given value of X.

i) y-intercept for the estimated regression equation, β̂0:

    β̂0 = ȳ − β̂1x̄

   where x̄ and ȳ are the means of x and y respectively.

ii) Slope for the estimated regression equation, β̂1:

    β̂1 = Sxy / Sxx

   where (all sums running from i = 1 to n):

    Sxy = Σ xi yi − (Σ xi)(Σ yi)/n
    Sxx = Σ xi² − (Σ xi)²/n
    Syy = Σ yi² − (Σ yi)²/n
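The least-squares formulas above can be sketched in a few lines of code (an illustrative sketch, not part of the original slides; the function name is ours):

```python
# Illustrative sketch of the least squares formulas (names are ours, not the slides').
def least_squares(x, y):
    """Return (b0, b1) for the fitted line y-hat = b0 + b1*x."""
    n = len(x)
    sum_x, sum_y = sum(x), sum(y)
    sxy = sum(xi * yi for xi, yi in zip(x, y)) - sum_x * sum_y / n
    sxx = sum(xi * xi for xi in x) - sum_x * sum_x / n
    b1 = sxy / sxx                    # slope: Sxy / Sxx
    b0 = sum_y / n - b1 * sum_x / n   # intercept: y-bar - b1 * x-bar
    return b0, b1
```

For example, least_squares([0, 1, 2], [1, 3, 5]) returns (1.0, 2.0), the exact line y = 1 + 2x.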
EXAMPLE 3.3: STUDENTS' SCORES IN HISTORY
The data below represent scores obtained by ten primary school students before and after they were taken on a tour to the museum (which is supposed to increase their interest in history).

Before, x:  65  63  76  46  68  72  68  57  36  96
After,  y:  68  66  86  48  65  66  71  57  42  87

a) Develop a linear regression model with "before" as the independent variable and "after" as the dependent variable.
b) Predict the score a student would obtain "after" if he scored 60 marks "before".
Solution
n = 10,  Σx = 647,  Σy = 656,  Σxy = 44435,  Σx² = 44279,  Σy² = 44884
x̄ = 64.7,  ȳ = 65.6

Sxy = 44435 − (647)(656)/10 = 1991.8
Sxx = 44279 − (647)²/10 = 2418.1
Syy = 44884 − (656)²/10 = 1850.4

a) β̂1 = Sxy/Sxx = 1991.8/2418.1 = 0.8237
   β̂0 = ȳ − β̂1x̄ = 65.6 − (0.8237)(64.7) = 12.3063

   Ŷ = 12.3063 + 0.8237X

b) For X = 60:
   Ŷ = 12.3063 + 0.8237(60) = 61.7283
EXERCISE 3.1:

INCOME, x:            55  83  38  61  33  49  67
FOOD EXPENDITURE, y:  14  24  13  16   9  15  17

a) Fit a linear regression model with income as the independent variable and food expenditure as the dependent variable.
b) Predict the food expenditure if income is 50.
EXERCISE 3.2:
3.4 ADEQUACY OF THE MODEL: COEFFICIENT OF DETERMINATION (R²)
• The coefficient of determination is a measure of the variation in the dependent variable (Y) that is explained by the regression line and the independent variable (X).
• The symbol for the coefficient of determination is r² or R².
• If r = 0.90, then r² = 0.81. This means that 81% of the variation in the dependent variable (Y) is accounted for by the variation in the independent variable (X).
• The rest of the variation, 0.19 or 19%, is unexplained and is called the coefficient of non-determination.
• The formula for the coefficient of non-determination is 1.00 − r².

• Relationship among SST, SSR and SSE:

    SST = SSR + SSE
    Σ(yi − ȳ)² = Σ(ŷi − ȳ)² + Σ(yi − ŷi)²

  where:
    SST = total sum of squares
    SSR = sum of squares due to regression
    SSE = sum of squares due to error

• The coefficient of determination is:

    r² = SSR / SST = (Sxy)² / (Sxx Syy)
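The decomposition SST = SSR + SSE can be verified numerically on the Example 3.3 data (an illustrative sketch, not from the slides):

```python
# Numerical check of SST = SSR + SSE using the Example 3.3 data.
x = [65, 63, 76, 46, 68, 72, 68, 57, 36, 96]
y = [68, 66, 86, 48, 65, 66, 71, 57, 42, 87]
n = len(x)
b1 = (sum(a * b for a, b in zip(x, y)) - sum(x) * sum(y) / n) / \
     (sum(a * a for a in x) - sum(x) ** 2 / n)
b0 = sum(y) / n - b1 * sum(x) / n
y_bar = sum(y) / n
y_hat = [b0 + b1 * xi for xi in x]
sst = sum((yi - y_bar) ** 2 for yi in y)               # total
ssr = sum((yh - y_bar) ** 2 for yh in y_hat)           # regression
sse = sum((yi - yh) ** 2 for yi, yh in zip(y, y_hat))  # error
print(round(sst, 1), round(ssr + sse, 1))   # both ≈ 1850.4
print(round(ssr / sst, 4))                  # r^2 ≈ 0.8866
```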
3.5 PEARSON PRODUCT MOMENT CORRELATION COEFFICIENT (r)
• Correlation measures the strength of a linear relationship between two variables.
• Also known as Pearson's product moment coefficient of correlation.
• The symbol for the sample coefficient of correlation is r.
• Formula:

    r = Sxy / √(Sxx · Syy)   or   r = (sign of b1)√r²

Properties of r:
• −1 ≤ r ≤ 1
• Values of r close to 1 imply a strong positive linear relationship between x and y.
• Values of r close to -1 imply a strong negative linear relationship between x and y.
• Values of r close to 0 imply little or no linear relationship between x and y.
ASSUMPTIONS ABOUT THE ERROR TERM ε
1. The error ε is a random variable with a mean of zero.
2. The variance of ε, denoted by σ², is the same for all values of the independent variable.
3. The values of ε are independent.
4. The error ε is a normally distributed random variable.
EXAMPLE 3.4: REFER TO PREVIOUS EXAMPLE 3.3, STUDENTS' SCORES IN HISTORY
Calculate the value of r and interpret its meaning.
SOLUTION:

    r = Sxy / √(Sxx · Syy) = 1991.8 / √((2418.1)(1850.4)) = 0.9416

Thus, there is a strong positive linear relationship between the scores obtained before (x) and after (y).
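The same value follows directly from the summary statistics of Example 3.3 (an illustrative check, not part of the slides):

```python
import math

# r from the Example 3.3 summary statistics.
sxy, sxx, syy = 1991.8, 2418.1, 1850.4
r = sxy / math.sqrt(sxx * syy)
print(round(r, 4))   # ≈ 0.9416: a strong positive linear relationship
```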
EXERCISE 3.3:
Refer to the previous Exercise 3.1 and Exercise 3.2, calculate the correlation coefficient, and interpret the results.
3.6 TEST FOR LINEARITY OF REGRESSION
• To test the existence of a linear relationship between two variables x and y, we proceed with testing a hypothesis.
• Two tests are commonly used:
    (i)  t-Test
    (ii) F-Test
(i) t-Test
1. Determine the hypotheses.
   H0: β1 = 0 (no linear relationship)
   H1: β1 ≠ 0 (a linear relationship exists)
2. Determine the critical value at the level of significance, t(α/2, n−2), or use the p-value.
3. Compute the test statistic.

    t = β̂1 / √Var(β̂1),   where   Var(β̂1) = [(Syy − β̂1 Sxy) / (n − 2)] · (1 / Sxx)

4. Determine the rejection rule. Reject H0 if:
   t > t(α/2, n−2) or t < −t(α/2, n−2), or p-value < α.
5. Conclusion: there is a significant relationship between variables X and Y.
EXAMPLE 3.5: REFER TO PREVIOUS EXAMPLE 3.3, STUDENTS' SCORES IN HISTORY
Test to determine whether the scores before and after the trip are related. Use α = 0.05.
SOLUTION:
1) H0: β1 = 0 (no linear relationship)
   H1: β1 ≠ 0 (a linear relationship exists)
2) α = 0.05
   t(0.025, 8) = 2.306
3) Test statistic:

    Var(β̂1) = [(Syy − β̂1 Sxy) / (n − 2)] · (1 / Sxx)
             = [(1850.4 − (0.8237)(1991.8)) / 8] · (1 / 2418.1)
             = 0.0108

    t = β̂1 / √Var(β̂1) = 0.8237 / √0.0108 = 7.926

4) Rejection rule:
   t = 7.926 > t(0.025, 8) = 2.306, so reject H0.
5) Conclusion:
   We reject H0. The score before (x) has a linear relationship with the score after (y) the trip.
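The test statistic above can be reproduced from the summary statistics (an illustrative check; with unrounded intermediate values the statistic comes out near 7.91 rather than 7.926, because the slides round Var(β̂1) to 0.0108 before taking the square root, and the conclusion is the same either way):

```python
import math

# t statistic of Example 3.5 from the Example 3.3 summary statistics.
sxy, sxx, syy, n = 1991.8, 2418.1, 1850.4, 10
b1 = sxy / sxx
var_b1 = ((syy - b1 * sxy) / (n - 2)) / sxx   # [(Syy - b1*Sxy)/(n-2)] * (1/Sxx)
t = b1 / math.sqrt(var_b1)
print(round(var_b1, 4))   # ≈ 0.0108
print(round(t, 2))        # ≈ 7.91; |t| > t(0.025, 8) = 2.306, so reject H0
```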
EXERCISE 3.4:
EXERCISE 3.5:
(ii) F-Test
1. Determine the hypotheses.
   H0: β1 = 0 (no linear relationship)
   H1: β1 ≠ 0 (a linear relationship exists)
2. Specify the level of significance: F(α, 1, n−2), or use the p-value.
3. Compute the test statistic:
   F = MSR / MSE
4. Determine the rejection rule. Reject H0 if:
   F > F(α, 1, n−2), or p-value < α.
5. Conclusion: there is a significant relationship between variables X and Y.
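For simple linear regression the two tests agree: F = t². A quick numerical check on the Example 3.3 summary statistics (illustrative; SSR = β̂1·Sxy and SSE = Syy − SSR are standard shortcut identities, not stated explicitly in the slides):

```python
import math

# F = MSR/MSE computed from the Example 3.3 summary statistics;
# for simple regression it equals t^2 from the t-test.
sxy, sxx, syy, n = 1991.8, 2418.1, 1850.4, 10
b1 = sxy / sxx
ssr = b1 * sxy          # sum of squares due to regression (= Sxy^2 / Sxx)
sse = syy - ssr         # sum of squares due to error
f = (ssr / 1) / (sse / (n - 2))   # MSR / MSE with 1 and n-2 df
t = b1 / math.sqrt(((syy - b1 * sxy) / (n - 2)) / sxx)
print(round(f, 2), round(t ** 2, 2))   # both ≈ 62.58
```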
3.7 ANOVA APPROACH FOR TESTING LINEARITY OF REGRESSION
• The analysis of variance (ANOVA) method is an approach to test the significance of the regression.
• We can arrange the test procedure using this approach in an ANOVA table.
EXAMPLE 3.6:
The manufacturer of Cardio Glide exercise equipment wants to study the relationship between the number of months since the Cardio Glide was purchased and the length of time (in hours) the equipment was used last week.
At α = 0.01, test whether there is a linear relationship between the variables.
Solution:
1) Hypotheses:
   H0: β1 = 0
   H1: β1 ≠ 0
2) F-distribution table: F(0.01, 1, 8) = 11.26
3) Test statistic:
   F = MSR / MSE = 17.303
   or, using the p-value approach: significance value = 0.003
4) Rejection region:
   Since F statistic > F table (17.303 > 11.26), we reject H0; or, since p-value < α (0.003 < 0.01), we reject H0.
5) Conclusion: there is a linear relationship between the variables (months X and hours Y).
EXERCISE 3.6:
An agricultural scientist planted alfalfa on several plots of land, identical except for the soil pH. Following are the dry matter yields (in pounds per acre) for each plot.

pH:     4.6   4.8   5.2   5.4   5.6   5.8   6.0
Yield:  1056  1833  1629  1852  1783  2647  2131

a) Construct a scatter plot of yield (y) versus pH (x). Verify that a linear model is appropriate.
b) Compute the estimated regression line for predicting yield from pH.
c) If the pH is increased by 0.1, by how much would you predict the yield to increase or decrease?
d) For what pH would you predict a yield of 1500 pounds per acre?
e) Calculate the correlation coefficient, and interpret the results.

Answer: b) ŷ = −2090.9 + 737.1x
        c) an increase of 73.71
        d) pH = 4.872
EXERCISE 3.7
A regression analysis relating the current market value in dollars to the size in square feet of homes in Greeny County, Tennessee, follows. A portion of the regression software output is shown below:

Predictor   Coef        SE Coef      T      P
Constant    12.726      8.115        1.57   0.134
Size        0.00011386  0.00002896   3.93   0.001

Analysis of Variance
Source       DF   SS      MS      F       P
Regression    1   10354   10354   15.46   0.001
Error        18   12054     670
Total        19   22408

a) Determine how many homes are in the sample.
b) Determine the regression equation.
c) Can you conclude that there is a linear relationship between the variables at α = 0.05?