
Econometrics I
QM 6203
Mohd‐Pisal Zainal, Ph.D.
INCEIF
Week 2
January 22, 2009
Basic Econometrics
Chapter 3: TWO‐VARIABLE REGRESSION MODEL: The Problem of Estimation
The Organization of the Presentation
• The Concept of the Population Regression Function (PRF)
• The Concept of Linearity and Stochastic Specification in PRF
• The Importance of the Error Term
• The Sample Regression Function (SRF)
• Summary & Conclusions
The method of ordinary least squares (OLS)
• Least-squares criterion: minimize ∑ûi² = ∑(Yi − Ŷi)² = ∑(Yi − β̂1 − β̂2Xi)²  (3.1.2)
• Solving the normal equations for β̂1 and β̂2 yields the least-squares estimators [see (3.1.6) and (3.1.7)]
• The numerical and statistical properties of OLS are as follows:
The method of ordinary least squares (OLS)
• The OLS estimators are expressed solely in terms of observable quantities; they are point estimators
• The sample regression line passes through the sample means of X and Y
• The mean value of the fitted Ŷi equals the mean value of the actual Yi
• The mean value of the residuals ûi is zero: E(ûi) = 0
• The residuals ûi are uncorrelated with the fitted Ŷi and with Xi, that is, ∑ûiŶi = 0 and ∑ûiXi = 0
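These numerical properties can be checked directly. Below is a minimal Python sketch, assuming simulated data with illustrative true values β1 = 2 and β2 = 0.5 (not from the text), that computes the estimators of (3.1.6)-(3.1.7) and verifies each property:

```python
# A minimal sketch, assuming simulated data with illustrative parameter
# values; the estimator formulas follow (3.1.6)-(3.1.7).
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=50)
Y = 2.0 + 0.5 * X + rng.normal(0, 1, size=50)  # true PRF plus disturbance

x = X - X.mean()                               # deviations from the sample mean
y = Y - Y.mean()
beta2_hat = (x * y).sum() / (x ** 2).sum()     # slope, (3.1.6)
beta1_hat = Y.mean() - beta2_hat * X.mean()    # intercept, (3.1.7)

Y_hat = beta1_hat + beta2_hat * X              # fitted values from the SRF
u_hat = Y - Y_hat                              # residuals

print(np.isclose(u_hat.mean(), 0.0))           # mean residual is zero
print(np.isclose(Y_hat.mean(), Y.mean()))      # mean of fitted Y equals mean of Y
print(np.isclose((u_hat * X).sum(), 0.0))      # residuals uncorrelated with X
print(np.isclose((u_hat * Y_hat).sum(), 0.0))  # residuals uncorrelated with fitted Y
```

Because β̂1 = Ȳ − β̂2X̄ by construction, the fitted line necessarily passes through the point of sample means (X̄, Ȳ).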
The assumptions underlying the method of least squares
• Assumption 1: The regression model is linear (in the parameters)
• Assumption 2: X values are fixed in repeated sampling
• Assumption 3: Zero mean value of ui: E(ui⏐Xi) = 0
• Assumption 4: Homoscedasticity, or equal variance of ui: Var(ui⏐Xi) = σ²  [vs. heteroscedasticity]
• Assumption 5: No autocorrelation between the disturbances: Cov(ui, uj⏐Xi, Xj) = 0 for i ≠ j  [vs. positive or negative autocorrelation]
The assumptions underlying the method of least squares
• Assumption 6: Zero covariance between ui and Xi: Cov(ui, Xi) = E(uiXi) = 0
• Assumption 7: The number of observations n must be greater than the number of parameters to be estimated
• Assumption 8: Variability in the X values; they must not all be the same
• Assumption 9: The regression model is correctly specified
• Assumption 10: There is no perfect multicollinearity between the Xs
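Assumption 4 is easiest to see by contrast. The sketch below, with an illustrative variance function 0.3·Xi that is not from the text, simulates homoscedastic and heteroscedastic disturbances and compares their spread across the X range:

```python
# A sketch contrasting homoscedastic disturbances (Assumption 4) with a
# heteroscedastic alternative; the variance function 0.3 * X is illustrative.
import numpy as np

rng = np.random.default_rng(1)
X = np.linspace(1, 10, 200)

u_homo = rng.normal(0, 1.0, size=X.size)   # Var(u|X) = sigma^2, constant
u_hetero = rng.normal(0, 0.3 * X)          # Var(u|X) grows with X

half = X.size // 2
print(u_homo[:half].std(), u_homo[half:].std())      # roughly equal spreads
print(u_hetero[:half].std(), u_hetero[half:].std())  # clearly unequal spreads
```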
Precision or standard errors of least-squares estimates
• In statistics, the precision of an estimate is measured by its standard error (SE)
• var(β̂2) = σ² / ∑xi²  (3.3.1), where xi = Xi − X̄ denotes deviations from the sample mean
• se(β̂2) = √var(β̂2)  (3.3.2)
• var(β̂1) = σ²∑Xi² / (n∑xi²)  (3.3.3)
• se(β̂1) = √var(β̂1)  (3.3.4)
• σ̂² = ∑ûi² / (n − 2)  (3.3.5)
• σ̂ = √σ̂² is the standard error of the estimate
Precision or standard errors of least-squares estimates
• Features of the variances:
  – var(β̂2) is proportional to σ² and inversely proportional to ∑xi²
  – var(β̂1) is proportional to σ² and to ∑Xi², but inversely proportional to ∑xi² and to the sample size n
  – cov(β̂1, β̂2) = −X̄ var(β̂2) shows how β̂1 and β̂2 are related: unless X̄ = 0, they covary (negatively if X̄ is positive)
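A minimal sketch of formulas (3.3.1)-(3.3.5) and the covariance above, reusing the same illustrative simulated data as the earlier sketch (true β1 = 2, β2 = 0.5, neither from the text):

```python
# Standard errors per (3.3.1)-(3.3.5); the simulated data are illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=50)
Y = 2.0 + 0.5 * X + rng.normal(0, 1, size=50)

x = X - X.mean()
beta2_hat = (x * (Y - Y.mean())).sum() / (x ** 2).sum()
beta1_hat = Y.mean() - beta2_hat * X.mean()
u_hat = Y - (beta1_hat + beta2_hat * X)

n = X.size
sigma2_hat = (u_hat ** 2).sum() / (n - 2)                       # (3.3.5)
var_beta2 = sigma2_hat / (x ** 2).sum()                         # (3.3.1)
se_beta2 = np.sqrt(var_beta2)                                   # (3.3.2)
var_beta1 = sigma2_hat * (X ** 2).sum() / (n * (x ** 2).sum())  # (3.3.3)
se_beta1 = np.sqrt(var_beta1)                                   # (3.3.4)
cov_b1_b2 = -X.mean() * var_beta2                               # covariance above

print(se_beta1, se_beta2, np.sqrt(sigma2_hat))  # last value: SE of the estimate
```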
Properties of least-squares estimators: The Gauss-Markov Theorem
• An OLS estimator is said to be BLUE if:
  – It is linear, that is, a linear function of a random variable, such as the dependent variable Y in the regression model
  – It is unbiased, that is, its average or expected value, E(β̂2), is equal to the true value β2
  – It has minimum variance in the class of all such linear unbiased estimators
• An unbiased estimator with the least variance is known as an efficient estimator
Properties of least-squares estimators: The Gauss-Markov Theorem
• Gauss-Markov Theorem: Given the assumptions of the classical linear regression model, the least-squares estimators, in the class of unbiased linear estimators, have minimum variance; that is, they are BLUE
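Unbiasedness is a statement about repeated sampling, which a small Monte Carlo experiment (previewing page 85) makes concrete. The true slope and replication count below are illustrative assumptions:

```python
# Monte Carlo sketch of E(beta2_hat) = beta2: X is held fixed across samples
# (Assumption 2) while new disturbances are drawn each replication.
import numpy as np

rng = np.random.default_rng(2)
X = np.linspace(1, 10, 30)   # fixed in repeated sampling
x = X - X.mean()
true_beta2 = 0.5             # illustrative true slope

estimates = []
for _ in range(5000):
    u = rng.normal(0, 1, size=X.size)
    Y = 2.0 + true_beta2 * X + u
    estimates.append((x * (Y - Y.mean())).sum() / (x ** 2).sum())

print(np.mean(estimates))    # averages out very close to 0.5
```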
The coefficient of determination r²: A measure of “Goodness of fit”
• TSS = ∑yi² = Total Sum of Squares
• ESS = ∑ŷi² = β̂2²∑xi² = Explained Sum of Squares
• RSS = ∑ûi² = Residual Sum of Squares
• 1 = ESS/TSS + RSS/TSS; or
• 1 = r² + RSS/TSS; or r² = 1 − RSS/TSS
The coefficient of determination r²: A measure of “Goodness of fit”
• Yi = Ŷi + ûi; subtracting Ȳ from both sides and noting that the mean of Ŷi equals Ȳ gives
• yi = ŷi + ûi
• Squaring both sides and summing, the cross-product term 2∑ŷiûi vanishes (the residuals are uncorrelated with the fitted values), leaving ∑yi² = β̂2²∑xi² + ∑ûi²; that is,
• TSS = ESS + RSS
The coefficient of determination r²: A measure of “Goodness of fit”
• r² = ESS/TSS is the coefficient of determination; it measures the proportion (or percentage) of the total variation in Y explained by the regression model
• 0 ≤ r² ≤ 1
• r = ±√r² is the sample correlation coefficient
• Some properties of r
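A minimal sketch checking the decomposition numerically, on the same illustrative simulated data as the earlier sketches:

```python
# Verifies TSS = ESS + RSS and the two equivalent forms of r^2.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=50)
Y = 2.0 + 0.5 * X + rng.normal(0, 1, size=50)

x = X - X.mean()
beta2_hat = (x * (Y - Y.mean())).sum() / (x ** 2).sum()
beta1_hat = Y.mean() - beta2_hat * X.mean()
u_hat = Y - (beta1_hat + beta2_hat * X)

TSS = ((Y - Y.mean()) ** 2).sum()       # total sum of squares
ESS = beta2_hat ** 2 * (x ** 2).sum()   # explained sum of squares
RSS = (u_hat ** 2).sum()                # residual sum of squares

print(np.isclose(TSS, ESS + RSS))       # TSS = ESS + RSS
print(ESS / TSS, 1 - RSS / TSS)         # equal: both are r^2
```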
The coefficient of determination r²: A measure of “Goodness of fit”
• A numerical example (pages 80-83)
• Illustrative examples (pages 83-85)
• Coffee demand function
• Monte Carlo experiments (page 85)
• Summary and conclusions (pages 86-87)
Suggestions, Questions, and/or Comments
Up Next…
Chapter Four
Classical Normal Linear Regression Model (CNLRM)