
GENERALIZED ARCH MODELS
WITH APPLICATIONS
FALL 2004
R. Adam Hoppes
Department of Statistics
North Carolina State University
Contents

1 Introduction and Motivation
  1.1 Examples: Securities and Commodities
2 Univariate ARCH Processes
  2.1 ARCH(1) Model
  2.2 ARCH(p) Model
3 Univariate GARCH Processes
  3.1 GARCH(p, q) Model
  3.2 Extended GARCH(p, q) Models
4 References
ST 730: CONDITIONAL HETEROSKEDASTIC MODELS

1 Introduction and Motivation
For the most part, an introductory course in time series focuses on analyzing the conditional mean behaviour of a process, also known as the first central moment, defined as µ_1 = E[y − E(y)] = E[y − µ] = ∫ (y − µ) f_y(y) dy. During this semester a gallimaufry of techniques has enabled us to model the characteristics of these ‘special’ types of processes. Recent problems in finance have motivated the study of modelling the conditional variance of a process, or the second central moment: µ_2 = E[(y − µ)^2] = ∫ (y − µ)^2 f_y(y) dy. This concept is applicable in many risk management applications, such as options pricing and value-at-risk estimation. However, the most pervasive role of modelling volatility is illustrated in Example 1.1.
1.1 Examples: Securities and Commodities
Example 1.1 (Amazon Series). The Amazon series, Brocklebank & Dickey (2003), represents daily stock prices from May 16, 1997 to May 25, 1999. The following scenario is indicative of how security prices are analyzed within the ARMA modelling framework.
• Plot the xt series. It appears to have a nonconstant variance. The natural log transformation is
well known as a variance stabilizer; hence, we analyze the ln(xt ) series.
Figure 1.1: Amazon and ln(Amazon) closing prices.
• Plot the autocorrelation function (ACF) and partial autocorrelation function (PACF), see Figure
1.2. As the lag length h increases, the estimated autocorrelations for xt and ln(xt ) slowly decay.
Hence, first differences are taken to correct for nonstationarity in the mean: y_t = ln(x_t) − ln(x_{t−1}).
Figure 1.2: Amazon series (xt ) autocorrelations.
• After we address nonconstant variance and nonstationarity, the yt series resembles a white noise
process, see Figure 1.3. This is rather unfortunate. Any suggestions?
Figure 1.3: First differenced log Amazon series (yt ) and autocorrelations.
Hence, our final model for y_t is an ARIMA(0,1,0): ln(x_t) − ln(x_{t−1}) = ε_t. This is well known as the
random walk model or stock market model. Random walk theory merely states that the future price
movements cannot be predicted from past price movements alone. For example, the change in the stock price from time t to t + 1 is unpredictable given past information. What is the statistician, econometrician, or financier to do?
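The log-difference transformation used in Example 1.1 is easy to compute directly. A minimal sketch in Python (the prices below are made-up numbers, not the Amazon data):

```python
import math

def log_returns(prices):
    """Log returns: y_t = ln(x_t) - ln(x_{t-1})."""
    return [math.log(b) - math.log(a) for a, b in zip(prices, prices[1:])]

# Hypothetical closing prices; any positive price series works.
prices = [100.0, 102.0, 101.0, 105.0]
y = log_returns(prices)   # one fewer observation than prices
```

Note that log returns telescope: their sum recovers ln(x_n) − ln(x_1).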
The log differences of the Amazon series resembled white noise in Example 1.1. By an analogous ARMA model-building process, the IBM series (Example 1.2) follows a similar random walk process. In light of this rather unsettling plight, we look for other types of structure present in the y_t series.
Example 1.2 (IBM Series). The IBM series represents daily stock returns from February 2, 1984 to
December 31, 1991, Zivot and Wang (2003).
• Firstly, the distribution of y_t has heavier tails than a normal distribution. Kurtosis, the normalized fourth central moment of a distribution, is defined as κ = µ_4/µ_2^2 and measures the degree of peakedness in a distribution. The standard normal distribution has a kurtosis of κ_{N(0,1)} = µ_4/µ_2^2 = 3/1^2 = 3. In the literature, leptokurtic is often used to describe distributions that are peaked and have fat tails.
Sample Moments:
     mean      std  skewness  kurtosis
0.0001348  0.01443    -2.004     38.27
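Sample moments like those above can be reproduced for any return series with a few lines of code. A sketch in Python, using the (biased) central-moment estimators m_k = (1/n) Σ (y_i − ȳ)^k:

```python
import math

def sample_moments(y):
    """Return (mean, std, skewness m3/m2^1.5, kurtosis m4/m2^2)."""
    n = len(y)
    mean = sum(y) / n
    m2 = sum((v - mean) ** 2 for v in y) / n
    m3 = sum((v - mean) ** 3 for v in y) / n
    m4 = sum((v - mean) ** 4 for v in y) / n
    return mean, math.sqrt(m2), m3 / m2 ** 1.5, m4 / m2 ** 2

# Toy data; the IBM numbers above came from the actual return series.
mean, std, skew, kurt = sample_moments([0.01, -0.02, 0.015, -0.001, 0.03, -0.025])
```

Statistical packages may report excess kurtosis (κ − 3) or use bias-corrected estimators, so small numerical differences from this sketch are expected.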
• Secondly, the changes in yt tend to be clustered. (This may be easier to visualize in a graph of the
squared yt series and even easier to see in Example 1.3.) Hence, dependence in the variability or
volatility of the observed values is present.
Figure 1.4: IBM series: yt and yt2 .
Figure 1.5: IBM series: yt and yt2 correlations.
• And finally, the y_t^2 series is correlated and nonnegative.
Example 1.3 (Copper Series). In this example, the concept of volatility clustering is represented more clearly. The copper series represents the cash settlement of copper prices in U.S. dollars ($) in the spot market on the London Metal Exchange from January 3, 1989 to October 31, 2002.
Figure 1.6: Copper series: yt and yt2 .
2 Univariate ARCH Processes
In order to accurately model the conditional variance of a process, we need to introduce the autoregressive conditionally heteroskedastic (ARCH) class of models. In this sense, autoregressive means the variance of a process can be modelled as a linear combination of past variances, and conditionally heteroskedastic is a scholarly way to express volatility clustering. In Example 1.2, it seemed reasonable to consider a model that allows dependence in the variances of y_t, denoted by σ_t^2. This is the basic idea behind heteroskedasticity and gives us the H in ARCH. ARCH models were first introduced by Engle (1982).
2.1 ARCH(1) Model
The ARCH(1) model:

    y_t = σ_t ε_t                          (2.1)
    σ_t^2 = α_0 + α_1 y_{t-1}^2            (2.2)

where ε_t ∼ iid(0, 1). Some notes on the ARCH(1) model.
1. Model constraints. As with ARMA models, one must impose constraints on the parameters, α_0 and α_1, in order to obtain tractable properties, such as σ_t^2 > 0. Think back to the stationarity requirements of ARMA processes. How important was this property with respect to estimation, forecasting, etc.? More specific constraints on the model parameters will be derived below.
2. Models with ARCH errors. We can think of y_t as a white noise process with its variance a function of past variances. However, if a process is not initially white noise, i.e., some correlation structure exists among the observations, the researcher may need to first fit a regression or ARMA model, output the residuals, and then model them as an ARCH process.
3. Examine the behaviour of y_t conditionally. Assume the following distributional assumption on the error series: ε_t ∼ N(0, 1). Rewrite the ARCH(1) model as y_t = √(α_0 + α_1 y_{t-1}^2) ε_t. Conditional on y_{t-1}, y_t has a normal distribution: y_t | y_{t-1} ∼ N(0, α_0 + α_1 y_{t-1}^2). Some standard results follow:

   • E(y_t | y_{t-1}) = 0
   • V(y_t | y_{t-1}) = E(y_t^2 | y_{t-1}) − [E(y_t | y_{t-1})]^2 = E(y_t^2 | y_{t-1}) = α_0 + α_1 y_{t-1}^2 = σ_t^2

   Hence, the conditional variance of y_t, V(y_t | y_{t-1}), is a function of y_{t-1}^2, just like an AR(1) model. This is where the AR (autoregressive) and C (conditional) parts of ARCH originate.
4. Non-normal AR(1) model for y_t^2 with ν_t errors. Likewise, we could express the model as:

       y_t^2 = y_t^2 + (σ_t^2 − σ_t^2)
             = (σ_t ε_t)^2 + α_0 + α_1 y_{t-1}^2 − σ_t^2
             = α_0 + α_1 y_{t-1}^2 + σ_t^2 (ε_t^2 − 1)
             = α_0 + α_1 y_{t-1}^2 + ν_t

   where ν_t = σ_t^2 (ε_t^2 − 1). Since ε_t ∼ iid N(0, 1), ε_t^2 ∼ iid χ_1^2. As a result, (ε_t^2 − 1) is a shifted (to have mean zero) χ_1^2 random variable.
5. Examine the behaviour of y_t unconditionally. Using the law of iterated expectations, E_Y(Y) = E_X[E_{Y|X}(Y|X)], and the variance computing formula, V(y_t) = E(y_t^2) − [E(y_t)]^2, the following ARCH(1) properties are examined:

   • E(y_t) = E_{y_{t-1}}[E(y_t | y_{t-1})] = E_{y_{t-1}}[0] = 0
   • V(y_t) = E(y_t^2) − [E(y_t)]^2 = E(y_t^2)
            = E_{y_{t-1}}[E(y_t^2 | y_{t-1})]
            = E_{y_{t-1}}[V(y_t | y_{t-1}) + [E(y_t | y_{t-1})]^2]
            = E_{y_{t-1}}[V(y_t | y_{t-1})]
            = E_{y_{t-1}}(α_0 + α_1 y_{t-1}^2)
            = α_0 + α_1 E(y_{t-1}^2)
            = α_0 + α_1 E(y_t^2)                  (by stationarity)
            = α_0 + α_1 (V(y_t) + [E(y_t)]^2)
            = α_0 + α_1 V(y_t)

   Thus, V(y_t) = α_0/(1 − α_1). Because the variance of y_t must be positive, α_0 > 0; the support of α_1 is restricted to the set [0, 1). Typically, this constraint is stated as 0 ≤ α_1 < 1.
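The unconditional variance α_0/(1 − α_1) can be checked by simulation. A sketch in Python (the parameter values are illustrative):

```python
import math
import random

def simulate_arch1(n, a0, a1, burn=500, seed=1):
    """Simulate y_t = sigma_t * eps_t with sigma_t^2 = a0 + a1 * y_{t-1}^2."""
    rng = random.Random(seed)
    y, prev = [], 0.0
    for _ in range(n + burn):
        sigma = math.sqrt(a0 + a1 * prev ** 2)
        prev = sigma * rng.gauss(0.0, 1.0)
        y.append(prev)
    return y[burn:]                 # drop the burn-in period

a0, a1 = 0.05, 0.5
y = simulate_arch1(100_000, a0, a1)
var_hat = sum(v * v for v in y) / len(y)   # E(y_t) = 0, so this estimates V(y_t)
# var_hat should be close to a0 / (1 - a1) = 0.1
```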
6. Higher order moments of y_t. In some applications, assumptions on higher moments of y_t are necessary. This is critical in extreme value theory (EVT) settings, such as stress-testing. In particular, we require the fourth moment to be finite: E(y_t^4) < ∞. It can be shown that the fourth moment (presented below) is finite provided that 3α_1^2 < 1. Combining this result with the previous constraint: 0 ≤ α_1^2 < 1/3, or alternatively 0 ≤ α_1 < 1/√3.

       E(y_t^4) = 3α_0^2 (1 − α_1^2) / [(1 − α_1)^2 (1 − 3α_1^2)]
   The kurtosis of y_t is:

       κ = µ_4/µ_2^2 = 3 (1 − α_1^2) / (1 − 3α_1^2)
   where µ_i represents the i-th central moment. Thus, κ > 3, and the distribution of y_t will always
have fatter tails than the normal distribution. Recall the kurtosis for the IBM series in Example
1.2 was 38.27. This result empirically supports the phenomenon that outliers appear more often
in asset returns than those that would appear from a white noise sequence.
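Evaluating the kurtosis formula numerically makes the fat-tail effect concrete; κ grows without bound as α_1 approaches 1/√3. A small sketch:

```python
def arch1_kurtosis(a1):
    """Kurtosis of a Gaussian ARCH(1) process; requires 3*a1**2 < 1."""
    if 3 * a1 ** 2 >= 1:
        raise ValueError("fourth moment is infinite")
    return 3 * (1 - a1 ** 2) / (1 - 3 * a1 ** 2)

# kappa = 3 at a1 = 0 (Gaussian white noise) and increases with a1:
kappas = {a1: arch1_kurtosis(a1) for a1 in (0.0, 0.2, 0.4, 0.5)}
```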
7. Alternative representation. Let ε_t be a sequence with mean zero and conditional variance σ_t^2. In other words, E(ε_t) = 0 and V(ε_t | F_{t-1}) = E(ε_t^2 | F_{t-1}) − [E(ε_t | F_{t-1})]^2 = E(ε_t^2 | F_{t-1}) = σ_t^2, where F_{t-1} represents the set of information up to time t − 1. Then the following equations alternatively represent an ARCH(1) process:

       y_t = ε_t                              (2.3)
       σ_t^2 = α_0 + α_1 ε_{t-1}^2            (2.4)

   If Equation 2.4 is rewritten such that

       σ_t^2 = α_0 + α_1 ε_{t-1}^2
       E(ε_t^2 | F_{t-1}) = α_0 + α_1 ε_{t-1}^2
       ε_t^2 = α_0 + α_1 ε_{t-1}^2 + [ε_t^2 − E(ε_t^2 | F_{t-1})]
       ε_t^2 = α_0 + α_1 ε_{t-1}^2 + ω_t

   where ω_t = ε_t^2 − E(ε_t^2 | F_{t-1}) is a mean-zero, serially uncorrelated sequence, then the last equation above represents an AR(1) process for ε_t^2. This is where the AR (autoregressive) and C (conditional) parts of ARCH originate in the alternative representation.
2.2 ARCH(p) Model
The ARCH(1) model can easily be extended to the ARCH(p) model:

    y_t = σ_t ε_t                                                        (2.5)
    σ_t^2 = α_0 + α_1 y_{t-1}^2 + α_2 y_{t-2}^2 + … + α_p y_{t-p}^2
          = α_0 + Σ_{i=1}^{p} α_i y_{t-i}^2                              (2.6)

where ε_t ∼ iid(0, 1) and the parameters must satisfy the following constraints: α_0 > 0, α_i ≥ 0 for all i = 1, …, p, and Σ_{i=1}^{p} α_i < 1. Some authors prefer to denote the conditional variance, σ_t^2, as h_t. Hence, Equations 2.5 and 2.6 become:

    y_t = √(h_t) ε_t                                                     (2.7)
    h_t = α_0 + α_1 y_{t-1}^2 + α_2 y_{t-2}^2 + … + α_p y_{t-p}^2        (2.8)

Some weaknesses of ARCH(p) models:
• The model treats positive and negative returns similarly. Empirical evidence suggests the variability of asset returns differs between gains and losses in financial markets.
• Strong restrictions are placed on the αi coefficients.
• Oftentimes, the model overestimates volatility because it responds slowly to large isolated shocks
in the asset return series.
Model Building:
(1) Build a model for the observed time series in order to remove any serial correlation in the data.
With respect to financial assets, this typically involves taking logarithms and first differences.
Define this result to be yt .
(2) Examine the squared series to check for conditional heteroskedasticity. In other words, plot the ACF and PACF of y_t^2. What would you expect to see if conditional heteroskedasticity is present? For the theoretically inclined, two tests for detecting ARCH effects are available: [1] the Ljung-Box test and [2] the Lagrange multiplier test proposed by Engle (1982), whose null hypothesis (no ARCH effects are present) is α_1 = … = α_p = 0.
(3) Find the appropriate order of the ARCH model, obtain parameter estimates via maximum likelihood estimation, perform diagnostic tests and plots, and use the final model to forecast.
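As a sketch of step (2), the Ljung-Box Q statistic can be computed directly from the sample autocorrelations; applying it to the squared series targets ARCH effects, since under the null hypothesis Q is approximately χ² with m degrees of freedom. A pure-Python version (variable names are illustrative):

```python
def ljung_box(y, m):
    """Ljung-Box statistic Q = n(n+2) * sum_{k=1}^{m} rho_k^2 / (n - k)."""
    n = len(y)
    mean = sum(y) / n
    denom = sum((v - mean) ** 2 for v in y)
    q = 0.0
    for k in range(1, m + 1):
        rho_k = sum((y[t] - mean) * (y[t - k] - mean) for t in range(k, n)) / denom
        q += rho_k ** 2 / (n - k)
    return n * (n + 2) * q

# A strongly autocorrelated series produces a very large Q:
q_alt = ljung_box([(-1.0) ** t for t in range(200)], 5)
```

For ARCH detection, pass the squared (or squared-residual) series, e.g. `ljung_box([v * v for v in y], m)`.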
Example 2.1 (ARCH(2) Simulation). In the following example, 500 observations from an ARCH(2) process are simulated in S-Plus three different ways. In the first simulation, the starting value of the y_t series is randomly generated from its derived distribution.
# [1] Generate an ARCH(2) series of length 500: theory
set.seed(7)
n <- 502
e <- rnorm(n)
e <- rts(e)
y <- double(n)                  # initialize the y vector to be of size n
a <- c(0.05, 0.50, 0.35)        # alpha coefficient vector
y[1:2] <- rnorm(1, sd = sqrt(a[1]/(1.0 - a[2] - a[3])))  # starting value outside of loop
for(i in 3:n)                   # generate the ARCH(2) process
{
  y[i] <- e[i]*sqrt(a[1] + a[2]*y[i-1]^2 + a[3]*y[i-2]^2)
}
y <- rts(y[3:502])              # drop the first two elements of y
In the second simulation, drop the first 100 observations from the series. Why is this method of simulation
consistent? Recall the AR(p) simulation at the beginning of the semester.
# [2] Generate an ARCH(2) series: practical
set.seed(816)
ee <- rnorm(n + 100)
yy <- double(n + 100)
for(i in 3:(n + 100)) { yy[i] <- ee[i]*sqrt(a[1] + a[2]*yy[i-1]^2 + a[3]*yy[i-2]^2) }
yy <- rts(yy[103:(n + 100)])    # drop the first 100 observations
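The burn-in approach in simulation [2] translates almost line-for-line into Python (the seed and names here are illustrative, so the draws will not match the S-Plus output):

```python
import math
import random

rng = random.Random(816)
a = (0.05, 0.50, 0.35)       # alpha_0, alpha_1, alpha_2
n, burn = 500, 100
yy = [0.0, 0.0]              # arbitrary starting values
for i in range(2, n + burn):
    sigma = math.sqrt(a[0] + a[1] * yy[i - 1] ** 2 + a[2] * yy[i - 2] ** 2)
    yy.append(sigma * rng.gauss(0.0, 1.0))
yy = yy[burn:]               # drop the burn-in; the start-up values wash out
```

Since α_1 + α_2 = 0.85 < 1, the process is stationary with unconditional variance α_0/(1 − α_1 − α_2) = 1/3, which is why discarding a burn-in yields draws from (approximately) the stationary distribution.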
And finally, let’s just use the simulate.garch function available in the finmetrics module.
# [3] Generate an ARCH(2) series of length 500 via finmetrics
module(finmetrics)
ysim = simulate.garch(model=list(a.value=0.05, arch=c(0.50, 0.35)), n=500, n.start=100, sigma=F)
names(ysim)
Plot the error series, e, produced in simulation [1] as well as the ARCH(2) processes in simulations
[1] and [2]. Can you visualize the conditional heteroskedasticity? Can you visualize an autoregressive
structure? In particular, what properties are you looking for in these graphs?
Figure 2.7: White noise, ARCH(2) simulation [1] and simulation [2].
Compare and contrast the ACF and PACF plots for the two pairs of series: e_t, y_t and e_t^2, y_t^2.
Figure 2.8: ACF and PACF for e_t, y_t and e_t^2, y_t^2.
The plot below contains the simulated values of y_t in addition to the simulated σ_t^2 values. Can you picture the serial correlation between plots?
Figure 2.9: ARCH(2) series generated from simulate.garch: y_t and σ_t^2.
Example 2.2 (Example 2.1 continued). Analysis for the simulated series in the previous example. Plotting the series along with the ACF and PACF for y_t and y_t^2 is always a good starting point. In addition, perform the Ljung-Box and Lagrange multiplier tests to detect conditional heteroskedasticity effects. These tests can be performed by calling the autocorTest(y, lag=p) and archTest(y, lag.n=p) functions in S-Plus, respectively.
Null Hypothesis: no ARCH effects

Test Stat  238.8776
  p.value    0.0000
Figure 2.10: The y_t series.
In Figure 2.10, the y_t series mostly resembles white noise except between time = 125 and time = 175. The ACF and PACF plots are difficult to interpret. Discuss plots.

The y_t^2 series exhibits a choppy exponential decay in the ACF plot, whereas the PACF displays groups of significant spikes. To start, fit an ARCH(1) model in S-Plus: y.splus = garch(y ~ -1, ~garch(1,0)) or in R: y.r <- garch(y, order = c(0,1)). Discuss output and plots.
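To see what garch() is doing under the hood, the conditional Gaussian log-likelihood of an ARCH(1) model can be written down and maximized directly. A rough Python sketch, with a coarse grid search standing in for a proper numerical optimizer (true parameter values and the seed are illustrative):

```python
import math
import random

def arch1_loglik(y, a0, a1):
    """Conditional log-likelihood of ARCH(1), conditioning on the first observation."""
    ll = 0.0
    for t in range(1, len(y)):
        s2 = a0 + a1 * y[t - 1] ** 2                 # sigma_t^2
        ll -= 0.5 * (math.log(2 * math.pi) + math.log(s2) + y[t] ** 2 / s2)
    return ll

# Simulate a series with known parameters, then recover them by grid search.
rng = random.Random(3)
a0_true, a1_true = 0.1, 0.5
y, prev = [], 0.0
for _ in range(5000):
    prev = math.sqrt(a0_true + a1_true * prev ** 2) * rng.gauss(0.0, 1.0)
    y.append(prev)

grid = [i / 20 for i in range(1, 19)]                # 0.05, 0.10, ..., 0.90
a0_hat, a1_hat = max(((a0, a1) for a0 in grid for a1 in grid),
                     key=lambda p: arch1_loglik(y, p[0], p[1]))
```

In practice one would hand this likelihood to a quasi-Newton optimizer; the grid simply keeps the sketch self-contained.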
GARCH(1,0)

> summary(y.mod)

Call: garch(formula.mean = y ~ -1, formula.var = ~ garch(1, 0))
Mean Equation: y ~ -1
Conditional Variance Equation: ~ garch(1, 0)
Conditional Distribution: gaussian

Estimated Coefficients:
--------------------------------------------------------------
           Value  Std.Error  t value  Pr(>|t|)
      A   0.1145   0.006328    18.10         0
ARCH(1)   1.0082   0.085263    11.83         0

AIC(2) = 849.9204
BIC(2) = 858.3496

Normality Test:
--------------------------------------------------------------
Jarque-Bera  P-value  Shapiro-Wilk   P-value
      217.2        0        0.9758  0.003718

Ljung-Box test for standardized residuals:
--------------------------------------------------------------
Statistic  P-value  Chi^2-d.f.
    18.58  0.09911          12

Ljung-Box test for squared standardized residuals:
--------------------------------------------------------------
Statistic    P-value  Chi^2-d.f.
    104.6  1.11e-016          12

Lagrange multiplier test:
--------------------------------------------------------------
 TR^2     P-value  F-stat    P-value
87.08  1.812e-013   9.635  0.0001024
Figure 2.11: Residuals and standardized residuals from the GARCH(1,0) model.
GARCH(2,0)

Call: garch(formula.mean = y ~ -1, formula.var = ~ garch(2, 0))
Mean Equation: y ~ -1
Conditional Variance Equation: ~ garch(2, 0)
Conditional Distribution: gaussian

Estimated Coefficients:
--------------------------------------------------------------
            Value  Std.Error  t value    Pr(>|t|)
      A   0.04977   0.009297    5.354  6.588e-008
ARCH(1)   0.55543   0.089053    6.237  4.771e-010
ARCH(2)   0.43562   0.094299    4.620  2.452e-006
AIC(3) = 720.5209
BIC(3) = 733.1647

Normality Test:
--------------------------------------------------------------
Jarque-Bera  P-value  Shapiro-Wilk  P-value
     0.5854   0.7463        0.9856   0.5714

Ljung-Box test for standardized residuals:
--------------------------------------------------------------
Statistic  P-value  Chi^2-d.f.
     12.9   0.3761          12

Ljung-Box test for squared standardized residuals:
--------------------------------------------------------------
Statistic  P-value  Chi^2-d.f.
    7.127   0.8491          12

Lagrange multiplier test:
--------------------------------------------------------------
 TR^2  P-value  F-stat  P-value
6.374   0.8961  0.5871   0.9304
Figure 2.12: Residuals and standardized residuals from the GARCH(2,0) model.
3 Univariate GARCH Processes
A generalized version of the ARCH model, or GARCH, was developed by Bollerslev (1986). The GARCH model estimates the time-varying volatility σ_t based not only on lagged values of the y_t^2 series but also on lagged values of σ_t^2. Thus, the conditional variance can be modelled within the class of ARMA models. In practice, a large number of parameters is often required to obtain a good fit with ARCH(p) models. The GARCH(p, q) model allows for a more parsimonious representation of the underlying process.
3.1 GARCH(p, q) Model
The GARCH(p, q) model is presented below:

    y_t = σ_t ε_t                                                         (3.9)
    σ_t^2 = α_0 + α_1 y_{t-1}^2 + … + α_p y_{t-p}^2 + β_1 σ_{t-1}^2 + … + β_q σ_{t-q}^2
          = α_0 + Σ_{i=1}^{p} α_i y_{t-i}^2 + Σ_{j=1}^{q} β_j σ_{t-j}^2   (3.10)

where ε_t ∼ iid(0, 1) and the parameters satisfy the following constraints: α_0 > 0, α_i ≥ 0 for all i = 1, …, p, β_j ≥ 0 for all j = 1, …, q, and Σ_{i=1}^{max(p,q)} (α_i + β_i) < 1.
3.2 Extended GARCH(p, q) Models
(IGARCH) The integrated-GARCH models are unit-root GARCH models.

(GARCH-M) The GARCH-in-mean model incorporates a mean such that an additional layer is added to the original GARCH(p, q) model. For example,

    y_t = µ + c σ_t^2 + ω_t                                               (3.11)
    ω_t = σ_t ε_t                                                         (3.12)
    σ_t^2 = α_0 + α_1 y_{t-1}^2 + … + α_p y_{t-p}^2 + β_1 σ_{t-1}^2 + … + β_q σ_{t-q}^2   (3.13)
(CHARMA) The conditional heteroskedastic ARMA model by Tsay (1987) applies a random effects structure
to produce conditional heteroskedasticity.
(EGARCH) The exponential-GARCH proposed by Nelson (1991) allows for asymmetric effects between positive
and negative asset returns.
(TGARCH) The threshold-GARCH model incorporates a binary effect on the conditional variance.
(PGARCH) The power-GARCH allows the powers σ_t^d, where d is a positive integer, to be modelled.
4 References
• Bilder, C.R. (2002) Time Series Analysis. Course notes for Statistics 5053, Oklahoma State
University, Stillwater, Oklahoma.
• Bollerslev, T. (1986) “Generalized autoregressive conditional heteroskedasticity,” Journal of Econometrics, 31, 307-327.
• Brocklebank, J.C. and Dickey, D.A. (2003) SAS for Forecasting Time Series: Second Edition.
Cary, NC: SAS Institute.
• Engle, R.F. (1982) “Autoregressive conditional heteroscedasticity with estimates of the variance
of United Kingdom inflations,” Econometrica, 50, 987-1007.
• Nelson, D.B. (1991) “Conditional heteroskedasticity in asset returns: A new approach,” Econometrica, 59, 347-370.
• Shumway, R.H. and Stoffer, D.S. (2000) Time Series Analysis and Its Applications. New York:
Springer-Verlag.
• Tebbs, J.M. (2004) Statistical Theory II. Course notes for Statistics 771, Kansas State University,
Manhattan, Kansas.
• Tsay, R.S. (1987) “Conditional heteroscedastic time series models,” Journal of the American Statistical Association, 82, 590-604.
• Tsay, R.S. (2002) Analysis of Financial Time Series. New York: John Wiley and Sons.
• Venables, W.N. and Ripley, B.D. (2002) Modern Applied Statistics with S: Fourth Edition. New
York: Springer-Verlag.
• Zivot, E. and Wang, J. (2003) Modeling Financial Time Series with S-PLUS. Seattle, WA: Insightful.
© 2004 R. Adam Hoppes