Lecture 3-4: Probability models for time series

Lecture 3-4, page 1
Outline of lesson 3-4 (chapter 3)
The heaviest (and most theoretical) part of the course:
• AR(p) processes
• MA(q) processes
• ARMA(p,q) processes
• Linear process representation
• ARIMA processes
- But then also the core of the course
___________________________________________
C:\Kyrre\studier\drgrad\Kurs\Timeseries\lecture 03-04 031022.doc, KL, 22.10.03, page 1 of 1
Lecture 3-4, page 2
Some “repetition”
Observed autocorrelation function (ACF):

r_k = Σ_{t=1}^{N-k} (xt - x̄)(xt+k - x̄) / Σ_{t=1}^{N} (xt - x̄)²

Sample mean:

x̄ = (1/n) Σ_{t=1}^{n} xt

Sample ACVF:

γ̂(h) = (1/n) Σ_{t=1}^{n-|h|} (xt+|h| - x̄)(xt - x̄),   -n < h < n

Sample ACF:

ρ̂(h) = γ̂(h)/γ̂(0),   -n < h < n,   |ρ̂(h)| ≤ 1

[For large n, the observed ACF of an i.i.d. time series {Yt} with finite
variance is approximately ACF(h) ~ N(0, 1/n).
Thus a 95% C.I. will be ±1.96/√n.]
(Cf. the horizontal lines in the ACF figures. Strictly, the CI varies with lag, and it
can only be trusted for k < 1/3 of the length of the time series.)
--------
Partial ACF: measures the excess correlation at lag p which is not
accounted for by the first p - 1 lags (p. 56).
It has nice properties for model identification.
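The definitions above are easy to check by hand. Here is a minimal sketch (in Python for illustration; the course code is S-Plus, and `sample_acf` is our own helper, not a library function):

```python
import math

def sample_acf(x, max_lag):
    """Sample ACF rho_hat(h) = gamma_hat(h) / gamma_hat(0), with
    gamma_hat(h) = (1/n) * sum_{t=1}^{n-h} (x_{t+h} - xbar)(x_t - xbar)."""
    n = len(x)
    xbar = sum(x) / n
    def gamma_hat(h):
        return sum((x[t + h] - xbar) * (x[t] - xbar) for t in range(n - h)) / n
    g0 = gamma_hat(0)
    return [gamma_hat(h) / g0 for h in range(max_lag + 1)]

x = [2.0, 4.0, 3.0, 5.0, 4.0, 6.0, 5.0, 7.0]
rho = sample_acf(x, 3)            # rho[0] is always 1
ci = 1.96 / math.sqrt(len(x))     # approximate 95% CI under i.i.d. noise
```

By construction ρ̂(0) = 1 and every ρ̂(h) lies in [-1, 1]; the `ci` value is the ±1.96/√n band drawn in the ACF figures.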
Lecture 3-4, page 3
Moving Average, MA(q),
processes (3.4.3)
• Hard to understand conceptually
• The formulation is:
{εt} is a purely random process with mean zero and variance σ²ε.
{Xt} is said to be a moving average process of order q [MA(q)] if:
Xt = β0εt + β1εt-1 + … + βqεt-q
- In the book, Zt is used instead of εt.
- The {βi} are constants, usually scaled so that β0 = 1.
[Figure: conceptual diagram of the MA process. The "disturbances" (weather etc.) in years t-1, t, and t+1 are the εt-1, εt, εt+1 ("stochastic" part). Mature adults give rise to progeny; surviving adults and new adults give rise to new progeny ("deterministic" part).]
Lecture 3-4, page 4
-> The εi's represent the "disturbance". We talk about "correlated noise".
(While the AR process can be understood as a deterministic process, at
least in a biological sense, the MA process is stochastic.
Another way to understand it: the AR process is determined by
intrinsic factors [e.g., growth, mating, etc.], while the MA process is governed
by extrinsic factors [e.g., rain, temperature, storms, the NAO, etc.].)
Some (useful) calculations:
What do we know about the covariance of this process?
- The process {εt} is purely random, i.e., there is no correlation/covariance
between the εi's at different time points:

Cov(εs, εt) = σ²ε if s = t, and 0 if s ≠ t

The implication of this is that γ(h) is nonzero only up to lag q; for h > q,
γ(h) = 0.
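To make the cut-off concrete: the standard MA(q) autocovariance is γ(h) = σ²ε Σ_{i=0}^{q-h} βi βi+h for |h| ≤ q, and 0 otherwise. A small sketch (Python, for illustration; `ma_acvf` is our own helper):

```python
def ma_acvf(betas, sigma2, h):
    """Theoretical ACVF of the MA(q) process X_t = sum_i beta_i * eps_{t-i}:
    gamma(h) = sigma2 * sum_{i=0}^{q-h} beta_i * beta_{i+h} for |h| <= q, else 0."""
    q = len(betas) - 1
    h = abs(h)
    if h > q:
        return 0.0
    return sigma2 * sum(betas[i] * betas[i + h] for i in range(q - h + 1))

# MA(1) with beta0 = 1, beta1 = 0.5 and unit noise variance:
betas = [1.0, 0.5]
gamma = [ma_acvf(betas, 1.0, h) for h in range(4)]
# gamma(0) = 1.25, gamma(1) = 0.5, and gamma(h) = 0 for h > q = 1
```

Note that ρ(1) = γ(1)/γ(0) = β1/(1 + β1²) here, and the ACF is exactly zero beyond lag q.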
The Backward shift operator, B
Here we introduce the Backward shift operator, B,
defined by:
B^j Xt = Xt-j for all j

then we can write the MA(q)-equation as:

Xt = (β0 + β1B + … + βqB^q)εt = θ(B)εt

where θ(B) is a polynomial of order q in B.
-> Will not be used much in this course, but is very
common in time series literature.
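Applied to a finite sequence, θ(B) is just a weighted sum of lagged values. A sketch (Python, for illustration; `apply_theta` is our own helper, with pre-sample ε's taken as zero):

```python
def apply_theta(betas, eps):
    """Apply theta(B) = beta0 + beta1*B + ... + beta_q*B^q to a sequence:
    X_t = sum_j beta_j * eps_{t-j}; terms with t - j < 0 are dropped,
    i.e., the noise series is taken to start at t = 0."""
    q = len(betas) - 1
    return [sum(betas[j] * eps[t - j] for j in range(min(t, q) + 1))
            for t in range(len(eps))]

eps = [1.0, 0.0, -1.0, 2.0]
x = apply_theta([1.0, 0.5], eps)   # MA(1): X_t = eps_t + 0.5*eps_{t-1}
# x == [1.0, 0.5, -1.0, 1.5]
```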
Lecture 3-4, page 5
How/when will this be evident?
(In the ACF plot: the ACF "cuts off" after lag q.)
MA(1) process with positive coefficient
par(mfrow=c(2,3))
ma <- arima.sim(model=list(ma=c(-.5)), n=100)   # n=100 points, as in the figure
ts.plot(ma, main="Simulated ts, ma= 0.5")
acf(ma)
acf(ma, type="partial")
ts.plot(ma, main="Simulated ts, ma= 0.5")
tmp <- acf(ma, plot=F)
plot(0:15, tmp$acf[1:16], ylim=c(-1,1), xlab="lags", ylab="acf", pch=16)
lines(0:15, tmp$acf[1:16])
abline(2/sqrt(length(ma)), 0, col=8)
abline(-2/sqrt(length(ma)), 0, col=8)
tmp <- acf(ma, type="partial", plot=F)
plot(0:15, tmp$acf[1:16], ylim=c(-1,1), xlab="lags", ylab="partial acf", pch=16)
lines(0:15, tmp$acf[1:16])
abline(2/sqrt(length(ma)), 0, col=8)
abline(-2/sqrt(length(ma)), 0, col=8)
[Figure: six panels for the simulated MA(1) series with positive coefficient ("Simulated ts, ma= 0.5"): the series against time, the default ACF and partial ACF plots ("Series : ma"), and the hand-made acf/partial acf plots with ±2/√n limits.]
Lecture 3-4, page 6
MA(1) process with negative coefficient
par(mfrow=c(2,3))
# NB: Splus reverses sign for MA! (Venables & Ripley, p. 412)
ma <- arima.sim(model=list(ma=c(.5)), n=100)
ts.plot(ma, main="Simulated ts, ma= - 0.5")
acf(ma)
acf(ma, type="partial")
ts.plot(ma, main="Simulated ts, ma= - 0.5")
tmp <- acf(ma, plot=F)
plot(0:15, tmp$acf[1:16], ylim=c(-1,1), xlab="lags", ylab="acf", pch=16)
lines(0:15, tmp$acf[1:16])
abline(2/sqrt(length(ma)), 0, col=8)
abline(-2/sqrt(length(ma)), 0, col=8)
tmp <- acf(ma, type="partial", plot=F)
plot(0:15, tmp$acf[1:16], ylim=c(-1,1), xlab="lags", ylab="partial acf", pch=16)
lines(0:15, tmp$acf[1:16])
abline(2/sqrt(length(ma)), 0, col=8)
abline(-2/sqrt(length(ma)), 0, col=8)
[Figure: six panels for the simulated MA(1) series with negative coefficient ("Simulated ts, ma= - 0.5"): the series against time, the default ACF and partial ACF plots ("Series : ma"), and the hand-made acf/partial acf plots with ±2/√n limits.]
Lecture 3-4, page 7
MA(2) process with 2 coefficients
par(mfrow=c(2,3))
# NB: Splus reverses sign for MA! (Venables & Ripley, p. 412)
ma <- arima.sim(model=list(ma=c(.3, .1)), n=100)
ts.plot(ma, main="Simulated ts, b1=-0.3, b2=-0.1")
tmp <- acf(ma, plot=F)
plot(1:16, tmp$acf[1:16], ylim=c(-1,1), xlab="lags", ylab="acf", pch=16,
     main="MA(2)"); lines(1:16, tmp$acf[1:16])
abline(2/sqrt(length(ma)), 0, col=8)
abline(-2/sqrt(length(ma)), 0, col=8)
tmp <- acf(ma, type="partial", plot=F)
plot(1:16, tmp$acf[1:16], ylim=c(-1,1), xlab="lags", ylab="partial acf", pch=16)
lines(1:16, tmp$acf[1:16])
abline(2/sqrt(length(ma)), 0, col=8)
abline(-2/sqrt(length(ma)), 0, col=8)
ma <- arima.sim(model=list(ma=c(.3, -.1)), n=100)
ts.plot(ma, main="Simulated ts, b1=-0.3, b2=0.1")
tmp <- acf(ma, plot=F)
plot(1:16, tmp$acf[1:16], ylim=c(-1,1), xlab="lags", ylab="acf", pch=16,
     main="MA(2)"); lines(1:16, tmp$acf[1:16])
abline(2/sqrt(length(ma)), 0, col=8)
abline(-2/sqrt(length(ma)), 0, col=8)
tmp <- acf(ma, type="partial", plot=F)
plot(1:16, tmp$acf[1:16], ylim=c(-1,1), xlab="lags", ylab="partial acf", pch=16)
lines(1:16, tmp$acf[1:16])
abline(2/sqrt(length(ma)), 0, col=8)
abline(-2/sqrt(length(ma)), 0, col=8)
[Figure: six panels for the two simulated MA(2) series ("Simulated ts, b1=-0.3, b2=-0.1" and "Simulated ts, b1=-0.3, b2=0.1"), each with its "MA(2)" acf plot and partial acf plot, with ±2/√n limits.]
Lecture 3-4, page 8
The Autoregressive, AR(p),
processes (3.4.4)
We introduced the AR(1) process for a ln-transformed time series, where we
investigated the recruitment rate (i.e., the difference xt - xt-1):

xt = a0 + (1 + a1)xt-1 + εt

Let us be more general:
{Xt} is said to be an autoregressive process of order p [AR(p)] if:

Xt = α1Xt-1 + … + αpXt-p + εt

(In the book, Zt is used instead of εt.)
Note that this εt is the same as the one in the MA process when β0 = 1.
If the mean is not zero, the expression is:

Xt - µ = α1(Xt-1 - µ) + … + αp(Xt-p - µ) + εt

(This does not make any difference for the time dependency.)
a.) First order process
Xt = αXt-1 + εt
Such a process is an example of a Markov process.
(A very important concept in simulation-based methods, e.g., MCMC
[Markov chain Monte Carlo], and in very many other approaches.
Key property: a Markov process depends only on the previous time step.)
By realising that Xt-1 = αXt-2 + εt-1 we can see that:
Xt = α(αXt-2 + εt-1) + εt
Lecture 3-4, page 9
= α²Xt-2 + αεt-1 + εt
= α²(αXt-3 + εt-2) + αεt-1 + εt
and so on, ending up with (provided |α| < 1):
Xt = εt + αεt-1 + α²εt-2 + …
-> The AR(1) process can be expressed as an infinite-order MA process.
-> In fact, all stationary AR processes can be expressed as MA processes of
infinite order.
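This expansion is easy to verify numerically. A sketch (Python, for illustration; starting the recursion at X0 = ε0 makes the MA representation exact and finite):

```python
alpha = 0.5
eps = [1.0, -0.5, 2.0, 0.0, 1.5, -1.0]   # a fixed noise sequence

# AR(1) recursion, started at X_0 = eps_0:
x = [eps[0]]
for t in range(1, len(eps)):
    x.append(alpha * x[-1] + eps[t])

# The same values from the MA representation X_t = sum_j alpha^j * eps_{t-j}:
x_ma = [sum(alpha**j * eps[t - j] for j in range(t + 1))
        for t in range(len(eps))]
# x and x_ma agree term by term
```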
Lecture 3-4, page 10
The ACF of an AR process does not "cut off" at p (the partial ACF does).
(Cf. figure 3.1)
par(mfrow=c(2,3))
ar <- arima.sim(model=list(ar=c(.5)), n=100)
ts.plot(ar, main="Simulated ts, ar=0.5")
acf(ar)
acf(ar, type="partial")
ts.plot(ar, main="Simulated ts, ar=0.5")
tmp <- acf(ar, plot=F)
plot(0:15, tmp$acf[1:16], ylim=c(-1,1), xlab="lags", ylab="acf", pch=16)
lines(0:15, tmp$acf[1:16])
abline(2/sqrt(length(ar)), 0, col=8)
abline(-2/sqrt(length(ar)), 0, col=8)
tmp <- acf(ar, type="partial", plot=F)
plot(0:15, tmp$acf[1:16], ylim=c(-1,1), xlab="lags", ylab="partial acf", pch=16)
lines(0:15, tmp$acf[1:16])
abline(2/sqrt(length(ar)), 0, col=8)
abline(-2/sqrt(length(ar)), 0, col=8)
[Figure: six panels for the simulated AR(1) series with positive coefficient ("Simulated ts, ar=0.5"): the series against time, the default ACF and partial ACF plots ("Series : ar"), and the hand-made acf/partial acf plots with ±2/√n limits.]
Lecture 3-4, page 11
par(mfrow=c(2,3))
ar <- arima.sim(model=list(ar=c(-.5)), n=100)
ts.plot(ar, main="Simulated ts, ar=-0.5")
acf(ar)
acf(ar, type="partial")
ts.plot(ar, main="Simulated ts, ar=-0.5")
tmp <- acf(ar, plot=F)
plot(0:15, tmp$acf[1:16], ylim=c(-1,1), xlab="lags", ylab="acf", pch=16)
lines(0:15, tmp$acf[1:16])
abline(2/sqrt(length(ar)), 0, col=8)
abline(-2/sqrt(length(ar)), 0, col=8)
tmp <- acf(ar, type="partial", plot=F)
plot(0:15, tmp$acf[1:16], ylim=c(-1,1), xlab="lags", ylab="partial acf", pch=16)
lines(0:15, tmp$acf[1:16])
abline(2/sqrt(length(ar)), 0, col=8)
abline(-2/sqrt(length(ar)), 0, col=8)
[Figure: six panels for the simulated AR(1) series with negative coefficient ("Simulated ts, ar=-0.5"): the series against time, the default ACF and partial ACF plots ("Series : ar"), and the hand-made acf/partial acf plots with ±2/√n limits.]
Lecture 3-4, page 12
par(mfrow=c(2,3))
ar <- arima.sim(model=list(ar=c(.3, .2)), n=100)
ts.plot(ar, main="Simulated ts, a1=0.3, a2=0.2")
tmp <- acf(ar, plot=F)
plot(1:16, tmp$acf[1:16], ylim=c(-1,1), xlab="lags", ylab="acf", pch=16,
     main="AR(2)"); lines(1:16, tmp$acf[1:16])
abline(2/sqrt(length(ar)), 0, col=8)
abline(-2/sqrt(length(ar)), 0, col=8)
tmp <- acf(ar, type="partial", plot=F)
plot(1:16, tmp$acf[1:16], ylim=c(-1,1), xlab="lags", ylab="partial acf",
     pch=16); lines(1:16, tmp$acf[1:16])
abline(2/sqrt(length(ar)), 0, col=8)
abline(-2/sqrt(length(ar)), 0, col=8)
ar <- arima.sim(model=list(ar=c(.3, -.2)), n=100)
ts.plot(ar, main="Simulated ts, a1=0.3, a2=-0.2")
tmp <- acf(ar, plot=F)
plot(1:16, tmp$acf[1:16], ylim=c(-1,1), xlab="lags", ylab="acf", pch=16,
     main="AR(2)"); lines(1:16, tmp$acf[1:16])
abline(2/sqrt(length(ar)), 0, col=8)
abline(-2/sqrt(length(ar)), 0, col=8)
tmp <- acf(ar, type="partial", plot=F)
plot(1:16, tmp$acf[1:16], ylim=c(-1,1), xlab="lags", ylab="partial acf",
     pch=16); lines(1:16, tmp$acf[1:16])
abline(2/sqrt(length(ar)), 0, col=8)
abline(-2/sqrt(length(ar)), 0, col=8)
[Figure: six panels for the two simulated AR(2) series ("Simulated ts, a1=0.3, a2=0.2" and "Simulated ts, a1=0.3, a2=-0.2"), each with its "AR(2)" acf plot and partial acf plot, with ±2/√n limits.]
Lecture 3-4, page 13
Linear process
As any AR-process can be expressed as an MA-process of infinite order,
we can express any stationary time series (with expectation 0) as a linear
process:
Xt = Σ_{j=-∞}^{∞} Ψj εt-j   for all t,   εt ~ N(0, σ²)
This is the (mathematical) framework for analysing stationary processes.
Lecture 3-4, page 14
ARMA(p,q) processes (3.4.5)
By expressing a time series as a linear process, it can be estimated.
However, by combining AR processes and MA processes we can:
• Achieve an adequate representation with far fewer parameters
• Arrive at a model that can be understood biologically
(What would it mean that an MA(4) process describes our time series?)
An ARMA process of order (p,q) is given by:

Xt = α1Xt-1 + … + αpXt-p + εt + β1εt-1 + … + βqεt-q

(If the time series is not stationary, and differencing it is the remedy, then
the time series is said to have been integrated. Such a process is called an
autoregressive integrated moving average process of order (p,d,q), where
d simply tells us how many times the series has been differenced
("integrated").
As noted earlier, we are very skeptical of this type of black-box detrending.
Thus, we will not deal with ARIMA models.)
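A minimal simulation sketch of the defining ARMA(1,1) equation (Python, for illustration; the coefficients are arbitrary, and the process is started at X0 = ε0):

```python
import random

alpha, beta = 0.5, 0.4          # AR and MA coefficients of an ARMA(1,1)
rng = random.Random(1)          # seeded, so the run is reproducible
n = 100
eps = [rng.gauss(0.0, 1.0) for _ in range(n)]

# X_t = alpha*X_{t-1} + eps_t + beta*eps_{t-1}, started at X_0 = eps_0:
x = [eps[0]]
for t in range(1, n):
    x.append(alpha * x[-1] + eps[t] + beta * eps[t - 1])
```

By construction every term of the series satisfies the ARMA(1,1) equation exactly.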
-----
Lecture 3-4, page 15
ARMA(1,1) Processes
par(mfrow=c(2,3))
arma <- arima.sim(model=list(ar=c(.5), ma=c(-.5)), n=100)
ts.plot(arma, main="Simulated ts, ar=0.5, ma=-0.5")
tmp <- acf(arma, plot=F)
plot(0:15, tmp$acf[1:16], ylim=c(-1,1), xlab="lags", ylab="acf", pch=16,
     main="ARMA(1,1)"); lines(0:15, tmp$acf[1:16])
abline(2/sqrt(length(arma)), 0, col=8)
abline(-2/sqrt(length(arma)), 0, col=8)
tmp <- acf(arma, type="partial", plot=F)
plot(0:15, tmp$acf[1:16], ylim=c(-1,1), xlab="lags", ylab="partial acf",
     pch=16); lines(0:15, tmp$acf[1:16])
abline(2/sqrt(length(arma)), 0, col=8)
abline(-2/sqrt(length(arma)), 0, col=8)
arma <- arima.sim(model=list(ar=c(.5), ma=c(.4)), n=100)
ts.plot(arma, main="Simulated ts, ar=0.5, ma=0.4")
tmp <- acf(arma, plot=F)
plot(0:15, tmp$acf[1:16], ylim=c(-1,1), xlab="lags", ylab="acf", pch=16,
     main="ARMA(1,1)"); lines(0:15, tmp$acf[1:16])
abline(2/sqrt(length(arma)), 0, col=8)
abline(-2/sqrt(length(arma)), 0, col=8)
tmp <- acf(arma, type="partial", plot=F)
plot(0:15, tmp$acf[1:16], ylim=c(-1,1), xlab="lags", ylab="partial acf",
     pch=16); lines(0:15, tmp$acf[1:16])
abline(2/sqrt(length(arma)), 0, col=8)
abline(-2/sqrt(length(arma)), 0, col=8)
[Figure: six panels for the two simulated ARMA(1,1) series ("Simulated ts, ar=0.5, ma=-0.5" and "Simulated ts, ar=0.5, ma=0.4"), each with its "ARMA(1,1)" acf plot and partial acf plot, with ±2/√n limits.]
Lecture 3-4, page 16
ARMA(p,q) Processes
par(mfrow=c(2,3))
arma <- arima.sim(model=list(ar=c(.3,.1), ma=c(.2, -.1)), n=100)
ts.plot(arma, main="Simulated ts, ar=0.3/0.1, ma=0.2/-0.1")
tmp <- acf(arma, plot=F)
plot(0:15, tmp$acf[1:16], ylim=c(-1,1), xlab="lags", ylab="acf", pch=16,
     main="ARMA(2,2)"); lines(0:15, tmp$acf[1:16])
abline(2/sqrt(length(arma)), 0, col=8)
abline(-2/sqrt(length(arma)), 0, col=8)
tmp <- acf(arma, type="partial", plot=F)
plot(0:15, tmp$acf[1:16], ylim=c(-1,1), xlab="lags", ylab="partial acf",
     pch=16); lines(0:15, tmp$acf[1:16])
abline(2/sqrt(length(arma)), 0, col=8)
abline(-2/sqrt(length(arma)), 0, col=8)
arma <- arima.sim(model=list(ar=c(.3,.1), ma=c(-.1)), n=100)
ts.plot(arma, main="Simulated ts, ar=0.3/0.1, ma=-0.1")
tmp <- acf(arma, plot=F)
plot(0:15, tmp$acf[1:16], ylim=c(-1,1), xlab="lags", ylab="acf", pch=16,
     main="ARMA(2,1)"); lines(0:15, tmp$acf[1:16])
abline(2/sqrt(length(arma)), 0, col=8)
abline(-2/sqrt(length(arma)), 0, col=8)
tmp <- acf(arma, type="partial", plot=F)
plot(0:15, tmp$acf[1:16], ylim=c(-1,1), xlab="lags", ylab="partial acf",
     pch=16); lines(0:15, tmp$acf[1:16])
abline(2/sqrt(length(arma)), 0, col=8)
abline(-2/sqrt(length(arma)), 0, col=8)
[Figure: six panels for the simulated ARMA(2,2) series ("Simulated ts, ar=0.3/0.1, ma=0.2/-0.1") and ARMA(2,1) series ("Simulated ts, ar=0.3/0.1, ma=-0.1"), each with its acf plot and partial acf plot, with ±2/√n limits.]
Lecture 3-4, page 17
Begin
Does a time-plot of the data appear to be stationary?
  No  -> Difference the data (and return to Begin)
  Yes:
Does the correlogram of the data decay to zero?
  No  -> Difference the data (and return to Begin)
  Yes:
Is there a sharp cut-off in the correlogram?
  Yes -> MA
  No:
Is there a sharp cut-off in the partial correlogram?
  Yes -> AR
  No  -> ARMA
(From Diggle 2000, Fig. 6.2, page 169)
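The flowchart can be summarised as a crude rule of thumb. A Python sketch of our reading of Diggle's figure (treating a non-decaying correlogram as a sign of nonstationarity is our interpretation; `suggest_model` is our own name):

```python
def suggest_model(stationary, acf_decays, acf_cuts_off, pacf_cuts_off):
    """Crude encoding of Diggle's (2000, Fig. 6.2) identification flowchart.
    'difference the data' means: transform and start over from Begin."""
    if not stationary or not acf_decays:
        return "difference the data"
    if acf_cuts_off:        # sharp cut-off in the correlogram
        return "MA"
    if pacf_cuts_off:       # sharp cut-off in the partial correlogram
        return "AR"
    return "ARMA"
```

For example, a stationary series whose ACF decays and cuts off sharply suggests an MA model.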
Lecture 3-4, page 18
Begin
Does a time-plot of the data appear to be stationary?
  No  -> Remove reasonable trend and/or do a transformation (and return to Begin)
  Yes:
Does the correlogram of the data decay to zero?
  No  -> Remove reasonable trend and/or do a transformation (and return to Begin)
  Yes:
Is there a sharp cut-off in the correlogram?
  Yes -> MA
  No:
Is there a sharp cut-off in the partial correlogram?
  Yes -> AR
  No  -> ARMA
(Modified from Diggle 2000, Fig. 6.2, page 169)
Lecture 3-4, page 19
Summing up
You should (later?) be able to:
• Explain (conceptually) what the AR-, MA-, and
ARMA-processes are
• Write down their expressions
• Explain some properties of the AR- and MA-processes
(biologically and statistically: the ACF and PACF)
• Be familiar with some terms (Markov process,
ARIMA, backward shift operator, etc.)