Comparison of the autoregressive and autocovariance
prediction results on different stationary time series
Wiesław Kosek
University of Agriculture in Krakow, Poland
Abstract. The advantages and disadvantages of the autoregressive and autocovariance prediction methods are presented using model time series similar to observed geophysical ones, e.g. Earth orientation parameters or sea level anomalies data. In the autocovariance prediction method the first predicted value is determined by the principle that the autocovariances estimated from the series extended by this first prediction value coincide as closely as possible with the autocovariances estimated from the given series. In the autoregressive prediction method the autoregressive model is used to estimate the first prediction value, which depends on the autoregressive order and the coefficients computed from the autocovariance estimate. Both methods require computing autocovariance estimates of the time series, so their application makes sense only when these series are stationary. However, the autoregressive prediction is more suitable for less noisy data and can be applied to series with a short time span. The autocovariance prediction is recommended for longer time series but, unlike the autoregressive method, can be applied to noisier data. The autoregressive method can be applied to time series containing oscillations with close frequencies, while the autocovariance prediction is not suitable for such data. For the autocovariance prediction, the problem of estimating the appropriate forecast amplitude is also discussed.
European Geosciences Union General Assembly 2015, Vienna | Austria | 12 – 17 April 2015
MODEL DATA

The model series are sums of L complex oscillations with additive noise:

$$z_t = \sum_{i=1}^{L} A_i \exp\!\left[\mathrm{i}\left(\frac{2\pi t}{T_i} + \varphi_i(t)\right)\right] + n_t, \qquad \varphi_i(t) = \sum_{k=1}^{t} n_k,$$

where the random-walk phase $\varphi_i(t)$ is obtained by integrating white noise.
Model   | L | M    | Periods T_i                          | Amplitudes A_i   | Phases φ_i(t)                                          | Noise n_t (std. dev.)
MODEL 1 | 3 | 100  | 20, 30, 50                           | all equal to 1.0 | all equal to 0.0                                       | 0, 3
MODEL 2 | 3 | 100  | 50, 57, 60                           | all equal to 1.0 | all equal to 0.0                                       | 0, 1, 2, 3
MODEL 3 | 9 | 100  | 10, 15, 20, 25, 40, 60, 90, 120, 180 | all equal to 1.0 | all equal to 0.0                                       | 0, 1, 2, 3, 5
MODEL 4 | 1 | 1000 | 365.24                               | 1.0              | random walk: integrated white noise (sd = 1°, 2°, 3°)  | 0.1
MODEL 5 | 2 | 1000 | 365.24, 182.62                       | 1.0, 0.5         | random walk: integrated white noise (sd = 2°)          | 0.03
MODEL 6 | 2 | 1000 | 365.24, 433.00                       | 0.08, 0.016      | random walk: integrated white noise (sd = 2°)          | 0.0003
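As an illustration, here is a minimal Python sketch (assuming numpy; the function name and the choice of real-valued additive noise are the sketch's own, not taken from the poster) of how such model series can be generated:

```python
import numpy as np

def model_series(periods, amplitudes, n_points, noise_sd=0.0,
                 phase_walk_sd_deg=0.0, seed=0):
    """Sum of complex oscillations with an optional random-walk phase
    (integrated white noise, std. dev. in degrees) plus additive noise."""
    rng = np.random.default_rng(seed)
    t = np.arange(1, n_points + 1)
    z = np.zeros(n_points, dtype=complex)
    sd_rad = np.deg2rad(phase_walk_sd_deg)
    for T, A in zip(periods, amplitudes):
        # phi_i(t) = cumulative sum of white noise; zero when sd_rad == 0
        phi = np.cumsum(rng.normal(0.0, sd_rad, n_points))
        z += A * np.exp(1j * (2.0 * np.pi * t / T + phi))
    # additive noise n_t (taken real-valued here)
    return z + rng.normal(0.0, noise_sd, n_points)

# e.g. MODEL 1: periods 20, 30, 50, unit amplitudes, noise sd = 3
z1 = model_series([20, 30, 50], [1.0, 1.0, 1.0], n_points=1000, noise_sd=3.0)
```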
Autocovariance prediction

$z_t,\ t = 1, 2, \ldots, n$ — complex-valued stationary time series with $E[z_t] = 0$; $n$ — the number of data; $z_{n+1}$ — the prediction sought.

Biased autocovariance estimates of the given and of the extended series:

$$\hat{c}_{zz}^{(n)}(k) = \frac{1}{n}\sum_{t=1}^{n-k} \bar{z}_t z_{t+k}, \quad \text{for } k = 0, 1, \ldots, n-1,$$

$$\hat{c}_{zz}^{(n+1)}(k) = \frac{1}{n+1}\sum_{t=1}^{n+1-k} \bar{z}_t z_{t+k}, \quad \text{for } k = 0, 1, \ldots, n.$$

The first prediction value minimizes the misfit between the two sets of autocovariances,

$$R(z_{n+1}) = \sum_{k=1}^{n-1} \left|\hat{c}_{zz}^{(n)}(k) - \hat{c}_{zz}^{(n+1)}(k)\right|^2 = \min,$$

and setting $\partial R(z_{n+1})/\partial z_{n+1} = 0$ gives

$$\hat{z}_{n+1} = \frac{\sum_{k=1}^{n-1} \hat{c}_{zz}^{(n)}(k)\, z_{n-k+1}}{\hat{c}_{zz}^{(n)}(0)}.$$
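A minimal numpy sketch of this closed-form first prediction value (illustrative, not the author's code; the sum limits follow the reconstruction above):

```python
import numpy as np

def biased_autocov(z):
    """Biased autocovariance c(k) = (1/n) * sum_t conj(z_t) * z_{t+k}."""
    n = len(z)
    return np.array([np.sum(np.conj(z[:n - k]) * z[k:]) / n
                     for k in range(n)])

def first_prediction(z):
    """First autocovariance prediction value z_hat_{n+1}."""
    z = np.asarray(z)
    n = len(z)
    c = biased_autocov(z)
    # z_{n-k+1} in the formula is z[n - k] in 0-based indexing
    return sum(c[k] * z[n - k] for k in range(1, n)) / c[0]
```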
The biased autocovariances of a complex-valued stationary time series can be expressed by the real-valued auto- and cross-covariances of the real and imaginary parts:

$$\hat{c}_{zz}^{(n)}(k) = \frac{1}{n}\sum_{t=1}^{n-k} \bar{z}_t z_{t+k} = \hat{c}_{xx}^{(n)}(k) + \hat{c}_{yy}^{(n)}(k) + \mathrm{i}\left[\hat{c}_{yx}^{(n)}(k) - \hat{c}_{xy}^{(n)}(k)\right], \quad k = 0, 1, \ldots, n-1.$$

The time series is extended by the first prediction point $z_{n+1} = x_{n+1} + \mathrm{i}\,y_{n+1}$ computed by

$$\begin{pmatrix} x_{n+1} \\ y_{n+1} \end{pmatrix} = \frac{1}{\hat{c}_{zz}^{(n)}(0)}\sum_{k=1}^{n-1}\left[\hat{a}(k)\begin{pmatrix} x_{n-k+1} \\ y_{n-k+1}\end{pmatrix} + \hat{b}(k)\begin{pmatrix} -y_{n-k+1} \\ x_{n-k+1}\end{pmatrix}\right],$$

where

$$\hat{a}(k) = \hat{c}_{xx}^{(n)}(k) + \hat{c}_{yy}^{(n)}(k), \qquad \hat{b}(k) = \hat{c}_{yx}^{(n)}(k) - \hat{c}_{xy}^{(n)}(k).$$

After the extension, a new estimate of the autocovariance can be computed from the previous one by the recursion formula

$$\hat{c}_{zz}^{(n+1)}(k) = \frac{n\,\hat{c}_{zz}^{(n)}(k) + x_{n-k+1}x_{n+1} + y_{n-k+1}y_{n+1} + \mathrm{i}\left(x_{n-k+1}y_{n+1} - y_{n-k+1}x_{n+1}\right)}{n+1},$$

and used to compute the next prediction point $z_{n+2} = x_{n+2} + \mathrm{i}\,y_{n+2}$, etc.
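A sketch of the resulting multi-step loop: each new point is predicted from the current autocovariances, which are then updated with the recursion above rather than recomputed from scratch (an illustrative reading of these formulas, not the author's implementation):

```python
import numpy as np

def autocov_predict(z, steps):
    """Extend a zero-mean complex series by `steps` prediction points."""
    z = list(z)
    n0 = len(z)
    # biased autocovariances of the given series (previous slide)
    c = [sum(np.conj(z[t]) * z[t + k] for t in range(n0 - k)) / n0
         for k in range(n0)]
    for _ in range(steps):
        n = len(z)
        # closed-form prediction from the current autocovariances
        z_new = sum(c[k] * z[n - k] for k in range(1, len(c))) / c[0]
        # recursion: c(k) <- (n*c(k) + conj(z_{n+1-k}) * z_{n+1}) / (n + 1)
        past = z + [z_new]                  # past[n - k] is z_{n+1-k}
        c = [(n * c[k] + np.conj(past[n - k]) * z_new) / (n + 1)
             for k in range(len(c))]
        z.append(z_new)
    return np.asarray(z[n0:])
```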
Autoregressive prediction

Autoregressive model of order M:

$$x_t = a_1 x_{t-1} + a_2 x_{t-2} + \cdots + a_M x_{t-M} + n_t.$$

Predictions are computed recursively, feeding predicted values back in:

$$\hat{x}_{n+1} = \hat{a}_1 x_n + \hat{a}_2 x_{n-1} + \cdots + \hat{a}_M x_{n-M+1},$$
$$\hat{x}_{n+2} = \hat{a}_1 \hat{x}_{n+1} + \hat{a}_2 x_n + \cdots + \hat{a}_M x_{n-M+2},$$
$$\vdots$$
$$\hat{x}_{n+L} = \hat{a}_1 \hat{x}_{n+L-1} + \hat{a}_2 \hat{x}_{n+L-2} + \cdots + \hat{a}_M x_{n-M+L}.$$

The autoregressive order M is chosen by the Akaike goodness-of-fit (final prediction error) criterion:

$$P(M) = \hat{\sigma}_n^2(M)\,\frac{n + M + 1}{n - M - 1} = \min,$$

where

$$\hat{\sigma}_n^2(M) = \hat{c}_0 - \hat{a}_1 \hat{c}_1 - \hat{a}_2 \hat{c}_2 - \cdots - \hat{a}_M \hat{c}_M.$$

The autoregressive coefficients are the solution of the Yule-Walker system:

$$\begin{pmatrix} \hat{a}_1 \\ \hat{a}_2 \\ \vdots \\ \hat{a}_M \end{pmatrix} = \begin{pmatrix} \hat{c}_0 & \hat{c}_1 & \cdots & \hat{c}_{M-1} \\ \hat{c}_1 & \hat{c}_0 & \cdots & \hat{c}_{M-2} \\ \vdots & \vdots & \ddots & \vdots \\ \hat{c}_{M-1} & \hat{c}_{M-2} & \cdots & \hat{c}_0 \end{pmatrix}^{-1} \begin{pmatrix} \hat{c}_1 \\ \hat{c}_2 \\ \vdots \\ \hat{c}_M \end{pmatrix},$$

with the biased autocovariance estimate

$$\hat{c}_k = \frac{1}{n}\sum_{t=1}^{n-k} x_t x_{t+k}, \quad \text{for } k = 0, 1, \ldots, n-1.$$
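An illustrative sketch of this procedure (assuming numpy and scipy; order selection by the criterion above is left out, and M is passed in directly):

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def ar_predict(x, M, steps):
    """AR(M) prediction: Yule-Walker coefficients, then recursive forecast."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # biased autocovariances c_0 .. c_M
    c = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(M + 1)])
    # solve the Toeplitz (Yule-Walker) system for a_1 .. a_M
    a = solve_toeplitz(c[:M], c[1:M + 1])
    buf = list(x)
    preds = []
    for _ in range(steps):
        # x_hat = a_1*x_t + a_2*x_{t-1} + ... + a_M*x_{t-M+1},
        # with already-predicted values fed back in
        x_hat = float(np.dot(a, buf[-1:-M - 1:-1]))
        preds.append(x_hat)
        buf.append(x_hat)
    return np.array(preds)
```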
[Figure: Uncorrected autocovariance predictions (red) of the deterministic and noise data. Four panels:
T=20, 40, 50; A=1, 1, 1; f=0, 0, 0; sd=0
T=50, 57, 60; A=1, 1, 1; f=0, 0, 0; sd=0
T=10, 15, 20, 25, 40, 60, 90, 120, 180; A=1 (all); f=0 (all); sd=0
noise data; sd=1]
Correction to amplitudes of the autocovariance prediction

[Figure: a model time series (signal x_{n+m}) together with the rescaled prediction β×x_{N+L}.]

1. n = 0.7×N, where N is the total number of data.
2. Computation of the autocovariances c_k of the x_t time series for k = 0, 1, ..., n−1.
3. Computation of the uncorrected autocovariance predictions x_{n+m} for m = 1, 2, ..., N−n+1.
4. Computation of the autocovariances c′_k of the prediction time series x_{n+m} for k = 0, 1, ..., m−1.
5. Computation of the amplitude coefficient β = sqrt[(|c_1| + |c_2| + ... + |c_8|) / (|c′_1| + |c′_2| + ... + |c′_8|)], the ratio of the data autocovariances to the prediction autocovariances.
6. Computation of the corrected autocovariance predictions β×x_{N+L} for L = 1, 2, ..., M.
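A sketch of steps 2-6 (assuming numpy; the denominator is taken to be the prediction autocovariances, following the reading of step 5 above, and the function name is hypothetical):

```python
import numpy as np

def amplitude_corrected(preds, x, n_lags=8):
    """Rescale raw autocovariance predictions by the coefficient beta."""
    def acov(v):
        # biased autocovariances at lags 1 .. n_lags
        n = len(v)
        return np.array([np.dot(np.conj(v[:n - k]), v[k:]) / n
                         for k in range(1, n_lags + 1)])
    beta = np.sqrt(np.sum(np.abs(acov(np.asarray(x)))) /
                   np.sum(np.abs(acov(np.asarray(preds)))))
    return beta * np.asarray(preds)
```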
[Figure: Autocovariance (red) and autoregressive (green) predictions of the model data (T = 20, 30, 50; A = 1, 1, 1; f = 0, 0, 0). Panels for noise std. dev. sd = 0.0 and sd = 3.0.]
[Figure: Autocovariance (red) and autoregressive (green) predictions of the model data with close frequencies (T = 50, 57, 60; A = 1, 1, 1; f = 0, 0, 0). Panels for noise std. dev. sd = 0.0, 1.0, 2.0, 3.0.]
[Figure: Autocovariance predictions (red) of the deterministic model data with close frequencies (T = 50, 57, 60; A = 1, 1, 1; f = 0, 0, 0; sd = 0.0), shown over successively longer prediction spans.]
[Figure: Autocovariance (red) and autoregressive (green) predictions of the model data with many frequencies (T = 10, 15, 20, 25, 40, 60, 90, 120, 180; A = 1 (all); f = 0 (all)). Panels for noise std. dev. sd = 0.0, 1.0, 2.0.]
[Figure: Autocovariance predictions (red) of the model data with a large number of frequencies (T = 10, 15, 20, 25, 40, 60, 90, 120, 180; A = 1 (all); f = 0 (all)). Panels for noise std. dev. sd = 3.0 and sd = 5.0.]
[Figure: Autocovariance (red) and autoregressive (green) predictions of the seasonal model data with random-walk phase (T = 365.24; A = 1.0; additive noise sd = 0.1). Panels for the random-walk phase computed by integration of white noise with std. dev. 1°, 2°, 3°.]
[Figure: Autocovariance (red) and autoregressive (green) predictions of the model data with random-walk phase computed by integration of white noise (sd = 2°). Two panels: T = 365.24, 182.62; A = 1.0, 0.5 and T = 365.24, 433.00; A = 0.08, 0.16.]
[Figure: Autocovariance (red) and autoregressive (green) predictions of noise data (sd = 1.0).]
Conclusions
• The input time series for computation of the autocovariance and autoregressive predictions should be stationary, because both methods assume that the autocovariance estimates are functions of the time lag only.
• The autocovariance prediction formulae cannot estimate the appropriate prediction amplitude, so the predictions must be rescaled using a constant amplitude coefficient β estimated empirically.
• The accuracy of the autocovariance predictions depends on the length of the time series and the noise level in the data. The predictions may become unstable when the time series is short, the noise level is high, or the frequencies of the oscillations are too close.
• The autoregressive prediction is not recommended for noisy time series, but it can be applied when oscillation frequencies are close.
• The autocovariance prediction method can be applied to noisy time series if they are long enough, but it is not recommended when the frequencies of the oscillations are close.
• The autocovariance prediction performs better than the autoregressive method on the model data in which the phases are modeled as a random walk.
• The autocovariance predictions of pure noise data resemble noise with a smaller standard deviation, while the autoregressive predictions are close to zero.
Acknowledgments
This work was supported by the Polish Ministry of Science and Education, project UMO-2012/05/B/ST10/02132, under the leadership of Prof. A. Brzeziński.