ESTIMATING POTENTIAL FUNCTIONS OF ONE-DIMENSIONAL GIBBS
STATES UNDER CONSTRAINTS*
by
Chuanshu Ji
Department of Statistics
University of North Carolina
Chapel Hill, NC 27514
USA
Some consistent estimators are constructed for estimating potential functions of one-dimensional Gibbs states. Certain normalization constraints are imposed to resolve the identifiability problem. The step-length selection is also discussed in terms of the convergence rates of those estimators.

Key Words and Phrases: potential function, Gibbs state, consistent estimator

Running Title: Estimation of Potential Functions for Gibbs States

*Research partially supported by a David Ross Fellowship at Purdue University
1. Introduction and Background
A one-dimensional Gibbs state is a probability measure on the space

    Σ⁺ = ∏_{i=0}^∞ {1,...,r}.

Each element of Σ⁺ is a sequence x = (x₀, x₁, ...) whose coordinates xᵢ have possible states 1,...,r. Define the forward shift operator σ by (σx)ₙ = x_{n+1}, n = 0,1,..., for x ∈ Σ⁺. The Gibbs measure μ_f is the unique σ-invariant probability measure on Σ⁺ satisfying

(1.1)    c₁ ≤ μ_f{y : yᵢ = xᵢ, 0 ≤ i ≤ m−1} / exp{−mp + Σ_{j=0}^{m−1} f(σʲx)} ≤ c₂

for some constants c₁, c₂ ∈ (0,∞) and for all x ∈ Σ⁺, m ∈ ℕ, where p is called the pressure for f, and f is a real-valued function defined on Σ⁺, called the potential (or energy) function.
It is observed that f determines the dependence in the stationary sequence X = (X₀, X₁, ...) which has the probability distribution μ_f. Assuming the potential function f is unknown and the observations X₀,...,X_{n−1} are given, one may want to estimate f based on those n observations. The motivation for considering such a problem is mentioned in [2]. However, since two different functions f and g may induce the same Gibbs measure μ_f (= μ_g) on Σ⁺, f is not identifiable; only μ_f is. Two approaches are adopted to resolve the identifiability problem: reparameterization and normalization constraints. In [2], instead of estimating f we estimate the linear functional θ_f = ∫ ψ dμ_f, where ψ is a known function.
Estimators of
-2-
maximum likelihood type are constructed and shown to be strongly consistent,
asymptotically normal and asymptotically efficient.
In this paper, we show
that under appropriate normalization constraints f is identifiable.
consistent
(in sup-norm) estimators T
n
for
Strongly
the unknown function e
f
are
constructed.
After renormalization e
f
becomes an infinite-step backward transi tion
This suggests us to use a sequence of fini te-step
function (See (2.5».
(backward) transition functions {g , m €
m
step m to estimate g
m
~}
f
to approximate e , and at each
by a "sample transi tion function" which is a ratio of
two empirical measures.
A key question is what is the appropriate order for
the step-length m as the sample size n tends to infini ty.
Some heuristic
arguments indicate that m should be of the order log n so that T can achieve
n
f
the "nearly best" convergence rate among all consistent estimators of e .
For simplicity, we only consider the case of the sample space I+ in this
paper.
However, all resul ts here can be extended to the case of a more
general sample space I~ in which transi tions between certain states are not
allowed.
The definition and description of I~ are given in [2].
Now we define Gibbs states rigorously by Ruelle-Perron-Frobenius theory.

(1) Forward shift: Recall that our sample space is

    Σ⁺ = ∏_{i=0}^∞ {1,...,r},

which is compact and metrizable in the product topology. Define the forward shift operator σ : Σ⁺ → Σ⁺ by (σx)ₙ = x_{n+1}, n ∈ ℕ, x ∈ Σ⁺. Observe that σ, although continuous and surjective, is not generally 1-1.
(2) Hölder continuity: Let C(Σ⁺) denote the space of continuous, complex-valued functions on Σ⁺. For f ∈ C(Σ⁺) define

    var_n f = sup{|f(x) − f(y)| : xᵢ = yᵢ, 0 ≤ i ≤ n−1},

and for 0 < ρ < 1 let

    |f|_ρ = sup_{n∈ℕ} (var_n f)/ρⁿ

and ℱ⁺_ρ = {f ∈ C(Σ⁺) : |f|_ρ < ∞}. Elements of ℱ⁺_ρ are referred to as Hölder continuous functions. The space ℱ⁺_ρ is a Banach algebra when endowed with the norm ‖·‖_ρ = |·|_ρ + ‖·‖_∞.

(3) Ruelle-Perron-Frobenius (RPF) operators: For f, g ∈ C(Σ⁺), define ℒ_f : C(Σ⁺) → C(Σ⁺) by

    ℒ_f g(x) = Σ_{y : σy = x} e^{f(y)} g(y),  x ∈ Σ⁺.

Theorem 1.1.
For each real-valued f ∈ ℱ⁺_ρ, there exists λ_f ∈ (0,∞), a simple eigenvalue of ℒ_f : ℱ⁺_ρ → ℱ⁺_ρ, with strictly positive eigenfunction h_f and a Borel measure ν_f on Σ⁺ such that ℒ_f* ν_f = λ_f ν_f. Moreover, spectrum(ℒ_f)\{λ_f} is contained in a disc of radius strictly less than λ_f. Finally,

    lim_{n→∞} ‖ℒ_fⁿ g/λ_fⁿ − (∫ g dν_f) h_f‖_∞ = 0.

The proof may be found in [1], [4].
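When the potential depends on only finitely many coordinates, ℒ_f restricts to a finite-dimensional operator and the eigendata of Theorem 1.1 can be computed numerically. A minimal sketch for a two-coordinate potential on r = 2 symbols (the potential values below are illustrative, not from the paper):

```python
import numpy as np

# Illustrative two-coordinate potential f(x) = f(x0, x1) on r = 2 symbols.
f = np.array([[0.2, -0.1],
              [0.4, 0.0]])

# The preimages of x under sigma are the sequences (s, x0, x1, ...), s = 1..r,
# so for g depending on the first coordinate, (L_f g)(x0) = sum_s e^{f(s, x0)} g(s):
# an r x r positive matrix M[x0, s] = exp(f[s, x0]).
M = np.exp(f).T

# Perron-Frobenius eigendata: simple dominant eigenvalue lambda_f with a
# strictly positive eigenfunction h_f, as Theorem 1.1 asserts.
evals, evecs = np.linalg.eig(M)
i = int(np.argmax(evals.real))
lam_f = evals.real[i]
h_f = np.abs(evecs[:, i].real)

assert np.allclose(M @ h_f, lam_f * h_f)                     # eigenfunction equation
assert np.all(h_f > 0) and lam_f > 0                         # strict positivity
# The rest of the spectrum lies in a strictly smaller disc.
assert all(abs(ev) < lam_f for j, ev in enumerate(evals) if j != i)
```

By Theorem 1.2 below, the pressure is then p(f) = log λ_f.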
(4) Gibbs states: Assume that ∫ h_f dν_f = 1. For each real-valued f ∈ ℱ⁺_ρ, the Gibbs measure μ_f is defined by

    dμ_f = h_f dν_f.

It is easy to verify that μ_f is an invariant probability measure under σ. Let M_σ(Σ⁺) denote the set of all σ-invariant probability measures on Σ⁺.

Theorem 1.2. For each real-valued f ∈ ℱ⁺_ρ, there exist constants c₁, c₂ ∈ (0,∞) such that (1.1) holds for all x ∈ Σ⁺ and all m ∈ ℕ; and μ_f is the unique element in M_σ(Σ⁺) satisfying (1.1). In (1.1), p = p(f) = log λ_f is the pressure for f. The proof is given in [1].
Remark 1.3. Two functions f, g ∈ C(Σ⁺) are said to be homologous, written f ∼ g, if there exists φ ∈ C(Σ⁺) such that f − g = φ∘σ − φ. Homology is clearly an equivalence relation. It can be shown (cf. [1]) that μ_f = μ_g if f − g ∼ constant; otherwise μ_f ⊥ μ_g, because μ_f and μ_g are ergodic measures.

Remark 1.4. The Gibbs state model includes the following special cases. Let X = (X₀, X₁, ...) be a stationary sequence with underlying distribution μ_f; then

(i) If f(x) = c for all x ∈ Σ⁺, then X is a sequence of iid random variables with discrete uniform distribution.
(ii) If f(x) = f(x₀) for all x ∈ Σ⁺, i.e., f only depends on the first coordinate, then X is a sequence of iid random variables with P(X₀ = t) = c e^{f(t)}, t = 1,...,r, where c = 1/Σ_{t=1}^r e^{f(t)}.

(iii) If f(x) = f(x₀, x₁) for all x ∈ Σ⁺, i.e., f only depends on the first two coordinates, then X forms a stationary Markov chain with state space {1,...,r} and suitable transition probabilities.

(iv) If f(x) = f(x₀,...,x_k) for all x ∈ Σ⁺ and some k ∈ ℕ, i.e., f only depends on the first k+1 coordinates, then X is a k-step Markov dependent chain.

In fact the family of Gibbs states includes all finite-state stationary k-step Markov chains, k ∈ ℕ.
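Case (ii) is easy to check numerically: the marginal law is an exponential weighting of the potential values. A small sketch with illustrative values (r = 3, not from the paper):

```python
import math

# Illustrative one-coordinate potential on the states 1, 2, 3.
f = {1: 0.5, 2: -0.2, 3: 0.1}

# Remark 1.4 (ii): P(X0 = t) = c * e^{f(t)} with c = 1 / sum_t e^{f(t)}.
c = 1.0 / sum(math.exp(v) for v in f.values())
p = {t: c * math.exp(v) for t, v in f.items()}

assert abs(sum(p.values()) - 1.0) < 1e-12   # a genuine probability distribution
assert min(p.values()) > 0                   # every state is charged
# A constant potential (case (i)) gives the uniform distribution.
u = {t: math.exp(0.7) / sum(math.exp(0.7) for _ in f) for t in f}
assert all(abs(q - 1/3) < 1e-12 for q in u.values())
```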
2. Construction of Consistent Estimators for e^f under Certain Constraints on f

The reason that the identifiability problem arises when estimating the potential function f is that all potential functions equivalent to f in the sense of homology induce the same Gibbs state μ_f (see Remark 1.3). The next lemma indicates that in each equivalence class there is a unique distinguished element which satisfies certain normalization conditions. We will construct estimators of this distinguished element later on.
Lemma 2.1. For every f ∈ ℱ⁺_ρ, there uniquely exists f̃ ∈ ℱ⁺_ρ such that

(i) λ_{f̃} = 1;
(ii) h_{f̃} ≡ 1;
(iii) f̃ ∼ f + constant.

Proof. Let

(2.1)    f̃ = f + log h_f − log h_f∘σ − log λ_f;

then (i), (ii), (iii) are straightforward. Furthermore, by [3] Proposition 1 we have

(2.2)    μ_f(x₀ | x₁, x₂, ...) = e^{f(x)} h_f(x) / (λ_f h_f(σx)),  ∀ x ∈ Σ⁺,

where the LHS is the conditional probability of x₀ appearing in slot 0 given that x₁, x₂, ... appear in slots 1, 2, .... Since the martingale convergence theorem implies that the limit

(2.3)    μ_f(x₀ | x₁, x₂, ...) = lim_{m→∞} μ_f(x₀ | x₁,...,x_{m−1})

exists for almost every x ∈ Σ⁺ under μ_f, the LHS in (2.2) is well-defined as the limit in (2.3). Therefore, the uniqueness follows from (2.2). ∎
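For a potential depending on only the first two coordinates, the normalization (2.1) can be carried out explicitly from the RPF eigendata. The following sketch (with illustrative values, not from the paper) verifies that e^{f̃} sums to one over the first coordinate, i.e. behaves as a backward transition function:

```python
import numpy as np

# Illustrative two-coordinate potential on r = 2 symbols.
f = np.array([[0.2, -0.1],
              [0.4, 0.0]])

# RPF eigendata: (M h)(x1) = sum_s e^{f(s, x1)} h(s), i.e. M[x1, s] = exp(f[s, x1]).
M = np.exp(f).T
evals, evecs = np.linalg.eig(M)
i = int(np.argmax(evals.real))
lam_f = evals.real[i]
h_f = np.abs(evecs[:, i].real)

# (2.1): f~(x0, x1) = f(x0, x1) + log h_f(x0) - log h_f(x1) - log lambda_f,
# since h_f depends only on the first coordinate and (sigma x)_0 = x1.
f_tilde = f + np.log(h_f)[:, None] - np.log(h_f)[None, :] - np.log(lam_f)

# Lemma 2.1 (i)-(ii) amount to: sum over x0 of e^{f~(x0, x1)} = 1 for each x1.
assert np.allclose(np.exp(f_tilde).sum(axis=0), 1.0)
```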
Let 𝔉 ⊂ ℱ⁺_ρ be the set of all functions that satisfy (i) and (ii) in Lemma 2.1. In the sequel we just use the notation f to denote the generic element in 𝔉 when there is no confusion.

Assume that X = (X₀, X₁, ...) is a stationary sequence with probability distribution μ_f, f ∈ 𝔉, and let x = (x₀, x₁, ...) denote a specific value of X. We want to estimate the unknown function e^f based on observations X₀,...,X_{n−1}. f and e^f are in 1-1 correspondence; hence Lemma 2.1 guarantees that e^f is identifiable for f ∈ 𝔉.
Our goal is to construct a random function T_n on Σ⁺ based on X₀,...,X_{n−1} such that for every f ∈ 𝔉

(2.4)    sup_{y∈Σ⁺} |T_n(y) − e^{f(y)}| → 0,  a.s. under μ_f as n → ∞.

The random function T_n satisfying (2.4) is called a strongly consistent estimator of e^f.

Notice that Lemma 2.1 (i) and (ii) are equivalent to the normalization constraints

    Σ_{x₀} e^{f(x₀,x₁,...)} = 1,  ∀ x ∈ Σ⁺.

Moreover, for f ∈ 𝔉, by (2.2)

(2.5)    μ_f(x₀ | x₁, x₂, ...) = e^{f(x)},  ∀ x ∈ Σ⁺.

So e^f may be regarded as an infinite-step backward transition function, which sheds light on the construction of T_n.
First of all, we may use a sequence of finite-step (backward) transition functions {μ_f(x₀ | x₁,...,x_{m−1}), m ∈ ℕ, x ∈ Σ⁺} to approximate e^f. Then at each stage m we estimate μ_f(x₀ | x₁,...,x_{m−1}) by the "sample transition function". Given n observations, the correct order for the step-length m should be c log n, where c ∈ (0,1) also depends on f, hence is unknown. Certain adaptive procedures are proposed in that situation. Further discussion on the choice of the step-length m will be given in Section 4.
Construction of Consistent Estimator T_n

Given observations X₀,...,X_{n−1}, we first construct n periodic sequences σʲX(n), j = 0,1,...,n−1, with X(n) the periodic extension of (X₀,...,X_{n−1}). Then for every y ∈ Σ⁺ and m < n define

    N_m^{(n)}(y) = Σ_{j=0}^{n−1} I{(σʲX(n))_k = y_k, k = 0,1,...,m−1},

    N_{m−1}^{(n)}(y) = Σ_{j=0}^{n−1} I{(σʲX(n))_k = y_k, k = 1,...,m−1},

where (σʲX(n))_k represents the k-th coordinate of the sequence σʲX(n). And define

    R_m^{(n)}(y) = N_m^{(n)}(y) / N_{m−1}^{(n)}(y)  if N_{m−1}^{(n)}(y) > 0,  and  R_m^{(n)}(y) = 0  otherwise.

R_m^{(n)}(y), also written as (N_m^{(n)}(y)/n) / (N_{m−1}^{(n)}(y)/n), is the "sample conditional frequency" of y₀ appearing in slot 0 given that y₁,...,y_{m−1} appear in slots 1,...,m−1. The next two theorems show that under certain conditions R_m^{(n)} is just a strongly consistent estimator of e^f.
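The counts N_m^{(n)} and the ratio R_m^{(n)} are straightforward to implement. The sketch below simulates a two-state stationary Markov chain (a Gibbs state by Remark 1.4 (iii)) and compares the sample transition function with the exact backward transition probability π(y₀)P(y₀,y₁)/π(y₁) of the reversed chain; the chain, the sample size, and the step-length constant are illustrative choices of ours, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stationary 2-state Markov chain with transition matrix P.
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
n = 20000
X = np.empty(n, dtype=int)
X[0] = 0
for j in range(1, n):
    X[j] = rng.choice(2, p=P[X[j - 1]])

def R(y, m, X):
    """Sample transition function R_m^{(n)}(y): ratio of the cylinder counts
    N_m^{(n)}(y) and N_{m-1}^{(n)}(y) along the periodic extension X(n)."""
    n = len(X)
    Xp = np.concatenate([X, X[:m]])  # periodic extension, enough for m-blocks
    W = np.lib.stride_tricks.sliding_window_view(Xp, m)[:n]
    N_m = int(np.sum(np.all(W == y[:m], axis=1)))            # slots 0,...,m-1
    N_m1 = int(np.sum(np.all(W[:, 1:] == y[1:m], axis=1)))   # slots 1,...,m-1
    return N_m / N_m1 if N_m1 > 0 else 0.0

m = int(0.5 * np.log(n))             # step-length of order log n, cf. (2.7)
y = np.array([0, 1, 0, 1])[:m]
est = R(y, m, X)

# Backward transition of the reversed chain: with pi = (4/7, 3/7),
# P(X0 = 0 | X1 = 1) = pi(0) P(0,1) / pi(1) = (4/7)(0.3)/(3/7) = 0.4.
assert abs(est - 0.4) < 0.1
```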
m
-8-
Theorem 2.2.
(AI)
f €
(A2)
IIfll p
Suppose f is an Wlknoun potentiaL fWlCtion satisfying
~;
~
for
K
a
knoun constant K
> o.
Let
2K
I-p
(2.6)
a = - - and
(2.7)
m = [c log n],
where c € (O,l) satisfies
(2.8)
1 -
ac
> 0;
the notation [z] represents the integer part of z.
Define
T (y)
n
= R{n){y),
m
y € ~+,
then (2.4) hoLds for T .
n
Theorem 2.3. Under the assumptions in Theorem 2.2 without (A2), T_n defined by the following procedure also satisfies (2.4).

Procedure 2.4. Choose a sequence of positive constants {c_n, n ∈ ℕ} such that c_n ↓ 0 as n → ∞ with arbitrarily slow rate (e.g. c_n log n → ∞ as n → ∞). Set m = [c_n log n], then define T_n(y) = R_m^{(n)}(y), y ∈ Σ⁺.

The proofs of Theorem 2.2 and Theorem 2.3 will be given in Section 3.
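One admissible choice in Procedure 2.4 (ours, for illustration only) is c_n = 1/log log n, which decreases to zero while c_n log n → ∞:

```python
import math

def step_length(n):
    """Step-length m = [c_n log n] of Procedure 2.4 with the illustrative
    slowly vanishing sequence c_n = 1/log(log n) (valid for n > e^e)."""
    c_n = 1.0 / math.log(math.log(n))
    return int(c_n * math.log(n))

# c_n decreases to zero ...
assert 1 / math.log(math.log(10**6)) < 1 / math.log(math.log(10**4))
# ... while m = [c_n log n] still tends to infinity.
assert step_length(10**4) < step_length(10**6) < step_length(10**9)
```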
3. Exponential Decay of Certain Large Deviation Probabilities

In this section the deviation of the estimator T_n from the estimated function e^f is investigated in detail. The main result is that the related large deviation probabilities drop to zero exponentially as n tends to infinity. As a corollary, the strong consistency of T_n is established. The next lemma provides uniform bounds for certain conditional probabilities, which will be used very often.
Lemma 3.1. For every f ∈ 𝔉, there exists a positive constant a, which depends on f, such that

(3.1)    e^{−a} ≤ μ_f(y₀ | y₁,...,y_{m−1}) ≤ 1 − e^{−a}

and

(3.2)    μ_f{y' : y'ᵢ = yᵢ, 0 ≤ i ≤ m−1} ≥ e^{−am},

uniformly for all y ∈ Σ⁺ and all m ∈ ℕ.

Proof. For f ∈ 𝔉, (1.1) (with p = log λ_f = 0) implies that

    c₁ ≤ μ_f{y' : y'ᵢ = yᵢ, 0 ≤ i ≤ m−1} / exp{Σ_{j=0}^{m−1} f(σʲy)} ≤ c₂,  ∀ y ∈ Σ⁺, m ∈ ℕ,

and Bowen [1] gives

    Σ_{k=0}^∞ var_k f ≤ |f|_ρ / (1−ρ).

Therefore, (3.1) and (3.2) follow by setting

(3.3)    a = 2 ‖f‖_ρ / (1−ρ).  ∎
For y ∈ Σ⁺ and m < n, let

    p_m^{(n)}(y) = P(Xᵢ = yᵢ, i = 0,...,m−1)

and

    p_{m−1}^{(n)}(y) = P(Xᵢ = yᵢ, i = 1,...,m−1).

Then, by (2.5), p_m^{(n)}(y)/p_{m−1}^{(n)}(y) is close to e^{f(y)} for every y when m is large. Notice that

(3.4)    |T_n(y) − e^{f(y)}| ≤ D_n^{(1)}(y) + D_n^{(2)}(y) + D_n^{(3)}(y).

The first term has a uniform upper bound: for m sufficiently large,

(3.5)    sup_{y∈Σ⁺} D_n^{(1)}(y) ≤ e^{‖f‖_∞} (e^{var_m f} − 1) ≤ 2 e^{‖f‖_∞} var_m f.
In what follows we simply denote the probability of an event A under μ_f by P(A), and the corresponding expectation operator by E(·).

Lemma 3.2. For every ε₀ ∈ (0, ½),

    P(D_n^{(3)}(y) > 2ε₀)

(3.6)        ≤ P((1+ε₀) |N_m^{(n)}(y) − n p_m^{(n)}(y)| > ε₀ n p_m^{(n)}(y))

(3.7)        + P((1 + ε₀/(1−e^{−a})) |N_{m−1}^{(n)}(y) − n p_{m−1}^{(n)}(y)| > (ε₀/(1−e^{−a})) n p_{m−1}^{(n)}(y)).

Proof. Since D_n^{(3)}(y) vanishes unless N_{m−1}^{(n)}(y) > 0,

    P(D_n^{(3)}(y) > 2ε₀) ≤ P(|N_m^{(n)}(y) − n p_m^{(n)}(y)| > ε₀ N_{m−1}^{(n)}(y))
        + P(|N_{m−1}^{(n)}(y) − n p_{m−1}^{(n)}(y)| > (ε₀/(1−e^{−a})) N_{m−1}^{(n)}(y)),

and replacing N_{m−1}^{(n)}(y) by n p_{m−1}^{(n)}(y) on the complementary events yields (3.6) and (3.7). ∎

(3.6) and (3.7) indicate that it suffices to evaluate

    P(|N_m^{(n)}(y)/(n p_m^{(n)}(y)) − 1| > ε₀)

for large n.
Now let

    Z_j = I{(σʲX(n))_k = y_k, k = 0,1,...,m−1} − p_m^{(n)}(y),  j = 0,1,...,n−1.

Then E Z_j = 0 and

    P(|N_m^{(n)}(y)/(n p_m^{(n)}(y)) − 1| > ε₀) = P(|Σ_{j=0}^{n−1} Z_j| > ε₀ n p_m^{(n)}(y)).

This is the large deviation probability for the partial sum of a mean-zero, mixing double-array sequence. The following "splitting" procedure turns out to be useful.
For a small number λ ∈ (0, ½), set

    p = [n^{½+λ}],  q = [n^{½−λ}],  k = [(n−m+1+q)/(p+q)],

i.e. k satisfies

    kp + (k−1)q ≤ n−m+1 < (k+1)p + kq.

Let

    U₁ = Z₀ + ⋯ + Z_{p−1},  U₂ = Z_{p+q} + ⋯ + Z_{2p+q−1},  ...,  U_k = Z_{(k−1)(p+q)} + ⋯ + Z_{kp+(k−1)q−1};

and

    V₁ = Z_p + ⋯ + Z_{p+q−1},  ...,
    V_k = Z_{n−m+1} + ⋯ + Z_{n−1}                                      if kp + (k−1)q = n−m+1,
    V_k = Z_{kp+(k−1)q} + ⋯ + Z_{n−m} + Z_{n−m+1} + ⋯ + Z_{n−1}        if kp + (k−1)q < n−m+1.

Each Uᵢ, i = 1,...,k, contains p Z-terms; each Vⱼ, j = 1,...,k−1, contains q Z-terms. In particular, V_k contains s Z-terms with m−1 ≤ s ≤ (p+q−1) + (m−1).
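The block bookkeeping can be checked numerically; a sketch with illustrative n, m, and λ (values are ours, for illustration):

```python
# Splitting-procedure bookkeeping with illustrative sizes.
n, m, lam = 100_000, 5, 0.1

p = int(n ** (0.5 + lam))        # length of the long blocks U_i
q = int(n ** (0.5 - lam))        # length of the separating blocks V_j
k = (n - m + 1 + q) // (p + q)   # number of long blocks

# Defining property of k: k p + (k-1) q <= n - m + 1 < (k+1) p + k q.
assert k * p + (k - 1) * q <= n - m + 1 < (k + 1) * p + k * q

# V_k picks up the remaining s Z-terms, with m-1 <= s <= (p+q-1) + (m-1).
s = (n - m + 1) - (k * p + (k - 1) * q) + (m - 1)
assert m - 1 <= s <= (p + q - 1) + (m - 1)

# Total: k blocks of p terms, k-1 blocks of q terms, plus s, account for all n.
assert k * p + (k - 1) * q + s == n
```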
The idea is that for large n both {Uᵢ, i = 1,...,k} and {Vⱼ, j = 1,...,k−1} behave approximately like iid sequences, and V_k does not affect the magnitude of Σ_{j=0}^{n−1} Zⱼ very much. Denote n p_m^{(n)}(y) by b_n² and note that

    Σ_{j=0}^{n−1} Zⱼ = Σ_{i=1}^k Uᵢ + Σ_{j=1}^{k−1} Vⱼ + V_k.

Therefore,

    P(|Σ_{j=0}^{n−1} Zⱼ| > ε₀ b_n²) ≤ P(|Σ_{i=1}^k Uᵢ| > δ b_n²) + P(|Σ_{j=1}^{k−1} Vⱼ| > δ b_n²) + P(|V_k| > δ b_n²),

with δ = ε₀/3.
Recall the following weak Bernoulli property of μ_f (cf. [1] Theorem 1.25). Let 𝒜_{m−1} be the σ-field generated by (X₀,...,X_{m−1}) and 𝒜_{m+n,m} be the σ-field generated by (Xᵢ, i ≥ m+n). Then there exist constants C > 0 and β ∈ (0,1), which only depend on f, such that

(3.8)    |P(A ∩ B) − P(A) P(B)| ≤ C βⁿ P(A) P(B)

uniformly for all A ∈ 𝒜_{m−1}, B ∈ 𝒜_{m+n,m} and all m, n ∈ ℕ.
Lemma 3.3.

(3.9)    |E(Z₀ Z_ℓ)| / E Z₀² = O(β^{ℓ−m}),  ∀ ℓ ≥ m.

Proof. (3.8) implies that

    |E(Z₀ Z_ℓ) − E Z₀ · E Z_ℓ| ≤ C E|Z₀| · E|Z_ℓ| · β^{ℓ−m},  ∀ ℓ ≥ m;

(3.9) follows since E Z_j = 0, ∀ j ∈ ℕ. ∎

Lemma 3.4. Let v ∈ ℕ satisfy v ≈ n^b as n → ∞, with b ∈ (0,1]. Then

(3.10)    E(Z₀ + ⋯ + Z_{v−1})² / (v E Z₀²) = O(1),  as n → ∞.

Proof. Since E Z_j = 0,

    LHS = 1 + 2 Σ_{ℓ=1}^{m} (1 − ℓ/v) E(Z₀ Z_ℓ)/E Z₀² + 2 Σ_{ℓ=m+1}^{v−1} (1 − ℓ/v) E(Z₀ Z_ℓ)/E Z₀².

Moreover, for 1 ≤ ℓ ≤ m,

    E(Z₀ Z_ℓ) = P((X₀,...,X_{m−1}) = (X_ℓ,...,X_{ℓ+m−1}) = (y₀,...,y_{m−1})) − (p_m^{(n)}(y))²

and

    E Z₀² = p_m^{(n)}(y) (1 − p_m^{(n)}(y)),

while conditioning step by step gives

    P((X₀,...,X_{m−1}) = (X_ℓ,...,X_{ℓ+m−1}) = (y₀,...,y_{m−1}))
        = P((X₀,...,X_{m−1}) = (y₀,...,y_{m−1}))
          · P(X_m = y_{m−ℓ} | X₀ = y₀,...,X_{m−1} = y_{m−1})
          · P(X_{m+1} = y_{m−ℓ+1} | X₀ = y₀,...,X_{m−1} = y_{m−1}, X_m = y_{m−ℓ}) ⋯
        ≤ p_m^{(n)}(y) e^{−bℓ}

by (3.1), where b = −log(1−e^{−a}). Therefore

    2 Σ_{ℓ=1}^{m} |E(Z₀ Z_ℓ)| / E Z₀² = O(1),  as n → ∞.

And by (3.9) and the Kronecker lemma,

    (2/v) Σ_{ℓ=m+1}^{v−1} ℓ |E(Z₀ Z_ℓ)| / E Z₀² ≤ (2C/v) Σ_{ℓ=m+1}^{v−1} ℓ β^{ℓ−m} → 0,  as n → ∞.

Thus (3.10) follows. ∎
The next lemma indicates that {Uᵢ, i = 1,...,k} is similar to an iid sequence.

Lemma 3.5. For every t > 0,

(3.11)    E[exp((t/b_n) Σ_{i=1}^k Uᵢ)] = {E[exp((t/b_n) U₁)]}^k (1 + o(1)),  as n → ∞.

Proof. Applying (3.8) to the sequence {Uᵢ, i = 1,...,k} iteratively gives that

    E[exp((t/b_n) Σ_{i=1}^k Uᵢ)] = {E[exp((t/b_n) U₁)]}^k (1 ± C β^{q−m})^{k−1}.

Since

    |(1 ± C β^{q−m})^{k−1} − 1| ≤ C k β^{q−m} → 0,  as n → ∞,

(3.11) follows. ∎
Lemma 3.6. For every t > 0,

(3.12)    {E[exp((t/b_n) U₁)]}^k = O(1),  as n → ∞.

Proof. By Taylor expansion,

    E[exp((t/b_n) U₁)] = 1 + (t²/2) E U₁²/b_n² + θ (t³/6) E|U₁|³/b_n³,

where |θ| ≤ 1 may be different on each appearance. By (3.10),

    E U₁² / b_n² = O(p/n),  as n → ∞,

and the same argument as in [5] Lemma 5.4.8 implies that k · E|U₁|³/b_n³ = o(1). Hence, as n → ∞,

    k · E U₁² / b_n² = O(1)  and  k · E|U₁|³ / b_n³ = o(1).

Therefore (3.12) follows. ∎
The main result is

Theorem 3.7. For every δ > 0, there exist γ > 0 and n₀ ∈ ℕ such that

(3.13)    P(|Σ_{j=0}^{n−1} Zⱼ| > δ b_n²) ≤ e^{−δ n^γ}

uniformly for all y ∈ Σ⁺ and all n > n₀.

Proof. It suffices to verify the inequality

(3.14)    P(Σ_{i=1}^k Uᵢ > δ b_n²) ≤ e^{−δ n^γ}

for n sufficiently large. For every t > 0, by Markov's inequality,

    P(Σ_{i=1}^k Uᵢ > δ b_n²) ≤ e^{−tδ b_n} E[exp((t/b_n) Σ_{i=1}^k Uᵢ)]
        = e^{−tδ b_n} {E[exp((t/b_n) U₁)]}^k (1 + o(1))    by (3.11)
        = e^{−tδ b_n} · O(1)    by (3.12).

Since b_n² = n p_m^{(n)}(y) ≥ n e^{−am} ≥ n^{1−ac} by (3.2) and (2.7), (3.14) follows by setting 0 < γ < (1−ac)/2. ∎

Since the same argument shows that

(3.15)    P(|Σ_{j=1}^{k−1} Vⱼ| > δ b_n²) ≤ e^{−δ n^γ}

and

(3.16)    P(|V_k| > δ b_n²) ≤ e^{−δ n^γ},

uniformly for all y ∈ Σ⁺ and n > n₀, by combining (3.13), (3.15) and (3.16) we obtain

Corollary 3.8. For every ε₀ > 0, there exist δ > 0, γ > 0 and n₀ ∈ ℕ such that

(3.17)    P(|N_m^{(n)}(y)/b_n² − 1| > ε₀) ≤ e^{−δ n^γ}

uniformly for all y ∈ Σ⁺ and n > n₀.
Proof of Theorem 2.2 and Theorem 2.3. First, by (3.4),

    sup_{y∈Σ⁺} |T_n(y) − e^{f(y)}| ≤ sup_{y∈Σ⁺} D_n^{(1)}(y) + sup_{y∈Σ⁺} D_n^{(2)}(y) + sup_{y∈Σ⁺} D_n^{(3)}(y).

Then recall that each coordinate of y ∈ Σ⁺ may take r different values, so the suprema over y involve at most rᵐ distinct cylinders, and rᵐ e^{−δn^γ} = n^{c log r} e^{−δ n^γ} is summable in n. Hence Theorem 2.2 follows from (3.5), (3.6), (3.7), (3.17) and the Borel-Cantelli lemma. Furthermore, for every f ∈ 𝔉, the quantity a = 2‖f‖_ρ/(1−ρ) satisfies

    1 − a c_n > 0

for n sufficiently large. Theorem 2.3 is proved just like Theorem 2.2. ∎
4. Remark on the Step-Length Selection

Many consistent estimators T_n could be constructed in the same way as in Section 2 provided the step-length m tends to infinity "not too fast". Therefore their convergence rates need to be taken into consideration. In this section we explain why m should be of the order log n and what the corresponding convergence rate is.
First of all, we have a stronger theorem than Theorem 2.2.

Theorem 4.1. Suppose f is an unknown potential function satisfying (A1) and (A2) in Theorem 2.2. Let a = 2K/(1−ρ) and m = [c log n] (same as (2.6), (2.7)), where the constant c satisfies

(4.1)    λ/(−log ρ) < c < (1−2λ)/a

and λ is a constant satisfying

(4.2)    0 < λ < 1 / (2 + a/(−log ρ)).

Define T_n(y) = R_m^{(n)}(y), y ∈ Σ⁺. Then

(4.3)    n^λ sup_{y∈Σ⁺} |T_n(y) − e^{f(y)}| → 0,  a.s. under μ_f as n → ∞.

Proof. The first inequality in (4.1) implies that

(4.4)    n^λ ρᵐ → 0,  as n → ∞.

Hence by (3.5),

(4.5)    n^λ sup_{y∈Σ⁺} D_n^{(1)}(y) → 0,  a.s. under μ_f as n → ∞.
Moreover, the second inequality in (4.1) allows us to obtain a stronger result than (3.17): for every ε₀ > 0, there exist δ > 0, γ > 0 and n₀ ∈ ℕ such that

(4.6)    P(|N_m^{(n)}(y)/b_n² − 1| > ε₀ n^{−λ}) ≤ e^{−δ n^γ}

uniformly for all y ∈ Σ⁺ and n > n₀. It follows from the same arguments as in Section 3 that

(4.7)    n^λ sup_{y∈Σ⁺} D_n^{(i)}(y) → 0,  i = 2, 3,  a.s. under μ_f as n → ∞.

Therefore, (4.3) holds. ∎
Theorem 4.1 shows the sufficiency of the order log n for the step-length m. Is it also necessary? Notice that the empirical measure N_m^{(n)}(·)/n plays the role of a sufficient statistic in this nonparametric estimation problem. To derive a consistent estimator T_n in (2.4), the ratio N_m^{(n)}(y)/(n p_m^{(n)}(y)) has to be close to one for every y. Hence n p_m^{(n)}(y) should be large for every y. By (3.1) we have

    p_m^{(n)}(y) ≤ e^{−b(m−1)}

uniformly for y ∈ Σ⁺ and all m ∈ ℕ, where b = −log(1−e^{−a}) > 0. So m should grow no faster than c log n for some c > 0. On the other hand, (3.4) suggests that there is a trade-off between the good approximation (evaluated by |μ_f(y₀ | y₁,...,y_{m−1}) − e^{f(y)}|) and the accurate estimation at each step (evaluated by |T_n(y) − μ_f(y₀ | y₁,...,y_{m−1})|). The convergence rate of the former part will be damaged if m grows too slowly. Therefore, log n is the right order for m, and the constant c is determined by (4.1).

Let Λ = 1/(2 + a/(−log ρ)). Then for λ ≥ Λ no constant c will satisfy (4.1); therefore (4.3) cannot be established under our construction of T_n. We conjecture that in that situation no other method can produce the result (4.3) either: if λ ≥ Λ, let T_n be an arbitrary consistent estimator of e^f in the sense of (2.4); then (4.3) fails for some f satisfying (A1) and (A2). For the time being, the rigorous proof is still in the process of development.
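The critical exponent Λ and the admissible range (4.1) for c are explicit in K and ρ; a sketch with illustrative constants (the values of K and ρ below are ours, not the paper's):

```python
import math

K, rho = 1.0, 0.5                      # illustrative (A2) constants
L = -math.log(rho)

a = 2 * K / (1 - rho)                  # (2.6)
Lam = 1.0 / (2 + a / L)                # critical exponent Lambda = 1/(2 + a/(-log rho))

# For every lambda < Lambda the interval (4.1) for c is nonempty ...
lam = 0.5 * Lam
c_lo, c_hi = lam / L, (1 - 2 * lam) / a
assert c_lo < c_hi

# ... and it closes exactly at lambda = Lambda.
assert abs(Lam / L - (1 - 2 * Lam) / a) < 1e-12

# Any admissible c automatically satisfies (2.8): 1 - a c > 0.
c = 0.5 * (c_lo + c_hi)
assert 1 - a * c > 0
```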
Acknowledgement

This work constitutes a part of the author's doctoral dissertation, which was written under the supervision of Professor Steven Lalley. The author gratefully acknowledges Professor Lalley's guidance and support.
REFERENCES

[1] Bowen, R. (1975). Equilibrium States and the Ergodic Theory of Anosov Diffeomorphisms. Lecture Notes in Math. 470. Springer-Verlag, New York.

[2] Ji, C. (1987). Estimating functionals of one-dimensional Gibbs states. Technical Report #87-33, Department of Statistics, Purdue University.

[3] Lalley, S.P. (1985). Ruelle's Perron-Frobenius theorem and the central limit theorem for additive functionals of one-dimensional Gibbs states. Proc. Conf. in Honor of H. Robbins.

[4] Ruelle, D. (1978). Thermodynamic Formalism. Addison-Wesley, Reading, Massachusetts.

[5] Stout, W.F. (1974). Almost Sure Convergence. Academic Press, New York.